NAME = "Laura Ruis"
NAME2 = "Fredie Haver"
NAME3 = "Lukás Jelínek"
EMAIL = "lauraruis@live.nl"
EMAIL2 = "frediehaver@hotmail.com"
EMAIL3 = "lukas.jelinek1@gmail.com"
"""
Explanation: Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named:
12301230_3434343_1238938934_lab1.ipynb.
This will be parsed by a regexp, so please double check your filename.
Before you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your names and email addresses below.
End of explanation
"""
%pylab inline
plt.rcParams["figure.figsize"] = [9,5]
"""
Explanation: Lab 2: Classification
Machine Learning 1, September 2017
Notes on implementation:
You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.
Please write your answers right below the questions.
Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
Use the provided test cells to check if your answers are correct
Make sure your output and plots are correct before handing in your assignment with Kernel -> Restart & Run All
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bb}{\mathbf{b}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\ba}{\mathbf{a}}$
$\newcommand{\bz}{\mathbf{z}}$
$\newcommand{\bv}{\mathbf{v}}$
$\newcommand{\bq}{\mathbf{q}}$
$\newcommand{\bp}{\mathbf{p}}$
$\newcommand{\bh}{\mathbf{h}}$
$\newcommand{\bI}{\mathbf{I}}$
$\newcommand{\bX}{\mathbf{X}}$
$\newcommand{\bT}{\mathbf{T}}$
$\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bW}{\mathbf{W}}$
$\newcommand{\bV}{\mathbf{V}}$
End of explanation
"""
from sklearn.datasets import fetch_mldata
# Fetch the data
# (note: fetch_mldata was removed in scikit-learn 0.20; fetch_openml('mnist_784') is the modern replacement)
mnist = fetch_mldata('MNIST original')
data, target = mnist.data, mnist.target.astype('int')
# Shuffle
indices = np.arange(len(data))
np.random.seed(123)
np.random.shuffle(indices)
data, target = data[indices].astype('float32'), target[indices]
# Normalize the data between 0.0 and 1.0:
data /= 255.
# Split
x_train, x_valid, x_test = data[:50000], data[50000:60000], data[60000: 70000]
t_train, t_valid, t_test = target[:50000], target[50000:60000], target[60000: 70000]
"""
Explanation: Part 1. Multiclass logistic regression
Scenario: you have a friend with one big problem: she's completely blind. You decided to help her: she has a special smartphone for blind people, and you are going to develop a mobile phone app that can do machine vision using the mobile camera: converting a picture (from the camera) to the meaning of the image. You decide to start with an app that can read handwritten digits, i.e. convert an image of handwritten digits to text (e.g. it would enable her to read precious handwritten phone numbers).
A key building block for such an app would be a function predict_digit(x) that returns the digit class of an image patch $\bx$. Since hand-coding this function is highly non-trivial, you decide to solve this problem using machine learning, such that the internal parameters of this function are automatically learned using machine learning techniques.
The dataset you're going to use for this is the MNIST handwritten digits dataset (http://yann.lecun.com/exdb/mnist/). You can download the data with scikit learn, and load it as follows:
End of explanation
"""
def plot_digits(data, num_cols, targets=None, shape=(28,28)):
    num_digits = data.shape[0]
    num_rows = int(num_digits/num_cols)
    for i in range(num_digits):
        plt.subplot(num_rows, num_cols, i+1)
        plt.imshow(data[i].reshape(shape), interpolation='none', cmap='Greys')
        if targets is not None:
            plt.title(int(targets[i]))
        plt.colorbar()
        plt.axis('off')
    plt.tight_layout()
    plt.show()
plot_digits(x_train[0:40000:5000], num_cols=4, targets=t_train[0:40000:5000])
"""
Explanation: MNIST consists of small 28 by 28 pixel images of written digits (0-9). We split the dataset into a training, validation and testing arrays. The variables x_train, x_valid and x_test are $N \times M$ matrices, where $N$ is the number of datapoints in the respective set, and $M = 28^2 = 784$ is the dimensionality of the data. The second set of variables t_train, t_valid and t_test contain the corresponding $N$-dimensional vector of integers, containing the true class labels.
Here's a visualisation of 8 digits from the training set:
End of explanation
"""
# 1.1.2 Compute gradient of log p(t|x;w,b) wrt w and b
def logreg_gradient(x, t, w, b):
    # define dimensions
    dim_k = w.shape[1]
    # compute the log q vector and log Z, using the log-sum-exp trick
    # (never exponentiate logq directly: exp can overflow for large activations)
    logq = (w.T @ x.T).squeeze() + b
    a = np.amax(logq)
    logZ = a + np.log(np.sum(np.exp(logq - a)))
    logp = np.reshape(logq - logZ, (1, dim_k))
    # compute delta vector: delta_j = 1[j == t] - q_j / Z = 1[j == t] - p_j
    delta = -np.exp(logq - logZ)
    delta[t] = 1 + delta[t]
    dL_dw = (np.reshape(delta, (dim_k, 1)) @ x).transpose()
    dL_db = delta
    return logp[:, t].squeeze(), dL_dw, dL_db.squeeze()
np.random.seed(123)
# w: 784 x 10 weight matrix, b: 10-dimensional bias vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
# test gradients, train on 1 sample
logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
print("Test gradient on one point")
print("Likelihood:\t", logpt)
print("\nGrad_W_ij\t",grad_w.shape,"matrix")
print("Grad_W_ij[0,152:158]=\t", grad_w[152:158,0])
print("\nGrad_B_i shape\t",grad_b.shape,"vector")
print("Grad_B_i=\t", grad_b.T)
print("i in {0,...,9}; j in M")
assert logpt.shape == (), logpt.shape
assert grad_w.shape == (784, 10), grad_w.shape
assert grad_b.shape == (10,), grad_b.shape
# It's always good to check your gradient implementations with finite difference checking:
# Scipy provides the check_grad function, which requires flat input variables.
# So we write two helper functions that compute the output and the gradient from 'flat' weights:
from scipy.optimize import check_grad
np.random.seed(123)
# w: 784 x 10 weight matrix, b: 10-dimensional bias vector
w = np.random.normal(size=(28*28,10), scale=0.001)
# w = np.zeros((784,10))
b = np.zeros((10,))
def func(w):
    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
    return logpt
def grad(w):
    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w.reshape(784,10), b)
    return grad_w.flatten()
finite_diff_error = check_grad(func, grad, w.flatten())
print('Finite difference error grad_w:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for w seems off'
def func(b):
    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
    return logpt
def grad(b):
    logpt, grad_w, grad_b = logreg_gradient(x_train[0:1,:], t_train[0:1], w, b)
    return grad_b.flatten()
finite_diff_error = check_grad(func, grad, b)
print('Finite difference error grad_b:', finite_diff_error)
assert finite_diff_error < 1e-3, 'Your gradient computation for b seems off'
"""
Explanation: In multiclass logistic regression, the conditional probability of class label $j$ given the image $\bx$ for some datapoint is given by:
$ \log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$
where $\log q_j = \bw_j^T \bx + b_j$ (the log of the unnormalized probability of the class $j$), and $Z = \sum_k q_k$ is the normalizing factor. $\bw_j$ is the $j$-th column of $\bW$ (a matrix of size $784 \times 10$) corresponding to the class label, $b_j$ is the $j$-th element of $\bb$.
Given an input image, the multiclass logistic regression model first computes the intermediate vector $\log \bq$ (of size $10 \times 1$), using $\log q_j = \bw_j^T \bx + b_j$, containing the unnormalized log-probabilities per class.
The unnormalized probabilities are then normalized by $Z$ such that $\sum_j p_j = \sum_j \exp(\log p_j) = 1$. This is done by $\log p_j = \log q_j - \log Z$ where $Z = \sum_i \exp(\log q_i)$. This is known as the softmax transformation, and is also used as the last layer of many classification neural network models, to ensure that the output of the network is a normalized distribution, regardless of the values of the second-to-last layer ($\log \bq$).
Warning: when computing $\log Z$, you are likely to encounter numerical problems. Save yourself countless hours of debugging and learn the log-sum-exp trick.
The network's output $\log \bp$ of size $10 \times 1$ then contains the conditional log-probabilities $\log p(t = j \;|\; \bx, \bb, \bW)$ for each digit class $j$. In summary, the computations are done in this order:
$\bx \rightarrow \log \bq \rightarrow Z \rightarrow \log \bp$
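The log-sum-exp trick mentioned in the warning above can be sketched in a few lines; the function name and the example values here are our own illustration, not part of the assignment:

```python
import numpy as np

def log_sum_exp(logq):
    # Stable computation of log Z = log sum_i exp(log q_i):
    # subtracting the max makes every exponent <= 0, so exp never overflows.
    a = np.max(logq)
    return a + np.log(np.sum(np.exp(logq - a)))

logq = np.array([1000.0, 1001.0, 1002.0])  # naive np.exp(1000.0) overflows to inf
logZ = log_sum_exp(logq)
logp = logq - logZ                          # normalized log-probabilities
```

The result is finite and the exponentiated `logp` sums to one, exactly as the softmax transformation requires.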
Given some dataset with $N$ independent, identically distributed datapoints, the log-likelihood is given by:
$ \mathcal{L}(\bb, \bW) = \sum_{n=1}^N \mathcal{L}^{(n)}$
where we use $\mathcal{L}^{(n)}$ to denote the partial log-likelihood evaluated over a single datapoint. It is important to see that the log-probability of the class label $t^{(n)}$ given the image, is given by the $t^{(n)}$-th element of the network's output $\log \bp$, denoted by $\log p_{t^{(n)}}$:
$\mathcal{L}^{(n)} = \log p(t = t^{(n)} \;|\; \bx = \bx^{(n)}, \bb, \bW) = \log p_{t^{(n)}} = \log q_{t^{(n)}} - \log Z^{(n)}$
where $\bx^{(n)}$ and $t^{(n)}$ are the input (image) and class label (integer) of the $n$-th datapoint, and $Z^{(n)}$ is the normalizing constant for the distribution over $t^{(n)}$.
1.1 Gradient-based stochastic optimization
1.1.1 Derive gradient equations (20 points)
Derive the equations for computing the (first) partial derivatives of the log-likelihood w.r.t. all the parameters, evaluated at a single datapoint $n$.
You should start deriving the equations for $\frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}$ for each $j$. For clarity, we'll use the shorthand $\delta^q_j = \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}$.
For $j = t^{(n)}$:
$
\delta^q_j
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log p_j}
\frac{\partial \log p_j}{\partial \log q_j}
+ \frac{\partial \mathcal{L}^{(n)}}{\partial \log Z}
\frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= 1 \cdot 1 - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= 1 - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
$
For $j \neq t^{(n)}$:
$
\delta^q_j
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log Z}
\frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
= - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
$
Complete the above derivations for $\delta^q_j$ by furtherly developing $\frac{\partial \log Z}{\partial Z}$ and $\frac{\partial Z}{\partial \log q_j}$. Both are quite simple. For these it doesn't matter whether $j = t^{(n)}$ or not.
For $j = t^{(n)}$:
\begin{align}
\delta^q_j
&= 1 - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j} \\
&= 1 - \frac{1}{Z} \frac{\partial \sum_i \exp (\log q_i)}{\partial \log q_j} \\
&= 1 - \frac{1}{Z} \exp (\log q_j) \\
&= 1 - \frac{q_j}{Z}
\end{align}
For $j \neq t^{(n)}$:
\begin{align}
\delta^q_j
&= - \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j} \\
&= - \frac{q_j}{Z}
\end{align}
Given your equations for computing the gradients $\delta^q_j$ it should be quite straightforward to derive the equations for the gradients of the parameters of the model, $\frac{\partial \mathcal{L}^{(n)}}{\partial W_{ij}}$ and $\frac{\partial \mathcal{L}^{(n)}}{\partial b_j}$. The gradients for the biases $\bb$ are given by:
$
\frac{\partial \mathcal{L}^{(n)}}{\partial b_j}
= \frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}
\frac{\partial \log q_j}{\partial b_j}
= \delta^q_j
\cdot 1
= \delta^q_j
$
The equation above gives the derivative of $\mathcal{L}^{(n)}$ w.r.t. a single element of $\bb$, so the vector $\nabla_\bb \mathcal{L}^{(n)}$ with all derivatives of $\mathcal{L}^{(n)}$ w.r.t. the bias parameters $\bb$ is:
$
\nabla_\bb \mathcal{L}^{(n)} = \mathbf{\delta}^q
$
where $\mathbf{\delta}^q$ denotes the vector of size $10 \times 1$ with elements $\mathbf{\delta}_j^q$.
The (not fully developed) equation for computing the derivative of $\mathcal{L}^{(n)}$ w.r.t. a single element $W_{ij}$ of $\bW$ is:
$
\frac{\partial \mathcal{L}^{(n)}}{\partial W_{ij}} =
\frac{\partial \mathcal{L}^{(n)}}{\partial \log q_j}
\frac{\partial \log q_j}{\partial W_{ij}}
= \mathbf{\delta}_j^q
\frac{\partial \log q_j}{\partial W_{ij}}
$
What is $\frac{\partial \log q_j}{\partial W_{ij}}$? Complete the equation above.
If you want, you can give the resulting equation in vector format ($\nabla_{\bw_j} \mathcal{L}^{(n)} = ...$), like we did for $\nabla_\bb \mathcal{L}^{(n)}$.
$
\frac{\partial \log q_j}{\partial W_{ij}} = \frac{\partial }{\partial W_{ij}}(\textbf{w}^T_{j} \textbf{x} + b_{j}) = \frac{\partial }{\partial W_{ij}} (\sum_i W_{ij}x_i +b_j) = x_i
$
So
$\frac{\partial \mathcal{L}^{(n)}}{\partial W_{ij}} = \mathbf{\delta}_j^q x_i $
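The two results above combine into a single vectorized computation: the full weight gradient is the outer product of the input and $\mathbf{\delta}^q$. A minimal sketch with our own variable names (shapes follow the $784 \times 10$ convention used in this lab):

```python
import numpy as np

M, K = 784, 10
x = np.random.rand(M)        # one flattened input image
delta_q = np.random.rand(K)  # the vector delta^q from the derivation above

# dL/dW_{ij} = delta^q_j * x_i  <=>  grad_W = x (delta^q)^T, an M x K matrix
grad_W = np.outer(x, delta_q)
grad_b = delta_q             # dL/db_j = delta^q_j
```

Computing the whole matrix in one `np.outer` call avoids looping over the $7840$ individual entries.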
1.1.2 Implement gradient computations (10 points)
Implement the gradient calculations you derived in the previous question. Write a function logreg_gradient(x, t, w, b) that returns the gradients $\nabla_{\bw_j} \mathcal{L}^{(n)}$ (for each $j$) and $\nabla_{\bb} \mathcal{L}^{(n)}$, i.e. the first partial derivatives of the log-likelihood w.r.t. the parameters $\bW$ and $\bb$, evaluated at a single datapoint (x, t).
The computation will contain roughly the following intermediate variables:
$
\log \bq \rightarrow Z \rightarrow \log \bp\,,\, \mathbf{\delta}^q
$
followed by computation of the gradient vectors $\nabla_{\bw_j} \mathcal{L}^{(n)}$ (contained in a $784 \times 10$ matrix) and $\nabla_{\bb} \mathcal{L}^{(n)}$ (a $10 \times 1$ vector).
For maximum points, ensure the function is numerically stable.
End of explanation
"""
def sgd_iter(x_train, t_train, W, b):
    # go through the training set once in randomized order
    l = len(t_train)
    rand_list = list(range(l))
    shuffle(rand_list)
    logp_train = [0 for i in range(l)]
    learning_rate = 1e-6
    w_new = W
    b_new = b
    for i in range(l):
        logpt, grad_w, grad_b = logreg_gradient(x_train[rand_list[i]:rand_list[i] + 1, :],
                                                t_train[rand_list[i]:rand_list[i] + 1],
                                                w_new, b_new)
        # gradient ascent on the log-likelihood
        w_new = w_new + learning_rate * grad_w
        b_new = b_new + learning_rate * grad_b
        logp_train[i] = np.asscalar(logpt)
    logp_train_mean = mean(logp_train)
    return logp_train_mean, w_new, b_new
# Sanity check:
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
logp_train, W, b = sgd_iter(x_train[:5], t_train[:5], w, b)
"""
Explanation: 1.1.3 Stochastic gradient descent (10 points)
Write a function sgd_iter(x_train, t_train, w, b) that performs one iteration of stochastic gradient descent (SGD), and returns the new weights. It should go through the training set once in randomized order, call logreg_gradient(x, t, w, b) for each datapoint to get the gradients, and update the parameters using a small learning rate of 1E-6. Note that in this case we're maximizing the likelihood function, so we should actually be performing gradient ascent... For more information about SGD, see Bishop 5.2.4 or an online source (e.g. https://en.wikipedia.org/wiki/Stochastic_gradient_descent)
End of explanation
"""
iterations = 10
def test_sgd(x_train, t_train, w, b):
    iterations = 10
    logp_iter = [0 for i in range(iterations)]
    logp_iter_valid = [0 for i in range(iterations)]
    w_valid = w
    b_valid = b
    for i in range(iterations):
        print('iteration: ', i)
        # training set
        logp_train, w, b = sgd_iter(x_train, t_train, w, b)
        logp_iter[i] = logp_train
        # validation set
        logp_valid_train, w_valid, b_valid = sgd_iter(x_valid, t_valid, w_valid, b_valid)
        logp_iter_valid[i] = logp_valid_train
    return logp_iter, logp_iter_valid, w, b, b_valid
np.random.seed(1243)
w = np.zeros((28*28, 10))
b = np.zeros(10)
logp_iter, logp_iter_valid, w, b, b_valid = test_sgd(x_train, t_train, w, b)
plt.scatter(range(iterations), logp_iter)
plt.scatter(range(iterations), logp_iter_valid)
plt.show()
"""
Explanation: 1.2. Train
1.2.1 Train (10 points)
Perform 10 SGD iterations through the trainingset. Plot (in one graph) the conditional log-probability of the trainingset and validation set after each iteration.
End of explanation
"""
tars = [i for i in range(10)]
plot_digits(w.transpose(), num_cols=5, targets=tars)
"""
Explanation: 1.2.2 Visualize weights (10 points)
Visualize the resulting parameters $\bW$ after a few iterations through the training set, by treating each column of $\bW$ as an image. If you want, you can use or edit the plot_digits(...) above.
End of explanation
"""
logps = [0 for i in range(len(t_valid))]
for i in range(len(t_valid)):
    # evaluate with the parameters (w, b) trained on the training set
    logpt, grad_w, grad_b = logreg_gradient(x_valid[i:i+1, :], t_valid[i:i+1], w, b)
    logps[i] = np.asscalar(logpt)
sorted_logps = np.sort(logps)
highest = sorted_logps[-8:]
highest_t = [t_valid[logps.index(i)] for i in highest]
highest_x = np.concatenate([x_valid[logps.index(i):logps.index(i)+1,:] for i in highest])
lowest = sorted_logps[:8]
lowest_t = [t_valid[logps.index(i)] for i in lowest]
lowest_x = np.concatenate([x_valid[logps.index(i):logps.index(i)+1,:] for i in lowest])
print("Highest probability of true class label: ")
plot_digits(highest_x, num_cols=4, targets=highest_t)
print("Lowest probability of true class label: ")
plot_digits(lowest_x, num_cols=4, targets=lowest_t)
"""
Explanation: Describe in less than 100 words why these weights minimize the loss
These weights were found with SGD (ascent) maximizing the log-likelihood. In multiclass logistic regression the loss is the negative log-likelihood, so weights that maximize the log-likelihood also minimize the negative log-likelihood, i.e. the loss.
1.2.3. Visualize the 8 hardest and 8 easiest digits (10 points)
Visualize the 8 digits in the validation set with the highest probability of the true class label under the model.
Also plot the 8 digits that were assigned the lowest probability.
Ask yourself if these results make sense.
End of explanation
"""
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(np.negative(z)))
def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))
class Network(object):
    def __init__(self, sizes):
        self.num_layers = len(sizes) - 1
        self.sizes = sizes
        self.biases = [np.random.randn(1, j) for j in self.sizes[1:]]
        self.b_grads = [np.zeros((1, j)) for j in self.sizes[1:]]
        self.weights = [(np.random.randn(y, x) / np.sqrt(x)).transpose()
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
        self.w_grads = [np.zeros((y, x)).transpose()
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
        self.activations = [np.zeros((1, j)) for j in self.sizes]

    def feedForward(self, x):
        self.activations[0] = x.reshape((1, len(x)))
        for i, b, w in zip(range(1, self.num_layers), self.biases[:-1], self.weights[:-1]):
            self.activations[i] = sigmoid(self.activations[i-1].dot(w) + b)
        # last layer: linear output (log q), no activation function
        self.activations[-1] = self.activations[-2].dot(self.weights[-1]) + self.biases[-1]
        return self.activations[-1]

    def SGD(self, x, t, learn):
        idx = list(range(len(x)))
        random.shuffle(idx)
        log_train = 0
        for i in idx:
            self.feedForward(x[i, :])
            err = self.backprop(t[i])
            for k in range(self.num_layers):
                self.biases[k] += learn * self.b_grads[k]
                self.weights[k] += learn * self.w_grads[k]
            log_train += err
        return log_train

    def backprop(self, t):
        # log-sum-exp trick for the normalizer
        max_activ = np.max(self.activations[-1])
        logZ = max_activ + np.log(np.sum(np.exp(self.activations[-1] - max_activ)))
        logp = self.activations[-1] - logZ
        # gradients for the last layer
        d = np.zeros(self.activations[-1].shape[1])
        d[t] = 1
        delta_q = d - np.exp(self.activations[-1] - logZ)
        self.w_grads[-1] = self.activations[-2].T * delta_q
        self.b_grads[-1] = delta_q
        delta_h = delta_q.dot(self.weights[-1].T)
        # backpropagate through the hidden layers
        for i in range(1, self.num_layers):
            curr_l = self.num_layers - i
            next_l = curr_l - 1
            sig_prime = self.activations[curr_l] * (1 - self.activations[curr_l])
            self.w_grads[curr_l-1] = self.activations[next_l].T.dot(delta_h * sig_prime)
            self.b_grads[curr_l-1] = delta_h * sig_prime
            delta_h = delta_h.dot(self.weights[curr_l-1].T)
        return logp[0, t].squeeze()

    def validate(self, x, t):
        self.feedForward(x)
        max_activ = np.max(self.activations[-1])
        logZ = max_activ + np.log(np.sum(np.exp(self.activations[-1] - max_activ)))
        logp = self.activations[-1] - logZ
        return logp[0, t]
def test_mlp(x_train, t_train, x_valid, t_valid, network, learn=1e-2):
    measure_points = []
    t_logp = []
    v_logp = []
    iterations = 10
    for i in range(iterations):
        print('Iteration ' + str(i))
        logp_train = network.SGD(x_train, t_train, learn)
        if i % 2 == 0:
            measure_points.append(i)
            t_logp.append(logp_train)
            sum_valid = 0
            for j in range(len(t_valid)):
                sum_valid += network.validate(x_valid[j, :], t_valid[j])
            v_logp.append(sum_valid)
    plot_digits(network.weights[0].transpose(), num_cols=5, shape=(28,28))
    fig, ax = plt.subplots()
    ax.plot(measure_points, t_logp, marker='.')
    ax.plot(measure_points, v_logp, marker='.')
    plt.legend(['train', 'validation'])
    plt.show()
    return v_logp[-1]
# Write training code here:
# Plot the conditional loglikelihoods for the train and validation dataset after every iteration.
# Plot the weights of the first layer.
np.random.seed(99)
nt = Network([784,20,10])
test_mlp(x_train, t_train, x_valid, t_valid,nt)
"""
Explanation: Part 2. Multilayer perceptron
You discover that the predictions by the logistic regression classifier are not good enough for your application: the model is too simple. You want to increase the accuracy of your predictions by using a better model. For this purpose, you're going to use a multilayer perceptron (MLP), a simple kind of neural network. The perceptron will have a single hidden layer $\bh$ with $L$ elements. The parameters of the model are $\bV$ (connections between input $\bx$ and hidden layer $\bh$), $\ba$ (the biases/intercepts of $\bh$), $\bW$ (connections between $\bh$ and $\log q$) and $\bb$ (the biases/intercepts of $\log q$).
The conditional probability of the class label $j$ is given by:
$\log p(t = j \;|\; \bx, \bb, \bW) = \log q_j - \log Z$
where $q_j$ are again the unnormalized probabilities per class, and $Z = \sum_j q_j$ is again the probability normalizing factor. Each $q_j$ is computed using:
$\log q_j = \bw_j^T \bh + b_j$
where $\bh$ is a $L \times 1$ vector with the hidden layer activations (of a hidden layer with size $L$), and $\bw_j$ is the $j$-th column of $\bW$ (a $L \times 10$ matrix). Each element of the hidden layer is computed from the input vector $\bx$ using:
$h_j = \sigma(\bv_j^T \bx + a_j)$
where $\bv_j$ is the $j$-th column of $\bV$ (a $784 \times L$ matrix), $a_j$ is the $j$-th element of $\ba$, and $\sigma(.)$ is the so-called sigmoid activation function, defined by:
$\sigma(x) = \frac{1}{1 + \exp(-x)}$
Note that this model is almost equal to the multiclass logistic regression model, but with an extra 'hidden layer' $\bh$. The activations of this hidden layer can be viewed as features computed from the input, where the feature transformation ($\bV$ and $\ba$) is learned.
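The forward computations described above can be sketched in NumPy as follows. This is a minimal illustration with our own (hypothetical) initialization choices and variable names, not the required implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

M, L, K = 784, 20, 10                     # input, hidden, output sizes
rng = np.random.RandomState(0)
V = rng.normal(scale=0.01, size=(M, L))   # input -> hidden weights
a = np.zeros(L)                           # hidden biases
W = rng.normal(scale=0.01, size=(L, K))   # hidden -> output weights
b = np.zeros(K)                           # output biases

x = rng.rand(M)                           # one (random stand-in) input image
h = sigmoid(V.T @ x + a)                  # h_j = sigma(v_j^T x + a_j)
logq = W.T @ h + b                        # log q_j = w_j^T h + b_j
logZ = np.max(logq) + np.log(np.sum(np.exp(logq - np.max(logq))))
logp = logq - logZ                        # normalized class log-probabilities
```

Note that the output layer is exactly the multiclass logistic regression model, only applied to $\bh$ instead of $\bx$.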
2.1 Derive gradient equations (20 points)
State (shortly) why $\nabla_{\bb} \mathcal{L}^{(n)}$ is equal to the earlier (multiclass logistic regression) case, and why $\nabla_{\bw_j} \mathcal{L}^{(n)}$ is almost equal to the earlier case.
Like in multiclass logistic regression, you should use intermediate variables $\mathbf{\delta}_j^q$. In addition, you should use intermediate variables $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial h_j}$.
Given an input image, roughly the following intermediate variables should be computed:
$
\log \bq \rightarrow Z \rightarrow \log \bp \rightarrow \mathbf{\delta}^q \rightarrow \mathbf{\delta}^h
$
where $\mathbf{\delta}_j^h = \frac{\partial \mathcal{L}^{(n)}}{\partial \bh_j}$.
Give the equations for computing $\mathbf{\delta}^h$, and for computing the derivatives of $\mathcal{L}^{(n)}$ w.r.t. $\bW$, $\bb$, $\bV$ and $\ba$.
You can use the convenient fact that $\frac{\partial}{\partial x} \sigma(x) = \sigma(x) (1 - \sigma(x))$.
K = size of the output layer. In this example K = 10.
I = size of the input layer. In this example I = 784.
First we calculate the intermediate gradient $\mathbf{\delta}^h$. Note that every $\log q_j$ depends on every hidden unit $h_i$, so we have to sum over the output index $j$:
\begin{align}
\frac{\partial \log q_j}{\partial h_i} &=
\frac{\partial}{\partial h_i}\left(\sum_{l=1}^{L} W_{l,j} h_l + b_j\right) = W_{i,j}
\end{align}
and therefore, using the $\delta^q_j$ from the logistic regression case, $\delta^q_j = \mathbb{1}[j = t^{(n)}] - \frac{q_j}{Z}$:
\begin{align}
\delta^h_i
&= \frac{\partial \mathcal{L}^{(n)}}{\partial h_i}
= \sum_{j=1}^{K} \delta^q_j \frac{\partial \log q_j}{\partial h_i}
= \sum_{j=1}^{K} W_{i,j}\, \delta^q_j
\end{align}
or, in vector form, $\mathbf{\delta}^h = \bW \mathbf{\delta}^q$.
The elements of the gradient with respect to the bias $\nabla_{\bb} \mathcal{L}^{(n)}$:
For $j = t^{(n)}$:
\begin{align}
\frac{\partial \log q_j}{\partial b_j} &= 1 \\
\frac{\partial \mathcal{L}^{(n)}}{\partial b_j} &=
\frac{\partial \log q_j}{\partial \log q_j}
\frac{\partial \log q_j}{\partial b_j}
- \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
\frac{\partial \log q_j}{\partial b_j} \\
&= 1 - \frac{q_j}{Z}
\end{align}
For $j \neq t^{(n)}$:
\begin{align}
\frac{\partial \mathcal{L}^{n}}{\partial b_j} = - \frac{q_j}{Z}
\end{align}
The bias gradient is the same as in the multiclass logistic regression case, because $\log q_j$ depends on $b_j$ in exactly the same additive way; the hidden layer does not change this dependence.
The elements of the gradient with respect to the weights $W_{i,j}$, $\nabla_{\bW} \mathcal{L}^{(n)}$:
For $j = t^{(n)}$:
\begin{align}
\frac{\partial \log q_j}{\partial W_{i,j}} &=
\frac{\partial}{\partial W_{i,j}}\left(\sum_{l=1}^{L} W_{l,j} h_l + b_j\right) = h_i \\
\frac{\partial \mathcal{L}^{(n)}}{\partial W_{i,j}} &=
\frac{\partial \log q_j}{\partial \log q_j}
\frac{\partial \log q_j}{\partial W_{i,j}}
- \frac{\partial \log Z}{\partial Z}
\frac{\partial Z}{\partial \log q_j}
\frac{\partial \log q_j}{\partial W_{i,j}} \\
&= h_i - \frac{q_j}{Z} h_i
\end{align}
For $j \neq t^{(n)}$:
\begin{align}
\frac{\partial \mathcal{L}^{n}}{\partial W_{i,j}} = - \frac{q_j}{Z}h_i
\end{align}
The weight gradient is similar to the logistic regression case because $\log q_j$ depends linearly on the hidden layer $\bh$, just as it depended linearly on $\bx$ before; this is why $h_i$ takes the place of $x_i$ in the gradient.
The elements of the gradient with respect to the biases $a_j$, $\nabla_{\ba} \mathcal{L}^{(n)}$:
\begin{align}
\frac{\partial}{\partial a_j}\left(\bv_j^T \bx + a_j\right) &= 1 \\
\frac{\partial \mathcal{L}^{(n)}}{\partial a_j} &= \delta^h_j
\frac{\partial h_j}{\partial a_j} \\
&= \delta^h_j\, \sigma\left(\bv_j^T \bx + a_j\right) \left(1 - \sigma\left(\bv_j^T \bx + a_j\right)\right)
\end{align}
The elements of the gradient with respect to the weights $V_{i,j}$, $\nabla_{\bV} \mathcal{L}^{(n)}$:
\begin{align}
\frac{\partial}{\partial V_{i,j}}\left(\bv_j^T \bx + a_j\right) &=
\frac{\partial}{\partial V_{i,j}}\left(\sum_{l=1}^{I} V_{l,j} x_l + a_j\right) = x_i \\
\frac{\partial \mathcal{L}^{(n)}}{\partial V_{i,j}} &= \delta^h_j
\frac{\partial h_j}{\partial V_{i,j}} \\
&= \delta^h_j\, \sigma\left(\bv_j^T \bx + a_j\right) \left(1 - \sigma\left(\bv_j^T \bx + a_j\right)\right) x_i
\end{align}
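As a quick sanity check, the identity $\frac{\partial}{\partial x} \sigma(x) = \sigma(x)(1 - \sigma(x))$ used throughout this derivation can be verified numerically with a central finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-5, 5, 11)
eps = 1e-6
# central finite difference approximation of the derivative
finite_diff = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
# the analytic form used in the derivation above
analytic = sigmoid(x) * (1 - sigmoid(x))
max_err = np.max(np.abs(finite_diff - analytic))
```

The maximum discrepancy is on the order of the finite-difference error, far below any practical tolerance.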
2.2 MAP optimization (10 points)
You derived equations for finding the maximum likelihood solution of the parameters. Explain, in a few sentences, how you could extend this approach so that it optimizes towards a maximum a posteriori (MAP) solution of the parameters, with a Gaussian prior on the parameters.
We could extend this by changing the objective $\mathcal{L}^{(n)}$ that we optimize. Currently we maximize the likelihood; to include a prior belief about the distribution of the parameters, we add the log of a Gaussian prior to the objective, so that gradient ascent maximizes the log posterior instead of the log-likelihood. Since the log of a zero-mean Gaussian prior is quadratic in the parameters, this adds a term proportional to the negative weights to each gradient (weight decay), giving different gradient equations, different updates and a different final solution.
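Concretely, a zero-mean Gaussian prior $p(\bw) \propto \exp(-\frac{\lambda}{2}\|\bw\|^2)$ contributes $-\lambda \bw$ to the gradient. A hedged sketch of the modified update rule (the value of $\lambda$ and all names here are our own choices, not part of the assignment):

```python
import numpy as np

lam = 0.01            # precision of the Gaussian prior (hypothetical value)
learning_rate = 1e-6

def map_update(w, grad_w, lam=lam, lr=learning_rate):
    # Gradient ascent on log p(t|x,w) + log p(w):
    # the Gaussian log-prior contributes -lam * w to the gradient.
    return w + lr * (grad_w - lam * w)

w = np.ones((4, 3))
# with a zero likelihood gradient, the prior term alone shrinks the weights
w_new = map_update(w, np.zeros((4, 3)))
```

The same $-\lambda$ term applies to $\bV$; the biases are often left unregularized.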
2.3. Implement and train a MLP (15 points)
Implement a MLP model with a single hidden layer of 20 neurons.
Train the model for 10 epochs.
Plot (in one graph) the conditional log-probability of the trainingset and validation set after each two iterations, as well as the weights.
10 points: Working MLP that learns with plots
+5 points: Fast, numerically stable, vectorized implementation
End of explanation
"""
predict_test = np.zeros(len(t_test))
# Fill predict_test with the predicted targets from your model, don't cheat :-).
# YOUR CODE HERE
raise NotImplementedError()
assert predict_test.shape == t_test.shape
n_errors = np.sum(predict_test != t_test)
print('Test errors: %d' % n_errors)
"""
Explanation: 2.3.1. Explain the weights (5 points)
In less than 80 words, explain how and why the weights of the hidden layer of the MLP differ from the logistic regression model, and relate this to the stronger performance of the MLP.
The weights are noisier, but they represent a different kind of structure than in the logistic regression case: instead of one template per digit class, each hidden unit learns a reusable feature of the input, and the output layer combines these learned features, which is what gives the MLP its stronger performance.
2.3.2. Less than 250 misclassifications on the test set (10 bonus points)
You receive an additional 10 bonus points if you manage to train a model with very high accuracy: at most 2.5% misclassified digits on the test set. Note that the test set contains 10000 digits, so your model should misclassify at most 250 digits. This should be achievable with an MLP model with one hidden layer. See results of various models at: http://yann.lecun.com/exdb/mnist/index.html. To reach such a low error rate, you probably need a very high $L$ (many hidden units), probably $L > 200$, and a strong Gaussian prior on the weights. In this case you are allowed to use the validation set for training.
You are allowed to add additional layers, and use convolutional networks, although that is probably not required to reach 2.5% misclassifications.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sampling_rate = 20 # This quantity is in Hertz
step = 1.0 / sampling_rate
Tmax = 20.0
time = np.arange(0, Tmax, step)
N_to_use = 1024 # Should be a power of two.
"""
Explanation: How the FFT (Fast Fourier Transform) works in Python and how to use it. A practical guide.
Motivation
I wrote this in order to have a future reference on how the FFT works in Python. Basically, every time I try to do some frequency-related analysis I have to rethink all the quantities. Therefore I decided to write down a basic implementation here with all the details.
Implementation Number 1
Here we just show how to go from a signal composed of a handful of frequencies to an FFT that reveals those frequencies in the proper units.
End of explanation
"""
print("The smallest frequency that the FFT will discern: ", sampling_rate / N_to_use)
print("Nyquist Frequency: ", sampling_rate / 2)
"""
Explanation: Size of the FFT
The FFT is going to be of size N_to_use. Using a power of two here allows the calculation to run faster. If the signal has fewer points than N_to_use, zero padding is used.
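The zero-padding behaviour can be checked directly. Here is a small sketch (an addition, not from the original notebook) showing that `np.fft.fft(y, n)` with `n` larger than the signal length matches the FFT of an explicitly zero-padded copy:

```python
import numpy as np

# Sketch: np.fft.fft(y, n) with n > len(y) zero-pads the input internally.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n_fft = 8  # next power of two above the signal length

padded = np.concatenate([signal, np.zeros(n_fft - len(signal))])

implicit = np.fft.fft(signal, n_fft)  # numpy pads for us
explicit = np.fft.fft(padded)         # we pad by hand

print(np.allclose(implicit, explicit))  # True
```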
End of explanation
"""
f1 = 1.0
f2 = 2.0
f3 = 4.0 # All of this on Hertz
y1 = np.sin(2 * np.pi * f1 * time)
y2 = np.sin(2 * np.pi * f2 * time)
y3 = np.sin(2 * np.pi * f3 * time)
y = y1 + y2 + y3
transform = np.fft.fft(y, N_to_use)
# We get the proper frequencies for the FFT
frequencies = np.fft.fftfreq(N_to_use, d=step)
"""
Explanation: Analysis of how the sampling rate limits what the FFT can tell us
The Smallest Possible Frequency
The smallest (slowest) frequency that the FFT can resolve is proportional to the sampling rate and inversely proportional to the number of points that we use. This makes sense: the higher the sampling rate, the more points we require to fill one period and therefore to get information about the signal. On the other hand, the bigger the number of points, the easier it is to cover one period of the signal, and therefore we can get information about smaller frequencies.
The Nyquist Frequency, or the Biggest Possible Frequency
This is more straightforward: the bigger the sampling frequency, the higher the range of frequencies that we can get information from.
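The Nyquist limit can also be demonstrated directly with a small aliasing sketch (an addition, not from the original notebook): a sine above the Nyquist frequency cannot be represented faithfully and shows up at a folded frequency instead.

```python
import numpy as np

# With sampling_rate = 20 Hz the Nyquist frequency is 10 Hz, so a 12 Hz
# sine aliases: its spectral peak appears at 20 - 12 = 8 Hz.
sampling_rate = 20.0
step = 1.0 / sampling_rate
t = np.arange(1024) * step
y = np.sin(2 * np.pi * 12.0 * t)  # true frequency: 12 Hz

spectrum = np.abs(np.fft.fft(y))
freqs = np.fft.fftfreq(len(y), d=step)

positive = freqs > 0
peak_freq = freqs[positive][np.argmax(spectrum[positive])]
print(peak_freq)  # ~8 Hz, not 12 Hz
```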
End of explanation
"""
%matplotlib inline
plt.plot(frequencies, np.abs(transform))
plt.title('Fast Fourier Transform')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Power Spectrum')
plt.xlim([-6, 6])
"""
Explanation: A word about frequency units and the value of pi
When we multiply the frequency and time by $2\pi$ in the argument of the trigonometric function, we are actually asking that the natural period of the sine be equal to one. Otherwise we would require $2\pi$ units to go from one period to the next.
In other words we are doing this so we can talk about the frequency in ordinary terms (1 / s) instead of angular units.
See angular frequency vs cycles per second in order to further understand this point.
End of explanation
"""
aux = int(N_to_use / 2)
freq_aux = frequencies[0: aux]
plt.plot(freq_aux, np.abs(transform[:aux]))
plt.title('Fast Fourier Transform')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Power Spectrum')
plt.xlim([0, 6])
"""
Explanation: Final Comments
So we see that the FFT gives back the frequencies at the proper values, which we know because we defined them to be that way. Finally, one usually ignores the negative frequencies and only takes the positive values in this kind of analysis. This can be achieved by slicing the frequency and transform arrays appropriately. The frequencies start at 0, run up to the middle of the vector, and then show the negative frequencies, so we keep only the first half of the frequencies and, consequently, of the transform vector.
End of explanation
"""
sampling_rate = 100 # This quantity is in Hertz
step = 1.0 / sampling_rate
Tmax = 20.0
time = np.arange(0, Tmax, step)
N_to_use = 1024 * 2 # Should be a power of two.
"""
Explanation: Implementation Number 2
Here I show how we can calculate the Fourier transform, then the inverse that gets us back to the original signal, and how the sampling rate and the number of points that we use to calculate the FFT affect the units of the inverse.
End of explanation
"""
T = 10.0 # Period
f = 1.0 / T # Frequency relationship
y = np.sin(2 * np.pi * f * time)
transform = np.fft.fft(y, N_to_use)
inverse = np.fft.ifft(transform, N_to_use)
time_inverse = np.arange(0, N_to_use * step, step)
# Now we plot this.
plt.subplot(1, 2, 1)
plt.title('Original Signal')
plt.plot(time, y)
plt.subplot(1, 2, 2)
plt.title('Recovered Signal')
plt.plot(time_inverse, inverse.real)
"""
Explanation: Here we will give the frequency in terms of the period for ease of interpretation.
End of explanation
"""
sampling_rate * T
"""
Explanation: About the Period of the Recovered Signal
The recovered signal is going to repeat itself after sampling_rate * Period samples, which in this case is:
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/datasets_advanced.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Advanced: Datasets
Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.
If you're not already familiar with the basic functionality of adding datasets, make sure to read the datasets tutorial first.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
"""
b.add_dataset('lc',
times=[0,1],
dataset='lc01',
overwrite=True)
print(b.get_parameter(qualifier='times', dataset='lc01'))
print(b.filter(qualifier='ld_mode', dataset='lc01'))
"""
Explanation: Passband Options
Passband options follow the exact same rules as dataset columns.
Sending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method).
Note that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided.
End of explanation
"""
b.add_dataset('lc',
times=[0,1],
ld_mode='manual',
ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'},
dataset='lc01',
overwrite=True)
print(b.filter(qualifier='ld_func', dataset='lc01'))
"""
Explanation: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary.
End of explanation
"""
print(b.filter('ld_func@lc01', check_default=False))
"""
Explanation: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well.
End of explanation
"""
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
b.add_dataset('lc',
times=times,
fluxes=fluxes,
sigmas=sigmas,
dataset='lc01',
overwrite=True)
"""
Explanation: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.
Adding a Dataset from a File
Manually from Arrays
For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section).
Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all components in the hierarchy.
End of explanation
"""
print(b.get_ephemeris())
print(b.to_phase(0.0))
print(b.to_time(-0.25))
"""
Explanation: Enabling and Disabling Datasets
See the Compute Tutorial
Dealing with Phases
Datasets will no longer accept phases. It is the user's responsibility to convert
phased data into times given an ephemeris. But it's still useful to be able to
convert times to phases (and vice versa) and be able to plot in phase.
Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time.
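For reference, the conversion PHOEBE expects you to do yourself can be sketched with a simple linear ephemeris (this is an illustrative stand-in, not PHOEBE's internal implementation; PHOEBE's own ephemeris may include additional terms such as dpdt):

```python
# Assumed linear ephemeris: t = t0 + period * (cycle + phase).
def phases_to_times(phases, t0=0.0, period=1.0, cycle=0):
    return [t0 + period * (cycle + ph) for ph in phases]

def times_to_phases(times, t0=0.0, period=1.0):
    # fold into the range [-0.5, 0.5), one common convention
    return [((t - t0) / period + 0.5) % 1.0 - 0.5 for t in times]

print(phases_to_times([0.0, 0.25, 0.5], t0=0.0, period=2.0))  # [0.0, 0.5, 1.0]
print(times_to_phases([1.5], t0=0.0, period=2.0))             # [-0.25]
```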
End of explanation
"""
print(b.to_phase(b.get_value(qualifier='times')))
"""
Explanation: All of these by default use the period in the top-level of the current hierarchy,
but accept a component keyword argument if you'd like the ephemeris of an
inner-orbit or the rotational ephemeris of a star in the system.
We'll see how plotting works later, but if you manually wanted to plot the dataset
with phases, all you'd need to do is:
End of explanation
"""
print(b.to_phase('times@lc01'))
"""
Explanation: or
End of explanation
"""
b.add_dataset('lc',
compute_phases=np.linspace(0,1,11),
dataset='lc01',
overwrite=True)
"""
Explanation: Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed.
End of explanation
"""
b.add_dataset('lc',
times=[0],
dataset='lc01',
overwrite=True)
print(b['compute_phases@lc01'])
b.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times')
b.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101))
"""
Explanation: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced: compute times & phases tutorial.
Note also that although you can pass compute_phases directly to add_dataset, if you do not, it will be constrained by compute_times by default. In this case, you would need to flip the constraint before setting compute_phases. See the constraints tutorial and the flip_constraint API docs for more details on flipping constraints.
End of explanation
"""
print(b.datasets)
"""
Explanation: Removing Datasets
Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo.
End of explanation
"""
b.remove_dataset('lc01')
print(b.datasets)
"""
Explanation: The simplest way to remove a dataset is by its dataset tag:
End of explanation
"""
b.remove_dataset(kind='rv')
print(b.datasets)
"""
Explanation: But remove_dataset also takes any other tag(s) that could be sent to filter.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb | apache-2.0 | import os
import tensorflow as tf
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["TFVERSION"] = '2.5'
"""
Explanation: Training Hybrid Recommendation Model with the MovieLens Dataset
Note: It is recommended that you complete the companion als_bqml.ipynb notebook before continuing with this als_bqml_hybrid.ipynb notebook. If you already have the movielens dataset and trained model you can skip the "Import the dataset and trained model" section.
Learning objectives
1. Extract user and product factors from a BigQuery Matrix Factorization Model.
2. Format inputs for a BigQuery Hybrid Recommendation Model.
Introduction
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live in, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
"""
!bq mk movielens
"""
Explanation: Import the dataset and trained model
In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML
To save you the steps of having to do so again (if this is a new environment) you can run the below commands to copy over the clean data and trained model.
First create the BigQuery dataset and copy over the data. If you get already exists in the output, please move forward in the notebook.
End of explanation
"""
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender_16 \
movielens.recommender_16
bq --location=US cp \
cloud-training-demos:movielens.recommender_hybrid \
movielens.recommender_hybrid
"""
Explanation: Next, copy over the trained recommendation model. Note that if your project is in the EU, you will need to change the location from US to EU below. Note that, as of the time of writing, you cannot copy models across regions with bq cp.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
"""
Explanation: Next, ensure the model still works by invoking predictions for movie recommendations:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
processed_input,
feature,
TO_JSON_STRING(factor_weights) AS factor_weights,
intercept
FROM ML.WEIGHTS(MODEL `movielens.recommender_16`)
WHERE
(processed_input = 'movieId' AND feature = '96481')
OR (processed_input = 'userId' AND feature = '54192')
"""
Explanation: Incorporating user and movie information
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live in, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.
Obtaining user and product factors
We can get the user factors or product factors from ML.WEIGHTS. For example to get the product factors for movieId=96481 and user factors for userId=54192, we would do:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
userId,
RAND() * COUNT(rating) AS loyalty,
CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
movielens.ratings
GROUP BY userId
"""
Explanation: Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.
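As a sketch of that computation (with made-up factor values, and assuming the score is the dot product of the two factor vectors plus the per-user and per-movie intercepts, as the weights query above suggests):

```python
import numpy as np

# Hypothetical 4-dimensional factors; the real recommender_16 model has 16.
user_factors = np.array([0.1, -0.3, 0.5, 0.2])
movie_factors = np.array([0.4, 0.1, -0.2, 0.3])
user_intercept, movie_intercept = 3.1, 0.4  # made-up intercepts

predicted_rating = user_factors @ movie_factors + user_intercept + movie_intercept
print(predicted_rating)  # approximately 3.47
```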
Creating input features
The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let’s create some synthetic information about users:
End of explanation
"""
%%bigquery --project $PROJECT
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatures
LIMIT 5
"""
Explanation: Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
End of explanation
"""
%%bigquery --project $PROJECT
WITH productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT * FROM productFeatures
LIMIT 5
"""
Explanation: Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre, since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features as follows:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
),
productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT
p.* EXCEPT(movieId),
u.* EXCEPT(userId),
rating
FROM productFeatures p, userFeatures u
JOIN movielens.ratings r
ON r.movieId = p.movieId AND r.userId = u.userId
"""
Explanation: Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
TODO 1: Combine the above two queries to get the user factors and product factor for each rating.
NOTE: The below cell will take approximately 4~5 minutes for the completion.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT *
FROM movielens.hybrid_dataset
LIMIT 1
"""
Explanation: One of the rows of this table looks like this:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64,
u16 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)],
u[OFFSET(15)]
));
"""
Explanation: Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Training hybrid recommendation model
At the time of writing, BigQuery ML can not handle arrays as inputs to a regression model. Let’s, therefore, define a function to convert arrays to a struct where the array elements are its fields:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT
[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u)
"""
Explanation: which gives:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS
STRUCT<
p1 FLOAT64,
p2 FLOAT64,
p3 FLOAT64,
p4 FLOAT64,
p5 FLOAT64,
p6 FLOAT64,
p7 FLOAT64,
p8 FLOAT64,
p9 FLOAT64,
p10 FLOAT64,
p11 FLOAT64,
p12 FLOAT64,
p13 FLOAT64,
p14 FLOAT64,
p15 FLOAT64,
p16 FLOAT64
> AS (STRUCT(
p[OFFSET(0)],
p[OFFSET(1)],
p[OFFSET(2)],
p[OFFSET(3)],
p[OFFSET(4)],
p[OFFSET(5)],
p[OFFSET(6)],
p[OFFSET(7)],
p[OFFSET(8)],
p[OFFSET(9)],
p[OFFSET(10)],
p[OFFSET(11)],
p[OFFSET(12)],
p[OFFSET(13)],
p[OFFSET(14)],
p[OFFSET(15)]
));
"""
Explanation: We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
TODO 2: Create a function that returns named columns from a size 16 product factor array.
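Writing these 16-field UDFs by hand is tedious and error-prone. A hypothetical Python helper (not part of the original notebook) can generate the SQL for any factor size:

```python
def make_arr_to_input_sql(n, name, param):
    """Generate the CREATE FUNCTION SQL for an n-element array-to-struct UDF."""
    fields = ",\n".join(f"  {param}{i + 1} FLOAT64" for i in range(n))
    offsets = ",\n".join(f"  {param}[OFFSET({i})]" for i in range(n))
    return (
        f"CREATE OR REPLACE FUNCTION movielens.arr_to_input_{n}_{name}"
        f"({param} ARRAY<FLOAT64>)\n"
        f"RETURNS STRUCT<\n{fields}\n>\n"
        f"AS (STRUCT(\n{offsets}\n));"
    )

sql = make_arr_to_input_sql(16, "users", "u")
print(sql)
```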
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
* EXCEPT(user_factors, product_factors),
movielens.arr_to_input_16_users(user_factors).*,
movielens.arr_to_input_16_products(product_factors).*
FROM
movielens.hybrid_dataset
"""
Explanation: Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating:
NOTE: The below cell will take approximately 25~30 minutes for the completion.
End of explanation
"""
|
probml/pyprobml | deprecated/IPM_divergences.ipynb | mit | import jax
import random
import numpy as np
import jax.numpy as jnp
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
!pip install dm-haiku
!pip install optax
import haiku as hk
import optax
sns.set(rc={"lines.linewidth": 2.8}, font_scale=2)
sns.set_style("whitegrid")
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/IPM_divergences.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Critics in IPMs and variational bounds on $f$-divergences
Author: Mihaela Rosca
This colab uses a simple example (two 1-d distributions) to show what the critics of various IPMs (the Wasserstein distance and MMD) look like. We also look at how smooth estimators (neural nets) can estimate density ratios which are not smooth, and how that can be useful in providing a good learning signal for a model.
End of explanation
"""
import scipy.stats
from scipy.stats import truncnorm
from scipy.stats import beta
# We allow a displacement from 0 of the beta distribution.
class TranslatedBeta:
def __init__(self, a, b, expand_dims=False, displacement=0):
self._a = a
self._b = b
self.expand_dims = expand_dims
self.displacement = displacement
def rvs(self, size):
val = beta.rvs(self._a, self._b, size=size) + self.displacement
return np.expand_dims(val, axis=1) if self.expand_dims else val
def pdf(self, x):
return beta.pdf(x - self.displacement, self._a, self._b)
p_param1 = 3
p_param2 = 5
q_param1 = 2
q_param2 = 3
start_p = 0
start_r = 1
start_q = 2
p_dist = TranslatedBeta(p_param1, p_param2, displacement=start_p)
q_dist = TranslatedBeta(q_param1, q_param2, displacement=start_q)
r_dist = TranslatedBeta(q_param1, q_param2, displacement=start_r)
plt.figure(figsize=(14, 10))
p_x_samples = p_dist.rvs(size=15)
q_x_samples = q_dist.rvs(size=15)
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
p_x_pdfs = p_dist.pdf(p_linspace_x)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
q_x_pdfs = q_dist.pdf(q_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p_1(x)$")
plt.plot(p_x_samples, [0] * len(p_x_samples), "bo", ms=10)
plt.plot(q_linspace_x, q_x_pdfs, "r", label=r"$p_2(x)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), "rd", ms=10)
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend()
plt.xticks([])
plt.yticks([])
plt.figure(figsize=(14, 8))
local_start_p = 0
local_start_r = 1.2
local_start_q = 2.4
local_p_dist = TranslatedBeta(p_param1, p_param2, displacement=local_start_p)
local_q_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_q)
local_r_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_r)
p_linspace_x = np.linspace(local_start_p, local_start_p + 1, 100)
q_linspace_x = np.linspace(local_start_q, local_start_q + 1, 100)
r_linspace_x = np.linspace(local_start_r, local_start_r + 1, 100)
p_x_pdfs = local_p_dist.pdf(p_linspace_x)
q_x_pdfs = local_q_dist.pdf(q_linspace_x)
r_x_pdfs = local_r_dist.pdf(r_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b")
plt.plot(q_linspace_x, q_x_pdfs, "r")
plt.plot(r_linspace_x, r_x_pdfs, "g")
num_samples = 15
plt.plot(local_p_dist.rvs(size=num_samples), [0] * num_samples, "bo", ms=10, label=r"$p^*$")
plt.plot(local_q_dist.rvs(size=num_samples), [0] * num_samples, "rd", ms=10, label=r"$q(\theta_1)$")
plt.plot(local_r_dist.rvs(size=num_samples), [0] * num_samples, "gd", ms=10, label=r"$q(\theta_2)$")
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: KL and non-overlapping distributions
- Non-overlapping distributions (visualized below).
- The density ratio $p/q$ is infinite wherever $q = 0$ and $p > 0$, so the KL integral diverges.
- Moving the distributions closer together does not help: as long as the supports do not overlap, the KL provides no learning signal.
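A tiny numerical illustration of this point (using discrete distributions as a stand-in for the densities plotted below): when the supports do not overlap, the ratio is infinite wherever $p > 0$ and the KL diverges.

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

p = np.array([0.5, 0.5, 0.0, 0.0])
q_overlap = np.array([0.25, 0.25, 0.25, 0.25])
q_disjoint = np.array([0.0, 0.0, 0.5, 0.5])

print(entropy(p, q_overlap))   # finite: log(2)
print(entropy(p, q_disjoint))  # inf -- no learning signal
```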
End of explanation
"""
model_transform = hk.without_apply_rng(
hk.transform(
lambda *args, **kwargs: hk.Sequential(
[hk.Linear(10), jax.nn.relu, hk.Linear(10), jax.nn.tanh, hk.Linear(40), hk.Linear(1)]
)(*args, **kwargs)
)
)
BATCH_SIZE = 100
NUM_UPDATES = 1000
dist1 = TranslatedBeta(p_param1, p_param2, expand_dims=True, displacement=start_p)
dist2 = TranslatedBeta(q_param1, q_param2, expand_dims=True, displacement=start_q)
@jax.jit
def estimate_kl(params, dist1_batch, dist2_batch):
dist1_logits = model_transform.apply(params, dist1_batch)
dist2_logits = model_transform.apply(params, dist2_batch)
return jnp.mean(dist1_logits - jnp.exp(dist2_logits - 1))
def update(params, opt_state, dist1_batch, dist2_batch):
model_loss = lambda *args: -estimate_kl(*args)
loss, grads = jax.value_and_grad(model_loss, has_aux=False)(params, dist1_batch, dist2_batch)
params_update, new_opt_state = optim.update(grads, opt_state, params)
new_params = optax.apply_updates(params, params_update)
return loss, new_params, new_opt_state
NUM_UPDATES = 200
rng = jax.random.PRNGKey(1)
init_model_params = model_transform.init(rng, dist1.rvs(BATCH_SIZE))
params = init_model_params
optim = optax.adam(learning_rate=0.0005, b1=0.9, b2=0.999)
opt_state = optim.init(init_model_params)
for i in range(NUM_UPDATES):
# Get a new batch of data
x = dist1.rvs(BATCH_SIZE)
y = dist2.rvs(BATCH_SIZE)
loss, params, opt_state = update(params, opt_state, x, y)
if i % 50 == 0:
print("Loss at {}".format(i))
print(loss)
plotting_x = np.expand_dims(np.linspace(-1.0, 3.5, 100), axis=1)
# TODO: how do you get the ratio values from the estimate - need to check the fgan paper
ratio_values = model_transform.apply(params, plotting_x)
# ratio_values = 1 + np.log(model_transform.apply(params, plotting_x))
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(6.1, -0.6, start_q, linestyles="--", color="r")
plt.hlines(6.1, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.6, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x, ratio_values[:, 0].T, color="darkgray", label=r"MLP approx to $\frac{p^*}{q(\theta)}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: Approximation of the ratio using the f-gan approach
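The estimate_kl objective defined earlier is the variational bound $KL(p \| q) = \sup_T E_p[T(x)] - E_q[e^{T(x) - 1}]$, which is tight at the optimal critic $T^*(x) = 1 + \log \frac{p(x)}{q(x)}$; consequently the ratio can be recovered from a trained critic as $e^{T - 1}$. A small numerical check with known discrete distributions (an illustrative aside, not part of the original notebook):

```python
import numpy as np

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.2, 0.5, 0.3])

kl = np.sum(p * np.log(p / q))  # exact KL(p || q)
t_star = 1 + np.log(p / q)      # optimal critic T*
bound = np.sum(p * t_star) - np.sum(q * np.exp(t_star - 1))

print(kl, bound)  # the bound attains the exact KL at T*
```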
End of explanation
"""
plt.figure(figsize=(14, 8))
grad_fn = jax.grad(lambda x: model_transform.apply(params, x)[0])
grad_values = jax.vmap(grad_fn)(plotting_x)
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(5.8, -0.6, start_q, linestyles="--", color="r")
plt.hlines(5.8, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.4, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x,
ratio_values[:, 0].T,
color="darkgray",
label=r"$f_{\phi}$ approximating $\frac{p^*}{q(\theta)}$",
linewidth=4,
)
plt.plot(plotting_x, grad_values[:, 0].T, color="orange", label=r"$\nabla_{x} f_{\phi}(x)$", linewidth=4, ls="-.")
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: Gradients
In order to see why the learned density ratio has useful properties for learning, we can plot the gradients of the learned density ratio across the input space.
End of explanation
"""
from scipy.optimize import linprog
def get_W_witness_spectrum(p_samples, q_samples):
n = len(p_samples)
m = len(q_samples)
X = np.concatenate([p_samples, q_samples], axis=0)
## AG: repeat [-1/n] n times
c = np.array(n * [-1 / n] + m * [1 / m])
A_ub, b_ub = [], []
for i in range(n + m):
for j in range(n + m):
if i == j:
continue
z = np.zeros(n + m)
z[i] = 1
z[j] = -1
A_ub.append(z)
b_ub.append(np.abs(X[i] - X[j]))
## AG: Minimize: c^T * x
## Subject to: A_ub * x <= b_ub
res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, method="simplex", options={"tol": 1e-5})
a = res["x"]
## AG: second argument xs to be passed into the internal
## function.
def witness_spectrum(x):
diff = np.abs(x - X[:, np.newaxis])
one = np.min(a[:, np.newaxis] + diff, axis=0)
two = np.max(a[:, np.newaxis] - diff, axis=0)
return one, two
return witness_spectrum
x = np.linspace(-1, 3.5, 100)
wass_estimate = get_W_witness_spectrum(p_x_samples + start_p, q_x_samples + start_q)(x)
wa, wb = wass_estimate
w = (wa + wb) / 2
w -= w.mean()
plt.figure(figsize=(14, 6))
display_offset = 0.8
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(x, w + display_offset, "r", label=r"$f^{\star}$", linewidth=4)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: Wasserstein distance for the same two distributions
Computing the Wasserstein critic in 1 dimension. Reminder that the Wasserstein distance is defined as:
$$
W(p, q) = \sup_{\|\|f\|\|_{Lip} \le 1} E_p(x) f(x) - E_q(x) f(x)
$$
The below code finds the values of f evaluated at the samples of the two distributions. This vector is computed to maximise the empirical (Monte Carlo) estimate of the IPM:
$$
\frac{1}{n}\sum_{i=1}^n f(x_i) - \frac{1}{m}\sum_{j=1}^m f(y_j)
$$
where $x_i$ are samples from the first distribution, while $y_j$ are samples
from the second distribution. Since we want the function $f$ to be 1-Lipschitz,
inequality constraints are added to ensure that for every pair of samples
from the two distributions, $\forall x \in {x_1, ... x_n, y_1, ... y_m}, \forall y \in {x_1, ... x_n, y_1, ... y_m}$:
$$
f(x) - f(y) \le |x - y| \
f(y) - f(x) \le |x - y| \
$$
This maximisation needs to occur under the constraint that the function $f$
is 1-Lipschitz, which is ensured using the constraints of the linear program.
Note: This approach does not scale to large datasets.
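As a sanity check on the linear program (an aside, not from the original notebook): in one dimension, $W_1$ has a closed form via the quantile coupling. For equal sample sizes it is simply the mean absolute difference of the sorted samples, which matches scipy's implementation and scales to large datasets.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=500)
ys = rng.normal(2.0, 1.0, size=500)

# Equal sample sizes: W1 = mean |sorted(xs) - sorted(ys)|.
manual = np.mean(np.abs(np.sort(xs) - np.sort(ys)))
print(manual, wasserstein_distance(xs, ys))  # both ~2, the gap between the means
```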
Thank you to Arthur Gretton and Dougal J Sutherland for this version of the code.
End of explanation
"""
def covariance(kernel_fn, X, Y):
num_rows = len(X)
num_cols = len(Y)
K = np.zeros((num_rows, num_cols))
for i in range(num_rows):
for j in range(num_cols):
K[i, j] = kernel_fn(X[i], Y[j])
return K
def gaussian_kernel(x1, x2, gauss_var=0.1, height=2.2):
return height * np.exp(-np.linalg.norm(x1 - x2) ** 2 / gauss_var)
def evaluate_mmd_critic(p_samples, q_samples):
n = p_samples.shape[0]
m = q_samples.shape[0]
p_cov = covariance(gaussian_kernel, p_samples, p_samples)
print("indices")
print(np.diag_indices(n))
p_samples_norm = np.sum(p_cov) - np.sum(p_cov[np.diag_indices(n)])
p_samples_norm /= n * (n - 1)
q_cov = covariance(gaussian_kernel, q_samples, q_samples)
q_samples_norm = np.sum(q_cov) - np.sum(q_cov[np.diag_indices(m)])
q_samples_norm /= m * (m - 1)
p_q_cov = covariance(gaussian_kernel, p_samples, q_samples)
p_q_norm = np.sum(p_q_cov)
p_q_norm /= n * m
norm = p_samples_norm + q_samples_norm - 2 * p_q_norm
def critic(x):
p_val = np.mean([gaussian_kernel(x, y) for y in p_samples])
q_val = np.mean([gaussian_kernel(x, y) for y in q_samples])
return (p_val - q_val) / norm
return critic
critic_fn = evaluate_mmd_critic(p_x_samples, q_x_samples)
plt.figure(figsize=(14, 6))
display_offset = 0
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(
start_p + x, np.array([critic_fn(x_val) for x_val in x]) + display_offset, "r", label=r"$f^{\star}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
"""
Explanation: MMD computation
The MMD is an IPM defined as:
$$
MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} E_p(x) f(x) - E_q(x) f(x)
$$
where $\mathcal{H}$ is a RKHS. Using the mean embedding operators in an RKHS, we can write:
$$
E_p(x) f(x) = \langle f, \mu_p \rangle \\
E_q(x) f(x) = \langle f, \mu_q \rangle
$$
replacing in the MMD:
$$
MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \langle f, \mu_p - \mu_q \rangle
$$
which means that
$$
f = \frac{\mu_p - \mu_q}{\|\mu_p - \mu_q\|_{\mathcal{H}}}
$$
To obtain an estimate of $f$ evaluated at $x$ we use that:
$$
f(x) = \frac{\mathbb{E}_{p(y)} k(x, y) - \mathbb{E}_{q(y)} k(x, y)}{\|\mu_p - \mu_q\|_{\mathcal{H}}}
$$
to estimate $\|\mu_p - \mu_q\|_{\mathcal{H}}$ we use:
$$
\|\mu_p - \mu_q\|_{\mathcal{H}}^2 = \langle \mu_p - \mu_q, \mu_p - \mu_q \rangle = \langle \mu_p, \mu_p \rangle + \langle \mu_q, \mu_q \rangle - 2 \langle \mu_p, \mu_q \rangle
$$
To estimate the dot products, we use:
$$
\langle \mu_p, \mu_p \rangle = E_p(x) \mu_p(x) = E_p(x) \langle \mu_p, k(x, \cdot) \rangle = E_p(x) E_p(x') k(x, x')
$$
For more details see the slides here: http://www.gatsby.ucl.ac.uk/~gretton/coursefiles/lecture5_distribEmbed_1.pdf
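The norm computed inside `evaluate_mmd_critic` is the standard unbiased MMD² estimator. As a compact stand-alone version of the same statistic (shown here with a unit-variance Gaussian kernel rather than the notebook's `gauss_var=0.1, height=2.2` defaults):

```python
import math

def gauss_k(x, y, var=1.0):
    return math.exp(-(x - y) ** 2 / var)

def mmd2_unbiased(p, q, var=1.0):
    # Off-diagonal kernel means within each sample set, minus twice
    # the full cross-kernel mean: E_p E_p k + E_q E_q k - 2 E_p E_q k.
    n, m = len(p), len(q)
    pp = sum(gauss_k(a, b, var) for i, a in enumerate(p)
             for j, b in enumerate(p) if i != j) / (n * (n - 1))
    qq = sum(gauss_k(a, b, var) for i, a in enumerate(q)
             for j, b in enumerate(q) if i != j) / (m * (m - 1))
    pq = sum(gauss_k(a, b, var) for a in p for b in q) / (n * m)
    return pp + qq - 2 * pq

# Far-apart samples give a large MMD^2; overlapping samples a small one.
print(mmd2_unbiased([0.0, 0.1, 0.2], [5.0, 5.1, 5.2]))
```

Note that the unbiased estimate can be slightly negative when the two samples overlap heavily.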
End of explanation
"""
|
cgpotts/cs224u | hw_formatting_guide.ipynb | apache-2.0 | __author__ = "Insop"
__version__ = "CS224u, Stanford, Spring 2022"
"""
Explanation: Homework and bake-off code: Formatting guide
End of explanation
"""
def test_create_glove_embedding(func):
vocab = ['NLU', 'is', 'the', 'future', '.', '$UNK', '<s>', '</s>']
# DON'T modify functions like this!
#
# glove_embedding, glove_vocab = func(vocab, 'data/glove.6B/glove.6B.50d.txt')
# DO KEEP the code as it was, since the autograder calls functions in
# the same way shown in this line:
glove_embedding, glove_vocab = func(vocab, 'glove.6B.50d.txt')
    assert isinstance(glove_embedding, np.ndarray), \
        "Expected embedding type {}; got {}".format(
            np.ndarray.__name__, glove_embedding.__class__.__name__)
assert glove_embedding.shape == (8, 50), \
"Expected embedding shape (8, 50); got {}".format(glove_embedding.shape)
assert glove_vocab == vocab, \
"Expected vocab {}; got {}".format(vocab, glove_vocab)
"""
Explanation: Contents
Overview
Original system code
Modifying provided code in the original notebook
External imports
Custom code
Long running test code
Overview
This notebook provides a list of Dos and Don'ts for writing code for original systems and bake-offs.
Original system code
Our assignments need to handle specific homework questions and also very open ended original systems that can have arbitrary dependencies and data requirements, so our instructions have to be quite detailed to handle both.
Here's one quick reminder/clarification of a common issue:
Please be sure to include your Original System code and bake-off call within the scope of this if conditional:
if 'IS_GRADESCOPE_ENV' not in os.environ:
test_evaluate_pooled_bert(evaluate_pooled_bert)
This ensures that the autograder does not attempt to run your original system code. This includes any import statements used in your Original System – they should be within the if conditional.
Overall – please do not modify any portion of these cells other than
the comment spaces for system text description and peak score reporting; and
the space in the if conditional where you are meant to put your code.
Since we encourage creativity and do not want to constrain things, your original system code will instead be awarded credit manually by CFs after the assignment due date. This is also why you will not see a full grade out of 10 until after the submission deadline, when CFs have manually awarded the original system points.
Modifying provided code in the original notebook
Please do not modify provided code in the original notebook, such as changing the function arguments or default parameters. The autograder will call functions to test the homework problem code, and the autograder uses the function arguments as shown in the original notebook.
Here is an example (from hw_colors.ipynb) where the provided code was modified to use func(vocab, 'data/glove.6B/glove.6B.50d.txt') instead of the original code func(vocab, 'glove.6B.50d.txt'). This might work fine in your local environment; however, the autograder will separately call func the same way as shown in the original notebook. That's why we suggest that you not modify the provided code.
End of explanation
"""
#
# DON'T!
#
# This will cause the autograder to fail!
pip install 'git+https://github.com/NVIDIA/dllogger'
# Directly importing external modules outside of `if 'IS_GRADESCOPE_ENV'` scope
# will also cause the autograder to fail.
#
# DO!
#
# This is good!
#
if 'IS_GRADESCOPE_ENV' not in os.environ:
# You can install and import modules of your choice --
# for example:
# https://github.com/NVIDIA/dllogger/issues/1
    !pip install 'git+https://github.com/NVIDIA/dllogger'
"""
Explanation: External imports
End of explanation
"""
#
# DON'T!
#
# This type of custom code will fail, since the autograder is not
# equipped with a GPU:
#
try:
t_gpu = torch.randn(3,3, device='cuda:0')
except AssertionError as err:
print(err)
t_gpu
#
# DO
#
# This is good!
#
if 'IS_GRADESCOPE_ENV' not in os.environ:
# This is okay since this code will not run in the autograder
# environment:
try:
t_gpu = torch.randn(3,3, device='cuda:0')
except AssertionError as err:
print(err)
t_gpu
"""
Explanation: Custom code
End of explanation
"""
#
# DON'T!
#
# This type of custom code will cause the autograder to time out:
#
my_test_function_runs_an_hour()
#
# DO
#
# This is good!
#
if 'IS_GRADESCOPE_ENV' not in os.environ:
# Run as many tests as you wish!
my_test_function_runs_an_hour()
"""
Explanation: Long running test code
Any long running test code should be inside the if conditional block.
End of explanation
"""
#
# DON'T!
#
# This type of custom code will cause the autograder fail with this message
# "NameError: name 'get_ipython' is not defined"
#
%%time
if 'IS_GRADESCOPE_ENV' not in os.environ:
my_func_to_measure_time()
#
# DO
#
# This is good!
#
if 'IS_GRADESCOPE_ENV' not in os.environ:
    # %%time is a cell magic and must appear on the first line of a cell,
    # so use the %time line magic inside the conditional instead:
    %time my_func_to_measure_time()
"""
Explanation: Time measurements
Any time measurement code, such as %%time, should be inside the if conditional block.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/08_image/mnist_models.ipynb | apache-2.0 | import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "dnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
End of explanation
"""
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
"""
Explanation: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py files containing the model code are in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Let's first run it locally for a few steps to test the code.
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
"""
Explanation: Now, let's do it on Cloud ML Engine so we can train on GPU (--scale-tier=BASIC_GPU)
Note the GPU speed up depends on the model type. You'll notice the more complex CNN model trains significantly faster on GPU, however the speed up on the simpler models is not as pronounced.
End of explanation
"""
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/mnist/trained_{}".format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
"""
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
"""
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
"""
Explanation: Here's what it looks like with a linear model for 10,000 steps:
<img src="images/eval_linear_10000.png" width="60%"/>
Here are my results:
Model | Accuracy | Time taken | Model description | Run time parameters
--- | :---: | --- | --- | ---
linear | 91.53 | 3 min | | 100 steps, LR=0.01, Batch=512
linear | 92.73 | 8 min | | 1000 steps, LR=0.01, Batch=512
linear | 92.29 | 18 min | | 10000 steps, LR=0.01, Batch=512
dnn | 98.14 | 15 min | 300-100-30 nodes fully connected | 10000 steps, LR=0.01, Batch=512
dnn | 97.99 | 48 min | 300-100-30 nodes fully connected | 100000 steps, LR=0.01, Batch=512
dnn_dropout | 97.84 | 29 min | 300-100-30-DL(0.1) nodes | 20000 steps, LR=0.01, Batch=512
cnn | 98.97 | 35 min | maxpool(10 5x5 cnn, 2)-maxpool(20 5x5 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 98.93 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 99.17 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits only) | 20000 steps, LR=0.01, Batch=512
cnn | 99.27 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits, deep) | 10000 steps, LR=0.01, Batch=512
cnn | 99.48 | 12 hr | as-above but nfil1=20, nfil2=27, dprob=0.1, lr=0.001, batchsize=233 | (hyperparameter optimization)
Deploying and predicting with model
Deploy the model:
End of explanation
"""
import json, codecs
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
HEIGHT = 28
WIDTH = 28
mnist = input_data.read_data_sets("mnist/data", one_hot = True, reshape = False)
IMGNO = 5 #CHANGE THIS to get different images
jsondata = {"image": mnist.test.images[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(mnist.test.images[IMGNO].reshape(HEIGHT, WIDTH));
"""
Explanation: To predict with the model, let's take one of the example images.
End of explanation
"""
%%bash
gcloud ml-engine predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
"""
Explanation: Send it to the prediction service
End of explanation
"""
%%writefile hyperparam.yaml
trainingInput:
scaleTier: CUSTOM
masterType: complex_model_m_gpu
hyperparameters:
goal: MAXIMIZE
maxTrials: 30
maxParallelTrials: 2
hyperparameterMetricTag: accuracy
params:
- parameterName: train_batch_size
type: INTEGER
minValue: 32
maxValue: 512
scaleType: UNIT_LINEAR_SCALE
- parameterName: learning_rate
type: DOUBLE
minValue: 0.001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nfil1
type: INTEGER
minValue: 5
maxValue: 20
scaleType: UNIT_LINEAR_SCALE
- parameterName: nfil2
type: INTEGER
minValue: 10
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: dprob
type: DOUBLE
minValue: 0.1
maxValue: 0.6
scaleType: UNIT_LINEAR_SCALE
"""
Explanation: DO NOT RUN anything beyond this point
This shows you what I did, but trying to repeat this will take several hours.
<br/>
Hyperparameter tuning
This is what hyperparam.yaml looked like:
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}_hparam
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--runtime-version=$TFVERSION \
--config hyperparam.yaml \
-- \
--output_dir=$OUTDIR \
--model=$MODEL_TYPE --batch_norm
"""
Explanation: This takes <b>13 hours and 250 ML Units</b>, so don't try this at home :)
The key thing is here the --config parameter.
End of explanation
"""
|
snowch/movie-recommender-demo | notebooks/Prerequisites 01 - Spark Hello World.ipynb | apache-2.0 | import socket
"""
Explanation: Spark Cluster Overview
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
End of explanation
"""
print( "Hello World from " + socket.gethostname() )
"""
Explanation: This code runs on the 'driver' node
End of explanation
"""
rdd = spark.sparkContext.parallelize( range(0, 100) )
"""
Explanation: Create some data and distribute on the cluster 'executor' nodes
End of explanation
"""
rdd = rdd.map( lambda x: "Hello World from " + socket.gethostname() ).collect()
"""
Explanation: Run a function on the nodes and return the values back to the 'driver' node
End of explanation
"""
print( rdd )
"""
Explanation: Print all the values
End of explanation
"""
print( set(rdd) )
"""
Explanation: Print out the unique values
End of explanation
"""
! rm -f ratings.dat
! wget https://raw.githubusercontent.com/snowch/movie-recommender-demo/master/web_app/data/ratings.dat
"""
Explanation: Alternating Least Squares (ALS) Hello World
Load the data
End of explanation
"""
! head -3 ratings.dat
! echo
! tail -3 ratings.dat
"""
Explanation: Inspect the data
End of explanation
"""
from pyspark.mllib.recommendation import Rating
ratingsRDD = sc.textFile('ratings.dat') \
.map(lambda l: l.split("::")) \
.map(lambda p: Rating(
user = int(p[0]),
product = int(p[1]),
rating = float(p[2]),
)).cache()
from pyspark.mllib.recommendation import ALS
# set some values for the parameters
# these should be ascertained via experimentation
rank = 5
numIterations = 20
lambdaParam = 0.1
model = ALS.train(ratingsRDD, rank, numIterations, lambdaParam)
"""
Explanation: Load the data in spark
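The `ratings.dat` format is plain text with `::`-delimited fields, and the RDD pipeline above only uses the first three. The parsing step amounts to:

```python
def parse_rating(line):
    # Split a "user::item::rating[::...]" line into typed fields; any
    # trailing fields (e.g. a timestamp) are ignored, matching the
    # p[0]/p[1]/p[2] indexing in the RDD pipeline above.
    parts = line.split("::")
    return int(parts[0]), int(parts[1]), float(parts[2])

print(parse_rating("1::1193::5::978300760"))  # (1, 1193, 5.0)
```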
End of explanation
"""
model.predict(user=1, product=1)
"""
Explanation: Predict how the user=1 would rate product=1
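Under the hood, a matrix-factorization prediction is just the inner product of two learned latent-factor vectors (here of length `rank = 5`). A toy illustration with made-up factors — not the ones this model learned:

```python
def mf_predict(user_factors, item_factors):
    # Predicted rating = dot product of the user's and the item's
    # latent-factor vectors.
    return sum(u * v for u, v in zip(user_factors, item_factors))

user = [0.2, 0.5, 0.1, 0.0, 0.3]   # hypothetical user factors
item = [1.0, 0.4, 0.0, 0.2, 0.5]   # hypothetical item factors
print(mf_predict(user, item))      # ~0.55
```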
End of explanation
"""
model.recommendProductsForUsers(1).toDF().collect()
"""
Explanation: Predict the top (1) recommendations for all users.
End of explanation
"""
|
evanmiltenburg/python-for-text-analysis | Assignments/ASSIGNMENT-4b-BA.ipynb | apache-2.0 | import json
my_tweets = json.load(open('my_tweets.json'))
for id_, tweet_info in my_tweets.items():
print(id_, tweet_info)
break
"""
Explanation: Assignment 4b-BA: Sentiment analysis using VADER
Due: Friday October 15, 2021, before 14:30
Please note that this is Assignment 4 for the Bachelor version of the Python course: Introduction to Python for Humanities and Social Sciences (L_AABAALG075)
Please submit your assignment (notebooks of parts 4a + 4b + JSON file) as a single .zip file.
Please name your zip file with the following naming convention: ASSIGNMENT_4_FIRSTNAME_LASTNAME.zip
Please submit your assignment on Canvas: Assignments --> Assignment 4
If you have questions about this chapter, please contact us at cltl.python.course@gmail.com. Questions and answers will be collected in this Q&A document, so please check if your question has already been answered.
Credits
The notebooks in this block have been originally created by Marten Postma and Isa Maks. Adaptations were made by Filip Ilievski.
Part I: VADER assignments
Preparation (nothing to submit):
To be able to answer the VADER questions you need to know how the tool works.
* Read more about the VADER tool in this blog.
* VADER provides 4 scores (positive, negative, neutral, compound). Be sure to understand what they mean and how they are calculated.
* VADER uses rules to handle linguistic phenomena such as negation and intensification. Be sure to understand which rules are used, how they work, and why they are important.
* VADER makes use of a sentiment lexicon. Have a look at the lexicon. Be sure to understand which information can be found there (lemma?, wordform?, part-of-speech?, polarity value?, word meaning?) What do all scores mean? You can inspect the README of the VADER system for more information.
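One detail worth internalizing from the README: the `compound` score is the sum of the lexicon valences of the matched words (after rule-based modifications), squashed into $[-1, 1]$ with the normalization $x / \sqrt{x^2 + \alpha}$, where $\alpha = 15$ in the reference implementation. A quick sketch of just that normalization step:

```python
import math

def vader_normalize(score_sum, alpha=15):
    # Map a raw valence sum to VADER's [-1, 1] "compound" range.
    return score_sum / math.sqrt(score_sum ** 2 + alpha)

print(vader_normalize(0))   # 0.0  -> neutral
print(vader_normalize(1))   # 0.25 -> mildly positive
print(vader_normalize(-4))  # strongly negative, approaching -1
```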
Exercise 1
Consider the following sentences and their output as given by VADER. Analyze sentences 1 to 7, and explain the outcome for each sentence. Take into account both the rules applied by VADER and the lexicon that is used. You will find that some of the results are reasonable, but others are not. Explain what goes right when the results are correct, and what goes wrong when they are not.
```
INPUT SENTENCE 1 I love apples
VADER OUTPUT {'neg': 0.0, 'neu': 0.192, 'pos': 0.808, 'compound': 0.6369}
INPUT SENTENCE 2 I don't love apples
VADER OUTPUT {'neg': 0.627, 'neu': 0.373, 'pos': 0.0, 'compound': -0.5216}
INPUT SENTENCE 3 I love apples :-)
VADER OUTPUT {'neg': 0.0, 'neu': 0.133, 'pos': 0.867, 'compound': 0.7579}
INPUT SENTENCE 4 These houses are ruins
VADER OUTPUT {'neg': 0.492, 'neu': 0.508, 'pos': 0.0, 'compound': -0.4404}
INPUT SENTENCE 5 These houses are certainly not considered ruins
VADER OUTPUT {'neg': 0.0, 'neu': 0.51, 'pos': 0.49, 'compound': 0.5867}
INPUT SENTENCE 6 He lies in the chair in the garden
VADER OUTPUT {'neg': 0.286, 'neu': 0.714, 'pos': 0.0, 'compound': -0.4215}
INPUT SENTENCE 7 This house is like any house
VADER OUTPUT {'neg': 0.0, 'neu': 0.667, 'pos': 0.333, 'compound': 0.3612}
```
Exercise 2: Collecting 25 tweets for evaluation
Collect 25 tweets. Try to find tweets that are interesting for sentiment analysis, e.g., very positive, neutral, and negative tweets. These could be your own tweets (typed in) or collected from the Twitter stream. You can simply copy-paste tweets into the JSON file. Do not attempt to crawl them!
We will store the tweets in the file my_tweets.json (use a text editor to edit).
For each tweet, you should insert:
sentiment analysis label: negative | neutral | positive (this you determine yourself, this is not done by a computer)
the text of the tweet
the Tweet-URL
from:
"1": {
"sentiment_label": "",
"text_of_tweet": "",
"tweet_url": "",
to:
"1": {
"sentiment_label": "positive",
"text_of_tweet": "All across America people chose to get involved, get engaged and stand up. Each of us can make a difference, and all of us ought to try. So go keep changing the world in 2018.",
"tweet_url" : "https://twitter.com/BarackObama/status/946775615893655552",
},
You can load your tweets with the sentiment labels you provided in the following way:
End of explanation
"""
def run_vader(nlp,
textual_unit,
lemmatize=False,
parts_of_speech_to_consider=set(),
verbose=0):
"""
Run VADER on a sentence from spacy
:param str textual unit: a textual unit, e.g., sentence, sentences (one string)
(by looping over doc.sents)
:param bool lemmatize: If True, provide lemmas to VADER instead of words
:param set parts_of_speech_to_consider:
-empty set -> all parts of speech are provided
-non-empty set: only these parts of speech are considered
:param int verbose: if set to 1, information is printed
about input and output
:rtype: dict
:return: vader output dict
"""
doc = nlp(textual_unit)
input_to_vader = []
for sent in doc.sents:
for token in sent:
if verbose >= 2:
print(token, token.pos_)
to_add = token.text
if lemmatize:
to_add = token.lemma_
if to_add == '-PRON-':
to_add = token.text
if parts_of_speech_to_consider:
if token.pos_ in parts_of_speech_to_consider:
input_to_vader.append(to_add)
else:
input_to_vader.append(to_add)
scores = vader_model.polarity_scores(' '.join(input_to_vader))
if verbose >= 1:
print()
print('INPUT SENTENCE', sent)
print('INPUT TO VADER', input_to_vader)
print('VADER OUTPUT', scores)
return scores
def vader_output_to_label(vader_output):
"""
map vader output e.g.,
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.4215}
to one of the following values:
a) positive float -> 'positive'
b) 0.0 -> 'neutral'
c) negative float -> 'negative'
:param dict vader_output: output dict from vader
:rtype: str
:return: 'negative' | 'neutral' | 'positive'
"""
compound = vader_output['compound']
if compound < 0:
return 'negative'
elif compound == 0.0:
return 'neutral'
elif compound > 0.0:
return 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.0}) == 'neutral'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.01}) == 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': -0.01}) == 'negative'
import spacy
! python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
vader_model = SentimentIntensityAnalyzer()
my_annotation = 'positive' # what you annotate yourself
sentence = "I like Python"
vader_output = run_vader(nlp, sentence)
vader_label = vader_output_to_label(vader_output)
accurate = my_annotation == vader_label
print()
print('SENTENCE', sentence) # the sentence
print('VADER OUTPUT', vader_output) # the VADER output
print('VADER LABEL', vader_label) # the VADER output mapped to a label, in this case 'positive'
print('MY ANNOTATION', my_annotation) # my annotation
print('ACCURACY', accurate) # did VADER predict the same label as my manual annotation?
"""
Explanation: Exercise 3
In this exercise, we are going to run VADER on our own tweets and evaluate it against the sentiment labels that we manually annotated for each tweet. We are going to make use of the following two functions:
End of explanation
"""
import json
my_tweets = json.load(open('my_tweets.json'))
tweets = []
all_vader_output = []
manual_annotation = []
for id_, tweet_info in my_tweets.items():
the_tweet = tweet_info['text_of_tweet']
vader_output = ''# run vader
vader_label = ''# convert vader output to category
tweets.append(the_tweet)
all_vader_output.append(vader_label)
manual_annotation.append(tweet_info['sentiment_label'])
"""
Explanation: Exercise 3a
You will now run VADER on the tweets you've collected. You will process each tweet using the code we have shown you above. The goal is add information about each tweet (i.e. in every iteration of the loop) to each of the three lists listed below. We can use these lists to compare the Vader output to the correct labels you provided.
tweets: append your tweet
all_vader_output: append the vader_label: negative | neutral | positive
manual_annotation: append your annotation: negative | neutral | positive
You can use the code snippet below as a starting point.
End of explanation
"""
|
GoogleCloudPlatform/bigquery-notebooks | notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/01_train_bqml_mf_pmi.ipynb | apache-2.0 | from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
from google.cloud import bigquery
"""
Explanation: Part 1: Learn item embeddings based on song co-occurrence
This notebook is the first of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Explore the sample playlist data.
Compute Pointwise mutual information (PMI) that represents the co-occurence of songs on playlists.
Train a matrix factorization model using BigQuery ML to learn item embeddings based on the PMI data.
Explore the learned embeddings.
Before starting this notebook, you must run the 00_prep_bq_procedures notebook to complete the solution prerequisites.
After completing this notebook, run the 02_export_bqml_mf_embeddings notebook to process the item embedding data.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
Import libraries
End of explanation
"""
PROJECT_ID = "yourProject" # Change to your project.
!gcloud config set project $PROJECT_ID
"""
Explanation: Configure GCP environment settings
Update the PROJECT_ID variable to reflect the ID of the Google Cloud project you are using to implement this solution.
End of explanation
"""
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
"""
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Explore the sample data
Use visualizations to explore the data in the vw_item_groups view that you created in the 00_prep_bq_and_datastore.ipynb notebook.
Import libraries for data visualization:
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE TABLE recommendations.valid_items
AS
SELECT
item_Id,
COUNT(group_Id) AS item_frequency
FROM recommendations.vw_item_groups
GROUP BY item_Id
HAVING item_frequency >= 15;
SELECT COUNT(*) item_count FROM recommendations.valid_items;
"""
Explanation: Count the number of songs that occur in at least 15 groups:
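The same filtering logic, expressed in plain Python over hypothetical `(item, group)` pairs — shown only to make the SQL's `GROUP BY ... HAVING` concrete:

```python
from collections import Counter

# Hypothetical (item, playlist) membership records.
pairs = [("songA", g) for g in range(20)] + [("songB", g) for g in range(5)]

# GROUP BY item_Id ... HAVING COUNT(group_Id) >= 15
item_frequency = Counter(item for item, _ in pairs)
valid_items = {item for item, n in item_frequency.items() if n >= 15}
print(valid_items)  # {'songA'}
```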
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE TABLE recommendations.valid_groups
AS
SELECT
group_Id,
COUNT(item_Id) AS group_size
FROM recommendations.vw_item_groups
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
GROUP BY group_Id
HAVING group_size BETWEEN 2 AND 100;
SELECT COUNT(*) group_count FROM recommendations.valid_groups;
"""
Explanation: Count the number of playlists that have between 2 and 100 items:
End of explanation
"""
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) record_count
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups);
"""
Explanation: Count the number of records with valid songs and playlists:
End of explanation
"""
%%bigquery size_distribution --project $PROJECT_ID
WITH group_sizes
AS
(
SELECT
group_Id,
ML.BUCKETIZE(
COUNT(item_Id), [10, 20, 30, 40, 50, 101])
AS group_size
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups)
GROUP BY group_Id
)
SELECT
CASE
WHEN group_size = 'bin_1' THEN '[1 - 10]'
WHEN group_size = 'bin_2' THEN '[10 - 20]'
WHEN group_size = 'bin_3' THEN '[20 - 30]'
WHEN group_size = 'bin_4' THEN '[30 - 40]'
WHEN group_size = 'bin_5' THEN '[40 - 50]'
ELSE '[50 - 100]'
END AS group_size,
CASE
WHEN group_size = 'bin_1' THEN 1
WHEN group_size = 'bin_2' THEN 2
WHEN group_size = 'bin_3' THEN 3
WHEN group_size = 'bin_4' THEN 4
WHEN group_size = 'bin_5' THEN 5
ELSE 6
END AS bucket_Id,
COUNT(group_Id) group_count
FROM group_sizes
GROUP BY group_size, bucket_Id
ORDER BY bucket_Id
plt.figure(figsize=(20, 5))
q = sns.barplot(x="group_size", y="group_count", data=size_distribution)
"""
Explanation: Show the playlist size distribution:
End of explanation
"""
%%bigquery occurrence_distribution --project $PROJECT_ID
WITH item_frequency
AS
(
SELECT
Item_Id,
ML.BUCKETIZE(
COUNT(group_Id)
, [15, 30, 50, 100, 200, 300, 400]) AS group_count
FROM `recommendations.vw_item_groups`
WHERE item_Id IN (SELECT item_Id FROM recommendations.valid_items)
AND group_Id IN (SELECT group_Id FROM recommendations.valid_groups)
GROUP BY Item_Id
)
SELECT
CASE
WHEN group_count = 'bin_1' THEN '[15 - 30]'
WHEN group_count = 'bin_2' THEN '[30 - 50]'
WHEN group_count = 'bin_3' THEN '[50 - 100]'
WHEN group_count = 'bin_4' THEN '[100 - 200]'
WHEN group_count = 'bin_5' THEN '[200 - 300]'
WHEN group_count = 'bin_6' THEN '[300 - 400]'
ELSE '[400+]'
END AS group_count,
CASE
WHEN group_count = 'bin_1' THEN 1
WHEN group_count = 'bin_2' THEN 2
WHEN group_count = 'bin_3' THEN 3
WHEN group_count = 'bin_4' THEN 4
WHEN group_count = 'bin_5' THEN 5
WHEN group_count = 'bin_6' THEN 6
ELSE 7
END AS bucket_Id,
COUNT(Item_Id) item_count
FROM item_frequency
GROUP BY group_count, bucket_Id
ORDER BY bucket_Id
plt.figure(figsize=(20, 5))
q = sns.barplot(x="group_count", y="item_count", data=occurrence_distribution)
%%bigquery --project $PROJECT_ID
DROP TABLE IF EXISTS recommendations.valid_items;
%%bigquery --project $PROJECT_ID
DROP TABLE IF EXISTS recommendations.valid_groups;
"""
Explanation: Show the song occurrence distribution:
End of explanation
"""
%%bigquery --project $PROJECT_ID
DECLARE min_item_frequency INT64;
DECLARE max_group_size INT64;
SET min_item_frequency = 15;
SET max_group_size = 100;
CALL recommendations.sp_ComputePMI(min_item_frequency, max_group_size);
"""
Explanation: Compute song PMI data
You run the sp_ComputePMI stored procedure to compute song PMI data. This PMI data is what you'll use to train the matrix factorization model in the next section.
This stored procedure accepts the following parameters:
min_item_frequency — Sets the minimum number of times that a song must appear on playlists.
max_group_size — Sets the maximum number of songs that a playlist can contain.
These parameters are used together to select records where the song occurs on a number of playlists equal to or greater than the min_item_frequency value and the playlist contains a number of songs between 2 and the max_group_size value. These are the records that get processed to make the training dataset.
The stored procedure works as follows:
Creates a `valid_item_groups` table and populates it with records from the `vw_item_groups` view that meet the following criteria:
The song occurs on a number of playlists equal to or greater than the
min_item_frequency value
The playlist contains a number of songs between 2 and the max_group_size
value.
Creates the item_cooc table and populates it with co-occurrence data that
identifies pairs of songs that occur on the same playlist. It does this by:
Self-joining the valid_item_groups table on the group_id column.
Setting the cooc column to 1.
Summing the cooc column for the item1_Id and item2_Id columns.
Creates an item_frequency table and populates it with data that identifies
how many playlists each song occurs in.
Recreates the item_cooc table to include the following record sets:
The item1_Id, item2_Id, and cooc data from the original item_cooc
table. The PMI values calculated from these song pairs lets the solution
calculate the embeddings for the rows in the feedback matrix.
<img src="figures/feedback-matrix-rows.png" alt="Embedding matrix that shows the matrix rows calculated by this step." style="width: 400px;"/>
The same data as in the previous bullet, but with the item1_Id data
written to the item2_Id column and the item2_Id data written to the
item1_Id column. This data provides the mirror values of the initial
entities in the feedback matrix. The PMI values calculated from these
song pairs lets the solution calculate the embeddings for the columns in
the feedback matrix.
<img src="figures/feedback-matrix-columns.png" alt="Embedding matrix that shows the matrix columns calculated by this step." style="width: 400px;"/>
The data from the item_frequency table. The item_Id data is written
to both the item1_Id and item2_Id columns and the frequency data is
written to the cooc column. This data provides the diagonal entries of
the feedback matrix. The PMI values calculated from these song pairs lets
the solution calculate the embeddings for the diagonals in the feedback
matrix.
<img src="figures/feedback-matrix-diagonals.png" alt="Embedding matrix that shows the matrix diagonals calculated by this step." style="width: 400px;"/>
Computes the PMI for item pairs in the item_cooc table, then recreates the
item_cooc table to include this data in the pmi column.
Run the sp_ComputePMI stored procedure
End of explanation
"""
%%bigquery --project $PROJECT_ID
SELECT
a.item1_Id,
a.item2_Id,
b.frequency AS freq1,
c.frequency AS freq2,
a.cooc,
a.pmi,
a.cooc * a.pmi AS score
FROM recommendations.item_cooc a
JOIN recommendations.item_frequency b
ON a.item1_Id = b.item_Id
JOIN recommendations.item_frequency c
ON a.item2_Id = c.item_Id
WHERE a.item1_Id != a.item2_Id
ORDER BY score DESC
LIMIT 10;
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) records_count
FROM recommendations.item_cooc
"""
Explanation: View the song PMI data
End of explanation
"""
%%bigquery --project $PROJECT_ID
DECLARE dimensions INT64 DEFAULT 50;
CALL recommendations.sp_TrainItemMatchingModel(dimensions)
"""
Explanation: Train the BigQuery ML matrix factorization model
You run the sp_TrainItemMatchingModel stored procedure to train the item_matching_model matrix factorization model on the song PMI data. The model builds a feedback matrix, which in turn is used to calculate item embeddings for the songs. For more information about how this process works, see Understanding item embeddings.
This stored procedure accepts the dimensions parameter, which provides the value for the NUM_FACTORS parameter of the CREATE MODEL statement. The NUM_FACTORS parameter lets you set the number of latent factors to use in the model. Higher values for this parameter can increase model performance, but will also increase the time needed to train the model. Using the default dimensions value of 50, the model takes around 120 minutes to train.
Run the sp_TrainItemMatchingModel stored procedure
After the item_matching_model model is created successfully, you can use the BigQuery console to investigate the loss through the training iterations, and also see the final evaluation metrics.
End of explanation
"""
%%bigquery song_embeddings --project $PROJECT_ID
SELECT
feature,
processed_input,
factor_weights,
intercept
FROM
ML.WEIGHTS(MODEL recommendations.item_matching_model)
WHERE
feature IN ('2114406',
'2114402',
'2120788',
'2120786',
'1086322',
'3129954',
'53448',
'887688',
'562487',
'833391',
'1098069',
'910683',
'1579481',
'2675403',
'2954929',
'625169')
songs = {
"2114406": "Metallica: Nothing Else Matters",
"2114402": "Metallica: The Unforgiven",
"2120788": "Limp Bizkit: My Way",
"2120786": "Limp Bizkit: My Generation",
"1086322": "Jacques Brel: Ne Me Quitte Pas",
"3129954": "Édith Piaf: Non, Je Ne Regrette Rien",
"53448": "France Gall: Ella, Elle l'a",
"887688": "Enrique Iglesias: Tired Of Being Sorry",
"562487": "Shakira: Hips Don't Lie",
"833391": "Ricky Martin: Livin' la Vida Loca",
"1098069": "Snoop Dogg: Drop It Like It's Hot",
"910683": "2Pac: California Love",
"1579481": "Dr. Dre: The Next Episode",
"2675403": "Eminem: Lose Yourself",
"2954929": "Black Sabbath: Iron Man",
"625169": "Black Sabbath: Paranoid",
}
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
def process_results(results):
items = list(results["feature"].unique())
item_embeddings = dict()
for item in items:
embedding = [0.0] * 100
embedding_pair = results[results["feature"] == item]
for _, row in embedding_pair.iterrows():
factor_weights = list(row["factor_weights"])
for element in factor_weights:
embedding[element["factor"] - 1] += element["weight"]
item_embeddings[item] = embedding
return item_embeddings
item_embeddings = process_results(song_embeddings)
item_ids = list(item_embeddings.keys())
for idx1 in range(0, len(item_ids) - 1):
item1_Id = item_ids[idx1]
title1 = songs[item1_Id]
print(title1)
print("==================")
embedding1 = np.array(item_embeddings[item1_Id])
similar_items = []
for idx2 in range(len(item_ids)):
item2_Id = item_ids[idx2]
title2 = songs[item2_Id]
embedding2 = np.array(item_embeddings[item2_Id])
similarity = round(cosine_similarity([embedding1], [embedding2])[0][0], 5)
similar_items.append((title2, similarity))
similar_items = sorted(similar_items, key=lambda item: item[1], reverse=True)
for element in similar_items[1:]:
print(f"- {element[0]}' = {element[1]}")
print()
"""
Explanation: Explore the trained embeddings
End of explanation
"""
|
mari-linhares/tensorflow-workshop | test_install.ipynb | apache-2.0 | import tensorflow as tf
print("Expected version is 1.2.0 or higher")
print("You have version %s" % tf.__version__)
"""
Explanation: You can press shift + enter to quickly advance through each line of a notebook. Try it!
Check that you have a recent version of TensorFlow installed, v1.2.0 or higher.
End of explanation
"""
%matplotlib inline
import pylab
import numpy as np
# create some data using numpy. y = x * 0.1 + 0.3 + noise
x = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.01, size=len(x))
y = x * 0.1 + 0.3 + noise
# plot it
pylab.plot(x, y, '.')
"""
Explanation: Check if Matplotlib is working. After running this cell, you should see a plot appear below.
End of explanation
"""
import PIL.Image as Image
import numpy as np
from matplotlib.pyplot import imshow
image_array = np.random.rand(200,200,3) * 255
img = Image.fromarray(image_array.astype('uint8')).convert('RGBA')
imshow(np.asarray(img))
"""
Explanation: Check if Numpy and Pillow are working. After running this cell, you should see a random image appear below.
End of explanation
"""
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
BabyDataSet = list(zip(names,births))
pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
"""
Explanation: Check if Pandas is working. After running this cell, you should see a table appear below.
End of explanation
"""
|
mdeff/ntds_2016 | algorithms/02_ex_clustering.ipynb | mit | # Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import construct_kernel
from lib.utils import compute_kernel_kmeans_EM
from lib.utils import compute_kernel_kmeans_spectral
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
# Load MNIST raw data images
mat = scipy.io.loadmat('datasets/mnist_raw_data.mat')
X = mat['Xraw']
n = X.shape[0]
d = X.shape[1]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of data =',n)
print('Data dimensionality =',d);
print('Number of classes =',nc);
"""
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 2 : Unsupervised Learning
Unsupervised Clustering with Kernel K-Means
End of explanation
"""
# Your code here
"""
Explanation: Question 1a: What is the clustering accuracy of standard/linear K-Means?<br>
Hint: You may use functions Ker=construct_kernel(X,'linear') to compute the
linear kernel and [C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_classes,Ker,Theta,10) with Theta= np.ones(n) to run the standard K-Means algorithm, and accuracy = compute_purity(C_computed,C_solution,n_clusters) that returns the
accuracy.
End of explanation
"""
# Your code here
"""
Explanation: Question 1b: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) Polynomial Kernel for the EM approach and the Spectral approach?<br>
Hint: You may use functions Ker=construct_kernel(X,'gaussian') and Ker=construct_kernel(X,'polynomial',[1,0,2]) to compute the non-linear kernels<br>
Hint: You may use functions C_kmeans,__ = compute_kernel_kmeans_EM(K,Ker,Theta,10) for the EM kernel KMeans algorithm and C_kmeans,__ = compute_kernel_kmeans_spectral(K,Ker,Theta,10) for the Spectral kernel K-Means algorithm.<br>
End of explanation
"""
# Your code here
"""
Explanation: Question 1c: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) KNN_Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) KNN_Cosine_Binary Kernel for the EM approach and the Spectral approach?<br>
You can test for the value KNN_kernel=50.<br>
Hint: You may use functions Ker = construct_kernel(X,'kNN_gaussian',KNN_kernel)
and Ker = construct_kernel(X,'kNN_cosine_binary',KNN_kernel) to compute the
non-linear kernels.
End of explanation
"""
|
peterwittek/qml-rg | Archiv_Session_Spring_2017/Exercises/06_aps_with_classifiers.ipynb | gpl-3.0 | import numpy as np
import os
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
import tools as im
from matplotlib import pyplot as plt
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
"""
Explanation: Solving the Captcha problem with Random Forest and Support Vector classifiers
To be able to run this notebook you should have placed Peter's program image_loader.py and the folder of images used last week in the same folder as this notebook
End of explanation
"""
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) # reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
X[i]=np.reshape(X[i],1024) # reshape from 32x32 to a flat 1024 vector
X=np.array(X) # transforms it into an array
Y=np.asarray(xlabels) # transforms from list to array
return X,Y
"""
Explanation: We define the function prep_datas (props to Alexandre), already used the previous week. However, now we reshape the images from a 32x32 matrix (this particular size is somewhat arbitrary, but the bigger the image, the worse the classifiers will perform) to a flat 1024 vector, a constraint imposed by the Random Forest classifier.
End of explanation
"""
training_set, training_labels = im.load_images(path_train)
X_train, Y_train = prep_datas(training_set,training_labels)
test_set, test_labels = im.load_images(path_test)
X_test,Y_test=prep_datas(test_set,test_labels)
"""
Explanation: Then we load the training and the test set:
End of explanation
"""
classifierForest = RandomForestClassifier(n_estimators=1000)
classifierSVC=svm.SVC(kernel='linear')
classifierForest.fit(X_train, Y_train)
classifierSVC.fit(X_train,Y_train)
"""
Explanation: We define the Random Forest Classifier and the Support Vector Classifier and train them through the fit function. Taking a linear kernel for SVC gives the best results for this classifier.
End of explanation
"""
expectedF = Y_test
predictedF = classifierForest.predict(X_test)
predictedS = classifierSVC.predict(X_test)
print(expectedF)
print(predictedF)
print(predictedS)
"""
Explanation: Let's test how good the system is doing
End of explanation
"""
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
"""
Explanation: Now we load the real set of images and test it. This part of the program has been taken from Alexandre's program from last week. First we load the 'real world' images
End of explanation
"""
y_predF = classifierForest.predict(X_real)
y_predS = classifierSVC.predict(X_real)
"""
Explanation: Then we make the predictions with both classifiers
End of explanation
"""
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
plt.plot(y_predF,'o')
plt.plot(1.2*y_predS,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
"""
Explanation: Finally we plot the results
End of explanation
"""
|
knu2xs/arcgis-machine-learning-demonstrations | Retrieve Data as Pandas Data Frame.ipynb | apache-2.0 | import arcgis
"""
Explanation: Import the Python API module and Instantiate the GIS object
Import the Python API
End of explanation
"""
gis_retail = arcgis.gis.GIS('Pro')
"""
Explanation: Create a GIS object instance using the account currently logged in through ArcGIS Pro
End of explanation
"""
trade_area_itemid = 'bf361f9081fd43a7ba57357e74ccc373'
item = arcgis.gis.Item(gis=gis_retail, itemid=trade_area_itemid)
item
"""
Explanation: Get a Feature Set, data to work with, from the Web GIS Item ID
Create a Web GIS Item instance using the Item ID
End of explanation
"""
feature_layer = item.layers[0]
feature_layer
"""
Explanation: Since the item only contains one feature layer, get the first layer in the item, the Feature Layer we need to work with.
End of explanation
"""
feature_set = feature_layer.query(where="AREA_DESC = '0 - 8 minutes'", returnGeometry=False)
"""
Explanation: Now, for this initial analysis, query to return just the attributes for the eight minute trade areas as a Feature Set.
End of explanation
"""
data_frame = feature_set.df
data_frame.head()
"""
Explanation: Convert the Data into a Pandas Data Frame
Take advantage of the df function on the Feature set object returned from the query to convert the data to a Pandas Data Frame.
End of explanation
"""
field_name_independent_list = [field['name'] for field in feature_set.fields if
field['type'] != 'esriFieldTypeOID' and # we don't need the Esri object identifier field
field['name'].startswith('Shape_') == False and # exclude the Esri shape fields
field['type'] == 'esriFieldTypeDouble' and # ensure numeric, quantatative, fields are the only fields used
field['name'] != 'STORE_LAT' and # while numeric, the fields describing the location are not independent variables
field['name'] != 'STORE_LONG' and # while numeric, the fields describing the location are not independent variables
field['name'] != 'SALESVOL' # exclude the dependent variable
]
print(field_name_independent_list)
"""
Explanation: Save dependent and independent variable names as Python variables
Use a quick list comprehension to create a list of field names to be used as independent variables.
End of explanation
"""
field_name_dependent = 'SALESVOL'
"""
Explanation: Save the name of the dependent variable field as well.
End of explanation
"""
|
pfschus/fission_bicorrelation | methods/singles_n_sum.ipynb | mit | import os
import sys
import matplotlib.pyplot as plt
import matplotlib.colors
import numpy as np
import scipy.io as sio
import pandas as pd
from tqdm import *
# Print entire arrays instead of truncating them
np.set_printoptions(threshold=sys.maxsize) # threshold=np.nan is rejected by recent NumPy versions
import seaborn as sns
sns.set_style(style='white')
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_plot as bicorr_plot
import bicorr_e as bicorr_e
import bicorr_sums as bicorr_sums
%load_ext autoreload
%autoreload 2
det_df = bicorr.load_det_df()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
"""
Explanation: Goal: Calculate singles sum $S_i, S_j$
Patricia Schuster
University of Michigan
January 11, 2018
I built the singles histogram in the other methods notebook, singles_histogram.ipynb. Now I am going to calculate the sum on that histogram.
Update: July 17, 2018. I am going to redo this method using the energy histograms.
The reason we care about the singles rates:
In order to correct for differences in detection efficiencies and solid angles, we will divide all of the doubles rates by the singles rates of the two detectors as follows:
$ W_{i,j} = \frac{D_{i,j}}{S_i*S_j}$
I can calculate $S_i$ and $S_j$ from the singles histogram. I will write a general function to calculate the histogram, and also demonstrate how to store those sums in a pandas dataframe.
End of explanation
"""
bicorr_e.load_singles_hist_both?
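Before loading the data, the normalization formula above can be illustrated numerically. The counts here are invented placeholders, purely for showing the arithmetic:

```python
# Hypothetical counts for one detector pair (i, j); not real measurements
D_ij = 1200.0              # doubles count for the pair
S_i, S_j = 4.0e5, 5.0e5    # singles counts for detectors i and j

# Dividing by both singles rates removes the per-detector efficiency
# and solid-angle factors from the doubles rate
W_ij = D_ij / (S_i * S_j)
print(W_ij)  # 6e-09
```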
singles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = r'../analysis/Cf072115_to_Cf072215b/datap', plot_flag=True, show_flag = True)
singles_hist_e_n.shape
"""
Explanation: (new method) ENERGY: Load singles_histogram_e_n data
End of explanation
"""
e_min = 0.62
e_max = 15
i_min = np.digitize(e_min,e_bin_edges)-1
i_max = np.digitize(e_max,e_bin_edges)-1
print(i_min, i_max)
e_bin_edges[i_max]
e_bin_edges[np.digitize(e_max,e_bin_edges)-1]
singles_hist_e_n[0,i_min:i_max]
singles_hist_e_n[0,0:10]
bicorr_sums.calc_n_sum_e(singles_hist_e_n, e_bin_edges, 0, e_min, e_max)
"""
Explanation: Calculate sum over a given energy range
What goes in
e_bin_edges
singles_hist_e_n
End of explanation
"""
singles_e_df = bicorr_sums.init_singles_e_df(dict_index_to_det)
singles_e_df.head()
singles_e_df = bicorr_sums.fill_singles_e_df(dict_index_to_det, singles_hist_e_n, e_bin_edges, e_min, e_max)
singles_e_df.head()
"""
Explanation: Initialize sums dataframe
The method for this is described below.
End of explanation
"""
plt.errorbar(singles_e_df['ch'],
singles_e_df['Se'],
yerr=singles_e_df['Se_err'], fmt='.')
plt.xlabel('Detector channel')
plt.ylabel('Singles count rate')
plt.title('Singles count rate for each detector (errorbars shown)')
plt.show()
"""
Explanation: Calculate for each detector.
End of explanation
"""
singles_hist, dt_bin_edges_sh, dict_det_to_index, dict_index_to_det = bicorr.load_singles_hist(filepath=r'../analysis/Cf072115_to_Cf072215b/datap/',
plot_flag = True, show_flag =True)
"""
Explanation: (old method) TIME: Load singles_histogram data
I'm going to load it from the combined data sets Cf072115 - Cf072215b. The dimensions of singles_hist are:
Dimension 0: particle type, 0=n, 1=g
Dimension 1: detector channel
Dimension 2: dt bin
End of explanation
"""
emin = 0.62
emax = 12
tmin = bicorr.convert_energy_to_time(emax)
tmax = bicorr.convert_energy_to_time(emin)
print(tmin,tmax)
dt_bin_edges_sh[-10:]
i_min = np.min(np.argwhere(tmin<dt_bin_edges_sh))
i_max = np.min(np.argwhere(tmax<dt_bin_edges_sh))
print(i_min,i_max)
"""
Explanation: Convert energy to time
End of explanation
"""
i_min_neg = np.min(np.argwhere(-tmax<dt_bin_edges_sh))
i_max_neg = np.min(np.argwhere(-tmin<dt_bin_edges_sh))
print(i_min_neg,i_max_neg)
dt_bin_centers = (dt_bin_edges_sh[:-1]+dt_bin_edges_sh[1:])/2
plt.plot(dt_bin_centers,np.sum(singles_hist[0,:,:],axis=(0)))
plt.plot(dt_bin_centers,np.sum(singles_hist[1,:,:],axis=(0)))
plt.axvline(dt_bin_edges_sh[i_min],color='r',linestyle='--')
plt.axvline(dt_bin_edges_sh[i_max],color='r',linestyle='--')
plt.axvline(dt_bin_edges_sh[i_min_neg],color='r',linestyle='--')
plt.axvline(dt_bin_edges_sh[i_max_neg],color='r',linestyle='--')
plt.xlabel('Time (ns)')
plt.ylabel('Number of events')
plt.title('Singles TOF distribution, all channels')
plt.legend(['N','G'])
plt.yscale('log')
plt.show()
"""
Explanation: I also need to find the time ranges for the negative sum.
End of explanation
"""
np.sum(singles_hist[[0],:,i_min:i_max])
"""
Explanation: Sum over indices
For this calculation, I am only working with neutron events, so selected 0 in the first dimension of singles_hist.
For now, I will use all detector pairs. Start with the positive sum.
End of explanation
"""
np.sum(singles_hist[[0],:,i_min_neg:i_max_neg])
"""
Explanation: Now the negative sum
End of explanation
"""
dict_det_to_index[4]
ch_1 = 10
ch_2 = 11
print(np.sum(singles_hist[[0],dict_det_to_index[ch_1],i_min:i_max]))
print(np.sum(singles_hist[[0],dict_det_to_index[ch_2],i_min:i_max]))
"""
Explanation: For now I am going to limit this to a single detector pair, since at the moment I only envision occasions where I have to calculate the sum for a specific pair and therefore the sums for each detector.
End of explanation
"""
help(bicorr.calc_n_sum_br)
bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, 1)
"""
Explanation: Functionalize it
I am going to perform the background subtraction and functionalize this to bicorr.calc_n_sum_br.
End of explanation
"""
singles_rates = np.zeros((num_dets,2))
# Dimension 0: Detector index
# Dimension 1: 0- n_sum_br
# 1- n_sum_br_err
for det in detList:
det_i = dict_det_to_index[det]
singles_rates[det_i,:] = bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, det_i)
plt.errorbar(detList, singles_rates[:,0], yerr=singles_rates[:,1], fmt='.')
plt.xlabel('Detector channel')
plt.ylabel('Singles count rate')
plt.title('Singles count rate for each detector (errorbars shown)')
plt.show()
"""
Explanation: Loop through all of the detectors and calculate it. Populate an array with these values
End of explanation
"""
singles_df = pd.DataFrame.from_dict(dict_index_to_det,orient='index',dtype=np.int8).rename(columns={0:'ch'})
chIgnore = [1,17,33]
singles_df = singles_df[~singles_df['ch'].isin(chIgnore)].copy()
singles_df['Sp']= 0.0
singles_df['Sn']= 0.0
singles_df['Sd']= 0.0
singles_df['Sd_err'] = 0.0
singles_df.head()
for index in singles_df.index.values:
Sp, Sn, Sd, Sd_err = bicorr.calc_n_sum_br(singles_hist, dt_bin_edges_sh, index, emin=emin, emax=emax)
singles_df.loc[index,'Sp'] = Sp
singles_df.loc[index,'Sn'] = Sn
singles_df.loc[index,'Sd'] = Sd
singles_df.loc[index,'Sd_err'] = Sd_err
singles_df.head()
"""
Explanation: Store in pandas dataframe, singles_df
Follow the same method as in nn_sum_and_br_subtraction.ipynb. I am going to store the results in a pandas dataframe.
Columns:
Channel number
Sp - Singles counts, positive
Sn - Singles counts, negative
Sd - Singles counts, br-subtracted
Sd_err - Singles counts, br-subtracted, err
End of explanation
"""
bicorr_plot.Sd_vs_angle_all(singles_df)
"""
Explanation: Now use the plotting functions I developed.
End of explanation
"""
|
svdwulp/da-programming-1 | week_03_oefeningen_uitwerkingen.ipynb | gpl-2.0 | # 1.1
getallen_a = []
# 1.2
for i in range(2, 11, 2):
getallen_a.append(i)
# 1.3
print("Lijst getallen_a:", getallen_a)
print("Lengte of aantal elementen:", len(getallen_a))
print("Getal op plek 0:", getallen_a[0])
print("Getal op plek 3:", getallen_a[3])
print("Getal op plek -1:", getallen_a[-1])
# 1.4
getallen_a.append(14.0)
getallen_a.insert(2, 7)
# 1.5
getallen_b = [3.14, 8, 0]
getallen_a = getallen_a + getallen_b # ook: getallen_a += getallen_b
print("Lijst getallen_a:", getallen_a)
# 1.6
print("Index van 3.14:", getallen_a.index(3.14))
print("Aantal van 8:", getallen_a.count(8))
print("Is 4 element van de lijst:", 4 in getallen_a)
# of: print("Is 4 element van de lijst:", getallen_a.count(4) > 0)
print("Minimum van lijst:", min(getallen_a))
print("Maximum van lijst:", min(getallen_b))
# 1.7
del getallen_a[4]
print("Lijst getallen_a:", getallen_a)
# 1.8
getallen_a.sort()
print("Lijst getallen_a:", getallen_a)
getallen_a.reverse()
print("Lijst getallen_a:", getallen_a)
"""
Explanation: Exercise 1
Create an empty list and name it getallen_a.
Fill this list via a for loop with the numbers 2, 4, 6, 8 and 10.
Check whether the list looks as expected: how many elements does your list contain? Which number is at position 0? At position 3? At position -1?
Append the number 14.0 to the end of your list, and insert the number 7 at position 2.
Create a list getallen_b: [3.14, 8, 0] and replace getallen_a with the concatenation of getallen_a and getallen_b.
In getallen_a: at which position is the number 3.14? And the number 8? How often does the number 8 occur in the list? Does the list contain the number 4? What are the minimum and maximum values occurring in the list?
Remove the number at position 4.
Sort the list and also display it in reverse order.
End of explanation
"""
# 2.1
numbers = [2, 4, 6, 8]
length = 0
for i in numbers:
length += 1
print("Lengte van numbers:", length)
# 2.2
numbers = range(2, 10, 2)
multiplier = float(input("Geef de multiplier op:"))
for number in numbers:
print(number * multiplier)
# of:
numbers = list(range(2, 10, 2))
multiplier = float(input("Geef de multiplier op:"))
for i in range(len(numbers)):
numbers[i] *= multiplier # dit past de inhoud van de list aan!
print("Producten:", numbers)
# 2.3
numbers = [1, 3, 5, 7]
print("Getallen:", numbers)
query = int(input("Welk geheel getal zoek je?"))
is_element = False
for number in numbers:
if number == query:
is_element = True
print("Het getal {} zit in de lijst: {}".format(query, is_element))
# 2.4
numbers_a = [1, 3, 5, 7]
numbers_b = [2, 4, 6, 8]
has_common_elem = False
for number_a in numbers_a:
for number_b in numbers_b:
if number_a == number_b:
has_common_elem = True
print(("De lijsten hebben (minstens) een "
"gemeenschappelijk element:"), has_common_elem)
# 2.5
values = [4, 9, 7]
for value in values:
print("*" * value)
"""
Explanation: Exercise 2
Write a program that prints the number of elements (the length) of a list variable to the screen without using the len() function.
Write a program that multiplies all elements in a list by a user-supplied number and prints the results.
Write a program that determines whether a user-supplied integer is an element of a list. Do not use the in operator or the index() or count() functions.
Write a program that can determine whether two lists have (at least) one element in common. Use a nested (double) for loop to determine this.
Write a program that can print a 'histogram' for a list of integers. For the list [4, 9, 7], for example, the following should be printed:
```
****
*********
*******
```
End of explanation
"""
# 3.2
spam = [
"rolex", "replica", "korting",
"klik", "korting", "viagra",
"korting", "politiek", "krediet",
]
ham = [
"politiek", "bepaalt", "korting",
"lariekoek", "in", "politiek",
"klik", "politiek", "verslag",
"journalist", "bespeelt", "politiek",
"politiek", "amsterdam", "stagneert",
]
message = "rolex korting amsterdam"
words = message.split()
P_words_spam = 1.0
P_words_ham = 1.0
for word in words:
if word in spam:
P_words_spam *= spam.count(word) / len(spam)
if word in ham:
P_words_ham *= ham.count(word) / len(ham)
P_spam = len(spam) / (len(spam) + len(ham))
P_ham = len(ham) / (len(spam) + len(ham))
P_message_spam = ((P_words_spam * P_spam) /
((P_words_spam * P_spam) + (P_words_ham * P_ham)))
print("P(M|Spam) = {:.4f}".format(P_message_spam))
"""
Explanation: Exercise 3
During the lecture, (naive) Bayesian spam detection was discussed. Two examples were covered, and in both cases the probability of spam was determined for a message of a single word. But most e-mails consist of more words.
If we make the (unfortunately somewhat optimistic) assumption that words in e-mails occur independently of one another, we can use the equation discussed (Bayes' theorem) to make statements about the probability that an e-mail with multiple words is spam.
Given the training data as used in the lecture (see below), compute by hand the probability for the message M = "rolex korting amsterdam", assuming mutual independence of the words in a message.
Spam messages: rolex replica korting, klik korting viagra, korting politiek krediet
Ham messages: politiek bepaalt korting, lariekoek in politiek, klik politiek verslag, journalist bespeelt politiek, politiek amsterdam stagneert
Write a program that can compute the probability of spam for messages of multiple words (at least 1).
You can easily split a message into words in Python:
python
message = "rolex korting amsterdam"
words = message.split() # split creates a list of the elements
# after splitting on whitespace
Hint: start by expressing $P(M|Spam)$ as $P(W_1, W_2, W_3|Spam) = P(W_1|Spam) \cdot P(W_2|Spam) \cdot P(W_3|Spam)$ and likewise for $P(M|Ham)$.
3.1
$$
\begin{align}
P(Spam|M) &= \scriptsize{\frac{P(M|Spam) \cdot P(Spam)}
{P(M|Spam) \cdot P(Spam) + P(M|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{P(W_1, W_2, W_3|Spam) \cdot P(Spam)}
{P(W_1, W_2, W_3|Spam) \cdot P(Spam) + P(W_1, W_2, W_3|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{P(W_1|Spam) \cdot \ldots \cdot P(W_3|Spam) \cdot P(Spam)}
{P(W_1|Spam) \cdot \ldots \cdot P(W_3|Spam) \cdot P(Spam) + P(W_1|Ham) \cdot \ldots \cdot P(W_3|Ham) \cdot P(Ham)}} \
&= \scriptsize{\frac{1/9 \cdot 3/9 \cdot 1 \cdot 9/24}
{1/9 \cdot 3/9 \cdot 1 \cdot 9/24 + 1 \cdot 1/15 \cdot 1/15 \cdot 15/24}} \
&= \scriptsize{\frac{1/72}{1/72 + 1/360}} = \scriptsize{\frac{5}{6}}
\end{align}
$$
End of explanation
"""
|
ixkael/AstroHackWeek2015 | day1/day1_ecosystem.ipynb | gpl-2.0 | from __future__ import print_function
import math
import numpy as np
"""
Explanation: Orienting Yourself
Image: @jakevdp
How to install packages using conda
If you're using anaconda, you probably already have most (if not all) of these installed. If you installed miniconda:
conda install numpy
Conda also has channels which allows anybody to distribute their own conda packages. There is an "astropy" channel for AstroPy affiliated packages. You can do:
conda install -c astropy astroml
To check if a package is available on conda:
conda search numpy
How to install packages using pip
Many smaller packages are not available via the conda package manager. For these, use pip:
pip install --no-deps corner
Why prefer conda?
conda is an actual package manager that will take care of resolving dependencies optimally.
NumPy
End of explanation
"""
def add_one(x):
return [xi + 1 for xi in x]
x = list(range(1000000))
%timeit add_one(x)
"""
Explanation: If you use Python for any amount of time, you'll quickly find that there are some things it is not so good at.
In particular, performing repeated operations via loops is one of its weaknesses.
For example, in pure Python:
End of explanation
"""
x = np.arange(1000000)
%timeit np.add(x, 1)
"""
Explanation: Using numpy we would do:
End of explanation
"""
# Point coordinates
x = np.random.rand(100000)
y = np.random.rand(100000)
# calculate distance from origin
%%timeit
dist = np.empty(len(x))
for i in range(len(x)):
dist[i] = math.sqrt(x[i]**2 + y[i]**2)
%%timeit
dist = np.sqrt(x**2 + y**2)
"""
Explanation: Why is pure Python so slow?
Image: @jakevdp
Operations in NumPy are faster than Python functions involving loops, because
The data type can be checked just once
The looping then happens in compiled code
Using NumPy efficiently
The name of the game is moving all array-oriented code into vectorized NumPy operations.
End of explanation
"""
x = np.arange(10)**2
x
# difference between adjacent elements
x[1:] - x[:-1]
# by the way, this is basically the implementation of `np.ediff1d`
np.ediff1d??
"""
Explanation: Aside: How many arrays are created in the above cell?
Sometimes you have to get a little creative to "vectorize" things:
End of explanation
"""
x = np.arange(5)
y = np.arange(1, 6)
x + y
"""
Explanation: Some interesting properties of numpy functions
Functions that operate element-wise on arrays are known as universal functions ("UFuncs").
UFuncs have some methods built-in, which allow for some very interesting, flexible, and fast operations:
End of explanation
"""
np.add(x, y)
"""
Explanation: All operators (like +) actually call an underlying numpy function: in this case np.add:
End of explanation
"""
np.add.accumulate(x)
np.multiply.accumulate(x)
np.multiply.accumulate(y)
np.add.identity
np.multiply.identity
np.add.outer(x, x)
np.multiply.outer(x, x)
"""
Explanation: These ufuncs have some interesting and useful properties:
End of explanation
"""
z = np.arange(10, dtype=np.float64).reshape((2, 5))
z
np.sum(z)
# alternate spelling:
z.sum()
np.mean(z)
np.min(z), np.max(z)
# could also use ufunc
np.add.reduce(z, axis=0)
np.add.reduce(x)
# equivalent to sum:
np.sum(z, axis=0)
"""
Explanation: numpy aggregates
Aggregates are functions that take an array and return a smaller-dimension array.
End of explanation
"""
x = np.arange(15)
x
x[0:5]
x[0:10:2] # with a stride
x[10:0:-2] # reversed
"""
Explanation: Indexing
Slice indexing
End of explanation
"""
y = x[0:10:2]
y[0] = 100. # modify y
y
# x is modified:
x
"""
Explanation: This sort of indexing does not make a copy:
End of explanation
"""
x = np.arange(15)
y = x[[1, 2, 4]]
y
y[0] = 100
y
# x is not modified
x
"""
Explanation: Indexing with indices
End of explanation
"""
x = np.arange(5)
x
mask = np.array([True, True, False, True, False])
x[mask]
# creates a copy
y = x[mask]
y[0] = 100.
print(y)
print(x)
"""
Explanation: Indexing with booleans
End of explanation
"""
x = np.array([1, 2, 3, -999, 2, 4, -999])
"""
Explanation: How do you remember which type of indexing creates a copy?
NumPy arrays are stored as a chunk of data and a set of strides in each dimension.
Boolean and arbitrary indices cannot be represented this way, so numpy must make a copy.
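One way to see this concretely is to inspect the strides attribute; a quick sketch (the exact byte counts assume 8-byte integers):

```python
import numpy as np

x = np.arange(12, dtype=np.int64).reshape(3, 4)
print(x.strides)               # (32, 8): bytes to step to the next row / next column
y = x[::2, ::2]                # a slice just doubles the strides -- no data is copied
print(y.strides)               # (64, 16)
print(np.shares_memory(x, y))  # True: y is a view into x's data
```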
More on masking
All indexing operations also work in assigning to an array. Here we demonstrate assignment with booleans.
For example, imagine you have an array of data where negative values indicate some kind of error.
End of explanation
"""
for i in range(len(x)):
if x[i] < 0:
x[i] = 0
x
x = np.array([1, 2, 3, -999, 2, 4, -999])
mask = (x < 0)
mask
"""
Explanation: How might you clean this array, setting all negative values to, say, zero?
End of explanation
"""
x[mask] = 0
x
"""
Explanation: And the mask can be used directly to set the value you desire:
End of explanation
"""
x = np.array([1, 2, 3, -999, 2, 4, -999])
x[x < 0] = 0
x
# additional boolean operations: invert
x = np.array([1, 2, 3, -999, 2, 4, -999])
x[~(x < 0)] = 0
x
x = np.array([1, 2, 3, -999, 2, 4, -999])
x[(x < 0) | (x > 3)] = 0
x
"""
Explanation: Often you'll see this done in a separate step:
End of explanation
"""
x = np.arange(4)
x
x + 3
"""
Explanation: Broadcasting
End of explanation
"""
x = np.array([[0, 0, 0],
[10, 10, 10],
[20, 20, 20],
[30, 30, 30]])
y = np.array([0, 1, 2])
print("x shape:", x.shape)
print("y shape: ", y.shape)
# If x and y are different dimensions, shape is padded on left with 1s
# before broadcasting.
x + y
"""
Explanation:
End of explanation
"""
x = np.array([[0],
[10],
[20],
[30]])
y = np.array([0, 1, 2])
print("x shape:", x.shape)
print("y shape: ", y.shape)
x + y
"""
Explanation:
End of explanation
"""
np.random.seed(0)
X = np.random.rand(5000)
X[np.random.randint(0, 5000, size=500)] = np.nan # 10% missing
X = X.reshape((1000, 5)) # 1000 points in 5 dimensions
# 1. Compute the number of points (rows) with no missing values, using `np.any` or `np.all`.
# 2. Clean the array, leaving only rows with no missing values
# 3. Compute the whitened version of the array using np.mean and np.std.
"""
Explanation: Broadcasting rules:
If the two arrays differ in their number of dimensions, the shape of the array with fewer dimensions is padded with ones on its leading (left) side.
If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
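A small sketch of these rules in action, including the error case from rule 3:

```python
import numpy as np

a = np.ones((4, 3))
b = np.arange(3)          # shape (3,): padded to (1, 3), then stretched to (4, 3)
print((a + b).shape)      # (4, 3)

c = np.arange(4)          # shape (4,): padded to (1, 4) -- 4 != 3 and neither is 1
try:
    a + c
except ValueError as err:
    print("cannot broadcast:", err)
```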
Mini exercises
Assume you have $N$ points in $D$ dimensions, represented by an array of shape (N, D), where there are some missing values scattered throughout the points.
Count the number of points (rows) with no missing values, using np.any or np.all.
Clean the array of the points with missing values.
Construct the matrix M, the centered and normalized version of the X array: $$ M_{ij} = (X_{ij} - \mu_j) / \sigma_j $$ using np.mean and np.std. This is one version of whitening the array.
End of explanation
"""
print(np.random.__doc__)
"""
Explanation: What else is in NumPy?
numpy.random: Random number generation
numpy.linalg: Some linear algebra routines
numpy.fft: Fast Fourier Transform
End of explanation
"""
# contents of scipy:
import scipy
print(scipy.__doc__)
"""
Explanation: SciPy
Interestingly, scipy predates numpy by more than half a decade (circa 1999), even though it is built on top of numpy.
Originally "scipack", a collection of wrappers for Fortran NetLib libraries.
End of explanation
"""
from astropy import coordinates as coords
from astropy import units as u
ra = 360. * np.random.rand(100)
dec = -90. + 180. * np.random.rand(100)
print("RA:", ra[:5], "...")
print("Dec:", dec[:5], "...")
c = coords.SkyCoord(ra, dec, unit=(u.deg, u.deg))
c
# convert to galactic
g = c.galactic
g
# access longitude or latitude
g.l
type(g.l)
# get underlying numpy array
g.l.degree
"""
Explanation: Note the overlap:
numpy.fft / scipy.fft
numpy.linalg / scipy.linalg
Why the duplication? The scipy routines are based on Fortran libraries, whereas numpy is C-only.
AstroPy
Project started in 2011, in response to increasing duplication in Python astronomy ecosystem.
Initially brought together several existing Python packages:
astropy.io.fits (formerly pyfits)
astropy.io.ascii (formerly asciitable)
astropy.wcs (formerly pywcs)
astropy.cosmology (formerly cosmolopy)
Now also contains:
astropy.table (Table class and table operations)
astropy.units (Quantity: arrays with units)
astropy.coordinates (astronomical coordinate systems)
astropy.time (UTC, UT, MJD, etc)
astropy.stats (additional stats not in scipy)
astropy.modeling (simple model fitting)
astropy.vo (virtual observatory)
astropy.io.votable
astropy.analytic_functions
Example: Coordinates
End of explanation
"""
UWashington-Astro300/Astro300-W17 | 05_Python_StringsAndStuff.ipynb | mit
import numpy as np
from astropy import units as u
"""
Explanation: Strings and Stuff in Python
End of explanation
"""
s = 'spam'
s,len(s),s[0],s[0:2]
s[::-1]
"""
Explanation: Strings are just arrays of characters
End of explanation
"""
s = 'spam'
e = "eggs"
s + e
s + " " + e
4 * (s + " ") + e
print(4 * (s + " ") + s + " and\n" + e) # use \n to get a newline with the print function
"""
Explanation: Arithmetic with Strings
End of explanation
"""
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"sp" < "spam"
"spam" < "eggs"
"""
Explanation: You can compare strings
End of explanation
"""
print("This resistor has a value of 100 k\U000003A9")
Ω = 1e3
Ω + np.pi
"""
Explanation: Python supports Unicode characters
You can enter unicode characters directly from the keyboard (depends on your operating system), or you can use the Unicode escape sequence for the character's code point.
A list of Unicode code points can be found here.
For example the code point for the greek capital omega is U+03A9, so you can create the character with \U000003A9
End of explanation
"""
radio_active = "\U00002622"
wink = "\U0001F609"
print(radio_active + wink)
"""
Explanation: Emoji are unicode characters, so you can use them as well (not all OSs will show all characters!)
End of explanation
"""
☢ = 2.345
☢ ** 2
"""
Explanation: Emoji cannot be used as variable names (at least not yet ...)
End of explanation
"""
n = 4
print("I would like " + n + " orders of spam")
print("I would like " + str(n) + " orders of spam")
"""
Explanation: Watch out for variable types!
End of explanation
"""
A = 42
B = 1.23456
C = 1.23456e10
D = 'Forty Two'
"I like the number {0:d}".format(A)
"I like the number {0:s}".format(D)
"The number {0:f} is fine, but not a cool as {1:d}".format(B,A)
"The number {0:.3f} is fine, but not a cool as {1:d}".format(C,A) # 3 places after decimal
"The number {0:.3e} is fine, but not a cool as {1:d}".format(C,A) # sci notation
"{0:g} and {1:g} are the same format but different results".format(B,C)
"""
Explanation: Use explicit formatting to avoid these errors
Python string formatting has the form:
{Variable Index: Format Type} .format(Variable)
End of explanation
"""
"Representation of the number {1:s} - dec: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(A,D)
"""
Explanation: Nice trick to convert a number to a different base
End of explanation
"""
NH_D = 34.47 * u.AU
"The New Horizons spacecraft is {0:.1f} from the Sun".format(NH_D)
"The New Horizons spacecraft is at a distance of {0.value:.1f} in the units of {0.unit:s} from the Sun".format(NH_D)
"""
Explanation: Formatting works with units
End of explanation
"""
from astropy.table import QTable
planet_table = QTable.read('Planets.csv', format='ascii.csv')
for Idx,Val in enumerate(planet_table['Name']):
a = planet_table['a'][Idx] * u.AU
if (a < 3.0 * u.AU):
Place = "Inner"
else:
Place = "Outer"
S = "The planet {0:s}, at a distance of {1:.1f}, is in the {2:s} solar system".format(Val,a,Place)
print(S)
"""
Explanation: Formatting is way better than piecing strings together
End of explanation
"""
line = "My hovercraft is full of eels"
"""
Explanation: Working with strings
End of explanation
"""
line.replace('eels', 'wheels')
"""
Explanation: Find and Replace
End of explanation
"""
line.center(100)
line.ljust(100)
line.rjust(100, "*")
line2 = " My hovercraft is full of eels "
line2.strip()
line3 = "*$*$*$*$*$*$*$*$My hovercraft is full of eels*$*$*$*$"
line3.strip('*$')
line3.lstrip('*$'), line3.rstrip('*$')
"""
Explanation: Justification and Cleaning
End of explanation
"""
line.split()
'_*_'.join(line.split())
' '.join(line.split()[::-1])
"""
Explanation: Splitting and Joining
End of explanation
"""
anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
anotherline.upper()
anotherline.lower()
anotherline.title()
anotherline.capitalize()
anotherline.swapcase()
"""
Explanation: Line Formatting
End of explanation
"""
import re
myline = "This is a test, this in only a test"
print(myline)
"""
Explanation: Regular Expression in Python (re)
End of explanation
"""
regex1 = r"test"
match1 = re.search(regex1, myline)
match1
myline[10:14]
match3 = re.findall(regex1, myline)
match3
"""
Explanation: Raw strings begin with a special prefix (r) and signal Python not to interpret backslashes and other special metacharacters in the string, allowing you to pass them through directly to the regular expression engine.
End of explanation
"""
mynewline = re.sub(regex1, "*TEST*", myline)
mynewline
"""
Explanation: One of the useful things about regular expressions in Python is using them to search and replace parts of a string (re.sub)
End of explanation
"""
golf_file = open("golf_00").read().splitlines()
golf_file
for i in golf_file:
print(i)
def regex_test_list(mylist, myregex):
for line in mylist:
mytest = re.search(myregex, line)
if (mytest):
print(line + " YES")
else:
print(line + " NOPE")
regex = r"one"
regex_test_list(golf_file, regex)
regex = r"t|n"
regex_test_list(golf_file, regex)
"""
Explanation: RegEx Golf!
End of explanation
"""
import os
os.chdir("./MyData")
my_data_dir = os.listdir()
my_data_dir
for file in my_data_dir:
if file.endswith(".txt"):
print(file)
for file in my_data_dir:
if file.endswith(".txt"):
print(os.path.abspath(file))
"""
Explanation: Working with Files and Directories (OS agnostic)
The os package allows you to do operating system stuff without worrying about what system you are using
End of explanation
"""
import glob
my_files = glob.glob('02_*.fits')
my_files
for file in my_files:
file_size = os.stat(file).st_size
out_string = "The file {0} as a size of {1} bytes".format(file,file_size)
print(out_string)
"""
Explanation: You can also find files with glob
End of explanation
"""
AtmaMani/pyChakras | udemy_ml_bootcamp/Python-for-Data-Visualization/Matplotlib/Matplotlib Concepts Lecture.ipynb | mit
import matplotlib.pyplot as plt
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Matplotlib Overview Lecture
Introduction
Matplotlib is the "grandfather" library of data visualization with Python. It was created by John Hunter. He created it to try to replicate MatLab's (another programming language) plotting capabilities in Python. So if you happen to be familiar with matlab, matplotlib will feel natural to you.
It is an excellent 2D and 3D graphics library for generating scientific figures.
Some of the major Pros of Matplotlib are:
Generally easy to get started for simple plots
Support for custom labels and texts
Great control of every element in a figure
High-quality output in many formats
Very customizable in general
Matplotlib allows you to create reproducible figures programmatically. Let's learn how to use it! Before continuing this lecture, I encourage you just to explore the official Matplotlib web page: http://matplotlib.org/
Installation
You'll need to install matplotlib first with either:
conda install matplotlib
or
pip install matplotlib
Importing
Import the matplotlib.pyplot module under the name plt (the tidy way):
End of explanation
"""
%matplotlib inline
"""
Explanation: You'll also need to use this line to see plots in the notebook:
End of explanation
"""
import numpy as np
x = np.linspace(0, 5, 11)
y = x ** 2
x
y
"""
Explanation: That line is only for jupyter notebooks, if you are using another editor, you'll use: plt.show() at the end of all your plotting commands to have the figure pop up in another window.
Basic Example
Let's walk through a very simple example using two numpy arrays:
Example
Let's walk through a very simple example using two numpy arrays. You can also use lists, but most likely you'll be passing numpy arrays or pandas columns (which essentially also behave like arrays).
The data we want to plot:
End of explanation
"""
plt.plot(x, y, 'r') # 'r' is the color red
plt.xlabel('X Axis Title Here')
plt.ylabel('Y Axis Title Here')
plt.title('String Title Here')
plt.show()
"""
Explanation: Basic Matplotlib Commands
We can create a very simple line plot using the following ( I encourage you to pause and use Shift+Tab along the way to check out the document strings for the functions we are using).
End of explanation
"""
# plt.subplot(nrows, ncols, plot_number)
plt.subplot(1,2,1)
plt.plot(x, y, 'r--') # More on color options later
plt.subplot(1,2,2)
plt.plot(y, x, 'g*-');
"""
Explanation: Creating Multiplots on Same Canvas
End of explanation
"""
# Create Figure (empty canvas)
fig = plt.figure()
# Add set of axes to figure
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)
# Plot on that set of axes
axes.plot(x, y, 'b')
axes.set_xlabel('Set X Label') # Notice the use of set_ to begin methods
axes.set_ylabel('Set y Label')
axes.set_title('Set Title')
"""
Explanation: Matplotlib Object Oriented Method
Now that we've seen the basics, let's break it all down with a more formal introduction of Matplotlib's Object Oriented API. This means we will instantiate figure objects and then call methods or attributes from that object.
Introduction to the Object Oriented Method
The main idea in using the more formal Object Oriented method is to create figure objects and then just call methods or attributes off of that object. This approach is nicer when dealing with a canvas that has multiple plots on it.
To begin we create a figure instance. Then we can add axes to that figure:
End of explanation
"""
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes
axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
# Larger Figure Axes 1
axes1.plot(x, y, 'b')
axes1.set_xlabel('X_label_axes1')
axes1.set_ylabel('Y_label_axes1')
axes1.set_title('Axes 1 Title')
# Insert Figure Axes 2
axes2.plot(y, x, 'r')
axes2.set_xlabel('X_label_axes2')
axes2.set_ylabel('Y_label_axes2')
axes2.set_title('Axes 2 Title');
"""
Explanation: Code is a little more complicated, but the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure:
End of explanation
"""
# Use similar to plt.figure() except use tuple unpacking to grab fig and axes
fig, axes = plt.subplots()
# Now use the axes object to add stuff to plot
axes.plot(x, y, 'r')
axes.set_xlabel('x')
axes.set_ylabel('y')
axes.set_title('title');
"""
Explanation: subplots()
The plt.subplots() object will act as a more automatic axis manager.
Basic use cases:
End of explanation
"""
# Empty canvas of 1 by 2 subplots
fig, axes = plt.subplots(nrows=1, ncols=2)
# Axes is an array of axes to plot on
axes
"""
Explanation: Then you can specify the number of rows and columns when creating the subplots() object:
End of explanation
"""
for ax in axes:
ax.plot(x, y, 'b')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
# Display the figure object
fig
"""
Explanation: We can iterate through this array:
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=2)
for ax in axes:
ax.plot(x, y, 'g')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
fig
plt.tight_layout()
"""
Explanation: A common issue with matplotlib is overlapping subplots or figures. We can use the fig.tight_layout() or plt.tight_layout() method, which automatically adjusts the positions of the axes on the figure canvas so that there is no overlapping content:
End of explanation
"""
fig = plt.figure(figsize=(8,4), dpi=100)
"""
Explanation: Figure size, aspect ratio and DPI
Matplotlib allows the aspect ratio, DPI and figure size to be specified when the Figure object is created. You can use the figsize and dpi keyword arguments.
* figsize is a tuple of the width and height of the figure in inches
* dpi is the dots-per-inch (pixel per inch).
For example:
End of explanation
"""
fig, axes = plt.subplots(figsize=(12,3))
axes.plot(x, y, 'r')
axes.set_xlabel('x')
axes.set_ylabel('y')
axes.set_title('title');
"""
Explanation: The same arguments can also be passed to layout managers, such as the subplots function:
End of explanation
"""
fig.savefig("filename.png")
"""
Explanation: Saving figures
Matplotlib can generate high-quality output in a number of formats, including PNG, JPG, EPS, SVG, PGF and PDF.
To save a figure to a file we can use the savefig method in the Figure class:
End of explanation
"""
fig.savefig("filename.png", dpi=200)
"""
Explanation: Here we can also optionally specify the DPI and choose between different output formats:
End of explanation
"""
ax.set_title("title");
"""
Explanation: Legends, labels and titles
Now that we have covered the basics of how to create a figure canvas and add axes instances to the canvas, let's look at how decorate a figure with titles, axis labels, and legends.
Figure titles
A title can be added to each axis instance in a figure. To set the title, use the set_title method in the axes instance:
End of explanation
"""
ax.set_xlabel("x")
ax.set_ylabel("y");
"""
Explanation: Axis labels
Similarly, with the methods set_xlabel and set_ylabel, we can set the labels of the X and Y axes:
End of explanation
"""
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x, x**2, label="x**2")
ax.plot(x, x**3, label="x**3")
ax.legend()
"""
Explanation: Legends
You can use the label="label text" keyword argument when plots or other objects are added to the figure, and then using the legend method without arguments to add the legend to the figure:
End of explanation
"""
# Lots of options....
ax.legend(loc=1) # upper right corner
ax.legend(loc=2) # upper left corner
ax.legend(loc=3) # lower left corner
ax.legend(loc=4) # lower right corner
# .. many more options are available
# Most common to choose
ax.legend(loc=0) # let matplotlib decide the optimal location
fig
"""
Explanation: Notice how our legend overlaps some of the actual plot!
The legend function takes an optional keyword argument loc that can be used to specify where in the figure the legend is to be drawn. The allowed values of loc are numerical codes for the various places the legend can be drawn. See the documentation page for details. Some of the most common loc values are:
End of explanation
"""
# MATLAB style line color and style
fig, ax = plt.subplots()
ax.plot(x, x**2, 'b.-') # blue line with dots
ax.plot(x, x**3, 'g--') # green dashed line
"""
Explanation: Setting colors, linewidths, linetypes
Matplotlib gives you a lot of options for customizing colors, linewidths, and linetypes.
There is the basic MATLAB-like syntax (which I would suggest you avoid using, for clarity's sake):
Colors with MatLab like syntax
With matplotlib, we can define the colors of lines and other graphical elements in a number of ways. First of all, we can use the MATLAB-like syntax where 'b' means blue, 'g' means green, etc. The MATLAB API for selecting line styles is also supported: for example, 'b.-' means a blue line with dots:
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(x, x+1, color="blue", alpha=0.5) # half-transparant
ax.plot(x, x+2, color="#8B008B") # RGB hex code
ax.plot(x, x+3, color="#FF8C00") # RGB hex code
"""
Explanation: Colors with the color= parameter
We can also define colors by their names or RGB hex codes and optionally provide an alpha value using the color and alpha keyword arguments. Alpha indicates opacity.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x, x+1, color="red", linewidth=0.25)
ax.plot(x, x+2, color="red", linewidth=0.50)
ax.plot(x, x+3, color="red", linewidth=1.00)
ax.plot(x, x+4, color="red", linewidth=2.00)
# possible linestyle options: '-', '--', '-.', ':', 'steps'
ax.plot(x, x+5, color="green", lw=3, linestyle='-')
ax.plot(x, x+6, color="green", lw=3, ls='-.')
ax.plot(x, x+7, color="green", lw=3, ls=':')
# custom dash
line, = ax.plot(x, x+8, color="black", lw=1.50)
line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...
# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...
ax.plot(x, x+ 9, color="blue", lw=3, ls='-', marker='+')
ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o')
ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s')
ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1')
# marker size and color
ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2)
ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4)
ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green");
"""
Explanation: Line and marker styles
To change the line width, we can use the linewidth or lw keyword argument. The line style can be selected using the linestyle or ls keyword arguments:
End of explanation
"""
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
"""
Explanation: Control over axis appearance
In this section we will look at controlling axis sizing properties in a matplotlib figure.
Plot range
We can configure the ranges of the axes using the set_ylim and set_xlim methods in the axis object, or axis('tight') for automatically getting "tightly fitted" axes ranges:
End of explanation
"""
plt.scatter(x,y)
from random import sample
data = sample(range(1, 1000), 100)
plt.hist(data)
data = [np.random.normal(0, std, 100) for std in range(1, 4)]
# rectangular box plot
plt.boxplot(data,vert=True,patch_artist=True);
"""
Explanation: Special Plot Types
There are many specialized plots we can create, such as barplots, histograms, scatter plots, and much more. Most of these types of plots we will actually create using seaborn, a statistical plotting library for Python. But here are a few examples of these types of plots:
End of explanation
"""
GoogleCloudPlatform/vertex-ai-samples | notebooks/official/explainable_ai/sdk_custom_image_classification_batch_explain.ipynb | apache-2.0
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex SDK: Custom training image classification model for batch prediction with explainabilty
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_image_classification_batch_explain.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom image classification model for batch prediction with explanation.
Dataset
The dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which of ten classes an image belongs to: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Objective
In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a batch prediction with explanations on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train the TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Make a batch prediction with explanations.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
if os.getenv("IS_TESTING"):
! apt-get update && apt-get install -y python3-opencv-headless
! apt-get install -y libgl1-mesa-dev
! pip3 install --upgrade opencv-python-headless $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
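Before running the `gsutil mb` command, you can sanity-check the generated name locally. This is a minimal sketch with helper names of our own (`make_bucket_name`, `is_valid_bucket_name`); the regex covers only the most common Cloud Storage naming rules (3-63 characters, lowercase letters, digits, dashes and underscores, starting and ending with a letter or digit) and ignores the less common ones (dots, reserved prefixes):

```python
import re

def make_bucket_name(project_id: str, timestamp: str) -> str:
    # Append a timestamp so the bucket name is unique per session.
    return "gs://" + project_id + "-aip-" + timestamp

def is_valid_bucket_name(uri: str) -> bool:
    # Check only the most common rules: 3-63 chars, lowercase letters,
    # digits, dashes and underscores, starting/ending with a letter or digit.
    name_part = uri[len("gs://"):] if uri.startswith("gs://") else uri
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9_-]{1,61}[a-z0-9]", name_part))

name = make_bucket_name("my-project", "20240101120000")
print(name, is_valid_bucket_name(name))
```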
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 NVIDIA Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
Machine type
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9GB of memory per vCPU
vCPUs: number of vCPUs [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
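The memory figures above can be turned into a quick estimate of how much RAM a given machine type string provides. The helper name and table are ours, covering only the three N1 families listed:

```python
# Approximate GB of RAM per vCPU for the N1 families listed above.
GB_PER_VCPU = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type: str) -> float:
    # "n1-standard-4" -> family "n1-standard", 4 vCPUs.
    family, vcpus = machine_type.rsplit("-", 1)
    return GB_PER_VCPU[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))  # 15.0
```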
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for CIFAR10.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when referring to it in the worker pool specification, you replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
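The slash-to-dot convention mentioned above can be sketched as a small helper. The function name is ours; it simply strips the package root and the .py suffix:

```python
def script_to_module(script_path: str, package_root: str = "custom") -> str:
    # Convert e.g. "custom/trainer/task.py" to the dotted module
    # name used in the worker pool specification: "trainer.task".
    rel = script_path
    prefix = package_root + "/"
    if rel.startswith(prefix):
        rel = rel[len(prefix):]
    if rel.endswith(".py"):
        rel = rel[:-3]
    return rel.replace("/", ".")

print(script_to_module("custom/trainer/task.py"))  # trainer.task
```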
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
"""
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary:
Gets the directory in which to save the model artifacts from the command line (--model-dir) and, if not specified, from the environment variable AIP_MODEL_DIR.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps.
Saves the trained model (save(args.model_dir)) to the specified model directory.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
"""
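The same packaging can be done in pure Python with the standard tarfile module. This sketch builds a throwaway copy of the layout in a temporary directory, so it does not touch your real custom/ folder:

```python
import os
import tarfile
import tempfile

# Recreate a minimal package tree and archive it the way the shell commands do.
with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "custom", "trainer")
    os.makedirs(pkg)
    open(os.path.join(pkg, "task.py"), "w").close()
    archive = os.path.join(tmp, "custom.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(os.path.join(tmp, "custom"), arcname="custom")
    with tarfile.open(archive, "r:gz") as tar:
        names = sorted(tar.getnames())
print(names)
```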
job = aip.CustomTrainingJob(
display_name="cifar10_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
"""
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
"""
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
"""
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
"""
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
"""
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
"""
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
"""
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
"""
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
"""
Explanation: Evaluate the model
Now find out how good the model is.
Load evaluation data
You will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why you load it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
y_test:<br/>
2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
End of explanation
"""
local_model.evaluate(x_test, y_test)
"""
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
"""
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(32, 32))
# convert_image_dtype above already rescaled the pixels to [0, 1]; dividing
# by 255 again would double-normalize the input.
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model,
model_path_to_deploy,
signatures={
"serving_default": serving_fn,
# Required for XAI
"xai_preprocess": preprocess_fn,
"xai_model": m_call,
},
)
"""
Explanation: Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
- io.decode_jpeg - Decompresses the JPG image, which is returned as a TensorFlow tensor with three channels (RGB).
- image.convert_image_dtype - Converts integer pixel values to float 32 and rescales (normalizes) them to between 0 and 1.
- image.resize - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (m_call).
XAI Signatures
When the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default.
For XAI image models, you need to save two additional signatures from the serving function:
xai_preprocess: The preprocessing function in the serving function.
xai_model: The concrete function for calling the model.
End of explanation
"""
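The round trip described above (raw bytes, to base64 string, back to raw bytes) can be demonstrated with the standard base64 module, using a synthetic payload in place of real JPEG bytes:

```python
import base64

# Base64 makes arbitrary binary content safe to embed in a JSON request body.
raw = bytes(range(256))                       # stand-in for compressed JPEG bytes
b64str = base64.b64encode(raw).decode("utf-8")
restored = base64.b64decode(b64str)
print(len(b64str), restored == raw)
```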
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
"""
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
"""
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
"""
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to an Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algorithm is applied to your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, this would be 3072 (32*32*3).
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, so does the compute time.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, so does the compute time.
In the next code cell, set the variable XAI to which explainabilty algorithm you will use on your custom model.
End of explanation
"""
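The difference between exact and sampled Shapley path counts is easy to quantify for a CIFAR10-sized input; the numbers below follow directly from the M! versus M * path_count figures above:

```python
import math

M = 32 * 32 * 3      # features (pixel values) in one CIFAR10 image
path_count = 10      # matches the sampled_shapley_attribution setting above

exact_paths = math.factorial(M)   # paths for exact Shapley values: M!
sampled_paths = M * path_count    # paths with the sampled approximation

print(sampled_paths, exact_paths > 10 ** 9000)
```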
random_baseline = np.random.rand(32, 32, 3)
# Each baseline entry must be a scalar "number_value", so flatten the image.
input_baselines = [{"number_value": float(x)} for x in random_baseline.flatten()]
INPUT_METADATA = {"input_tensor_name": CONCRETE_INPUT, "modality": "image"}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"image": input_metadata}, outputs={"class": output_metadata}
)
"""
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.
y = f(x)
Consider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.
y, z = f(x)
The dictionary format for outputs is:
{ "outputs": { "[your_display_name]":
"output_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the output to explain. A common example is "probability".<br/>
- "output_tensor_name": The key/value field to identify the output layer to explain. <br/>
- [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.
</blockquote>
inputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain. Assume that this model is deployed for A/B testing, where a holds the data items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of it) for the features, and not b, since it does not contribute to the prediction.
y = f(a,b)
The minimum dictionary format for inputs is:
{ "inputs": { "[your_display_name]":
"input_tensor_name": [layer]
}
}
<blockquote>
- [your_display_name]: A human readable name you assign to the input to explain. A common example is "features".<br/>
- "input_tensor_name": The key/value field to identify the input layer for the feature attribution. <br/>
- [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.
</blockquote>
Since the inputs to the model are images, you can specify the following additional field as a reporting/visualization aid:
<blockquote>
- "modality": "image": Indicates the field values are image data.
</blockquote>
End of explanation
"""
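The dictionary formats above can be assembled as plain Python, independent of the SDK wrapper classes. The tensor names here are illustrative placeholders, not the real layer names of your model:

```python
# Assemble the minimum explanation metadata dictionaries in the format above.
input_tensor = "numpy_inputs"   # placeholder input tensor name
output_tensor = "output_0"      # placeholder output tensor name

explanation_metadata = {
    "inputs": {"image": {"input_tensor_name": input_tensor, "modality": "image"}},
    "outputs": {"class": {"output_tensor_name": output_tensor}},
}
print(explanation_metadata["inputs"]["image"]["modality"])
```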
model = aip.Model.upload(
display_name="cifar10_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
"""
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
"""
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
"""
Explanation: Get test items
You will use examples out of the test (holdout) portion of the dataset as test items.
End of explanation
"""
import cv2
# OpenCV expects BGR channel order, so convert from RGB before writing.
cv2.imwrite("tmp1.jpg", cv2.cvtColor((test_image_1 * 255).astype(np.uint8), cv2.COLOR_RGB2BGR))
cv2.imwrite("tmp2.jpg", cv2.cvtColor((test_image_2 * 255).astype(np.uint8), cv2.COLOR_RGB2BGR))
"""
Explanation: Prepare the request content
You are going to send the CIFAR10 images as compressed JPG image, instead of the raw uncompressed bytes:
cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.
Denormalize the image data from [0,1) range back to [0,255).
Convert the 32-bit floating point values to 8-bit unsigned integers.
End of explanation
"""
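The denormalization step can be sketched with NumPy on a synthetic image (this assumes NumPy is available, as elsewhere in this notebook):

```python
import numpy as np

# Denormalize a [0, 1] float image back to 8-bit pixels, mirroring the step above.
img = np.random.rand(32, 32, 3).astype(np.float32)
pixels = (img * 255).astype(np.uint8)
print(pixels.dtype, pixels.shape)
```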
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
"""
Explanation: Copy test item(s)
For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
End of explanation
"""
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can only be in JSONL format. In a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
input_name: the name of the input layer of the underlying model.
'b64': A key that indicates the content is base64 encoded.
content: The compressed JPG image bytes as a base64 encoded string.
Each instance in the prediction request is a dictionary entry of the form:
{serving_input: {'b64': content}}
To pass the image data to the prediction service you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.
tf.io.read_file: Read the compressed JPG images into memory as raw bytes.
base64.b64encode: Encode the raw bytes into a base64 encoded string.
End of explanation
"""
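The JSONL construction above can be exercised in memory with synthetic payloads. The input name here is a placeholder, not your model's real serving input:

```python
import base64
import io
import json

# Build two JSONL request lines in memory, mirroring the file written above.
example_input_name = "bytes_inputs"       # placeholder for the serving input name
payloads = [b"\xff\xd8fake-jpeg-1", b"\xff\xd8fake-jpeg-2"]

buf = io.StringIO()
for raw_bytes in payloads:
    b64 = base64.b64encode(raw_bytes).decode("utf-8")
    buf.write(json.dumps({example_input_name: {"b64": b64}}) + "\n")

jsonl_lines = buf.getvalue().splitlines()
roundtrip = base64.b64decode(json.loads(jsonl_lines[0])[example_input_name]["b64"])
print(len(jsonl_lines), roundtrip == payloads[0])
```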
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="cifar10_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
generate_explanation=True,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
machine_type: The type of machine to use for training.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
explanation_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("explanation"):
explanation_results.append(blob.name)
tags = list()
for explanation_result in explanation_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{explanation_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
"""
Explanation: Get the explanations
Next, get the explanation results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more explanation requests in a CSV format:
CSV header + predicted_label
CSV row + explanation, per prediction request
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
uber/pyro | tutorial/source/boosting_bbvi.ipynb | apache-2.0 | import os
from collections import defaultdict
from functools import partial
import numpy as np
import pyro
import pyro.distributions as dist
import scipy.stats
import torch
import torch.distributions.constraints as constraints
from matplotlib import pyplot
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from pyro.poutine import block, replay, trace
"""
Explanation: Boosting Black Box Variational Inference
Introduction
This tutorial demonstrates how to implement boosting black box Variational Inference [1] in Pyro. In boosting Variational Inference [2], we approximate a target distribution with an iteratively selected mixture of densities. In cases where a single density provided by regular Variational Inference doesn't adequately approximate a target density, boosting VI offers a simple way of getting more complex approximations. We show how this can be implemented as a relatively straightforward extension of Pyro's SVI.
Contents
Theoretical Background
Variational Inference
Boosting Black Box Variational Inference
BBBVI in Pyro
The Model
The Guide
The Relbo
The Approximation
The Greedy Algorithm
Theoretical Background <a class="anchor" id="theoretical-background"></a>
Variational Inference <a class="anchor" id="variational-inference"></a>
For an introduction to regular Variational Inference, we recommend having a look at the tutorial on SVI in Pyro and this excellent review [3].
Briefly, Variational Inference allows us to find approximations of probability densities which are intractable to compute analytically. For instance, one might have observed variables $\textbf{x}$, latent variables $\textbf{z}$ and a joint distribution $p(\textbf{x}, \textbf{z})$. One can then use Variational Inference to approximate $p(\textbf{z}|\textbf{x})$. To do so, one first chooses a set of tractable densities, a variational family, and then tries to find the element of this set which most closely approximates the target distribution $p(\textbf{z}|\textbf{x})$.
This approximating density is found by maximizing the Evidence Lower BOund (ELBO):
$$ \mathbb{E}_q[\log p(\mathbf{x}, \mathbf{z})] - \mathbb{E}_q[\log q(\mathbf{z})]$$
where $q(\mathbf{z})$ is the approximating density.
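As a concrete toy illustration (not part of this tutorial's model — the prior, likelihood, observation, and guide below are all hypothetical choices), the following sketch estimates the ELBO by Monte Carlo for a conjugate Gaussian case where the log evidence $\log p(\mathbf{x})$ has a closed form. The ELBO never exceeds the log evidence, and the gap is exactly the KL divergence between $q(\mathbf{z})$ and the true posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative choices): prior z ~ N(0, 1), likelihood x | z ~ N(z, 1),
# one observation x = 2.0, and an intentionally mismatched guide q(z) = N(0, 1).
x = 2.0
mu_q, sigma_q = 0.0, 1.0

def log_normal(v, mean, std):
    return -0.5 * np.log(2 * np.pi * std**2) - (v - mean) ** 2 / (2 * std**2)

z = rng.normal(mu_q, sigma_q, size=200_000)                      # samples from q
elbo = np.mean(log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)   # E_q[log p(x, z)]
               - log_normal(z, mu_q, sigma_q))                   # - E_q[log q(z)]

# Closed-form log evidence: marginally x ~ N(0, sqrt(2)).
log_evidence = log_normal(x, 0.0, np.sqrt(2.0))
# The gap log p(x) - ELBO equals KL(q || posterior), roughly 1.153 here.
print(f"ELBO = {elbo:.3f} <= log p(x) = {log_evidence:.3f}")
```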
Boosting Black Box Variational Inference <a class="anchor" id="bbbvi"></a>
In boosting black box Variational inference (BBBVI), we approximate the target density with a mixture of densities from the variational family:
$$q^t(\mathbf{z}) = \sum_{i=1}^t \gamma_i s_i(\mathbf{z})$$
$$\text{where} \sum_{i=1}^t \gamma_i =1$$
and $s_t(\mathbf{z})$ are elements of the variational family.
The components of the approximation are selected greedily by maximising the so-called Residual ELBO (RELBO) with respect to the next component $s_{t+1}(\mathbf{z})$:
$$\mathbb{E}_s[\log p(\mathbf{x},\mathbf{z})] - \lambda \mathbb{E}_s[\log s(\mathbf{z})] - \mathbb{E}_s[\log q^t(\mathbf{z})]$$
Where the first two terms are the same as in the ELBO and the last term is the cross entropy between the next component $s_{t+1}(\mathbf{z})$ and the current approximation $q^t(\mathbf{z})$.
It's called black box Variational Inference because this optimization does not have to be tailored to the variational family being used. By setting $\lambda$ (the regularization factor of the entropy term) to 1, standard SVI methods can be used to compute $\mathbb{E}_s[\log p(\mathbf{x}, \mathbf{z})] - \lambda \mathbb{E}_s[\log s(\mathbf{z})]$. See the section on the implementation of the RELBO below for how we compute the term $- \mathbb{E}_s[\log q^t(\mathbf{z})]$. Importantly, we do not need to make any additional assumptions about the variational family to ensure that this algorithm converges.
In [1], a number of different ways of finding the mixture weights $\gamma_t$ are suggested, ranging from fixed step sizes based on the iteration to solving the optimisation problem of finding $\gamma_t$ that will minimise the RELBO. Here, we used the fixed step size method.
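The fixed step-size scheme is simple to state on its own: at step $t$ the new component receives weight $\gamma_t = 2/(t+1)$ and all previous weights are rescaled by $(1 - \gamma_t)$, so the weights always sum to one. A minimal sketch in plain Python (for the pure scheme — the implementation in this tutorial deviates slightly by overriding the second weight to 0.5):

```python
weights = [1.0]  # weight of the initial component

for t in range(1, 4):                    # three greedy steps, as an illustration
    gamma = 2 / (t + 1)                  # fixed step size
    # Rescale the existing weights and append the new component's weight.
    weights = [w * (1 - gamma) for w in weights] + [gamma]
    print(f"step {t}: weights = {[round(w, 4) for w in weights]}")
```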
For more details on the theory behind boosting black box variational inference, please refer to [1].
BBBVI in Pyro <a class="anchor" id="bbbvi-pyro"></a>
To implement boosting black box variational inference in Pyro, we need to consider the following points:
1. The approximation components $s_{t}(\mathbf{z})$ (guides).
2. The RELBO.
3. The approximation itself $q^t(\mathbf{z})$.
4. Using Pyro's SVI to find new components of the approximation.
We will illustrate these points by looking at simple example: approximating a bimodal posterior.
End of explanation
"""
def model(data):
prior_loc = torch.tensor([0.])
prior_scale = torch.tensor([5.])
z = pyro.sample('z', dist.Normal(prior_loc, prior_scale))
scale = torch.tensor([0.1])
with pyro.plate('data', len(data)):
pyro.sample('x', dist.Normal(z*z, scale), obs=data)
"""
Explanation: The Model <a class="anchor" id="the-model"></a>
Boosting BBVI is particularly useful when we want to approximate multimodal distributions. In this tutorial, we'll thus consider the following model:
$$\mathbf{z} \sim \mathcal{N}(0,5)$$
$$\mathbf{x} \sim \mathcal{N}(\mathbf{z}^2, 0.1)$$
Given a set of i.i.d. observations $\text{data} \sim \mathcal{N}(4, 0.1)$, we thus expect $p(\mathbf{z}|\mathbf{x})$ to be a bimodal distribution with modes around $-2$ and $2$.
In Pyro, this model takes the following shape:
End of explanation
"""
def guide(data, index):
scale_q = pyro.param('scale_{}'.format(index), torch.tensor([1.0]), constraints.positive)
loc_q = pyro.param('loc_{}'.format(index), torch.tensor([0.0]))
pyro.sample("z", dist.Normal(loc_q, scale_q))
"""
Explanation: The Guide <a class="anchor" id="the-guide"></a>
Next, we specify the guide which in our case will make up the components of our mixture. Recall that in Pyro the guide needs to take the same arguments as the model which is why our guide function also takes the data as an input.
We also need to make sure that every pyro.sample() statement from the model has a matching pyro.sample() statement in the guide. In our case, we include z in both the model and the guide.
In contrast to regular SVI, our guide takes an additional argument: index. Having this argument allows us to easily create new guides in each iteration of the greedy algorithm. Specifically, we make use of partial() from the functools library to create guides which only take data as an argument. The statement partial(guide, index=t) creates a guide that will take only data as an input and which has trainable parameters scale_t and loc_t.
Choosing our variational distribution to be a Normal distribution parameterized by $loc_t$ and $scale_t$ we get the following guide:
End of explanation
"""
def relbo(model, guide, *args, **kwargs):
approximation = kwargs.pop('approximation')
# We first compute the elbo, but record a guide trace for use below.
traced_guide = trace(guide)
elbo = pyro.infer.Trace_ELBO(max_plate_nesting=1)
loss_fn = elbo.differentiable_loss(model, traced_guide, *args, **kwargs)
# We do not want to update parameters of previously fitted components
# and thus block all parameters in the approximation apart from z.
guide_trace = traced_guide.trace
replayed_approximation = trace(replay(block(approximation, expose=['z']), guide_trace))
approximation_trace = replayed_approximation.get_trace(*args, **kwargs)
relbo = -loss_fn - approximation_trace.log_prob_sum()
# By convention, the negative (R)ELBO is returned.
return -relbo
"""
Explanation: The RELBO <a class="anchor" id="the-relbo"></a>
We implement the RELBO as a function which can be passed to Pyro's SVI class in place of ELBO to find the approximation components $s_t(z)$. Recall that the RELBO has the following form:
$$\mathbb{E}_s[\log p(\mathbf{x},\mathbf{z})] - \lambda \mathbb{E}_s[\log s(\mathbf{z})] - \mathbb{E}_s[\log q^t(\mathbf{z})]$$
Conveniently, this is very similar to the regular ELBO which allows us to reuse Pyro's existing ELBO. Specifically, we compute
$$\mathbb{E}_s[\log p(x,z)] - \lambda \mathbb{E}_s[\log s]$$
using Pyro's Trace_ELBO and then compute
$$ - \mathbb{E}_s[\log q^t]$$
using Poutine. For more information on how this works, we recommend going through the Pyro tutorials on Poutine and custom SVI objectives.
End of explanation
"""
def approximation(data, components, weights):
assignment = pyro.sample('assignment', dist.Categorical(weights))
result = components[assignment](data)
return result
"""
Explanation: The Approximation <a class="anchor" id="the-approximation"></a>
Our implementation of the approximation $q^t(z) = \sum_{i=1}^t \gamma_i s_i(z)$ consists of a list of components, i.e. the guides from the greedy selection steps, and a list containing the mixture weights of the components. To sample from the approximation, we thus first sample a component according to the mixture weights. In a second step, we draw a sample from the corresponding component.
As with the guide, we use partial(approximation, components=components, weights=weights) to get an approximation function with the same signature as the model.
End of explanation
"""
initial_approximation = partial(guide, index=0)
components = [initial_approximation]
weights = torch.tensor([1.])
wrapped_approximation = partial(approximation, components=components, weights=weights)
"""
Explanation: The Greedy Algorithm <a class="anchor" id="the-greedy-algorithm"></a>
We now have all the necessary parts to implement the greedy algorithm. First, we initialize the approximation:
End of explanation
"""
# clear the param store in case we're in a REPL
pyro.clear_param_store()
# Sample observations from a Normal distribution with loc 4 and scale 0.1
n = torch.distributions.Normal(torch.tensor([4.0]), torch.tensor([0.1]))
data = n.sample((100,))
#T=2
smoke_test = ('CI' in os.environ)
n_steps = 2 if smoke_test else 12000
pyro.set_rng_seed(2)
n_iterations = 2
locs = [0]
scales = [0]
for t in range(1, n_iterations + 1):
# Create guide that only takes data as argument
wrapped_guide = partial(guide, index=t)
losses = []
adam_params = {"lr": 0.01, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
# Pass our custom RELBO to SVI as the loss function.
svi = SVI(model, wrapped_guide, optimizer, loss=relbo)
for step in range(n_steps):
# Pass the existing approximation to SVI.
loss = svi.step(data, approximation=wrapped_approximation)
losses.append(loss)
if step % 100 == 0:
print('.', end=' ')
# Update the list of approximation components.
components.append(wrapped_guide)
# Set new mixture weight.
new_weight = 2 / (t + 1)
# In this specific case, we set the mixture weight of the second component to 0.5.
if t == 2:
new_weight = 0.5
weights = weights * (1-new_weight)
weights = torch.cat((weights, torch.tensor([new_weight])))
# Update the approximation
wrapped_approximation = partial(approximation, components=components, weights=weights)
print('Parameters of component {}:'.format(t))
scale = pyro.param("scale_{}".format(t)).item()
scales.append(scale)
loc = pyro.param("loc_{}".format(t)).item()
locs.append(loc)
print('loc = {}'.format(loc))
print('scale = {}'.format(scale))
# Plot the resulting approximation
X = np.arange(-10, 10, 0.1)
pyplot.figure(figsize=(10, 4), dpi=100).set_facecolor('white')
total_approximation = np.zeros(X.shape)
for i in range(1, n_iterations + 1):
    Y = weights[i].item() * scipy.stats.norm.pdf(X, locs[i], scales[i])
pyplot.plot(X, Y)
total_approximation += Y
pyplot.plot(X, total_approximation)
pyplot.plot(data.data.numpy(), np.zeros(len(data)), 'k*')
pyplot.title('Approximation of posterior over z')
pyplot.ylabel('probability density')
pyplot.show()
"""
Explanation: Then we iteratively find the $T$ components of the approximation by maximizing the RELBO at every step:
End of explanation
"""
import os
from collections import defaultdict
from functools import partial
import numpy as np
import pyro
import pyro.distributions as dist
import scipy.stats
import torch
import torch.distributions.constraints as constraints
from matplotlib import pyplot
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from pyro.poutine import block, replay, trace
# this is for running the notebook in our testing framework
n_steps = 2 if smoke_test else 12000
pyro.set_rng_seed(2)
# clear the param store in case we're in a REPL
pyro.clear_param_store()
# Sample observations from a Normal distribution with loc 4 and scale 0.1
n = torch.distributions.Normal(torch.tensor([4.0]), torch.tensor([0.1]))
data = n.sample((100,))
def guide(data, index):
scale_q = pyro.param('scale_{}'.format(index), torch.tensor([1.0]), constraints.positive)
loc_q = pyro.param('loc_{}'.format(index), torch.tensor([0.0]))
pyro.sample("z", dist.Normal(loc_q, scale_q))
def model(data):
prior_loc = torch.tensor([0.])
prior_scale = torch.tensor([5.])
z = pyro.sample('z', dist.Normal(prior_loc, prior_scale))
scale = torch.tensor([0.1])
with pyro.plate('data', len(data)):
pyro.sample('x', dist.Normal(z*z, scale), obs=data)
def relbo(model, guide, *args, **kwargs):
approximation = kwargs.pop('approximation')
# We first compute the elbo, but record a guide trace for use below.
traced_guide = trace(guide)
elbo = pyro.infer.Trace_ELBO(max_plate_nesting=1)
loss_fn = elbo.differentiable_loss(model, traced_guide, *args, **kwargs)
# We do not want to update parameters of previously fitted components
# and thus block all parameters in the approximation apart from z.
guide_trace = traced_guide.trace
replayed_approximation = trace(replay(block(approximation, expose=['z']), guide_trace))
approximation_trace = replayed_approximation.get_trace(*args, **kwargs)
relbo = -loss_fn - approximation_trace.log_prob_sum()
# By convention, the negative (R)ELBO is returned.
return -relbo
def approximation(data, components, weights):
assignment = pyro.sample('assignment', dist.Categorical(weights))
result = components[assignment](data)
return result
def boosting_bbvi():
# T=2
n_iterations = 2
initial_approximation = partial(guide, index=0)
components = [initial_approximation]
weights = torch.tensor([1.])
wrapped_approximation = partial(approximation, components=components, weights=weights)
locs = [0]
scales = [0]
for t in range(1, n_iterations + 1):
# Create guide that only takes data as argument
wrapped_guide = partial(guide, index=t)
losses = []
adam_params = {"lr": 0.01, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
# Pass our custom RELBO to SVI as the loss function.
svi = SVI(model, wrapped_guide, optimizer, loss=relbo)
for step in range(n_steps):
# Pass the existing approximation to SVI.
loss = svi.step(data, approximation=wrapped_approximation)
losses.append(loss)
if step % 100 == 0:
print('.', end=' ')
# Update the list of approximation components.
components.append(wrapped_guide)
# Set new mixture weight.
new_weight = 2 / (t + 1)
# In this specific case, we set the mixture weight of the second component to 0.5.
if t == 2:
new_weight = 0.5
weights = weights * (1-new_weight)
weights = torch.cat((weights, torch.tensor([new_weight])))
# Update the approximation
wrapped_approximation = partial(approximation, components=components, weights=weights)
print('Parameters of component {}:'.format(t))
scale = pyro.param("scale_{}".format(t)).item()
scales.append(scale)
loc = pyro.param("loc_{}".format(t)).item()
locs.append(loc)
print('loc = {}'.format(loc))
print('scale = {}'.format(scale))
# Plot the resulting approximation
X = np.arange(-10, 10, 0.1)
pyplot.figure(figsize=(10, 4), dpi=100).set_facecolor('white')
total_approximation = np.zeros(X.shape)
for i in range(1, n_iterations + 1):
        Y = weights[i].item() * scipy.stats.norm.pdf(X, locs[i], scales[i])
pyplot.plot(X, Y)
total_approximation += Y
pyplot.plot(X, total_approximation)
pyplot.plot(data.data.numpy(), np.zeros(len(data)), 'k*')
pyplot.title('Approximation of posterior over z')
pyplot.ylabel('probability density')
pyplot.show()
if __name__ == '__main__':
boosting_bbvi()
"""
Explanation: We see that boosting BBVI successfully approximates the bimodal posterior distribution with modes around -2 and +2.
The Complete Implementation
Putting all the components together, we then get the complete implementation of boosting black box Variational Inference:
End of explanation
"""
|
sassoftware/sas-viya-programming | python/AX2016/Machine Learning Algorithm Comparison.ipynb | apache-2.0 | import pandas as pd
import swat
from matplotlib import pyplot as plt
from swat.render import render_html
%matplotlib inline
"""
Explanation: Machine Learning Algorithm Comparison
This example illustrates fitting and comparing several Machine Learning algorithms for classifying the binary target in the
HMEQ data set.
The data set used for this pipeline is from a financial services company that offers a home equity line of credit.
The company has extended several thousand lines of credit in the past, and many of these accepted
applicants have defaulted on their loans. Using demographic and financial variables, the company wants to build a model to classify whether an applicant will default.
The target variable "BAD" indicates whether an applicant defaulted on the home equity line of credit.
The steps include:
PREPARE AND EXPLORE
a) Check data is loaded into CAS
<br>
PERFORM SUPERVISED LEARNING
a) Fit a model using a Random Forest
b) Fit a model using Gradient Boosting
c) Fit a model using a Neural Network
d) Fit a model using a Support Vector Machine
<br>
EVALUATE AND IMPLEMENT
a) Score the data
b) Assess model performance
c) Generate ROC and Lift charts
Import packages
End of explanation
"""
indata = "hmeq"
indata_ext = ".sas7bdat"
"""
Explanation: CAS Server connection details
End of explanation
"""
sess = swat.CAS("cas01", 19640)
"""
Explanation: Start CAS session
End of explanation
"""
sess.loadactionset(actionset="dataStep")
sess.loadactionset(actionset="dataPreprocess")
sess.loadactionset(actionset="cardinality")
sess.loadactionset(actionset="sampling")
sess.loadactionset(actionset="decisionTree")
sess.loadactionset(actionset="neuralNet")
sess.loadactionset(actionset="svm")
sess.loadactionset(actionset="astore")
sess.loadactionset(actionset="percentile")
"""
Explanation: Import action sets
End of explanation
"""
if not sess.table.tableExists(table=indata).exists:
tbl = sess.upload_file(indata + indata_ext, casout={"name": indata})
sess.tableinfo()
"""
Explanation: Load data into CAS if needed
End of explanation
"""
tbl.head()
"""
Explanation: Explore and Impute missing values
View first 5 observations from the data set
End of explanation
"""
tbl.columninfo()
tbl.shape
tbl.describe()
"""
Explanation: View table column information
End of explanation
"""
sess.cardinality.summarize(
table={"name":indata},
cardinality={"name":"data_card", "replace":True}
)
tbl_data_card = sess.CASTable('data_card').query('_NMISS_ > 0')
print("Data Summary".center(80, '-')) # print title
df_data_card = tbl_data_card.to_frame(fetchvars=['_VARNAME_', '_NMISS_', '_NOBS_'])
df_data_card['PERCENT_MISSING'] = (df_data_card['_NMISS_'] / df_data_card['_NOBS_']) * 100
tbl_forplot = pd.Series(list(df_data_card['PERCENT_MISSING']), index=list(df_data_card['_VARNAME_']))
ax = tbl_forplot.plot(
kind='bar',
title='Percentage of Missing Values',
figsize=(11,5)
)
ax.set_ylabel('Percent Missing')
ax.set_xlabel('Variable Names');
"""
Explanation: Explore data and plot missing values
End of explanation
"""
sess.dataPreprocess.transform(
table={"name":indata},
casOut={"name":"hmeq_prepped_pr", "replace":True},
copyAllVars=True,
outVarsNameGlobalPrefix="IM",
requestPackages=[
{"impute":{"method":"MEAN"}, "inputs":{"clage"}},
{"impute":{"method":"MEDIAN"}, "inputs":{"delinq", "debtinc", "yoj", "ninq"}},
{"impute":{"method":"MODE"}, "inputs":{"job", "reason"}}
]
)
"""
Explanation: Impute missing values
End of explanation
"""
sess.dataStep.runcode(code = """
data hmeq_prepped;
set hmeq_prepped_pr;
if missing(DEBTINC) then DEBTINC_IND = 1;
else DEBTINC_IND = 0;
run;
""")
tbl_tmp = sess.CASTable("hmeq_prepped")
tbl_tmp.head()
"""
Explanation: Create new indicator variable for missing DEBTINC values
End of explanation
"""
target = "bad"
class_inputs = ["im_reason", "im_job", "debtinc_ind"]
class_vars = [target] + class_inputs
interval_inputs = ["im_clage", "clno", "im_debtinc", "loan", "mortdue", "value", "im_yoj", "im_ninq", "derog", "im_delinq"]
all_inputs = interval_inputs + class_inputs
"""
Explanation: Set variables for input data
End of explanation
"""
sess.sampling.stratified(
table={"name":"hmeq_prepped", "groupBy":"bad"},
output={"casOut":{"name":"hmeq_part", "replace":True}, "copyVars":"ALL"},
samppct=70,
partind=True
)
"""
Explanation: Partition data into Training and Validation
End of explanation
"""
sess.help(actionset="decisionTree");
rf = sess.decisionTree.forestTrain(
table={
"name":"hmeq_part",
"where":"strip(put(_partind_, best.))='1'"
},
inputs=all_inputs,
nominals=class_vars,
target="bad",
nTree=20,
nBins=20,
leafSize=5,
maxLevel=21,
crit="GAINRATIO",
varImp=True,
seed=100,
OOB=True,
vote="PROB",
casOut={"name":"forest_model", "replace":True}
)
# Output model statistics
render_html(rf)
# Score
sess.decisionTree.forestScore(
table={"name":"hmeq_part"},
modelTable={"name":"forest_model"},
casOut={"name":"_scored_rf", "replace":True},
copyVars={"bad", "_partind_"},
vote="PROB"
)
# Create p_bad0 and p_bad1 as _rf_predp_ is the probability of event in _rf_predname_
sess.dataStep.runCode(
code="""data _scored_rf; set _scored_rf; if _rf_predname_=1 then do; p_bad1=_rf_predp_;
p_bad0=1-p_bad1; end; if _rf_predname_=0 then do; p_bad0=_rf_predp_; p_bad1=1-p_bad0; end; run;"""
)
list(rf.keys())
rf['DTreeVarImpInfo']
"""
Explanation: Random Forest
End of explanation
"""
gb = sess.decisionTree.gbtreeTrain(
table={
"name":"hmeq_part",
"where":"strip(put(_partind_, best.))='1'"
},
inputs=all_inputs,
nominals=class_vars,
target="bad",
nTree=10,
nBins=20,
maxLevel=6,
varImp=True,
casOut={"name":"gb_model", "replace":True}
)
# Output model statistics
render_html(gb)
# Score
sess.decisionTree.gbtreeScore(
table={"name":"hmeq_part"},
modelTable={"name":"gb_model"},
casOut={"name":"_scored_gb", "replace":True},
copyVars={"bad", "_partind_"}
)
# Create p_bad0 and p_bad1 as _gbt_predp_ is the probability of event in _gbt_predname_
sess.dataStep.runCode(
code="""data _scored_gb; set _scored_gb; if _gbt_predname_=1 then do; p_bad1=_gbt_predp_;
p_bad0=1-p_bad1; end; if _gbt_predname_=0 then do; p_bad0=_gbt_predp_; p_bad1=1-p_bad0; end; run;"""
)
"""
Explanation: Gradient Boosting
End of explanation
"""
nn = sess.neuralNet.annTrain(
table={
"name":"hmeq_part",
"where":"strip(put(_partind_, best.))='1'"
},
validTable={
"name":"hmeq_part",
"where":"strip(put(_partind_, best.))='0'"
},
inputs=all_inputs,
nominals=class_vars,
target="bad",
hiddens={2},
acts={"TANH"},
combs={"LINEAR"},
targetAct="SOFTMAX",
errorFunc="ENTROPY",
std="MIDRANGE",
randDist="UNIFORM",
scaleInit=1,
nloOpts={
"optmlOpt":{"maxIters":250, "fConv":1e-10},
"lbfgsOpt":{"numCorrections":6},
"printOpt":{"printLevel":"printDetail"},
"validate":{"frequency":1}
},
casOut={"name":"nnet_model", "replace":True}
)
# Output model statistics
render_html(nn)
# Score
sess.neuralNet.annScore(
table={"name":"hmeq_part"},
modelTable={"name":"nnet_model"},
casOut={"name":"_scored_nn", "replace":True},
copyVars={"bad", "_partind_"}
)
# Create p_bad0 and p_bad1 as _nn_predp_ is the probability of event in _nn_predname_
sess.dataStep.runCode(
code="""data _scored_nn; set _scored_nn; if _nn_predname_=1 then do; p_bad1=_nn_predp_;
p_bad0=1-p_bad1; end; if _nn_predname_=0 then do; p_bad0=_nn_predp_; p_bad1=1-p_bad0; end; run;"""
)
"""
Explanation: Neural Network
End of explanation
"""
sv = sess.svm.svmTrain(
table={
"name":"hmeq_part",
"where":"_partind_=1"
},
inputs=all_inputs,
nominals=class_vars,
target="bad",
kernel="POLYNOMIAL",
degree=2,
id={"bad", "_partind_"},
savestate={"name":"svm_astore_model", "replace":True}
)
# Output model statistics
render_html(sv)
# Score using ASTORE
render_html(sess.astore.score(
table={"name":"hmeq_part"},
rstore={"name":"svm_astore_model"},
out={"name":"_scored_svm", "replace":True}
))
"""
Explanation: Support Vector Machine
End of explanation
"""
def assess_model(prefix):
return sess.percentile.assess(
table={
"name":"_scored_" + prefix,
"where": "strip(put(_partind_, best.))='0'"
},
inputs=[{"name":"p_bad1"}],
response="bad",
event="1",
pVar={"p_bad0"},
pEvent={"0"}
)
rfAssess=assess_model(prefix="rf")
rf_fitstat =rfAssess.FitStat
rf_rocinfo =rfAssess.ROCInfo
rf_liftinfo=rfAssess.LIFTInfo
gbAssess=assess_model(prefix="gb")
gb_fitstat =gbAssess.FitStat
gb_rocinfo =gbAssess.ROCInfo
gb_liftinfo=gbAssess.LIFTInfo
nnAssess=assess_model(prefix="nn")
nn_fitstat =nnAssess.FitStat
nn_rocinfo =nnAssess.ROCInfo
nn_liftinfo=nnAssess.LIFTInfo
svmAssess=assess_model(prefix="svm")
svm_fitstat =svmAssess.FitStat
svm_rocinfo =svmAssess.ROCInfo
svm_liftinfo=svmAssess.LIFTInfo
"""
Explanation: Assess Models
End of explanation
"""
# Add new variable to indicate type of model
rf_liftinfo["model"]="Forest"
rf_rocinfo["model"]="Forest"
gb_liftinfo["model"]="GradientBoosting"
gb_rocinfo["model"]="GradientBoosting"
nn_liftinfo["model"]="NeuralNetwork"
nn_rocinfo["model"]="NeuralNetwork"
svm_liftinfo["model"]="SVM"
svm_rocinfo["model"]="SVM"
# Append data
all_liftinfo = rf_liftinfo.append(gb_liftinfo, ignore_index=True) \
.append(nn_liftinfo, ignore_index=True) \
.append(svm_liftinfo, ignore_index=True)
all_rocinfo = rf_rocinfo.append(gb_rocinfo, ignore_index=True) \
.append(nn_rocinfo, ignore_index=True) \
.append(svm_rocinfo, ignore_index=True)
"""
Explanation: Create ROC and Lift plots (using Validation data)
Prepare assessment results for plotting
End of explanation
"""
print("AUC (using validation data)".center(80, '-'))
all_rocinfo[["model", "C"]].drop_duplicates(keep="first").sort_values(by="C", ascending=False)
"""
Explanation: Print AUC (Area Under the ROC Curve)
End of explanation
"""
# Draw ROC charts
plt.figure(figsize=(15,4))
for key, grp in all_rocinfo.groupby(["model"]):
plt.plot(grp["FPR"], grp["Sensitivity"], label=key)
plt.plot([0,1], [0,1], "k--")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.grid(True)
plt.legend(loc="best")
plt.title("ROC Curve (using validation data)")
plt.show()
# Draw lift charts
plt.figure(figsize=(15,4))
for key, grp in all_liftinfo.groupby(["model"]):
plt.plot(grp["Depth"], grp["Lift"], label=key)
plt.xlabel("Depth")
plt.ylabel("Lift")
plt.grid(True)
plt.legend(loc="best")
plt.title("Lift Chart (using validation data)")
plt.show()
"""
Explanation: Draw ROC and Lift plots
End of explanation
"""
sess.close()
"""
Explanation: End CAS session
End of explanation
"""
|
gcgruen/homework | data-databases-homework/.ipynb_checkpoints/Homework_3_Gruen-checkpoint.ipynb | mit | from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
"""
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
"""
h3_tags = document.find_all('h3')
h3_tags_count = 0
for tag in h3_tags:
h3_tags_count = h3_tags_count + 1
print(h3_tags_count)
"""
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
"""
# Inspecting the web page with the browser's developer tools shows the information is stored in an <a> tag with the class 'tel'
a_tags = document.find_all('a', {'class':'tel'})
for tag in a_tags:
print(tag.string)
# Note: the list comprehension [tag.string for tag in a_tags] would return a list rather than printing each string
"""
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
"""
search_table = document.find_all('table', {'class': 'widgetlist'})
tables_content = [table('td', {'class': 'wname'}) for table in search_table]
widget_names = []
for table in tables_content:
    for name_cell in table:
        widget_names.append(name_cell.string)
        print(name_cell.string)
"""
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
"""
widgets = []
#STEP 1: Find all tr tags, because that's what tds are grouped by
for tr_tags in document.find_all('tr', {'class': 'winfo'}):
#STEP 2: For each tr_tag in tr_tags, make a dict of its td
tr_dict ={}
for td_tags in tr_tags.find_all('td'):
td_tags_class = td_tags['class']
for tag in td_tags_class:
tr_dict[tag] = td_tags.string
#STEP3: add dicts to list
widgets.append(tr_dict)
widgets
#widgets[5]['partno']
"""
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
"""
#had to rename variables as it kept printing the ones from the cell above...
widgetsN = []
for trN_tags in document.find_all('tr', {'class': 'winfo'}):
trN_dict ={}
for tdN_tags in trN_tags.find_all('td'):
tdN_tags_class = tdN_tags['class']
for tagN in tdN_tags_class:
if tagN == 'price':
sliced_tag_string = tdN_tags.string[1:]
trN_dict[tagN] = float(sliced_tag_string)
elif tagN == 'quantity':
trN_dict[tagN] = int(tdN_tags.string)
else:
trN_dict[tagN] = tdN_tags.string
widgetsN.append(trN_dict)
widgetsN
"""
Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
[{'partno': 'C1-9476',
'price': 2.7,
'quantity': 512,
'widgetname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': 9.36,
'quantity': 967,
'widgetname': 'Widget For Furtiveness'},
... some items omitted ...
{'partno': '5B-941/F',
'price': 13.26,
'quantity': 919,
'widgetname': 'Widget For Cinema'}]
(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)
End of explanation
"""
widget_quantity_list = [element['quantity'] for element in widgetsN]
sum(widget_quantity_list)
"""
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
"""
for widget in widgetsN:
if widget['price'] > 9.30:
print(widget['wname'])
"""
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
"""
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
"""
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
"""
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
"""
for h3_tags in document.find_all('h3'):
if h3_tags.string == 'Hallowed widgets':
hallowed_table = h3_tags.find_next_sibling('table')
for element in hallowed_table.find_all('td', {'class':'partno'}):
print(element.string)
"""
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
"""
category_counts = {}
for x_tags in document.find_all('h3'):
x_table = x_tags.find_next_sibling('table')
tr_info_tags = x_table.find_all('tr', {'class':'winfo'})
category_counts[x_tags.string] = len(tr_info_tags)
category_counts
"""
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation
"""
|
erikdrysdale/erikdrysdale.github.io | _rmd/extra_cancer/cancer_calc.ipynb | mit | import os
import pandas as pd
import numpy as np
import plotnine
from plotnine import *
from matplotlib import cm, colors
from plydata.cat_tools import *
# Load the CSV files
df_cancer = pd.read_csv('1310039401.csv',usecols=['year','number'])
df_cancer.rename(columns={'year':'years','number':'cancer'}, inplace=True)
df_pop = pd.read_csv('1710000501.csv')
df_pop = df_pop.melt('years',None,'grp','population')
# Raw rate per 100K
df_pop_total = df_pop.query('grp=="All ages"').drop(columns='grp').rename(columns={'population':'total'})
df_rate = df_pop_total.merge(df_cancer,how='right',on='years').assign(rate=lambda x: x.cancer/(x.total/100000))
# plot
tmp = df_rate.melt('years',['cancer','rate'],'tt')
tmp.head()
plotnine.options.figure_size = (9, 4)
gg_agg = (ggplot(tmp,aes(x='years',y='value')) + theme_bw() +
geom_point() + geom_line() +
facet_wrap('~tt',scales='free_y',labeller=labeller(tt={'cancer':'Total','rate':'Per 100K'})) +
labs(y='Malignant neoplasm deaths', x='Year') +
ggtitle('Figure 1: Raw Canadian cancer deaths since 2000') +
theme(subplots_adjust={'wspace': 0.15}))
gg_agg
"""
Explanation: Canadian cancer statistics: a mixed story
I recently reviewed Azra Raza's new book The First Cell, whose primary thesis is that cancer research has failed to translate into effective treatments due to a combination of poor incentives and a flawed experimental paradigm. The cancers that Dr. Raza treats, myelodysplastic syndromes (MDS) and their evolutionary successor, the malignancy acute myeloid leukemia (AML), have seen paltry advances in therapeutics over the last 30 years. Despite the hype about "blockbuster" therapies being "game changers" from the "immunotherapy revolution", overall gains from new drugs are usually measured in months for solid tumors. This is hardly surprising given that 95% of cancer drugs fail to show biological or clinical value and never reach the market. After finishing Raza's book, I became curious as to how cancer statistics have evolved in Canada over time. Are fewer people dying from cancer over time, and which cancers have seen the greatest improvements?
Oncologists will anecdotally tell you that the number of cancer patients they treat seems to be rising each year. But the main reason for the phenomena is almost surely the rapidly aging population. I set out to answer three questions:
How has Canadian cancer mortality changed over time?
How many cancer deaths are attributable to Canada's aging population?
Are we getting better at treating cancer in Canada?
It is impossible to determine from the data I have access whether differences over time in cancer mortality rates are due to new therapeutics, better treatment strategies with existing drugs, or changes in environmental exposure. However, if mortality rates are declining consistently across time and before the onset of "breakthrough" therapeutics, this signals that novel therapies are probably not to be credited.
The Canadian Cancer Society (CCS) releases detailed annual reports, and readers may be interested in their work. Unlike the CCS reports, I am not interested in incidence and survival rates. From a statistical perspective, measures like 5-year survival rates can be misleading due to a lead time bias that can be induced by more comprehensive screening.[^1] Also, my analysis does not consider cancer from an epidemiological or public policy perspective. I am solely interested in assessing whether more or fewer people are dying from cancer in Canada after adjusting for demographic changes.
[^1]: Determining the cause of death is not always easy, however, and there is certainly some error in this calculation. Furthermore, since most people that die of cancer are elderly, and elderly patients tend to have comorbidities, it is difficult to disentangle what pathology actually killed a patient. Despite these caveats, the cause of death is still more reliable and consistent across years than other measurements.
(1) Data sources
To calculate the number of cancer deaths each year I used deaths by Malignant neoplasms from StatsCan table 13-10-0394-01: Leading causes of death, total population, by age group. Cancer makes up the plurality of annual deaths in Canada (30%), followed by heart disease. This table can be further decomposed into the type of neoplasm with table 13-10-0142-01: Deaths, by cause, Chapter II: Neoplasms, to break down ICD-10 cause of death codes C00-C97. To make the number of deaths comparable between years, cancer deaths are normalized by the population (per 100K persons) using StatsCan table 17-10-0005-01. Note that a copy of all the csv files needed to replicate the analysis below can be found here.
End of explanation
"""
# Age-specific cancer
df_age = pd.read_csv('1310039401_by_age.csv')
df_age.age = df_age.age.str.split('\\,\\s',expand=True).iloc[:,1]
df_age.age = df_age.age.fillna(method='ffill')
df_age.age = df_age.age.str.replace('\\syears','')
# Age-specific population
di_age = {'0 to 4':'1 to 14', '5 to 9':'1 to 14', '10 to 14':'1 to 14',
'15 to 19':'15 to 24', '20 to 24':'15 to 24',
'25 to 29':'25 to 34', '30 to 34':'25 to 34',
'35 to 39':'35 to 44', '40 to 44':'35 to 44',
'45 to 49':'45 to 54', '50 to 54':'45 to 54',
'55 to 59':'55 to 64', '60 to 64':'55 to 64',
'65 to 69':'65 to 74', '70 to 74':'65 to 74',
'75 to 79':'75 to 84', '80 to 84':'75 to 84',
'85 to 89':'85 and over','90 to 94':'85 and over',
'95 to 99':'85 and over', '100 and over':'85 and over',
'Under 1 year':'1 to 14', '1 to 4':'1 to 14', '90 and over':'85 and over'}
df_pop_sub = df_pop.query('grp!="All ages"').assign(grp=lambda x: x.grp.str.replace('\\syears',''))
df_pop_sub = df_pop_sub.assign(age=lambda x: x.grp.map(di_age)).groupby(['years','age']).population.sum().reset_index()
# Merge
df_age = df_pop_sub.merge(df_age,how='right',on=['years','age'])
df_age = df_age.assign(rate=lambda x: x.number/(x.population/1e5))
df_age = df_age.merge(df_pop_total,'left','years').assign(pop_share=lambda x: x.population / x.total)
tmp = df_age.assign(rate=lambda x: np.log10(x.rate)).melt(['years','age'],['pop_share','rate'],'tt')
# Repeat for index
tmp2 = tmp.groupby(['age','tt']).head(1).drop(columns='years').rename(columns={'value':'bl'})
tmp = tmp.merge(tmp2, 'left', ['age','tt']).assign(idx=lambda x: x.value/x.bl*100)
lblz = list(tmp.age.unique())
n_age = len(lblz)
mat = cm.get_cmap('viridis',n_age).colors
colz = []
for ii in range(n_age):
colz.append(colors.to_hex(mat[ii]))
plotnine.options.figure_size = (8, 3.5)
di_age_rate = {'pop_share':'Population share (%)', 'rate':'log10(Death rate (per 100K))'}
gg_age_rate = (ggplot(tmp, aes(x='years',y='value',color='age',group='age')) + theme_bw() +
geom_line() + labs(y='Value',x='Year') +
facet_wrap('~tt',scales='free_y',labeller=labeller(tt=di_age_rate)) +
theme(subplots_adjust={'wspace':0.1}) +
scale_color_manual(name='Age',values=colz,labels=lblz) +
ggtitle("Figure 2A: Trend in population structure age-specific death rates"))
print(gg_age_rate)
"""
Explanation: The number of annual cancer deaths has been rising with a linear trend from 2000 to 2018, as Figure 1 shows. A staggering 1.36 million Canadians have died of cancer in this 19 year time frame. The second panel in the figure reveals that the death rate per capita has also been increasing, although in a less smooth fashion. Overall, per capita death rates are about 4% higher in 2018 than they were in 2000. These data are not encouraging. However, it is important to remember that these plots do not take into account the change in the composition of the Canadian population (i.e. our society is aging). As Raza points out in her book, the cancers that emerge in older patients are much more genetically complex due to a higher mutational load. Successful treatments are therefore likely to be, at best, temporary since the cancer cells that survive will be resistant to the treatment, and there is more variation among the cancer cells for natural selection to act on.
(2) Adjusting for age
The goal of this section is to estimate how many people would have died of cancer had the population structure of Canada stayed the same. Even if age-specific cancer mortality rates are declining for all population groups, cancer deaths per capita can still rise if the decline in mortality is offset by a higher share of the population going into the higher death rate age groups. One challenge when dealing with publicly available census data is that the age categories are usually aggregated. The data discussed in section (1) uses 10-year age ranges (e.g. 45-54) for age categories. One subtle effect of an aging population may be to change the distribution within these age bins (e.g. more 54- compared to 45-year-olds over time). I do not, and cannot, account for this. However, the bias of this confounder will be to understate declines in age-specific deaths if the population is generally aging. As will soon be made clear, I show age-adjusted declines in mortality, so this bias would not change the sign of the interpretation.
End of explanation
"""
tmp3 = df_age[df_age.years.isin([2000,2018])].sort_values(['years','age'])
tmp3 = tmp3.assign(rate_d=lambda x: x.rate/x.groupby('years').rate.shift(1))
tmp3 = tmp3[tmp3.rate_d.notnull()].assign(years=lambda x: pd.Categorical(x.years))
plotnine.options.figure_size = (4.5, 3)
gg_rate_delta = (ggplot(tmp3, aes(x='age',y='rate_d',color='years',group='years')) + theme_bw() +
geom_point() + geom_line() +
ggtitle('Figure 2B: Relative cancer deaths to previous age cohort') +
theme(axis_title_x=element_blank(), axis_text_x=element_text(angle=90),
legend_position=(0.5,0.35)) +
labs(y='Relative rate change') +
scale_y_continuous(limits=[0,4],breaks=list(np.arange(0,5,1))) +
scale_color_discrete(name='Years'))
gg_rate_delta
"""
Explanation: Two statistical facts drive the increase in per capita cancer deaths in Canada, as Figure 2A shows below:
The Canadian population is aging
Older Canadians are more likely to die of cancer
The aging of the Canadian population has been modest but sustained over the past 20 years. From 2000 to 2018 the share of population >55 years old increased from 22% to 31%, and the equivalent figure for the 65+ was 12% to 17%. The second panel in Figure 2A shows why, from the perspective of overall cancer death rates, this affect has tremendous consequences for death rates. The cancer death rate grows exponentially by age category. This is why 85+ year olds are a 1000 times more likely than children (1-14) to die of cancer. Every decade increase effectively doubles or triples the age-specific cancer risk. Hence a 75-84 year old is four times more likely than a 55-64 to develop cancer.
End of explanation
"""
plotnine.options.figure_size = (8, 3.5)
gg_age_rate_idx = (ggplot(tmp, aes(x='years',y='idx',color='age',group='age')) + theme_bw() +
geom_line() + labs(y='Value',x='Year') +
facet_wrap('~tt',scales='free_y',labeller=labeller(tt=di_age_rate)) +
theme(subplots_adjust={'wspace':0.12}) +
scale_color_manual(name='Age',values=colz,labels=lblz) +
ggtitle("Figure 2C: Trend in population structure age-specific death rates\nIndexed 100==2000"))
print(gg_age_rate_idx)
"""
Explanation: Figure 2B shows that the exponential trend in age-specific mortality rates has been fairly constant over 20 years, suggesting that the increases in cancer risk are biological. Environmental factors may also be present (e.g. lung cancer takes decades to kill you), but these effects are likely to be consistent over time. To make the trends shown in Figure 2A more clear, it can be useful to index the population shares and age-specific cancer mortality rates to the year 2000.
End of explanation
"""
tmp4 = df_age.merge(df_cancer,'left','years').assign(share = lambda x: x.number/x.cancer)
tmp4 = tmp4.melt(['years','age'],['number','share'],'tt')
plotnine.options.figure_size = (8.5, 3.5)
di_tt = {'number':'Number of death','share':'Share of deaths'}
gg_age_breakdown = (ggplot(tmp4, aes(x='years',weight='value',fill='age')) + theme_bw() +
labs(y='Malignant neoplasm deaths', x='Year') +
geom_bar() + facet_wrap('~tt',scales='free_y',labeller=labeller(tt=di_tt)) +
ggtitle('Figure 2D: Age-specific contributions to cancer death') +
scale_fill_manual(name='Age',values=colz,labels=lblz) +
theme(subplots_adjust={'wspace': 0.15}))
gg_age_breakdown
"""
Explanation: Figure 2C shows that the biggest relative gains in the population share have occurred for the 65-74, 75-84, and 85+ age categories, with the 85+ group increasing by more than 60% in 20 years. Because cancer deaths in the 85+ age range are extremely high, a trend of the population structure towards this age demographic will drastically increase overall cancer death numbers. The second panel in Figure 2C is more promising: age-specific cancer death rates have declined in a consistent manner for every age group since 2000. While the relative decline has been a modest 5% for 65-74 year olds, it has been an impressive 35% reduction for those aged 15-24. This finding was the most surprising result of this analysis for me, and gives definitive evidence that we are getting better at preventing people from dying from cancer. I will discuss this further in the conclusion.
End of explanation
"""
df_pop_r = df_pop_total.assign(r=lambda x: x.total/x.total.head(1).values).drop(columns='total')
df_idx = df_age.assign(population=lambda x: np.where(x.years==x.years.head(1).values[0],x.population,np.NaN)).drop(columns='number')
df_idx.population = df_idx.groupby('age').population.fillna(method='ffill')
df_idx = df_idx.merge(df_pop_r,'left','years').assign(population=lambda x: x.population*x.r)
df_idx = df_idx.assign(cancer=lambda x: x.rate*(x.population/1e5))
df_idx = df_idx.groupby('years')[['population','cancer']].sum().astype(int).reset_index()
df_idx = df_idx.assign(rate=lambda x: x.cancer/(x.population/1e5))
tmp3 = df_idx.melt('years',['cancer','rate'],'tt')
plotnine.options.figure_size = (8, 3.5)
gg_theory = (ggplot(tmp3,aes(x='years',y='value')) + theme_bw() +
geom_point(color='blue') + geom_line(color='blue') +
facet_wrap('~tt',scales='free_y',labeller=labeller(tt={'cancer':'Total','rate':'Per 100K'})) +
labs(y='Malignant neoplasm deaths', x='Date') +
ggtitle('Figure 2E: Counterfactual cancer deaths since 2000') +
theme(subplots_adjust={'wspace': 0.15}))
gg_theory
# df_idx[df_idx.years.isin([2000,2018])]/df_idx[df_idx.years.isin([2000,2018])].shift(1)-1
"""
Explanation: In 2018, half and three-quarters of all cancer deaths came from those over the age of 75 and 65, respectively. Cancer deaths for those less than 55 made up only 7% of the total share in 2018. Cancer is overwhelmingly a killer of the elderly and retired. So far the data has shown that:
The Canadian population is slowly greying (Figure 2A)
Cancer risk approximately doubles every decade of aging (Figure 2B)
Age-specific mortality has declined for every category except the 85+ group, where it has remained stable (Figure 2C)
Cancer deaths are growing in aggregate and per capita because the share of 85+ cancer deaths has increased, the 55-84 has remained constant, and the <55 has declined (Figure 2D)
Now we are at a point where we can try to estimate the counterfactual number of cancer deaths by holding the population structure the same. Because age-specific mortality rates have gotten better (or at least no worse) for all age ranges (Figure 2C), this adjusted measure will necessarily show a decline. The notation used in the formula below is as follows. The age category and year are denoted by $i$ and $t$, respectively. The population level and cancer deaths are referred to as $p$ and $c$, respectively. The age-specific death rate from cancer is $d_{i,t}$ and the overall Canadian population growth is $g_t$.
$$
\begin{align}
g_t &= p_t / p_{t-1} \\
d_{i,t} &= c_{i,t} / p_{i,t} \\
\tilde p_{i,t} &= \tilde p_{i,t-1} \cdot g_t \\
\tilde c_{i,t} &= \tilde p_{i,t} \cdot d_{i,t}
\end{align}
$$
The formula above shows that the counterfactual cancer death numbers, $\tilde c_{i,t}$, are the actual cancer death rate for that age cohort for a given year, multiplied by a theoretical population level whose cohort growth rate matches the overall Canadian population growth rate, rather than its true age-specific growth rate. This ensures that the total counterfactual Canadian population remains identical in any year, but that its relative population shares remain unchanged since the year 2000.
End of explanation
"""
df_type = pd.read_csv('1310014201_type_age.csv')
df_type.rename(columns={'Age group':'age','Cause of death (ICD-10)':'cod','VALUE':'number'},inplace=True)
drop_age = ['Total, all ages','Age, not stated']
drop_cod = ['Total, all causes of death [A00-Y89]','Chapter II: Neoplasms [C00-D48]','Malignant neoplasms [C00-C97]']
df_type = df_type[~(df_type.age.isin(drop_age) | df_type.cod.isin(drop_cod))].reset_index(None,True)
df_type.age = df_type.age.str.replace('\\syears','').map(di_age)
df_type.cod = df_type.cod.str.replace('Malignant neoplasms{0,1} of ','').str.split('\\s\\[',expand=True).iloc[:,0]
di_cod = {'Melanoma and other malignant neoplasms of skin':'melanoma',
'bone and articular cartilage':'bone',
'breast':'breast', 'digestive organs':'digestive',
'eye, brain and other parts of central nervous system':'brain/eye/CNS',
'female genital organs':'female sex organ',
'ill-defined, secondary and unspecified sites':'ill-defined',
'lip, oral cavity and pharynx':'mouth',
'lymphoid, haematopoietic and related tissue':'lymphoid',
'male genital organs':'male sex organ',
'mesothelial and soft tissue':'mesothelial',
'respiratory and intrathoracic organs':'respitory',
'thyroid and other endocrine glands':'thyroid',
'urinary tract':'urinary'}
df_type = df_type.assign(cod=lambda x: x.cod.map(di_cod))
df_type = df_type.merge(df_age[['years','age','population']],'left',['years','age'])
df_type = df_type.assign(rate=lambda x: x.number/(x.population/1e5))
# Calculate the population-consistent population
df_alt_type = df_type.drop(columns='number').assign(population=lambda x: np.where(x.years==x.years.head(1).values[0],x.population,np.NaN))
df_alt_type.population = df_alt_type.groupby('age').population.fillna(method='ffill')
df_alt_type = df_alt_type.merge(df_pop_r,'left','years').assign(population=lambda x: (x.population*x.r).astype(int))
df_alt_type = df_alt_type.assign(number=lambda x: x.rate*x.population/1e5).drop(columns='r')
# Merge datasets
df_alt_both = pd.concat([df_type.assign(tt='raw'), df_alt_type.assign(tt='counter')]).reset_index(None,True)
n_fac = 6
df_alt_both = df_alt_both.assign(cod2=lambda x: cat_lump(c=x.cod,w=x.number,n=n_fac))
tmp1 = df_alt_both.groupby(['years','cod2','tt']).number.sum().reset_index()
df_cod2 = tmp1.merge(df_pop_total,'left','years').assign(rate=lambda x: x.number/(x.total/1e5))
df_cod2 = df_cod2.sort_values(['tt','cod2','years']).reset_index(None,True).drop(columns=['number','total'])
mi_yr = df_cod2.years.min()
df_cod2 = df_cod2.merge(df_cod2.query('years==@mi_yr').drop(columns='years').rename(columns={'rate':'tmp'}),'left',['cod2','tt'])
df_cod2 = df_cod2.assign(idx=lambda x: x.rate/x.tmp*100).drop(columns='tmp')
df_cod2_long = df_cod2.melt(['tt','years','cod2'],['rate','idx'],'msr')
df_cod2_long = df_cod2_long.assign(tt=lambda x: cat_rev(x.tt), msr=lambda x: cat_rev(x.msr))
plotnine.options.figure_size = (8, 5.5)
#
di_msr = {'idx':'2000=100','rate':'Deaths per 100K'}
di_tt = {'counter':'Counterfactual', 'raw':'Raw'}
gg_cod = (ggplot(df_cod2_long, aes(x='years',y='value',color='cod2')) +
theme_bw() + geom_path() +
labs(x='Date',y='Cancer deaths') +
facet_grid('msr~tt',scales='free',labeller=labeller(msr=di_msr, tt=di_tt)) +
ggtitle('Figure 3A: Death rates by major cancer type') +
theme(subplots_adjust={'wspace': 0.05}) +
scale_color_discrete(name='Cancer'))
gg_cod
"""
Explanation: Had the population structure remained the same in Canada, the aggregate number of cancer deaths would have declined by 6%, despite the overall population increasing by 21%. On a per capita basis this amounts to a 22% decrease in age-consistent death rates. After this adjustment, per capita death rates on a population-consistent level have been falling at a linear rate. Were this trend to continue, it would mean very few Canadians below the age of retirement would be dying from cancer in several decades.
(3) Cancer-specific trends
Section (2) has shown without a doubt that age factors are the key to understanding Canada's cancer statistics. Our rapidly aging population is causing overall and per capita death rates to rise even while age-specific cancer death rates are declining for every age group outside those over 85. However, cancer is an umbrella term for a variety of diseases that share common factors but are biologically distinct. Melanomas are radically different from lymphomas in their biology, origin, and treatment strategy.
Which cancers are the most deadly in Canada, and how has this changed over time? In the analysis below I aggregate 14 cancer types into 6 main categories by mortality risk: breast, digestive, lymphoid, male sex organ, respiratory, and ill-defined, with the remainder lumped into "other". Note that by mortality risk I am referring to the number of individuals that die from the disease, rather than the death rate conditional on having the disease.
End of explanation
"""
tmp2 = df_alt_both.query('cod2=="other"').groupby(['years','cod','tt']).number.sum().reset_index()
df_cod = tmp2.merge(df_pop_total,'left','years').assign(rate=lambda x: x.number/(x.total/1e5))
df_cod = df_cod.sort_values(['tt','cod','years']).reset_index(None,True).drop(columns=['number','total'])
mi_yr = df_cod.years.min()
df_cod = df_cod.merge(df_cod.query('years==@mi_yr').drop(columns='years').rename(columns={'rate':'tmp'}),'left',['cod','tt'])
df_cod = df_cod.assign(idx=lambda x: x.rate/x.tmp*100).drop(columns='tmp')
df_cod_long = df_cod.melt(['tt','years','cod'],['rate','idx'],'msr')
df_cod_long = df_cod_long.assign(tt=lambda x: cat_rev(x.tt), msr=lambda x: cat_rev(x.msr))
gg_other = (ggplot(df_cod_long, aes(x='years',y='value',color='cod')) +
theme_bw() + geom_path() +
labs(x='Date',y='Cancer deaths') +
facet_grid('msr~tt',scales='free',labeller=labeller(msr=di_msr, tt=di_tt)) +
ggtitle('Figure 3B: Death rates by "other" cancer type') +
theme(subplots_adjust={'wspace': 0.05}) +
scale_color_discrete(name='Cancer'))
gg_other
"""
Explanation: The majority of cancer deaths come from lung and gastrointestinal (digestive) cancers. Many GI cancers like pancreatic, gallbladder, or esophageal are notoriously difficult to treat. Overall, the actual (raw) per capita death rates have been rising for some cancers and declining for others, as shown in the first column in Figure 3A. However, these rates may be rising for the same reason aggregate per capita death rates are increasing: a greying population. The second column of the figure re-calculates these trends using the population-consistent approach discussed in section (2). After applying these adjustments, the results are much more impressive: the major cancer categories have seen a decline in per capita mortality rates ranging from 15 to 40%. The one exception to this trend is the "other" category, which has shown no improvement, much like Dr. Raza's MDS and AML patients. It is worth looking into these categories.
End of explanation
"""
|
pdamodaran/yellowbrick | examples/bbengfort/testing.ipynb | apache-2.0 | %matplotlib inline
import os
import sys
import nltk
import pickle
# To import yellowbrick
sys.path.append("../..")
"""
Explanation: Visual Diagnosis of Text Analysis with Baleen
This notebook has been created as part of the Yellowbrick user study. I hope to explore how visual methods might improve the workflow of text classification on a small to medium sized corpus.
Dataset
The dataset used in this study is a sample of the Baleen Corpus. The Baleen corpus has been ingesting RSS feeds on the hour from a variety of topical feeds since March 2016, including news, hobbies, and political documents and currently has over 1.2M posts from 373 feeds. Baleen (an open source system) has a sister library called Minke that provides multiprocessing support for dealing with Gigabytes worth of text.
The dataset I'll use in this study is a sample of the larger data set that contains 68,052 documents, or roughly 6% of the total corpus. For this test, I've chosen to use the preprocessed corpus, which means I won't have to do any tokenization, but can still apply normalization techniques. The corpus is described as follows:
Baleen corpus contains 68,052 files in 12 categories.
Structured as:
1,200,378 paragraphs (17.639 mean paragraphs per file)
2,058,635 sentences (1.715 mean sentences per paragraph).
Word count of 44,821,870 with a vocabulary of 303,034 (147.910 lexical diversity).
Category Counts:
books: 1,700 docs
business: 9,248 docs
cinema: 2,072 docs
cooking: 733 docs
data science: 692 docs
design: 1,259 docs
do it yourself: 2,620 docs
gaming: 2,884 docs
news: 33,253 docs
politics: 3,793 docs
sports: 4,710 docs
tech: 5,088 docs
This is quite a lot of data, so for now we'll simply create a classifier for the "hobbies" categories: e.g. books, cinema, cooking, diy, gaming, and sports.
Note: this data set is not currently publically available, but I am happy to provide it on request.
End of explanation
"""
CORPUS_ROOT = os.path.join(os.getcwd(), "data")
CATEGORIES = ["books", "cinema", "cooking", "diy", "gaming", "sports"]
def fileids(root=CORPUS_ROOT, categories=CATEGORIES):
"""
Fetch the paths, filtering on categories (pass None for all).
"""
for name in os.listdir(root):
dpath = os.path.join(root, name)
if not os.path.isdir(dpath):
continue
if categories and name in categories:
for fname in os.listdir(dpath):
yield os.path.join(dpath, fname)
def documents(root=CORPUS_ROOT, categories=CATEGORIES):
"""
Load the pickled documents and yield one at a time.
"""
for path in fileids(root, categories):
with open(path, 'rb') as f:
yield pickle.load(f)
def labels(root=CORPUS_ROOT, categories=CATEGORIES):
"""
Return a list of the labels associated with each document.
"""
for path in fileids(root, categories):
dpath = os.path.dirname(path)
yield dpath.split(os.path.sep)[-1]
"""
Explanation: Loading Data
In order to load data, I'd typically use a CorpusReader. However, for the sake of simplicity, I'll load data using some simple Python generator functions. I need to create two primary methods, the first loads the documents using pickle, and the second returns the vector of targets for supervised learning.
End of explanation
"""
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer
from unicodedata import category as ucat
from nltk.corpus import stopwords as swcorpus
from sklearn.base import BaseEstimator, TransformerMixin
def identity(args):
"""
The identity function is used as the "tokenizer" for
    pre-tokenized text. It just passes back its arguments.
"""
return args
def is_punctuation(token):
"""
Returns true if all characters in the token are
unicode punctuation (works for most punct).
"""
return all(
ucat(c).startswith('P')
for c in token
)
def wnpos(tag):
"""
Returns the wn part of speech tag from the penn treebank tag.
"""
return {
"N": wn.NOUN,
"V": wn.VERB,
"J": wn.ADJ,
"R": wn.ADV,
}.get(tag[0], wn.NOUN)
class TextNormalizer(BaseEstimator, TransformerMixin):
def __init__(self, stopwords='english', lowercase=True, lemmatize=True, depunct=True):
self.stopwords = frozenset(swcorpus.words(stopwords)) if stopwords else frozenset()
self.lowercase = lowercase
self.depunct = depunct
self.lemmatizer = WordNetLemmatizer() if lemmatize else None
def fit(self, docs, labels=None):
return self
def transform(self, docs):
for doc in docs:
yield list(self.normalize(doc))
def normalize(self, doc):
for paragraph in doc:
for sentence in paragraph:
for token, tag in sentence:
if token.lower() in self.stopwords:
continue
if self.depunct and is_punctuation(token):
continue
if self.lowercase:
token = token.lower()
if self.lemmatizer:
token = self.lemmatizer.lemmatize(token, wnpos(tag))
yield token
"""
Explanation: Feature Extraction and Normalization
In order to conduct analyses with Scikit-Learn, I'll need some helper transformers to modify the loaded data into a form that can be used by the sklearn.feature_extraction text transformers. I'll be mostly using the CountVectorizer and TfidfVectorizer, so these normalizer transformers and identity functions help a lot.
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import FreqDistVisualizer
visualizer = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
('viz', FreqDistVisualizer())
])
visualizer.fit_transform(documents(), labels())
visualizer.named_steps['viz'].poof()
vect = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = FreqDistVisualizer()
viz.fit(docs, vect.named_steps['count'].get_feature_names())
viz.poof()
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.text import TSNEVisualizer
vect = Pipeline([
('norm', TextNormalizer()),
('tfidf', TfidfVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = TSNEVisualizer()
viz.fit(docs, labels())
viz.poof()
"""
Explanation: Corpus Analysis
At this stage, I'd like to get a feel for what was in my corpus, so that I can start thinking about how to best vectorize the text and do different types of counting. With the Yellowbrick 0.3.3 release, support has been added for two text visualizers, which I think I will test out at scale using this corpus.
End of explanation
"""
from sklearn.model_selection import train_test_split as tts
docs_train, docs_test, labels_train, labels_test = tts(docs, list(labels()), test_size=0.2)
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ClassBalance, ClassificationReport, ROCAUC
logit = LogisticRegression()
logit.fit(docs_train, labels_train)
logit_balance = ClassBalance(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ClassificationReport(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ClassificationReport(LogisticRegression())
logit_balance.fit(docs_train, labels_train)
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ROCAUC(logit)
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
"""
Explanation: Classification
The primary task for this kind of corpus is classification: topic labeling, sentiment analysis, etc.
End of explanation
"""
M0nica/python-foundations-hw | 08/.ipynb_checkpoints/08-checkpoint.ipynb | mit
# workon dataanalysis - my virtual environment
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# df = pd.read_table('34933-0001-Data.tsv')
odf = pd.read_csv('accreditation_2016_03.csv')
odf.head()
odf.columns
odf['Campus_City'].value_counts().head(10)
top_cities = odf['Campus_City'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
top_cities.set_title('Top 10 College Cities (By Number of Colleges in City)')
top_cities.set_xlabel('City')
top_cities.set_ylabel('# of Colleges')
plt.savefig('topcollegecities.png')
top_states = odf['Campus_State'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
top_states.set_title('Top 10 College States (By Number of Campuses in State)')
top_states.set_xlabel('State')
top_states.set_ylabel('# of Colleges')
plt.savefig('topcollegestates.png')
odf['Accreditation_Status'].value_counts()
df = pd.read_csv('Full Results - Stack Overflow Developer Survey - 2015 2.csv', encoding ='mac_roman')
df.head()
df.columns
df.info()
"""
Explanation: Note: you can find my iPython Notebook for Dataset 1 here -> https://github.com/M0nica/2016-new-coder-survey
End of explanation
"""
df['Age'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
"""
Explanation: How old are the programmers that answered this survey?
End of explanation
"""
df['Industry'].value_counts().head(10).plot(kind="barh", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Preferred text editor'].value_counts().head(10).plot(kind="barh", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Preferred text editor'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
#df['Training & Education: BS in CS'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Occupation'].value_counts()
df.groupby('Gender')['Occupation'].value_counts().plot(kind="bar", color = ['#599ad3', '#f9a65a']) # too much data to appropriately display
#df.groupby('Gender')['Years IT / Programming Experience'].value_counts()
#i = ["Male", "Female"]
gender_df = df[(df['Gender'] == 'Male') | (df['Gender'] == 'Female')]
print(gender_df['Gender'].value_counts())
#gender_df.groupby('Gender')['Years IT / Programming Experience'].value_counts().plot(kind="bar", color = ['#599ad3', '#f9a65a'])
gender_df.groupby('Gender')['Occupation'].value_counts()
gender_df = gender_df[gender_df['Occupation'] == "Full-stack web developer"]
gender_df.groupby('Gender')['Occupation'].value_counts().plot(kind="bar", color = ['#599ad3', '#f9a65a'])
gender_df.groupby('Gender')['Years IT / Programming Experience'].value_counts().head(10).plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df['Age'].value_counts()
df.groupby('Gender')['Age'].value_counts().plot(kind="bar", color = ['#624ea7', '#599ad3', '#f9a65a', '#9e66ab', 'purple'])
df["AgeScale"] = df["Age"].apply(str).replace("< 20", "0").apply(str).replace("20-24", "1").apply(str).replace("25-29", "2").apply(str).replace("30-34", "3").apply(str).replace("30-34", "3").apply(str).replace("35-39", "4").apply(str).replace("40-50", "5").apply(str).replace("51-60", "6").apply(str).replace("> 60", "7")
print(df["AgeScale"].head(10))
years_df = df[df['AgeScale'] != "Prefer not to disclose"]
years_df['AgeScale'] = years_df['AgeScale'].astype(float)
print(years_df.head())
years_df['Years IT / Programming Experience'].value_counts()
exp_map = {"Less than 1 year": "0", "1 - 2 years": "1", "2 - 5 years": "2", "6 - 10 years": "3", "11+ years": "4"}
years_df['ExperienceRank'] = years_df['Years IT / Programming Experience'].apply(str).replace(exp_map).astype(float)
# years_df.head()
years_df['ExperienceRank'].value_counts()
years_df['AgeScale'].value_counts()
#years_df['ExperienceRank'] = float(years_df['ExperienceRank'])
# years_df['AgeScale'] = float(years_df['AgeScale'])
# years_df['AgeScale'] = years_df['AgeScale'].apply(int)
#years_df['ExperienceRank'] = parseInt(years_df['ExperienceRank'])
#years_df['ExperienceRank'] = pd.Series(years_df['ExperienceRank'])
#years_df['AgeScale'] = pd.Series(years_df['AgeScale'])
moneyScatter = years_df.plot(kind='scatter', x='ExperienceRank', y='AgeScale', alpha=0.2) # categorical data does not display well on scatter plots
#moneyScatter.set_title('Distribution of Money Spent Amongst Respondents to the Survey by Age')
#moneyScatter.set_xlabel('Months Programming')
#moneyScatter.set_ylabel('Hours Spent Learning Each Week')
#plt.savefig('studyingovertime.png')
years_df['ExperienceRank'].describe()
years_df[['ExperienceRank','AgeScale']] = years_df[['ExperienceRank','AgeScale']].apply(pd.to_numeric)
# years_df.apply(lambda x: pd.to_numeric(x, errors='ignore'))
years_df['ExperienceRank'].describe()
years_df['ExperienceRank'].head()
years_df['AgeScale'].head()
"""
Explanation: What industries are these individuals working in?
End of explanation
"""
GoogleCloudPlatform/vertex-ai-samples | notebooks/official/migration/UJ6 Vertex SDK AutoML Text Classification.ipynb | apache-2.0
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI: Vertex AI Migration: AutoML Text Classification
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ6%20Vertex%20SDK%20AutoML%20Text%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ6%20Vertex%20SDK%20AutoML%20Text%20Classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the Happy Moments dataset from Kaggle Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
"""
Explanation: Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
This tutorial uses a version of the Happy Moments dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
"""
dataset = aip.TextDataset.create(
display_name="Happy Moments" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.text.single_label_classification,
)
print(dataset.resource_name)
"""
Explanation: Create a dataset
datasets.create-dataset-api
Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
"""
dag = aip.AutoMLTextTrainingJob(
display_name="happydb_" + TIMESTAMP,
prediction_type="classification",
multi_label=False,
)
print(dag)
"""
Explanation: Example Output:
INFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset
INFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/3193181053544038400
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:To use this TextDataset in another session:
INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.TextDataset('projects/759209241365/locations/us-central1/datasets/3704325042721521664')
INFO:google.cloud.aiplatform.datasets.dataset:Importing TextDataset data: projects/759209241365/locations/us-central1/datasets/3704325042721521664
INFO:google.cloud.aiplatform.datasets.dataset:Import TextDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/5152246891450204160
INFO:google.cloud.aiplatform.datasets.dataset:TextDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664
projects/759209241365/locations/us-central1/datasets/3704325042721521664
Train a model
training.automl-api
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
End of explanation
"""
model = dag.run(
dataset=dataset,
model_display_name="happydb_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
"""
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.AutoMLTextTrainingJob object at 0x7fc3b6c90f10>
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method, when completed, returns the Model resource.
The execution of the training pipeline will take up to 20 minutes.
End of explanation
"""
# Get model resource ID
models = aip.Model.list(filter="display_name=happydb_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.training_jobs:View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/8859754745456230400?project=759209241365
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:
PipelineState.PIPELINE_STATE_RUNNING
...
INFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/6389525951797002240
Evaluate the model
projects.locations.models.evaluations.list
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
"""
test_items = ! gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")
if len(cols_1) == 3:
    _, test_item_1, test_label_1 = cols_1
    _, test_item_2, test_label_2 = cols_2
else:
    test_item_1, test_label_1 = cols_1
    test_item_2, test_label_2 = cols_2
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
"""
Explanation: Example output:
name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824"
metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml"
metrics {
struct_value {
fields {
key: "auPrc"
value {
number_value: 0.9891107
}
}
fields {
key: "confidenceMetrics"
value {
list_value {
values {
struct_value {
fields {
key: "precision"
value {
number_value: 0.2
}
}
fields {
key: "recall"
value {
number_value: 1.0
}
}
}
}
Make batch predictions
predictions.batch-prediction
Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples from the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
import json
import tensorflow as tf
gcs_test_item_1 = BUCKET_NAME + "/test1.txt"
with tf.io.gfile.GFile(gcs_test_item_1, "w") as f:
f.write(test_item_1 + "\n")
gcs_test_item_2 = BUCKET_NAME + "/test2.txt"
with tf.io.gfile.GFile(gcs_test_item_2, "w") as f:
f.write(test_item_2 + "\n")
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": gcs_test_item_1, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
data = {"content": gcs_test_item_2, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file must be in JSONL format, with one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
content: The Cloud Storage path to the file with the text item.
mime_type: The content type. In our example, it is a text/plain file.
For example:
{'content': '[your-bucket]/file1.txt', 'mime_type': 'text/plain'}
End of explanation
"""
batch_predict_job = model.batch_predict(
job_display_name="happydb_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
"""
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
End of explanation
"""
endpoint = model.deploy()
"""
Explanation: Example Output:
{'instance': {'content': 'gs://andy-1234-221921aip-20210803210202/test2.txt', 'mimeType': 'text/plain'}, 'prediction': {'ids': ['681905016918769664', '3564208778435887104', '8175894796863275008', '5538192790107717632', '5870051787649581056', '3232349780894023680', '926506771680329728'], 'displayNames': ['affection', 'achievement', 'bonding', 'enjoy_the_moment', 'nature', 'leisure', 'exercise'], 'confidences': [0.9977309, 0.0017838771, 0.0002530971, 0.00014939539, 4.747714e-05, 2.6297073e-05, 8.965492e-06]}}
Make online predictions
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
# The first line may have either 2 or 3 comma-separated fields
if len(str(test_item[0]).split(",")) == 3:
    _, test_item, test_label = str(test_item[0]).split(",")
else:
    test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
"""
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
instances_list = [{"content": test_item}]
prediction = endpoint.predict(instances_list)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
{ 'content': text_string }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
endpoint.undeploy_all()
"""
Explanation: Example output:
Prediction(predictions=[{'confidences': [0.9807877540588379, 0.0029202369041740894, 0.001903864904306829, 0.013396155089139938, 0.0002868965675588697, 0.00017845185357145965, 0.0005265594809316099], 'displayNames': ['affection', 'achievement', 'enjoy_the_moment', 'bonding', 'leisure', 'nature', 'exercise'], 'ids': ['1022137895017775104', '6210284665748586496', '3327980904231469056', '8516127674962280448', '7939666922658856960', '3904441656534892544', '5633823913445163008']}], deployed_model_id='8719822099612434432', explanations=None)
affection
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
darkomen/TFG | medidas/03082015/modelado.ipynb | cc0-1.0 | #Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import signal
#Show the versions of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
%pylab inline
#Open the csv file with the sample data in order to filter it
err_u = 2
err_d = 1
datos_sin_filtrar = pd.read_csv('datos.csv')
datos = datos_sin_filtrar[(datos_sin_filtrar['Diametro X'] >= err_d) &
                          (datos_sin_filtrar['Diametro Y'] >= err_d) &
                          (datos_sin_filtrar['Diametro X'] <= err_u) &
                          (datos_sin_filtrar['Diametro Y'] <= err_u)]
datos.describe()
#Store in a list the columns of the file we will work with
#columns = ['temperatura', 'entrada']
columns = ['Diametro X', 'RPM TRAC']
"""
Explanation: Modeling a system with IPython
Using IPython to model a system from the data obtained in a test. Open-loop test.
End of explanation
"""
#Show in several plots the information obtained from the test
th_u = 1.95
th_d = 1.55
datos[columns].plot(secondary_y=['RPM TRAC'],figsize=(10,5),title='Modelo matemático del sistema').hlines([th_d ,th_u],0,2000,colors='r')
#datos_filtrados['RPM TRAC'].plot(secondary_y=True,style='g',figsize=(20,20)).set_ylabel=('RPM')
"""
Explanation: Representation
We plot the recorded data as a function of time. This way, we can see the physical response of our system.
End of explanation
"""
# Find the order-4 polynomial that fits the distribution of the data
reg = np.polyfit(datos['time'],datos['Diametro X'],4)
# Compute the y values with the regression
ry = np.polyval(reg,datos['time'])
print (reg)
d = {'Diametro X' : datos['Diametro X'],
'Ajuste': ry, 'RPM TRAC' : datos['RPM TRAC']}
df = pd.DataFrame(d)
df.plot(subplots=True,figsize=(20,20))
plt.figure(figsize=(10,10))
plt.plot(datos['time'],datos['Diametro X'], label=('f(x)'))
plt.plot(datos['time'],ry,'ro', label=('regression'))
plt.legend(loc=0)
plt.grid(True)
plt.xlabel('x')
plt.ylabel('f(x)')
"""
Explanation: Computing the characteristic polynomial
We fit an order-4 polynomial regression to find the equation that best matches the trend of our data.
End of explanation
"""
##Frequency response of the system
num = [25.9459 ,0.00015733 ,0.00000000818174]
den = [1,0,0]
tf = signal.lti(num,den)
w, mag, phase = signal.bode(tf)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))
ax1.semilogx(w, mag)    # logarithmic x axis
ax2.semilogx(w, phase)  # logarithmic x axis
w, H = signal.freqresp(tf)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
ax1.plot(H.real, H.imag)
ax1.plot(H.real, -H.imag)
ax2.plot(tf.zeros.real, tf.zeros.imag, 'o')
ax2.plot(tf.poles.real, tf.poles.imag, 'x')
t, y = signal.step2(tf)  # unit step response
plt.plot(t, 2250 * y)  # equivalent to an input of height 2250
"""
Explanation: The characteristic polynomial of our system is:
$P_x = 25.9459 - 1.5733\cdot10^{-4}\,X - 8.18174\cdot10^{-9}\,X^2$
Laplace transform
If we compute the Laplace transform of the system, we obtain the following result:
$G_s = \frac{25.95\,S^2 - 0.00015733\,S + 1.63635\cdot10^{-8}}{S^3}$
End of explanation
"""
|
superbobry/pymc3 | pymc3/examples/GLM-logistic.ipynb | apache-2.0 | %matplotlib inline
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn
import warnings
warnings.filterwarnings('ignore')
from collections import OrderedDict
from time import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import fmin_powell
from scipy import integrate
import theano as thno
import theano.tensor as T
def run_models(df, upper_order=5):
'''
Convenience function:
Fit a range of pymc3 models of increasing polynomial complexity.
Suggest limit to max order 5 since calculation time is exponential.
'''
models, traces = OrderedDict(), OrderedDict()
for k in range(1,upper_order+1):
nm = 'k{}'.format(k)
fml = create_poly_modelspec(k)
with pm.Model() as models[nm]:
print('\nRunning: {}'.format(nm))
pm.glm.glm(fml, df, family=pm.glm.families.Normal())
start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False)
traces[nm] = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)
return models, traces
def plot_traces(traces, retain=1000):
'''
Convenience function:
Plot traces with overlaid means and values
'''
ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()})
for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'
,xytext=(5,10), textcoords='offset points', rotation=90
,va='bottom', fontsize='large', color='#AA0022')
def create_poly_modelspec(k=1):
'''
Convenience function:
Create a polynomial modelspec string for patsy
'''
return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j)
for j in range(2,k+1)])).strip()
"""
Explanation: Bayesian Logistic Regression with PyMC3
This is a reproduction with a few slight alterations of Bayesian Log Reg by J. Benjamin Cook
Author: Peadar Coyle and J. Benjamin Cook
How likely am I to make more than $50,000 US Dollars?
I also explore model selection techniques, using DIC and WAIC to select the best model.
The convenience functions are all taken from Jon Sedar's work.
This example also includes some exploration of the features, so it serves as a good example of exploratory data analysis and how that can guide the model creation/model selection process.
End of explanation
"""
data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data", header=None, names=['age', 'workclass', 'fnlwgt',
'education-categorical', 'educ',
'marital-status', 'occupation',
'relationship', 'race', 'sex',
'captial-gain', 'capital-loss',
'hours', 'native-country',
'income'])
data
"""
Explanation: The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \$50,000 per year. The data set is almost 20 years old, and therefore, not perfect for determining the probability that I will make more than \$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.
My motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression.
End of explanation
"""
data = data[~pd.isnull(data['income'])]
data[data['native-country']==" United-States"]
income = 1 * (data['income'] == " >50K")
age2 = np.square(data['age'])
data = data[['age', 'educ', 'hours']]
data['age2'] = age2
data['income'] = income
income.value_counts()
"""
Explanation: Scrubbing and cleaning
We need to remove any null entries in Income.
And we also want to restrict this study to the United States.
End of explanation
"""
g = seaborn.pairplot(data)
# Compute the correlation matrix
corr = data.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = seaborn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
seaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
"""
Explanation: Exploring the data
Let us get a feel for the parameters.
* We see that age is a tailed distribution. Certainly not Gaussian!
* We don't see much of a correlation between many of the features, with the exception of Age and Age2.
* Hours worked has some interesting behaviour. How would one describe this distribution?
End of explanation
"""
with pm.Model() as logistic_model:
pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())
trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)
plot_traces(trace_logistic_model, retain=1000)
"""
Explanation: There are not many strong correlations here; the highest is 0.30 according to this plot. We see a weak correlation between hours and income
(which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering).
The model
We will use a simple model, which assumes that the probability of making more than $50K
is a function of age, years of education and hours worked per week. We will use PyMC3
to do inference.
In Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters
(in this case the regression coefficients)
By Bayes' rule, the posterior is $$p(\theta | D) = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Because the denominator is a notoriously difficult integral, $p(D) = \int p(D | \theta) p(\theta) d \theta $ we would prefer to skip computing it. Fortunately, if we draw examples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity.
What this means in practice is that we only need to worry about the numerator.
Getting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(θ)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.
The likelihood is the product of n Bernoulli trials, $\prod^{n}_{i=1} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,
where $p_i = \frac{1}{1 + e^{-z_i}}$,
$z_{i} = \beta_{0} + \beta_{1}(age)_{i} + \beta_{2}(age)^{2}_{i} + \beta_{3}(educ)_{i} + \beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise.
With the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameters are tuned automatically. Notice that we get to borrow the syntax for specifying GLMs from R, very convenient! I use a convenience function from above to plot the trace information.
End of explanation
"""
plt.figure(figsize=(9,7))
trace = trace_logistic_model[1000:]
seaborn.jointplot(trace['age'], trace['educ'], kind="hex", color="#4CB391")
plt.xlabel("beta_age")
plt.ylabel("beta_educ")
plt.show()
"""
Explanation: Some results
One of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.
I'll use seaborn to look at the distribution of some of these factors.
End of explanation
"""
# Linear model with hours == 50 and educ == 12
lm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*12 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 16
lm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*16 +
samples['hours']*50)))
# Linear model with hours == 50 and educ == 19
lm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] +
samples['age']*x +
samples['age2']*np.square(x) +
samples['educ']*19 +
samples['hours']*50)))
"""
Explanation: So how do age and education affect the probability of making more than \$50K? To answer this question, we can show how the probability of making more than \$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school).
End of explanation
"""
# Plot the posterior predictive distributions of P(income > $50K) vs. age
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color="blue", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color="green", alpha=.15)
pm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color="red", alpha=.15)
import matplotlib.lines as mlines
blue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')
green_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')
red_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')
plt.legend(handles=[blue_line, green_line, red_line], loc='lower right')
plt.ylabel("P(Income > $50K)")
plt.xlabel("Age")
plt.show()
b = trace['educ']
plt.hist(np.exp(b), bins=20, normed=True)
plt.xlabel("Odds Ratio")
plt.show()
"""
Explanation: Each curve shows how the probability of earning more than \$50K changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than \$50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread-out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.
End of explanation
"""
lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)
print("P(%.3f < O.R. < %.3f) = 0.95"%(np.exp(3*lb),np.exp(3*ub)))
"""
Explanation: Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credible intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval!
End of explanation
"""
models_lin, traces_lin = run_models(data, 4)
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')
g = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
"""
Explanation: Model selection
The Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.
One question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.
End of explanation
"""
dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])
dfdic.index.name = 'model'
for nm in dfdic.index:
dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])
dfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')
g = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)
"""
Explanation: There isn't a lot of difference between these models in terms of DIC, so our choice of model above is fine, and there isn't much to be gained by going up to age^3, for example.
Next we look at WAIC, which is another model-selection technique.
End of explanation
"""
|
DavidPowell/openmodes-examples | Modelling Coupled Elements.ipynb | gpl-3.0 | # setup 2D and 3D plotting
%matplotlib inline
from openmodes.ipython import matplotlib_defaults
matplotlib_defaults()
import matplotlib.pyplot as plt
import numpy as np
import os.path as osp
import openmodes
from openmodes.constants import c, eta_0
from openmodes.model import EfieModelMutualWeight
from openmodes.sources import PlaneWaveSource
"""
Explanation: Calculate Coupling Coefficients
Two split rings are placed in a broadside coupled configuration. The scalar model of each resonator is augmented by additional coefficients representing the coupling between them.
In D. A. Powell et al, Phys. Rev. B 82, 155128 (2010) similar coefficients were calculated under the quasi-static approximation for a single mode only. Here the effects of retardation and radiation losses are included, as are the contributions of multiple modes of each ring.
End of explanation
"""
parameters={'inner_radius': 2.5e-3, 'outer_radius': 4e-3}
sim = openmodes.Simulation(notebook=True)
srr = sim.load_mesh(osp.join(openmodes.geometry_dir, "SRR.geo"),
parameters=parameters,
mesh_tol=0.7e-3)
srr1 = sim.place_part(srr)
srr2 = sim.place_part(srr, location=[0e-3, 0e-3, 2e-3])
srr2.rotate([0, 0, 1], 180)
sim.plot_3d()
"""
Explanation: Creating geometry
As in previous examples, we load a pair of SRRs, place them in the simulation and visualise the result.
End of explanation
"""
start_s = 2j*np.pi*1e9
num_modes = 3
estimate = sim.estimate_poles(start_s, modes=num_modes, parts=[srr1, srr2], cauchy_integral=False)
refined = sim.refine_poles(estimate)
"""
Explanation: Solving modes of individual rings
We find the singularities for the two identical rings.
End of explanation
"""
dominant_modes = refined.select([0]).add_conjugates()
simple_model = EfieModelMutualWeight(dominant_modes)
full_modes = refined.select([0, 2]).add_conjugates()
full_model = EfieModelMutualWeight(full_modes)
"""
Explanation: Constructing the models
We now use these singularities to construct a model for each of the rings, where $n$ represents the ring number and $m$ represents the mode number. $s=j\omega$ is the complex frequency, $s_{n,m}$ is the complex resonant frequency of each mode and $\mathbf{V}_{n, m}$ is the corresponding current of the mode. The current on each ring is represented as a sum of modes
$$\mathbf{J}_{n} = \sum_m a_{n,m}\mathbf{V}_{n, m}$$
This results in the following coupled equation system for the modal coefficients on each ring
$$\frac{s_{n,m}}{s}\left(s-s_{n,m}\right)a_{n,m} + \sum_{n'\neq n}\sum_{m'}\left(sL_{m,n,m',n'} + \frac{1}{s}S_{m,n,m',n'}\right)a_{n',m'} = \mathbf{V}_{n, m}\cdot\mathbf{E}_{inc}$$
The first term just says that the self-impedance of each mode is calculated directly from the pole expansion.
The second term is the mutual inductance $L$ and capacitance $C = S^{-1}$ between the modes of different rings. These coefficients are obtained by weighting the relevant parts of the impedance matrix, e.g.:
$$L_{m,n,m',n'} = \mathbf{V}_{n, m} L_{n,n'}\mathbf{V}_{n', m'}$$
The right hand side is just the projection of the incident field onto each mode
Here we construct two different models, one considering only the fundamental mode of each ring, and another considering the first and third modes. Due to symmetry, the second mode of each ring does not play a part in the hybridised modes which will be considered here.
End of explanation
"""
num_freqs = 200
freqs = np.linspace(5e9, 10e9, num_freqs)
plane_wave = PlaneWaveSource([0, 1, 0], [1, 0, 0], p_inc=1.0)
extinction_tot = np.empty(num_freqs, np.complex128)
extinction_single = np.empty(num_freqs, np.complex128)
extinction_full_model = np.empty((num_freqs, len(full_modes)), np.complex128)
extinction_simple_model = np.empty((num_freqs, len(dominant_modes)), np.complex128)
# store the mutual coupling coefficients for plotting purposes
mutual_L = np.empty(num_freqs, np.complex128)
mutual_S = np.empty(num_freqs, np.complex128)
simple_vr = dominant_modes.vr
simple_vl = dominant_modes.vl
full_vr = full_modes.vr
full_vl = full_modes.vl
for freq_count, s in sim.iter_freqs(freqs):
impedance = sim.impedance(s)
V = sim.source_vector(plane_wave, s)
# For reference directly calculate extinction for the complete system, and for a single ring
extinction_tot[freq_count] = np.vdot(V, impedance.solve(V))
extinction_single[freq_count] = np.vdot(V["E", srr1], impedance[srr1, srr1].solve(V["E", srr1]))
# calculate based on the simple model
Z_model = simple_model.impedance(s)
V_model = simple_vl.dot(V)
I_model = Z_model.solve(V_model)
extinction_simple_model[freq_count] = V.conj().dot(simple_vr*(I_model))
# calculate based on the full model
Z_model = full_model.impedance(s)
V_model = full_vl.dot(V)
I_model = Z_model.solve(V_model)
extinction_full_model[freq_count] = V.conj().dot(full_vr*(I_model))
mutual_L[freq_count] = Z_model.matrices['L'][srr1, srr2][0, 0]
mutual_S[freq_count] = Z_model.matrices['S'][srr1, srr2][0, 0]
"""
Explanation: Solving scattering based on models
Now we iterate through all frequencies, and calculate the model parameters. Their accuracy is demonstrated by using them to calculate the extinction cross-section. For reference purposes, this will be compared with the direct calculation.
End of explanation
"""
# normalise the extinction to the cross-sectional area of each ring
area = np.pi*(parameters['outer_radius'])**2
Q_single = extinction_single/area
Q_pair = extinction_tot/area
Q_full_model = extinction_full_model/area
Q_simple_model = extinction_simple_model/area
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.plot(freqs*1e-9, Q_pair.real, label='pair')
plt.plot(freqs*1e-9, Q_single.real, label='single')
plt.plot(freqs*1e-9, np.sum(Q_simple_model.real, axis=1), label='model')
plt.xlim(freqs[0]*1e-9, freqs[-1]*1e-9)
plt.xlabel('f (GHz)')
plt.legend(loc='upper right')
plt.ylabel('Extinction efficiency')
plt.subplot(122)
plt.plot(freqs*1e-9, Q_pair.imag)
plt.plot(freqs*1e-9, Q_single.imag)
plt.plot(freqs*1e-9, np.sum(Q_simple_model.imag, axis=1))
plt.xlim(freqs[0]*1e-9, freqs[-1]*1e-9)
plt.ylabel('Normalised reactance')
plt.xlabel('f (GHz)')
plt.show()
"""
Explanation: Accuracy of the models
The extinction cross section is now plotted for the pair of rings, using both the simpler model and the direct calculation. Additionally, the cross section of a single ring is shown. It can be seen that this fundamental mode of a single ring is split into two coupled modes. Due to the coupling impedance being complex, the hybridised modes have different widths.
End of explanation
"""
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.plot(freqs*1e-9, Q_pair.real, label='exact')
plt.plot(freqs*1e-9, np.sum(Q_simple_model.real, axis=1), label='single mode')
plt.plot(freqs*1e-9, np.sum(Q_full_model.real, axis=1), label='two modes')
plt.legend(loc="upper right")
plt.xlim(5.0, 7)
plt.xlabel('f (GHz)')
plt.subplot(122)
plt.plot(freqs*1e-9, Q_pair.imag)
plt.plot(freqs*1e-9, np.sum(Q_simple_model.imag, axis=1))
plt.plot(freqs*1e-9, np.sum(Q_full_model.imag, axis=1))
plt.xlim(5.3, 6.5)
plt.xlabel('f (GHz)')
plt.show()
"""
Explanation: In the above figure, it can be seen that the simple model of interaction between the rings gives quite good agreement. This can be improved by using the model which considers the two modes of each ring. While the modes on the same ring are independent of each other, between meta-atoms all modes couple to each other, thus there are 3 distinct coupling impedances, and two distinct self impedance terms.
The results of this model are plotted below, showing the improved accuracy by accounting for the higher-order mode.
End of explanation
"""
plt.figure()
plt.plot(freqs*1e-9, mutual_S.real, label='real')
plt.plot(freqs*1e-9, mutual_S.imag, label='imag')
plt.legend(loc="center right")
plt.ylabel('$S_{mut}$ (F$^{-1}$)')
plt.xlabel('f (GHz)')
plt.show()
plt.figure()
plt.plot(freqs*1e-9, mutual_L.real, label='real')
plt.plot(freqs*1e-9, mutual_L.imag, label='imag')
plt.legend(loc="center right")
plt.ylabel('$L_{mut}$ (H)')
plt.xlabel('f (GHz)')
plt.show()
"""
Explanation: Coupling coefficients within the model
Now we can study the coupling coefficients between the dominant modes of the two rings. It can be seen that both $L_{mut}$ and $S_{mut}$ show quite smooth behaviour over this wide frequency range. This justifies the use of a low-order polynomial model for the interaction terms. This frequency variation is due to retardation, which is not very strong for such close separation.
However, the retardation is still strong enough to make the imaginary parts of these coupling terms non-negligible. These parts correspond to the real part of the mutual impedance, and mean that the coupling affects not only the stored energy, but also the rate of energy loss due to radiation by the modes.
End of explanation
"""
|
dbouquin/DATA_620 | 620_project2_101716.ipynb | mit | import networkx as nx
import os
import ads as ads
import matplotlib.pyplot as plt
import pandas as pd
from networkx.algorithms import bipartite as bi
os.environ["ADS_DEV_KEY"] = "kNUoTurJ5TXV9hsw9KQN1k8wH4U0D7Oy0CJoOvyw"
ads.config.token = os.environ["ADS_DEV_KEY"]
#Search for papers (50 most cited) on stars (very general search)
papers1 = list(ads.SearchQuery(q= "stars", sort="citation_count", max_pages=1 ))
# find author names
a = []
for i in papers1:
authors1 = i.author
a.append(authors1)
author_names = a
# find the journals
j = []
for i in papers1:
journals1 = i.pub
j.append(journals1)
journals = j
# create an initial df
df = pd.DataFrame({'Author_Names' : author_names,
'Journal':journals
})
# Expand the df with melt
s1 = df.apply(lambda x: pd.Series(x['Author_Names']),axis=1).stack().reset_index(level=1, drop=True)
s1.name = 'Author_Name'
df_m = df.drop('Author_Names', axis=1).join(s1)
df_m.head()
author_nodes = pd.DataFrame(df_m.Author_Name.unique(),columns=['Author_Name'])
author_nodes['node_type'] = 'Author_Name'
journal_nodes = pd.DataFrame(df_m.Journal.unique(), columns=['Journal'])
journal_nodes['node_type'] = 'Journal'
# Build the graph from the node sets and edges
# set bipartite attribute to ensure weighted projection will work
a_nodes = list(author_nodes['Author_Name'])
j_nodes = list(journal_nodes['Journal'])
edge_bunch = [tuple(i) for i in df_m.values]
g = nx.Graph()
g.add_nodes_from(a_nodes,node_type='Author_Name', bipartite=0)
g.add_nodes_from(j_nodes,node_type='Journal', bipartite=1)
g.add_edges_from(edge_bunch)
# Weighted Projections/Clustering
# find the largest most connected graph - 200 as cut-off
big_subg = [i for i in nx.connected_component_subgraphs(g) if len(i) > 200]
# Largest:
sg_largest = big_subg[0] # largest connected subgraph
# weighted_projections can be applied to this subgraph to separate the two components
Journals,Author_Names = bi.sets(sg_largest) # split into bipartites
j_proj_sg_largest = bi.weighted_projected_graph(sg_largest, Journals)
a_proj_sg_largest = bi.weighted_projected_graph(sg_largest, Author_Names)
# Use the Island Method
j = j_proj_sg_largest.edges(data=True)
a = a_proj_sg_largest.edges(data=True)
# Find weights in the projections that are greater than 1
print len([i for i in a if i[2]['weight'] > 1])
print len([i for i in j if i[2]['weight'] > 1])
# With a min threshold of edge weight = 1, find the nodes with strong relationships within the sub-graphs.
# tidy function, similar to the one presented in Social Network Analysis (SNAS), Chapter 4.
def tidy(g, weight):
g_temp = nx.Graph()
edge_bunch2 = [i for i in g.edges(data=True) if i[2]['weight'] > weight]
g_temp.add_edges_from(edge_bunch2)
return g_temp
a_sg_island = tidy(a_proj_sg_largest, 1)
j_sg_island = tidy(j_proj_sg_largest,1)
"""
Explanation: 620 Project 2
Further analysis of NASA ADS publications: two-mode network analysis
Daina Bouquin
Below is an analysis of affiliations between authors and journals in the 2-mode NASA Astrophysics Data Systems dataset. This project builds on work performed in Project 1. The primary objective of this project is to use clustering techniques (e.g. the island method) to try to find small sub-networks of important authors that frequently collaborate together. In doing so we can also see which journals stand out as focal points for these types of collaborations.
End of explanation
"""
# degree centrality of both island clusters
a_degree = nx.degree_centrality(a_sg_island)
j_degree = nx.degree_centrality(j_sg_island)
pd.DataFrame.from_dict(a_degree,orient='index').sort_values(0,ascending=False).head()
pd.DataFrame.from_dict(j_degree,orient='index').sort_values(0,ascending=False).head()
"""
Explanation: We now have two islands of the projected authors and journals. Examining the degree centrality will help reveal which nodes are key to each network.
End of explanation
"""
# examine the connected subgraphs
j_connected = [i for i in nx.connected_component_subgraphs(j_proj_sg_largest) if len(i) > 1]
a_connected = [i for i in nx.connected_component_subgraphs(a_proj_sg_largest) if len(i) > 1]
# combining the graphs
def merge_graph(connected_g):
g = nx.Graph()
for h in connected_g:
g = nx.compose(g,h)
return g
a_islands = merge_graph(a_connected)
j_islands = merge_graph(j_connected)
nx.draw(a_islands)
nx.draw(j_islands)
pos=nx.circular_layout(j_islands)
"""
Explanation: Now that the islands are isolated, we can subset them into their largest connected subgraphs and do some basic plots.
End of explanation
"""
|
jorisvandenbossche/2015-EuroScipy-pandas-tutorial | solved - 03 - Indexing and selecting data.ipynb | bsd-2-clause | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
# redefining the example objects
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
"""
Explanation: Indexing and selecting data
End of explanation
"""
countries = countries.set_index('country')
countries
"""
Explanation: Setting the index to the country names:
End of explanation
"""
countries['area']
"""
Explanation: Some notes on selecting data
One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. We now have to distinguish between:
selection by label
selection by position.
data[] provides some convenience shortcuts
For a DataFrame, basic indexing selects the columns.
Selecting a single column:
End of explanation
"""
countries[['area', 'population']]
"""
Explanation: or multiple columns:
End of explanation
"""
countries['France':'Netherlands']
"""
Explanation: But, slicing accesses the rows:
End of explanation
"""
countries.loc['Germany', 'area']
"""
Explanation: <div class="alert alert-danger">
<b>NOTE</b>: Unlike slicing in numpy, the end label is **included**.
</div>
So as a summary, [] provides the following convenience shortcuts:
Series: selecting a label: s[label]
DataFrame: selecting a single or multiple columns: df['col'] or df[['col1', 'col2']]
DataFrame: slicing the rows: df['row_label1':'row_label2'] or df[mask]
Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
These methods index the different dimensions of the frame:
df.loc[row_indexer, column_indexer]
df.iloc[row_indexer, column_indexer]
Selecting a single element:
End of explanation
"""
countries.loc['France':'Germany', ['area', 'population']]
"""
Explanation: But the row or column indexer can also be a list, slice, boolean array, ..
End of explanation
"""
countries.iloc[0:2,1:3]
"""
Explanation: Selecting by position with iloc works similar as indexing numpy arrays:
End of explanation
"""
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
"""
Explanation: The different indexing methods can also be used to assign data:
End of explanation
"""
countries['area'] > 100000
countries[countries['area'] > 100000]
"""
Explanation: Boolean indexing (filtering)
Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL).
The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
End of explanation
"""
countries['density'] = countries['population']*1000000 / countries['area']
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column `density` with the population density (note: population column is expressed in millions)
</div>
End of explanation
"""
countries.loc[countries['density'] > 300, ['capital', 'population']]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select the capital and the population column of those countries where the density is larger than 300
</div>
End of explanation
"""
countries['density_ratio'] = countries['density'] / countries['density'].mean()
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Add a column 'density_ratio' with the ratio of the density to the mean density
</div>
End of explanation
"""
countries.loc['United Kingdom', 'capital'] = 'Cambridge'
countries
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Change the capital of the UK to Cambridge
</div>
End of explanation
"""
countries[(countries['density'] > 100) & (countries['density'] < 300)]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all countries whose population density is between 100 and 300 people/km²
</div>
End of explanation
"""
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
"""
Explanation: Some other useful methods: isin and string methods
The isin method of Series is very useful to select rows that may contain certain values:
End of explanation
"""
countries[countries['capital'].isin(['Berlin', 'London'])]
"""
Explanation: This can then be used to filter the dataframe with boolean indexing:
End of explanation
"""
'Berlin'.startswith('B')
"""
Explanation: Let's say we want to select all data for which the capital starts with a 'B'. In Python, when we have a string, we can use the startswith method:
End of explanation
"""
countries['capital'].str.startswith('B')
"""
Explanation: In pandas, these are available on a Series through the str namespace:
End of explanation
"""
countries[countries['capital'].str.len() > 7]
"""
Explanation: For an overview of all string methods, see: http://pandas.pydata.org/pandas-docs/stable/api.html#string-handling
<div class="alert alert-success">
<b>EXERCISE</b>: Select all countries that have capital names with more than 7 characters
</div>
End of explanation
"""
countries[countries['capital'].str.contains('am')]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Select all countries that have capital names that contain the character sequence 'am'
</div>
End of explanation
"""
countries.loc['Belgium', 'capital'] = 'Ghent'
countries
countries['capital']['Belgium'] = 'Antwerp'
countries
countries[countries['capital'] == 'Antwerp']['capital'] = 'Brussels'
countries
"""
Explanation: Pitfall: chained indexing (and the 'SettingWithCopyWarning')
End of explanation
"""
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
"""
Explanation: How to avoid this?
Use loc instead of chained indexing if possible!
Or copy explicitly if you don't want to change the original data.
More exercises!
For the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
"""
len(titles)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies are listed in the titles dataframe?
</div>
End of explanation
"""
titles.sort('year').head(2)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What are the earliest two films listed in the titles dataframe?
</div>
End of explanation
"""
len(titles[titles.title == 'Hamlet'])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies have the title "Hamlet"?
</div>
End of explanation
"""
titles[titles.title == 'Treasure Island'].sort('year')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List all of the "Treasure Island" movies from earliest to most recent.
</div>
End of explanation
"""
t = titles
len(t[(t.year >= 1950) & (t.year <= 1959)])
len(t[t.year // 10 == 195])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many movies were made from 1950 through 1959?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'Inception']
c = c[c.n.isnull()]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many roles in the movie "Inception" are NOT ranked by an "n" value?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'Inception']
c = c[c.n.notnull()]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: But how many roles in the movie "Inception" did receive an "n" value?
</div>
End of explanation
"""
c = cast
c = c[c.title == 'North by Northwest']
c = c[c.n.notnull()]
c = c.sort('n')
c
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
</div>
End of explanation
"""
c = cast
c = c[(c.title == 'Hamlet') & (c.year == 1921)]
len(c)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: How many roles were credited in the silent 1921 version of Hamlet?
</div>
End of explanation
"""
c = cast
c = c[c.name == 'Cary Grant']
c = c[c.year // 10 == 194]
c = c[c.n == 2]
c = c.sort('year')
c
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
</div>
End of explanation
"""
|
bearing/dosenet-analysis | Programming Lesson Modules/Module 8- Measures of Central Tendency.ipynb | mit | %matplotlib inline
import csv
import io
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
url = 'https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv'
response = urllib.request.urlopen(url)
reader = csv.reader(io.TextIOWrapper(response))
timedata = []
cpm = []
line = 0
for row in reader:
if line != 0:
        timedata.append(datetime.fromtimestamp(float(row[2])))
cpm.append(float(row[6]))
line += 1
"""
Explanation: Module 8- Measures of Central Tendency
author: Radley Rigonan
This module is the first in a series of modules that explore data and statistical analysis. In this case, we will be using DoseNet data to improve our understanding of central tendency.
I will be using DoseNet data from the following link:
https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv
End of explanation
"""
mean_cpm1 = sum(cpm)/len(cpm)
print('mean CPM from its definition is: %s' %mean_cpm1)
mean_cpm2 = np.mean(cpm)
print('mean CPM from built-in function is: %s' %mean_cpm2)
if len(cpm)%2 == 1:
    # odd number of samples: take the middle element
    median_cpm1 = sorted(cpm)[len(cpm)//2]
else:
    # even number of samples: average the two middle elements
    median_cpm1 = (sorted(cpm)[len(cpm)//2 - 1] + sorted(cpm)[len(cpm)//2]) / 2.0
print('median CPM from its definition is: %s' %median_cpm1)
median_cpm2 = np.median(cpm)
print('median CPM from built-in function is: %s' %median_cpm2)
from collections import Counter
counter = Counter(cpm)
_,val = counter.most_common(1)[0]
mode_cpm1 = [i for i, target in counter.items() if target == val]
print('mode(s) CPM from its definition is: %s' %mode_cpm1)
import statistics # note: this function fails if there are two statistical modes
mode_cpm2 = statistics.mode(cpm)
print('mode(s) CPM from built-in function is: %s' %mode_cpm2)
fig, ax = plt.subplots()
ax.plot(timedata,cpm,alpha=0.3)
# alpha modifier adds transparency, I add this so the CPM plot doesn't overpower the mean, median, and mode
ax.plot([timedata[0],timedata[-1]], [mean_cpm1,mean_cpm1], label='mean CPM')
ax.plot([timedata[0],timedata[-1]], [median_cpm1,median_cpm1], 'r:', label='median CPM')
ax.plot([timedata[0],timedata[-1]], [mode_cpm1,mode_cpm1], 'c--', label='mode CPM',alpha=0.5)
plt.legend(loc='best')
plt.ylim(ymax = 5, ymin = .5)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b-%Y'))
ax.xaxis.set_minor_locator(mdates.DayLocator())
plt.xticks(rotation=15)
plt.title('DoseNet Data: Etcheverry Roof\nCPM vs. Time with mean, mode, and median')
plt.ylabel('CPM')
plt.xlabel('Date')
fig, ax = plt.subplots()
y,x, _ = plt.hist(cpm,bins=30, alpha=0.3, label='CPM distribution')
ax.plot([mean_cpm1,mean_cpm1], [0,y.max()],label='mean CPM')
ax.plot([median_cpm1, median_cpm1], [0,y.max()], 'r:', label='median CPM')
ax.plot([mode_cpm1,mode_cpm1], [0,y.max()], 'c--', label='mode CPM')
plt.legend(loc='best')
plt.title('DoseNet Data: Etcheverry Roof\nCPM Histogram with mean, mode, and median')
plt.ylabel('Frequency')
plt.xlabel('CPM')
"""
Explanation: Measures of central tendency identify values that lie at the center of a sample and help statisticians summarize their data. The most common measures of central tendency are the mean, median, and mode. Although you should be familiar with these values, they are defined as:
MEAN = sum(sample) / len(sample)
MEDIAN = sorted(sample)[len(sample)//2]   (for odd-length samples; for even lengths, average the two middle elements)
MODE: element(s) with highest frequency
End of explanation
"""
|
semipi/programming-humanoid-robot-in-python | joint_control/add_training_data.ipynb | gpl-2.0 | %pylab inline
imshow(imread('robot_pose_image/Stand.png'))
"""
Explanation: add data to training set
The provided training data may not be sufficient to get good pose recognition results. If a pose is not recognized correctly, the corresponding data can be manually added to the training data.
Defined Poses
Stand: the weight is supported by the feet and the torso is upright.
StandInit: the torso is upright, and legs are bended (be ready to walk)
Sitting: the buttock is in contact with the ground and the torso is upright.
Crouch: sit on its feet
Belly: fall, stretched and facing down
Back: fall, stretched and facing up
Frog: fall, facing down with the trunk lifted.
HeadBack: fall, facing up with the trunk lifted.
Left: fall and facing to its left side
Right: fall and facing to its right side
There are images in robot_pose_image folder as examples for predefined poses.
End of explanation
"""
import pickle
from os import path
pose_name = 'Back'
data_file = path.join('robot_pose.pkl', pose_name)
data = pickle.load(open(data_file))
print data
"""
Explanation: Load available data
End of explanation
"""
new_data = [0, 0, 0, 0, 0, 0, 0, 0, 0, -1.5707963267948966]
data.append(new_data)
pickle.dump(data, open(data_file, 'w'))
"""
Explanation: Add new data
End of explanation
"""
|
darioizzo/d-CGP | doc/sphinx/notebooks/dCGPANNs_for_classification.ipynb | gpl-3.0 | # Initial import
import dcgpy
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
from sklearn.utils import shuffle
import timeit
%matplotlib inline
"""
Explanation: Training a FFNN in dCGPANN vs. Keras (classification)
A Feed Forward Neural network is a widely used ANN model for regression and classification. Here we show how to encode it into a dCGPANN and train it with stochastic gradient descent on a classification task. To check the correctness of the result we perform the same training using the widely used Keras Deep Learning toolbox.
End of explanation
"""
# We import the data for a classification task.
from numpy import genfromtxt
# https://archive.ics.uci.edu/ml/datasets/Abalone
my_data = genfromtxt('abalone_data_set.csv', delimiter=',')
points = my_data[:,:-1]
labels_tmp = my_data[:,-1]
# We transform the categorical variables to one-hot encoding
# The problem is treated as a three class problem
labels = np.zeros((len(labels_tmp), 3))
for i,l in enumerate(labels_tmp):
if l < 9:
labels[i][0] = 1
elif l > 10:
labels[i][2] = 1
else :
labels[i][1] = 1
# And split the data into training and test
X_train = points[:3000]
Y_train = labels[:3000]
X_test = points[3000:]
Y_test = labels[3000:]
# Stable implementation of the softmax function
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
# We define the accuracy metric
def accuracy(ex, points, labels):
acc = 0.
for p,l in zip(points, labels):
ps = softmax(ex(p))
if np.argmax(ps) == np.argmax(l):
acc += 1.
return acc / len(points)
"""
Explanation: Data set
End of explanation
"""
# We encode a FFNN into a dCGP expression. Note that the last layer uses a sum activation function
# so that categorical cross-entropy can be applied, effectively producing a softmax output layer.
# In a dCGP the concept of layers is absent and neurons are defined by activation functions R->R.
dcgpann = dcgpy.encode_ffnn(8,3,[50,20],["sig", "sig", "sum"], 5)
# By default all weights (and biases) are set to 1 (and 0). We initialize the weights normally distributed
dcgpann.randomise_weights(mean = 0., std = 1.)
dcgpann.randomise_biases(mean = 0., std = 1.)
print("Starting error:", dcgpann.loss(X_test,Y_test, "CE"))
print("Net complexity (number of active weights):", dcgpann.n_active_weights())
print("Net complexity (number of unique active weights):", dcgpann.n_active_weights(unique=True))
print("Net complexity (number of active nodes):", len(dcgpann.get_active_nodes()))
#dcgpann.visualize(show_nonlinearities=True, legend=True)
res = []
# We train
n_epochs = 100
print("Start error (training set):", dcgpann.loss(X_train,Y_train, "CE"), flush=True)
print("Start error (test):", dcgpann.loss(X_test,Y_test, "CE"), flush=True)
start_time = timeit.default_timer()
for i in tqdm(range(n_epochs)):
res.append(dcgpann.sgd(X_train, Y_train, 1., 32, "CE", parallel = 4))
elapsed = timeit.default_timer() - start_time
print("End error (training set):", dcgpann.loss(X_train,Y_train, "CE"), flush=True)
print("End error (test):", dcgpann.loss(X_test,Y_test, "CE"), flush=True)
print("Time:", elapsed, flush=True)
plt.plot(res)
print("Accuracy (test): ", accuracy(dcgpann, X_test, Y_test))
"""
Explanation: Encoding and training a FFNN using dCGP
There are many ways the same FFNN could be encoded into a CGP chromosome. The utility encode_ffnn selects one for you returning the expression.
End of explanation
"""
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
# We define Stochastic Gradient Descent as an optimizer
sgd = optimizers.SGD(lr=1.)
# We define weight initialization
initializerw = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)
initializerb = keras.initializers.RandomNormal(mean=0.0, stddev=1, seed=None)
model = Sequential([
Dense(50, input_dim=8, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('sigmoid'),
Dense(3, kernel_initializer=initializerw, bias_initializer=initializerb),
Activation('softmax'),
])
model.compile(optimizer=sgd,
loss='categorical_crossentropy', metrics=['acc'])
start_time = timeit.default_timer()
history = model.fit(X_train, Y_train, epochs=100, batch_size=32, verbose=False)
elapsed = timeit.default_timer() - start_time
print("End error (training set):", model.evaluate(X_train,Y_train, verbose=False))
print("End error (test):", model.evaluate(X_test,Y_test, verbose=False))
print("Time:", elapsed)
# We plot for comparison the MSE during learning in the two cases
plt.plot(history.history['loss'], label='Keras')
plt.plot(res, label='dCGP')
plt.title('dCGP vs Keras')
plt.xlabel('epochs')
plt.legend()
_ = plt.ylabel('Cross Entropy Loss')
"""
Explanation: The same training is done using Keras (TensorFlow backend)
IMPORTANT: no GPU is used for the comparison. The values are thus only to be taken as indications of performance on a simple environment with 4 CPUs.
End of explanation
"""
|
theandygross/HIV_Methylation | PreProcessing/save_detection_p_values.ipynb | mit | PATH = '/cellar/users/agross/TCGA_Code/Methlation/'
cd $PATH
import NotebookImport
from Setup.Imports import *
"""
Explanation: Save Detection P-Values
I have saved the detection p-values in .csv files in the MINFI processing pipeline. Here I am just converting those files into HDF5 to make it a bit easier to read in and manipulate the data.
For now I am also saving these in compressed form as most of the p-values are 0.
End of explanation
"""
epic = pd.read_csv(PATH + 'data/EPIC_ITALY/detectionP.csv',
index_col=0)
pData = pd.read_csv(PATH + 'data/EPIC_ITALY/pData.csv',
dtype='str', index_col=0)
epic.columns = epic.columns.map(lambda s: '_'.join(s.split('_')[1:]))
epic = epic.replace(0, nan)
epic = epic.stack()
"""
Explanation: Epic Data
End of explanation
"""
hannum = pd.read_csv(PATH + 'data/Hannum/detectionP.csv',
index_col=0)
pData = pd.read_csv(PATH + 'data/Hannum/pData.csv',
dtype='str', index_col=0)
hannum.columns = hannum.columns.map(lambda s: pData.Sample_Name[s])
hannum = hannum.replace(0, nan)
hannum = hannum.stack()
"""
Explanation: Hannum
End of explanation
"""
ucsd = pd.read_csv(PATH + 'data/UCSD_Methylation/detectionP.csv',
index_col=0)
p = pd.read_csv(PATH + 'data/UCSD_Methylation/pData.csv',
index_col=0)
ucsd.columns = p.Sample_Name
ucsd = ucsd.replace(0, nan)
ucsd = ucsd.stack()
detection_p = pd.concat([ucsd, hannum, epic])
detection_p = detection_p.reset_index()
detection_p.to_hdf(HDFS_DIR + 'dx_methylation.h5', 'detection_p')
"""
Explanation: UCSD
End of explanation
"""
|
therealAJ/python-sandbox | data-science/learning/ud2/Part 1 Exercise Solutions/Data Capstone Projects/911 Calls/911 Calls Data Capstone Project .ipynb | gpl-3.0 | import numpy as np
import pandas as pd
"""
Explanation: 911 Calls Capstone Project
For this capstone project we will be analyzing some 911 call data from Kaggle. The data contains the following fields:
lat : String variable, Latitude
lng: String variable, Longitude
desc: String variable, Description of the Emergency Call
zip: String variable, Zipcode
title: String variable, Title
timeStamp: String variable, YYYY-MM-DD HH:MM:SS
twp: String variable, Township
addr: String variable, Address
e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
Data and Setup
Import numpy and pandas
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
"""
Explanation: Import visualization libraries and set %matplotlib inline.
End of explanation
"""
df = pd.read_csv('911.csv')
"""
Explanation: Read in the csv file as a dataframe called df
End of explanation
"""
df.info()
"""
Explanation: Check the info() of the df
End of explanation
"""
df.head()
"""
Explanation: Check the head of df
End of explanation
"""
df['zip'].value_counts().head(5)
"""
Explanation: Basic Questions
What are the top 5 zipcodes for 911 calls?
End of explanation
"""
df['twp'].value_counts().head(5)
"""
Explanation: What are the top 5 townships (twp) for 911 calls?
End of explanation
"""
df['title'].nunique()
"""
Explanation: Take a look at the 'title' column, how many unique title codes are there?
End of explanation
"""
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
"""
Explanation: Creating new features
In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS.
End of explanation
"""
df['Reason'].value_counts()
"""
Explanation: What is the most common Reason for a 911 call based off of this new column?
End of explanation
"""
sns.countplot(x='Reason',data=df,palette='viridis')
"""
Explanation: Now use seaborn to create a countplot of 911 calls by Reason.
End of explanation
"""
type(df['timeStamp'].iloc[0])
"""
Explanation: Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
End of explanation
"""
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
"""
Explanation: You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.
End of explanation
"""
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
"""
Explanation: You can now grab specific attributes from a Datetime object by calling them. For example:
time = df['timeStamp'].iloc[0]
time.hour
You can use Jupyter's tab method to explore the various attributes you can call. Now that the timestamp column are actually DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column, reference the solutions if you get stuck on this step.
End of explanation
"""
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day of Week'] = df['Day of Week'].map(dmap)
"""
Explanation: Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week:
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
End of explanation
"""
sns.countplot(x='Day of Week', data=df,hue='Reason', palette='viridis')
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
"""
Explanation: Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.
End of explanation
"""
sns.countplot(x='Month', data=df,hue='Reason', palette='viridis')
plt.legend(bbox_to_anchor=(1.05,1), loc=2, borderaxespad=0.)
"""
Explanation: Now do the same for Month:
End of explanation
"""
byMonth = df.groupby('Month').count()
byMonth.head()
"""
Explanation: Did you notice something strange about the Plot?
You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way: a simple line plot that fills in the missing months. In order to do this, we'll need to do some work with pandas...
Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.
End of explanation
"""
byMonth['lat'].plot()
"""
Explanation: Now create a simple plot off of the dataframe indicating the count of calls per month.
End of explanation
"""
sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())
"""
Explanation: Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.
End of explanation
"""
df['Date'] = df['timeStamp'].apply(lambda timestamp: timestamp.date())
"""
Explanation: Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.
End of explanation
"""
df.groupby('Date')['lat'].count().plot()
plt.tight_layout()
"""
Explanation: Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
End of explanation
"""
df[df['Reason']=='Traffic'].groupby('Date')['lat'].count().plot(title='Traffic')
plt.tight_layout()
df[df['Reason']=='Fire'].groupby('Date')['lat'].count().plot(title='Fire')
plt.tight_layout()
df[df['Reason']=='EMS'].groupby('Date')['lat'].count().plot(title='EMS')
plt.tight_layout()
"""
Explanation: Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
End of explanation
"""
dayHour = df.groupby(by=['Day of Week', 'Hour']).count()['Reason'].unstack()
"""
Explanation: Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!
End of explanation
"""
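Conceptually, the groupby-plus-unstack above counts (day, hour) pairs and lays the counts out as a day-by-hour grid. A stdlib sketch with invented records:

```python
from collections import Counter

# Hypothetical (day_of_week, hour) pairs, one per call record
records = [('Mon', 0), ('Mon', 0), ('Mon', 13), ('Tue', 13), ('Tue', 13)]

counts = Counter(records)  # like groupby(['Day of Week', 'Hour']).count()

# "Unstack": rows indexed by day, columns by hour, missing cells filled with 0
days = sorted({d for d, _ in records})
hours = sorted({h for _, h in records})
grid = {d: {h: counts.get((d, h), 0) for h in hours} for d in days}
print(grid)  # {'Mon': {0: 2, 13: 1}, 'Tue': {0: 0, 13: 2}}
```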
sns.heatmap(dayHour)
"""
Explanation: Now create a HeatMap using this new DataFrame.
End of explanation
"""
|
georgetown-analytics/machine-learning | examples/erblinm/Post-Operative.ipynb | mit | %matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import category_encoders as ce
URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/postoperative-patient-data/post-operative.data"
def fetch_data(fname='post-operative.txt'):
"""
Helper method to retrieve the ML Repository dataset.
"""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'w') as f:
f.write(response.content)
return outpath
# Fetch the data if required
POST_OP_DATA = fetch_data()
FEATURES = [
"L_CORE",
"L_SURF",
"L_02",
"L_BP",
"SURF_STBL",
"CORE_STBL",
"BP_STBL",
"COMFORT",
"decision"
]
Fnames = FEATURES[:-1]
decision = FEATURES[-1]
# Read the data into a DataFrame
df = pd.read_csv(POST_OP_DATA, sep=',', header=None, names=FEATURES)
print df.head(8)
print df.describe()
# Determine the shape of the data
print "{} instances with {} features\n".format(*df.shape)
# Determine the frequency of each class
print df.groupby('decision')['decision'].count()
# Pandas reads the decision in row 4 as a different 'A' value, so I am going to drop that row.
df = df.drop([3])
# Describe the dataset
print df.describe()
# Determine the shape of the data
print "{} instances with {} features\n".format(*df.shape)
# Determine the frequency of each class
print df.groupby('decision')['decision'].count()
data = df
data['L_CORE'] = data['L_CORE'].map({'low': 1,'mid': 2,'high':3})
data['L_SURF'] = data['L_SURF'].map({'low': 1,'mid': 2,'high':3})
data['L_BP'] = data['L_BP'].map({'low': 1,'mid': 2,'high':3})
data['L_02'] = data['L_02'].map({'poor': 1, 'fair': 2, 'good': 3,'excellent': 4})
data['SURF_STBL'] = data['SURF_STBL'].map({'unstable': 1,'mod-stable': 2,'stable':3})
data['CORE_STBL'] = data['CORE_STBL'].map({'unstable': 1,'mod-stable': 2,'stable':3})
data['BP_STBL'] = data['BP_STBL'].map({'unstable': 1,'mod-stable': 2,'stable':3})
data['COMFORT'] = data['COMFORT'].map({'15': 15,'10': 10,'?':10, '05' : 15, '07': 10})
data.head(10)
data['COMFORT'].dtype
"""
Explanation: Predict where Patients in Post-Operative areas should be sent next
In this assignment, I select a data set from the UCI Machine Learning Repository, ingest the data from the website, perform some initial analyses to get a sense for what's in the data, then structure the data to fit a Scikit-Learn model and evaluate the results.
Post-Operative Patient Data Set
Downloaded from the UCI Machine Learning Repository on September 9, 2016. The first thing is to fully describe your data in a README file. The dataset description is as follows:
- Data Set: Multivariate
- Tasks: Classification
- Instances: 90
- Attributes: 8
Data Set Information
The attributes correspond roughly to body temperature measurements of patients and the problem is to predict where patients in a postoperative recovery area should be sent to next. The data set can be used for the tasks of classification.
Attribute Information:
L_CORE (patient's internal temperature in C):
high (> 37), mid (>= 36 and <= 37), low (< 36)
L_SURF (patient's surface temperature in C):
high (> 36.5), mid (>= 35 and <= 36.5), low (< 35)
L_O2 (oxygen saturation in %):
excellent (>= 98), good (>= 90 and < 98),
fair (>= 80 and < 90), poor (< 80)
L_BP (last measurement of blood pressure):
high (> 130/90), mid (<= 130/90 and >= 90/70), low (< 90/70)
SURF_STBL (stability of patient's surface temperature):
stable, mod-stable, unstable
CORE_STBL (stability of patient's core temperature)
stable, mod-stable, unstable
BP_STBL (stability of patient's blood pressure)
stable, mod-stable, unstable
COMFORT (patient's perceived comfort at discharge, measured as
an integer between 0 and 20)
decision ADM-DECS (discharge decision):
I (patient sent to Intensive Care Unit),
S (patient prepared to go home),
A (patient sent to general hospital floor)
Relevant Papers
A. Budihardjo, J. Grzymala-Busse, L. Woolery (1991). Program LERS_LB 2.5 as a tool for knowledge acquisition in nursing, Proceedings of the 4th Int. Conference on Industrial & Engineering Applications of AI & Expert Systems, pp. 735-740.
[Web Link]
L. Woolery, J. Grzymala-Busse, S. Summers, A. Budihardjo (1991). The use of machine learning program LERS_LB 2.5 in knowledge acquisition for expert system development in nursing. Computers in Nursing 9, pp. 227-234.
Data Exploration
In this section we will begin to explore the dataset to determine relevant information.
End of explanation
"""
# Create a scatter matrix of the dataframe features
from pandas.tools.plotting import scatter_matrix
scatter_matrix(data, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()
sns.set_context("poster")
sns.countplot(x='SURF_STBL', hue='decision', data = data,)
sns.set_context("poster")
sns.countplot(x='COMFORT', hue='decision', data = data,)
sns.set_context("poster")
sns.countplot(x='BP_STBL', hue='decision', data = data,)
from pandas.tools.plotting import radviz
plt.figure(figsize=(12,12))
radviz(data, 'decision')
plt.show()
data = df
data.head(3)
"""
Explanation: By glancing at the first 5 rows of the data, we can see that we have primarily categorical data. Our target, data.decision is also currently constructed as a categorical field. Unfortunately, with categorical fields, we don't have a lot of visualization options (quite yet). However, it would be interesting to see the frequencies of each class, relative to the target of our classifier. To do this, we can use Seaborn's countplot function to count the occurrences of each data point.
End of explanation
"""
import json
meta = {
'target_names': list(data.decision.unique()),
'feature_names': list(data.columns),
'categorical_features': {
column: list(data[column].unique())
for column in data.columns
if data[column].dtype == 'object'
},
}
with open('data/meta.json', 'w') as f:
json.dump(meta, f, indent=2)
from sklearn import cross_validation
from sklearn.cross_validation import train_test_split
"""
Explanation: Data Management
Now that we've completed some initial investigation and have started to identify the possible features available in our dataset, we need to structure our data on disk in a way that we can load into Scikit-Learn in a repeatable fashion for continued analysis. My proposal is to use the sklearn.datasets.base.Bunch object to load the data into data and target attributes respectively, similar to how Scikit-Learn's toy datasets are structured. Using this object to manage our data will mirror the native API and allow us to easily copy and paste code that demonstrates classifiers and techniques with the built in datasets. Importantly, this API will also allow us to communicate to other developers and our future-selves exactly how to use the data.
In order to organize our data on disk, we'll need to add the following files:
README.md: a markdown file containing information about the dataset and attribution. Will be exposed by the DESCR attribute.
meta.json: a helper file that contains machine readable information about the dataset like target_names and feature_names.
I constructed a pretty simple README.md in Markdown that gave the title of the dataset, the link to the UCI Machine Learning Repository page that contained the dataset, as well as a citation to the author. I simply wrote this file directly using my own text editor.
The meta.json file, however, we can write using the data frame that we already have. We've already done the manual work of writing the column names into a names variable earlier, there's no point in letting that go to waste!
End of explanation
"""
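Writing a machine-readable meta file like this is a plain json.dump call. A minimal round-trip sketch with toy metadata (not the real file contents):

```python
import json
import io

# Toy metadata, shaped like the meta.json built above
meta = {'target_names': ['A', 'S', 'I'], 'feature_names': ['L_CORE', 'COMFORT']}

buf = io.StringIO()          # stand-in for the meta.json file handle
json.dump(meta, buf, indent=2)

# Reading it back yields the same structure
round_trip = json.loads(buf.getvalue())
print(round_trip['target_names'])  # ['A', 'S', 'I']
```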
from sklearn.datasets.base import Bunch
def load_data(root='data'):
# Load the meta data from the file
with open(os.path.join(root, 'meta.json'), 'r') as f:
meta = json.load(f)
names = meta['feature_names']
# Load the readme information
with open(os.path.join(root, 'README.md'), 'r') as f:
readme = f.read()
X = data[[
"L_CORE",
"L_SURF",
"L_02",
"L_BP",
"SURF_STBL",
"CORE_STBL",
"BP_STBL",
"COMFORT",
]]
# Remove the target from the categorical features
meta['categorical_features'].pop('decision')
y = data["decision"]
# X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y,test_size = 0.2,random_state=14)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y,test_size = 0.8,random_state=14)
# Return the bunch with the appropriate data chunked apart
return Bunch(
#data = train[names[:-1]],
data = X_train,
#target = train[names[-1]],
target = y_train,
#data_test = test[names[:-1]],
data_test = X_test,
#target_test = test[names[-1]],
target_test = y_test,
target_names = meta['target_names'],
feature_names = meta['feature_names'],
categorical_features = meta['categorical_features'],
DESCR = readme,
)
dataset = load_data()
"""
Explanation: This code creates a meta.json file by inspecting the data frame that we have constructed. The target_names entry is just the unique values in the data.decision series; by using the pd.Series.unique method we're guaranteed to spot data errors if unexpected values appear. The feature_names is simply the names of all the columns.
Then we get tricky — we want to store the possible values of each categorical field for lookup later, but how do we know which columns are categorical and which are not? Luckily, Pandas has already done an analysis for us, and has stored the column data type, data[column].dtype, as either int64 or object. Here I am using a dictionary comprehension to create a dictionary whose keys are the categorical columns, determined by checking the object type and comparing with object, and whose values are a list of unique values for that field.
Now that we have everything we need stored on disk, we can create a load_data function, which will allow us to load the training and test datasets appropriately from disk and store them in a Bunch:
End of explanation
"""
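The dtype test that selects the categorical columns can be mimicked in plain Python: keep a column only if its values are strings (pandas would report such columns as dtype 'object'). A toy sketch with invented columns:

```python
# Toy stand-in for a DataFrame: column name -> list of values
toy = {
    'L_CORE': ['low', 'mid', 'high', 'mid'],
    'COMFORT': [10, 15, 10, 15],
    'decision': ['A', 'S', 'I', 'A'],
}

# Mirrors the dict comprehension above: keep columns whose values are strings
categorical_features = {
    col: sorted(set(vals))
    for col, vals in toy.items()
    if all(isinstance(v, str) for v in vals)
}
print(sorted(categorical_features))  # ['L_CORE', 'decision']
```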
from sklearn.preprocessing import LabelEncoder
from sklearn.base import BaseEstimator, TransformerMixin
class EncodeCategorical(BaseEstimator, TransformerMixin):
"""
Encodes a specified list of columns or all columns if None.
"""
def __init__(self, columns=None):
self.columns = columns
self.encoders = None
def fit(self, data, target=None):
"""
Expects a data frame with named columns to encode.
"""
# Encode all columns if columns is None
if self.columns is None:
self.columns = data.columns
# Fit a label encoder for each column in the data frame
self.encoders = {
column: LabelEncoder().fit(data[column])
for column in self.columns
}
return self
def transform(self, data):
"""
Uses the encoders to transform a data frame.
"""
output = data.copy()
for column, encoder in self.encoders.items():
output[column] = encoder.transform(data[column])
return output
encoder = EncodeCategorical(dataset.categorical_features.keys())
data = encoder.fit_transform(dataset.data)
"""
Explanation: The primary work of the load_data function is to locate the appropriate files on disk, given a root directory that's passed in as an argument (if you saved your data in a different directory, you can modify the root to have it look in the right place). The meta data is included with the bunch, and is also used to split the train and test datasets into data and target variables appropriately, such that we can pass them correctly to the Scikit-Learn fit and predict estimator methods.
Feature Extraction
Now that our data management workflow is structured a bit more like Scikit-Learn, we can start to use our data to fit models. Unfortunately, the categorical values themselves are not useful for machine learning; we need a single instance table that contains numeric values. In order to extract this from the dataset, we'll have to use Scikit-Learn transformers to transform our input dataset into something that can be fit to a model. In particular, we'll have to do the following:
encode the categorical labels as numeric data
impute missing values with data (or remove)
We will explore how to apply these transformations to our dataset, then we will create a feature extraction pipeline that we can use to build a model from the raw input data. This pipeline will apply both the imputer and the label encoders directly in front of our classifier, so that we can ensure that features are extracted appropriately in both the training and test datasets.
Label Encoding
Our first step is to get our data out of the object data type land and into a numeric type, since nearly all operations we'd like to apply to our data are going to rely on numeric types. Luckily, Scikit-Learn does provide a transformer for converting categorical labels into numeric integers: sklearn.preprocessing.LabelEncoder. Unfortunately it can only transform a single vector at a time, so we'll have to adapt it in order to apply it to multiple columns.
Like all Scikit-Learn transformers, the LabelEncoder has fit and transform methods (as well as a special all-in-one, fit_transform method) that can be used for stateful transformation of a dataset. In the case of the LabelEncoder, the fit method discovers all unique elements in the given vector, orders them lexicographically, and assigns them an integer value. These values are actually the indices of the elements inside the LabelEncoder.classes_ attribute, which can also be used to do a reverse lookup of the class name from the integer value.
Obviously this is very useful for a single column, and in fact the LabelEncoder really was intended to encode the target variable, not necessarily categorical data expected by the classifiers.
In order to create a multicolumn LabelEncoder, we'll have to extend the TransformerMixin in Scikit-Learn to create a transformer class of our own, then provide fit and transform methods that wrap individual LabelEncoders for our columns. My code, inspired by the StackOverflow post “Label encoding across multiple columns in scikit-learn”, is as follows:
End of explanation
"""
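To make the LabelEncoder contract concrete, here is a pure-Python sketch of its behaviour: classes are discovered, sorted lexicographically, and mapped to their indices. This is an illustration of the idea, not sklearn's implementation:

```python
class TinyLabelEncoder(object):
    """Pure-Python sketch of sklearn's LabelEncoder contract."""
    def fit(self, values):
        self.classes_ = sorted(set(values))  # lexicographic order
        self._index = {c: i for i, c in enumerate(self.classes_)}
        return self

    def transform(self, values):
        return [self._index[v] for v in values]

    def inverse_transform(self, codes):
        return [self.classes_[i] for i in codes]

enc = TinyLabelEncoder().fit(['S', 'A', 'I', 'A'])
print(enc.classes_)                    # ['A', 'I', 'S']
print(enc.transform(['A', 'S', 'I']))  # [0, 2, 1]
print(enc.inverse_transform([2, 0]))   # ['S', 'A']
```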
from sklearn.preprocessing import Imputer
class ImputeCategorical(BaseEstimator, TransformerMixin):
"""
Encodes a specified list of columns or all columns if None.
"""
def __init__(self, columns=None):
self.columns = columns
self.imputer = None
def fit(self, data, target=None):
"""
Expects a data frame with named columns to impute.
"""
# Encode all columns if columns is None
if self.columns is None:
self.columns = data.columns
# Fit an imputer for each column in the data frame
self.imputer = Imputer(missing_values=0, strategy='most_frequent')
self.imputer.fit(data[self.columns])
return self
def transform(self, data):
"""
Uses the encoders to transform a data frame.
"""
output = data.copy()
output[self.columns] = self.imputer.transform(output[self.columns])
return output
imputer = ImputeCategorical(['L_CORE', 'L_SURF', 'L_02', 'L_BP', 'SURF_STBL', 'CORE_STBL', 'BP_STBL'])
data = imputer.fit_transform(data)
data.head(90)
"""
Explanation: This specialized transformer now has the ability to label encode multiple columns in a data frame, saving information about the state of the encoders. It would be trivial to add an inverse_transform method that accepts numeric data and converts it to labels, using the inverse_transform method of each individual LabelEncoder on a per-column basis.
Imputation
According to the dataset description, unknown values are given via the "?" string. We'll have to either ignore rows that contain a "?" or impute a value for them. Scikit-Learn provides a transformer for dealing with missing values at either the column level or at the row level in the sklearn.preprocessing library called the Imputer.
The Imputer requires information about what the missing values are, either an integer or the string 'NaN' for np.nan data types; it then requires a strategy for dealing with them. For example, the Imputer can fill in the missing values with the mean, median, or most frequent values for each column. If provided an axis argument of 0 then columns that contain only missing data are discarded; if provided an axis argument of 1, then rows which contain only missing values raise an exception. Basic usage of the Imputer is as follows:
python
imputer = Imputer(missing_values='NaN', strategy='most_frequent')
imputer.fit(dataset.data)
Unfortunately, this would not work for our label encoded data, because 0 is an acceptable label. Unless we could guarantee that 0 was always "?", using it as the missing-value marker would break numeric columns that already had zeros in them. This is certainly a challenging problem, and unfortunately the best we can do is to once again create a custom Imputer.
End of explanation
"""
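The strategy='most_frequent' idea is easy to sketch without sklearn: treat the sentinel (0 here, matching the custom imputer above) as missing and replace it with the column's most common observed value. A toy example:

```python
from collections import Counter

def impute_most_frequent(column, missing=0):
    """Replace the `missing` sentinel with the most common non-missing value."""
    observed = [v for v in column if v != missing]
    mode = Counter(observed).most_common(1)[0][0]
    return [mode if v == missing else v for v in column]

# Hypothetical label-encoded column where 0 marks an unknown value
print(impute_most_frequent([2, 0, 3, 3, 0, 3]))  # [2, 3, 3, 3, 3, 3]
```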
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import r2_score
# we need to encode our target data as well.
yencode = LabelEncoder().fit(dataset.target)
# construct the pipeline
rf = Pipeline([
('encoder', EncodeCategorical(dataset.categorical_features.keys())),
('imputer', ImputeCategorical([
"L_CORE",
"L_SURF",
"L_02",
"L_BP",
"SURF_STBL",
"CORE_STBL",
"BP_STBL",
"COMFORT",
])),
('classifier', RandomForestClassifier(n_estimators=20, oob_score=True, max_depth=7))
])
# ...and then run the 'fit' method to build a forest of trees
rf.fit(dataset.data, yencode.transform(dataset.target))
y_true = yencode.transform(dataset.target_test)
predicted = rf.predict(dataset.data_test)
# Evaluate fit of the model
print "Mean Squared Error: %0.3f" % mse(y_true, predicted)
print "Coefficient of Determination: %0.3f" % r2_score(y_true, predicted)
from sklearn.metrics import classification_report
y_true = yencode.transform(dataset.target_test)
predicted = rf.predict(dataset.data_test)
# LabelEncoder orders classes lexicographically, so the encoded labels are A, I, S
classificationReport = classification_report(y_true, predicted, target_names=["A", "I", "S"])
#classificationReport = classification_report(y_true, predicted)
print classificationReport
"""
Explanation: Our custom imputer, like the EncodeCategorical transformer, takes a set of columns to perform imputation on. In this case we only wrap a single Imputer, as the Imputer is multicolumn; all that's required is to ensure that the correct columns are transformed.
I had chosen to do the label encoding first, assuming that because the Imputer required numeric values, I'd be able to do the parsing in advance. However, after requiring a custom imputer, I'd say that it's probably best to deal with the missing values early, when they're still a specific value, rather than take a chance.
Model Build
Now that we've finally acheived our feature extraction, we can continue on to the model build phase. To create our classifier, we're going to create a Pipeline that uses our feature transformers and ends in an estimator that can do classification. We can then write the entire pipeline object to disk with the pickle, allowing us to load it up and use it to make predictions in the future.
A pipeline is a step-by-step set of transformers that takes input data and transforms it, until finally passing it to an estimator at the end. Pipelines can be constructed using a named declarative syntax so that they're easy to modify and develop. Our pipeline is as follows:
Random Forest Classifier
End of explanation
"""
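The chaining a Pipeline performs can be sketched in a few lines of plain Python: each transformer's fit/transform output feeds the next step, and the final estimator sees fully transformed data. This is a conceptual illustration with toy steps, not sklearn's implementation:

```python
class TinyPipeline(object):
    """Conceptual sketch: chain transformers, end with an estimator."""
    def __init__(self, steps):
        self.steps = steps  # list of (name, obj) pairs

    def fit(self, X, y):
        for _, transformer in self.steps[:-1]:
            X = transformer.fit(X, y).transform(X)
        self.steps[-1][1].fit(X, y)
        return self

    def predict(self, X):
        for _, transformer in self.steps[:-1]:
            X = transformer.transform(X)
        return self.steps[-1][1].predict(X)

class AddOne(object):               # toy transformer
    def fit(self, X, y=None): return self
    def transform(self, X): return [x + 1 for x in X]

class MeanThreshold(object):        # toy "classifier": 1 if above training mean
    def fit(self, X, y): self.mean_ = sum(X) / float(len(X)); return self
    def predict(self, X): return [int(x > self.mean_) for x in X]

pipe = TinyPipeline([('add', AddOne()), ('clf', MeanThreshold())])
pipe.fit([1, 2, 3, 4], [0, 0, 1, 1])
print(pipe.predict([0, 4]))  # [0, 1]
```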
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
# we need to encode our target data as well.
yencode = LabelEncoder().fit(dataset.target)
# construct the pipeline
clf = Pipeline([
('encoder', EncodeCategorical(dataset.categorical_features.keys())),
('imputer', ImputeCategorical([
"L_CORE",
"L_SURF",
"L_02",
"L_BP",
"SURF_STBL",
"CORE_STBL",
"BP_STBL",
"COMFORT",
])),
('classifier', SVC(max_iter = 12))
])
# ...and then run the 'fit' method to build a forest of trees
clf.fit(dataset.data, yencode.transform(dataset.target))
y_true = yencode.transform(dataset.target_test)
predicted = clf.predict(dataset.data_test)
# Evaluate fit of the model
print "Mean Squared Error: %0.3f" % mse(y_true, predicted)
print "Coefficient of Determination: %0.3f" % r2_score(y_true, predicted)
from sklearn.metrics import classification_report
y_true = yencode.transform(dataset.target_test)
predicted = clf.predict(dataset.data_test)
# LabelEncoder orders classes lexicographically, so the encoded labels are A, I, S
classificationReport = classification_report(y_true, predicted, target_names=["A", "I", "S"])
#classificationReport = classification_report(y_true, predicted)
print classificationReport
"""
Explanation: C-Support Vector Classifier
End of explanation
"""
import pickle
def dump_model(model, path='data', name='classifier.pickle'):
with open(os.path.join(path, name), 'wb') as f:
pickle.dump(model, f)
dump_model(clf)  # pickle the fitted pipeline (here the SVC pipeline fit above), not the 'decision' column name
"""
Explanation: The last step is to save our model to disk for reuse later, with the pickle module:
End of explanation
"""
def load_model(path='data/classifier.pickle'):
with open(path, 'rb') as f:
return pickle.load(f)
def predict(model, meta=meta):
data = {} # Store the input from the user
for column in meta['feature_names'][:-1]:
# Get the valid responses
valid = meta['categorical_features'].get(column)
# Prompt the user for an answer until good
while True:
val = " " + raw_input("enter {} >".format(column))
if valid and val not in valid:
print "Not valid, choose one of {}".format(valid)
else:
data[column] = val
break
# Create prediction and label
yhat = model.predict(pd.DataFrame([data]))
return yencode.inverse_transform(yhat)
# Execute the interface
model = load_model()
predict(model)
"""
Explanation: You should also dump meta information about the date and time your model was built, who built the model, etc. But we'll skip that step here, since this post serves as a guide.
Model Operation
Now it's time to explore how to use the model. To do this, we'll create a simple function that gathers input from the user on the command line, and returns a prediction with the classifier model. Moreover, this function will load the pickled model into memory to ensure the latest and greatest saved model is what's being used.
End of explanation
"""
|
anhquan0412/deeplearning_fastai | deeplearning1/nbs/lesson1.ipynb | apache-2.0 | %matplotlib inline
#change image dim ordering?
# from keras import backend
# backend.set_image_dim_ordering('th')
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
End of explanation
"""
# path = "data/dogscats/"
path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
from importlib import reload
import utils ; reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=16
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
vgg.model.summary()
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
print('1')
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
print('2')
vgg.finetune(batches)
print('3')
vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
print('4')
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
End of explanation
"""
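The one hot encoding described above is simple to write out by hand; a toy sketch, assuming the three-category example with classes ordered ['cats', 'dogs', 'kangaroos']:

```python
def one_hot(label, classes):
    """Return a vector with a 1 at the label's position and 0s elsewhere."""
    return [1 if c == label else 0 for c in classes]

classes = ['cats', 'dogs', 'kangaroos']
print(one_hot('dogs', classes))       # [0, 1, 0]
print(one_hot('kangaroos', classes))  # [0, 0, 1]
```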
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
"""
batch_size=32
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1,batch_size = batch_size)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here are a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1] # reverse channel axis rgb->bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
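As a quick sanity check, the preprocessing above can be exercised on a dummy batch with plain NumPy (the array values here are made up purely for illustration):

```python
import numpy as np

# Illustrative check of vgg_preprocess: subtract the per-channel means,
# then flip the channel axis. 'img' is a fake all-zero batch.
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))

def vgg_preprocess(x):
    x = x - vgg_mean   # zero-center each channel
    return x[:, ::-1]  # flip the channel axis

img = np.zeros((1, 3, 2, 2))  # (batch, channels, height, width)
out = vgg_preprocess(img)
# the first output channel is the mean-subtracted last input channel
print(out[0, 0, 0, 0])
```

Note that the subtraction broadcasts the (3, 1, 1) mean array across the batch, height, and width axes.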
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation
"""
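The argmax step can be seen in isolation on a toy array of made-up "probabilities" (purely illustrative, not real model output):

```python
import numpy as np

# Two fake images, five fake classes; values are invented for illustration.
preds = np.array([[0.1, 0.6, 0.1, 0.1, 0.1],
                  [0.2, 0.1, 0.1, 0.5, 0.1]])
idxs = np.argmax(preds, axis=1)  # most probable class index per row
print(idxs)  # -> [1 3]
```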
|
gillenbrown/betterplotlib | docs/examples.ipynb | mit | %matplotlib inline
import betterplotlib as bpl
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Using betterplotlib
This page will demonstrate the power of betterplotlib, and hopefully show why it can be useful to you.
End of explanation
"""
bpl.default_style()
"""
Explanation: The first thing betterplotlib can do is set the styles to be much better than they were in default matplotlib. This is just one command. We'll set the default style, which is normally the ideal one to use.
End of explanation
"""
x1 = np.random.normal(-1, 2, 5000)
x2 = np.random.normal(1, 2, 5000)
fig = plt.figure(figsize=[10, 4], tight_layout=True)
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2, projection="bpl")
# matplotlib plot
ax1.hist(x1, alpha=0.5)
ax1.hist(x2, alpha=0.5)
ax1.set_xlabel("X Values")
ax1.set_ylabel("Number")
ax1.set_title("matplotlib")
# betterplotlib plot
ax2.hist(x1, alpha=0.5)
ax2.hist(x2, alpha=0.5)
ax2.add_labels("X Values", "Number", "betterplotlib")
"""
Explanation: We can then start using betterplotlib. Note that if you have any questions about a given function and what it does, there are examples in the documentation of each function that are more complete than what is found here. This is just an overview of the cool stuff betterplotlib can do.
For all the plots we will do, the best way to access the betterplotlib interface is to use the bpl.subplots() function, which is an exact analog of the plt.subplots() function, which is my favorite way to access the matplotlib objects. When you do this, it creates a regular matplotlib figure with betterplotlib axes objects, which is where all the magic happens.
Histograms
First, let's do a comparison of how betterplotlib's histogram is actually better. We'll make the same plot with the default hist() and with bpl.hist(). The only parameter we will use is alpha so we can plot two distributions at once and still see them both. I'll do some trickery here with the axes to show betterplotlib versus matplotlib, but you won't have to do that when plotting yourself.
Note that the syntax is exactly the same for the plotting of both plots. The betterplotlib axes objects overload the functions that matplotlib has created to make them better.
End of explanation
"""
fig, ax = bpl.subplots()
ax.hist(x1, rel_freq=True, histtype="step", bin_size=0.5, lw=3, hatch="\\", label="Data 1")
ax.hist(x2, rel_freq=True, histtype="step", bin_size=0.5, lw=3, hatch= "/", label="Data 2")
ax.make_ax_dark()
ax.add_labels("X Value", "Relative Frequency")
ax.set_limits(-10, 10, 0, 0.12)
ax.legend();
"""
Explanation: Some things to note: The bin size is chosen much better in the betterplotlib plot. The bins line up with each other, too. This doesn't always happen in betterplotlib without user intervention, but sometimes does. The white outline on the bars looks nicer than the black. Also note that the color cycle is changed, as well as the font. This is the same for both plots, since it was done by the bpl.default_style() function above. The matplotlib plot would look even worse without it. Also note how much less work it is to set the axes labels with betterplotlib.
There are some extra parameters that can be passed in to bpl.hist() that plt.hist() doesn't have, too. I also make a dark axis, which makes some plots look nicer.
End of explanation
"""
x1 = np.random.normal(2, 1, 1000)
y1 = np.random.normal(0, 1, 1000)
x2 = np.random.normal(4, 1, 1000)
y2 = np.random.normal(2, 1, 1000)
x3 = np.random.normal(2, 1, 1000)
y3 = np.random.normal(2, 1, 1000)
fig = plt.figure(figsize=[10, 4])
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2, projection="bpl")
# matplotlib plot
ax1.scatter(x1, y1)
ax1.scatter(x2, y2)
ax1.scatter(x3, y3)
ax1.set_xlabel("X Values")
ax1.set_ylabel("Y Values")
ax1.set_title("matplotlib")
# betterplotlib plot
ax2.scatter(x1, y1)
ax2.scatter(x2, y2)
ax2.scatter(x3, y3)
ax2.add_labels("X Values", "Y Values", "betterplotlib")
"""
Explanation: Note the different parameters that the histogram function takes, as well as the convenience functions afterward that make for a better looking plot in fewer steps. Also, when we pass in bin_size, the bins will always line up, which makes for much nicer looking plots with multiple data sets.
Scatter
The default scatter plot in matplotlib is truly bad. Here's an example.
End of explanation
"""
xs = np.concatenate([np.random.normal(0, 1, 10000),
np.random.normal(3, 1, 10000),
np.random.normal(0, 1, 10000)])
ys = np.concatenate([np.random.normal(0, 1, 10000),
np.random.normal(3, 1, 10000),
np.random.normal(3, 1, 10000)])
"""
Explanation: Note how matplotlib's scatter doesn't use the color cycle, has thick borders around its points, and doesn't use transparency. In contrast, bpl.scatter() does use the color cycle, has thin marker edges, and picks an alpha value somewhat smartly based on the number of points in the plot.
Contour
While the default contour in matplotlib is fine, I have created a function to easily make contours based on the density of points. We'll create a dataset similar to the one above.
End of explanation
"""
fig, ax = bpl.subplots()
ax.contour_scatter(xs, ys, bin_size=0.3, scatter_kwargs={"label":"Outliers"})
ax.equal_scale()
ax.make_ax_dark()
ax.set_limits(-4, 8, -4, 8)
ax.legend("light");
"""
Explanation: There are a ton of parameters that this function can take, but we'll only show a few of them. To see them all, check the documentation for that function. This function is hugely versatile, and can make a ton of different-looking plots based on the parameters passed in.
End of explanation
"""
xs = np.random.normal(0, 1, 100)
ys = np.random.normal(0, 1, 100)
fig, [ax1, ax2] = bpl.subplots(ncols=2)
ax1.scatter(xs, ys, label="Data")
ax1.add_labels("X Values", "Y Values")
ax1.set_limits(-4, 4, -4, 4)
ax1.equal_scale()
ax1.remove_spines(["top", "right"])
ax1.easy_add_text(r"All $\mathregular{\sigma = 1}$", "upper left")
ax1.legend()
ax2.scatter(xs, ys, label="Data")
ax2.data_ticks(xs, ys)
ax2.equal_scale()
ax2.make_ax_dark()
ax2.set_limits(-4, 4, -4, 4)
ax2.legend(facecolor="light")
ax2.add_labels("X Values", "Y Values")
"""
Explanation: Convenience Functions
There are a ton of convenience functions, as demonstrated above. Check the API Overview page for more information on all of these things. Here is a demonstration of some of the more useful ones.
End of explanation
"""
data = np.random.normal(0, 1, 100000)
bpl.hist(data, rel_freq=True, bin_size=0.1)
bpl.set_limits(-4, 4)
bpl.make_ax_dark()
bpl.add_labels("Data Values", "Relative Frequency")
bpl.easy_add_text("This is easy!", "upper left");
"""
Explanation: There are some additional convenience functions that are in the documentation.
Imperative Interface
In the previous examples we used the object-oriented interface, where we use the axes objects directly. That isn't necessary, however, and we can use betterplotlib exactly like matplotlib's pyplot interface. Here is an example.
End of explanation
"""
|
dssg/diogenes | examples/CPDB/CPDB.ipynb | mit | #Record arrays
allegations = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Allegations.csv',parse_datetimes=['IncidentDate','StartDate','EndDate'])
citizens = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Citizens.csv')
officers = read.open_csv_url('https://raw.githubusercontent.com/jamestwhedbee/DataProjects/master/CPDB/Officers.csv')
"""
Explanation: Methods
Data obtained from the Citizens Police Data Project.
This data includes only the FOIA dataset from 2011 to present (i.e. the Bond and Moore datasets have been removed).
This was accomplished by entering FOIA in the search bar.
The resulting table was saved to GitHub as a .xslx.
The Allegations, Complaining Witnesses, and Officer Profile tabs were then saved as allegations.csv, citizens.csv, and officers.csv respectively.
Disclaimer
The following disclaimer is included with the data by the Invisible Institute.
This dataset is compiled from three lists of allegations against Chicago Police Department officers,
spanning approximately 2002 - 2008 and 2010 - 2014, produced by the City of Chicago in response
to litigation and to FOIA requests.
The City of Chicago's production of this information is accompanied by a disclaimer that
not all information contained in the City's database may be correct.
No independent verification of the City's records has taken place and this dataset does not
purport to be an accurate reflection of either the City's database or its veracity.
End of explanation
"""
#I shouldn't have to nest function calls just to get a summary of my data. This needs to be a single call.
#Most of the data isn't numeric, so we should find a way to be more helpful than this.
#Also, what is the "None" printing at the end of this?
print display.pprint_sa(display.describe_cols(allegations))
print display.pprint_sa(display.describe_cols(citizens))
print display.pprint_sa(display.describe_cols(officers))
"""
Explanation: What data do we have?
We can see the column names for the three tables below.
The Allegations table includes data on each allegation, including an ID for the complaint witness, the officer, and the outcome of the allegation.
The Citizens table includes additional information for each complaint witness.
The Officers table includes additional information for each officer.
End of explanation
"""
import datetime
#TODO: there is a typo in the "OfficerFirst" column in allegations.
#Should pass this on to Kalven at Invisible Institute along with questions about data.
allegations = utils.remove_cols(allegations,['OfficeFirst','OfficerLast','Investigator','AllegationCode','RecommendedFinding','RecommendedOutcome','FinalFinding','FinalOutcome','Beat','Add1','Add2','City'])
officers = utils.remove_cols(officers,['OfficerFirst','OfficerLast','Star'])
#Convert appointment date days since 1900-1-1 to years prior to today
def tenure(vector):
today = datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%d')
started = np.add(np.datetime64('1900-01-01'),map(lambda x: np.timedelta64(int(x), 'D'),vector))
tenure = np.subtract(np.datetime64(today),started)
return np.divide(tenure,np.timedelta64(1,'D')) / 365
#Impute median date for missing values
officers['ApptDate'] = modify.replace_missing_vals(officers['ApptDate'], strategy='median')
tenure_years = modify.combine_cols(officers,tenure,['ApptDate'])
officers = utils.append_cols(officers,[tenure_years],['Tenure'])
"""
Explanation: For this analysis, we will be removing several columns for the following reasons:
To anonymize our data, names of officers and investigators have been removed.
Many of the columns in Allegations are redundant as they code for other columns. We will preserve only the human readable columns.
The Beat column has no data, so it will be removed.
We will only focus on final outcomes, so the "recommended" columns have been removed from Allegations.
We will be limiting our geographic analysis to Location, so the address information has been removed.
We will also translate ApptDate, which specifies the number of days between the hire date and 1900-1-1, to the number of years working.
End of explanation
"""
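The date arithmetic behind that translation can be checked in isolation (the day count below is invented for illustration):

```python
import numpy as np

# ApptDate counts days since 1900-01-01; adding it back as a timedelta
# recovers the hire date. 1900 is not a leap year, so 365 days later
# is exactly 1901-01-01.
start = np.datetime64('1900-01-01') + np.timedelta64(365, 'D')
print(start)  # -> 1901-01-01
```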
master = utils.join(allegations,citizens,'left',['CRID'],['CRID'])
#Rename Race and Gender, since citizens and officers have these columns
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "CitizenGender"
temp_col_names[race_index] = "CitizenRace"
master.dtype.names = tuple(temp_col_names)
master = utils.join(master,officers,'left',['OfficerID'],['OfficerID'])
temp_col_names = list(master.dtype.names)
gender_index = temp_col_names.index("Gender")
race_index = temp_col_names.index("Race")
temp_col_names[gender_index] = "OfficerGender"
temp_col_names[race_index] = "OfficerRace"
master.dtype.names = tuple(temp_col_names)
"""
Explanation: For ease of use, let's join our tables.
End of explanation
"""
#This is a pretty awkward way to remove nan, is there a better way I missed?
master = modify.choose_rows_where(master,[{'func': modify.row_val_between, 'col_name': 'OfficerID', 'vals': [-np.inf,np.inf]}])
"""
Explanation: There are some allegations where no officer ID was provided. For this analysis, we will discard those allegations.
End of explanation
"""
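A possible simpler alternative (an assumption on my part, not a documented diogenes idiom): since missing OfficerID values are NaN in a float column, a plain boolean mask over the record array also works. The demo array below is a stand-in for the real data:

```python
import numpy as np

# 'demo' stands in for the real record array; OfficerID is a float column
# with NaN marking missing values.
demo = np.array([(1.0,), (np.nan,), (3.0,)], dtype=[('OfficerID', 'f8')])
kept = demo[~np.isnan(demo['OfficerID'])]
print(len(kept))  # -> 2
```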
#Unit is interpreted as numeric, but we really want to analyze it categorically
#There should be an easier way to treat a numeric column as categorical data
master = utils.append_cols(master,[master['Unit'].astype('|S10')],['UnitCat'])
master = utils.remove_cols(master,['Unit'])
master_data, master_classes = modify.label_encode(master)
"""
Explanation: Now, let's encode our data numerically
End of explanation
"""
#Directives
def cat_directives(array,classes):
cat_directives = {}
for column in classes:
cat_directives[column] = {v:[{'func': modify.row_val_eq, 'col_name': column, 'vals': i}] for i,v in enumerate(classes[column])}
return cat_directives
where = cat_directives(master_data,master_classes)
"""
Explanation: For convenience, we'll build every possible categorical directive
End of explanation
"""
#Masks
#Gender
female_officers = modify.where_all_are_true(master_data,where['OfficerGender']['F'])
male_officers = modify.where_all_are_true(master_data,where['OfficerGender']['M'])
female_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['F'])
male_citizens = modify.where_all_are_true(master_data,where['CitizenGender']['M'])
#Race
white_officers = modify.where_all_are_true(master_data,where['OfficerRace']['White'])
black_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Black'])
hispanic_officers = modify.where_all_are_true(master_data,where['OfficerRace']['Hispanic'])
white_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['White'])
black_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Black'])
hispanic_citizens = modify.where_all_are_true(master_data,where['CitizenRace']['Hispanic'])
#Cross-sections
white_M_officers_black_F_citizens = modify.where_all_are_true(master_data,where['OfficerRace']['White']+
where['OfficerGender']['M']+
where['CitizenRace']['Black']+
where['CitizenGender']['F'])
"""
Explanation: Now, we can build intuitive masks as combinations of our human-readable directives
End of explanation
"""
duration = modify.combine_cols(master_data,np.subtract,['EndDate','StartDate'])
durationDays = duration / np.timedelta64(1, 'D')
duration_data = utils.append_cols(master_data,[durationDays],['InvestigationDuration'])
numeric_data = utils.remove_cols(master_data,['StartDate','EndDate','IncidentDate'])
"""
Explanation: Let's generate a potentially interesting new feature from our existing data, and pull out all non-numeric data
End of explanation
"""
#Ex 1: What percentage of allegations have a black female citizen and a white male officer?
print np.sum(white_M_officers_black_F_citizens.astype(np.float))/np.size(white_M_officers_black_F_citizens.astype(np.float))
#Ex 2: What is the breakdown of officers with complaints by race?
#This seems a little clunky to me
#Would be nice if plot_simple_histogram could handle categorical labels for me
display.plot_simple_histogram(master_data['OfficerRace'],verbose=False)
display.plt.xticks(range(len(master_classes['OfficerRace'])), master_classes['OfficerRace'])
#Ex 3: What does the distribution of complaints look like?
complaint_counter = display.Counter(numeric_data['OfficerID'])
officer_list, complaint_counts = zip(*complaint_counter.items())
display.plot_simple_histogram(complaint_counts)
#Ex 4: What can we learn from the 100 officers who receive the most complaints?
#FYI: Wikipedia says 12,244 officers total, so this is roughly the top 1% of all Chicago officers.
#Obviously, all officers do not have the same quantity and quality of interactions with citizens.
#Need to account for this fact for any real analysis.
#Median imputation makes histogram look unnatural
#Top 100 Officers
top_100 = complaint_counter.most_common(100)
top_100_officers = map(lambda x: x[0],top_100)
#We should add this to modify.py for categorical data
def row_val_in(M,col_name,boundary):
return [x in boundary for x in M[col_name]]
top_100_profile = modify.choose_rows_where(officers,[{'func': row_val_in, 'col_name': 'OfficerID', 'vals': top_100_officers}])
#Can't check this against CPDB, their allegation counts are for the whole time period
#Not just 2011 - present.
display.plot_simple_histogram(master_data['Tenure'],verbose=False)
display.plot_simple_histogram(top_100_profile['Tenure'],verbose=False)
#Ex 5: What does the distribution of outcomes look like?
#Hastily written, possibly not useful. Just curious.
#Almost everything is unknown or no action taken
def sortedFrequencies(array,classes,col_name):
if col_name not in classes:
raise ValueError('col_name must be categorical')
counts = display.Counter(array[col_name])
total = float(sum(counts.values()))
for key in counts:
counts[key] /= total
count_dict = {}
for value in counts:
count_dict[classes[col_name][value]] = counts[value]
return sorted(count_dict.items(), key=lambda x: x[1],reverse=True)
print sortedFrequencies(numeric_data,master_classes,'Outcome')
#Ex 6: What has the number of complaints over time been like?
#Looks seasonal (peaking in summer), and declining over time (could the decline just be a collection issue?)
def numpy_to_month(dt64):
ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
dt = datetime.datetime.utcfromtimestamp(ts)
d = datetime.date(dt.year, dt.month, 1) #round to month
return d
months, counts = zip(*display.Counter(map(numpy_to_month,duration_data['IncidentDate'])).items())
display.plt.plot_date(months,counts)
#How does it look to split complaints by location?
#Very disproportionate. Locations 17,19,3,4 have almost all complaints.
display.plot_simple_histogram(numeric_data['Location'],verbose=False)
display.plt.xticks(range(len(master_classes['Location'])), master_classes['Location'])
#Unit?
#Still uneven, but more even than location.
display.plot_simple_histogram(numeric_data['UnitCat'],verbose=False)
display.plt.xticks(range(len(master_classes['UnitCat'])), master_classes['UnitCat'])
#Are there officers getting a lot of complaints not from the high yield locations?
#What does the social network of concomitant officers look like?
"""
Explanation: We understand what data we have, and we have some tools to easily slice and dice. Let's dive in and learn something.
End of explanation
"""
|
NicolasHemidy/udacity-data-nanodegree | P3/OSMProject - Plaisir.ipynb | apache-2.0 | tags = {}
for event, elem in ET.iterparse("sample.osm"):
if elem.tag not in tags:
tags[elem.tag]= 1
else:
tags[elem.tag] += 1
print tags
"""
Explanation: Audit of the file
End of explanation
"""
tags_details = {}
keys = ["amenity","shop","sport","place","service","building"]
def create_tags_details(binder, list_keys, filename):
"""
    Create a dictionary of every attribute value for the list of attribute keys named "keys".
    This function aims to help me understand what's inside the dataset and what type of analysis could be made.
"""
for key in list_keys:
binder[key] = {}
for event, elem in ET.iterparse(filename, events = ("start",)):
if elem.tag == "tag":
for tag in elem.iter("tag"):
for key in list_keys:
if elem.attrib["k"] == key:
if tag.attrib["v"] not in binder[key]:
binder[key][tag.attrib["v"]] = 1
else:
binder[key][tag.attrib["v"]] += 1
return binder
create_tags_details(tags_details,keys,"sample.osm")
"""
Explanation: What will I do to get a better view of the file?
- Build a dictionary to count:
- the number and types of amenity,
- the number and types of shops,
- the number and types of sport.
End of explanation
"""
# Create a dict to store weird street types
street_types = col.defaultdict(set)
# Create a list listing expected street types
expected_street_type = ["Rue", "Route", "Ruelle", "Cours", "Avenue", "Impasse", "Mail","Boulevard", "Square", "Place", "Allee"]
# Create a regular expression to isolate weird street types
street_type_re = re.compile(r'^\w+', re.IGNORECASE)
def audit_street(street_types,street_name):
"""
    This function aims to check if the first word of every street name matches the list of expected street types expected_street_type.
If it doesn't, the first word and the complete street name are added to the street_types dictionary as a key / value pair.
Arg1: street_types >> the dictionary where to store the unexpected data.
Arg2: street_name >> the street_name to be audited.
"""
m = street_type_re.search(street_name)
if m:
street_type = m.group()
if street_type not in expected_street_type:
street_types[street_type].add(street_name)
def audit_street_map(file_in, street_types, pretty = False):
"""
This function aims to audit the file by isolating one by one the addr:street tags key/value pair.
The value of each pair is then audited thanks to the audit_street function.
"""
for _, element in ET.iterparse(file_in):
if element.tag == "way" or element.tag == "node":
for tag in element.iter("tag"):
if tag.attrib['k'] == "addr:street":
audit_street(street_types,tag.attrib["v"])
pp.pprint(dict(street_types))
audit_street_map("plaisir.osm",street_types, True)
"""
Explanation: What questions do I want to answer?
- What is the most popular type of shop in Plaisir?
- What is the sport with the most facilities in Plaisir?
- Are there more restaurants or fast food places in Plaisir?
- Is there a correlation between the number of amenity and the number of bus stop in a given area?
Data cleaning plan
Audit and clean:
addr:street
addr:postcode
phone:
Create a list with one document per tag (we will only care about node and way) and a workable structure.
Audit street names
End of explanation
"""
#check addr: with :
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
def clean_street_type(address_tag,address_dict):
"""
This function aims to transform address tags and add them as a key / value pair of a dedicated address dictionary.
The function will take the address tag and the address dictionary as arguments:
address_tag: every sub tag of a "node" or "way" top level tag.
    address_dict: a dictionary where we store key / value pairs of address elements (postcode, street, etc.)
The function first creates a cleaned key based on the current key of the tag by removing the "addr:" part of the string.
    If after removing "addr:" the function still detects a colon, the tag isn't added to the dictionary because the data entry was not clean enough.
Else, if the value of the tag starts with "AVE" as first word, "AVE" is replaced by "Avenue".
Finally, the key / value is added to the address dictionary.
"""
key = re.sub('addr:', '', address_tag['k']).strip()
if lower_colon.match(key):
return None
else:
if address_tag['v'].startswith("AVE"):
address_dict[key] = re.sub(r'^AVE', 'Avenue', address_tag['v'])
else:
address_dict[key] = address_tag['v']
"""
Please find below a test of the function which presents how the function works.
"""
tree = ET.parse(OSM_FILE)
root = tree.getroot()
test_tags = root.findall("./node/tag")
test_address_tags_dict = {}
for tag in test_tags:
tag_dict = {}
if tag.attrib['k'].startswith("addr:"):
clean_street_type(tag.attrib,tag_dict)
if tag_dict:
test_address_tags_dict[tag.attrib['k'] + tag.attrib['v']] = tag_dict
def take(n, iterable):
"Return first n items of the iterable as a list"
return list(islice(iterable, n))
pp.pprint(take(20, test_address_tags_dict.iteritems()))
"""
Explanation: One street type needs to be cleaned ('AVE'). We will clean that street type when shaping our data structure. Please find below the function we'll use to clean street types.
End of explanation
"""
# Create a set to store weird postcodes
postcodes = set()
# Create a regular expression to isolate weird postcodes
postcode_re = re.compile(r'\d\d\d\d\d', re.IGNORECASE)
def audit_postcode(postcodes,postcode):
"""
    This function aims to check if the postcode matches the expected format.
If it doesn't, the postcode is added to a dictionary.
Arg1: postcodes >> the dictionary where to store the unexpected data.
Arg2: postcode >> the postcode to be audited.
"""
m = postcode_re.search(postcode)
    if m is None:
postcodes.add(postcode)
def audit_postcode_map(file_in, postcodes, pretty = False):
"""
This function aims to audit the file by isolating one by one the postcode tags key/value pair.
The value of each pair is then audited thanks to the audit_postcode function.
"""
for _, element in ET.iterparse(file_in):
if element.tag == "way" or element.tag == "node":
for tag in element.iter("tag"):
if tag.attrib['k'] == "addr:postcode":
audit_postcode(postcodes,tag.attrib["v"])
pp.pprint(postcodes)
audit_postcode_map("plaisir.osm",postcodes, True)
"""
Explanation: Audit postcodes
End of explanation
"""
# Create a list to store weird phone numbers
phone_numbers = []
# Create a regular expression to isolate
phone_re = re.compile(r'\+\d\d\s\d\s\d\d\s\d\d\s\d\d\s\d\d', re.IGNORECASE)
def audit_phone(phone_numbers,phone_number):
"""
    This function aims to check if the phone number matches the expected format.
If it doesn't, the phone number is added to a dictionary.
Arg1: phone_numbers >> the dictionary where to store the unexpected data.
Arg2: phone_number >> the phone number to be audited.
"""
m = phone_re.search(phone_number)
    if m is None:
phone_numbers.append(phone_number)
def audit_phone_map(file_in, phone_numbers, pretty = False):
"""
This function aims to audit the file by isolating one by one the phone tags key/value pair.
The value of each pair is then audited thanks to the audit_phone function.
"""
for _, element in ET.iterparse(file_in):
if element.tag == "way" or element.tag == "node":
for tag in element.iter("tag"):
if tag.attrib['k'] == "phone":
audit_phone(phone_numbers,tag.attrib["v"])
return phone_numbers
audit_phone_map("plaisir.osm",phone_numbers, True)
"""
Explanation: Postcodes are already clean.
Audit phone numbers
End of explanation
"""
#classic phone number format in France (ie: 01 30 55 84 22)
classic_france = re.compile(r'\d\d\s\d\d\s\d\d\s\d\d\s\d\d')
#classic phone number format with dots in France (ie: 01.30.55.84.22)
classic_france_dot = re.compile(r'\d\d\.\d\d\.\d\d\.\d\d\.\d\d')
#compressed phone number format in France (ie: 0130558422)
classic_france_compiled = re.compile(r'\d\d\d\d\d\d\d\d\d\d')
#wrong format
def clean_phone_numbers(phone_tag,main_dict):
"""
This function cleans phone tags and adds them as a key / value pair of our node dictionary.
It takes the phone tag and the main dictionary as arguments:
phone_tag: every phone sub-tag of a "node" or "way" top-level tag.
main_dict: a dictionary where we store the key / value pairs of each element of our map.
The function first identifies whether the phone number follows one of the wrong patterns we found during our audit.
Using regexes, we try to match every pattern, apply the necessary modifications when relevant, and then store the phone number in the dictionary.
Otherwise, we store the phone number directly in the dictionary.
"""
if classic_france.match(phone_tag['v']):
value = re.sub(r'^\d', '+33 ', phone_tag['v'])
main_dict[phone_tag['k']] = value
elif classic_france_dot.match(phone_tag['v']):
value = re.sub(r'^\d', '+33 ', phone_tag['v'])
value = re.sub(r'\.', ' ', value)
main_dict[phone_tag['k']] = value
elif classic_france_compiled.match(phone_tag['v']):
value = " ".join(phone_tag['v'][i:i+2] for i in range(0, len(phone_tag['v']), 2))
value = re.sub(r'^\d', '+33 ', value)
main_dict[phone_tag['k']] = value
else:
main_dict[phone_tag['k']] = phone_tag['v']
"""
Please find below a test of the function which presents how the function works.
"""
test_phone_tags_dict = {}
for tag in test_tags:
tag_dict = {}
if tag.attrib['k'] == "phone":
clean_phone_numbers(tag.attrib,tag_dict)
if tag_dict:
test_phone_tags_dict[tag.attrib['k'] + " " + tag.attrib['v']] = tag_dict
pp.pprint(take(20, test_phone_tags_dict.iteritems()))
"""
Explanation: Phone numbers need to be cleaned to match the following pattern: +33 X XX XX XX XX. We will clean them when shaping our data structure.
Here are the different cases we want to treat:
- 'XX XX XX XX XX'
- 'XX.XX.XX.XX.XX'
- 'XXXXXXXXXX'
Please find below the function we'll use to clean phone numbers.
End of explanation
"""
CREATED = [ "version", "changeset", "timestamp", "user", "uid"]
POS = ["lon","lat"]
BUILDING_TYPES = ["amenity","shop","sport","place","service","building","highway"]
def shape_element(element):
"""
This function shapes every element of our XML file.
Each top-level node of the XML file is reviewed by this function, which creates a dedicated dictionary called "node".
Each node dictionary is then added to the data list, which will later be inserted into MongoDB.
"""
node = {}
pos = []
node_refs = []
created = {}
address = {}
types = {}
if element.tag == "node" or element.tag == "way" :
types['type'] = element.tag
if 'lat' in element.attrib.keys() and 'lon' in element.attrib.keys():
try:
lat = float(element.attrib['lat'])
lon = float(element.attrib['lon'])
pos.insert(0,lat)
pos.insert(1,lon)
except:
pass
for k, m in element.attrib.items():
if k not in POS:
if k in CREATED:
created[k] = m
else:
node[k] = m
for child in element:
if child.tag == "nd":
node_refs.append(child.attrib['ref'])
elif child.tag == "tag":
if child.attrib['k'].startswith("addr:"):
clean_street_type(child.attrib,address)
elif child.attrib['k'] == 'phone':
clean_phone_numbers(child.attrib,node)
elif child.attrib['k'] in BUILDING_TYPES:
types[child.attrib['k']] = child.attrib['v']
if types:
node['types'] = types
if created:
node['created'] = created
if pos:
node['pos'] = pos
if address:
node['address'] = address
if node_refs:
node['node_refs'] = node_refs
return node
else:
return None
def process_map(file_in, pretty = False):
data = []
for _, element in ET.iterparse(file_in):
el = shape_element(element)
if el:
data.append(el)
return data
data = process_map('plaisir.osm', True)
"""
Explanation: Shape data
End of explanation
"""
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.osm_udacity
from bson.objectid import ObjectId
def insert_data(data, db):
for item in data:
item['_id'] = ObjectId()
db.plaisir_osm.insert_one(item)
insert_data(data, db)
print db.plaisir_osm.find_one()
"""
Explanation: Insert data in MongoDb
End of explanation
"""
def make_group_pipeline(type_node):
pipeline = [{'$group':{'_id':type_node,'count':{'$sum':1}}},
{'$sort':{'count':-1}},
{'$limit' : 5 }
]
return pipeline
def aggregate(db, pipeline):
return [doc for doc in db.aggregate(pipeline)]
pipeline = make_group_pipeline('$types.type')
result = aggregate(db.plaisir_osm, pipeline)
pp.pprint(result)
"""
Explanation: Querying the MongoDb
End of explanation
"""
pipeline_shop = make_group_pipeline('$types.shop')
result_shop = aggregate(db.plaisir_osm, pipeline_shop)
pp.pprint(result_shop)
"""
Explanation: Answering questions
What is the most popular type of shop in Plaisir?
End of explanation
"""
pipeline_sport = make_group_pipeline('$types.sport')
result_sport = aggregate(db.plaisir_osm, pipeline_sport)
pp.pprint(result_sport)
"""
Explanation: Bakery is the most popular type of shop in Plaisir. No kidding... Plaisir is in France :)
What is the sport with the most facilities in Plaisir?
End of explanation
"""
pipeline_restaurant = [{'$match': {"$or" : [{"types.amenity": "restaurant"},{"types.amenity":"fast_food"}]}},
{'$group':{'_id':'$types.amenity','count':{'$sum':1}}},
{'$sort':{'count':-1}}]
result_restaurant = aggregate(db.plaisir_osm, pipeline_restaurant)
pp.pprint(result_restaurant)
"""
Explanation: Tennis is the sport with the most facilities in Plaisir.
Are there more restaurants or fast food places in Plaisir?
End of explanation
"""
# Check of the geospatial index: the few lines below verify that the 2d index is working properly.
for doc in db.plaisir_osm.find({'pos': {'$near' : [48.5,1.95]}}):
pp.pprint(doc)
break
#Get max and min latitude and longitude
for doc in db.plaisir_osm.aggregate([
{ "$unwind": "$pos" },
{ "$group": {
"_id": "$_id",
"lat": { "$first": "$pos" },
"lon": { "$last": "$pos" }
}},
{ "$group": {
"_id": "null",
"minLat": { "$min": "$lat" },
"minLon": { "$min": "$lon" },
"maxLat": { "$max": "$lat" },
"maxLon": { "$max": "$lon" }
}}
]):
pp.pprint(doc)
"""
Explanation: There are more restaurants than fast food places in Plaisir... good news!
Is there a correlation between the number of amenities and the number of bus stops in a given area?
End of explanation
"""
main_dict = {}
def frange(start, stop, step):
i = start
while i < stop:
yield i
i += step
for lat in frange(48.76,48.86,0.01):
for lon in frange(1.85,2.05,0.02):
main_dict[str(lon) + " - " + str(lat)] = {}
bus_stop = 0
amenity = 0
for doc in db.plaisir_osm.find({'pos': { '$geoWithin': { '$box': [ [ lat, lon ], [ (lat + 0.01), (lon + 0.02) ] ] } }}):
if 'highway' in doc['types']:
if doc['types']['highway'] == "bus_stop":
bus_stop += 1
elif 'amenity' in doc['types']:
if doc['types']['amenity'] == 'bench':
pass
else:
amenity += 1
main_dict[str(lon) + " - " + str(lat)]['bus_stop'] = bus_stop
main_dict[str(lon) + " - " + str(lat)]['amenity'] = amenity
new_dict = {}
for key in main_dict:
if main_dict[key]['amenity'] != 0 and main_dict[key]['bus_stop'] != 0:
new_dict[key] = main_dict[key]
# Now that the dictionary is ready for the analysis, I can move forward using pandas.
%matplotlib inline
import seaborn as sns
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame.from_dict(new_dict,orient="index")
df2 = df.groupby('amenity').aggregate(np.average)
df2.plot()
"""
Explanation: What I need to do:
- Aggregate the number of amenities and bus stops in boxes of 0.01 degrees of latitude by 0.02 degrees of longitude
- Remove empty boxes from my list
- Calculate the correlation
End of explanation
"""
leon-adams/datascience | notebooks/linear-classifier.ipynb | mpl-2.0 | # Run some setup code for this notebook.
import sys
import os
sys.path.append('..')
import graphlab
"""
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. We will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
End of explanation
"""
products = graphlab.SFrame('datasets/')
"""
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
"""
products['sentiment']
"""
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
"""
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
"""
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
"""
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
"""
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
"""
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
"""
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
"""
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
End of explanation
"""
products['perfect']
"""
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
"""
products['contains_perfect'] = products['perfect'].apply(lambda i: 1 if i>=1 else 0)
print(products['contains_perfect'])
print(products['perfect'])
"""
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
"""
print(products['contains_perfect'].sum())
"""
Explanation: Quiz Question. How many reviews contain the word perfect?
End of explanation
"""
import numpy as np
from algorithms.sframe_get_numpy_data import get_numpy_data
"""
Explanation: Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
End of explanation
"""
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
"""
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
"""
Explanation: Let us convert the data into NumPy arrays.
End of explanation
"""
sentiment
"""
Explanation: Now, let us see what the sentiment column looks like:
End of explanation
"""
'''
Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
The estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
score = np.dot(feature_matrix, coefficients)
predictions = 1.0 / (1 + np.exp(-score))
return predictions
"""
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
"""
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature\_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature\_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
"""
def feature_derivative(errors, feature):
derivative = np.dot(errors, feature)
return derivative
"""
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
End of explanation
"""
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
"""
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
"""
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
"""
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j].
derivative = feature_derivative(errors, feature_matrix[:,j])
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
"""
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
"""
Explanation: Now, let us run the logistic regression solver.
End of explanation
"""
scores = np.dot(feature_matrix, coefficients)
"""
Explanation: Predicting sentiments
Recall that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
End of explanation
"""
class_probabilities = 1.0 / (1 + np.exp(-scores) )
print(class_probabilities)
print( 'Number of positive sentiment predicted is {}'.format( (class_probabilities >= 0.5).sum()) )
"""
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
End of explanation
"""
# Encode predictions as +1/-1 so they can be compared with the sentiment labels
class_predictions = np.where(class_probabilities >= 0.5, +1, -1)
print(class_predictions)
num_mistakes = (sentiment != class_predictions).sum()
accuracy = 1. - float(num_mistakes) / len(sentiment)
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
"""
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
"""
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
"""
Explanation: Which words contribute most to positive & negative sentiments?
Recall that we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
"""
word_coefficient_tuples[:10]
"""
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
"""
word_coefficient_tuples[-10:]
"""
Explanation: Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation
"""
mit-crpg/openmc | examples/jupyter/mgxs-part-ii.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
"""
Explanation: Multigroup Cross Section Generation Part II: Advanced Features
This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
End of explanation
"""
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create box to surround the geometry
box = openmc.model.rectangular_prism(1.26, 1.26, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case, two cylinders and a square prism bounded by four reflective planes.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & box
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(pin_cell_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry with the pin cell universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
End of explanation
"""
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
"""
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
"""
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
"""
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
"""
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1e-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
"""
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
"""
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
"""
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
"""
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
"""
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
End of explanation
"""
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
"""
Explanation: Next, we show how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
End of explanation
"""
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
"""
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
"""
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
End of explanation
"""
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
"""
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
"""
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
"""
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.
The MGXS data can also be plotted using the openmc.plot_xs command, however we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.
End of explanation
"""
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
"""
Explanation: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
"""
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
"""
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation
"""
|
JackDi/phys202-2015-work | assignments/assignment03/NumpyEx01.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 1
Imports
End of explanation
"""
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array"""
    b = np.ones((size, size), dtype=float)
    b[0::2, 1::2] = 0.0  # even rows: zero out the odd columns
    b[1::2, 0::2] = 0.0  # odd rows: zero out the even columns
    return b
va.vizarray(checkerboard(5))
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
"""
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
"""
# YOUR CODE HERE
va.set_block_size(10)
va.vizarray(checkerboard(20))
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
"""
# YOUR CODE HERE
va.set_block_size(5)
va.vizarray(checkerboard(27))
assert True
"""
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II | 03_Intro_finite_differences/lecture_notebooks/1_fd_intro.ipynb | gpl-3.0 | # Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from (this Jupyter notebook) by Kristina Garina and Heiner Igel (@heinerigel) which is a supplemenatry material to the book Computational Seismology: A Practical Introduction, additional modifications by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
import numpy as np
import math
import matplotlib.pyplot as plt
from pylab import rcParams
"""
Explanation: Finite-difference approximations
In the last lecture we covered different approaches to discretize continuous media.
Now, we only have to understand how to approximate (partial) derivatives by finite-differences.
Approximation of first derivatives
The approximation of first derivatives using finite-differences is quite straightforward. Imagine that we have some time-dependent function $f(t)$, for which we want to calculate the first derivative $\frac{\partial f(t)}{\partial t}$:
<img src="../images/gauss_disc_final.png" width="95%">
First, we discretize $f(t)$ at discrete times
\begin{equation}
t = i * dt, \nonumber
\end{equation}
where $dt$ denotes a constant temporal sample interval. By definition, the first derivative of $f(t)$ with respect to $t$ is:
\begin{equation}
\frac{\partial f(t)}{\partial t} = \lim_{dt \rightarrow 0} \frac{f(t+dt)-f(t)}{dt}, \nonumber
\end{equation}
In the finite-difference (FD) approximation, we neglect the limit $dt \rightarrow 0$:
\begin{equation}
\frac{\partial f(t)}{\partial t} \approx \frac{f(t+dt)-f(t)}{dt} \nonumber
\end{equation}
Because $dt$ remains finite, this approximation is called finite-differences. Furthermore, we can distinguish between different finite difference operators, depending on the points involved in the approximation. The above example is a forward FD operator
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr)^+ \approx \frac{f(t+dt)-f(t)}{dt}. \nonumber
\end{equation}
Alternatively, we can also define a backward FD operator
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr)^- \approx \frac{f(t)-f(t-dt)}{dt}. \nonumber
\end{equation}
By taking the arithmetic average of the forward and backward operator
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr) \approx \frac{\left(\frac{\partial f(t)}{\partial t}\right)^- + \left(\frac{\partial f(t)}{\partial t}\right)^+}{2},\nonumber
\end{equation}
we get the central FD operator
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr) \approx \frac{f(t+dt)-f(t-dt)}{2dt}. \nonumber
\end{equation}
Exercise
Prove for the quadratic function
\begin{equation}
g(t) = b t^2 \nonumber
\end{equation}
where $b$ is a constant time-independent parameter, that in the limit $dt \rightarrow 0$, the forward, backward and central FD operators lead to the correct temporal first derivative:
\begin{equation}
\frac{\partial g(t)}{\partial t} = 2 b t \nonumber
\end{equation}
End of explanation
"""
# Define figure size
rcParams['figure.figsize'] = 12, 5
# Initial parameters (example values, chosen here for illustration)
tmax = 10.0                        # maximum time (s)
nt = 100                           # number of time samples
a = 0.25                           # half-width of the Gaussian
dt = tmax / (nt - 1)               # defining dt
t0 = tmax / 2.0                    # defining time shift t0
time = np.linspace(0.0, tmax, nt)  # defining time
# Define Gaussian function
f = 1.0 / np.sqrt(2.0 * np.pi * a) * np.exp(-(time - t0)**2 / (2.0 * a))
# Plotting of gaussian
plt.plot(time, f)
plt.title('Gaussian function')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.xlim((0, tmax))
plt.grid()
plt.show()
"""
Explanation: As an example, let's try to calculate the first temporal derivative of the Gaussian function
\begin{equation}
f(t)=\dfrac{1}{\sqrt{2 \pi a}}e^{-\dfrac{(t-t_0)^2}{2a}},\nonumber
\end{equation}
where $a$ denotes the half-width of the Gaussian and $t_0$ a time-shift of the maximum.
End of explanation
"""
# First derivative of Gaussian function
# Initiation of numerical and analytical derivatives
nder_for = np.zeros(nt)   # forward FD operator
nder_back = np.zeros(nt)  # backward FD operator
nder_cent = np.zeros(nt)  # central FD operator
ader = np.zeros(nt)       # analytical derivative
# Numerical FD derivative of the Gaussian function
for it in range(1, nt - 1):
    nder_for[it] = (f[it + 1] - f[it]) / dt               # forward operator
    nder_back[it] = (f[it] - f[it - 1]) / dt              # backward operator
    nder_cent[it] = (f[it + 1] - f[it - 1]) / (2.0 * dt)  # central operator
# Analytical derivative of the Gaussian function
ader = -(time - t0) / a * f
# Plot of the first derivative and analytical derivative
plt.plot(time, nder_for,label="FD forward operator", lw=2, color="y")
plt.plot(time, nder_back,label="FD backward operator", lw=2, color="b")
plt.plot(time, nder_cent,label="FD central operator", lw=2, color="g")
plt.plot(time, ader, label="Analytical derivative", ls="--",lw=2, color="red")
plt.title('First derivative')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Next, we calculate the numerical derivative using finite-differences with the forward operator:
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr)^+ \approx \frac{f(t+dt)-f(t)}{dt}. \nonumber
\end{equation}
backward operator:
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr)^- \approx \frac{f(t)-f(t-dt)}{dt}, \nonumber
\end{equation}
and central operator:
\begin{equation}
\biggl(\frac{\partial f(t)}{\partial t}\biggr) \approx \frac{f(t+dt)-f(t-dt)}{2dt}. \nonumber
\end{equation}
To test the accuracy of these approaches, we compare them with the analytical derivative:
\begin{equation}
\frac{\partial f(t)}{\partial t}=-\dfrac{t-t_0}{a}\dfrac{1}{\sqrt{2\pi a}}e^{-\dfrac{(t-t_0)^2}{2a}} \nonumber
\end{equation}
End of explanation
"""
# Errors of the FD approximation (amplified by a factor 10 for visibility)
# Plot of the errors of the first derivative FD approximations
plt.plot(time, 10 * (nder_for - ader), label="FD forward operator error (x10)", lw=2, color="y")
plt.plot(time, 10 * (nder_back - ader), label="FD backward operator error (x10)", lw=2, color="b")
plt.plot(time, 10 * (nder_cent - ader), label="FD central operator error (x10)", lw=2, color="g")
plt.plot(time, ader, label="Analytical derivative",lw=2, color="red")
plt.title('First derivative errors')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: The approximation of the first derivative of the Gaussian by all three FD operators seems to be very accurate. How large are the errors actually?
End of explanation
"""
# 2nd derivative of Gaussian function
# Initiation of numerical and analytical derivatives
nder2=np.zeros(nt) # 2nd derivative FD operator
ader2=np.zeros(nt) # analytical 2nd derivative
# Numerical FD derivative of the Gaussian function
for it in range(1, nt - 1):
    # 2nd derivative FD operator
    nder2[it] = (f[it + 1] - 2.0 * f[it] + f[it - 1]) / dt**2
# Analytical 2nd derivative of the Gaussian
ader2 = ((time - t0)**2 / a**2 - 1.0 / a) * f
# Plot of the numerical and analytical second derivative of the Gaussian
plt.plot(time, nder2, label="FD operator", lw=2, color="y")
plt.plot(time, ader2, label="Analytical 2nd derivative", ls="--", lw=2, color="red")
plt.title('2nd derivative Gaussian')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
# Error of the 2nd derivative FD approximation
# Plot of the second derivative FD approximation error
plt.plot(time, nder2 - ader2, label="FD operator error", lw=2, color="violet")
plt.plot(time, ader2, label="Analytical 2nd derivative", lw=2, color="red")
plt.title('2nd derivative errors')
plt.xlabel('Time, s')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: The errors for all three operators are actually quite small, so we amplify them by a factor of 10 in the plot. Larger errors occur in areas with significant slope. Notice also that the central operator has the smallest error. Do you wonder why? We will see in a later lecture.
FD approximation of the 2nd derivative
We derived FD approximations for the 1st derivative. However, to approximate the acoustic wave equation, we also need to know how to approximate a 2nd derivative.
To achieve this, we calculate the Taylor expansion of $f(t+dt)$ up to the second order term:
\begin{equation}
f(t+dt) \approx f(t) + \frac{\partial f(t)}{\partial t} dt + \frac{1}{2}\frac{\partial^2 f(t)}{\partial t^2} dt^2 + \mathcal{O}(dt^3)\nonumber
\end{equation}
We do the same for $f(t-dt)$:
\begin{equation}
f(t-dt) \approx f(t) - \frac{\partial f(t)}{\partial t} dt + \frac{1}{2}\frac{\partial^2 f(t)}{\partial t^2} dt^2 + \mathcal{O}(dt^3)\nonumber
\end{equation}
Adding the approximations $f(t+dt)$ and $f(t-dt)$ leads to
\begin{equation}
f(t-dt) + f(t+dt) \approx 2 f(t) + \frac{\partial^2 f(t)}{\partial t^2} dt^2\nonumber
\end{equation}
Finally, we can rearrange to $\frac{\partial^2 f(t)}{\partial t^2}$ and get the following FD approximation for the 2nd derivative
\begin{equation}
\frac{\partial^2 f(t)}{\partial t^2} \approx \frac{f(t-dt) + f(t+dt) - 2 f(t)}{dt^2}\nonumber
\end{equation}
Exercise
Calculate the 2nd derivative $\frac{\partial^2 f(t)}{\partial t^2}$ of the Gaussian
\begin{equation}
f(t)=\dfrac{1}{\sqrt{2 \pi a}}e^{-\dfrac{(t-t_0)^2}{2a}},\nonumber
\end{equation}
analytically
Compute and compare the numerical and analytical 2nd derivative of the Gaussian and the errors between both solutions
End of explanation
"""
|
DIPlib/diplib | examples/python/tensor_images.ipynb | apache-2.0 | import diplib as dip
"""
Explanation: Tensor images
This notebook gives an overview of the concept of tensor images, and demonstrates how to use this feature.
End of explanation
"""
img = dip.ImageRead('../trui.ics')
img.Show()
"""
Explanation: After reading the "PyDIP basics" notebook, you should be familiar with the concepts of scalar images and color images. We remind the reader that an image can have any number of values associated to each pixel. An image with a single value per pixel is a scalar image. Multiple values can be arranged in one or two dimensions, as a vector image or a matrix image. A color image is an example of a vector image; for example, in the RGB color space the vector for each pixel has 3 values, so it is a 3D vector.
The generalization of vectors and matrices is a tensor. A rank 0 tensor is a scalar, a rank 1 tensor is a vector, and a rank 2 tensor is a matrix.
This is a scalar image:
End of explanation
"""
g = dip.Gradient(img)
g.Show()
"""
Explanation: We can compute its gradient, which is a vector image:
End of explanation
"""
print(g.TensorElements())
print(g.TensorShape())
"""
Explanation: The vector image is displayed by showing the first vector component in the red channel, and the second one in the green channel. g has two components:
End of explanation
"""
S = g @ dip.Transpose(g)
print("Tensor size:", S.TensorSizes())
print("Tensor shape:", S.TensorShape())
print("Tensor elements:", S.TensorElements())
"""
Explanation: Multiplying a vector with its transposed leads to a symmetric matrix:
End of explanation
"""
S = dip.Gauss(S, [5])
S.Show()
"""
Explanation: Note how the 2x2 symmetric matrix stores only 3 elements per pixel. Because of the symmetry, the [0,1] and the [1,0] elements are identical, and need not be both stored. See the documentation for details on how the individual elements are stored.
Local averaging of this matrix image (i.e. applying a low-pass filter) leads to the structure tensor:
End of explanation
"""
eigenvalues, eigenvectors = dip.EigenDecomposition(S)
print(eigenvalues.TensorShape())
print(eigenvectors.TensorShape())
"""
Explanation: We can still display this tensor image, because it has only 3 tensor elements, which can be mapped to the three RGB channels of the display.
The structure tensor is one of the more important applications for the concept of the tensor image. In this documentation page there are some example applications of the structure tensor. Here we show how to get the local orientation from it using the eigenvalue decomposition.
End of explanation
"""
v1 = eigenvectors.TensorColumn(0)
angle = dip.Angle(v1)
angle.Show('orientation')
"""
Explanation: The eigendecomposition is such that S * eigenvectors == eigenvectors * eigenvalues. eigenvectors is a full 2x2 matrix, and hence has 4 tensor elements. These are stored in column-major order. The first column is the eigenvector that corresponds to the first eigenvalue. Eigenvalues are sorted in descending order, and hence the first eigenvector is perpendicular to the edges in the image.
End of explanation
"""
tmp = dip.Transpose(eigenvectors)
print(tmp.TensorShape())
print(tmp.SharesData(eigenvectors))
"""
Explanation: Note that extracting a column from the tensor yields a vector image, and that this vector image shares data with the column-major matrix image. Transposing a matrix is a cheap operation that just changes the storage order of the matrix, without a need to copy or reorder the data:
End of explanation
"""
H = dip.Hessian(img)
print("Tensor size:", S.TensorSizes())
print("Tensor shape:", S.TensorShape())
print("Tensor elements:", S.TensorElements())
H.Show()
"""
Explanation: A second important matrix image is the Hessian matrix, which contains all second order derivatives. Just like the structure tensor, it is a symmetric 2x2 matrix:
End of explanation
"""
|
lexual/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers | Chapter2_MorePyMC/Chapter2.ipynb | mit | import pymc as pm
parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1
"""
Explanation: Chapter 2
This chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.
A little more on PyMC
Parent and Child relationships
To assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables.
parent variables are variables that influence another variable.
child variables are variables that are affected by other variables, i.e. are the subject of parent variables.
A variable can be both a parent and child. For example, consider the PyMC code below.
End of explanation
"""
print "Children of `parameter`: "
print parameter.children
print "\nParents of `data_generator`: "
print data_generator.parents
print "\nChildren of `data_generator`: "
print data_generator.children
"""
Explanation: parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.
Likewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.
This nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.
End of explanation
"""
print "parameter.value =", parameter.value
print "data_generator.value =", data_generator.value
print "data_plus_one.value =", data_plus_one.value
"""
Explanation: Of course a child can have more than one parent, and a parent can have many children.
PyMC Variables
All PyMC variables also expose a value attribute. This attribute produces the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:
End of explanation
"""
lambda_1 = pm.Exponential("lambda_1", 1) # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1) # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10) # prior on behaviour change
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
print
lambda_1.random(), lambda_2.random(), tau.random()
print "After calling random() on the variables..."
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value
"""
Explanation: PyMC is concerned with two types of programming variables: stochastic and deterministic.
stochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.
deterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is.
We will detail each below.
Initializing Stochastic variables
Initializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:
some_variable = pm.DiscreteUniform("discrete_uni_var", 0, 4)
where 0, 4 are the DiscreteUniform-specific lower and upper bound on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)
The name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.
For multivariable problems, rather than creating a Python array of stochastic variables, addressing the size keyword in the call to a Stochastic variable creates multivariate array of (independent) stochastic variables. The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays.
The size argument also solves the annoying case where you may have many variables $\beta_i, \; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:
beta_1 = pm.Uniform("beta_1", 0, 1)
beta_2 = pm.Uniform("beta_2", 0, 1)
...
we can instead wrap them into a single variable:
betas = pm.Uniform("betas", 0, 1, size=N)
Calling random()
We can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
End of explanation
"""
type(lambda_1 + lambda_2)
"""
Explanation: The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.
Warning: Don't update stochastic variables' values in-place.
Straight from the PyMC docs, we quote [4]:
Stochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... The only way a stochastic variable's value should be updated is using statements of the following form:
A.value = new_value
The following are in-place updates and should never be used:
A.value += 3
A.value[2,1] = 5
A.value.attribute = new_attribute_value
Deterministic variables
Since most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:
@pm.deterministic
def some_deterministic_var(v1=v1,):
#jelly goes here.
For all purposes, we can treat the object some_deterministic_var as a variable and not a Python function.
Prepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:
End of explanation
"""
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
out = np.zeros(n_data_points)
out[:tau] = lambda_1 # lambda before tau is lambda1
out[tau:] = lambda_2 # lambda after tau is lambda2
return out
"""
Explanation: The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\lambda$ looked like:
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
And in PyMC code:
End of explanation
"""
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)
samples = [lambda_1.random() for i in range(20000)]
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
"""
Explanation: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable.
Inside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
return stoch.value**2
will return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable.
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values.
Including observations in the Model
At this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like "What does my prior distribution of $\lambda_1$ look like?"
End of explanation
"""
data = np.array([10, 5])
fixed_variable = pm.Poisson("fxd", 1, value=data, observed=True)
print "value: ", fixed_variable.value
print "calling .random()"
fixed_variable.random()
print "value: ", fixed_variable.value
"""
Explanation: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
PyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be a NumPy array for speed). For example:
End of explanation
"""
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print obs.value
"""
Explanation: This is how we include data into our models: initializing a stochastic variable to have a fixed value.
To complete our text message example, we fix the PyMC variable observations to the observed dataset.
End of explanation
"""
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
"""
Explanation: Finally...
We wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)
End of explanation
"""
tau = pm.rdiscrete_uniform(0, 80)
print tau
"""
Explanation: Modeling approaches
A good starting thought to Bayesian modeling is to think about how your data might have been generated. Position yourself in an omniscient position, and try to imagine how you would recreate the dataset.
In the last chapter we investigated text message data. We begin by asking how our observations may have been generated:
We started by thinking "what is the best random variable to describe this count data?" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.
Next, we think, "Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?" Well, the Poisson distribution has a parameter $\lambda$.
Do we know $\lambda$? No. In fact, we have a suspicion that there are two $\lambda$ values, one for the earlier behaviour and one for the latter behaviour. We don't know when the behaviour switches though, but call the switchpoint $\tau$.
What is a good distribution for the two $\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well the exponential distribution has a parameter too, call it $\alpha$.
Do we know what the parameter $\alpha$ might be? No. At this point, we could continue and assign a distribution to $\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\lambda$, ("it probably changes over time", "it's likely between 10 and 30", etc.), we don't really have any strong beliefs about $\alpha$. So it's best to stop here.
What is a good value for $\alpha$ then? We think that the $\lambda$s are between 10-30, so if we set $\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high $\alpha$ misses our prior belief as well. A good choice for $\alpha$, one that reflects our belief, is to set the value so that the mean of $\lambda$, given $\alpha$, is equal to our observed mean. This was shown in the last chapter.
We have no expert opinion of when $\tau$ might have occurred. So we will suppose $\tau$ is from a discrete uniform distribution over the entire timespan.
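The heuristic for choosing $\alpha$ described above can be sketched directly. This is a standalone NumPy sketch (modern NumPy Generator API, independent of the book's PyMC code), and the count_data array is hypothetical, standing in for the observed daily sms counts:

```python
import numpy as np

# hypothetical counts standing in for the observed daily sms data
count_data = np.array([13, 24, 8, 24, 7, 35, 14, 11, 15, 11])

# mean of Exp(alpha) is 1/alpha, so this puts the prior mean of lambda
# at the observed sample mean
alpha = 1.0 / count_data.mean()

# sanity check by sampling: the prior's mean should sit near the data mean
rng = np.random.default_rng(1)
prior_samples = rng.exponential(scale=1.0 / alpha, size=200_000)
print(alpha, prior_samples.mean(), count_data.mean())
```

With this choice the exponential prior is centered where the data suggest the $\lambda$s live, without committing to anything stronger.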
Below we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )
<img src="http://i.imgur.com/7J30oCG.png" width = 700/>
PyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:
Probabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.
Same story; different ending.
Interestingly, we can create new datasets by retelling the story.
For example, if we reverse the above steps, we can simulate a possible realization of the dataset.
1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:
End of explanation
"""
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print lambda_1, lambda_2
"""
Explanation: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
End of explanation
"""
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
"""
Explanation: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
End of explanation
"""
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
"""
Explanation: 4. Plot the artificial dataset:
End of explanation
"""
def plot_artificial_sms_dataset():
tau = pm.rdiscrete_uniform(0, 80)
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlim(0, 80)
figsize(12.5, 5)
plt.title("More example of artificial datasets")
for i in range(1, 5):
plt.subplot(4, 1, i)
plot_artificial_sms_dataset()
"""
Explanation: It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
End of explanation
"""
import pymc as pm
# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)
"""
Explanation: Later we will see how we use this to make predictions and test the appropriateness of our models.
Example: Bayesian A/B testing
A/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results.
Similarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards.
Often, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a "Z-score" and even more confusing "p-values" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural.
A Simple Case
As this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \lt p_A \lt 1$ probability that users, upon being shown site A, eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us.
Suppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \frac{n}{N}$. Unfortunately, the observed frequency $\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\frac{1}{6}$. Knowing the true frequency of events like:
fraction of users who make purchases,
frequency of social attributes,
percent of internet users with cats etc.
are common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.
The observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.
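The die example makes the observed-versus-true distinction easy to demonstrate. Below is a small standalone NumPy sketch (independent of the book's PyMC code; the seed is arbitrary): with few rolls the observed frequency of a 1 can sit far from $\frac{1}{6}$, while with many rolls it concentrates around the true frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
true_freq = 1.0 / 6.0

# a handful of rolls: the observed frequency can sit far from 1/6
rolls_small = rng.integers(1, 7, size=100)
observed_small = np.mean(rolls_small == 1)

# many rolls: the observed frequency concentrates around the true one
rolls_large = rng.integers(1, 7, size=100_000)
observed_large = np.mean(rolls_large == 1)

print(observed_small, observed_large, true_freq)
```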
With respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be.
To set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:
End of explanation
"""
# set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = pm.rbernoulli(p_true, N)
print occurrences # Remember: Python treats True == 1, and False == 0
print occurrences.sum()
"""
Explanation: Had we had stronger beliefs, we could have expressed them in the prior above.
For this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\ \sim \text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.
End of explanation
"""
# Occurrences.mean is equal to n/N.
print "What is the observed frequency in Group A? %.4f" % occurrences.mean()
print "Does this equal the true frequency? %s" % (occurrences.mean() == p_true)
"""
Explanation: The observed frequency is:
End of explanation
"""
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)
# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
"""
Explanation: We combine the observations into the PyMC observed variable, and run our inference algorithm:
End of explanation
"""
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend()
"""
Explanation: We plot the posterior distribution of the unknown $p_A$ below:
End of explanation
"""
import pymc as pm
figsize(12, 4)
# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print "Obs from Site A: ", observations_A[:30].astype(int), "..."
print "Obs from Site B: ", observations_B[:30].astype(int), "..."
print observations_A.mean()
print observations_B.mean()
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
return p_A - p_B
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)
# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
"""
Explanation: Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
A and B Together
A similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )
End of explanation
"""
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]
figsize(12.5, 10)
# histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
"""
Explanation: Below we plot the posterior distributions for the three unknowns:
End of explanation
"""
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print "Probability site A is WORSE than site B: %.3f" % \
(delta_samples < 0).mean()
print "Probability site A is BETTER than site B: %.3f" % \
(delta_samples > 0).mean()
"""
Explanation: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
With respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:
End of explanation
"""
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
"""
Explanation: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Try playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.
I hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation.
An algorithm for human deceit
Social data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals "Have you ever cheated on a test?" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is at least your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit "Yes" to cheating when in fact they hadn't cheated).
To present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.
The Binomial Distribution
The binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. The mass distribution looks like:
$$P( X = k ) = {{N}\choose{k}} p^k(1-p)^{N-k}$$
If $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \sim \text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \le X \le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.
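The mass function and the expected value $Np$ can be checked numerically with a few lines of plain Python (a sketch independent of the book's plotting code; binom_pmf is just an illustrative helper):

```python
import math

def binom_pmf(k, N, p):
    # P(X = k) = C(N, k) * p**k * (1 - p)**(N - k)
    return math.comb(N, k) * p**k * (1 - p) ** (N - k)

N, p = 10, 0.4
pmf = [binom_pmf(k, N, p) for k in range(N + 1)]
total = sum(pmf)                                  # masses sum to 1
mean = sum(k * pmf[k] for k in range(N + 1))      # expected value is N * p
print(total, mean)
```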
End of explanation
"""
import pymc as pm
N = 100
p = pm.Uniform("freq_cheating", 0, 1)
"""
Explanation: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
The expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.
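The Bernoulli-Binomial connection is easy to verify empirically. This is a standalone NumPy sketch (not part of the book's PyMC code; the seed and trial count are arbitrary): summing $N$ i.i.d. $\text{Ber}(p)$ draws produces counts whose mean sits near $Np$, matching direct $\text{Bin}(N, p)$ samples.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 10, 0.4
trials = 50_000

# each row of Bernoulli draws sums to one Binomial(N, p) sample
bernoulli = rng.random((trials, N)) < p
z = bernoulli.sum(axis=1)

direct = rng.binomial(N, p, size=trials)   # sampled directly from Bin(N, p)
print(z.mean(), direct.mean(), N * p)
```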
Example: Cheating among students
We will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ "Yes I did cheat" answers. We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$.
This is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:
In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers "Yes, I did cheat" if the coin flip lands heads, and "No, I did not cheat", if the coin flip lands tails. This way, the interviewer does not know if a "Yes" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers.
I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars.
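Before turning to PyMC, the forward direction of the Privacy Algorithm can be simulated directly. Below is a standalone NumPy sketch with an assumed true cheating proportion (p_cheat = 0.2 is made up purely for illustration, since in practice this is exactly the unknown we want to infer):

```python
import numpy as np

rng = np.random.default_rng(3)
n_students = 100_000   # many simulated students, to see the rates clearly
p_cheat = 0.2          # assumed true cheating proportion (unknown in practice)

cheated = rng.random(n_students) < p_cheat
first_heads = rng.random(n_students) < 0.5
second_heads = rng.random(n_students) < 0.5

# heads on the first flip: answer truthfully; tails: answer with the second flip
says_yes = np.where(first_heads, cheated, second_heads)
print(says_yes.mean())
```

Note how the observed "Yes" proportion overshoots the assumed cheating rate: roughly half the responses are coin noise, which is precisely the systematic distortion the model below will invert.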
Suppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\text{Uniform}(0,1)$ prior.
End of explanation
"""
true_answers = pm.Bernoulli("truths", p, size=N)
"""
Explanation: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
End of explanation
"""
first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print first_coin_flips.value
"""
Explanation: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.
End of explanation
"""
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
"""
Explanation: Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
End of explanation
"""
@pm.deterministic
def observed_proportion(t_a=true_answers,
fc=first_coin_flips,
sc=second_coin_flips):
observed = fc * t_a + (1 - fc) * sc
return observed.sum() / float(N)
"""
Explanation: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC deterministic variable:
End of explanation
"""
observed_proportion.value
"""
Explanation: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and 0 otherwise. Finally, the last line sums this vector and divides by float(N), producing a proportion.
End of explanation
"""
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True,
value=X)
"""
Explanation: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
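The back-of-the-envelope expectations above can be written down directly (plain Python; expected_yes is just an illustrative helper encoding the coin logic described in the paragraph):

```python
def expected_yes(p_cheat, n_students=100):
    # P("Yes") = 0.5 * p_cheat + 0.25: half answer truthfully,
    # the other half answer "Yes" with probability 1/2 via the second coin
    return (0.5 * p_cheat + 0.25) * n_students

print(expected_yes(0.0))   # cheat-free world
print(expected_yes(1.0))   # everyone cheats
```

The observed 35 "Yes" responses thus sit between the cheat-free and all-cheaters extremes, closer to the former.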
The researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:
End of explanation
"""
model = pm.Model([p, true_answers, first_coin_flips,
second_coin_flips, observed_proportion, observations])
# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
p = pm.Uniform("freq_cheating", 0, 1)
@pm.deterministic
def p_skewed(p=p):
return 0.5 * p + 0.25
"""
Explanation: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
I would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. Since we started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, we can be confident that there were cheaters.
This kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful.
Alternative PyMC Model
Given a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes:
\begin{align}
P(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\
& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\
& = \frac{p}{2} + \frac{1}{4}
\end{align}
Thus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
End of explanation
"""
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed,
value=35, observed=True)
"""
Explanation: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
If we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.
This is where we include our observed 35 "Yes" responses. In the declaration of the pm.Binomial, we include value = 35 and observed = True.
End of explanation
"""
model = pm.Model([yes_responses, p_skewed, p])
# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
figsize(12.5, 3)
p_trace = mcmc.trace("freq_cheating")[:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
"""
Explanation: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
End of explanation
"""
N = 10
x = np.empty(N, dtype=object)
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
"""
Explanation: More PyMC Tricks
Protip: Lighter deterministic variables with Lambda class
Sometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example,
beta = pm.Normal("coefficients", 0, 1, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))
Protip: Arrays of PyMC variables
There is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:
End of explanation
"""
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
# drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
# plot it, as a function of temperature (the first column)
print "Temp (F), O-Ring failure?"
print challenger_data
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature")
"""
Explanation: The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:
Example: Challenger Space Shuttle Disaster <span id="challenger"/>
On January 28, 1986, the twenty-fifth flight of the U.S. space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):
End of explanation
"""
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
"""
Explanation: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
We need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t } } $$
In this model, $\beta$ is the variable we are uncertain about. Below is the function plotted for $\beta = 1, 3, -5$.
End of explanation
"""
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
"""
Explanation: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:
$$p(t) = \frac{1}{ 1 + e^{ \;\beta t + \alpha } } $$
Some plots are below, with differing $\alpha$.
End of explanation
"""
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
    # note: scipy's `scale` is the standard deviation, i.e. 1/sqrt(tau), not 1/tau
    plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),
             label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
    plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
"""
Explanation: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC. The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.
Normal distributions
A Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those familiar with the Normal distribution already have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive.
The probability density function of a $N( \mu, 1/\tau)$ random variable is:
$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$
We plot some different density functions below.
End of explanation
"""
import pymc as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
# notice the `value` argument here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)
@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
return 1.0 / (1. + np.exp(beta * t + alpha))
"""
Explanation: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter:
$$ E[ X | \mu, \tau] = \mu$$
and its variance is equal to the inverse of $\tau$:
$$Var( X | \mu, \tau ) = \frac{1}{\tau}$$
Below we continue our modeling of the Challenger space craft:
End of explanation
"""
p.value
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)
model = pm.Model([observed, beta, alpha])
# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
"""
Explanation: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:
$$ \text{Defect Incident, $D_i$} \sim \text{Ber}( \;p(t_i)\; ), \;\; i=1..N$$
where $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. It is simply a computational caveat in PyMC.
End of explanation
"""
alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]
figsize(12.5, 6)
# histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
"""
Explanation: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
End of explanation
"""
t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
"""
Explanation: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Similarly, all $\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\alpha$ is significantly less than 0.
Regarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected).
Next, let's look at the expected probability for a specific value of the temperature. That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.
End of explanation
"""
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
"""
Explanation: Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other draw. The blue line is what occurs when we average all 10,000 retained posterior curves together.
An interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.
End of explanation
"""
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
"""
Explanation: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
More generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.
What about the day of the Challenger disaster?
On the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.
End of explanation
"""
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000
mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i + 1)
plt.scatter(temperature, simulations[1000 * i, :], color="k",
s=50, alpha=0.6)
"""
Explanation: Is our model appropriate?
The skeptical reader will say "You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\; \forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.
We can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which if we recall is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then our model likely does not accurately represent the observed data.
Previously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and how rarely they mimicked our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable, that is exactly the same as our variable that stored the observations, but minus the observations themselves. If you recall, our Stochastic variable that stored our observed data was:
observed = pm.Bernoulli( "bernoulli_obs", p, value=D, observed=True)
Hence we create:
simulated_data = pm.Bernoulli("simulation_data", p)
Let's simulate 10 000:
End of explanation
"""
posterior_probability = simulations.mean(axis=0)
print "posterior prob of defect | realized defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[i], D[i])
"""
Explanation: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
We wish to assess how good our model is. "Good" is a subjective term of course, so results must be relative to other models.
We will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.
The following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.
For each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \;\text{Defect} = 1 | t, \alpha, \beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. For example, for the model we used above:
End of explanation
"""
ix = np.argsort(posterior_probability)
print "probb | defect "
for i in range(len(D)):
print "%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]])
"""
Explanation: Next we sort each column by the posterior probabilities:
End of explanation
"""
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
"""
Explanation: We can present the above data better in a figure: I've wrapped this up into a separation_plot function.
End of explanation
"""
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7. / 23 * np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model")
"""
Explanation: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.
It is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:
the perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.
a completely random model, which predicts random probabilities regardless of temperature.
a constant model: where $P(D = 1 \; | \; t) = c, \;\; \forall t$. The best choice for $c$ is the observed frequency of defects, in this case 7/23.
End of explanation
"""
# type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$")
"""
Explanation: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
For the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.
Exercises
1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50?
2. Try plotting $\alpha$ samples versus $\beta$ samples. Why might the resulting plot look like this?
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: References
[1] Dalal, Fowlkes and Hoadley (1989),JASA, 84, 945-957.
[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.
[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.
[4] Fonnesbeck, Christopher. "Building Models." PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.
[5] Cronin, Beau. "Why Probabilistic Programming Matters." 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.
[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000
[7] Gelman, Andrew. "Philosophy and the practice of Bayesian statistics." British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.
[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. "The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models." American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.
End of explanation
"""
|
cogstat/cogstat | cogstat/docs/CogStat analyses showcase.ipynb | gpl-3.0 | %matplotlib inline
import os
import warnings
warnings.filterwarnings('ignore')
from cogstat import cogstat as cs
print(cs.__version__)
cs_dir, dummy_filename = os.path.split(cs.__file__) # We use this for the demo data
"""
Explanation: Showcase of various CogStat analyses
Below you can see a few examples of what analyses are performed for a specific task in CogStat. Note that the specific analyses that are applied depend on the task, the number of variables, the variable measurement levels, and various other properties (e.g. normality); therefore, these examples show only some of the possibilities. See a more extensive list of the available analyses and their details in the online help.
(The table of contents below may not be visible on all systems.)
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
<script src="https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js"></script>
End of explanation
"""
# Load some data
data = cs.CogStatData(data=os.path.join(cs_dir, 'sample_data', 'example_data.csv'))
# Display the data
cs.display(data.print_data())
"""
Explanation: Data
End of explanation
"""
### Explore variable ###
# Get the most important statistics of a single variable
cs.display(data.explore_variable('X'))
cs.display(data.explore_variable('Z'))
cs.display(data.explore_variable('CONDITION'))
"""
Explanation: Explore variable in interval, ordinal and nominal variables
End of explanation
"""
### Explore variable pair ###
# Get the statistics of a variable pair
cs.display(data.explore_variable_pair('X', 'Y'))
cs.display(data.explore_variable_pair('Z', 'ZZ'))
cs.display(data.explore_variable_pair('TIME', 'CONDITION'))
### Behavioral data diffusion analyses ###
# cs.display(data.diffusion(error_name=['error'], RT_name=['RT'], participant_name=['participant_id'], condition_names=['loudness', 'side']))
"""
Explanation: Explore relation pairs in interval, ordinal and nominal variables
End of explanation
"""
### Compare variables ###
cs.display(data.compare_variables(['X', 'Y'], factors=[]))
cs.display(data.compare_variables(['Z', 'ZZ'], factors=[]))
cs.display(data.compare_variables(['CONDITION', 'CONDITION2'], factors=[]))
"""
Explanation: Compare repeated measures variables with interval, ordinal and nominal variables
End of explanation
"""
### Compare groups ###
cs.display(data.compare_groups('X', grouping_variables=['TIME']))
cs.display(data.compare_groups('X', grouping_variables=['TIME3']))
cs.display(data.compare_groups('Y', grouping_variables=['TIME']))
cs.display(data.compare_groups('Y', grouping_variables=['TIME3']))
cs.display(data.compare_groups('CONDITION', grouping_variables=['TIME']))
cs.display(data.compare_groups('X', grouping_variables=['TIME', 'CONDITION']))
"""
Explanation: Compare groups in interval, ordinal and nominal dependent variables, with one or two grouping variables with 2 or 3 group levels
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/4 Automatic Theorem Proving/AST-2-Dot.ipynb | gpl-2.0 | import graphviz as gv
"""
Explanation: Drawing Abstract Syntax Trees with GraphViz
End of explanation
"""
def tuple2dot(t):
dot = gv.Digraph('Abstract Syntax Tree')
Nodes_2_Names = {}
assign_numbers((), t, Nodes_2_Names)
create_nodes(dot, (), t, Nodes_2_Names)
return dot
"""
Explanation: The function tuple2dot takes a nested tuple t as its argument. This nested tuple is interpreted as an
abstract syntax tree. This tree is visualized using graphviz.
End of explanation
"""
def assign_numbers(address, t, Nodes2Numbers, n=0):
Nodes2Numbers[address] = str(n)
if isinstance(t, str) or isinstance(t, int):
return n + 1
n += 1
j = 1
for t in t[1:]:
n = assign_numbers(address + (j,), t, Nodes2Numbers, n)
j += 1
return n
"""
Explanation: The function assign_numbers takes four arguments:
- address is a tuple that encodes the position of the current node inside the tree,
- t is a nested tuple that is interpreted as a tree,
- Nodes2Numbers is a dictionary,
- n is a natural number.
Given a tree t that is represented as a nested tuple, the function assign_numbers assigns a unique natural number
to every node of t. This assignment is stored in the dictionary Nodes2Numbers. n is the first natural number
that is used. The function returns the smallest natural number that is still unused.
End of explanation
"""
def create_nodes(dot, a, t, Nodes_2_Names):
root = Nodes_2_Names[a]
if t[0] == '\\':
t = ('\\\\',) + t[1:]
if isinstance(t, str) or isinstance(t, int):
dot.node(root, label=str(t))
return
dot.node(root, label=t[0])
j = 1
for c in t[1:]:
child = Nodes_2_Names[a + (j,)]
dot.edge(root, child)
create_nodes(dot, a + (j,), c, Nodes_2_Names)
j += 1
"""
Explanation: The function create_nodes takes four arguments:
- dot is an object of class graphviz.Digraph,
- a is the address of the root of the subtree t,
- t is an abstract syntax tree represented as a nested tuple,
- Nodes_2_Names is a dictionary mapping nodes in t to unique names
that can be used as node names in graphviz.
The function creates the nodes in t and connects them via directed edges so that t is represented as a tree.
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Optimization Exercise 1
Imports
End of explanation
"""
def hat(x,a,b):
return -a*x**2 + b*x**4
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
"""
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
"""
a = 5.0
b = 1.0
X = np.linspace(-3,3,100)
plt.figure(figsize=(10,5))
plt.plot(X, hat(X,a,b))
plt.xlabel("X", fontsize=14)
plt.ylabel("V(x)", fontsize=14)
plt.title("Hat Potential", fontsize=14)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
plt.show()
assert True # leave this to grade the plot
"""
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
"""
one = opt.minimize(hat, np.array([1]), args=(a,b)).x
two = opt.minimize(hat, np.array([-1]), args=(a,b)).x
print("Minima at x =", one, "and x =", two)
plt.figure(figsize=(10,5))
plt.plot(X, hat(X,a,b))
plt.plot(one, hat(one,a,b),"ro", label="Minima")
plt.plot(two, hat(two,a,b), "ro")
plt.xlabel("X", fontsize=14)
plt.ylabel("V(x)", fontsize=14)
plt.title("Hat Potential", fontsize=14)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#a2a7ff')
ax.spines['left'].set_color('#a2a7ff')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
plt.legend()
plt.show()
assert True # leave this for grading the plot
"""
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation
"""
|
SamLau95/nbinteract | docs/notebooks/examples/examples_central_limit_theorem.ipynb | bsd-3-clause | import numpy as np
from datascience import Table, make_array
import nbinteract as nbi
import ipywidgets as widgets
colors = make_array('Purple', 'Purple', 'Purple', 'White')
model = Table().with_column('Color', colors)
model
props = make_array()
num_plants = 200
repetitions = 1000
for i in np.arange(repetitions):
sample = model.sample(num_plants)
new_prop = np.count_nonzero(sample.column('Color') == 'Purple')/num_plants
props = np.append(props, new_prop)
props[:5]
opts = {
'title': 'Distribution of sample proportions',
'xlabel': 'Sample Proportion',
'ylabel': 'Percent per unit',
'xlim': (0.64, 0.84),
'ylim': (0, 25),
'bins': 20,
}
nbi.hist(props, options=opts)
"""
Explanation: The Central Limit Theorem
Very few of the data histograms that we have seen in this course have been bell shaped. When we have come across a bell shaped distribution, it has almost invariably been an empirical histogram of a statistic based on a random sample.
The Central Limit Theorem says that the probability distribution of the sum or average of a large random sample drawn with replacement will be roughly normal, regardless of the distribution of the population from which the sample is drawn.
As we noted when we were studying Chebychev's bounds, results that can be applied to random samples regardless of the distribution of the population are very powerful, because in data science we rarely know the distribution of the population.
The Central Limit Theorem makes it possible to make inferences with very little knowledge about the population, provided we have a large random sample. That is why it is central to the field of statistical inference.
Proportion of Purple Flowers
Recall Mendel's probability model for the colors of the flowers of a species of pea plant. The model says that the flower colors of the plants are like draws made at random with replacement from {Purple, Purple, Purple, White}.
In a large sample of plants, about what proportion will have purple flowers? We would expect the answer to be about 0.75, the proportion purple in the model. And, because proportions are means, the Central Limit Theorem says that the distribution of the sample proportion of purple plants is roughly normal.
We can confirm this by simulation. Let's simulate the proportion of purple-flowered plants in a sample of 200 plants.
End of explanation
"""
def empirical_props(num_plants):
props = make_array()
for i in np.arange(repetitions):
sample = model.sample(num_plants)
new_prop = np.count_nonzero(sample.column('Color') == 'Purple')/num_plants
props = np.append(props, new_prop)
return props
nbi.hist(empirical_props, options=opts,
num_plants=widgets.ToggleButtons(options=[100, 200, 400, 800]))
"""
Explanation: There's that normal curve again, as predicted by the Central Limit Theorem, centered at around 0.75 just as you would expect.
How would this distribution change if we increased the sample size? We can copy our sampling code into a function and then use interaction to see how the distribution changes as the sample size increases.
We will keep the number of repetitions the same as before so that the two columns have the same length.
End of explanation
"""
|
InsightLab/data-science-cookbook | 2020/trabalho-02/Trabalho 2 - Implementacao Perceptron.ipynb | mit | import numpy as np
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
random_state : int
Random number generator seed for random weight
initialization.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications (updates) in each epoch.
"""
def __init__(self, eta=0.01, n_iter=50, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_examples, n_features]
Training vectors, where n_examples is the number of examples and
n_features is the number of features.
y : array-like, shape = [n_examples]
Target values.
Returns
-------
self : object
"""
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
"""
Explanation: Implementing a Perceptron Classifier
End of explanation
"""
"""Dados de Treinamento """
X = np.array([[1,1],[2,2],[3,3]])
y = np.array([1,1,-1])
"""Criando objeto Perceptron"""
ppn = Perceptron(eta=0.1, n_iter=100)
"""Treinando o modelo"""
ppn.fit(X, y)
"""Testando modelo treinado """
X_newdata = np.array([[4,4],[2,2],[3,3]])
print("Resultado da Predição",ppn.predict(X_newdata));
"""
Explanation: Testing the Perceptron classifier
End of explanation
"""
|
jorisvandenbossche/2015-EuroScipy-pandas-tutorial | solved - 04 - Groupby operations.ipynb | bsd-2-clause | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
pd.options.display.max_rows = 10
"""
Explanation: Groupby operations
Some imports:
End of explanation
"""
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
"""
Explanation: Some 'theory': the groupby operation (split-apply-combine)
The "group by" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets
This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL GROUP BY
The example of the image in pandas syntax:
End of explanation
"""
df.groupby('key').aggregate(np.sum) # 'sum'
df.groupby('key').sum()
"""
Explanation: Using the filtering and reduction operations we have seen in the previous notebooks, we could do something like:
df[df['key'] == "A"].sum()
df[df['key'] == "B"].sum()
...
But pandas provides the groupby method to do this:
End of explanation
"""
df = pd.read_csv("data/titanic.csv")
df.head()
"""
Explanation: And many more methods are available.
And now applying this on some real data
We go back to the titanic survival data:
End of explanation
"""
df.groupby('Sex')['Age'].mean()
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Using groupby(), calculate the average age for each sex.
</div>
End of explanation
"""
df['Survived'].sum() / len(df['Survived'])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the average survival ratio for all passengers.
</div>
End of explanation
"""
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).
</div>
End of explanation
"""
def survival_ratio(survived):
return survived.sum() / len(survived)
df.groupby('Sex')['Survived'].aggregate(survival_ratio)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Is there a difference in this survival ratio between the sexes? (tip: write the above calculation of the survival ratio as a function)
</div>
End of explanation
"""
df.groupby('Pclass')['Survived'].aggregate(survival_ratio).plot(kind='bar')
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Make a bar plot of the survival ratio for the different classes ('Pclass' column).
</div>
End of explanation
"""
|
diana-hep/c2numpy | commonblock/commonblock-demo.ipynb | apache-2.0 | import numpy
import commonblock
tracks = commonblock.NumpyCommonBlock(
trackermu_qoverp = numpy.zeros(1000, dtype=numpy.double),
trackermu_qoverp_err = numpy.zeros(1000, dtype=numpy.double),
trackermu_phi = numpy.zeros(1000, dtype=numpy.double),
trackermu_eta = numpy.zeros(1000, dtype=numpy.double),
trackermu_dxy = numpy.zeros(1000, dtype=numpy.double),
trackermu_dz = numpy.zeros(1000, dtype=numpy.double),
globalmu_qoverp = numpy.zeros(1000, dtype=numpy.double),
globalmu_qoverp_err = numpy.zeros(1000, dtype=numpy.double))
hits = commonblock.NumpyCommonBlock(
detid = numpy.zeros(5000, dtype=numpy.uint64),
localx = numpy.zeros(5000, dtype=numpy.double),
localy = numpy.zeros(5000, dtype=numpy.double),
localx_err = numpy.zeros(5000, dtype=numpy.double),
localy_err = numpy.zeros(5000, dtype=numpy.double))
"""
Explanation: Zero-copy communication between C++ and Python
Numpy arrays are just C arrays wrapped with metadata in Python. Thus, we can share data between C and Python without even copying. In general, this communication is
not overwrite safe: Python protects against out-of-range indexes, but C/C++ does not;
not type safe: no guarantee that C and Python will interpret bytes in memory the same way, including endianness;
not thread safe: no protection at all against concurrent access.
But without much overhead, we can wrap a shared array (or collection of arrays) in two APIs— one in C++, one in Python— to provide these protections.
commonblock is a nascent library to do this. It passes array lengths and types from Python to C++ via ctypes and uses librt.so (wrapped by prwlock in Python) to implement locks that are usable on both sides.
End of explanation
"""
import FWCore.ParameterSet.Config as cms
process = cms.Process("Demo")
process.load("FWCore.MessageService.MessageLogger_cfi")
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(1000))
process.source = cms.Source(
"PoolSource", fileNames = cms.untracked.vstring("file:MuAlZMuMu-2016H-002590494DA0.root"))
process.demo = cms.EDAnalyzer(
"DemoAnalyzer",
tracks = cms.uint64(tracks.pointer()), # pass the arrays to C++ as a pointer
hits = cms.uint64(hits.pointer()))
process.p = cms.Path(process.demo)
"""
Explanation: Using it in CMSSW
CMSSW can be executed within a Python process, thanks to Chris's PR #17236. Since the configuration language is also in Python, you can build the configuration and start CMSSW in the same Python process.
We can get our common block into CMSSW by passing its pointer as part of a ParameterSet. Since this is all one process, that pointer address is still valid when CMSSW launches.
End of explanation
"""
import threading
import libFWCorePythonFramework
import libFWCorePythonParameterSet
class CMSSWThread(threading.Thread):
def __init__(self, process):
super(CMSSWThread, self).__init__()
self.process = process
def run(self):
processDesc = libFWCorePythonParameterSet.ProcessDesc()
self.process.fillProcessDesc(processDesc.pset())
cppProcessor = libFWCorePythonFramework.PythonEventProcessor(processDesc)
cppProcessor.run()
"""
Explanation: On the C++ side
NumpyCommonBlock.h is a header-only library that defines the interface. We pick up the object by casting the pointer:
tracksBlock = (NumpyCommonBlock*)iConfig.getParameter<unsigned long long>("tracks");
hitsBlock = (NumpyCommonBlock*)iConfig.getParameter<unsigned long long>("hits");
and then get safe accessors to each array with a templated method that checks C++'s compiled type against Python's runtime type.
```
trackermu_qoverp = tracksBlock->newAccessor<double>("trackermu_qoverp");
trackermu_qoverp_err = tracksBlock->newAccessor<double>("trackermu_qoverp_err");
trackermu_phi = tracksBlock->newAccessor<double>("trackermu_phi");
trackermu_eta = tracksBlock->newAccessor<double>("trackermu_eta");
trackermu_dxy = tracksBlock->newAccessor<double>("trackermu_dxy");
trackermu_dz = tracksBlock->newAccessor<double>("trackermu_dz");
globalmu_qoverp = tracksBlock->newAccessor<double>("globalmu_qoverp");
globalmu_qoverp_err = tracksBlock->newAccessor<double>("globalmu_qoverp_err");
detid = hitsBlock->newAccessor<uint64_t>("detid");
localx = hitsBlock->newAccessor<double>("localx");
localy = hitsBlock->newAccessor<double>("localy");
localx_err = hitsBlock->newAccessor<double>("localx_err");
localy_err = hitsBlock->newAccessor<double>("localy_err");
```
Running CMSSW
Chris's PythonEventProcessor.run() method blocks, so I put it in a thread to let CMSSW and Python run at the same time.
I had to release the GIL with PR #18683 to make this work, and that feature will work its way into releases eventually.
End of explanation
"""
cmsswThread = CMSSWThread(process)
cmsswThread.start()
tracks.wait(1) # CMSSW notifies that it has filled the tracks array
tracks.pandas()
hits.pandas()
%matplotlib inline
tracks.pandas().plot.hist()
df = hits.pandas()
df[numpy.abs(df.localy) > 0].plot.hexbin(x="localx", y="localy", gridsize=25)
"""
Explanation: Demonstration
In this demo, I loop over AlCaZMuMu muons and fill the arrays with track parameters (before and after adding muon hits to the fit) and display them as Pandas DataFrames as soon as they're full (before CMSSW finishes).
The idea is that one would stream data from CMSSW into some Python thing in large blocks (1000 tracks/5000 hits at a time in this example).
Bi-directional communication is possible, but I don't know what it could be used for.
End of explanation
"""
|
iamfullofspam/hep_ml | notebooks/DemoNeuralNetworks.ipynb | apache-2.0 | !cd toy_datasets; wget -O ../data/MiniBooNE_PID.txt -nc MiniBooNE_PID.txt https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt
import numpy, pandas
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
data = pandas.read_csv('../data/MiniBooNE_PID.txt', sep='\s\s*', skiprows=[0], header=None, engine='python')
labels = pandas.read_csv('../data/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None)
labels = [1] * labels[1].values[0] + [0] * labels[2].values[0]
data.columns = ['feature_{}'.format(key) for key in data.columns]
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5, test_size=0.5, random_state=42)
"""
Explanation: Neural networks
Neural networks inside hep_ml are very simple, but flexible. They use the theano library.
hep_ml.nnet also provides tools to optimize any continuous expression as a decision function (there is an example below).
Downloading a dataset
Downloading the dataset from UCI and splitting it into train and test sets
End of explanation
"""
from hep_ml.nnet import MLPClassifier
from sklearn.metrics import roc_auc_score
clf = MLPClassifier(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
proba = clf.predict_proba(test_data)
print('Test quality:', roc_auc_score(test_labels, proba[:, 1]))
proba = clf.predict_proba(train_data)
print('Train quality:', roc_auc_score(train_labels, proba[:, 1]))
"""
Explanation: Example of training a network
Training a multilayer perceptron with one hidden layer of 5 neurons.
In most cases, we simply use MLPClassifier with one or two hidden layers.
End of explanation
"""
from hep_ml.nnet import AbstractNeuralNetworkClassifier
from theano import tensor as T
class SimpleNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# creating parameters of neural network
W1 = self._create_matrix_parameter('W1', n1, n2)
W2 = self._create_matrix_parameter('W2', n2, n3)
# defining activation function
def activation(input):
first = T.nnet.sigmoid(T.dot(input, W1))
return T.dot(first, W2)
return activation
clf = SimpleNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
"""
Explanation: Creating your own neural network
To create your own neural network, you should provide an activation function and define the parameters of the network.
You are not limited here to any kind of structure in this function; hep_ml.nnet will treat it as a black box for optimization.
The simplest way is to override the prepare method of AbstractNeuralNetworkClassifier.
End of explanation
"""
from hep_ml.nnet import PairwiseNeuralNetwork
clf = PairwiseNeuralNetwork(layers=[5], epochs=500)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
"""
Explanation: Example of a very specific neural network
This NN has one hidden layer, but the layer is quite unusual, as it captures pairwise correlations between features.
End of explanation
"""
class CustomNeuralNetwork(AbstractNeuralNetworkClassifier):
def prepare(self):
# getting number of layers in input, hidden, output layers
# note that we support only one hidden layer here
n1, n2, n3 = self.layers_
# checking that we have three variables in input + constant
assert n1 == 3 + 1
# creating parameters
c1 = self._create_scalar_parameter('c1')
c2 = self._create_scalar_parameter('c2')
c3 = self._create_scalar_parameter('c3')
c4 = self._create_scalar_parameter('c4')
c5 = self._create_scalar_parameter('c5')
# defining activation function
def activation(input):
v1, v2, v3 = input[:, 0], input[:, 1], input[:, 2]
return c1 * v1 + c2 * T.log(T.exp(v2 + v3) + T.exp(c3)) + c4 * v3 / v2 + c5
return activation
"""
Explanation: Fitting very specific expressions as estimators
One can use hep_ml.nnet to optimize any expression as a black box.
For simplicity, let's assume we have only three variables: $\text{var}_1, \text{var}_2, \text{var}_3.$
And from physical intuition we are sure that this is a good expression to discriminate signal from background:
$$\text{output} = c_1 \text{var}_1 + c_2 \log \left[ \exp(\text{var}_2 + \text{var}_3) + \exp(c_3) \right] + c_4 \dfrac{\text{var}_3}{\text{var}_2} + c_5 $$
Note: I have written some random expression here; in practice it comes from physical intuition (or from looking at the data).
End of explanation
"""
from sklearn.base import BaseEstimator, TransformerMixin
from rep.utils import Flattener
class Uniformer(BaseEstimator, TransformerMixin):
# leaving only 3 features and flattening each variable
def fit(self, X, y=None):
self.transformers = []
X = numpy.array(X, dtype=float)
for column in range(X.shape[1]):
self.transformers.append(Flattener(X[:, column]))
return self
def transform(self, X):
X = numpy.array(X, dtype=float)
assert X.shape[1] == len(self.transformers)
for column, trans in enumerate(self.transformers):
X[:, column] = trans(X[:, column])
return X
# selecting three features to train:
train_features = train_data.columns[:3]
clf = CustomNeuralNetwork(layers=[5], epochs=1000, scaler=Uniformer())
clf.fit(train_data[train_features], train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data[train_features])[:, 1]))
"""
Explanation: Writing a custom pretransformer
Below we define a very simple scikit-learn transformer which transforms each feature to be uniformly distributed on the range [0, 1]
End of explanation
"""
from sklearn.ensemble import AdaBoostClassifier
base_nnet = MLPClassifier(layers=[5], scaler=Uniformer())
clf = AdaBoostClassifier(base_estimator=base_nnet, n_estimators=10)
clf.fit(train_data, train_labels)
print('Test quality:', roc_auc_score(test_labels, clf.predict_proba(test_data)[:, 1]))
"""
Explanation: Ensembling of neural networks
Let's run the AdaBoost algorithm over a neural network. Boosting of neural networks is rarely seen in practice due to its high computational cost and minor gains (but it is not senseless)
End of explanation
"""
|
abhi1509/deep-learning | transfer-learning/Transfer_Learning.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. Download the parameter file using the next cell.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
labels_vecs = lb.fit_transform(labels)  # one-hot encoded labels array
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))

# Split the held-out 20% in half: validation and test sets
half_val_len = int(len(val_idx) / 2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)  # output layer logits
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))  # cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost)  # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(10):  # number of epochs
        for x, y in get_batches(train_x, train_y):
            sess.run(optimizer, feed_dict={inputs_: x, labels_: y})
    saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
sellaroliandrea/matrix | .ipynb_checkpoints/matrici2-checkpoint.ipynb | mit | import sys; sys.path.append('pyggb')
%reload_ext geogebra_magic
%ggb --width 800 --height 400 --showToolBar 0 --showResetIcon 1 trasformazioni.ggb
"""
Explanation: Alcune applicazioni delle matrici
Subito dopo aver introdotto le matrici e viste le operazioni fondamentali di somma, prodotto e determinante una domanda sorge spontanea. A cosa servono? In verità è una questione ricorrente per tanti argomenti della matematica, ma le matrici sembrano davvero un ''oggetto'' complicato inutilmente. Perché non potevo chiamarle direttamente "tabelle di numeri"? Tutto sommato anche con le tabelle in alcuni casi può essere fatta la somma tra elementi nella stessa posizione della tabella. E la statistica ci ha insegnato come trattare le tabelle di numeri calcolando somme e medie, mediane e deviazioni standard. E ancora, il prodotto tra due matrici era proprio necessario definirlo in quel modo così "artificioso"? Non bastava moltiplicare i singoli componenti come si fa per l'addizione? Tra l'altro in questo modo avremmo ottenuto un'operazione commutativa che è un terreno che conosciamo molto meglio. Per non parlare del determinante, che per calcolarlo abbiamo dovuto scomodare funzioni ricorsive senza però avere nulla di vantaggioso per la matematica che abbiamo studiato.
Una prima risposta viene dalla soluzione dei sistemi lineari. Se consideriamo il sistema
$$\begin{matrix}
a_{1,1}x + a_{1,2}y + a_{1,3}z = b_1\
a_{2,1}x + a_{2,2}y + a_{2,3}z = b_2\
a_{3,1}x + a_{3,2}y + a_{3,3}z = b_3\
\end{matrix}
$$
possiamo scriverlo in forma più compatta utilizzando il prodotto tra matrici così
$$\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} \
a_{2,1} & a_{2,2} & a_{2,3} \
a_{3,1} & a_{3,2} & a_{3,3}\
\end{pmatrix}
\begin{pmatrix}
x \
y \
z \
\end{pmatrix}=
\begin{pmatrix}
b_1\
b_2 \
b_3 \
\end{pmatrix}
$$
In effetti un sistema è determinato dal valore dei coefficienti $a_{i,j}$ e $b_{k}$. Con la scrittura in forma matriciale posso lavorare solo sui coefficienti senza dovermi portare dietro le $x$,$y$ e $z$. Il metodo di riduzione (talvolta chiamato di addizione e sottrazione) opportunamente generalizzato alle matrici permette di risolvere sistemi lineari di qualsiasi dimensione. Tale metodo viene chiamato Eliminazione di Gauss.
Ritengo tuttavia che la soluzione di sistemi lineari, per quanto sia un ottima applicazione delle matrici, non permetta di cogliere la potenza di questo nuovo strumento. Nel seguito vederemo tre applicazioni delle matrici a diversi settori.
Trasformazioni geometriche piane
Una trasformazione del piano è una funzione che ad un punto del piano, che indicheremo con le due componenti cartesiane $(x,y)$, associa un altro punto del piano $(x',y')$. Ad esempio la trasformazione
$$\begin{matrix}
x' = x\
y' = -y\
\end{matrix}
$$
è la simmetria rispetto all'asse delle x. Usando la notazione matriciale, analogamente a quanto abbiamo visto per i sistemi, possiamo scrivere questa trasformazione così:
$$
\begin{pmatrix}
x' \\
y'
\end{pmatrix}=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
$$
With this GeoGebra file you can explore the matrices that generate the axial reflections, the central symmetry (about the origin), the homotheties and the rotations.
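The same reflection can also be checked numerically (a small sketch with NumPy, not part of the original file; the point is an arbitrary example):

```python
import numpy as np

# Reflection about the x-axis as a 2x2 matrix
S = np.array([[1.0,  0.0],
              [0.0, -1.0]])

p = np.array([3.0, 4.0])  # an example point
print(S @ p)              # [ 3. -4.] : x is kept, y changes sign
```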
End of explanation
"""
from IPython.display import Image
Image(filename='noncomm.png')
"""
Explanation: Composing two transformations
If, for example, we wanted to apply first a reflection (here, about the y-axis) and then a rotation, the two transformations are usually written like this:
$$\begin{matrix}
x' = -x\\
y' = y
\end{matrix}
$$
$$\begin{matrix}
x'' = x' \cos \alpha + y' \sin \alpha\\
y'' = -x' \sin \alpha + y' \cos \alpha
\end{matrix}
$$
Substituting $x'$ and $y'$ into the second system yields the composite transformation.
A first remarkable fact we can observe is that the composition of transformations is not always commutative. A simple example is enough to convince oneself. Indeed, transformations are nothing but functions and, as we know, the composition of functions is not commutative in general.
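This non-commutativity is easy to verify numerically (a sketch, not part of the original notebook; a 90-degree rotation and a reflection about the y-axis are used as the example pair):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # rotation by 90 degrees
F = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])                      # reflection about the y-axis

# Composing in the two possible orders gives different matrices
print(R @ F)
print(F @ R)
print(np.allclose(R @ F, F @ R))  # False
```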
End of explanation
"""
%ggb --width 800 --height 400 --showToolBar 1 --showResetIcon 1 determinante.ggb
"""
Explanation: We have already seen that matrix multiplication is not commutative. A few experiments show that, once transformations are represented by matrices, composing transformations is nothing but the matrix product. Let us compose two rotations by angles $\alpha$ and $\beta$.
$$
\begin{pmatrix}
\cos \alpha & \sin \alpha \\
-\sin \alpha & \cos \alpha
\end{pmatrix}
\begin{pmatrix}
\cos \beta & \sin \beta \\
-\sin \beta & \cos \beta
\end{pmatrix}=
\begin{pmatrix}
\cos \alpha \cos \beta - \sin \alpha \sin \beta & \cos \alpha \sin \beta + \sin \alpha \cos \beta \\
-(\cos \alpha \sin \beta + \sin \alpha \cos \beta) & \cos \alpha \cos \beta - \sin \alpha \sin \beta
\end{pmatrix}
$$
Using the angle-addition formulas for sine and cosine, this is exactly the rotation by $\alpha + \beta$.
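A numerical check of this identity (a sketch, not part of the original notebook; the rotation convention matches the matrices above):

```python
import numpy as np

def rotation(angle):
    """2x2 rotation matrix in the convention used above."""
    return np.array([[ np.cos(angle), np.sin(angle)],
                     [-np.sin(angle), np.cos(angle)]])

a, b = 0.3, 1.1
# Composing rotations by a and b equals a single rotation by a + b
print(np.allclose(rotation(a) @ rotation(b), rotation(a + b)))  # True
```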
Determinant
Let us now try to understand the meaning of the determinant. We compute a few determinants (by hand for now, until the Python program is completed). We immediately notice that it is $-1$ for axial reflections, always $1$ for rotations (by the Pythagorean identity), and $k^2$ for homotheties, where $k$ is the homothety factor. Let us run some experiments with GeoGebra to understand how the determinant behaves.
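The same determinant values can be checked numerically (a sketch, not part of the original notebook; the angle and the factor $k$ are arbitrary examples):

```python
import numpy as np

reflection = np.array([[1.0, 0.0],
                       [0.0, -1.0]])
angle = 0.7
rotation = np.array([[ np.cos(angle), np.sin(angle)],
                     [-np.sin(angle), np.cos(angle)]])
k = 3.0
homothety = np.array([[k, 0.0],
                      [0.0, k]])

print(np.linalg.det(reflection))  # -1 (up to floating-point rounding)
print(np.linalg.det(rotation))    #  1 (cos^2 + sin^2)
print(np.linalg.det(homothety))   #  9, i.e. k**2
```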
End of explanation
"""
|
TheOregonian/long-term-care-db | notebooks/analysis/washington-gardens.ipynb | mit | import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
df = pd.read_csv('../../data/processed/complaints-3-25-scrape.csv')
"""
Explanation: Data were munged here.
End of explanation
"""
move_in_date = '2015-05-01'
"""
Explanation: <h3>How many substantiated complaints occurred by the time Marian Ewins moved to Washington Gardens?</h3>
Marian Ewins moved into Washington Gardens in May 2015.
End of explanation
"""
df[(df['facility_id']=='50R382') & (df['incident_date']<move_in_date)].count()[0]
"""
Explanation: The facility_id for Washington Gardens is 50R382.
End of explanation
"""
|
ptosco/rdkit | Docs/Notebooks/RGroupDecomposition-StereoChemTest.ipynb | bsd-3-clause | from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem import rdRGroupDecomposition
from IPython.display import HTML
from rdkit import rdBase
rdBase.DisableLog("rdApp.debug")
import pandas as pd
from rdkit.Chem import PandasTools
m = Chem.MolFromSmarts("C1CCO[C@@](*)(*)1")
"""
Explanation: This example shows how RGroupDecomposition works with stereochemistry matching.
If stereochemistry is specified in the core, the RGroup decomposition correctly assigns R1 and R2.
End of explanation
"""
m
"""
Explanation: Perhaps we should file a bug that smarts doesn't show stereochem here.
End of explanation
"""
el = "NOPS"
mols = []
for e in el:
smi = "C1CCO[C@@H](%s)1"%e
print(smi)
m = Chem.MolFromSmiles(smi)
mols.append(m)
smi = "C1CCO[C@H](%s)1"%e
print(smi)
m = Chem.MolFromSmiles(smi)
mols.append(m)
for e2 in el:
if e != e2:
smi = "C1CCO[C@@](%s)(%s)1"%(e,e2)
m = Chem.MolFromSmiles(smi)
if m:
print(smi)
mols.append(m)
from rdkit.Chem import Draw
Draw.MolsToGridImage(mols)
hmols = [Chem.AddHs(m) for m in mols]
Draw.MolsToGridImage(hmols)
core = Chem.MolFromSmarts("C1CCO[C@@](*)(*)1")
"""
Explanation: Make some example stereochemistries
End of explanation
"""
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for i,m in enumerate(mols):
rgroups.Add(m)
if i == 10:
break
"""
Explanation: Make RGroup decomposition!
To use RGroupDecomposition:
construct the class on the core rg = RGroupDecomposition(core)
Call rg.Add( mol ) on the molecules. If this returns -1, the molecule is not
compatible with the core
After all molecules are added, call rg.Process() to complete the rgroup
decomposition.
End of explanation
"""
rgroups.Process()
"""
Explanation: We need to call Process after all molecules have been added, to optimize the R-groups.
End of explanation
"""
groups = rgroups.GetRGroupsAsColumns()
frame = pd.DataFrame(groups)
PandasTools.ChangeMoleculeRendering(frame)
"""
Explanation: The RGroupDecomposition code is quite compatible with the python pandas integration.
Calling rg.GetRGroupsAsColumns() can be sent directly into a pandas table.
n.b. You need to call PandasTools.ChangeMoleculeRendering(frame) to allow the molecules
to be rendered properly.
End of explanation
"""
HTML(frame.to_html())
"""
Explanation: The first two (0 and 1) are different due to the stereochemistry difference.
I still haven't found a way to show the core with stereochem which is annoying.
End of explanation
"""
core = Chem.MolFromSmarts("C1CCOC1")
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for m in mols:
rgroups.Add(m)
rgroups.Process()
frame = pd.DataFrame(rgroups.GetRGroupsAsColumns())
PandasTools.ChangeMoleculeRendering(frame)
HTML(frame.to_html())
"""
Explanation: Let's try the same without stereochemistry in the core.
Note that the core has symmetry, and the analysis nicely accounts for this during
the side-chain optimization.
N.b. stereo information is lost here, which is fairly annoying and will be analyzed at a later date.
End of explanation
"""
|
subimal/class-demos | LissajousFigures.ipynb | gpl-3.0 | import numpy as np
import pylab as pl
"""
Explanation: <a href="https://colab.research.google.com/github/subimal/class-demos/blob/master/LissajousFigures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lissajous figures
Import the required libraies
End of explanation
"""
def gcd(a,b):
if a<b:
a,b = b,a
while b>0:
a,b=b,a%b
return a
def lcm(a,b):
return a*b//gcd(a,b)
"""
Explanation: We will need to calculate the LCM of the time periods for the two oscillators. Hence we define the LCM of two integers.
End of explanation
"""
# amplitudes
A = np.array([1,1])
# frequencies
f = np.array([3,5])
# phased difference
delta = 0*np.pi/4
# time periods
T = 1/f
# the time grid
tmin, tmax = 0, lcm(f[0], f[1])
dt=(tmax-tmin)/2000
t=np.arange(tmin, tmax+dt, dt)
# the oscillations - mutually perpendicular
x = A[0]*np.cos(2*np.pi*f[0]*t )
y = A[1]*np.sin(2*np.pi*f[1]*t+delta)
fig = pl.figure(1, figsize=(7,7))
ax = fig.add_subplot(111)
pl.plot(x,y)
pl.grid('on')
d=1.1*max(A)
pl.xlim((-d, d))
pl.ylim((-d, d))
ax.set_aspect(1)
"""
Explanation: The parameters of the oscillators. You need not change anything from the time periods onward.
If the curve is not smooth enough, increase the number of time-grid points (i.e., decrease dt).
End of explanation
"""
|
nlesc-sherlock/analyzing-corpora | notebooks/IntroductionToTopicModeling.ipynb | apache-2.0 | %pylab inline
import scipy.stats as ss
n = 10
w = random.random(size=n)
w = w / sum(w)
topics = []
for i in range(n):
mu = random.uniform(-5,5)
t = ss.norm(mu,1)
topics.append(t)
"""
Explanation: Overview
Generally speaking, a topic model describes documents through probability distributions over words. But what does this mean? In topic modeling, we start from the assumption that there exist a number of topics $N$ -- a topic being a random variable which generates words from a vocabulary with certain probabilities. A document is assumed to be produced from a mixture of samples from one or many topics. A document would be generated as follows:
Given $N$ topics and weights $W = \{ w_n \}$ for each topic:

    for i in lengthOfDocument:
        n = selectTopicFrom(W)
        doc[i] = selectAWord(topic[n])
Notice that a document here is expressed as a bag-of-words, and therefore the ordering of words is not taken into account.
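The generative process above can be sketched as runnable code (a toy version with a made-up vocabulary; the names and word distributions are illustrative, not part of any library):

```python
import random

random.seed(0)

# Toy topics: each topic is a word distribution over a small vocabulary
topics = [
    {"ball": 0.5, "goal": 0.4, "team": 0.1},  # a "sports"-like topic
    {"vote": 0.6, "law": 0.3, "team": 0.1},   # a "politics"-like topic
]
weights = [0.7, 0.3]  # mixture weights W over the topics

def generate_document(length):
    doc = []
    for _ in range(length):
        # 1. pick a topic according to the mixture weights
        topic = random.choices(topics, weights=weights)[0]
        # 2. pick a word from that topic's word distribution
        words, probs = zip(*topic.items())
        doc.append(random.choices(words, weights=probs)[0])
    return doc

print(generate_document(10))
```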
End of explanation
"""
x = linspace(-10,10,1000)
y = np.zeros(x.shape)
for ti,wi in zip(topics,w):
y += wi * ti.pdf(x)
plot(x,y);
"""
Explanation: So what we observe from a document is (more or less) a distribution over words:
End of explanation
"""
for ti,wi in zip(topics,w):
yi = wi * ti.pdf(x)
plot(x,yi)
"""
Explanation: But in reality, the document is generated by a more complicated process which we cannot observe directly (a mixture of distributions):
End of explanation
"""
|
RaspberryJamBe/ipython-notebooks | notebooks/en-gb/Communication - Send mails.ipynb | cc0-1.0 | MAIL_SERVER = "mail.****.com"
FROM_ADDRESS = "noreply@****.com"
TO_ADDRESS = "my_friend@****.com"
"""
Explanation: Requirement:
For sending mail you need an outgoing mail server (which, in the case of this script, also needs to allow unauthenticated outgoing communication). Fill out the required credentials in the following variables:
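For reference, the same can be done with nothing but the Python standard library (a sketch; the server name and addresses below are placeholders, and the actual send is commented out since it requires a reachable SMTP server):

```python
import smtplib
from email.message import EmailMessage

# Placeholder values -- substitute your own server and addresses
MAIL_SERVER = "mail.example.com"
FROM_ADDRESS = "noreply@example.com"
TO_ADDRESS = "my_friend@example.com"

msg = EmailMessage()
msg["Subject"] = "Hello from Raspberry Pi"
msg["From"] = FROM_ADDRESS
msg["To"] = TO_ADDRESS
msg.set_content("Hi there!")

# Uncomment to actually send (needs a reachable, open SMTP server):
# with smtplib.SMTP(MAIL_SERVER) as server:
#     server.send_message(msg)
```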
End of explanation
"""
from sender import Mail
mail = Mail(MAIL_SERVER)
mail.fromaddr = ("Secret admirer", FROM_ADDRESS)
mail.send_message("Raspberry Pi has a soft spot for you", to=TO_ADDRESS, body="Hi sweety! Grab a smoothie?")
"""
Explanation: Sending a mail is, with the proper library, a piece of cake...
End of explanation
"""
APPKEY = "******"
mail.fromaddr = ("Your doorbell", FROM_ADDRESS)
mail_to_addresses = {
"Donald Duck":"dd@****.com",
"Maleficent":"mf@****.com",
"BigBadWolf":"bw@****.com"
}
def on_message(sender, channel, message):
mail_message = "{}: Call for {}".format(channel, message)
print(mail_message)
mail.send_message("Raspberry Pi alert!", to=mail_to_addresses[message], body=mail_message)
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
def on_connected(sender):
print('Connected')
oc.subscribe('doorbell', True, on_message)
oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)
"""
Explanation: ... but if we take it a little further, we can connect our doorbell project to the sending of mail!
APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription.
See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" voor meer gedetailleerde info. info.
End of explanation
"""
|
do-mpc/do-mpc | documentation/source/example_gallery/industrial_poly.ipynb | lgpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
import sys
from casadi import *
# Add do_mpc to path. This is not necessary if it was installed via pip
sys.path.append('../../../')
# Import do_mpc package:
import do_mpc
"""
Explanation: Industrial polymerization reactor
In this Jupyter Notebook we illustrate the example industrial_poly.
Open an interactive online Jupyter Notebook with this content on Binder:
The example consists of the three modules template_model.py, which describes the system model, template_mpc.py, which defines the settings for the control and template_simulator.py, which sets the parameters for the simulator.
The modules are used in main.py for the closed-loop execution of the controller.
In the following the different parts are presented. But first, we start by importing basic modules and do-mpc.
End of explanation
"""
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
"""
Explanation: Model
In the following we will present the configuration, setup and connection between these blocks, starting with the model.
The considered model of the industrial reactor is continuous and has 10 states and 3 control inputs.
The model is initiated by:
End of explanation
"""
# Certain parameters
R = 8.314 #gas constant
T_F = 25 + 273.15 #feed temperature
E_a = 8500.0 #activation energy
delH_R = 950.0*1.00 #sp reaction enthalpy
A_tank = 65.0 #area heat exchanger surface jacket 65
k_0 = 7.0*1.00 #sp reaction rate
k_U2 = 32.0 #reaction parameter 1
k_U1 = 4.0 #reaction parameter 2
w_WF = .333 #mass fraction water in feed
w_AF = .667 #mass fraction of A in feed
m_M_KW = 5000.0 #mass of coolant in jacket
fm_M_KW = 300000.0 #coolant flow in jacket 300000;
m_AWT_KW = 1000.0 #mass of coolant in EHE
fm_AWT_KW = 100000.0 #coolant flow in EHE
m_AWT = 200.0 #mass of product in EHE
fm_AWT = 20000.0 #product flow in EHE
m_S = 39000.0 #mass of reactor steel
c_pW = 4.2 #sp heat cap coolant
c_pS = .47 #sp heat cap steel
c_pF = 3.0 #sp heat cap feed
c_pR = 5.0 #sp heat cap reactor contents
k_WS = 17280.0 #heat transfer coeff water-steel
k_AS = 3600.0 #heat transfer coeff monomer-steel
k_PS = 360.0 #heat transfer coeff product-steel
alfa = 5*20e4*3.6
p_1 = 1.0
"""
Explanation: System description
The system consists of a reactor into which monomer is fed.
The monomer turns into a polymer via a highly exothermic chemical reaction.
The reactor is equipped with a jacket and with an External Heat Exchanger (EHE), both of which can be used to control the temperature inside the reactor.
A schematic representation of the system is presented below:
The process is modeled by a set of 8 ordinary differential equations (ODEs):
\begin{align}
\dot{m}_{\text{W}} &= \ \dot{m}_{\text{F}}\, \omega_{\text{W,F}}, \\
\dot{m}_{\text{A}} &= \ \dot{m}_{\text{F}}\, \omega_{\text{A,F}} - k_{\text{R1}}\, m_{\text{A,R}} - k_{\text{R2}}\, m_{\text{AWT}}\, m_{\text{A}}/m_{\text{ges}}, \\
\dot{m}_{\text{P}} &= \ k_{\text{R1}}\, m_{\text{A,R}} + p_{1}\, k_{\text{R2}}\, m_{\text{AWT}}\, m_{\text{A}}/m_{\text{ges}}, \\
\dot{T}_{\text{R}} &= \ 1/(c_{\text{p,R}}\, m_{\text{ges}})\; [\dot{m}_{\text{F}}\, c_{\text{p,F}}\left(T_{\text{F}}-T_{\text{R}}\right) + \Delta H_{\text{R}}\, k_{\text{R1}}\, m_{\text{A,R}} - k_{\text{K}} A\left(T_{\text{R}}-T_{\text{S}}\right) \notag\\
&\quad - \dot{m}_{\text{AWT}}\, c_{\text{p,R}}\left(T_{\text{R}}-T_{\text{EK}}\right)], \\
\dot{T}_{\text{S}} &= \ 1/(c_{\text{p,S}}\, m_{\text{S}})\; [k_{\text{K}} A\left(T_{\text{R}}-T_{\text{S}}\right) - k_{\text{K}} A\left(T_{\text{S}}-T_{\text{M}}\right)], \\
\dot{T}_{\text{M}} &= \ 1/(c_{\text{p,W}}\, m_{\text{M,KW}})\; [\dot{m}_{\text{M,KW}}\, c_{\text{p,W}}\left(T_{\text{M}}^{\text{IN}}-T_{\text{M}}\right) + k_{\text{K}} A\left(T_{\text{S}}-T_{\text{M}}\right)], \\
\dot{T}_{\text{EK}} &= \ 1/(c_{\text{p,R}}\, m_{\text{AWT}})\; [\dot{m}_{\text{AWT}}\, c_{\text{p,W}}\left(T_{\text{R}}-T_{\text{EK}}\right) - \alpha\left(T_{\text{EK}}-T_{\text{AWT}}\right) \notag\\
&\quad + k_{\text{R2}}\, m_{\text{A}}\, m_{\text{AWT}}\, \Delta H_{\text{R}}/m_{\text{ges}}], \\
\dot{T}_{\text{AWT}} &= \ [\dot{m}_{\text{AWT,KW}}\, c_{\text{p,W}}\,(T_{\text{AWT}}^{\text{IN}}-T_{\text{AWT}}) - \alpha\left(T_{\text{AWT}}-T_{\text{EK}}\right)]/(c_{\text{p,W}}\, m_{\text{AWT,KW}}),
\end{align}
where
\begin{align}
U &= \ m_{\text{P}}/(m_{\text{A}}+m_{\text{P}}), \\
m_{\text{ges}} &= \ m_{\text{W}}+m_{\text{A}}+m_{\text{P}}, \\
k_{\text{R1}} &= \ k_{0}\, e^{\frac{-E_{a}}{R (T_{\text{R}}+273.15)}}\left(k_{\text{U1}}\left(1-U\right)+k_{\text{U2}}\, U\right), \\
k_{\text{R2}} &= \ k_{0}\, e^{\frac{-E_{a}}{R (T_{\text{EK}}+273.15)}}\left(k_{\text{U1}}\left(1-U\right)+k_{\text{U2}}\, U\right), \\
k_{\text{K}} &= \ (m_{\text{W}} k_{\text{WS}} + m_{\text{A}} k_{\text{AS}} + m_{\text{P}} k_{\text{PS}})/m_{\text{ges}}, \\
m_{\text{A,R}} &= \ m_{\text{A}} - m_{\text{A}}\, m_{\text{AWT}}/m_{\text{ges}}.
\end{align}
The model includes mass balances for the water, monomer and product hold-ups ($m_\text{W}$, $m_\text{A}$, $m_\text{P}$) and energy balances for the reactor ($T_\text{R}$), the vessel ($T_\text{S}$), the jacket ($T_\text{M}$), the mixture in the external heat exchanger ($T_{\text{EK}}$) and the coolant leaving the external heat exchanger ($T_{\text{AWT}}$).
The variable $U$ denotes the polymer-monomer ratio in the reactor, $m_{\text{ges}}$ represents the total mass, $k_{\text{R1}}$ is the reaction rate inside the reactor and $k_{\text{R2}}$ is the reaction rate in the external heat exchanger. The total heat transfer coefficient of the mixture inside the reactor is denoted as $k_{\text{K}}$ and $m_{\text{A,R}}$ represents the current amount of monomer inside the reactor.
The available control inputs are the feed flow $\dot{m}_{\text{F}}$, the coolant temperature at the inlet of the jacket $T^{\text{IN}}_{\text{M}}$ and the coolant temperature at the inlet of the external heat exchanger $T^{\text{IN}}_{\text{AWT}}$.
An overview of the parameters are listed below:
Implementation
First, we set the certain parameters:
End of explanation
"""
# Uncertain parameters:
delH_R = model.set_variable('_p', 'delH_R')
k_0 = model.set_variable('_p', 'k_0')
"""
Explanation: and afterwards the uncertain parameters:
End of explanation
"""
# States struct (optimization variables):
m_W = model.set_variable('_x', 'm_W')
m_A = model.set_variable('_x', 'm_A')
m_P = model.set_variable('_x', 'm_P')
T_R = model.set_variable('_x', 'T_R')
T_S = model.set_variable('_x', 'T_S')
Tout_M = model.set_variable('_x', 'Tout_M')
T_EK = model.set_variable('_x', 'T_EK')
Tout_AWT = model.set_variable('_x', 'Tout_AWT')
accum_monom = model.set_variable('_x', 'accum_monom')
T_adiab = model.set_variable('_x', 'T_adiab')
"""
Explanation: The 10 states of the control problem stem from the 8 ODEs; accum_monom models the amount of monomer that has been fed to the reactor via $\dot{m}^{\text{acc}}_{\text{F}} = \dot{m}_{\text{F}}$, and T_adiab ($T_{\text{adiab}}=\frac{\Delta H_{\text{R}}}{c_{\text{p,R}}} \frac{m_{\text{A}}}{m_{\text{ges}}} + T_{\text{R}}$, hence $\dot{T}_{\text{adiab}}=\frac{\Delta H_{\text{R}}}{m_{\text{ges}} c_{\text{p,R}}}\dot{m}_{\text{A}} - \left(\dot{m}_{\text{W}}+\dot{m}_{\text{A}}+\dot{m}_{\text{P}}\right)\left(\frac{m_{\text{A}} \Delta H_{\text{R}}}{m_{\text{ges}}^2 c_{\text{p,R}}}\right) + \dot{T}_{\text{R}}$) is a virtual state that is important for safety aspects, as we will explain later.
All states are created in do-mpc via:
End of explanation
"""
# Input struct (optimization variables):
m_dot_f = model.set_variable('_u', 'm_dot_f')
T_in_M = model.set_variable('_u', 'T_in_M')
T_in_EK = model.set_variable('_u', 'T_in_EK')
"""
Explanation: and the control inputs via:
End of explanation
"""
# algebraic equations
U_m = m_P / (m_A + m_P)
m_ges = m_W + m_A + m_P
k_R1 = k_0 * exp(- E_a/(R*T_R)) * ((k_U1 * (1 - U_m)) + (k_U2 * U_m))
k_R2 = k_0 * exp(- E_a/(R*T_EK))* ((k_U1 * (1 - U_m)) + (k_U2 * U_m))
k_K = ((m_W / m_ges) * k_WS) + ((m_A/m_ges) * k_AS) + ((m_P/m_ges) * k_PS)
"""
Explanation: Before defining the ODE for each state variable, we create auxiliary terms:
End of explanation
"""
# Differential equations
dot_m_W = m_dot_f * w_WF
model.set_rhs('m_W', dot_m_W)
dot_m_A = (m_dot_f * w_AF) - (k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))) - (p_1 * k_R2 * (m_A/m_ges) * m_AWT)
model.set_rhs('m_A', dot_m_A)
dot_m_P = (k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))) + (p_1 * k_R2 * (m_A/m_ges) * m_AWT)
model.set_rhs('m_P', dot_m_P)
dot_T_R = 1./(c_pR * m_ges) * ((m_dot_f * c_pF * (T_F - T_R)) - (k_K *A_tank* (T_R - T_S)) - (fm_AWT * c_pR * (T_R - T_EK)) + (delH_R * k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))))
model.set_rhs('T_R', dot_T_R)
model.set_rhs('T_S', 1./(c_pS * m_S) * ((k_K *A_tank* (T_R - T_S)) - (k_K *A_tank* (T_S - Tout_M))))
model.set_rhs('Tout_M', 1./(c_pW * m_M_KW) * ((fm_M_KW * c_pW * (T_in_M - Tout_M)) + (k_K *A_tank* (T_S - Tout_M))))
model.set_rhs('T_EK', 1./(c_pR * m_AWT) * ((fm_AWT * c_pR * (T_R - T_EK)) - (alfa * (T_EK - Tout_AWT)) + (p_1 * k_R2 * (m_A/m_ges) * m_AWT * delH_R)))
model.set_rhs('Tout_AWT', 1./(c_pW * m_AWT_KW)* ((fm_AWT_KW * c_pW * (T_in_EK - Tout_AWT)) - (alfa * (Tout_AWT - T_EK))))
model.set_rhs('accum_monom', m_dot_f)
model.set_rhs('T_adiab', delH_R/(m_ges*c_pR)*dot_m_A-(dot_m_A+dot_m_W+dot_m_P)*(m_A*delH_R/(m_ges*m_ges*c_pR))+dot_T_R)
"""
Explanation: The auxiliary terms are used for the more readable definition of the ODEs:
End of explanation
"""
# Build the model
model.setup()
"""
Explanation: Finally, the model setup is completed:
End of explanation
"""
mpc = do_mpc.controller.MPC(model)
"""
Explanation: Controller
Next, the model predictive controller is configured (in template_mpc.py).
First, one member of the mpc class is generated with the prediction model defined above:
End of explanation
"""
setup_mpc = {
'n_horizon': 20,
'n_robust': 1,
'open_loop': 0,
't_step': 50.0/3600.0,
'state_discretization': 'collocation',
'collocation_type': 'radau',
'collocation_deg': 2,
'collocation_ni': 2,
'store_full_solution': True,
# Use MA27 linear solver in ipopt for faster calculations:
#'nlpsol_opts': {'ipopt.linear_solver': 'MA27'}
}
mpc.set_param(**setup_mpc)
"""
Explanation: Real processes are also subject to important safety constraints that are incorporated to account for possible failures of the equipment. In this case, the maximum temperature that the reactor would reach in the case of a cooling failure is constrained to be below $109 ^\circ$C.
The temperature that the reactor would achieve in the case of a complete cooling failure is $T_{\text{adiab}}$, hence it needs to stay beneath $109 ^\circ$C.
We choose the prediction horizon n_horizon and set the robust horizon n_robust to 1. The time step t_step is set to 50 seconds (expressed in hours), and the parameters of the applied discretization scheme, orthogonal collocation, are as seen below:
End of explanation
"""
_x = model.x
mterm = - _x['m_P'] # terminal cost
lterm = - _x['m_P'] # stage cost
mpc.set_objective(mterm=mterm, lterm=lterm)
mpc.set_rterm(m_dot_f=0.002, T_in_M=0.004, T_in_EK=0.002) # penalty on control input changes
"""
Explanation: Objective
The goal of the economic NMPC controller is to produce $20680~\text{kg}$ of $m_{\text{P}}$ as fast as possible.
Additionally, we add a penalty on input changes for all three control inputs, to obtain a smooth control performance.
End of explanation
"""
# auxiliary term
temp_range = 2.0
# lower bound states
mpc.bounds['lower','_x','m_W'] = 0.0
mpc.bounds['lower','_x','m_A'] = 0.0
mpc.bounds['lower','_x','m_P'] = 26.0
mpc.bounds['lower','_x','T_R'] = 363.15 - temp_range
mpc.bounds['lower','_x','T_S'] = 298.0
mpc.bounds['lower','_x','Tout_M'] = 298.0
mpc.bounds['lower','_x','T_EK'] = 288.0
mpc.bounds['lower','_x','Tout_AWT'] = 288.0
mpc.bounds['lower','_x','accum_monom'] = 0.0
# upper bound states
mpc.bounds['upper','_x','T_S'] = 400.0
mpc.bounds['upper','_x','Tout_M'] = 400.0
mpc.bounds['upper','_x','T_EK'] = 400.0
mpc.bounds['upper','_x','Tout_AWT'] = 400.0
mpc.bounds['upper','_x','accum_monom'] = 30000.0
mpc.bounds['upper','_x','T_adiab'] = 382.15
"""
Explanation: Constraints
The temperature at which the polymerization reaction takes place strongly influences the properties of the resulting polymer. For this reason, the temperature of the reactor should be maintained in a range of $\pm 2.0 ^\circ$C around the desired reaction temperature $T_{\text{set}}=90 ^\circ$C in order to ensure that the produced polymer has the required properties.
The initial conditions and the bounds for all states are set via:
End of explanation
"""
mpc.set_nl_cons('T_R_UB', _x['T_R'], ub=363.15+temp_range, soft_constraint=True, penalty_term_cons=1e4)
"""
Explanation: The upper bound of the reactor temperature is set via a soft-constraint:
End of explanation
"""
# lower bound inputs
mpc.bounds['lower','_u','m_dot_f'] = 0.0
mpc.bounds['lower','_u','T_in_M'] = 333.15
mpc.bounds['lower','_u','T_in_EK'] = 333.15
# upper bound inputs
mpc.bounds['upper','_u','m_dot_f'] = 3.0e4
mpc.bounds['upper','_u','T_in_M'] = 373.15
mpc.bounds['upper','_u','T_in_EK'] = 373.15
"""
Explanation: The bounds of the inputs are set via:
End of explanation
"""
# states
mpc.scaling['_x','m_W'] = 10
mpc.scaling['_x','m_A'] = 10
mpc.scaling['_x','m_P'] = 10
mpc.scaling['_x','accum_monom'] = 10
# control inputs
mpc.scaling['_u','m_dot_f'] = 100
"""
Explanation: Scaling
Because the magnitudes of the states and inputs are very different, the performance of the optimizer can be enhanced by properly scaling the states and inputs:
End of explanation
"""
delH_R_var = np.array([950.0, 950.0 * 1.30, 950.0 * 0.70])
k_0_var = np.array([7.0 * 1.00, 7.0 * 1.30, 7.0 * 0.70])
mpc.set_uncertainty_values(delH_R = delH_R_var, k_0 = k_0_var)
"""
Explanation: Uncertain values
In a real system, the model parameters usually cannot be determined exactly, which represents an important source of uncertainty. In this work, we consider that two of the most critical parameters of the model are not precisely known and vary with respect to their nominal values. In particular, we assume that the specific reaction enthalpy $\Delta H_{\text{R}}$ and the specific reaction rate $k_0$ are constant but uncertain, with values that can vary by $\pm 30\%$ with respect to their nominal values.
End of explanation
"""
mpc.setup()
"""
Explanation: This means, with n_robust=1, that 9 different scenarios are considered.
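The count of 9 comes from taking every combination of the three values of each uncertain parameter; a quick sketch (the values mirror the ones used above):

```python
from itertools import product

delH_R_var = [950.0, 950.0 * 1.30, 950.0 * 0.70]
k_0_var = [7.0, 7.0 * 1.30, 7.0 * 0.70]

# Cartesian product of the two uncertainty sets: 3 x 3 = 9 scenarios
scenarios = list(product(delH_R_var, k_0_var))
print(len(scenarios))  # 9
```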
The setup of the MPC controller is concluded by:
End of explanation
"""
estimator = do_mpc.estimator.StateFeedback(model)
"""
Explanation: Estimator
We assume, that all states can be directly measured (state-feedback):
End of explanation
"""
simulator = do_mpc.simulator.Simulator(model)
"""
Explanation: Simulator
To create a simulator in order to run the MPC in a closed-loop, we create an instance of the do-mpc simulator which is based on the same model:
End of explanation
"""
params_simulator = {
'integration_tool': 'cvodes',
'abstol': 1e-10,
'reltol': 1e-10,
't_step': 50.0/3600.0
}
simulator.set_param(**params_simulator)
"""
Explanation: For the simulation, we use the same time step t_step as for the optimizer:
End of explanation
"""
p_num = simulator.get_p_template()
tvp_num = simulator.get_tvp_template()
"""
Explanation: Realizations of uncertain parameters
For the simulation, it is necessary to define the numerical realizations of the uncertain parameters in p_num.
First, we get the structure of the uncertain parameters:
End of explanation
"""
# uncertain parameters
p_num['delH_R'] = 950 * np.random.uniform(0.75,1.25)
p_num['k_0'] = 7 * np.random.uniform(0.75,1.25)
def p_fun(t_now):
return p_num
simulator.set_p_fun(p_fun)
"""
Explanation: We define a function which is called in each simulation step, which returns the current realizations of the parameters with respect to defined inputs (in this case t_now):
End of explanation
"""
simulator.setup()
"""
Explanation: By defining p_fun as above, the function will return a constant value for both uncertain parameters within a range of $\pm 25\%$ of the nominal value.
To finish the configuration of the simulator, call:
End of explanation
"""
# Set the initial state of the controller and simulator:
# assume nominal values of uncertain parameters as initial guess
delH_R_real = 950.0
c_pR = 5.0
# x0 is a property of the simulator - we obtain it and set values.
x0 = simulator.x0
x0['m_W'] = 10000.0
x0['m_A'] = 853.0
x0['m_P'] = 26.5
x0['T_R'] = 90.0 + 273.15
x0['T_S'] = 90.0 + 273.15
x0['Tout_M'] = 90.0 + 273.15
x0['T_EK'] = 35.0 + 273.15
x0['Tout_AWT'] = 35.0 + 273.15
x0['accum_monom'] = 300.0
x0['T_adiab'] = x0['m_A']*delH_R_real/((x0['m_W'] + x0['m_A'] + x0['m_P']) * c_pR) + x0['T_R']
mpc.x0 = x0
simulator.x0 = x0
estimator.x0 = x0
mpc.set_initial_guess()
"""
Explanation: Closed-loop simulation
For the simulation of the MPC configured for the CSTR, we inspect the file main.py.
We define the initial state of the system and set it for all parts of the closed-loop configuration:
End of explanation
"""
%%capture
for k in range(100):
u0 = mpc.make_step(x0)
y_next = simulator.make_step(u0)
x0 = estimator.make_step(y_next)
"""
Explanation: Now, we simulate the closed-loop for 100 steps (and suppress the output of the cell with the magic command %%capture):
End of explanation
"""
mpc_graphics = do_mpc.graphics.Graphics(mpc.data)
"""
Explanation: Animating the results
To animate the results, we first configure the do-mpc graphics object, which is initiated with the respective data object:
End of explanation
"""
from matplotlib import rcParams
rcParams['axes.grid'] = True
rcParams['font.size'] = 18
"""
Explanation: We quickly configure Matplotlib.
End of explanation
"""
%%capture
fig, ax = plt.subplots(5, sharex=True, figsize=(16,12))
plt.ion()
# Configure plot:
mpc_graphics.add_line(var_type='_x', var_name='T_R', axis=ax[0])
mpc_graphics.add_line(var_type='_x', var_name='accum_monom', axis=ax[1])
mpc_graphics.add_line(var_type='_u', var_name='m_dot_f', axis=ax[2])
mpc_graphics.add_line(var_type='_u', var_name='T_in_M', axis=ax[3])
mpc_graphics.add_line(var_type='_u', var_name='T_in_EK', axis=ax[4])
ax[0].set_ylabel('T_R [K]')
ax[1].set_ylabel('acc. monom')
ax[2].set_ylabel('m_dot_f')
ax[3].set_ylabel('T_in_M [K]')
ax[4].set_ylabel('T_in_EK [K]')
ax[4].set_xlabel('time')
fig.align_ylabels()
"""
Explanation: We then create a figure, configure which lines to plot on which axis and add labels.
End of explanation
"""
from matplotlib.animation import FuncAnimation, ImageMagickWriter
"""
Explanation: After importing the necessary package:
End of explanation
"""
def update(t_ind):
print('Writing frame: {}.'.format(t_ind), end='\r')
mpc_graphics.plot_results(t_ind=t_ind)
mpc_graphics.plot_predictions(t_ind=t_ind)
mpc_graphics.reset_axes()
lines = mpc_graphics.result_lines.full
return lines
n_steps = mpc.data['_time'].shape[0]
anim = FuncAnimation(fig, update, frames=n_steps, blit=True)
gif_writer = ImageMagickWriter(fps=5)
anim.save('anim_poly_batch.gif', writer=gif_writer)
"""
Explanation: We obtain the animation with:
End of explanation
"""
mpc.bounds['upper', '_x', 'T_R']
"""
Explanation: We are displaying recorded values as solid lines and predicted trajectories as dashed lines. Multiple dashed lines exist for different realizations of the uncertain scenarios.
The most interesting behavior here can be seen in the state T_R, which has the upper bound:
End of explanation
"""
|
icoxfog417/scikit-learn-notebook | scikit-learn-tutorial.ipynb | mit | # enable showing matplotlib image inline
%matplotlib inline
"""
Explanation: Introduction
Machine learning, as the name suggests, means making a "machine" "learn" so that it can make predictions about data.
The "machine" is, concretely, a mathematical or statistical model.
"Learning" means adjusting the parameters of that model so that it fits actual data.
Learning methods fall broadly into two groups.
Supervised learning: the model learns from data together with the values (the correct answers) it should predict.
Classification: when the data can be divided into several categories, the model predicts the category (e.g., deciding which of the digits 0-9 a handwritten character represents).
Regression: the model predicts a continuous value from the data (e.g., predicting height from age and weight).
Unsupervised learning: the model is given only the data and learns the structure behind it.
Clustering: grouping similar data points to estimate how many groups (clusters) the data consist of.
Distribution estimation: estimating the probability distribution that generates the data.
scikit-learn is a machine learning library written in Python.
It implements a variety of such "machines" and comes with the machinery needed to "train" them.
In the following, we walk through the procedure with scikit-learn, from preparing the data to actually building, training and evaluating a model.
Preparing the data
Arranging the data
Selecting a model
Splitting the data
Training the model
Evaluating the model
Persisting the model
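As a quick preview of the steps listed above (a minimal sketch using the iris data and the modern `sklearn.model_selection` API; it is not part of the tutorial that follows):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1-2. prepare / arrange the data
iris = load_iris()
# 4. split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)
# 3, 5. select and train a model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
# 6. evaluate on the held-out data
acc = accuracy_score(y_test, model.predict(X_test))
print(acc)
```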
For setting up the environment, see the following write-up (in Japanese):
Pythonで機械学習アプリケーションの開発環境を構築する
End of explanation
"""
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
"""
Explanation: Loading the Data
scikit-learnでは、よく例として利用されるデータセット(irisのデータや手書き文字のデータなど)を以下のように簡単に取得することができます。
Dataset loading utilities
End of explanation
"""
print(iris.keys())
"""
Explanation: A dataset is composed of the following fields.
data: the data itself (always a 2-D array of samples × features; when the data is intrinsically two-dimensional, such as images, it can also be accessed via images)
target: the correct answers to be predicted from the data (the supervision labels)
feature_names: the names of the feature columns
target_names: the names of the predicted values
DESCR: a description of the dataset
End of explanation
"""
import csv
import numpy as np
encoding = "utf-8"
ratings = []
with open("./data/ratings.txt", encoding=encoding) as f:
content = csv.reader(f, delimiter="\t")
lines = list(content)
ratings = np.array(lines)
print(ratings)
"""
Explanation: For ordinary data loading you can use modules such as csv, which ships with Python's standard library.
End of explanation
"""
from sklearn import datasets
import numpy as np
from sklearn import preprocessing
iris_data = iris["data"]
scaler = preprocessing.StandardScaler().fit(iris_data)
describe = lambda t, x: (t + ":\n {0}").format({"mean": np.mean(x, axis=0), "std": np.std(x, axis=0)})
# before scaling
print(describe("Before scaling", iris_data))
# scaling
iris_data_scaled = scaler.transform(iris_data)
print(describe("After scaling (mean is almost 0, std = 1)", iris_data_scaled))
# inverse
iris_data_inv = scaler.inverse_transform(iris_data_scaled)
print(describe("Inverse the scaling", iris_data_inv))
"""
Explanation: Incidentally, pandas is a library that supports this kind of data loading as well as operations on the loaded data.
Arrange the Data
Each feature in a dataset has its own mean and variance (e.g., weight and height differ in both).
Left like this, learning does not proceed efficiently, so it is common to normalize each feature to mean 0 and variance 1.
(On top of this, whitening, which also removes correlations between the features, is sometimes applied.)
In scikit-learn this is made very easy by preprocessing. Below, StandardScaler does the work.
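As a minimal illustration of what StandardScaler computes internally, here is a plain-Python sketch (the function name standardize is ours, not scikit-learn's):

```python
def standardize(column):
    # Scale a list of numbers to mean 0 and (population) std 1,
    # the same convention StandardScaler follows per feature.
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

scaled = standardize([170.0, 160.0, 180.0])
print(scaled)  # ≈ [0.0, -1.22, 1.22]
```

StandardScaler does the same thing column by column, and additionally remembers the mean/std so it can apply the identical transform to new data.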
End of explanation
"""
print(ratings)
"""
Explanation: Note that preprocessing contains a module called Normalization, but it is not for normalization in the sense described above, so be careful.
Also, data sometimes contains text fields.
"""
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(["bad", "nbad", "good", "vgood"])
encoded_rating = le.transform(ratings[:, 1])
print("{0} is encoded to {1}".format(ratings[:, 1], encoded_rating))
"""
Explanation: Text fields such as good above must ultimately be converted to numbers before the model can be trained. This, too, is easy with preprocessing.
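Conceptually, LabelEncoder just maps each distinct label to an integer, in sorted label order. A hand-rolled sketch (label_encode is our own hypothetical helper):

```python
def label_encode(labels):
    # Map each distinct label to an integer, in sorted label order
    # (the same convention LabelEncoder uses).
    classes = sorted(set(labels))
    index = {label: i for i, label in enumerate(classes)}
    return [index[label] for label in labels]

print(label_encode(["good", "bad", "vgood", "bad"]))  # [1, 0, 2, 0]
```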
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
measurements = [
{"city": "Dubai", "temperature": 33.},
{"city": "London", "temperature": 12.},
{"city": "San Fransisco", "temperature": 18.},
{"city": "Dubai", "temperature": 32.},
]
vec = DictVectorizer()
vectorized = vec.fit_transform(measurements).toarray()
print(vectorized)
feature_names = vec.get_feature_names()
print(feature_names)
"""
Explanation: Feature Extraction supports more powerful numerical (vector) encoding of text and images. Below, the text field city is converted into 0/1 features representing Dubai/London/San Fransisco.
End of explanation
"""
import matplotlib.pyplot as plt
features = iris.data[:, :2] # select first 2 feature
label = iris.target
plt.scatter(features[:, 0], features[:, 1], c=label, cmap=plt.cm.Paired)
for i in range(features.shape[1]):
f_data = features[:, i]
if i == 0:
plt.xlabel(iris.feature_names[i])
plt.xlim(f_data.min(), f_data.max())
else:
plt.ylabel(iris.feature_names[i])
plt.ylim(f_data.min(), f_data.max())
plt.title("iris data")
from sklearn import decomposition
digits_data = digits["data"]
show_dimension = lambda dset: len(dset[0])
dimension = 2
digits_reduced = decomposition.TruncatedSVD(n_components=dimension).fit_transform(digits_data)
print("Dimension is reduced from {0} to {1}.".format(show_dimension(digits_data), show_dimension(digits_reduced)))
"""
Explanation: preprocessing also contains other modules that help with data preparation, such as Imputer for fixing missing values.
Dimensionality reduction
Plotting the data is extremely important in many situations, including the model selection that follows.
However, with just four features the data can no longer be plotted directly (it would require a four-dimensional figure), and in some cases (e.g., text analysis) the number of features is very large.
It is therefore important to represent the data with as few features as necessary. The technique for doing this is called dimensionality reduction.
Concretely, if the data contains both height and weight, these both grow as the body gets bigger, so one of them is enough to characterize the data. The basic idea is that by eliminating mutually correlated features like these, we can represent the data with the minimum necessary features.
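The "height and weight move together" intuition can be checked numerically with a correlation coefficient. A plain-Python sketch on made-up numbers (pearson is our own helper):

```python
def pearson(xs, ys):
    # Pearson correlation coefficient: +1 means perfectly linearly related.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

height = [150, 160, 170, 180]
weight = [50, 58, 66, 74]  # toy data: exactly linear in height
print(pearson(height, weight))  # 1.0 — perfectly correlated, so one feature could be dropped
```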
In scikit-learn this is done with decomposition. Below, TruncatedSVD compresses the digit features into two mutually uncorrelated features, as described above.
Visualize
To actually plot the data, use matplotlib rather than scikit-learn.
Below, the first two features of the iris data are picked out and plotted.
End of explanation
"""
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
X, y = iris.data, iris.target
print(X.shape)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)
"""
Explanation: Select the Model
There are many models usable for machine learning, and scikit-learn makes a wide variety of them available.
That very variety makes choosing one a genuinely difficult problem.
One guide is the following flowchart, which illustrates the criteria for choosing among the algorithms in scikit-learn.
Choosing the right estimator
scikit-learn has no neural network, so none appears in the chart; a neural network is essentially an alternative to SVC/SVR whose accuracy improves as data grows.
The main points are:
Collect at least 50 samples
Start with a simple model (LinearSVC for classification, Lasso/ElasticNet for regression, etc.)
Start from "just looking" (inspect the data and reduce dimensionality as needed)
Getting correct results out of machine learning requires well-prepared data ("just looking" in the chart, the subject of the previous section). The basic workflow is to prepare the data, validate with a simple model, and then try other models as needed.
Select Model Features
When there are many features, which ones to feed the model is also an important question. scikit-learn provides functionality for checking which features contribute to the predicted value. Below, Feature selection is used to narrow the features down to the two most useful ones (k=2).
End of explanation
"""
from sklearn.model_selection import train_test_split
test_size = 0.3 # use 30% of data to test the model
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=test_size, random_state=0)
test_data_rate = X_test.shape[0] * 100 / (X_train.shape[0] + X_test.shape[0])
print("test data is {0}% of data".format(test_data_rate))
"""
Explanation: Split the Data
For training, the data is split into a training set and a test set. A model will naturally predict well on the data it was trained on, so to measure accuracy correctly the evaluation data is kept separate from the training data.
Rather than a simple two-way split, another approach is to divide the whole dataset into several parts and rotate which part is used for evaluation. This allows efficient training even with little data.
K-FOLD CROSS-VALIDATION, WITH MATLAB CODE
This technique is called cross-validation; in scikit-learn, everything from a simple split to cross-validation is available through the cross-validation utilities.
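The idea behind train_test_split can be sketched in a few lines of plain Python — shuffle a copy, then slice (split is our own hypothetical helper, not the scikit-learn API):

```python
import random

def split(data, test_size=0.3, seed=0):
    # Shuffle a copy of the data and cut off the last test_size fraction.
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split(list(range(10)), test_size=0.3)
print(len(train), len(test))  # 7 3
```

train_test_split does the same for several parallel arrays (features and labels) at once, keeping rows aligned.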
End of explanation
"""
from sklearn.model_selection import KFold
kf = KFold(n_splits=3) # divide into 3 set
i = 0
for train_index, test_index in kf.split(iris.data):
x_train = iris.data[train_index]
y_train = iris.target[train_index]
x_test = iris.data[test_index]
y_test = iris.target[test_index]
print("{0}: training {1}, test {2}".format(i, len(y_train), len(y_test)))
i += 1
"""
Explanation: When using cross-validation, the simplest route is cross_val_score, which splits the data and trains in one call (described later); to split the data only, use KFold.
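The fold bookkeeping KFold performs can be sketched in plain Python (kfold_indices is our own hypothetical helper, shown only for intuition):

```python
def kfold_indices(n_samples, n_splits):
    # Yield (train_idx, test_idx) index lists for each of n_splits folds;
    # earlier folds absorb the remainder when n_samples % n_splits != 0.
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

for train_idx, test_idx in kfold_indices(9, 3):
    print(len(train_idx), len(test_idx))  # 6 3, three times
```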
End of explanation
"""
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
"""
Explanation: Now that the data is ready, it is finally time to train.
Training the Model
Below we use a Support Vector Machine, a model often used for classification, to walk through training and related steps.
End of explanation
"""
clf.fit(digits.data[:-1], digits.target[:-1])
"""
Explanation: That is all it takes to build the model, and training, too, is a single line (in the example below, everything but the last data point is passed in as training data).
End of explanation
"""
clf.predict([digits.data[-1]])
"""
Explanation: Then we have the model predict on the single data point we held out.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
"""
Explanation: The image that was actually classified is shown below. The prediction of "8" seems reasonably on target.
End of explanation
"""
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, digits.data, digits.target, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
"""
Explanation: This time let's use cross-validation. cv specifies the number of splits (folds).
End of explanation
"""
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
candidates = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100]},
{'kernel': ['linear'], 'C': [1, 10, 100]}]
clf = GridSearchCV(SVC(C=1), candidates, cv=5)
clf.fit(digits.data, digits.target)
print(clf.best_estimator_)
for params, mean_, std_ in zip(clf.cv_results_["params"], clf.cv_results_["mean_test_score"], clf.cv_results_["std_test_score"]):
print("%0.3f (+/-%0.03f) for %r" % (mean_, std_ / 2, params))
"""
Explanation: Search Model Parameters
Above, we fixed the model parameters (gamma=0.001 and so on), but which parameters to set is in practice a very hard question.
One technique for finding good parameters is to define a range of candidate values for each parameter and try the combinations. This is called grid search, and scikit-learn provides a Grid Search module for it.
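At its core, grid search is just iterating over the Cartesian product of the candidate values and keeping the best score. A plain-Python sketch with a made-up scoring function (grid_search and score are our own, not the GridSearchCV API):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    # Try every combination in param_grid and return the best one.
    keys = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy score: prefers small gamma and C == 10.
score = lambda p: -p["gamma"] + (1 if p["C"] == 10 else 0)
best, _ = grid_search({"C": [1, 10, 100], "gamma": [1e-3, 1e-4]}, score)
print(best)  # {'C': 10, 'gamma': 0.0001}
```

GridSearchCV does the same loop, but scores each combination with cross-validation instead of a toy function.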
End of explanation
"""
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
base_clf = SVC()
bagging_clf = BaggingClassifier(base_estimator=base_clf, n_estimators=10, max_samples=0.9, max_features=2, n_jobs=4)
scores = cross_val_score(bagging_clf, iris.data, iris.target, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
"""
Explanation: Ensemble Learning
Training sometimes produces an excellent model and sometimes not.
As the number of features grows, i.e., as the data becomes higher-dimensional, the number of "plausible-looking" parameter combinations grows, so reaching a result that feels optimal can take many training runs.
One technique for dealing with this problem is to train several models separately and decide by combining their outputs, which raises accuracy.
This is called ensemble learning. The "several models" may all be the same kind of model (bagging) or each be a different model (boosting).
In scikit-learn, ensemble performs this kind of ensemble learning. Below, bagging trains 10 models in parallel and builds a model that decides from their combination.
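For classification, the "combine" step of bagging is just a majority vote over the individual models' predictions. A plain-Python sketch (majority_vote is our own helper):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    # predictions_per_model: one list of class predictions per model.
    # Returns the per-sample majority class.
    combined = []
    for sample_preds in zip(*predictions_per_model):
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

votes = [["A", "B", "A"],   # model 1
         ["A", "B", "B"],   # model 2
         ["B", "B", "A"]]   # model 3
print(majority_vote(votes))  # ['A', 'B', 'A']
```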
End of explanation
"""
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
actual = [1, 0, 1, 1, 0, 0, 0, 1]
predict = [0, 0, 1, 1, 0, 1, 1, 1]
c_mx = confusion_matrix(actual, predict)
print(c_mx)
# calculate each score
print(precision_score(actual, predict))
print(recall_score(actual, predict))
print(accuracy_score(actual, predict))
print(f1_score(actual, predict))
"""
Explanation: n_jobs makes it easy to adjust the number of processes used to train in parallel, enabling fast training
Evaluate Training Result
How should we measure the result of training, that is, the accuracy of the model?
The simplest way is to compare predictions with actual values. But with this approach, given data that is "90% A and 10% B", even a trivial model that always answers A would score 90% accuracy.
To guard against this problem, classification results are summarized and evaluated with the confusion matrix, as below.
Three viewpoints matter here:
Accuracy (what plain "accuracy" usually refers to): of all the data, the fraction where prediction = actual
Precision: of the samples predicted positive, the fraction that were actually positive
Recall: of the samples that were actually positive, the fraction predicted positive
In the earlier trivial example, because the model never predicts negative, recall is always 1, but the number of wrong positive predictions grows with the data, so precision deteriorates. Conversely, to raise precision you could predict positive only for near-certain cases, but then more actually-positive samples are missed and recall drops.
Precision and recall are thus in a trade-off, and a good model is basically one that balances the two. The F-score evaluates this balance, and measuring these values is key to assessing a model's accuracy.
In scikit-learn, metrics gives easy access to all of these values.
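As a worked example of the metrics, the counts below are taken from the actual/predict lists in the code above, and each score is computed straight from the confusion-matrix entries:

```python
# Counts from actual = [1,0,1,1,0,0,0,1], predict = [0,0,1,1,0,1,1,1]
tp, fp, fn, tn = 3, 2, 1, 2

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)  # 0.625 0.6 0.75 ≈0.667
```

These hand-computed values match what the metrics functions above return.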
End of explanation
"""
from sklearn.metrics import classification_report
actual = [0, 1, 2, 2, 2]
predict = [0, 0, 2, 2, 1]
target_names = ["class 0", "class 1", "class 2"]
print(classification_report(actual, predict, target_names=target_names))
"""
Explanation: Using classification_report, you can easily obtain a summary table (if you just want the raw values, use precision_recall_fscore_support).
End of explanation
"""
from sklearn import svm
from sklearn import datasets
clf = svm.SVC()
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)
import pickle
s = pickle.dumps(clf) #serialize model data
clf2 = pickle.loads(s) #load serialized model data
"""
Explanation: Store the Model
A trained model can be exported to a file and stored.
The following uses the standard pickle module.
End of explanation
"""
from sklearn.externals import joblib
joblib.dump(clf, "data/model.pkl")
clf = joblib.load("data/model.pkl")
"""
Explanation: Alternatively, you can save the model to a file using joblib from sklearn.externals; for large models this is the better choice.
End of explanation
"""
|
sys-bio/tellurium | examples/notebooks/core/tellurium_plotting.ipynb | apache-2.0 | import tellurium as te, roadrunner
r = te.loada ('''
$Xo -> S1; k1*Xo;
S1 -> $X1; k2*S1;
k1 = 0.2; k2 = 0.4; Xo = 1; S1 = 0.5;
at (time > 20): S1 = S1 + 0.35
''')
# Simulate the first part up to 20 time units
m = r.simulate (0, 50, 100, ["time", "S1"])
# using latex syntax to render math
r.plot(m, ylim=(0.,1.), xtitle='Time', ytitle='Concentration', title='My First Plot ($y = x^2$)')
"""
Explanation: Back to the main Index
Add plot elements
Example showing how to embellish a graph - change title, axes labels, set axis limits. The example also uses an event to pulse S1.
End of explanation
"""
import tellurium as te
import os
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')
result = r.simulate(0, 50, 100)
currentDir = os.getcwd() # gets the current directory
r.plot(title='My plot', xtitle='Time', ytitle='Concentration', dpi=150,
savefig=currentDir + '\\test.png') # save image to current directory as "test.png"
"""
Explanation: Saving plots
To save a plot, use r.plot and the savefig parameter. Use dpi to specify image quality. Pass in the save location along with the image name.
End of explanation
"""
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
r = te.loadTestModel('feedback.xml')
r.integrator.variable_step_size = True
s = r.simulate(0, 50)
r.plot(s, logx=True, xlim=[10E-4, 10E2],
title="Logarithmic x-Axis with grid", ylabel="concentration");
"""
Explanation: The path can be specified as a written out string. The plot can also be saved as a pdf instead of png.
savefig='C:\\Tellurium-Winpython-3.6\\settings\\.spyder-py3\\test.pdf'
Logarithmic axis
The axis can be switched to a logarithmic scale, for example with the logx flag shown below.
End of explanation
"""
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
# Load a model and carry out a simulation generating 100 points
r = te.loada ('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')
r.draw(width=100)
# get colormap
# Colormap instances are used to convert data values (floats) from the interval [0, 1]
cmap = plt.get_cmap('Blues')
k1_values = np.linspace(start=0.1, stop=1.5, num=15)
max_k1 = max(k1_values)
for k, value in enumerate(k1_values):
r.reset()
r.k1 = value
s = r.simulate(0, 30, 100)
color = cmap((value+max_k1)/(2*max_k1))
# use show=False to plot multiple curves in the same figure
r.plot(s, show=False, title="Parameter variation k1", xtitle="time", ytitle="concentration",
xlim=[-1, 31], ylim=[-0.1, 11])
te.show()
print('Reference Simulation: k1 = {}'.format(r.k1))
print('Parameter variation: k1 = {}'.format(k1_values))
"""
Explanation: Plotting multiple simulations
All plotting is done via the r.plot or te.plotArray functions. To plot multiple curves in one figure use the show=False setting.
End of explanation
"""
import tellurium as te
import numpy as np
for i in range(1,10):
x = np.linspace(0, 10, num = 10)
y = i*x**2 + 10*i
if i % 2 == 0:
next_tag = "positive slope"
else:
next_tag = "negative slope"
y = -1*y
next_name = next_tag + " (i = " + str(i) + ")"
te.plot(x, y, show = False, tag = next_tag, name = next_name)
te.show()
"""
Explanation: Using Tags and Names
Tags can be used to coordinate the color, opacity, and legend names between several sets of data. This can be used to highlight certain features that these datasets have in common. Names allow you to give a more meaningful description of the data in the legend.
End of explanation
"""
import tellurium as te
import numpy as np
import matplotlib.pylab as plt
r = te.loada ('S1 -> S2; k1*S1; k1 = 0.1; S1 = 20')
r.setIntegrator('gillespie')
r.integrator.seed = '1234'
kValues = np.linspace(0.1, 0.9, num=9) # generate k1 values
plt.gcf().set_size_inches(10, 10) # size of figure
plt.subplots_adjust(wspace=0.4, hspace=0.4) # adjust the space between subplots
plt.suptitle('Variation in k1 value', fontsize=16) # main title
for i in range(1, len(kValues) + 1):
r.k1 = kValues[i - 1]
# designates number of subplots (row, col) and spot to plot next
plt.subplot(3, 3, i)
for j in range(1, 30):
r.reset()
s = r.simulate(0, 10)
t = "k1 = " + '{:.1f}'.format(kValues[i - 1])
# plot each subplot, use show=False to save multiple traces
te.plotArray(s, show=False, title=t, xlabel='Time',
ylabel='Concentration', alpha=0.7)
"""
Explanation: Note that only two items show up in the legend, one for each tag used. In this case, the name found in the legend will match the name of the last set of data plotted using that specific tag. The color and opacity for each tagged groups will also be chosen from the last dataset inputted with that given tag.
Subplots
te.plotArray can be used in conjunction with matplotlib functions to create subplots.
End of explanation
"""
from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
r = te.loada('''
model feedback()
// Reactions:
J0: $X0 -> S1; (VM1 * (X0 - S1/Keq1))/(1 + X0 + S1 + S4^h);
J1: S1 -> S2; (10 * S1 - 2 * S2) / (1 + S1 + S2);
J2: S2 -> S3; (10 * S2 - 2 * S3) / (1 + S2 + S3);
J3: S3 -> S4; (10 * S3 - 2 * S4) / (1 + S3 + S4);
J4: S4 -> $X1; (V4 * S4) / (KS4 + S4);
// Species initializations:
S1 = 0; S2 = 0; S3 = 0;
S4 = 0; X0 = 10; X1 = 0;
// Variable initialization:
VM1 = 10; Keq1 = 10; h = 10; V4 = 2.5; KS4 = 0.5;
end''')
# simulate using variable step size
r.integrator.setValue('variable_step_size', True)
s = r.simulate(0, 50)
# draw the diagram
r.draw(width=200)
# and the plot
r.plot(s, title="Feedback Oscillations", ylabel="concentration", alpha=0.9);
"""
Explanation: Draw diagram
This example shows how to draw a network diagram, requires graphviz.
End of explanation
"""
|
Merinorus/adaisawesome | Homework/05 - Taming Text/HW05_awesometeam_Q4.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import networkx as nx
import math
import community
import matplotlib.pyplot as plt
G=nx.Graph()
emails = pd.read_csv('hillary-clinton-emails/emails.csv')
receivers = pd.read_csv('hillary-clinton-emails/EmailReceivers.csv')
emails = emails[pd.notnull(emails['SenderPersonId'])]
nodes2 = pd.DataFrame()
nodes2['EmailID'] = receivers['EmailId']
nodes2['ReceiverID'] = receivers['PersonId']
nodes2['SenderID'] = 'nan'
nodes2.reset_index(drop=True, inplace=True)
nodes2.head()
len(nodes2)
# Now we need to link the receivers (all of them) to the EmailID.
# For some reason it crashes past 9121, but that's oK - it's only
# 185 lines, or 2%
nodes_test2 = nodes2.head(9120)
for index, row in nodes_test2.iterrows():
nodes_test2.set_value(index, 'SenderID', emails.iloc[row['EmailID']]['SenderPersonId'])
nodes_test2.head()
tuples_table = nodes_test2.copy()
for index, row in tuples_table.iterrows():
a = row['ReceiverID']
b = int(row['SenderID'])
G.add_edge(a,b)
#print(list(G.edges()))
print(nx.info(G))
"""
Explanation: Assignment:
BONUS: build the communication graph (unweighted and undirected) among the different email senders and recipients using the NetworkX library. Find communities in this graph with community.best_partition(G) method from the community detection module. Print the most frequent 20 words used by the email authors of each community. Do these word lists look similar to what you've produced at step 3 with LDA? Can you identify clear discussion topics for each community? Discuss briefly the obtained results.
Make the Graph
End of explanation
"""
partition = community.best_partition(G)
print(len(set(partition.values())))
"""
Explanation: Partition the Graph
End of explanation
"""
# from - http://perso.crans.org/aynaud/communities/
#Drawing
size = float(len(set(partition.values())))
pos = nx.spring_layout(G)
count = 0.
for com in set(partition.values()) :
count = count + 1.
list_nodes = [nodes for nodes in partition.keys()
if partition[nodes] == com]
nx.draw_networkx_nodes(G, pos, list_nodes, node_size = 20,
node_color = str(count / size))
nx.draw_networkx_edges(G,pos, alpha=0.5)
plt.show()
# from - http://ryancompton.net/2014/06/16/community-detection-and-colored-plotting-in-networkx/
values = [partition.get(node) for node in G.nodes()]
nx.draw_spring(G, cmap = plt.get_cmap('jet'), node_color = values, node_size=30, with_labels=False)
plt.show()
"""
Explanation: There seem to be 45 communities.
End of explanation
"""
# from - http://perso.crans.org/aynaud/communities/api.html
dendo = community.generate_dendrogram(G)
for level in range(len(dendo) - 1):
    print("partition at level", level,
          "is", community.partition_at_level(dendo, level))
"""
Explanation: Extra
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/time_series_prediction/labs/3_modeling_bqml.ipynb | apache-2.0 | PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env REGION = "us-central1"
"""
Explanation: Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
End of explanation
"""
from google.cloud import bigquery
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
def create_features_table():
error = None
try:
bq.query(
"""
CREATE TABLE stock_market.eps_percent_change_sp500
AS
SELECT *
FROM `stock_market.eps_percent_change_sp500`
"""
).to_dataframe()
except Exception as e:
error = str(e)
if error is None:
print("Table created")
elif "Already Exists" in error:
print("Table already exists.")
else:
print(error)
raise Exception("Table was not created.")
create_dataset()
create_features_table()
"""
Explanation: Create the dataset
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
stock_market.eps_percent_change_sp500
LIMIT
10
"""
Explanation: Review the dataset
In the previous lab we created the features data. If you haven't run the previous notebook, go back to 2_feature_engineering.ipynb to create it; the features we will use for modeling were saved as tables in BigQuery.
Let's examine that table again to see that everything is as we expect. Then, we will build a model using BigQuery ML using this table.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
# TODO: Your code goes here
-- query to fetch training data
SELECT
# TODO: Your code goes here
FROM
`stock_market.eps_percent_change_sp500`
WHERE
# TODO: Your code goes here
"""
Explanation: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
Lab Task #1a: Create model using BQML
Use BQML's CREATE OR REPLACE MODEL to train a classification model which predicts the direction of a stock using the features in the percent_change_sp500 table. Look at the documentation for creating a BQML model to get the right syntax. Use ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 to train on a subsample.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
# TODO: Your code goes here.
"""
Explanation: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy F1-score and AUC of our classification model.
Lab Task #1b: Evaluate your BQML model.
Use BQML's EVALUATE to evaluate the performance of your model on the validation set. Your query should be similar to this example.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
# TODO: Your code goes here
"""
Explanation: We can also examine the training statistics collected by Big Query. To view training results we use the ML.TRAINING_INFO function.
Lab Task #1c: Examine the training information in BQML.
Use BQML's TRAINING_INFO to see statistics of the training job executed above.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
WITH
eval_data AS (
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85)
SELECT
direction,
(COUNT(direction)* 100 / (
SELECT
COUNT(*)
FROM
eval_data)) AS percentage
FROM
eval_data
GROUP BY
direction
"""
Explanation: Compare to simple benchmark
Another way to assess the performance of our model is to compare it with a simple benchmark. We can do this by seeing what accuracy we would get using the naive strategy of always predicting the majority class. For the training dataset, the majority class is 'STAY'. With the following query we can see how this naive strategy would perform on the eval set.
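The same naive-baseline check can be sketched in plain Python: the majority-class accuracy is just the frequency of the most common label (the toy label counts below are made up, not the real dataset's):

```python
from collections import Counter

def majority_class_accuracy(labels):
    # Accuracy of a model that always predicts the most frequent label.
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

labels = ["STAY"] * 55 + ["UP"] * 25 + ["DOWN"] * 20
print(majority_class_accuracy(labels))  # 0.55
```

Any trained model worth keeping should beat this number on the eval set.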
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
# TODO: Your code goes here
-- query to fetch training data
SELECT
# TODO: Your code goes here
FROM
`stock_market.eps_percent_change_sp500`
WHERE
# TODO: Your code goes here
"""
Explanation: So, the naive strategy of just guessing the majority class would have accuracy of 0.5509 on the eval dataset, just below our BQML model.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
Lab Task #2a: Create a regression model in BQML.
Use BQML's CREATE OR REPLACE MODEL to train another model, this time a regression model, which predicts the normalized_change of a given stock based on the same features we used above.
End of explanation
"""
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.price_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.price_model`)
ORDER BY iteration
"""
Explanation: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in Big Query
End of explanation
"""
|
khalido/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
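For intuition only (not the submission code, which should operate on the Numpy array x), min-max scaling of 8-bit pixel values to [0, 1] looks like this on plain lists (scale_to_unit is our own toy helper):

```python
def scale_to_unit(values, lo=0, hi=255):
    # Map values from [lo, hi] to [0, 1].
    return [(v - lo) / (hi - lo) for v in values]

print(scale_to_unit([0, 51, 255]))  # [0.0, 0.2, 1.0]
```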
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
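The encoding itself is tiny — each label k becomes a length-10 vector with a 1 at index k. A toy sketch on plain lists (your submission should return a Numpy array instead):

```python
def one_hot(labels, n_classes=10):
    # One row per label; 1 at the label's index, 0 elsewhere.
    return [[1 if i == label else 0 for i in range(n_classes)]
            for label in labels]

print(one_hot([0, 9], n_classes=10))
# [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]
```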
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return None
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return None
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolutional layers have had a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
pass
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = None
batch_size = None
keep_probability = None
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
raoyvn/deep-learning | tv-script-generation/submission/solution_dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {word: index for index,word in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
symbols_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||QuotationMark||',
';': '||Semicolon||',
'!': '||Exclamationmark||',
'?': '||Questionmark||',
'(': '||LeftParentheses||',
')': '||RightParentheses||',
'--': '||Dash||',
'\n': '||Return||'
}
return symbols_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
# Create the graph object
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def lstm_cell(state_size, output_keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(
state_size, forget_bias=0.0, state_is_tuple=True, reuse=tf.get_variable_scope().reuse)
# Wrap with dropout so the output_keep_prob argument is actually applied
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=output_keep_prob)
def get_init_cell(batch_size, rnn_size, keep_prob=0.75):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:param keep_prob: Dropout keep probability
:return: Tuple (cell, initial state)
"""
# Stack up multiple LSTM layers for deep learning
num_layers = 3
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# Embedding matrix: one embed_dim-sized vector per word in the vocabulary
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
elements_per_batch = batch_size * seq_length
num_batches = len(int_text)//elements_per_batch
#Keep only enough elements to make full batches
int_text = int_text[:num_batches * elements_per_batch]
int_text = np.array(int_text).reshape((batch_size, -1))
batches = np.zeros((num_batches, 2, batch_size, seq_length), dtype=np.int32)
batch_num = 0
for n in range(0, int_text.shape[1], seq_length):
# The features
x = int_text[:, n:n+seq_length]
# The target shifted by one
if (n + seq_length) % int_text.shape[1] == 0:
y = np.zeros_like(x)
y[:, :-1] = x[:, 1:]
# Wrap each row's final target around to the first input of the next row
for row in range(batch_size):
y[row, -1] = int_text[(row + 1) % batch_size, 0]
else:
y = int_text[:, n+1:n+seq_length+1]
batches[batch_num][0] = x
batches[batch_num][1] = y
batch_num += 1
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
    return (loaded_graph.get_tensor_by_name("input:0"),
            loaded_graph.get_tensor_by_name("initial_state:0"),
            loaded_graph.get_tensor_by_name("final_state:0"),
            loaded_graph.get_tensor_by_name("probs:0"))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
word_picked = np.random.choice(list(int_to_vocab.values()), p=probabilities)
return word_picked
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 1200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
quantopian/research_public | notebooks/lectures/Mean_Reversion_on_Futures/answers/notebook.ipynb | apache-2.0 | # Useful Functions
def find_cointegrated_pairs(data):
n = data.shape[1]
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = data.keys()
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = data[keys[i]]
S2 = data[keys[j]]
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((keys[i], keys[j]))
return score_matrix, pvalue_matrix, pairs
# Useful Libraries
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
from quantopian.research.experimental import history, continuous_future
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
"""
Explanation: Exercises: Mean Reversion on Futures - Answer Key
By Chris Fenaroli, Delaney Mackenzie, and Maxwell Margenot
Lecture Link
https://www.quantopian.com/lectures/introduction-to-pairs-trading
https://www.quantopian.com/lectures/mean-reversion-on-futures
IMPORTANT NOTE:
This lecture corresponds to the Mean Reversion on Futures lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Key Concepts
End of explanation
"""
A_returns = np.random.normal(0, 1, 100)
A = pd.Series(np.cumsum(A_returns), name='X') + 50
some_noise = np.random.exponential(1, 100)
B = A - 7 + some_noise
#Your code goes here
## answer key ##
score, pvalue, _ = coint(A,B)
confidence_level = 0.05
if pvalue < confidence_level:
print ("A and B are cointegrated")
print pvalue
else:
print ("A and B are not cointegrated")
print pvalue
A.name = "A"
B.name = "B"
pd.concat([A, B], axis=1).plot();
"""
Explanation: Exercise 1: Testing Artificial Examples
We'll use some artificially generated series first as they are much cleaner and easier to work with. In general when learning or developing a new technique, use simulated data to provide a clean environment. Simulated data also allows you to control the level of noise and difficulty level for your model.
a. Cointegration Test I
Determine whether the following two artificial series $A$ and $B$ are cointegrated using the coint() function and a reasonable confidence level.
End of explanation
"""
C_returns = np.random.normal(1, 1, 100)
C = pd.Series(np.cumsum(C_returns), name='X') + 100
D_returns = np.random.normal(2, 1, 100)
D = pd.Series(np.cumsum(D_returns), name='X') + 100
#Your code goes here
## answer key ##
score, pvalue, _ = coint(C,D)
confidence_level = 0.05
if pvalue < confidence_level:
print ("C and D are cointegrated")
print pvalue
else:
print ("C and D are not cointegrated")
print pvalue
C.name = "C"
D.name = "D"
pd.concat([C, D], axis=1).plot();
"""
Explanation: b. Cointegration Test II
Determine whether the following two artificial series $C$ and $D$ are cointegrated using the coint() function and a reasonable confidence level.
End of explanation
"""
cn = continuous_future('CN', offset = 0, roll = 'calendar', adjustment = 'mul')
sb = continuous_future('SB', offset = 0, roll = 'calendar', adjustment = 'mul')
cn_price = history(cn, 'price', '2015-01-01', '2016-01-01', 'daily')
sb_price = history(sb, 'price', '2015-01-01', '2016-01-01', 'daily')
#Your code goes here
#print history.__doc__
## answer key ##
score, pvalue, _ = coint(cn_price, sb_price)
confidence_level = 0.05
if pvalue < confidence_level:
print ("CN and SB are cointegrated")
print pvalue
else:
print ("CN and SB are not cointegrated")
print pvalue
cn_price.name = "CN"
sb_price.name = "SB"
pd.concat([cn_price, sb_price], axis=1).plot();
"""
Explanation: Exercise 2: Testing Real Examples
a. Real Cointegration Test I
Determine whether the following two assets CN and SB were cointegrated during 2015 using the coint() function and a reasonable confidence level.
End of explanation
"""
cl = continuous_future('CL', offset = 0, roll = 'calendar', adjustment = 'mul')
ho = continuous_future('HO', offset = 0, roll = 'calendar', adjustment = 'mul')
cl_price = history(cl, 'price', '2015-01-01', '2016-01-01', 'daily')
ho_price = history(ho, 'price', '2015-01-01', '2016-01-01', 'daily')
#Your code goes here
## answer key ##
confidence_level = 0.05
score, pvalue, _ = coint(cl_price, ho_price)
if pvalue < confidence_level:
print ("CL and HO are cointegrated")
print pvalue
else:
print ("CL and HO are not cointegrated")
print pvalue
cl_price.name = 'CL'
ho_price.name = 'HO'
pd.concat([cl_price, ho_price.multiply(42)], axis=1).plot();
"""
Explanation: b. Real Cointegration Test II
Determine whether the following two underlyings CL and HO were cointegrated during 2015 using the coint() function and a reasonable confidence level.
End of explanation
"""
## answer key ##
results = sm.OLS(cl_price, sm.add_constant(ho_price)).fit()
b = results.params['HO']
print b
spread = cl_price - b * ho_price
print "p-value for in-sample stationarity: ", adfuller(spread)[1]
# The p-value is less than 0.05 so we conclude that this spread calculation is stationary in sample
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
"""
Explanation: Exercise 3: Out of Sample Validation
a. Calculating the Spread
Using pricing data from 2015, construct a linear regression to find a coefficient for the linear combination of CL and HO that makes their spread stationary.
End of explanation
"""
cl_out = get_pricing(cl, fields='price',
start_date='2016-01-01', end_date='2016-07-01')
ho_out = get_pricing(ho, fields='price',
start_date='2016-01-01', end_date='2016-07-01')
#Your code goes here
## answer key ##
spread = cl_out - b * ho_out
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
print "p-value for spread stationarity: ", adfuller(spread)[1]
# Our p-value is less than 0.05 so we conclude that this calculation of
# the spread is stationary out of sample
"""
Explanation: b. Testing the Coefficient
Use your coefficient from part a to plot the weighted spread using prices from the first half of 2016, and check whether the result is still stationary.
End of explanation
"""
# No solution provided for extra credit exercises.
"""
Explanation: Extra Credit Exercise: Hurst Exponent
This exercise is more difficult and we will not provide initial structure.
The Hurst exponent is a statistic between 0 and 1 that provides information about how much a time series is trending or mean reverting. We want our spread time series to be mean reverting, so we can use the Hurst exponent to monitor whether our pair is going out of cointegration, effectively as a means of process control that tells us when our pair is no longer good to trade.
Please find either an existing Python library that computes, or compute yourself, the Hurst exponent. Then plot it over time for the spread on the above pair of stocks.
These links may be helpful:
https://en.wikipedia.org/wiki/Hurst_exponent
https://www.quantopian.com/posts/pair-trade-with-cointegration-and-mean-reversion-tests
End of explanation
"""
|
mspieg/dynamical-systems | LorenzEquations.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D
from numpy.linalg import eigvals
"""
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png"></td>
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman, Based on ipython notebook by Kyle Mandli from his course [Introduction to numerical methods](https://github.com/mandli/intro-numerical-methods)</td></tr>
</table>
End of explanation
"""
def Lorenz(state,t,sigma,r,b):
'''
Returns the RHS of the Lorenz equations
'''
# unpack the state vector
x = state[0]
y = state[1]
z = state[2]
# compute state derivatives
xd = sigma * (y-x)
yd = (r-z)*x - y
zd = x*y - b*z
# return the state derivatives
return [xd, yd, zd]
def SolveLorenz(state0,t,sigma=10.,r=28.,b=8./3.0):
'''
use ODEINT to integrate the lorenz equations from initial condition state0 at t=0 for
the range of times given in the numpy array t
'''
Lorenz_p = lambda state,t: Lorenz(state,t,sigma,r,b)
state = odeint(Lorenz_p, state0, t)
return state
def PlotLorenzXvT(state,t,sigma,r,b):
'''
make time series plots of solutions of the Lorenz equations X(t),Y(t),Z(t)
'''
plt.figure()
ax = plt.subplot(111)
X = state[:,0]
Y = state[:,1]
Z = state[:,2]
ax.plot(t,X,'r',label='X')
ax.hold(True)
ax.plot(t,Y,'g',label='Y')
ax.plot(t,Z,'b',label='Z')
ax.set_xlabel('time t')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
# Shrink current axis's height by 10% on the bottom
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
# Put a legend below current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=3)
plt.show()
def PlotLorenz3D(state,sigma,r,b):
'''
Show 3-D Phase portrait using mplot3D
'''
# do some fancy 3D plotting
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(state[:,0],state[:,1],state[:,2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
plt.show()
"""
Explanation: Exploring the Lorenz Equations
The Lorenz Equations are a 3-D dynamical system that is a simplified model of Rayleigh-Benard thermal convection. They are derived and described in detail in Edward Lorenz' 1963 paper Deterministic Nonperiodic Flow in the Journal of Atmospheric Science. In their classical form they can be written
$$
\dot{X} = \sigma( Y - X)\\
\dot{Y} = rX - Y - XZ \\
\dot{Z} = XY - bZ
$$
where $\sigma$ is the "Prandtl Number", $r = \mathrm{Ra}/\mathrm{Ra}_c$ is a scaled "Rayleigh number" and $b$ is a parameter that is related to the aspect ratio of a convecting cell in the original derivation.
This ipython notebook will provide some simple python routines for numerical integration and visualization of the Lorenz Equations. The primary code is modified from the Wikipedia page
Some useful python routines
End of explanation
"""
# Set the parameters
sigma= 10.
b = 8./3
# set the initial condition
X0 = [2.0, 3.0, 4.0]
# set the time for integration
t = np.arange(0.0, 30.0, 0.01)
# set the Rayleigh number
r = 0.5
# solve the Equations
state = SolveLorenz(X0,t,sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(state,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(state,sigma,r,b)
"""
Explanation: Subcritical behavior $r<1$
Here we will begin exploring the behavior of the Lorenz equations for fixed values of $\sigma$ and $b$ and just changing the Rayleigh number $r$.
We will begin with subcritical behavior $r=0.5$ which rapidly damps to a condition of no motion
End of explanation
"""
# set the Rayleigh number
r = 10.0
X0 = [2.,3.,4.]
state = SolveLorenz(X0,t,sigma,r,b)
PlotLorenzXvT(state,t,sigma,r,b)
PlotLorenz3D(state,sigma,r,b)
# now change the initial condition so X=-2
X0 = [-2.0, -3.0, 4.0]
state = SolveLorenz(X0,t,sigma,r,b)
PlotLorenzXvT(state,t,sigma,r,b)
PlotLorenz3D(state,sigma,r,b)
"""
Explanation: Damped Oscillation $r=10$
Now we increase the Rayleigh number to $r=10$ which admits two steady solutions depending on initial condition.
End of explanation
"""
# set the Rayleigh number
r = 28.0
X0 = [2.,3.,4.]
state = SolveLorenz(X0,t,sigma,r,b)
PlotLorenzXvT(state,t,sigma,r,b)
PlotLorenz3D(state,sigma,r,b)
"""
Explanation: Chaos and the strange attractor $r=28$
Now we increase the Rayleigh number to $r=28$ and the solution becomes highly time-dependent and a-periodic.
End of explanation
"""
# set the Rayleigh number
r = 350
X0 = [2.,3.,4.]
t = np.arange(0,8.,.0001)
state = SolveLorenz(X0,t,sigma,r,b)
PlotLorenzXvT(state,t,sigma,r,b)
PlotLorenz3D(state,sigma,r,b)
"""
Explanation: Limit Cycle at large Rayleigh number
Now we increase the Rayleigh number to $r=350$ and the solution goes to a periodic limit cycle
End of explanation
"""
sigma = 10
b = 8./3.
r_H = sigma*(sigma+b+3)/(sigma-b -1.) # critical value of r at Hopf bifurcation
r_max = 28.
ra = np.linspace(1,28.,20)
xstar = lambda r: np.sqrt(b*(r-1))
J = lambda r: np.array([[-sigma,sigma,0],[1,-1,-xstar(r)],[xstar(r),xstar(r),-b]])
# plot out the eigenvalues
import matplotlib.cm as cm
cmap = cm.get_cmap('coolwarm')
fig = plt.figure()
for r in ra:
L = eigvals(J(r))
plt.plot(np.real(L),np.imag(L),'o',color=cmap((r-min(ra))/(max(ra)-min(ra))))
plt.hold(True)
# plot out eigenvalues at the Hopf Bifurcation
L = eigvals(J(r_H))
plt.plot(np.real(L),np.imag(L),'sy')
plt.xlabel('Re$(\lambda)$')
plt.ylabel('Im$(\lambda)$')
plt.title('Eigenvalues of $C^+$ for $r\in[1,{}]$, $r_H={}$'.format(max(ra),r_H))
plt.grid()
plt.show()
"""
Explanation: Stability of Fixed Points
It is straightforward to show that the Lorenz system has a fixed point at $X=Y=Z=0$ for all values of parameters $r,\sigma,b$. Moreover, the Jacobian for the origin is
$$
J = \left[
\begin{matrix}
-\sigma & \sigma & 0 \\
r & -1 & 0 \\
0 & 0 & -b
\end{matrix}
\right]
$$
which has three negative eigenvalues for $r<1$ (Stable sink), and 2 negative and 1 positive eigenvalue for $r > 1$ (3-D Saddle point).
At $r=1$, the Jacobian is singular and the origin undergoes a pitchfork bifurcation where two new fixed points appear at coordinates
$$
C^{\pm} = \left[
\begin{matrix}
\pm\sqrt{b(r-1)} \
\pm\sqrt{b(r-1)} \\
\pm\sqrt{b(r-1)} \\
\end{matrix}
\right]
$$
The stability of $C^+$ and $C^-$ depend on the eigenvalues of the Jacobian at these points, e.g
$$
J = \left[
\begin{matrix}
-\sigma & \sigma & 0 \\
1 & -1 & -x^+ \\
x^+ & x^+ & -b
\end{matrix}
\right]
$$
where $x^+=\sqrt{b(r-1)}$. These eigenvalues can be found as the roots of the cubic polynomial in $\lambda$ given by $|J -\lambda I|=0$ (which I will ask you to find in a homework problem). But here we will just calculate and plot them numerically
End of explanation
"""
# start by running the Lorenz system long enough to get on the attractor
r = 28.0
X0 = [1.,0.,0.]
t = np.arange(0,20,.01)
state = SolveLorenz(X0,t,sigma,r,b)
# extract the final state and perturb it by a small amount epsilon
X0 = state[-1]
epsilon=1.e-6
X1 = X0 + epsilon*np.random.rand(3)
delta_0 = np.sqrt(np.sum((X1-X0)**2))
# Now run both initial conditions
t=np.arange(0.,50.,.0001)
state0 = SolveLorenz(X0,t,sigma,r,b)
state1 = SolveLorenz(X1,t,sigma,r,b)
# Compare the two trajectories as time-series X
plt.figure()
ax = plt.subplot(111)
ax.plot(t,state0[:,0],'r',t,state1[:,0],'b')
plt.xlabel('t')
plt.ylabel('X(t)')
plt.show()
# and in the phase space
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(state0[:,0],state0[:,1],state0[:,2],'r-')
plt.hold(True)
ax.plot(state1[:,0],state1[:,1],state1[:,2],'b-')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
plt.show()
"""
Explanation: Liapunov Exponents
for $r>r_H$, the Lorenz system exhibits a chaotic or "strange" attractor with nonperiodic orbits that exhibit extreme sensitivity to initial conditions. One measure of this sensitivity is the "Liapunov Exponent" $\lambda$ defined roughly by
$$
||\delta(t)|| = ||\delta_0||e^{\lambda t}
$$
where $||\delta(t)||$ is the distance between to trajectories that started with initial conditions an infinitesimal distance $\delta_0<<1$ apart at $t=0$.
Here we will just estimate numerically the Liaponuv exponent for our chaotic system.
End of explanation
"""
# calculate the distance between the two trajectories
delta = state1-state0
delta = np.sqrt(np.sum(delta**2,1))
# and plot them
plt.figure()
plt.semilogy(t,delta)
plt.xlabel('t')
plt.ylabel('$||\delta(t)||$')
plt.grid()
# now fit the first part with a straight line to determine the slope
# we'll pick the line between tmin and tmax to avoid initial transients and later saturation
tmin = 1.
tmax = 12.
imin = int(np.argwhere(t<tmin)[-1])
imax = int(np.argwhere(t>tmax)[0])
tfit = t[imin:imax]
p= np.polyfit(tfit,np.log(delta[imin:imax]),1)
plt.hold(True)
plt.semilogy(tfit,np.exp(p[1]+p[0]*tfit),'r')
plt.title('Liapunov Exponent Estimate $\lambda={}$'.format(p[0]))
plt.show()
"""
Explanation: Estimating the Liapunov exponent
Now we will calculate and plot the distance between the two trajectories on a log plot
End of explanation
"""
Y = state0[:,1]
Z = state0[:,2]
plt.figure()
plt.subplot(2,1,1)
ihalf = int(len(Y)/2.)
print ihalf,len(Y)
plt.plot(Y[:ihalf],Z[:ihalf])
plt.xlabel('Y')
plt.ylabel('Z')
plt.title('Lorenz system, $Y,Z$ plane: $r={}$, $\sigma={}$,$b={}$'.format(r,sigma,b))
plt.grid()
plt.subplot(2,1,2)
plt.plot(t,Z)
plt.xlabel('$t$')
plt.ylabel('$Z(t)$')
plt.title('$Z$ time series')
plt.show()
"""
Explanation: Calculating the Lorenz Map
The Lorenz Map is a discrete dynamical system that attempts to predict the $(n+1)$-th maximum $Z_{n+1}$ as a function of the previous peak $Z_{n}$. Viewing the dynamical system in either the $Y,Z$ plane or as the time series $Z(t)$, we see roughly two types of behavior: a set of growing oscillations with increasing amplitude in $Z$, separated by jumps to a smaller value of $Z$. Qualitatively, it appears that during these jumps the larger the value of $Z_n$, the smaller the value of $Z_{n+1}$ after the jump.
End of explanation
"""
# first let's estimate the centered derivative of Z to isolate the extrema
dZ = np.zeros(Z.shape)
dZ[1:-2] = Z[2:-1] - Z[0:-3]
dZ.shape
plt.figure()
plt.plot(t,dZ,t,np.zeros(t.shape),'k:')
plt.ylabel('$dZ$')
plt.xlabel('t')
# now let's find all intervals that contain zero crossings
icross = np.nonzero(dZ[:-2]*dZ[1:-1] <= 0)
Zextreme = Z[icross]
# and pick out all Extremes greater than mean(Z)
meanZ = np.mean(Z)
Zn = Zextreme[Zextreme > meanZ]
# now plot the Lorenz map Z_{n+1} vs Z{n}
plt.figure()
plt.plot(Zn[:-2],Zn[1:-1],'bo')
xlim = plt.gca().get_xlim()
plt.hold(True)
plt.plot(xlim,xlim,'k')
plt.xlabel('$Z_n$')
plt.ylabel('$Z_{n+1}$')
plt.title('Lorenz map: $r={}$, $\sigma={}$, $b={}$'.format(r,sigma,b))
plt.show()
"""
Explanation: The Lorenz Map
The Lorenz map is simply a plot of the peak amplitude of the $(n+1)$-th maximum $Z_{n+1}$ as a function of the previous peak $Z_{n}$. Here we extract the maxima with a bit of discrete hackery where we
approximate the derivative $Z'$ of $Z$
approximate extrema using intervals of the discrete solution with zero crossings of $Z'$
approximate local maxima as extrema with $Z$ greater than mean($Z$)
given the maxima $Z_n$, plot $Z_{n+1}$ vs $Z_{n}$.
End of explanation
"""
|
lmoresi/UoM-VIEPS-Intro-to-Python | Notebooks/Numpy+Scipy/1 - Introduction to Numpy.ipynb | mit | import numpy as np
## This is a list of everything in the module
np.__all__
an_array = np.array([0,1,2,3,4,5,6])
print an_array
print
print type(an_array)
print
help(an_array)
A = np.zeros((4,4))
print A
print
print A.shape
print
print A.diagonal()
print
A[0,0] = 2.0
print A
np.fill_diagonal(A, 1.0)
print A
B = A.diagonal()
B[0] = 2.0
for i in range(0,A.shape[0]):
A[i,i] = 1.0
print A
print
A[:,2] = 2.0
print A
print
A[2,:] = 4.0
print A
print
print A.T
print
A[...] = 0.0
print A
print
for i in range(0,A.shape[0]):
A[i,:] = float(i)
print A
print
for i in range(0,A.shape[0]):
A[i,:] = i
print A
print
print A[::2,::2]
print
print A[::-1,::-1]
"""
Explanation: Numpy data structures
When we looked at python data structures, it was obvious that the only way to deal with arrays of values (matrices / vectors etc) would be via lists and lists of lists.
This is slow and inefficient in both execution and writing code.
Numpy attempts to fix this.
End of explanation
"""
%%timeit
B = np.zeros((1000,1000))
for i in range(0,1000):
for j in range(0,1000):
B[i,j] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[:,:] = 2.0
%%timeit
B = np.zeros((1000,1000))
B[...] = 2.0
"""
Explanation: Speed
End of explanation
"""
print A.reshape((2,8))
print
print A.reshape((-1))
print A.ravel()
print
print A.reshape((1,-1))
print
%%timeit
A.reshape((1,-1))
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
B[...] = A[:,:].ravel()
%%timeit
elements = A.shape[0]*A.shape[1]
B = np.empty(elements)
for i in range(0,A.shape[0]):
for j in range(0,A.shape[1]):
B[i+j*A.shape[1]] = A[i,j]
"""
Explanation: Views of the data (are free)
It costs very little to look at data in a different way (e.g. to view a 2D array as a 1D vector).
Making a copy is a different story
End of explanation
"""
AA = np.zeros((100,100))
AA[10,11] = 1.0
AA[99,1] = 2.0
cond = np.where(AA >= 1.0)
print cond
print AA[cond]
print AA[ AA >= 1]
"""
Explanation: Exercise: Try this again for a 10000x10000 array
Indexing / broadcasting
In numpy, we can index an array by explicitly specifying elements, by specifying slices, or by supplying lists (or arrays) of indices. We can also supply a boolean array of the same shape as the original array, which will select and return an array of all those entries where True applies.
Although some of these might seem difficult to use, they are often the result of other numpy operations. For example np.where converts a truth array to a list of indices.
End of explanation
"""
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
b = np.array([2.0])
print a * b
print a * 2.0
"""
Explanation: Broadcasting is a way of looping on arrays which have "compatible" but unequal sizes.
For example, the element-wise multiplication of 2 arrays
```python
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 2.0])
print a * b
```
has an equivalent:
```python
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0])
print a * b
```
or
```python
a = np.array([1.0, 2.0, 3.0])
b = 2.0
print a * b
```
in which the "appropriate" interpretation of b is made in each case to achieve the result.
End of explanation
"""
print a.shape
print b.shape
print (a*b).shape
print (a+b).shape
aa = a.reshape(1,3)
bb = b.reshape(1,1)
print aa.shape
print bb.shape
print (aa*bb).shape
print (aa+bb).shape
"""
Explanation: Arrays are compatible as long as each of their dimensions (shape) is either equal to the other or 1.
Thus, above, the multiplication works when a.shape is (1,3) and b.shape is either (1,3) or (1,1)
(Actually, these are (3,) and (1,) in the examples above.)
End of explanation
"""
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0,2.0,3.0]])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
"""
Explanation: In multiple dimensions, the rule applies but, perhaps, is less immediately intuitive:
End of explanation
"""
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([1.0,2.0,3.0])
print a + b
print
print a.shape
print b.shape
print (a+b).shape
"""
Explanation: Note that this also works for
End of explanation
"""
a = np.array([[ 0.0, 0.0, 0.0],
[10.0,10.0,10.0],
[20.0,20.0,20.0],
[30.0,30.0,30.0]])
b = np.array([[1.0],[2.0],[3.0]])
print a.shape
print b.shape
print (a+b).shape
"""
Explanation: But not for
End of explanation
"""
X = np.arange(0.0, 2.0*np.pi, 0.0001)
print X[0:100]
import math
math.sin(X)
np.sin(X)
S = np.sin(X)
C = np.cos(X)
S2 = S**2 + C**2
print S2
print S2 - 1.0
test = np.isclose(S2,1.0)
print test
print np.where(test == False)
print np.where(S2 == 0.0)
"""
Explanation: Vector Operations
End of explanation
"""
X = np.linspace(0.0, 2.0*np.pi, 10000000)
print X.shape
# ...
%%timeit
S = np.sin(X)
%%timeit
S = np.empty_like(X)
for i, x in enumerate(X):
S[i] = math.sin(x)
X = np.linspace(0.0, 2.0*np.pi, 10000000)
Xj = X + 1.0j
print Xj.shape, Xj.dtype
%%timeit
Sj = np.sin(Xj)
import cmath
%%timeit
Sj = np.empty_like(Xj)
for i, x in enumerate(Xj):
Sj[i] = cmath.sin(x)
"""
Explanation: Exercise: find out how long it takes to compute the sin, sqrt, power of a 1000000 length vector (array). How does this speed compare to using the normal math functions element by element in the array ? What happens if X is actually a complex array ?
Hints: you might find it useful to know about:
- np.linspace v np.arange
- np.empty_like or np.zeros_like
- the python enumerate function
- how to write a table in markdown
| description | time | notes |
|-----------------|--------|-------|
| np.sin | ? | |
| math.sin | ? | |
| | ? | - |
End of explanation
"""
# Test the results here
A = np.array(([1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0]))
B = np.array(([3.0,3.0,3.0,3.0],[4.0,4.0,4.0,4.0]))
C = np.array(([5.0,5.0,5.0,5.0],[6.0,6.0,6.0,6.0]))
R = np.concatenate((A,B,C))
print R
print
R = np.concatenate((A,B,C), axis=1)
print R
"""
Explanation: Exercise: look through the functions below from numpy, choose 3 of them and work out how to use them on arrays of data. Write a few lines to explain what you find.
np.max v. np.argmax
np.where
np.logical_and
np.fill_diagonal
np.count_nonzero
np.isinf and np.isnan
Here is an example:
np.concatenate takes a number of arrays and glues them together. For 1D arrays this is simple:
```python
A = np.array([1.0,1.0,1.0,1.0])
B = np.array([2.0,2.0,2.0,2.0])
C = np.array([3.0,3.0,3.0,3.0])
R = np.concatenate((A,B,C))
array([ 1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
```
an equivalent statement is np.hstack((A,B,C)) but note the difference with np.vstack((A,B,C))
With higher dimensional arrays, the gluing takes place along one axis:
```python
A = np.array(([1.0,1.0,1.0,1.0],[2.0,2.0,2.0,2.0]))
B = np.array(([3.0,3.0,3.0,3.0],[4.0,4.0,4.0,4.0]))
C = np.array(([5.0,5.0,5.0,5.0],[6.0,6.0,6.0,6.0]))
R = np.concatenate((A,B,C))
print R
print
R = np.concatenate((A,B,C), axis=1)
print R
```
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/guide/distributed_training.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
# 텐서플로 패키지 가져오기
import tensorflow as tf
"""
Explanation: 텐서플로로 분산 훈련하기
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/distributed_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org에서 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도
불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.
이 번역에 개선할 부분이 있다면
tensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.
문서 번역이나 리뷰에 참여하려면
docs-ko@tensorflow.org로
메일을 보내주시기 바랍니다.
Overview
tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines, or TPUs. Using this API, you can distribute your existing models and training code with only minimal code changes.
tf.distribute.Strategy has been designed with these key goals in mind:
Easy to use and support multiple user segments, including researchers, machine learning engineers, etc.
Provide good performance out of the box.
Easy switching between strategies.
tf.distribute.Strategy can be used with TensorFlow's high-level APIs, tf.keras and tf.estimator, with just a couple of lines of code change. It also provides an API that can be used with custom training loops (and, in general, with any computation using TensorFlow).
In TensorFlow 2.0, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy intends to support both of these modes of execution. Although training is discussed most of the time in this guide, note that this API can also be used for distributing evaluation and prediction on different platforms.
As you will see shortly, very little of your code needs to change to use tf.distribute.Strategy. This is because the components underlying TensorFlow (variables, layers, models, optimizers, metrics, summaries, checkpoints, and so on) have been modified to be aware of the strategy.
In this guide, we will look at the various types of strategies and how to use them in different situations.
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
"""
Explanation: Types of strategies
tf.distribute.Strategy intends to cover a number of different use cases. Some of these combinations are currently supported, and others will be added in the future. Let's look at some of them.
Synchronous vs asynchronous training: These are the two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, and gradients are aggregated at each step. In async training, all workers train independently over the input data and update variables asynchronously. Typically, sync training is implemented via all-reduce and async training via a parameter server architecture.
Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or onto multiple machines in a network (with zero or more GPUs each), or onto Cloud TPUs.
In order to support these use cases, six strategies are currently available. The rest of this guide explains which of them are supported in which scenarios in TF 2.2 at this time. Here is a quick overview:
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:-------------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
| Keras API | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
| Custom training loop | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
| Estimator API | Limited support | Not supported | Limited support | Limited support | Limited support |
MirroredStrategy
tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device, and each variable in the model is mirrored across all the replicas. Together, these variables correspond to a single conceptual variable called a MirroredVariable; they are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices. All-reduce aggregates tensors across all the devices by adding them up, and makes the sum available on each device. It is a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. Many all-reduce algorithms and implementations are available, depending on the type of communication available between devices. By default, NVIDIA NCCL is used as the all-reduce implementation. You can choose from a few other provided options, or write your own.
Here is the simplest way of creating a MirroredStrategy:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
"""
Explanation: This creates a MirroredStrategy instance. It will use all the GPUs that TensorFlow detects, and NCCL for cross-device communication.
If you want to use only some of the GPUs on your machine, you can do so like this:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
"""
Explanation: If you wish to change the cross-device communication method, pass an instance of tf.distribute.CrossDeviceOps in the cross_device_ops argument. Besides the current default, tf.distribute.NcclAllReduce, two additional options are provided: tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice.
End of explanation
"""
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
"""
Explanation: CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy also does synchronous training. Variables are not mirrored, however; instead, they are managed on the CPU, and operations are replicated across all the local GPUs. If there is only one GPU, all variables and operations are placed on that GPU.
Create a CentralStorageStrategy instance as follows:
End of explanation
"""
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
"""
Explanation: This creates a CentralStorageStrategy instance, which will use all visible GPUs and the CPU. Updates to variables on the replicas will be aggregated before being applied to the variables.
Note: This strategy is experimental: it is currently a work in progress and is being improved to cover more use cases, so expect the API to change in the future.
MultiWorkerMirroredStrategy
tf.distribute.experimental.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each of which can have multiple GPUs. Like MirroredStrategy, it creates copies of all the variables in the model on each device across all workers.
It uses CollectiveOps as the multi-worker all-reduce communication method used to keep the variables in sync. A collective op is a single op in the TensorFlow graph that automatically chooses an all-reduce algorithm in the TensorFlow runtime according to the hardware, the network topology, and the tensor sizes.
It also implements additional performance optimizations. For example, there is a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, it is designed with a plugin architecture, so in the future users will be able to plug in algorithms that are better tuned for their hardware. Note that these collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating a MultiWorkerMirroredStrategy:
End of explanation
"""
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
"""
Explanation: MultiWorkerMirroredStrategy currently lets you choose between two implementations of collective ops. CollectiveCommunication.RING implements ring-based collectives using gRPC as the communication layer. CollectiveCommunication.NCCL uses Nvidia's NCCL to implement the collectives. CollectiveCommunication.AUTO defers the choice to the runtime. The best choice of collective implementation depends on the number and kind of GPUs and on the network interconnect in the cluster. You can specify it, for example, like this:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
"""
Explanation: One key difference when using multiple workers, compared with using multiple GPUs, is the multi-worker setup. The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the "TF_CONFIG" section below for details on how this is done.
Note: This strategy is experimental: it is currently a work in progress and is being improved to cover more use cases, so expect the API to change in the future.
TPUStrategy
tf.distribute.experimental.TPUStrategy lets you run TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs, designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the TensorFlow Research Cloud, and Google Compute Engine.
In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy: it implements synchronous distributed training. TPUs provide their own efficient implementation of all-reduce and other collective operations across multiple TPU cores, and this implementation is used in TPUStrategy.
Here is how you would instantiate TPUStrategy:
Note: To run this code in Colab, you should select TPU as the Colab runtime. A tutorial showing how to use TPUStrategy will be added soon.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu=tpu_address)
tf.config.experimental_connect_to_host(cluster_resolver.master())
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. To use it with Cloud TPUs, you must specify the name of your TPU resource in the tpu argument. The TPU system also has to be initialized explicitly at program startup, before any computation runs: initialization wipes out the TPU memory, so any state would otherwise be lost.
Note: This strategy is experimental: it is currently a work in progress and is being improved to cover more use cases, so expect the API to change in the future.
ParameterServerStrategy
tf.distribute.experimental.ParameterServerStrategy uses parameter servers for training on multiple machines. In this setup, some machines act as workers and some as parameter servers. Each variable of the model is placed on one parameter server, and computation is replicated across all the GPUs of all the workers.
In terms of code, it looks similar to the other strategies:
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
For multi-worker training, the parameter servers and workers in the cluster must be configured via the "TF_CONFIG" environment variable, as explained in the "TF_CONFIG" section below.
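The "TF_CONFIG" variable referred to here is a JSON string set in each process's environment. As a minimal illustration (not taken from this guide; the host:port addresses and the task assignment are hypothetical placeholders), it might look like this for a two-worker cluster:

```python
import json
import os

# Hypothetical two-worker cluster; replace the host:port values with real addresses.
tf_config = {
    "cluster": {
        "worker": ["host1:12345", "host2:23456"]
    },
    # This particular process is worker 0 (the first entry in the list above).
    "task": {"type": "worker", "index": 0}
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
print(os.environ["TF_CONFIG"])
```

Each process in the cluster sets the same "cluster" dict but its own "task" entry.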
So far we have seen what the different strategies are and how to instantiate them. In the next few sections we will look at the different ways they can be used to distribute training. This guide shows short code snippets and links to longer tutorials that you can run end to end.
Using tf.distribute.Strategy with Keras
tf.distribute.Strategy is integrated into tf.keras, TensorFlow's implementation of the Keras API specification. tf.keras is a high-level API to build and train models. Because distribution strategies work with the tf.keras backend, Keras users can distribute training written in the Keras training framework with almost no changes to their program: (1) create an instance of the appropriate tf.distribute.Strategy and (2)
move the creation and compilation of the Keras model inside strategy.scope. All kinds of Keras models are supported: Sequential, functional API, and subclassed.
Here is a snippet of code that does this for a very simple Keras model with one dense layer:
End of explanation
"""
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
"""
Explanation: In this example we used MirroredStrategy, so it can run on a machine with multiple GPUs. strategy.scope() indicates which parts of the code to distribute. Creating a model inside this scope creates mirrored variables instead of regular variables, and compiling within the scope means the author intends to train the model using this strategy. Once everything is set up, you can call the model's fit function as usual.
MirroredStrategy takes care of replicating the model's training onto the available GPUs, aggregating gradients, and more.
End of explanation
"""
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
"""
Explanation: Above, we used a tf.data.Dataset for the training and evaluation input. You can also use numpy arrays:
End of explanation
"""
# Compute the global batch size using the number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
"""
Explanation: In both cases (dataset or numpy), each batch of the given input is divided equally among the replicated jobs. For instance, if you use MirroredStrategy with 2 GPUs, a batch of size 10 is split between the 2 GPUs, so each receives 5 input examples per step. Each epoch will therefore train faster as you add GPUs. Typically, you would also increase your batch size as you add accelerators, so as to make effective use of the extra computing power, and depending on the model you may need to re-tune your learning rate. You can use strategy.num_replicas_in_sync to get the number of replicas.
End of explanation
"""
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
"""
Explanation: What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras API | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
Examples and tutorials
Here is a list of tutorials and examples that illustrate the Keras integration described above:
Tutorial to train MNIST with MirroredStrategy.
Official ResNet50 training with ImageNet data using MirroredStrategy.
ResNet50 trained with ImageNet data on Cloud TPUs with TPUStrategy. This example currently works only with TensorFlow 1.x.
Tutorial to train MNIST using MultiWorkerMirroredStrategy.
NCF trained using MirroredStrategy.
Transformer trained using MirroredStrategy.
Using tf.distribute.Strategy with custom training loops
As you've seen, using tf.distribute.Strategy with the high-level APIs takes only a couple of lines of code changes. With a little more effort, users who are not using these frameworks can use tf.distribute.Strategy as well.
TensorFlow is used for a wide variety of use cases. Some users, such as researchers, want more flexibility and control over their training loops, which can make it hard to use high-level APIs such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator and discriminator steps each round. Similarly, the high-level frameworks are not very suitable for reinforcement learning. So these users usually write their own training loops.
For these users, the tf.distribute.Strategy classes provide a core set of methods. Using them may require some minor restructuring of the code initially, but once that is done, you can switch between GPUs, TPUs, and multiple machines simply by changing the strategy instance.
Here is a brief look at what this involves, using the same Keras model as in the earlier training example.
First, create the model and optimizer inside the strategy's scope. This ensures that any variables created by the model or the optimizer are mirrored variables.
End of explanation
"""
with mirrored_strategy.scope():
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
"""
Explanation: Next, create the input dataset and call tf.distribute.Strategy.experimental_distribute_dataset to distribute the dataset based on the strategy.
End of explanation
"""
@tf.function
def train_step(dist_inputs):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
return cross_entropy
per_example_losses = mirrored_strategy.run(
step_fn, args=(dist_inputs,))
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
return mean_loss
"""
Explanation: Next, define one step of the training. Use tf.GradientTape to compute the gradients and the optimizer to apply those gradients to update the model's variables. To distribute this training step, put it in a function step_fn and pass it, along with the dataset inputs obtained from the dist_dataset created before, to the tf.distribute.Strategy.experimental_run_v2 method:
End of explanation
"""
with mirrored_strategy.scope():
for inputs in dist_dataset:
print(train_step(inputs))
"""
Explanation: A few other things to note in the code above:
We used tf.nn.softmax_cross_entropy_with_logits to compute the loss, and dividing the summed loss by the global batch size is important. All the replicas are training in sync, so the number of examples processed in each step of training equals the global batch size. The loss must therefore be divided by the global batch size, not by the per-replica batch size.
We used the tf.distribute.Strategy.reduce API to aggregate the results returned by mirrored_strategy.run. mirrored_strategy.run returns a result from each replica in the strategy, and there are several ways to consume them: you can reduce them to get an aggregated value, or use tf.distribute.Strategy.experimental_local_results to get the list of values, one per replica.
When apply_gradients is called within a distribution strategy scope, its behavior is modified. Specifically, before applying gradients on each parallel instance during synchronous training, the gradients are summed over all the replicas.
Now that the training step is defined, the final step is to iterate over dist_dataset to run the training:
End of explanation
"""
with mirrored_strategy.scope():
iterator = iter(dist_dataset)
for _ in range(10):
print(train_step(next(iterator)))
"""
Explanation: In the example above, we iterated over dist_dataset to provide input to the training. You can also use numpy inputs via tf.distribute.Strategy.make_experimental_numpy_dataset: call this API to create a dataset before calling tf.distribute.Strategy.experimental_distribute_dataset.
Another way to iterate over your data is to use an iterator explicitly. This is useful when you want to run for a given number of steps rather than iterating over the entire dataset. Create an iterator and explicitly call next on it to get the next batch of input data. The loop above can then be rewritten as:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
"""
Explanation: This covers the simplest case of using the tf.distribute.Strategy API to distribute a custom training loop. The APIs are still being improved, and since this use case requires more work on the user's part, a separate, more detailed guide will follow.
What's supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:------------------- |:----------------------------- |:------------------------ |:------------------------- |
| Custom training loop | Supported | Supported | Experimental support | Experimental support | Support planned post 2.3 |
Examples and tutorials
Here are some examples of using distribution strategies with custom training loops:
Tutorial to train MNIST using MirroredStrategy.
DenseNet example using MirroredStrategy.
BERT example trained using MirroredStrategy and TPUStrategy.
This example is particularly helpful for understanding how to load from a checkpoint and generate periodic checkpoints during distributed training.
NCF example trained using MirroredStrategy, which can be enabled with the keras_use_ctl flag.
NMT example trained using MirroredStrategy.
Using tf.distribute.Strategy with Estimator
tf.estimator is a distributed-training TensorFlow API that originally supported the asynchronous parameter server approach. As with Keras, tf.distribute.Strategy can be used with tf.estimator. Estimator users can change how their training is distributed with very few code changes, so Estimator users can now do synchronous distributed training on multiple GPUs, multiple workers, and TPUs. This support in Estimator is, however, limited; see the "What's supported now?" section below for details.
Using tf.distribute.Strategy with Estimator is slightly different from the Keras case. Instead of using strategy.scope, the strategy object is passed into the Estimator's RunConfig.
Here is a snippet of code that shows this with a premade estimator, LinearRegressor, and MirroredStrategy:
End of explanation
"""
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
"""
Explanation: We used a premade Estimator here, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This differs from Keras, where the same strategy was used for both training and evaluation.
Now you can train and evaluate this Estimator with an input function:
End of explanation
"""
ibm-cds-labs/pixiedust | notebook/Intro to PixieDust Spark 2.x.ipynb | apache-2.0
!pip install --user --upgrade pixiedust
"""
Explanation: Hello PixieDust!
This sample notebook provides you with an introduction to many features included in PixieDust. You can find more information about PixieDust at https://pixiedust.github.io/pixiedust/. To ensure you are running the latest version of PixieDust uncomment and run the following cell. Do not run this cell if you installed PixieDust locally from source and want to continue to run PixieDust from source.
End of explanation
"""
import pixiedust
"""
Explanation: Import PixieDust
Run the following cell to import the PixieDust library. You may need to restart your kernel after importing. Follow the instructions, if any, after running the cell. Note: You must import PixieDust every time you restart your kernel.
End of explanation
"""
pixiedust.enableJobMonitor();
"""
Explanation: Enable the Spark Progress Monitor
PixieDust includes a Spark Progress Monitor bar that lets you track the status of your Spark jobs. You can find more info at https://pixiedust.github.io/pixiedust/sparkmonitor.html. Note: there is a known issue with the Spark Progress Monitor on Spark 2.1. Run the following cell to enable the Spark Progress Monitor:
End of explanation
"""
pixiedust.installPackage("graphframes:graphframes:0.1.0-spark1.6")
print("done")
"""
Explanation: Example use of the PackageManager
You can use the PackageManager component of Pixiedust to install and uninstall maven packages into your notebook kernel without editing configuration files. This component is essential when you run notebooks from a hosted cloud environment and do not have access to the configuration files. You can find more info at https://pixiedust.github.io/pixiedust/packagemanager.html. Run the following cell to install the GraphFrame package. You may need to restart your kernel after installing new packages. Follow the instructions, if any, after running the cell.
End of explanation
"""
pixiedust.printAllPackages()
"""
Explanation: Run the following cell to print out all installed packages:
End of explanation
"""
sparkSession = SparkSession.builder.getOrCreate()
d1 = sparkSession.createDataFrame(
[(2010, 'Camping Equipment', 3),
(2010, 'Golf Equipment', 1),
(2010, 'Mountaineering Equipment', 1),
(2010, 'Outdoor Protection', 2),
(2010, 'Personal Accessories', 2),
(2011, 'Camping Equipment', 4),
(2011, 'Golf Equipment', 5),
(2011, 'Mountaineering Equipment',2),
(2011, 'Outdoor Protection', 4),
(2011, 'Personal Accessories', 2),
(2012, 'Camping Equipment', 5),
(2012, 'Golf Equipment', 5),
(2012, 'Mountaineering Equipment', 3),
(2012, 'Outdoor Protection', 5),
(2012, 'Personal Accessories', 3),
(2013, 'Camping Equipment', 8),
(2013, 'Golf Equipment', 5),
(2013, 'Mountaineering Equipment', 3),
(2013, 'Outdoor Protection', 8),
(2013, 'Personal Accessories', 4)],
["year","zone","unique_customers"])
display(d1)
"""
Explanation: Example use of the display() API
PixieDust lets you visualize your data in just a few clicks using the display() API. You can find more info at https://pixiedust.github.io/pixiedust/displayapi.html. The following cell creates a DataFrame and uses the display() API to create a bar chart:
End of explanation
"""
python_var = "Hello From Python"
python_num = 10
"""
Explanation: Example use of the Scala bridge
Data scientists working with Spark may occasionally need to call out to one of the hundreds of libraries available on spark-packages.org which are written in Scala or Java. PixieDust provides a solution to this problem by letting users directly write and run Scala code in its own cell. It also lets variables be shared between Python and Scala and vice-versa. You can find more info at https://pixiedust.github.io/pixiedust/scalabridge.html.
Start by creating a python variable that we'll use in scala:
End of explanation
"""
%%scala
println(python_var)
println(python_num+10)
val __scala_var = "Hello From Scala"
"""
Explanation: Create scala code that use the python_var and create a new variable that we'll use in Python:
End of explanation
"""
print(__scala_var)
"""
Explanation: Use the __scala_var from python:
End of explanation
"""
pixiedust.sampleData()
"""
Explanation: Sample Data
PixieDust includes a number of sample data sets. You can use these sample data sets to start playing with the display() API and other PixieDust features. You can find more info at https://pixiedust.github.io/pixiedust/loaddata.html. Run the following cell to view the available data sets:
End of explanation
"""
pixiedust.installPackage("com.databricks:spark-csv_2.10:1.5.0")
pixiedust.installPackage("org.apache.commons:commons-csv:0")
"""
Explanation: Example use of sample data
To use sample data locally run the following cell to install required packages. You may need to restart your kernel after running this cell.
End of explanation
"""
d2 = pixiedust.sampleData(1)
"""
Explanation: Run the following cell to get the first data set from the list. This will return a DataFrame and assign it to the variable d2:
End of explanation
"""
display(d2)
"""
Explanation: Pass the sample data set (d2) into the display() API:
End of explanation
"""
d3 = pixiedust.sampleData("https://openobjectstore.mybluemix.net/misc/milliondollarhomes.csv")
"""
Explanation: You can also download data from a CSV file into a DataFrame which you can use with the display() API:
End of explanation
"""
%pixiedustLog -l debug
"""
Explanation: PixieDust Log
End of explanation
"""
%%scala
val __scala_version = util.Properties.versionNumberString
import platform
print('PYTHON VERSION = ' + platform.python_version())
print('SPARK VERSION = ' + sc.version)
print('SCALA VERSION = ' + __scala_version)
"""
Explanation: Environment Info.
The following cells will print out information related to your notebook environment.
End of explanation
"""
santiago-salas-v/walas | PAT MeOH Kinetik.ipynb | mit
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
from tc_lib import *
%matplotlib inline
plt.style.use('seaborn-deep')
# Stoichiometric coefficients
nuij = np.zeros([len(namen), 3])
# CO2 hydrogenation
nuij[[
namen.index('CO2'),
namen.index('H2'),
namen.index('CH3OH'),
namen.index('H2O'),
namen.index('CO'),
],0] = np.array([-1, -3, +1, +1, 0], dtype=float)
# CO hydrogenation
nuij[[
namen.index('CO2'),
namen.index('H2'),
namen.index('CH3OH'),
namen.index('H2O'),
namen.index('CO'),
],1] = np.array([0, -2, +1, 0, -1], dtype=float)
# RWGS reverse water-gas shift reaction
nuij[[
namen.index('CO2'),
namen.index('H2'),
namen.index('CH3OH'),
namen.index('H2O'),
namen.index('CO'),
],2] = np.array([+1, +1, 0, -1, -1], dtype=float)
# Temperature-dependent parameters, following the article
def k_gg(t):
    # Rate constants of the 3 reactions
k_1 = 10**(3066/t-10.592)
k_2 = 10**(5139/t-12.621)
k_3 = 10**(2073/t-2.029)
    # Fitted parameters of the kinetic model
# A(i) exp(B(i)/RT)
a = np.array([
        0.499, 6.62e-11, 3453.38, 1.07, 1.22e10
], dtype=float)
b = np.array([
17197, 124119, 0, 36696, -94765
], dtype=float)
k_h2 = a[0]**2 * np.exp(2 * b[0]/(r * t))
k_h2o = a[1] * np.exp(b[1]/(r * t))
k_h2o_d_k_8_k_9_k_h2 = a[2] * np.exp(b[2]/(r * t))
k5a_k_2_k_3_k_4_k_h2 = a[3] * np.exp(b[3]/(r * t))
k1_strich = a[4] * np.exp(b[4]/(r * t))
return np.array([
k_1, k_2, k_3,
k_h2, k_h2o, k_h2o_d_k_8_k_9_k_h2,
k5a_k_2_k_3_k_4_k_h2, k1_strich
])
# Equilibrium constants at T, computed
def k_t(t, nuij):
h_t = h(t)
g_t = g(t, h_t)
k_t = k(t, g_t, nuij)
return k_t
t = np.linspace(50, 1000, 35) + 273.15
fig = plt.figure()
ax = fig.add_subplot(111)
for i in range(nuij.shape[1]):
ax.plot(1000/t,
[k_t(temp, nuij[:,i]) for temp in t],
            'o', label='K_computed'+str(i+1))
ax.plot(1000/t, k_gg(t)[i],
            '-', label='K_article'+str(i+1))
ax.set_yscale('log')
ax.set_xlabel(r'$\frac{1000}{T/K}$')
ax.legend()
print('Table 8 Kp(RWGS)')
for t in np.array([467, 489, 522, 511, 360, 381, 404]):
print(
        str(t) + 'K' + '. Computed ' +
        '{:.2e}'.format(1/k_t(t+273.15, nuij)[2]*10) +
        '\t' + 'Article ' +
'{:.2e}'.format(1/k_gg(t+273.15)[2]*10)
)
def r_i(t, p_i):
    # Partial pressures of the components, indexed by name
    p_co2 = p_i[namen.index('CO2')]
    p_h2 = p_i[namen.index('H2')]
    p_ch3oh = p_i[namen.index('CH3OH')]
    p_h2o = p_i[namen.index('H2O')]
    p_co = p_i[namen.index('CO')]
    [k_1, k_2, k_3,
     k_h2, k_h2o, k_h2o_d_k_8_k_9_k_h2,
     k5a_k_2_k_3_k_4_k_h2, k1_strich] = k_gg(t)
    # Common adsorption (inhibition) term in both rate denominators
    beta = (1 + k_h2o_d_k_8_k_9_k_h2 * p_h2o / p_h2 +
            np.sqrt(k_h2 * p_h2) + k_h2o * p_h2o)
    # Denominator cubed, per the r_MeOH expression
    r_meoh = k5a_k_2_k_3_k_4_k_h2 * p_co2 * p_h2 * (
        1 - 1 / k_1 * p_h2o * p_ch3oh / (
            p_h2**3 * p_co2
        )
    ) / beta**3
    r_rwgs = k1_strich * p_co2 * (
        1 - k_3 * p_h2o * p_co / (p_co2 * p_h2)
    ) / beta
    return np.array([r_meoh, r_rwgs])
"""
Explanation: Vanden Bussche, Froment 1995
End of explanation
"""
print('1.65*np.exp(-(-94765)/8.3145/501.57)/1e10=' +
'{:g}'.format(1.65*np.exp(-(-94765)/8.3145/501.57)/1e10))
a = np.array([
0.499, 6.62e-11, 3453.38, 1.07, 1.22e10
], dtype=float)
b = np.array([
17197, 124119, 0, 36696, -94765
], dtype=float)
print('')
print('k5a_k_2_k_3_k_4_k_h2:')
print('A_Asp=ln(A/1000)=' + '{:g}'.format(np.log(a[3]/1000.)))
print('B_Asp=B/R=' + '{:g}'.format(b[3]/8.3145)+'K')
print('k5a_k_2_k_3_k_4_k_h2 / k_1:')
print('A_Asp=ln(A/1000)-C ln(10)=' +
'{:g}'.format(np.log(a[3]/1000.)-(-10.592*np.log(10))))
print('B_Asp=B/R-D ln(10)=' +
'{:g}'.format(b[3]/8.3145 - 3066*np.log(10))+'K')
print('')
print('k1_strich:')
print('A_Asp=ln(A/1000)=' + '{:g}'.format(np.log(a[4]/1000.)))
print('B_Asp=B/R=' + '{:g}'.format(b[4]/8.3145)+'K')
print('k1_strich * k_3:')
print('A_Asp=ln(A/1000)+C ln(10)=' +
'{:g}'.format(np.log(a[4]/1000.)+(-2.029*np.log(10))))
print('B_Asp=B/R+D ln(10)=' +
'{:g}'.format(b[4]/8.3145 + 2073*np.log(10))+'K')
print('')
print('k_h2o_d_k_8_k_9_k_h2:')
print('A_Asp=ln(A)=' + '{:g}'.format(np.log(a[2])))
print('B_Asp=B/R=' + '{:g}'.format(b[2]/8.3145)+'K')
print('')
print('k_h2^(1/2):')
print('A_Asp=ln(A)=' + '{:g}'.format(np.log(a[0])))
print('B_Asp=B/R=' + '{:g}'.format(b[0]/8.3145)+'K')
print('')
print('k_h2o:')
print('A_Asp=ln(A)=' + '{:g}'.format(np.log(a[1])))
print('B_Asp=B/R=' + '{:g}'.format(b[1]/8.3145)+'K')
"""
Explanation: Conversion to Aspen form
End of explanation
"""
import locale
import numpy as np
from IPython.display import Latex
locale.setlocale(locale.LC_ALL, '')
phi = 0.3
table_latex_str = [
r'\text{Hohlraumvolumen: } \phi=', locale.format('%4.8g',phi),
r'\\',
r'\begin{array}{lccccc}',
'\hline ',
'Var & A(i) & A(i)_{Hysys} & B(i) & B(i)_{Hysys} & ',
r'\text{Hysys Param}', r'\\',
r'\text{Form } K=A(i) \cdot exp\left(\frac{B(i)}{R T}\right) ','&',
r' \frac{mol}{kg_{Kat}\cdot s} & \frac{mol}{m^3_{Gas}\cdot s}',
r' & \frac{J}{mol} & \frac{kJ}{kgmol} & ',
r'\text{Form } K=A_{Hysys} \cdot exp\left(-\frac{E_{Hysys}}{R T}\right) ',
r'\\',
'\hline ',
r'\\','r_{MeOH}', r'\\', r'\\',
"k_{5a}' K_2' K_3 K_4 K_{H_2}", '&',
locale.format('%4.8g', 7070.34*np.exp(-36696/(8.3145*501.57))), '&',
locale.format('%4.8g', 7070.34*np.exp(-36696/(8.3145*501.57)
)*1190/phi/1000 ),
'&',
locale.format('%4.8g', 36696), '&',
locale.format('%4.8g', -36696), '&', 'k_{Vorwärts}',
r'\\',
r"k_{5a}' K_2' K_3 K_4 K_{H_2}\cdot \frac{1}{K_1}", '&',
locale.format('%4.8g', 7070.34*
np.exp(-36696/(8.3145*501.57)
)*
10**10.592
),'&',
locale.format('%4.8g', 7070.34*
np.exp(-36696/(8.3145*501.57)
)*
1190/phi/1000 *10**10.592
),'&',
locale.format('%4.8g', (36696-3066*8.3145*np.log(10))),'&',
locale.format('%4.8g', -(36696-3066*8.3145*np.log(10))),'&',
"k'_{Rückwärts}", r'\\'
r'\\','r_{RWGS}', r'\\', r'\\',
"k_1'", '&',
locale.format('%4.8g', 1.65*np.exp(-(-94765)/(8.3145*501.57))), '&',
locale.format('%4.8g', 1.65*np.exp(-(-94765)/(8.3145*501.57)
)*1190/phi/1000 ), '&',
locale.format('%4.8g', -94765), '&',
locale.format('%4.8g', 94765), '&', 'k_{Vorwärts}',
r'\\',
r"k_1' \cdot K_3", '&',
locale.format('%4.8g', 1.65*np.exp(-(-94765)/(8.3145*501.57))*
10**-2.029), '&',
locale.format('%4.8g', 1.65*np.exp(-(-94765)/(8.3145*501.57))*
1190/phi/1000 *10**-2.029 ), '&',
locale.format('%4.8g', (-94765+2073*8.3145*np.log(10))),'&',
locale.format('%4.8g', -(-94765+2073*8.3145*np.log(10))),'&',
"k'_{Rückwärts}", r'\\',
    r'\\',r'Adsorption terms (\beta)', r'\\', r'\\',
r'\sqrt{K_{H_2}}', '&',
locale.format('%4.8g', 30.82*np.exp(-(17197)/(8.3145*501.57))), '&',
locale.format('%4.8g', 30.82*np.exp(-(17197)/(8.3145*501.57))), '&',
locale.format('%4.8g', 17197.), '&',
locale.format('%4.8g', -17197.), '&', 'K1', r'\\',
'K_{H_2O}','&',
locale.format('%4.8g', 558.17*np.exp(-(124119)/(8.3145*501.57))), '&',
locale.format('%4.8g', 558.17*np.exp(-(124119)/(8.3145*501.57))), '&',
locale.format('%4.8g', 124119), '&',
locale.format('%4.8g', -124119), '&', 'K2', r'\\',
r'\frac{K_{H_2O}}{K_8 K_9 K_{H2}}','&',
locale.format('%4.8g', 3453.38*np.exp(-(0)/(8.3145*501.57))), '&',
locale.format('%4.8g', 3453.38*np.exp(-(0)/(8.3145*501.57))), '&',
locale.format('%4.8g', 0), '&',
locale.format('%4.8g', -0), '&', 'K3', r'\\',
r'\hline'
r'\end{array}'
]
Latex('$'+''.join(table_latex_str)+'$')
"""
Explanation: Umformung auf Hysys-Form
Konventionen
| Froment etc. | Hysys |
| - | - |
| $r[=]\frac{mol}{kg_{Kat}\cdot s}$ | $r_{Hysys}[=]\frac{kgmol}{m^3_{Gas} \cdot s}$ |
$\begin{array}{ccl}
\phi &[=]& \frac{m^3_{Gas}}{m^3_{Kat, Schüttung}}=\frac{m^3_{Gas}}{m^3_{Kat}+m^3_{Gas}}\
1-\phi &[=]& \frac{m^3_{Kat}}{m^3_{Kat, Schüttung}}=\frac{m^3_{Kat}}{m^3_{Kat}+m^3_{Gas}}\
\rho_b &[=]& \frac{kg_{Kat}}{m^3_{Kat, Schüttung}}=\frac{kg_{Kat}}{m^3_{Kat}+m^3_{Gas}}\
\rho_c &[=]& \frac{kg_{Kat}}{m^3_{Kat}}\
\Rightarrow \rho_b &=& (1-\phi)\rho_c\
\Rightarrow r_{Hysys} &=& r\cdot \left( \frac{1-\phi}{\phi}\right)\rho_c
\end{array}$
Kinetik-Ausdrücke, aus Van den Bussche-Froment, auf $Cu/ZnO/Al_2O_3$ Katalysator
$p_i$ hier eigentlich: $p_i/p^{\circ}$ , $p^{\circ}=1bar$. Es heißt, $r_{MeOH}$ hat die Einheiten $r_{MeOH}[=]k_{5a}'=k_{5a}c_t^2=\frac{mol}{kg_{Kat}\cdot s}$, und die Reaktionsquotienten haben keine Einheiten.
$r_{MeOH}=\frac{k_{5a}' K_2' K_3 K_4 K_{H_2} p_{CO_2}p_{H_2}\left(1-\frac{1}{K_1^*}\cdot\frac{p_{H_2O}\cdot p_{MeOH}}{p_{H_2}^3\cdot p_{CO_2}}\right)}{\left(1+\left(\frac{K_{H_2O}}{K_8 K_9 K_{H_2}}\right)\frac{p_{H_2O}}{p_{H_2}}+\sqrt{K_{H_2}\cdot p_{H2}}+K_{H_2O}\cdot p_{H_2O}\right)^3}$
$r_{RWGS}=\frac{k_1' p_{CO_2}\left(1-K_3^*\cdot\frac{p_{H_2O}\cdot p_{CO}}{p_{H_2}\cdot p_{CO_2}}\right)}{\left(1+\left(\frac{K_{H_2O}}{K_8 K_9 K_{H_2}}\right)\frac{p_{H_2O}}{p_{H_2}}+\sqrt{K_{H_2}\cdot p_{H2}}+K_{H_2O}\cdot p_{H_2O}\right)}$
Umformung der Geschwindigkeitskonstanten von der Form im Artikel auf die Form
in Hysys:
$k = A \cdot exp\left(\frac{-E}{R T}\right)*T^\beta$
$r_{MeOH}$, Vorwärtsreation
im Artikel (Tabelle 1):
$k(i) = A(i) \cdot exp\left(\frac{B(i)}{R T}\right)$
$\begin{align}
k_{5a}' K_2' K_3 K_4 K_{H_2} &=7070,34 \cdot e^{\frac{-36696}{8,3145\cdot501,57}} \frac{mol}{kg_{Kat}\cdot s} \cdot exp\left(\frac{\frac{36,696 J/mol}{8,3145 J/(mol K)}}{ T}\right) \
& = 1,066\frac{mol}{kg_{Kat}\cdot s} \cdot exp\left(\frac{4413,49K}{ T}\right)
\end{align}$
Da
$r_{MeOH}[=]k_{5a}'=k_{5a}c_t^2=\frac{mol}{kg_{Kat}\cdot s} \quad \text{ und } \quad r_{Hysys}[=]\frac{kgmol}{m^3_{Gas} \cdot s}$,
$\Rightarrow r_{Hysys, MeOH} = r_{MeOH}\cdot \left( \frac{1-\phi}{\phi}\right)\rho_c \cdot \left(\frac{1 kgmol}{1000 mol}\right)$
Diese Faktoren werden am Preexponentiellen Faktor des Geschwindigkeits-Ausdrucks übernommen. Z. B. bei $\phi=0,285$, $\rho_b=1190\frac{kg_{Kat}}{m^3_{Kat, Schüttung}}=(1-\phi)\rho_c$:
$\begin{align}
k_{Vorwärtsreaktion} &= \\
k_{5a}' K_2' K_3 K_4 K_{H_2} &= 7070,34\cdot e^{\frac{-36696}{8,3145\cdot501,57}} \frac{mol}{kg_{Kat}\cdot s} \cdot \left( \frac{1-\phi}{\phi}\right)\rho_c \cdot \left(\frac{1 kgmol}{1000 mol}\right) \cdot exp\left(\frac{4413,49K}{ T}\right)\\
&= 7070,34\cdot e^{\frac{-36696}{8,3145\cdot501,57}} \frac{mol}{kg_{Kat}\cdot s} \cdot \left( \frac{1190\frac{kg_{Kat}}{m^3_{Kat, Schüttung}}}{0,285\frac{m^3_{Gas}}{m^3_{Kat, Schüttung}} }\right) \cdot \left(\frac{1 kgmol}{1000 mol}\right) \cdot exp\left(\frac{4413,49K}{ T}\right)\\
&= 7070,34\cdot e^{\frac{-36696}{8,3145\cdot501,57}} \cdot \frac{1190}{0,285\cdot 1000} \frac{kgmol}{m^3_{Gas}\cdot s} \cdot exp\left(\frac{4413,49K}{ T}\right)\\
&= 4,4528 \frac{kgmol}{m^3_{Gas}\cdot s} \cdot exp\left(\frac{36696J/mol}{ R T}\right)\\
\end{align}$
$r_{MeOH}$, reverse reaction
$\begin{align}
k_{Rückwärtsreaktion} &= k_{5a}' K_2' K_3 K_4 K_{H_2} \cdot \frac{1}{K_1^*} \\
&= 7070,34\cdot e^{\frac{-36696}{8,3145\cdot501,57}}\cdot \frac{1190}{0,285\cdot 1000} \frac{kgmol}{m^3_{Gas}\cdot s} \cdot exp\left(\frac{36696J/mol}{R T}\right) \cdot 10^{\left(-\frac{3066K}{T}+10,592\right)}\\
&=7070,34\cdot e^{\frac{-36696}{8,3145\cdot501,57}}\cdot \frac{1190}{0,285\cdot 1000}\cdot 10^{10,592} \frac{kgmol}{m^3_{Gas}\cdot s} \cdot exp\left(\frac{(36696-3066\cdot ln(10)\cdot8,3145)J/mol}{R T}\right)\\
&=1,740321\cdot 10^{11} \frac{kgmol}{m^3_{Gas}\cdot s} \cdot exp\left(\frac{-22002.09J/mol}{R T}\right)\\
\end{align}$
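The numbers above can be cross-checked with a few lines of Python. This is only a verification sketch: the parameter values and the 501.57 K reference temperature are copied directly from the derivation above.

```python
import math

R = 8.3145        # J/(mol K)
T_REF = 501.57    # K, reference temperature of the article parametrization
PHI = 0.285       # void fraction, m^3 gas per m^3 packed bed
RHO_B = 1190.0    # kg catalyst per m^3 packed bed, equal to (1 - phi)*rho_c

# mol/(kg_cat s) -> kgmol/(m^3_gas s): multiply by rho_b/phi, divide by 1000
conv = RHO_B / (PHI * 1000.0)

# forward pre-exponential factor for r_MeOH in Hysys units
A_fwd = 7070.34 * math.exp(-36696.0 / (R * T_REF)) * conv

# reverse constant: forward constant times the 10^10.592 part of 1/K_1*
A_rev = A_fwd * 10.0**10.592

# combined reverse "activation energy" in J/mol; note the ln(10) that turns
# the base-10 temperature dependence 10^(-3066/T) into exponential form
E_rev = 36696.0 - 3066.0 * math.log(10.0) * R

print(A_fwd)  # ~ 4.4528
print(A_rev)  # ~ 1.7403e11
print(E_rev)  # ~ -22002.1
```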
Summary of the rate-constant inputs (from Table 2 of the article) for Hysys
$r_{MeOH}$
| Rate constant | Parameter | Article | Hysys input |
| - | - | - | - |
| $k_{5a}' K_2' K_3 K_4 K_{H_2}$ | A(4) | 1,066 $\frac{mol}{kg_{Kat}\cdot s}$ | 4,452761 $\frac{kgmol}{m^3_{Gas}\cdot s}$ |
| $k_{5a}' K_2' K_3 K_4 K_{H_2}$ | B(4) | 36696 J/mol | -36696 kJ/kgmol |
| $k_{5a}' K_2' K_3 K_4 K_{H_2}\cdot\frac{1}{K_1^*}$ | | $1,066\cdot10^{10,592}$ $\frac{mol}{kg_{Kat}\cdot s}$ | $1,740321\cdot10^{11}$ $\frac{kgmol}{m^3_{Gas}\cdot s}$ |
| $k_{5a}' K_2' K_3 K_4 K_{H_2}\cdot\frac{1}{K_1^*}$ | | $(36696-8,3145\cdot \ln(10)\cdot 3066)$ J/mol | +22002,09 kJ/kgmol |
$r_{RWGS}$
| Rate constant | Parameter | Article | Hysys input |
| - | - | - | - |
| $k_1'$ | A(5) | $1,220\cdot10^{10}$ $\frac{mol}{kg_{Kat}\cdot s}$ | 5,093209e+10 $\frac{kgmol}{m^3_{Gas}\cdot s}$ |
| $k_1'$ | B(5) | -94765 J/mol | +94765 kJ/kgmol |
| $k_1'\cdot K_3^*$ | | $1,220\cdot 10^{10}\cdot 10^{-2,029}$ $\frac{mol}{kg_{Kat}\cdot s}$ | 4,764217e+08 $\frac{kgmol}{m^3_{Gas} \cdot s}$ |
| $k_1'\cdot K_3^*$ | | $(-94765+8,3145\cdot\ln(10)\cdot 2073)$ J/mol | +55078 kJ/kgmol |
Adsorption terms
| Adsorption term | Parameter | Article | Hysys input |
| - | - | - | - |
| $\sqrt{K_{H_2}}$ | A(1) | 30,82 | 30,82 |
| $\sqrt{K_{H_2}}$ | B(1) | 17197 J/mol | -17197 J/mol |
| $K_{H_2O}$ | A(2) | 558,17 | 558,17 |
| $K_{H_2O}$ | B(2) | 124119 J/mol | -124119 J/mol |
| $\frac{K_{H_2O}}{K_8 K_9 K_{H2}}$ | A(3) | 3453,38 | 3453,38 |
| $\frac{K_{H_2O}}{K_8 K_9 K_{H2}}$ | B(3) | 0 J/mol | 0 J/mol |
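To see the kinetic model as a whole, the two rate expressions at the top of this section can be written out directly in Python. This is a sketch under stated assumptions: partial pressures are taken in bar, $k_1'$ is taken as $1,220\cdot10^{10}$ in raw Arrhenius form, and it is left open whether the tabulated adsorption constants are raw pre-exponential factors or values at the 501.57 K reference temperature, so only the functional form, not the absolute magnitudes, should be trusted.

```python
import math

R = 8.3145  # J/(mol K)

def rates(T, p):
    """Return (r_MeOH, r_RWGS) in mol/(kg_cat s) for partial pressures p (dict,
    assumed to be in bar). Parameter values are the article-form constants from
    the tables above; K = A * exp(B / (R*T))."""
    k_m    = 1.066   * math.exp( 36696.0 / (R * T))   # k5a' K2' K3 K4 K_H2
    k_1    = 1.22e10 * math.exp(-94765.0 / (R * T))   # k1' (assumed raw form)
    sq_kh2 = 30.82   * math.exp( 17197.0 / (R * T))   # sqrt(K_H2)
    k_h2o  = 558.17  * math.exp(124119.0 / (R * T))   # K_H2O
    k_ads  = 3453.38                                   # K_H2O / (K8 K9 K_H2)

    inv_k1_star = 10.0**(-3066.0 / T + 10.592)         # 1/K_1*, base-10 form
    k3_star     = 10.0**( 2073.0 / T -  2.029)         # K_3*, base-10 form

    denom = (1.0 + k_ads * p['H2O'] / p['H2']
                 + sq_kh2 * math.sqrt(p['H2'])
                 + k_h2o * p['H2O'])

    drive_meoh = 1.0 - inv_k1_star * p['H2O'] * p['MeOH'] / (p['H2']**3 * p['CO2'])
    drive_rwgs = 1.0 - k3_star * p['H2O'] * p['CO'] / (p['H2'] * p['CO2'])

    r_meoh = k_m * p['CO2'] * p['H2'] * drive_meoh / denom**3
    r_rwgs = k_1 * p['CO2'] * drive_rwgs / denom
    return r_meoh, r_rwgs
```

Independently of the constants, the structure can be sanity-checked: both rates are positive far from equilibrium, and the methanol rate vanishes when the driving-force bracket is zero.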
End of explanation
"""
import locale
import numpy as np
from IPython.display import Latex
locale.setlocale(locale.LC_ALL, '')
epsilon_b = 0.457
table_latex_str = [
r'\text{Hohlraumvolumen: } \phi=', locale.format('%4.8g',epsilon_b),
r'\\',
r'\begin{array}{lccccc}',
'\hline ',
'Var & A(i) & A(i)_{Hysys} & B(i) & E(i)_{Hysys} & ',
r'\text{Hysys Param}', r'\\',
r'\text{Form } K=A(i) \cdot exp\left(\frac{B(i)}{R T}\right) ','&',
r' \frac{mol^{0,36} (m^3_{G})^{0,64}}{kg_{Kat}\cdot s}',
r' & \left(\frac{kgmol}{m^3_{G}}\right)^{0,36} \cdot \frac{1}{s}',
r' & \frac{J}{mol} & \frac{kJ}{kgmol} & ',
r'\text{Form } K=A_{Hysys} \cdot exp\left(-\frac{E_{Hysys}}{R T}\right) ',
r'\\',
'\hline ',
r'\\','r_{WGS}', r'\\', r'\\',
"k_m", '&',
locale.format('%4.8g', np.exp(8.22)), '&',
locale.format('%4.8g', np.exp(8.22)*(1-epsilon_b)/epsilon_b*1960/1000.**0.36),
'&',
locale.format('%4.8g', -8008*8.3145), '&',
locale.format('%4.8g', +8008*8.3145), '&', 'k_{Vorwärts}',
r'\\',
r"k_m\cdot \frac{1}{K_{eq}}", '&',
locale.format('%4.8g', np.exp(8.22+4.27)),'&',
locale.format('%4.8g', np.exp(8.22+4.27)*(1-epsilon_b)/epsilon_b*1960/1000.**0.36),'&',
locale.format('%4.8g', (-8008-4483)*8.3145),'&',
locale.format('%4.8g', -(-8008-4483)*8.3145),'&',
"k'_{Rückwärts}", r'\\'
r'\hline'
r'\end{array}'
]
Latex('$'+''.join(table_latex_str)+'$')
"""
Explanation: WGS on ICI-Fe3O4-Cr2O3
End of explanation
"""
k_0_hin = 1.98e+7 # mol/(kg Kat * h * (MPa)^0,58)
ea_hin = 56343 # J/mol
nu_i_hin = np.array([0.18, 0.4, 0]) # CO, H2, CH3OH
k_0_zur = 2.15e+10 # mol/(kg Kat * h * (MPa)^0,13)
ea_zur = 85930 # J/mol
nu_i_zur = np.array([0, 0, 0.13]) # CO, H2, CH3OH
r = 8.314 # J/(mol K)
def r_j_wedel(k_0, ea, p_i, t, nu_i):
k_t = k_0 * np.exp(-ea/(r * t))
return k_t * np.product(np.power(p_i, nu_i))
nco0 = 2115
nco20 = 235
xi=(nco0-nco20)/2
print(1/10**(2073/(220+273.15)-2.029))
print(xi)
(2115+xi)/10**(2073/(219.9783+273.15)-2.029)+xi-235
#(2115+xi)*0.00631477737+xi-235
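One plausible reading of the cell above is that the final expression is a CO balance residual whose root in $\xi$ is wanted at the given temperature; under that assumption, the root can be found with a plain bisection instead of hand iteration (a sketch; the residual is copied from the expression above):

```python
def residual(xi, t_celsius=219.9783):
    """CO balance residual from the expression above (assumed interpretation)."""
    k = 10**(2073 / (t_celsius + 273.15) - 2.029)
    return (2115 + xi) / k + xi - 235

# plain bisection; the root is bracketed between 0 and 940 (= (2115 - 235)/2)
lo, hi = 0.0, 940.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(mid)  # ~ 219.4
```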
"""
Explanation: Wedel 1988
End of explanation
"""
d =
"""
Explanation: Chem. Eng. Technol. 2011, 34, No. 5, 817–822
End of explanation
"""
|
shankari/folium | examples/WidthHeight.ipynb | mit | width, height = 480, 350
fig = Figure(width=width, height=height)
m = folium.Map(
location=location,
tiles=tiles,
width=width,
height=height,
zoom_start=zoom_start
)
fig.add_child(m)
fig.save(os.path.join('results', 'WidthHeight_0.html'))
fig
"""
Explanation: Using the same width and height triggers the scroll bar
End of explanation
"""
width, height = '100%', 350
fig = Figure(width=width, height=height)
m = folium.Map(
location=location,
tiles=tiles,
width=width,
height=height,
zoom_start=zoom_start
)
fig.add_child(m)
fig.save(os.path.join('results', 'WidthHeight_1.html'))
fig
"""
Explanation: Can figure take relative sizes?
End of explanation
"""
width, height = 480, '100%'
fig = Figure(width=width, height=height)
m = folium.Map(
location=location,
tiles=tiles,
width=width,
height=height,
zoom_start=zoom_start
)
fig.add_child(m)
fig.save(os.path.join('results', 'WidthHeight_2.html'))
fig
"""
Explanation: I guess not. (Well, it does make sense for a single HTML page, but not for iframes.)
End of explanation
"""
width, height = '50%', '100%'
fig = Figure(width=width, height=height)
m = folium.Map(
location=location,
tiles=tiles,
width=width,
height=height,
zoom_start=zoom_start
)
fig.add_child(m)
fig.save(os.path.join('results', 'WidthHeight_3.html'))
fig
width, height = '150%', '100%'
try:
folium.Map(location=location, tiles=tiles,
width=width, height=height, zoom_start=zoom_start)
except ValueError as e:
print(e)
width, height = '50%', '80p'
try:
folium.Map(location=location, tiles=tiles,
width=width, height=height, zoom_start=zoom_start)
except ValueError as e:
print(e)
width, height = 480, -350
try:
folium.Map(location=location, tiles=tiles,
width=width, height=height, zoom_start=zoom_start)
except ValueError as e:
print(e)
"""
Explanation: Note that Figure is interpreting this as 50px. We should raise something and be explicit in the docs.
End of explanation
"""
width, height = 480, 350
fig = Figure(width=width, height=height)
m = folium.Map(
location=location,
tiles=tiles,
width='100%',
height='100%',
zoom_start=zoom_start
)
fig.add_child(m)
fig.save(os.path.join('results', 'WidthHeight_4.html'))
fig
"""
Explanation: Maybe we should recommend
End of explanation
"""
|
jmschrei/pomegranate | tutorials/B_Model_Tutorial_3_Hidden_Markov_Models.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import numpy
from pomegranate import *
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
"""
Explanation: Hidden Markov Models
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Hidden Markov models (HMMs) are the flagship of the pomegranate package in that they have the most features of all of the models and that they were the first algorithm implemented.
Hidden Markov models are a form of structured prediction method that is popular for tagging all elements in a sequence with some "hidden" state. They can be thought of as extensions of Markov chains where, instead of the probability of the next observation being dependent on the current observation, the probability of the next hidden state is dependent on the current hidden state, and the next observation is derived from that hidden state. An example of this is part-of-speech tagging, where the observations are words and the hidden states are parts of speech. Each word gets tagged with a part of speech, but dynamic programming is utilized to search through all potential word-tag combinations to identify the best set of tags across the entire sentence.
Another perspective of HMMs is that they are an extension of mixture models that includes a transition matrix. Conceptually, a mixture model has a set of "hidden" states---the mixture components---and one can calculate the probability that each sample belongs to each component. This approach treats each observation independently. However, as in the part-of-speech example, we know that an adjective is typically followed by a noun, so position in the sequence matters. An HMM adds a transition matrix between the hidden states to incorporate this information across the sequence, allowing for higher probabilities of transitioning from the "adjective" hidden state to a noun or verb.
pomegranate implements HMMs in a flexible manner that goes beyond what other packages allow. Let's see some examples.
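Before diving into the pomegranate API, the core computation can be sketched in a few lines of numpy: the forward recursion sums over all hidden-state paths to get the probability of a sequence. The toy numbers below simply mirror the CG-island example that follows; they are illustrative only.

```python
import numpy as np

# toy 2-state HMM with discrete emissions, mirroring the CG-island example
start = np.array([0.5, 0.5])                 # initial state probabilities
trans = np.array([[0.9, 0.1],                # P(next state | current state)
                  [0.1, 0.9]])
emit = {'A': np.array([0.25, 0.10]),         # P(symbol | state)
        'C': np.array([0.25, 0.40]),
        'G': np.array([0.25, 0.40]),
        'T': np.array([0.25, 0.10])}

def forward_likelihood(seq):
    """P(sequence) summed over all hidden-state paths (forward algorithm)."""
    alpha = start * emit[seq[0]]
    for symbol in seq[1:]:
        alpha = (alpha @ trans) * emit[symbol]
    return alpha.sum()

print(forward_likelihood('CG'))  # 0.110125
```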
End of explanation
"""
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
"""
Explanation: CG rich region identification example
Let's take the simplified example of CG island detection on a sequence of DNA. DNA is made up of the four canonical nucleotides, abbreviated 'A', 'C', 'G', and 'T'. We can say that regions of the genome that are enriched for the nucleotides 'C' and 'G' are 'CG islands', which is a simplification of the real biological concept but sufficient for our example. The issue with identifying these regions is that they are not exclusively made up of the nucleotides 'C' and 'G', but have some 'A's and 'T's scattered among them. A simple model that looked for long stretches of C's and G's would not perform well, because it would miss most of the real regions.
We can start off by building the model. Because HMMs involve the transition matrix, which is often represented using a graph over the hidden states, building them requires a few more steps than a simple distribution or the mixture model. Our simple model will be composed of two distributions. One distribution will be a uniform distribution across all four characters and one will have a preference for the nucleotides C and G, while still allowing the nucleotides A and T to be present.
End of explanation
"""
s1 = State(d1, name='background')
s2 = State(d2, name='CG island')
"""
Explanation: For the HMM we have to first define states, which are a pair of a distribution and a name.
End of explanation
"""
model = HiddenMarkovModel()
model.add_states(s1, s2)
"""
Explanation: Now we define the HMM and pass in the states.
End of explanation
"""
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.9)
model.add_transition(s1, s2, 0.1)
model.add_transition(s2, s1, 0.1)
model.add_transition(s2, s2, 0.9)
"""
Explanation: Then we have to define the transition matrix, which is the probability of going from one hidden state to the next hidden state. In some cases, like this one, there are high self-loop probabilities, indicating that it's likely that one will stay in the same hidden state from one observation to the next in the sequence. Other cases have a lower probability of staying in the same state, like the part of speech tagger. A part of the transition matrix is the start probabilities, which is the probability of starting in each of the hidden states. Because we create these transitions one at a time, they are very amenable to sparse transition matrices, where it is impossible to transition from one hidden state to the next.
End of explanation
"""
model.bake()
"""
Explanation: Now, finally, we need to bake the model in order to finalize the internal structure. Bake must be called when the model has been fully specified.
End of explanation
"""
seq = numpy.array(list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC'))
hmm_predictions = model.predict(seq)
print("sequence: {}".format(''.join(seq)))
print("hmm pred: {}".format(''.join(map( str, hmm_predictions))))
"""
Explanation: Now we can make predictions on some sequence. Let's create some sequence that has a CG enriched region in the middle and see whether we can identify it.
End of explanation
"""
model = HiddenMarkovModel()
model.add_states(s1, s2)
model.add_transition(model.start, s1, 0.5)
model.add_transition(model.start, s2, 0.5)
model.add_transition(s1, s1, 0.89 )
model.add_transition(s1, s2, 0.10 )
model.add_transition(s1, model.end, 0.01)
model.add_transition(s2, s1, 0.1 )
model.add_transition(s2, s2, 0.9)
model.bake()
"""
Explanation: It looks like it successfully identified a CG island in the middle (the long stretch of 0's) and another shorter one at the end. The predicted integers don't correspond to the order in which states were added to the model, but rather, the order that they exist in the model after a topological sort. More importantly, the model wasn't tricked into thinking that every CG or even pair of CGs was an island. It required many C's and G's to be part of a longer stretch to identify that region as an island. Naturally, the balance of the transition and emission probabilities will heavily influence what regions are detected.
Let's say, though, that we want to get rid of that CG island prediction at the end because we don't believe that real islands can occur at the end of the sequence. We can take care of this by adding in an explicit end state that only the non-island hidden state can get to. We enforce that the model has to end in the end state, and if only the non-island state gets there, the sequence of hidden states must end in the non-island state. Here's how:
End of explanation
"""
seq = numpy.array(list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC'))
hmm_predictions = model.predict(seq)
print("sequence: {}".format(''.join(seq)))
print("hmm pred: {}".format(''.join(map( str, hmm_predictions))))
"""
Explanation: Note that all we did was add a transition from s1 to model.end with some low probability. This probability doesn't have to be high if there's only a single transition there, because there's no other possible way of getting to the end state.
End of explanation
"""
print(model.predict_proba(seq)[12:19])
"""
Explanation: This seems far more reasonable. There is a single CG island surrounded by background sequence, and something at the end. If we knew that CG islands cannot occur at the end of sequences, we need only modify the underlying structure of the HMM in order to say that the sequence must end from the background state.
In the same way that mixtures could provide probabilistic estimates of class assignments rather than only hard labels, hidden Markov models can do the same. These estimates are the posterior probabilities of belonging to each of the hidden states given the observation, but also given the rest of the sequence.
End of explanation
"""
trans, ems = model.forward_backward(seq)
print(trans)
"""
Explanation: We can see here the transition from the first non-island region to the middle island region, with high probabilities in one column turning into high probabilities in the other column. The predict method is just taking the most likely element, the maximum-a-posteriori estimate.
In addition to using the forward-backward algorithm to just calculate posterior probabilities for each observation, we can count the number of transitions that are predicted to occur between the hidden states.
End of explanation
"""
model = HiddenMarkovModel( "Global Alignment")
# Define the distribution for insertions
i_d = DiscreteDistribution( { 'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25 } )
# Create the insert states
i0 = State( i_d, name="I0" )
i1 = State( i_d, name="I1" )
i2 = State( i_d, name="I2" )
i3 = State( i_d, name="I3" )
# Create the match states
m1 = State( DiscreteDistribution({ "A": 0.95, 'C': 0.01, 'G': 0.01, 'T': 0.02 }) , name="M1" )
m2 = State( DiscreteDistribution({ "A": 0.003, 'C': 0.99, 'G': 0.003, 'T': 0.004 }) , name="M2" )
m3 = State( DiscreteDistribution({ "A": 0.01, 'C': 0.01, 'G': 0.01, 'T': 0.97 }) , name="M3" )
# Create the delete states
d1 = State( None, name="D1" )
d2 = State( None, name="D2" )
d3 = State( None, name="D3" )
# Add all the states to the model
model.add_states( [i0, i1, i2, i3, m1, m2, m3, d1, d2, d3 ] )
# Create transitions from match states
model.add_transition( model.start, m1, 0.9 )
model.add_transition( model.start, i0, 0.1 )
model.add_transition( m1, m2, 0.9 )
model.add_transition( m1, i1, 0.05 )
model.add_transition( m1, d2, 0.05 )
model.add_transition( m2, m3, 0.9 )
model.add_transition( m2, i2, 0.05 )
model.add_transition( m2, d3, 0.05 )
model.add_transition( m3, model.end, 0.9 )
model.add_transition( m3, i3, 0.1 )
# Create transitions from insert states
model.add_transition( i0, i0, 0.70 )
model.add_transition( i0, d1, 0.15 )
model.add_transition( i0, m1, 0.15 )
model.add_transition( i1, i1, 0.70 )
model.add_transition( i1, d2, 0.15 )
model.add_transition( i1, m2, 0.15 )
model.add_transition( i2, i2, 0.70 )
model.add_transition( i2, d3, 0.15 )
model.add_transition( i2, m3, 0.15 )
model.add_transition( i3, i3, 0.85 )
model.add_transition( i3, model.end, 0.15 )
# Create transitions from delete states
model.add_transition( d1, d2, 0.15 )
model.add_transition( d1, i1, 0.15 )
model.add_transition( d1, m2, 0.70 )
model.add_transition( d2, d3, 0.15 )
model.add_transition( d2, i2, 0.15 )
model.add_transition( d2, m3, 0.70 )
model.add_transition( d3, i3, 0.30 )
model.add_transition( d3, model.end, 0.70 )
# Call bake to finalize the structure of the model.
model.bake()
"""
Explanation: This is the transition table, which has the soft count of the number of transitions across an edge in the model given a single sequence. It is a square matrix of size equal to the number of states (including start and end state), with number of transitions from (row_id) to (column_id). This is exemplified by the 1.0 in the first row, indicating that there is one transition from background state to the end state, as that's the only way to reach the end state. However, the third (or fourth, depending on ordering) row is the transitions from the start state, and it only slightly favors the background state. These counts are not normalized to the length of the input sequence, but can easily be done so by dividing by row sums, column sums, or entire table sums, depending on your application.
A possible reason not to normalize is to run several sequences through, add up their tables, and normalize only at the end to extract some domain knowledge; this is extremely useful in practice. For example, we can see that there is an expectation of ~2.9 transitions from CG island to background, and ~2.4 from background to CG island. This could be used to infer that there are ~2-3 edges, which makes sense if you consider that the start and end of the sequence seem like they might be part of the CG island states except for the strict transition probabilities used (look at the first few rows of the emission table above).
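As a small illustration of that normalization, a count table of the same kind can be row-normalized into an empirical transition probability matrix (the numbers below are made up for the sketch):

```python
import numpy as np

# hypothetical count table, same shape idea as the one printed above
counts = np.array([[8.0, 2.0, 0.0],
                   [3.0, 6.0, 1.0],
                   [0.0, 0.0, 0.0]])   # e.g. an absorbing end state

row_sums = counts.sum(axis=1, keepdims=True)
# avoid dividing by zero for states that were never left
probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(probs)
```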
Sequence Alignment Example
Let's move on to a more complicated structure, that of a profile HMM. A profile HMM is used to align a sequence to a reference 'profile', where the reference profile can either be a single sequence, or an alignment of many sequences (such as a reference genome). In essence, this profile has a 'match' state for every position in the reference profile, an 'insert' state, and a 'delete' state. The insert state allows the external sequence to have an insertion into the sequence without throwing off the entire alignment, such as the following:
ACCG : Sequence <br>
|| | <br>
AC-G : Reference
or a deletion, which is the opposite:
A-G : Sequence <br>
| | <br>
ACG : Reference
The bars in the middle refer to a perfect match, whereas the lack of a bar means either a deletion/insertion, or a mismatch. A mismatch is where two positions are aligned together, but do not match. This models the biological phenomenon of mutation, where one nucleotide can convert to another over time. It is usually more likely in biological sequences that this type of mutation occurs than that the nucleotide was deleted from the sequence (shifting all nucleotides over by one) and then another was inserted at the exact location (moving all nucleotides over again). Since we are using a probabilistic model, we get to define these probabilities through the use of distributions! If we want to model mismatches, we can just set our 'match' state to have an appropriate distribution with non-zero probabilities over mismatches.
Let's now create a three-nucleotide profile HMM, which models the sequence 'ACT'. We will fuzz this a little bit in the match states, pretending to have some prior information about what mutations occur at each position. If you don't have any information, setting a uniform, small value over the other values is usually okay.
End of explanation
"""
for sequence in map( list, ('ACT', 'GGC', 'GAT', 'ACC') ):
logp, path = model.viterbi( sequence )
print("Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) ))
"""
Explanation: Now let's try to align some sequences to it and see what happens!
End of explanation
"""
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
print("Sequence: '{}' -- Log Probability: {} -- Path: {}".format(
''.join( sequence ), logp, " ".join( state.name for idx, state in path[1:-1] ) ))
"""
Explanation: The first and last sequence are entirely matches, meaning that it thinks the most likely alignment between the profile ACT and ACT is A-A, C-C, and T-T, which makes sense, and the most likely alignment between ACT and ACC is A-A, C-C, and T-C, which includes a mismatch. Essentially, it's more likely that there's a T-C mismatch at the end then that there was a deletion of a T at the end of the sequence, and a separate insertion of a C.
The two middle sequences don't match very well, as expected! G's are not very likely in this profile at all. It predicts that the two G's are inserts, and that the C matches the C in the profile, before hitting the delete state because it can't emit a T. The third sequence thinks that the G is an insert, as expected, and then aligns the A and T in the sequence to the A and T in the master sequence, missing the middle C in the profile.
By using deletes, we can handle other sequences which are shorter than three characters. Lets look at some more sequences of different lengths.
End of explanation
"""
def path_to_alignment( x, y, path ):
"""
This function will take in two sequences, and the ML path which is their alignment,
and insert dashes appropriately to make them appear aligned. This consists only of
adding a dash to the model sequence for every insert in the path appropriately, and
a dash in the observed sequence for every delete in the path appropriately.
"""
for i, (index, state) in enumerate( path[1:-1] ):
name = state.name
if name.startswith( 'D' ):
y = y[:i] + '-' + y[i:]
elif name.startswith( 'I' ):
x = x[:i] + '-' + x[i:]
return x, y
for sequence in map( list, ('A', 'GA', 'AC', 'AT', 'ATCC', 'ACGTG', 'ATTT', 'TACCCTC', 'TGTCAACACT') ):
logp, path = model.viterbi( sequence )
x, y = path_to_alignment( 'ACT', ''.join(sequence), path )
print("Sequence: {}, Log Probability: {}".format( ''.join(sequence), logp ))
print("{}\n{}".format( x, y ))
print()
"""
Explanation: Again, more of the same as expected. You'll notice that most of the use of insertion states is at I0, because most of the insertions are at the beginning of the sequence. It's more probable to simply stay in I0 at the beginning instead of going from I0 to D1 to I1, or going to another insert state along there. You'll see other insert states used when insertions occur in other places in the sequence, like 'ATTT' and 'ACGTG'.
Now that we have the path, we need to convert it into an alignment, which is significantly more informative to look at.
End of explanation
"""
d = NormalDistribution( 5, 2 )
s1 = State( d, name="Tied1" )
s2 = State( d, name="Tied2" )
s3 = State( NormalDistribution( 5, 2 ), name="NotTied1" )
s4 = State( NormalDistribution( 5, 2 ), name="NotTied2" )
"""
Explanation: Training Hidden Markov Models
There are two main algorithms for training hidden Markov models: Baum-Welch (a structured version of Expectation-Maximization) and Viterbi training. Since we don't start off with labels on the data, these are both unsupervised training algorithms. In order to assign labels, Baum-Welch uses EM to assign soft labels (weights in this case) to each point belonging to each state, and then uses weighted MLE estimates to update the distributions. Viterbi assigns hard labels to each observation using the Viterbi algorithm, and then updates the distributions based on these hard labels.
pomegranate is extremely well-featured when it comes to regularization methods for training, supporting tied emissions and edges, edge and emission inertia, freezing nodes or edges, edge pseudocounts, and multithreaded training. Let's look at some examples of the following:
Tied Emissions
Sometimes we want to say that multiple states model the same phenomena, but are simply at different points in the graph because we are utilizing complicated edge structure. An example is in the example of the global alignment HMM we saw. All insert states represent the same phenomena, which is nature randomly inserting a nucleotide, and this probability should be the same regardless of position. However, we can't simply have a single insert state, or we'd be allowed to transition from any match state to any other match state.
You can tie emissions together simply by passing the same distribution object to multiple states. That's it.
End of explanation
"""
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, group='a' )
model.add_transition( model.start, s2, 0.5, group='b' )
model.add_transition( s1, s2, 0.5, group='a' )
model.add_transition( s2, s1, 0.5, group='b' )
model.bake()
"""
Explanation: You have now indicated that these two states are tied, and when training, the weights of all points going to s2 will be added to the weights of all points going to s1 when updating d. As a side note, this is implemented in a computationally efficient manner such that d will only be updated once, not twice (but giving the same result). s3 and s4 are not tied together, because while they have the same distribution, it is not the same python object.
Tied Edges
Edges can be tied together for the same reason. If you have a modular structure to your HMM, perhaps you believe this repeating structure doesn't (or shouldn't) have a position specific edge structure. You can do this simply by adding a group when you add transitions.
End of explanation
"""
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], distribution_inertia=0.3, edge_inertia=0.25 )
"""
Explanation: The above model doesn't necessarily make sense, but it shows how simple it is to tie edges as well. You can go ahead and train normally from this point, without needing to change any code.
Inertia
The next options are inertia on edges or on distributions. This simply means that you update your parameters as (previous_parameter * inertia) + (new_parameter * (1-inertia) ). It is a way to prevent your updates from overfitting immediately. You can specify this in the train function using either edge_inertia or distribution_inertia. These default to 0, with 1 being the maximum, meaning that you don't update based on new evidence, the same as freezing a distribution or the edges.
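A toy numeric illustration of that update rule (plain Python, not pomegranate internals):

```python
def inertial_update(previous, new, inertia):
    """Blend of the old and new parameter estimates used for regularization."""
    return previous * inertia + new * (1.0 - inertia)

print(inertial_update(5.0, 9.0, 0.0))   # no inertia: jumps straight to 9.0
print(inertial_update(5.0, 9.0, 1.0))   # full inertia: stays at 5.0
print(inertial_update(5.0, 9.0, 0.25))  # partial: 0.25*5 + 0.75*9 = 8.0
```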
End of explanation
"""
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5, pseudocount=4.2 )
model.add_transition( model.start, s2, 0.5, pseudocount=1.3 )
model.add_transition( s1, s2, 0.5, pseudocount=5.2 )
model.add_transition( s2, s1, 0.5, pseudocount=0.9 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, use_pseudocount=True )
"""
Explanation: Pseudocounts
Another way of regularizing your model is to add pseudocounts to your edges (which have non-zero probabilities). When updating your edges in the future, you add this pseudocount to the count of transitions across that edge in the future. This gives a more Bayesian estimate of the edge probability, and is useful if you have a large model and don't expect to cross most of the edges with your training data. An example might be a complicated profile HMM, where you don't expect to see deletes or inserts at all in your training data, but don't want to change from the default values.
In pomegranate, pseudocounts default to the initial probabilities, so that if you don't see data, the edge values simply aren't updated. You can define both edge specific pseudocounts when you define the transition. When you train, you must define use_pseudocount=True.
End of explanation
"""
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4], [5, 7, 2, 3, 5]], max_iterations=5, transition_pseudocount=20, use_pseudocount=True )
"""
Explanation: The other way is to put a blanket pseudocount on all edges.
End of explanation
"""
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5 )
s1 = State( NormalDistribution( 3, 1 ), name="s1" )
s2 = State( NormalDistribution( 6, 2 ), name="s2" )
model = HiddenMarkovModel()
model.add_states( [s1, s2] )
model.add_transition( model.start, s1, 0.5 )
model.add_transition( model.start, s2, 0.5 )
model.add_transition( s1, s2, 0.5 )
model.add_transition( s2, s1, 0.5 )
model.bake()
model.fit( [[5, 2, 3, 4, 7, 3, 6, 3, 5, 2, 4], [5, 7, 2, 3, 5, 1, 3, 5, 6, 2]], max_iterations=5, n_jobs=4 )
"""
Explanation: We can see that there isn't as much of an improvement. This is part of regularization, though. We sacrifice fitting the data exactly in order for our model to generalize better to future data. The majority of the training improvement is likely coming from the emissions better fitting the data, though.
Multithreaded Training
Since pomegranate is implemented in Cython, the majority of functions are written with the GIL released. A benefit of doing this is that we can use multithreading in order to make some computationally intensive tasks take less time. However, a downside is that Python doesn't play nicely with multithreading, and so there are some cases where training using multithreading can make your model training take significantly longer. I investigate this in an early multithreading pull request <a href="https://github.com/jmschrei/pomegranate/pull/30">here</a>. Things have improved since then, but the gist is that if you have a small model (less than 15 states), it may be detrimental, but the larger your model is, the closer it scales towards a speed improvement equal to the number of threads you use. You can specify multithreading using the n_jobs keyword. All structures in pomegranate are thread safe, so you don't need to worry about race conditions.
End of explanation
"""
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
d1 = DiscreteDistribution({'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
d2 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
s1 = State( d1, name='background' )
s2 = State( d2, name='CG island' )
hmm = HiddenMarkovModel()
hmm.add_states(s1, s2)
hmm.add_transition( hmm.start, s1, 0.5 )
hmm.add_transition( hmm.start, s2, 0.5 )
hmm.add_transition( s1, s1, 0.5 )
hmm.add_transition( s1, s2, 0.5 )
hmm.add_transition( s2, s1, 0.5 )
hmm.add_transition( s2, s2, 0.5 )
hmm.bake()
print(hmm.to_json())
seq = list('CGACTACTGACTACTCGCCGACGCGACTGCCGTCTATACTGCGCATACGGC')
print(hmm.log_probability( seq ))
hmm_2 = HiddenMarkovModel.from_json( hmm.to_json() )
print(hmm_2.log_probability( seq ))
"""
Explanation: Serialization
General Mixture Models support serialization to JSON using to_json() and from_json( json ). This is useful if you want to train a GMM on a large amount of data, taking a significant amount of time, and then use this model in the future without having to repeat this computationally intensive step (sounds familiar by now). Let's look at the original CG island model, since it's significantly smaller.
End of explanation
"""
|
squishbug/DataScienceProgramming | 04-Pandas-Data-Tables/HW04/CheckHomework04.ipynb | cc0-1.0 | import pandas as pd
import numpy as np
"""
Explanation: Check Homework HW04
Use this notebook to check your solutions. This notebook will not be graded.
End of explanation
"""
import hw4_answers
reload(hw4_answers)
from hw4_answers import *
"""
Explanation: Now, import your solutions from hw4_answers.py. The following code looks a bit redundant. However, we do this to allow reloading the hw4_answers.py in case you made some changes. Normally, Python assumes that modules don't change and therefore does not try to import them again.
End of explanation
"""
employees_df = load_employees()
print "Number of rows: %d\nNumber of cols: %d\n" % (employees_df.shape[0], employees_df.shape[1])
print "Head of index: %s\n" % (employees_df.index[:10])
print "Record of employee with ID=999\n"
print employees_df.loc[999]
"""
Explanation: Problem 1
Create a function load_employees that loads the employees table from
the file /home/data/AdventureWorks/Employees.xls and sets the index of the DataFrame to the EmployeeID. The function should return a table with the EmployeeID as the index and the remaining 25 columns.
End of explanation
"""
for eid in [274, 999, 102]:
print '%d, "%s"' %(eid, getFullName(employees_df, eid))
"""
Explanation: The output should be
<pre>
Number of rows: 291
Number of cols: 25
Head of index: Int64Index([259, 278, 204, 78, 255, 66, 270, 22, 161, 124], dtype='int64', name=u'EmployeeID')
Record of employee with ID=999
ManagerID 1
TerritoryID NaN
Title NaN
FirstName Chadwick
MiddleName NaN
LastName Smith
Suffix NaN
JobTitle BI Professor
NationalIDNumber 123456789
BirthDate 1967-07-05
MaritalStatus M
Gender M
HireDate 2003-12-31 23:59:59.997000
SalariedFlag 0
VacationHours 55
SickLeaveHours 47
PhoneNumber 555-887-9788
PhoneNumberType Work
EmailAddress chadwick.smith@rentpath.com
AddressLine1 565 Peachtree Rd.
AddressLine2 NaN
City Atlanta
StateProvinceName Georgia
PostalCode 30084
CountryName United States
Name: 999, dtype: object
</pre>
Problem 2
Define a function getFullName which takes the employees table and a single employee ID as arguments, and returns a string with the full name of the employee in the format "LAST, FIRST MIDDLE".
If the given ID does not belong to any employee return the string "UNKNOWN" (in all caps)
If no middle name is given, return only "LAST, FIRST". Make sure there are no trailing spaces!
If only the middle initial is given, then return the full name in the format "LAST, FIRST M." with the middle initial followed by a '.'.
Arguments:
- df (DataFrame): Employee Table
- empid (int): Employee ID
Returns:
- String with full name
End of explanation
"""
for jt in ['Chief Data Scientist', 'Sales Manager', 'Vice President of Sales']:
if isSales(jt):
print "The job title '%s' is part of the Sales Department." % jt
else:
print "The job title '%s' belongs to a different department." % jt
"""
Explanation: The output should be
<pre>
274, "Jiang, Stephen Y."
999, "Smith, Chadwick"
102, "Mu, Zheng W."
</pre>
Problem 3
Define a function isSales that takes the job title of an employee as string as an argument and return either True if the job title indicates this person works in sales, and False otherwise.
Argument:
- jobtitle (str)
Returns:
- True or False
End of explanation
"""
sales_df = filterSales(employees_df)
print "Number of rows: %d\nNumber of cols: %d\n" % (sales_df.shape[0], sales_df.shape[1])
print "Head of index: %s\n" % (sales_df.index[:10])
print "Record of sales employee with ID=280\n"
print sales_df.loc[280]
"""
Explanation: The output should be
<pre>
The job title 'Chief Data Scientist' belongs to a different department.
The job title 'Sales Manager' is part of the Sales Department.
The job title 'Vice President of Sales' is part of the Sales Department.
</pre>
Problem 4
Define a function filterSales with the employee tables as an argument, that returns a new table of the same schema (i.e. columns and index) containing only row of sales people. You should use the isSales function from the previous problem.
Arguments:
- employees (DataFrame)
Returns:
- DataFrame with only people form the Sales Department
End of explanation
"""
emails = getEmailListByState(sales_df, ", ")
for state in sorted(emails.index):
print "%15s: %s" % (state, emails[state])
"""
Explanation: The output should be
<pre>
Number of rows: 18
Number of cols: 25
Head of index: Int64Index([278, 283, 274, 276, 286, 284, 287, 281, 280, 285], dtype='int64', name=u'EmployeeID')
Record of sales employee with ID=280
ManagerID 274
TerritoryID 1
Title NaN
FirstName Pamela
MiddleName O
LastName Ansman-Wolfe
Suffix NaN
JobTitle Sales Representative
NationalIDNumber 61161660
BirthDate 1969-01-06
MaritalStatus S
Gender F
HireDate 2005-10-01 00:00:00
SalariedFlag 1
VacationHours 22
SickLeaveHours 31
PhoneNumber 340-555-0193
PhoneNumberType Cell
EmailAddress pamela0@yahoo.com
AddressLine1 636 Vine Hill Way
AddressLine2 NaN
City Portland
StateProvinceName Oregon
PostalCode 97205
CountryName United States
Name: 280, dtype: object
</pre>
Problem 5
Define a function getEmailListByState that returns a Series of strings with all email addresses of the employees in each state or province. The email addresses should be separated by a given character, usually a comma ',' or semicolon ';'.
Arguments:
- employees (DataFrame)
- delimiter (str)
Returns:
- Series of email addresses, concatenated by the given delimiter. The Series is indexed by the state or province.
End of explanation
"""
print managementCounts(employees_df)
"""
Explanation: The output should be
<pre>
Alberta: garrett1@mapleleafmail.ca
California: shu0@adventure-works.com
England: jae0@aol.co.uk
Gironde: ranjit0@adventure-works.com
Hamburg: rachel0@adventure-works.com
Massachusetts: tete0@adventure-works.com
Michigan: michael9@adventure-works.com
Minnesota: jillian0@adventure-works.com
Ontario: josé1@safe-mail.net
Oregon: pamela0@yahoo.com
Tennessee: tsvi0@adventure-works.com
Utah: linda3@adventure-works.com
Victoria: lynn0@adventure-works.com
Washington: david8@adventure-works.com, stephen0@adventure-works.com, amy0@yahoo.com, syed0@yahoo.com, brian3@aol.com
</pre>
Problem 6 (Bonus)
Define a function `managementCounts` which produces a Series of how many employees report to each manager. The Series is indexed by the `ManagerID`; the count should be performed on the `EmployeeID` because this is the only field that is guaranteed to be unique. The resulting Series should be ordered by the number of employees in **descending order**.
Arguments:
- employees (DataFrame)
Returns:
- Series of counts (int), indexed by `ManagerID`
End of explanation
"""
|
chi-hung/PythonTutorial | code_examples/KerasMNISTDemo.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
import pandas as pd
import sklearn
import os
import requests
from tqdm import tqdm_notebook
import tarfile
"""
Explanation: Classify handwritten digits with Keras
Data from: the MNIST dataset
Download the MNIST dataset from Internet
Preprocessing the dataset
Softmax Regression
A small Convolutional Neural Network
End of explanation
"""
def download_file(url,file):
# Streaming, so we can iterate over the response.
r = requests.get(url, stream=True)
# Total size in bytes.
    total_size = int(r.headers.get('content-length', 0))
block_size = 1024
wrote = 0
with open(file, 'wb') as f:
        for data in tqdm_notebook(r.iter_content(block_size), total=np.ceil(total_size / block_size), unit='KB', unit_scale=True):
wrote = wrote + len(data)
f.write(data)
if total_size != 0 and wrote != total_size:
print("ERROR, something went wrong")
url = "https://github.com/chi-hung/PythonTutorial/raw/master/datasets/mnist.tar.gz"
file = "mnist.tar.gz"
print('Retrieving the MNIST dataset...')
download_file(url,file)
print('Extracting the MNIST dataset...')
tar = tarfile.open(file)
tar.extractall()
tar.close()
print('Completed fetching the MNIST dataset.')
"""
Explanation: <a id="01">1. Download the MNIST dataset from Internet </a>
I've made the dataset into a zipped tar file. You'll have to download it now.
End of explanation
"""
def filePathsGen(rootPath):
paths=[]
dirs=[]
for dirPath,dirNames,fileNames in os.walk(rootPath):
for fileName in fileNames:
fullPath=os.path.join(dirPath,fileName)
            paths.append((int(dirPath[len(rootPath):]), fullPath))
dirs.append(dirNames)
return dirs,paths
dirs,paths=filePathsGen('mnist/') # load the image paths
dfPath=pd.DataFrame(paths,columns=['class','path']) # save image paths as a Pandas DataFrame
dfPath.head(5) # see the first 5 paths of the DataFrame
"""
Explanation: 10 folders of images will be extracted from the downloaded tar file.
<a id="02">2. Preprocessing the dataset</a>
End of explanation
"""
dfCountPerClass=dfPath.groupby('class').count()
dfCountPerClass.rename(columns={'path':'amount of figures'},inplace=True)
dfCountPerClass.plot(kind='bar',rot=0)
"""
Explanation: How many digit classes & how many figures belong to each of the classes?
End of explanation
"""
train=dfPath.sample(frac=0.7) # sample 70% data to be the train dataset
test=dfPath.drop(train.index) # the rest 30% are now the test dataset
# take 50% of the test dataset as the validation dataset
val=test.sample(frac=1/2)
test=test.drop(val.index)
# let's check the length of the train, val and test dataset.
print('number of all figures = {:10}.'.format(len(dfPath)))
print('number of train figures= {:9}.'.format(len(train)))
print('number of val figures= {:10}.'.format(len(val)))
print('number of test figures= {:9}.'.format(len(test)))
# let's take a look: plotting 3 figures from the train dataset
for j in range(3):
img=plt.imread(train['path'].iloc[j])
plt.imshow(img,cmap="gray")
plt.axis("off")
plt.show()
"""
Explanation: Split the image paths into train($70\%$), val($15\%$), test($15\%$)
End of explanation
"""
def dataLoad(dfPath):
paths=dfPath['path'].values
x=np.zeros((len(paths),28,28),dtype=np.float32 )
for j in range(len(paths)):
x[j,:,:]=plt.imread(paths[j])/255
y=dfPath['class'].values
return x,y
train_x,train_y=dataLoad(train)
val_x,val_y=dataLoad(val)
test_x,test_y=dataLoad(test)
"""
Explanation: Load images into RAM
End of explanation
"""
print("tensor shapes:\n")
print('train:',train_x.shape,train_y.shape)
print('val :',val_x.shape,val_y.shape)
print('test :',test_x.shape,test_y.shape)
"""
Explanation: Remark: loading all images to RAM might take a while.
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense,Flatten
from keras.optimizers import SGD
"""
Explanation: <a id="03">3. Softmax Regression</a>
End of explanation
"""
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
train_y_onehot = np.float32( enc.fit_transform(train_y.reshape(-1,1)) \
.toarray() )
val_y_onehot = np.float32( enc.fit_transform(val_y.reshape(-1,1)) \
.toarray() )
test_y_onehot = np.float32( enc.fit_transform(test_y.reshape(-1,1)) \
.toarray() )
"""
Explanation: Onehot-encoding the labels:
End of explanation
"""
model = Sequential()
model.add(Flatten(input_shape=(28,28)))
model.add(Dense(10, activation='softmax') )
sgd = SGD(lr=0.2, momentum=0.0, decay=0.0)
model.compile(optimizer=sgd,
loss='categorical_crossentropy',
metrics=['accuracy'])
"""
Explanation: Construct the model:
End of explanation
"""
model.summary()
"""
Explanation: More details about the constructed model:
End of explanation
"""
hist=model.fit(train_x, train_y_onehot,
epochs=20, batch_size=128,
validation_data=(val_x,val_y_onehot))
"""
Explanation: Train the model:
End of explanation
"""
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
"""
Explanation: See how the accuracy climbs during training:
End of explanation
"""
# calculate loss & accuracy (evaluated on the test dataset)
score = model.evaluate(test_x, test_y_onehot, batch_size=128)
print("LOSS (evaluated on the test dataset)= {}".format(score[0]))
print("ACCURACY (evaluated on the test dataset)= {}".format(score[1]))
"""
Explanation: Now, you'll probably want to evaluate or save the trained model.
End of explanation
"""
import json
with open('first_try.json', 'w') as jsOut:
json.dump(model.to_json(), jsOut)
model.save_weights('first_try.h5')
"""
Explanation: Save model architecture & weights:
End of explanation
"""
from keras.models import model_from_json
with open('first_try.json', 'r') as jsIn:
model_architecture=json.load(jsIn)
model_new=model_from_json(model_architecture)
model_new.load_weights('first_try.h5')
model_new.summary()
"""
Explanation: Load the saved model architecture & weights:
End of explanation
"""
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
"""
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation
"""
train_x = np.expand_dims(train_x,axis=-1)
val_x = np.expand_dims(val_x,axis=-1)
test_x = np.expand_dims(test_x,axis=-1)
"""
Explanation: <a id="04">4. A small Convolutional Neural Network</a>
Reshape the tensors (this step is necessary, because the CNN model wants the input tensor to be 4D):
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten,Conv2D, MaxPooling2D
from keras.layers import Activation
from keras.optimizers import SGD
in_shape=(28,28,1)
# ========== BEGIN TO CREATE THE MODEL ==========
model = Sequential()
# feature extraction (2 conv layers)
model.add(Conv2D(32, (3,3),
activation='relu',
input_shape=in_shape))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
# classification (2 dense layers)
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# ========== COMPLETED THE MODEL CREATION========
# Compile the model before training.
model.compile(loss='categorical_crossentropy',
optimizer=SGD(lr=0.01,momentum=0.1),
metrics=['accuracy'])
"""
Explanation: Create the model:
End of explanation
"""
%%time
hist=model.fit(train_x, train_y_onehot,
epochs=20,
batch_size=32,
validation_data=(val_x,val_y_onehot),
)
"""
Explanation: Train the model:
End of explanation
"""
plt.plot(hist.history['acc'],ms=5,marker='o',label='accuracy')
plt.plot(hist.history['val_acc'],ms=5,marker='o',label='val accuracy')
plt.legend()
plt.show()
"""
Explanation: See how the accuracy climbs during training:
End of explanation
"""
pred_y=model.predict(test_x).argmax(axis=1)
from sklearn.metrics import classification_report
print( classification_report(test_y,pred_y) )
"""
Explanation: Output the classification report (see if the trained model works well on the test data):
End of explanation
"""
|
zlxs23/Python-Cookbook | data_structure_and_algorithm_py3_6.ipynb | apache-2.0 | prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# Make a dictionary of all prices over 200
p1 = {key: value for key, value in prices.items() if value > 200}
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p2 = {key: value for key, value in prices.items() if key in tech_names}
print(p1,'\n',p2)
"""
Explanation: 1.17 Extracting a Subset of a Dictionary
You want to construct a dictionary that is a subset of another dictionary.
The simplest solution is a dictionary comprehension.
It can also be achieved by creating a sequence of tuples and passing it to the dict() function.
End of explanation
"""
p3 = dict((key, value) for key, value in prices.items() if value > 200)
print(p3)
"""
Explanation: In most cases, anything a dictionary comprehension can do can also be achieved by creating a sequence of tuples and passing it to the dict() function:
End of explanation
"""
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p4 = {key: prices[key] for key in prices.keys() & tech_names}
print(p4)
# p4 = {key: prices[key] for key in prices.keys() and tech_names}
"""
Explanation: However, the dictionary comprehension expresses the intent more clearly and also runs faster (nearly twice as fast here).<br>The second example can likewise be rewritten as follows:
End of explanation
"""
prices.keys()
tech_names
type(prices.keys()) == type(tech_names)
prices.keys() & tech_names
type(prices.keys() & tech_names)
"""
Explanation: The difference between '&' and 'and' in the two comprehensions above:
'&' is a bitwise/set operation:
for integers, num1 & num2 is equivalent to combining bin(num1) and bin(num2) bit by bit
'and' is a logical operation:
num1 and num2 evaluates to num1 if num1 is falsy, otherwise to num2 (the result is false as soon as either operand is false)
Here the role of '&' is to intersect the elements contained in prices.keys() and tech_names.
As the following shows, this works even though prices.keys() and tech_names are of different types:
End of explanation
"""
from collections import namedtuple
Subscriber = namedtuple('Subscriber', ['addr', 'joined'])
sub = Subscriber('jonesy@exit.com', '2012-10-19')
sub
sub.addr
sub.joined
"""
Explanation: 1.18 Mapping Names to Sequence Elements
You have code that accesses list or tuple elements by index,<br>and you want to access those elements by name instead.
The collections.namedtuple() function solves this with a thin wrapper over an ordinary tuple.<br>It is actually a factory method that returns a subclass of the standard Python tuple type:<br>you pass it a type name and the required fields, and it returns a class that you can instantiate, passing values for the fields you defined.
End of explanation
"""
len(sub)
addr, joined = sub
print(addr,'\n',joined)
"""
Explanation: A namedtuple instance looks like an ordinary class instance, is interchangeable with a tuple, and supports all normal tuple operations such as indexing and unpacking:
End of explanation
"""
def compute_cost(records):
total = 0.0
for rec in records:
total += rec[1] * rec[2]
return total
"""
Explanation: The main use of namedtuple is to decouple your code from index-based access. If a database call returns a large list of tuples and you manipulate their elements by index, your code will break whenever a new column is added to the table; if you use a namedtuple, it will not.
Code using an ordinary tuple:
End of explanation
"""
Stock = namedtuple('Stock',['name','shares','price'])
def compute_cost2(records):
total = 0.0
for rec in records:
s = Stock(*rec)
total += s.shares * s.price
return total
"""
Explanation: Index-based access usually makes the intent of the code unclear and very dependent on the structure of the records, which invites ambiguity. Rewritten with a namedtuple:
End of explanation
"""
s = Stock('Ace',100,98.9)
s
s.shares
s.shares = 98
"""
Explanation: namedtuple 另一个用途 是作为dict 的替代 because dict 存储需要更多的内存空间 and 需要构建一个非常大的包含字典的数据结构 and 使用命名元组会更加高效 BUT 不像dict 一个namedtuple 是不可更改
End of explanation
"""
s2 = s._replace(shares=98)
print(s,'\n',s2)
"""
Explanation: 以上不能使用 s.shares = 98<br> if want to change the attr 可以使用namedtuple instance 's _replace() and 其会创建一个全新的namedtuple and 将对应字段用新的值取代
End of explanation
"""
# Create a ST type
ST = namedtuple('ST',['name','share','price','date','time'])
# Create a prototype instance
ST_prototype = ST('', 0, 0.0, None, None)
# Function to convert a dictionary to a ST
def dict_to_ST(s):
return ST_prototype._replace(**s)
# Note: dict_to_ST expects a dict, since ST_prototype._replace(**s) unpacks
# keyword arguments; passing a plain tuple here would raise a TypeError.
a = {'name':'hi','share':1,'price':12,'date':'2016-09-10','time':'18:19:18'}
dict_to_ST(a)
"""
Explanation: _replace() method and 有用特性 is 当你namedtuple 拥有可选或缺失字段时, 他是个超级方便填充数据的方法 可以先创建一个内含默认值 的原型(初态)tuple and 使用_replace() 创建新值被更新过的instance
End of explanation
"""
# Compute the sum of squares
nums = [1,2,3,4,5,6]
s = sum(x * x for x in nums)
s
# Determine if any .py files exist in a directory
# Check whether any Python files exist in this directory
# any() returns True as soon as one .py file is found
import os
files = os.listdir('f:\Save\python')
if any(name.endswith('.py') for name in files):
print('There be python file!')
else:
print('Sorry no python.')
# Output a tuple as CSV
s = ('ACME',50,123.34)
print(','.join(str(x) for x in s))
# Data reduction across fileds of a data structure
portfolio = [
{'name':'GOOG','share':50},
{'name':'Yahoo','share':75},
{'name':'ALO','share':20},
{'name':'CSX','share':85}
]
min_share = min(s['share'] for s in portfolio)
min_share
"""
Explanation: If your goal is an efficient data structure in which many instance attributes will be updated, a namedtuple is NOT the best choice; you can use a class that defines __slots__ instead.<br>REF: chp 8.4
1.19 Transforming and Reducing Data at the Same Time
You need to run an aggregation function (sum(), min(), max()) over a data sequence, but first the data has to be transformed or filtered.
Combine the reduction with the transformation by using a generator-expression argument.
End of explanation
"""
s = sum((x * x for x in nums)) # explicitly pass a generator-expression object
s = sum(x * x for x in nums) # more elegant: the extra parentheses are omitted
"""
Explanation: The above shows the neat syntax for passing a generator expression as the single argument to a function (no extra parentheses are needed); with or without the parentheses, the two forms are equivalent.
End of explanation
"""
s = sum([x * x for x in nums])
s
"""
Explanation: Using a generator expression as the argument is more efficient and more elegant than first creating a temporary list:
End of explanation
"""
# Original : Returns 20
min_s1 = min(s['share'] for s in portfolio)
# Alternative : Returns {'name': 'ALO', 'share': 20}
min_s2 = min(portfolio, key=lambda s:s['share'])
print(min_s1,'\n',min_s2)
"""
Explanation: Creating the extra temporary list above is slower: it builds a huge temporary data structure that is used only once and then discarded!
End of explanation
"""
|
google/xarray-beam | docs/rechunking.ipynb | apache-2.0 | import apache_beam as beam
import numpy as np
import xarray_beam as xbeam
import xarray
def create_records():
for offset in [0, 4]:
key = xbeam.Key({'x': offset, 'y': 0})
data = 2 * offset + np.arange(8).reshape(4, 2)
chunk = xarray.Dataset({
'foo': (('x', 'y'), data),
'bar': (('x', 'y'), 100 + data),
})
yield key, chunk
inputs = list(create_records())
"""
Explanation: Rechunking
Rechunking lets us re-distribute how datasets are split between variables and chunks across a Beam PCollection.
To get started we'll recreate our dummy data from the data model tutorial:
End of explanation
"""
inputs | xbeam.SplitVariables()
"""
Explanation: Choosing chunks
Chunking can be essential for some operations. Some operations are very hard or impossible to perform with certain chunking schemes. For example, to make a plot all the data needs to come together on a single machine. Other calculations such as calculating a median are possible to perform on distributed data, but require tricky algorithms and/or approximation.
More broadly, chunking can have critical performance implications, similar to those for Xarray and Dask. As a rule of thumb, chunk sizes of 10-100 MB work well. The optimal chunk size is a balance among a number of considerations, adapted here from Dask docs:
Chunks should be small enough to fit comfortably into memory on a single machine. As an upper limit, chunks over roughly 2 GB in size will not fit into the protocol buffers Beam uses to pass data between workers.
There should be enough chunks for Beam runners (like Cloud Dataflow) to elastically shard work over many workers.
Chunks should be large enough to amortize the overhead of networking and the Python interpreter, which starts to become noticeable for arrays with fewer than 1 million elements.
The nbytes attribute on both NumPy arrays and xarray.Dataset objects is an easy way to figure out how large chunks are.
Adjusting variables
The simplest transformation is splitting (or consolidating) different variables in a Dataset with SplitVariables() and ConsolidateVariables(), e.g.,
End of explanation
"""
inputs | xbeam.ConsolidateChunks({'x': -1})
"""
Explanation: Adjusting chunks
You can also adjust chunks in a dataset to distribute arrays of different sizes. Here you have two choices of API:
The lower level {py:class}~xarray_beam.SplitChunks and {py:class}~xarray_beam.ConsolidateChunks. These transformations apply a single splitting (with indexing) or consolidation (with {py:func}xarray.concat) function to array elements.
The high level {py:class}~xarray_beam.Rechunk, which uses a pipeline of multiple split/consolidate steps (as needed) to efficiently rechunk a dataset.
Low level rechunking
For minor adjustments (e.g., mostly along a single dimension), the more explicit SplitChunks() and ConsolidateChunks() are good options. They take a dict of desired chunk sizes as a parameter, which can also be -1 to indicate "no chunking" along a dimension:
End of explanation
"""
inputs | xbeam.SplitChunks({'x': 5}) # notice that the first two chunks are still separate!
"""
Explanation: Note that because these transformations only split or consolidate, they cannot necessary fully rechunk a dataset in a single step if the new chunk sizes are not multiples of old chunks (with consolidate) or do not even divide the old chunks (with split), e.g.,
End of explanation
"""
inputs | xbeam.SplitChunks({'x': 5}) | xbeam.ConsolidateChunks({'x': 5})
"""
Explanation: For such uneven cases, you'll need to use split followed by consolidate:
End of explanation
"""
inputs | xbeam.Rechunk(dim_sizes={'x': 6}, source_chunks={'x': 3}, target_chunks={'x': 5}, itemsize=8)
"""
Explanation: High level rechunking
Alternatively, the high-level Rechunk() method applies multiple split and consolidate steps based on the Rechunker algorithm:
End of explanation
"""
|
MonicaGutierrez/PracticalMachineLearningClass | notebooks/02-IntroMachineLearning.ipynb | mit | # Import libraries
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set();
cmap = mpl.colors.ListedColormap(sns.color_palette("hls", 3))
# Create a random set of examples
from sklearn.datasets.samples_generator import make_blobs
X, Y = make_blobs(n_samples=50, centers=2,random_state=23, cluster_std=2.90)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
"""
Explanation: 02 - Introduction to Machine Learning
by Alejandro Correa Bahnsen
version 0.1, Feb 2016
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task: the figure shows a
collection of two-dimensional data, colored according to two different class
labels.
End of explanation
"""
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(loss="hinge", alpha=0.01, n_iter=200, fit_intercept=True)
clf.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, .05), np.arange(y_min, y_max, .05))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contour(xx, yy, Z)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
"""
Explanation: A classification algorithm may be used to draw a dividing boundary
between the two clusters of points:
End of explanation
"""
a = 0.5
b = 1.0
# x from 0 to 10
x = 30 * np.random.random(20)
# y = a*x + b with noise
y = a * x + b + np.random.normal(size=x.shape)
plt.scatter(x, y)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(x[:, None], y)
# underscore at the end indicates a fit parameter
print(clf.coef_)
print(clf.intercept_)
x_new = np.linspace(0, 30, 100)
y_new = clf.predict(x_new[:, None])
plt.scatter(x, y)
plt.plot(x_new, y_new)
"""
Explanation: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data: if you were to drop another point onto the plane which is unlabeled, this algorithm
could now predict whether it's a blue or a red point.
The next simple task we'll look at is a regression task: a simple best-fit line
to a set of data:
End of explanation
"""
from IPython.core.display import Image, display
imp_path = 'https://raw.githubusercontent.com/jakevdp/sklearn_pycon2015/master/notebooks/images/'
display(Image(url=imp_path+'iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(url=imp_path+'iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(url=imp_path+'iris_virginica.jpg'))
print("Iris Virginica")
display(Image(url='https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/6160065e1e574a20edddc47116a0512d20656e26/notebooks/iris_with_length.png'))
print('Iris versicolor and the petal and sepal width and length')
print('From, Python Data Analytics, Apress, 2015.')
"""
Explanation: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data:
here, we might be given an x-value, and the model would
allow us to predict the y value. Again, this might seem like a trivial problem,
but it is a basic example of a type of operation that is fundamental to
machine learning tasks.
Representation of Data in Scikit-learn
Machine learning is about creating models from data: for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the
iris data stored by scikit-learn.
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
"""
Explanation: Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number i must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-Learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
"""
import pandas as pd # Pandas is a topic of the next session
data_temp = pd.DataFrame(iris.data, columns=iris.feature_names)
data_temp['target'] = iris.target
data_temp['target'] = data_temp['target'].astype('category')
data_temp['target'].cat.categories = iris.target_names
sns.pairplot(data_temp, hue='target', palette=sns.color_palette("hls", 3))
"""
Explanation: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot:
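For example, picking the two sepal measurements (any pair of the four features would work):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
x_index, y_index = 0, 1  # sepal length (cm) vs. sepal width (cm)

plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
```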
End of explanation
"""
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit(X)
X_reduced = pca.transform(X)
X_reduced.shape
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap=cmap)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.set_title('Iris Dataset by PCA', size=14)
ax.scatter(X_reduced[:,0],X_reduced[:,1],X_reduced[:,2], c=y, cmap=cmap)
ax.set_xlabel('First eigenvector')
ax.set_ylabel('Second eigenvector')
ax.set_zlabel('Third eigenvector')
ax.w_xaxis.set_ticklabels(())
ax.w_yaxis.set_ticklabels(())
ax.w_zaxis.set_ticklabels(())
plt.show()
"""
Explanation: Dimensionality Reduction: PCA
Principal Component Analysis (PCA) is a dimensionality-reduction technique that can find the combinations of variables that explain the most variance.
Consider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:
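How much information survives the projection can be read off explained_variance_ratio_: for the iris data, two principal components already capture most of the variance. A quick standalone check:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
pca = PCA(n_components=2).fit(X)

# Fraction of the total variance captured by each component
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())  # well above 0.9 for iris
```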
End of explanation
"""
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred, cmap=cmap);
"""
Explanation: Clustering: K-means
Clustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.
Note that these clusters will uncover relevant hidden structure of the data only if the criterion used highlights it.
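Because k-means labels are arbitrary (cluster 0 need not correspond to class 0), a permutation-invariant score such as the adjusted Rand index is one way to compare a clustering against known labels. A small self-contained sketch:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

iris = load_iris()
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(iris.data)

# 1.0 would be a perfect match up to label permutation; 0.0 is chance level
print(adjusted_rand_score(iris.target, labels))
```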
End of explanation
"""
from sklearn.metrics import confusion_matrix
# Compute confusion matrix
cm = confusion_matrix(y, y_pred)
np.set_printoptions(precision=2)
print(cm)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.figure()
plot_confusion_matrix(cm)
"""
Explanation: Let's now evaluate the performance of the clustering against the ground truth
End of explanation
"""
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X, y)
y_pred = clf.predict(X)
cm = confusion_matrix(y, y_pred)
print(cm)
plt.figure()
plot_confusion_matrix(cm)
"""
Explanation: Classification: Logistic Regression
End of explanation
"""
from IPython.display import Image
Image(url="http://scikit-learn.org/dev/_static/ml_map.png")
"""
Explanation: Recap: Scikit-learn's estimator interface
Scikit-learn strives to have a uniform interface across all methods,
and we'll see examples of these below. Given a scikit-learn estimator
object named model, the following methods are available:
Available in all Estimators
model.fit() : fit training data. For supervised learning applications,
this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).
For unsupervised learning applications, this accepts only a single argument,
the data X (e.g. model.fit(X)).
Available in supervised estimators
model.predict() : given a trained model, predict the label of a new set of data.
This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),
and returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categorical label.
In this case, the label with the highest probability is returned by model.predict().
model.score() : for classification or regression problems, most (all?) estimators implement
a score method. Scores are between 0 and 1, with a larger score indicating a better fit.
Available in unsupervised estimators
model.predict() : predict labels in clustering algorithms.
model.transform() : given an unsupervised model, transform new data into the new basis.
This also accepts one argument X_new, and returns the new representation of the data based
on the unsupervised model.
model.fit_transform() : some estimators implement this method,
which more efficiently performs a fit and a transform on the same input data.
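Put together, the same handful of calls covers a typical supervised workflow. Sketched here with LogisticRegression, but any estimator exposing these methods would do:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)                     # fit training data

pred = model.predict(X[:3])         # predicted labels for new observations
proba = model.predict_proba(X[:3])  # one probability per class and sample
score = model.score(X, y)           # mean accuracy on (X, y)

print(pred, proba.shape, score)
```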
Flow Chart: How to Choose your Estimator
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
End of explanation
"""
|
Caranarq/01_Dmine | Datasets/CFE/Usuarios Electricos (P0609).ipynb | gpl-3.0 | descripciones = {
'P0609': 'Usuarios Electricos'
}
# Libraries used
import pandas as pd
import sys
import urllib
import os
import csv
import zipfile
# System configuration
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
"""
Explanation: Electricity Users
Parameters obtained from this source
ID |Description
---|:----------
P0609|Electricity users
End of explanation
"""
url = r'http://datos.cfe.gob.mx/Datos/Usuariosyconsumodeelectricidadpormunicipio.csv'
archivo_local = r'D:\PCCS\00_RawData\01_CSV\CFE\UsuariosElec.csv'
if os.path.isfile(archivo_local):
    print('File already exists: {}'.format(archivo_local))
else:
    print('Downloading {} ... ... ... ... ... '.format(archivo_local))
    urllib.request.urlretrieve(url, archivo_local)
    print('Downloaded {}'.format(archivo_local))
"""
Explanation: 2. Data download
End of explanation
"""
dtypes = { # Numeric values in the CSV are stored as " 000,000 " and require cleanup
'Cve Mun':'str',
'2010':'str',
'2011':'str',
'2012':'str',
'2013':'str',
'2014':'str',
'2015':'str',
'2016':'str',
'ene-17':'str',
'feb-17':'str',
'mar-17':'str',
'abr-17':'str',
'may-17':'str',
'jun-17':'str',
'jul-17':'str',
'ago-17':'str',
'sep-17':'str',
'oct-17':'str',
'nov-17':'str',
'dic-17':'str'}
# Read the dataset
dataset = pd.read_csv(archivo_local, skiprows=2, nrows=82236, na_values=' - ',
                      dtype=dtypes)
dataset['CVE_EDO'] = dataset['Cve Inegi'].apply(lambda x: '{0:0>2}'.format(x)) # 2-digit CVE_EDO
dataset['CVE_MUN'] = dataset['CVE_EDO'].map(str) + dataset['Cve Mun']
dataset.head()
# Strip whitespace and commas from columns that should be numeric
columnums = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', 'ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17']
for columna in columnums:
dataset[columna] = dataset[columna].str.replace(' ','')
dataset[columna] = dataset[columna].str.replace(',','')
dataset.head()
# Convert columns to numeric
columnasanios = ['2010', '2011', '2012', '2013', '2014', '2015', '2016', 'ene-17', 'feb-17',
'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17']
for columna in columnasanios:
dataset[columna] = pd.to_numeric(dataset[columna], errors='coerce', downcast = 'integer')
dataset.head()
# Drop columns that will no longer be used
dropcols = ['Cve Edo', 'Cve Inegi', 'Cve Mun', 'Entidad Federativa', 'Municipio', 'Unnamed: 25', 'CVE_EDO']
dataset = dataset.drop(dropcols, axis = 1)
# Set CVE_MUN as the index
dataset = dataset.set_index('CVE_MUN')
dataset.head()
# Sum the 2017 monthly columns
columnas2017 = ['ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17', 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17']
dataset['2017'] = dataset[columnas2017].sum(axis = 1)
# Drop the 2017 monthly columns
dataset = dataset.drop(columnas2017, axis = 1)
dataset.head()
"""
Explanation: 3. Standardization of parameter data
End of explanation
"""
len(dataset)
dataset.head(40)
dataset_total = dataset[dataset['Tarifa'] == 'TOTAL']
dataset_total.head()
len(dataset_total)
# Drop the "Tarifa" column, since this dataset now only contains totals
dataset_total = dataset_total.drop(['Tarifa'], axis = 1)
dataset_total.head()
# Metadata
metadatos = {
'Nombre del Dataset': 'Usuarios de Energía eléctrica',
'Descripcion del dataset': 'Numero de Usuarios de energia electrica sin importar Tarifa',
'Disponibilidad Temporal': '2010 - 2017',
'Periodo de actualizacion': 'Anual',
'Nivel de Desagregacion': 'Municipal',
'Notas': 'S/N',
'Fuente': 'CFE',
'URL_Fuente': 'https://datos.gob.mx/busca/dataset/usuarios-y-consumo-de-electricidad-por-municipio',
'Dataset base': None
}
metadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype='str')
metadatos.columns = ['Descripcion']
metadatos= metadatos.rename_axis('Metadato')
metadatos
# Save the dataset
file = r'D:\PCCS\01_Dmine\Datasets\CFE\Usuarios_Electricidad.xlsx'
writer = pd.ExcelWriter(file)
dataset_total.to_excel(writer, sheet_name = 'DATOS')
metadatos.to_excel(writer, sheet_name = 'METADATOS')
writer.save()
print('---------------DONE---------------')
"""
Explanation: Export the dataset
Before exporting the dataset I will reduce its size, since it has 82,236 rows split by tariff. I will keep only the totals across all tariffs.
End of explanation
"""
|
marcinofulus/LDLtransport | LDL_transport_model.ipynb | gpl-3.0 | %pylab inline
import numpy as np
from scipy.sparse import dia_matrix
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
import matplotlib
import matplotlib.pyplot as plt
newparams = { 'savefig.dpi': 100, 'figure.figsize': (12/2., 5/2.) }
plt.rcParams.update(newparams)
params = {'legend.fontsize': 8,
'legend.linewidth': 0.2}
plt.rcParams.update(params)
"""
Explanation: Giant low-density lipoprotein (LDL) accumulation in multi-layer artery wall models
In this document we present the source code of the LDL transport simulations.
We solve numerically the transport equations:
\begin{eqnarray}
\frac{\partial c}{\partial t} +(1-\sigma)\vec u\cdot\nabla c & = & D_{eff} \nabla^2 c - k c
\end{eqnarray}
in four layers.
End of explanation
"""
def phi(WSS = 1.79):
Rcell = 15e-3 # mm
area=.64 # mm^2
SI = 0.38*np.exp(-0.79*WSS) + 0.225*np.exp(-0.043*WSS)
MC = 0.003797* np.exp(14.75*SI)
LC = 0.307 + 0.805 * MC
phi = (LC*np.pi*Rcell**2)/(area)
return( phi)
def Klj(w=14.3e-6,phi=5e-4):
"permability in m^2"
Rcell = 15e-3 # mm
return ( (w**2/3.)*(4.*w*phi)/Rcell * (1e-6) )
def Kend(w=14.3e-6,phi=5e-4):
"permability w m^2"
Kend_70mmHg =3.22e-21
Knj = Kend_70mmHg - Klj() # at 70mmHg
return Knj + Klj(w,phi)
def sigma_end(phi=5e-4,w=14.3*1e-6,r_m = 11e-6):
a = r_m/w
Kend_70mmHg =3.22e-21
Knj = Kend_70mmHg - Klj() # at 70mmHg
sigma_lj = 1-(1-3/2.*a**2+0.5*a**3)*(1-1/3.*a**2)
return 1 - ((1-sigma_lj)*Klj(phi=phi))/(Knj+Klj(phi=phi))
def Diffusivity(w=14.3e-6,phi=5e-4,r_m = 11e-6):
"Diffusivity w um^2/s"
R_cell = 15e-6 # m
a=r_m/w
D_lumen=2.71e-11
return D_lumen*(1-a)*(1.-1.004*a+0.418*a**3-0.16*a**5)*4*w/R_cell*phi*1e-3*1e12
"""
Explanation: WSS dependent parameters
The impact of WSS on the transport properties is given by the equations (8-11) developed by Olgac et al.
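As a quick sanity check of this dependence, the leaky-junction fraction from the cell above can be evaluated at a low and a high WSS value (the function is restated here so the snippet is self-contained):

```python
import numpy as np

def phi(WSS=1.79):
    # fraction of leaky junctions as a function of wall shear stress (Pa)
    Rcell = 15e-3  # mm
    area = 0.64    # mm^2
    SI = 0.38 * np.exp(-0.79 * WSS) + 0.225 * np.exp(-0.043 * WSS)
    MC = 0.003797 * np.exp(14.75 * SI)
    LC = 0.307 + 0.805 * MC
    return (LC * np.pi * Rcell**2) / area

# Low-WSS regions have a markedly larger fraction of leaky junctions
print(phi(0.02), phi(2.2))
```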
End of explanation
"""
class LDL_Parameters_Vafai2012(object):
""" S. Chung, K. Vafai, International Journal of Biomechanics 45(2012)"""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 5.7e-12 , 5.4e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9888 , 0.8272 , 0.9827 , 0.8836 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 3.197e-4 ]
K = [ 3.22e-15 , 2e-10 ,4.392e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
""" Change units to mikrometers
Class can be initialized with a value of WSS in Pa
"""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Vafai2006_Ai(object):
""" L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006)
Table 2 Physiological parameters used in the numerical simulation
values given in milimeters
With D of endothelium depending on WSS"""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 6e-11 , 5.0e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9886 , 0.8292 , 0.8295 , 0.8660 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 1.4e-4 ]
K = [ 3.2172e-15 , 2.2e-10 ,3.18e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
""" Change units to mikrometers
Class can be initialized with a value of WSS in Pa
"""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Vafai2006_Ai_without_D(object):
""" L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006)
Table 2 Physiological parameters used in the numerical simulation
values given in milimeters
With D of endothelium from that work.
"""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 8.15e-11 , 5.0e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9886 , 0.8292 , 0.8295 , 0.8660 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 1.4e-4 ]
K = [ 3.2172e-15 , 2.2e-10 ,3.18e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
""" Change units to mikrometers
Class can be initialized with a value of WSS in Pa
"""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Olgac_WSS(object):
""" U. Olgac, V. Kurtcuoglu, D. Poulikakos, American Journal of Physiology-Heart and Circulatory
Physiology 294
    The dependency on WSS is implemented
"""
name = [ 'endothel' , 'wall' ]
D = [ 6e-11 , 8.0e-7 ]
V = [ 2.3e-5 ]*2
sigma = [ 0.988 , 0.8514 ]
L = [ 2. , 338. ]
k_react = [ 0. , 3.0e-4 ]
K = [ 3.32e-15 , 1.2e-12]
mu = [ 0.72e-3 ,0.001]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
""" Change units to mikrometers
Class can be initialized with a value of WSS in Pa
"""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
"""
Explanation: Parameters
In the simulation, four sets of parameters can be used. Three of them correspond to the four-layer model; the last one is the two-layer model.
End of explanation
"""
class LDL_Sim(object):
def __init__(self, pars):
self.pars = pars
self.c_st = None
def discretize(self,N=2000):
self.N = N
k = np.ones(N)
v = np.ones(N)
Dyf = np.ones(N)
D,V,sigma,L,k_react = self.pars.get_params()
l = np.sum(L)
self.l = l
self.x=np.linspace(0,l,N)
layers=[0]+list( np.ceil( (N*(np.cumsum(L)/sum(L)))).astype(np.int32) )
for i,(l1,l2) in enumerate(zip(layers[:],layers[1:])):
k[l1:l2] = k_react[i]
v[l1:l2] = (1.0-sigma[i])*V[i]
Dyf[l1:l2] = D[i]
dx2_1 = (N-1)**2/l**2
dx_1 = (N-1)/l
diag_l = np.ones(N)*(np.roll(Dyf,-1)*dx2_1)
diag = np.ones(N)*(-2.*Dyf*dx2_1 - k + v*dx_1)
diag_u = np.ones(N)*(np.roll(Dyf,1)*dx2_1 - np.roll(v,1)*dx_1)
# Layer's junctions
for j in layers[1:-1]:
diag[j] = v[j-1]-v[j+1]-(Dyf[j-1]+Dyf[j+1])*dx_1
diag_l[j-1] = Dyf[j-1]*dx_1
diag_u[j+1] = Dyf[j+1]*dx_1
#Boundary Conditions
diag[0] = 1
diag[-1] = 1
diag_u[0+1] = 0
diag_l[0-2] = 0
self.L = dia_matrix((np.array([diag_l,diag,diag_u]),np.array([-1,0,1])), shape=(N,N))
def solve_stationary(self,bc=[1,0]):
b = np.zeros(self.N)
b[0],b[-1] = bc
L = self.L.tocsr()
self.c_st = sp.sparse.linalg.linsolve.spsolve(L,b)
def plot_c(self,yrange=(0,0.2),xrange=(0,214),filename=None, color='red', alpha=0.2, style='-'):
i1,i2 = int(xrange[0]/self.l*self.N),int(xrange[1]/self.l*self.N)
plt.plot(self.x[i1:i2],self.c_st[i1:i2],color=color,linewidth=2, ls=style)
plt.ylim( *yrange)
plt.xlim( *xrange)
L=self.pars.L
d=[0]+np.cumsum(self.pars.L).tolist()
colors=['m','g','b','w']
for i,(l1,l2) in enumerate(zip(d[:],d[1:])):
plt.bar([l1,],yrange[1],l2-l1, color=colors[i], linewidth=0.3, alpha=alpha)
plt.grid(True,axis='y', which='major')
plt.xlabel(r"$x \left[\mu m\right]$")
plt.ylabel(r"$c(x)$")
if filename!=None:
plt.savefig(filename)
"""
Explanation: Assembling and solving the discrete system
The LDL_Sim class is responsible for solve the differential equation.
In the first step the simulation region is discretized by the method discretize. In that function flux continuity between layers and boundary conditions are implemented. It is done in a following way:
the layers are encoded as list layers in discretize, containing indices of boundaries for given discretization
system parameters depending on space ($k,\vec u,\sigma,D_{eff}$) are sampled
the diagonals of matrix are assembled using finite differences in space, neglecting for a moment the space dependence
at regions boudaries the equation is replaced with
$$J_L = J_R$$
it takes a form for a boundary at index j:
$$ c_j v_{j-1} - D_{j-1}\frac{c_j-c_{j-1}}{dx} = c_j v_{j+1} - D_{j+1}\frac{c_{j+1}-c_{j}}{dx} $$
collecting terms with the same $c$:
$$ c_j \left( v_{j-1}- v_{j+1} - \frac{D_{j-1}+D_{j+1}}{dx}\right) + c_{j-1}\frac{D_{j-1}}{dx} + c_{j+1}\frac{D_{j+1}}{dx} = 0
$$
therefore we need to enforce the above equation on the system for each boundary inside the domain
the system matrix is stored as scipy.sparse dia_matrix
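As a small standalone illustration of this storage scheme, dia_matrix takes an array whose rows are the diagonals together with their offsets; here a plain 1D Laplacian stencil:

```python
import numpy as np
from scipy.sparse import dia_matrix

n = 5
lower = np.ones(n)         # sub-diagonal   (offset -1)
main = -2.0 * np.ones(n)   # main diagonal  (offset  0)
upper = np.ones(n)         # super-diagonal (offset +1)

L = dia_matrix((np.array([lower, main, upper]), np.array([-1, 0, 1])),
               shape=(n, n))
print(L.toarray())  # tridiagonal matrix: -2 on the diagonal, 1 next to it
```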
The plot_c function is used to plot the concentration profiles.
End of explanation
"""
def LDL_simulation(wss=1.79, parameters="4L_2012", bc=[1, 0.0047], verbose=True):
if (parameters=="4L_2012"):
pars = LDL_Parameters_Vafai2012(WSS=wss)
elif (parameters=="4L_2006_Ai"):
pars = LDL_Parameters_Vafai2006_Ai(WSS=wss)
elif (parameters=="4L_2006_Ai_without_D"):
pars = LDL_Parameters_Vafai2006_Ai_without_D(WSS=wss)
elif (parameters=="2L"):
pars = LDL_Parameters_Olgac_WSS(WSS=wss)
else:
print "Parameters error"
return
sim = LDL_Sim(pars)
sim.discretize(130*214)
sim.solve_stationary(bc=bc)
if verbose:
print "The total surfaced LDL concentration:",np.sum(sim.c_st)*(sim.l/(sim.N-1))
return sim
"""
Explanation: Function that performs a simulation
This function performs the whole simulation: the parameters are initialized and then the equation is solved. As a result we get an object that contains the LDL concentration values at the discretized points, together with the plot_c function, which can plot the concentration profile.
The function can be called with the name of one of the parameter sets:
"4L_2012" for S. Chung, K. Vafai, International Journal of Biomechanics 45(2012)
"4L_2006_Ai" for L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006) with D(WSS)
"4L_2006_Ai_without_D" for L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006) with D from this publication
"2L" for U. Olgac, V. Kurtcuoglu, D. Poulikakos, American Journal of Physiology-Heart and
Circulatory Physiology 294
End of explanation
"""
LDL_simulation(wss=0.02, parameters="4L_2012").plot_c(yrange=(0,5.0),xrange=(0,214), color='green', alpha=0.1)
LDL_simulation(wss=0.02, parameters="4L_2006_Ai").plot_c(yrange=(0,5.0),xrange=(0,214), color='blue', alpha=0.1, style='--')
legend(('4LC parameters', '4LA parameters'));
LDL_simulation(wss=0.02, parameters="4L_2012").plot_c(yrange=(0,5.0),xrange=(0,25), color='green', alpha=0.1)
LDL_simulation(wss=0.02, parameters="4L_2006_Ai").plot_c(yrange=(0,5.0),xrange=(0,25), color='blue', alpha=0.1, style='--')
legend(('4LC parameters', '4LA parameters'));
"""
Explanation: Examples
In this section, the results presented in the publications are reproduced.
Small WSS=0.02
End of explanation
"""
sim = LDL_simulation(wss=2.2, parameters="4L_2012")
sim.plot_c(yrange=(0,1.1),xrange=(0,214), color='green', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai")
sim.plot_c(yrange=(0,1.1),xrange=(0,214), color='blue', alpha=0.1, style='--')
legend(('4LC parameters', '4LA parameters'));
# use e.g. xlim(0,10) to zoom
sim = LDL_simulation(wss=2.2, parameters="4L_2012")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='green', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai_without_D")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='orange', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='blue', alpha=0.1, style='--')
legend(('4LC parameters', '4LA parameters without D(WSS)', '4LA parameters with D(WSS)'));
"""
Explanation: High WSS=2.2
End of explanation
"""
WSSs = np.arange(0.0,3.0,0.1)
print(WSSs)
c_endo_wss=[LDL_simulation(wss=x, parameters="4L_2012",verbose=False).c_st[2*130] for x in WSSs]
c_endo_wss2=[LDL_simulation(wss=x, parameters="4L_2006_Ai",verbose=False).c_st[2*130] for x in WSSs]
plot (WSSs,c_endo_wss2)
plot (WSSs,c_endo_wss, ls='--')
title("WSS dependence of intima side LDL concentration at the endothelium", fontsize=10)
xlim([0.0,2.5])
xlabel('WSS [Pa]')
ylabel('$c_{w}end^{*}$')
grid(True)
legend(('4LA parameters', '4LC parameters'))
"""
Explanation: Concentration in the intima as a function of WSS
End of explanation
"""
sim = LDL_simulation(wss=2.2, parameters="2L")
sim.plot_c(yrange=(0,0.012),xrange=(0,340));
"""
Explanation: Two layer model
High WSS=2.2
End of explanation
"""
sim = LDL_simulation(wss=0.02, parameters="2L")
sim.plot_c(yrange=(0,0.25),xrange=(0,340))
"""
Explanation: Low WSS=0.02
End of explanation
"""
c_endo_wss=[LDL_simulation(wss=x, parameters="2L",verbose=False).c_st[2*130] for x in WSSs]
plot (WSSs,c_endo_wss)
xlim([0.0,1.6])
ylim([0,0.3])
xlabel('WSS [Pa]')
ylabel('$c_{w}end^{*}$')
grid(True)
plt.title("WSS dependence of intima side LDL concentration at the endothelium", fontsize=10)
"""
Explanation: Concentration in the intima as a function of WSS
End of explanation
"""
|
feststelltaste/software-analytics | notebooks/Knowledge Islands.ipynb | gpl-3.0 | import git
from io import StringIO
import pandas as pd
# connect to repo
git_bin = git.Repo("../../buschmais-spring-petclinic/").git
# execute log command
git_log = git_bin.execute('git log --no-merges --no-renames --numstat --pretty=format:"%x09%x09%x09%aN"')
# read in the log
git_log = pd.read_csv(StringIO(git_log), sep="\x09", header=None, names=['additions', 'deletions', 'path','author'])
# convert to DataFrame
commit_data = git_log[['additions', 'deletions', 'path']].join(git_log[['author']].fillna(method='ffill')).dropna()
commit_data.head()
"""
Explanation: TLDR; I show how you can visualize the knowledge distribution of your source code by mining version control systems.
Introduction
In software development, it's all about knowledge – both technical and of the business domain. Yet we software developers transfer only a small part of this knowledge into code, and code alone isn't enough to get a glimpse of the greater picture and the interrelations of all the different concepts. There will always be developers who know more about some concept than what is laid down in the source code. It's important to make sure that this knowledge is distributed over more than one head. More developers mean more perspectives on the problem domain, leading to a more robust and understandable code base.
How can we get insights about knowledge in code?
It's possible to estimate the knowledge distribution by analyzing the version control system. We can use active changes in the code as a proxy for "someone knew what they did", because otherwise they wouldn't be able to contribute code at all. To find spots where the knowledge about the code could be improved, we can identify areas in the code that are possibly known by only one developer. This gives you a hint where you should start some pair programming or invest in redocumentation.
In this blog post, we approximate the knowledge distribution by counting the number of additions per file that each developer contributed to a software system. I'll show you step by step how you can do this by using Python and Pandas.
Attribution: This work is heavily inspired by Adam Tornhill's book "Your Code as a Crime Scene", in which he did a similar analysis called a "knowledge map". I also use a similar "bubble chart" visualization style based on his work.
Import history
For this analysis, you need a log from your Git repository. In this example, we analyze a fork of the Spring PetClinic project.
To avoid some noise, we add the parameters <tt>--no-merges</tt> and <tt>--no-renames</tt>, too.
bash
git log --no-merges --no-renames --numstat --pretty=format:"%x09%x09%x09%aN"
We read the log output into a Pandas' <tt>DataFrame</tt> by using the method described in this blog post, but slightly modified (because we need less data):
End of explanation
"""
existing_files = pd.DataFrame(git_bin.execute('git ls-files -- *.java').split("\n"), columns=['path'])
existing_files.head()
"""
Explanation: Getting data that matters
In this example, we are only interested in Java source code files that still exist in the software project.
We can retrieve the existing Java source code files by using Git's <tt>ls-files</tt> combined with a filter for the Java source code file extension. The command will return a plain text string that we split by the line endings to get a list of files. Because we want to combine this information with the other above, we put it into a <tt>DataFrame</tt> with the column name <tt>path</tt>.
End of explanation
"""
contributions = pd.merge(commit_data, existing_files)
contributions.head()
"""
Explanation: The next step is to combine the <tt>commit_data</tt> with the <tt>existing_files</tt> information by using Pandas' <tt>merge</tt> function. By default, <tt>merge</tt> will
- combine the data by the columns with the same name in each <tt>DataFrame</tt>
- only leave those entries that have the same value (using an "inner join").
In plain English, <tt>merge</tt> will only leave the still existing Java source code files in the <tt>DataFrame</tt>. This is exactly what we need.
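The filtering effect of the default inner join can be seen on a toy example (the file names here are made up purely for illustration):

```python
import pandas as pd

log = pd.DataFrame({'path': ['A.java', 'B.java', 'C.java'],
                    'additions': [10, 20, 30]})
existing = pd.DataFrame({'path': ['A.java', 'C.java']})  # B.java was deleted

# Inner join on the shared 'path' column drops the no-longer-existing file
merged = pd.merge(log, existing)
print(merged)
```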
End of explanation
"""
contributions['additions'] = pd.to_numeric(contributions['additions'])
contributions['deletions'] = pd.to_numeric(contributions['deletions'])
contributions.head()
"""
Explanation: We can now convert some columns to their correct data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code as numbers. We have to convert them accordingly.
End of explanation
"""
contributions_sum = contributions.groupby('path').sum()[['additions', 'deletions']].reset_index()
contributions_sum.head()
"""
Explanation: Calculating the knowledge about code
We want to estimate the knowledge about code as the proportion of additions to the whole source code file. This means we need to calculate the relative amount of added lines for each developer. To be able to do this, we have to know the sum of all additions for a file.
Additionally, we calculate it for deletions as well to easily get the number of lines of code later on.
We use an additional <tt>DataFrame</tt> to do these calculations.
End of explanation
"""
contributions_sum['lines'] = contributions_sum['additions'] - contributions_sum['deletions']
contributions_sum.head()
"""
Explanation: We also want to have an indicator of the quantity of the knowledge. This can be achieved if we calculate the lines of code for each file, which is a simple subtraction of the deletions from the additions (be warned: this only works for simple use cases where there are no heavy renames of files, as in our case).
End of explanation
"""
contributions_all = pd.merge(
contributions,
contributions_sum,
left_on='path',
right_on='path',
suffixes=['', '_sum'])
contributions_all.head()
"""
Explanation: We combine both <tt>DataFrame</tt>s with a <tt>merge</tt> analogous to the one above.
End of explanation
"""
grouped_contributions = contributions_all.groupby(
['path', 'author']).agg(
{'additions' : 'sum',
'additions_sum' : 'first',
'lines' : 'first'})
grouped_contributions.head(10)
"""
Explanation: Identify knowledge hotspots
OK, here comes the key: We group all additions by the file paths and the authors. This gives us all the additions to a file per author. Additionally, we want to keep the sum of all additions as well as the information about the lines of code. Because those are contained in the <tt>DataFrame</tt> multiple times, we just get the first entry for each.
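A toy example (made-up paths and authors) shows why the aggregation mixes 'sum' and 'first': per-author additions have to be added up, while the per-file totals are repeated on every row, so any one value will do:

```python
import pandas as pd

df = pd.DataFrame({'path':      ['A.java', 'A.java', 'A.java', 'B.java'],
                   'author':    ['ann', 'ann', 'bob', 'ann'],
                   'additions': [5, 3, 2, 7],
                   'lines':     [10, 10, 10, 7]})  # repeated per file

per_author = df.groupby(['path', 'author']).agg(
    {'additions': 'sum',   # total additions by this author to this file
     'lines': 'first'})    # constant within a file -> take any one value
print(per_author)
```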
End of explanation
"""
grouped_contributions['ownership'] = grouped_contributions['additions'] / grouped_contributions['additions_sum']
grouped_contributions.head()
"""
Explanation: Now we are ready to calculate the knowledge "ownership". The ownership is an author's share of all the additions to a file.
End of explanation
"""
# Per file, keep the row of the author with the highest ownership
# (a plain groupby().max() would take the maximum of each column independently)
ownerships = grouped_contributions.reset_index().sort_values('ownership').groupby('path').last()
ownerships.head(5)
"""
Explanation: Having this data, we can now extract the author with the highest ownership value for each file. This gives us a list with the knowledge "holder" for each file.
End of explanation
"""
plot_data = ownerships.reset_index()
plot_data['responsible'] = plot_data['author']
plot_data.loc[plot_data['ownership'] <= 0.7, 'responsible'] = "None"
plot_data.head()
"""
Explanation: Preparing the visualization
Reading tables is not as much fun as a good visualization. I find Adam Tornhill's suggestion of an enclosure or bubble chart very good:
<img src="https://pbs.twimg.com/media/C-fYgvCWsAAB1y8.jpg" style="width: 500px;"/>
Source: Thorsten Brunzendorf (@thbrunzendorf)
The visualization is written in D3 and just needs data in a specific format called "flare". So let's prepare some data for this!
First, we calculate the <tt>responsible</tt> author. We say that an author who contributed more than 70% of the source code is the responsible person to ask if we want to know something about the code. For all the other code parts, we assume that the knowledge is distributed among different heads.
End of explanation
"""
import numpy as np
from matplotlib import cm
from matplotlib.colors import rgb2hex
authors = plot_data[['author']].drop_duplicates()
rgb_colors = [rgb2hex(x) for x in cm.RdYlGn_r(np.linspace(0,1,len(authors)))]
authors['color'] = rgb_colors
authors.head()
"""
Explanation: Next, we need some colors per author to be able to distinguish them in our visualization. We use the two classic data analysis libraries for this. We just draw some colors from a color map here for each author.
End of explanation
"""
colored_plot_data = pd.merge(
plot_data, authors,
left_on='responsible',
right_on='author',
how='left',
suffixes=['', '_color'])
colored_plot_data.loc[colored_plot_data['responsible'] == 'None', 'color'] = "white"
colored_plot_data.head()
"""
Explanation: Then we merge the colors into the plot data and whiten the entries with minor ownership, i.e. all the <tt>None</tt> responsibilities.
End of explanation
"""
import os
import json
json_data = {}
json_data['name'] = 'flare'
json_data['children'] = []
for row in colored_plot_data.iterrows():
series = row[1]
path, filename = os.path.split(series['path'])
last_children = None
children = json_data['children']
for path_part in path.split("/"):
entry = None
for child in children:
if "name" in child and child["name"] == path_part:
entry = child
if not entry:
entry = {}
children.append(entry)
entry['name'] = path_part
if not 'children' in entry:
entry['children'] = []
children = entry['children']
last_children = children
last_children.append({
'name' : filename + " [" + series['responsible'] + ", " + "{:6.2f}".format(series['ownership']) + "]",
'size' : series['lines'],
'color' : series['color']})
with open ( "vis/flare.json", mode='w', encoding='utf-8') as json_file:
json_file.write(json.dumps(json_data, indent=3))
"""
Explanation: Visualizing
The bubble chart needs D3's flare format for displaying. We just dump the <tt>DataFrame</tt> data into this hierarchical format. As for hierarchy, we use the Java source files that are structured via directories.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/text_models/solutions/automl_for_text_classification_vertex.ipynb | apache-2.0 | import os
import pandas as pd
from google.cloud import bigquery
"""
Explanation: AutoML for Text Classification with Vertex AI
Learning Objectives
Learn how to create a text classification dataset for AutoML using BigQuery
Learn how to train AutoML to build a text classification model
Learn how to evaluate a model trained with AutoML
Learn how to predict on new test data with AutoML
Introduction
In this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles: New York Times, TechCrunch or GitHub.
In a first step, we will query a public dataset on BigQuery taken from Hacker News (an aggregator that displays tech-related headlines from various sources) to create our training set.
In a second step, we will use the AutoML UI to upload our dataset, train a text model on it, and evaluate the model we have just trained.
End of explanation
"""
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
exists=$(gsutil ls -d | grep -w gs://$BUCKET/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://$BUCKET already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l $REGION gs://$BUCKET
echo -e "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Replace the variable values in the cell below. Note, AutoML can only be run in the regions where it is available.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
"""
regex = ".*://(.[^/]+)/"
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(
regex
)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(
sub_query=sub_query
)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print(f"The full dataset contains {len(title_dataset)} titles")
"""
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column being the text labels
The dataset we pulled from BiqQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of labels for each of our three labels:
End of explanation
"""
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
"""
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
"""
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
%%bash
gsutil cp data/titles_sample.csv gs://$BUCKET
"""
Explanation: Let's write the sample dataset to disk.
End of explanation
"""
from google.cloud import storage
storage_client = storage.Client()
bucket = storage_client.get_bucket(BUCKET)
SAMPLE_BATCH_INPUTS = "./batch_predict_inputs.jsonl"
for idx, text in sample_title_dataset.title.items():
# write the text sample to GCS
blob = bucket.blob(f"hacker_news_sample/sample_{idx}.txt")
blob.upload_from_string(data=text, content_type="text/plain")
# add the GCS file to local jsonl
with open(SAMPLE_BATCH_INPUTS, "a") as f:
f.write(
f'{{"content": "gs://{BUCKET}/hacker_news_sample/sample_{idx}.txt", "mimeType": "text/plain"}}\n'
)
"""
Explanation: Train a Model with AutoML for Text Classification
Step 1: Create the dataset in Vertex AI
From the Vertex menu click "Datasets" then click the "+Create" at the top of the window.
Create a dataset called hacker_news_titles and specify it as a Text dataset for text classification (Single-label). Click the Create button at the bottom. Note that here you should choose the region that agrees with the region you specified above; e.g. we use 'us-central1'.
Then, select the file titles_sample.csv from your GCS bucket. Importing the data can take about 10 minutes.
Step 2: Train an AutoML text model
Once the dataset is imported you can browse specific examples, or analyze label distributions. Once you are happy with what you see, proceed to train the model.
Give your model an indicative name like hacker_news_titles_automl and start training. Training may take a few hours.
Step 3: Evaluate the model
Once the model is trained, navigate to the Models tab in Vertex AI and see your model hacker_news_titles_automl. Click on the model and you can "Evaluate" how the model performed. You'll be able to see the overall precision and recall, as well as drill down to performance at the individual label level.
AutoML will also show you a confusion matrix and you can see examples where the model made a mistake for each of the labels.
Step 4: Predict with the trained AutoML model
Now you can test your model directly by entering new text in the UI and having AutoML predict the source of your snippet. First deploy your model to an endpoint. Click on Deploy to Endpoint and you'll be directed to a page to create the endpoint. Give the endpoint an indicative name, like hacker_news_model_endpoint. Keep all other options as default, and press DEPLOY. It may take a few minutes to create the endpoint and deploy the model to the endpoint.
Once the deployment has completed, you can test your model in the UI and make online predictions. Just type text into the box and click Predict. You'll see your model's predictions and the corresponding softmax values for each label:
You can also set up a batch prediction job. First we'll need to set up our files for prediction with AutoML text classification. To do this, we'll use a JSONL file to specify a list of documents to make predictions about and then store the JSONL file in a Cloud Storage bucket. A single line in an input JSONL file should have the format:
{"content": "gs://sourcebucket/datasets/texts/source_text.txt", "mimeType": "text/plain"}
We'll create the GCS .txt files and create the jsonl file below:
End of explanation
"""
!head -5 ./batch_predict_inputs.jsonl
!gsutil ls gs://$BUCKET/hacker_news_sample | head -5
"""
Explanation: Let's make sure the jsonl file was written correctly and that the bucket contains the sample .txt files:
End of explanation
"""
!gsutil cp ./batch_predict_inputs.jsonl gs://$BUCKET
"""
Explanation: We'll copy the JSONL file to our GCS bucket and kick off the batch prediction job...
End of explanation
"""
|
liganega/Gongsu-DataSci | previous/notes2017/old/NB-08-More_About_Functions.ipynb | gpl-3.0 | %load_ext tutormagic
%%tutor
L = [1, 2, 3]
a = L.pop()
L
"""
Explanation: Functions
We take a closer look at how functions are used:
the side effects of functions
how to use return and print
Side effects of functions
The list method pop() not only returns a particular value but also, as an extra behavior, deletes the returned value from the list. In other words, the state of memory changes. When a function changes the state of memory in addition to returning a value, this is called a side effect of the function.
Example:
In [25]: L = [1, 2, 3]
In [26]: L.pop()
Out[26]: 3
In [27]: L
Out[27]: [1, 2]
That is, every time the pop() method is called, the value of L stored in memory changes as well. Using Python Tutor, this can be visualized as shown below.
End of explanation
"""
sum((1, 2, 3))
"""
Explanation: Implementing the sum function
Python comes with a number of built-in functions. The built-in functions we have seen so far include abs, dir, float, format, help, id, int, len, min, max, range, and type. For more details, see the site below.
https://docs.python.org/2/library/functions.html
This time we will look at the built-in function sum. Given a list or a tuple, the sum function returns the sum of its items. Note that it only works on sequences consisting entirely of numbers.
In [25]: sum([1, 2, 3])
Out[25]: 6
End of explanation
"""
%%tutor
def sum1(xs):
s = 0
for i in xs:
s = s + i
return s
a = sum1([1,2,3])
%%tutor
def sum2(xs):
s = 0
for i in range(len(xs)):
s = s + xs.pop()
return s
a = sum2([1,2,3])
"""
Explanation: Let's implement, in two different ways, a function that returns the same value as the built-in sum function.
In the definitions below, the sum1 function is implemented without side effects, whereas the sum2 function is implemented with a side effect. The difference between the two functions can be observed using Python Tutor.
End of explanation
"""
|
dennisobrien/PublicNotebooks | fivethirtyeight/2017-07-14 How long a series.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.misc  # note: in modern SciPy, comb() has moved to scipy.special.comb
import seaborn as sns
def series_win_probability_of_length(n, m, p=0.6, verbose=False):
"""Return the probability of `n` wins for a series of length `m` games with a base win probaility of `p`.
The wins and losses can occur in any order, but we are removing the occurrences of wins that result
in a shorter series.
The shortest the series can be is `n` games while the longest is `2n-1`.
"""
assert n <= m, 'the number of wins {n} must be less than or equal to the number of games in the series {m}'.format(n=n, m=m)
assert m <= 2*n - 1, 'the series length cannot be greater than 2n-1'
losses = m - n
all_combinations = scipy.misc.comb(m, n)
# lesser_combinations = sum([scipy.misc.comb(i, n) for i in range(n, m)])
lesser_combinations = scipy.misc.comb(m-1, n)
n_combinations = all_combinations - lesser_combinations
prob = p**n * (1-p)**losses * n_combinations
if verbose:
print('{} wins out of {} games, p={}'.format(n, m, p))
print('all combinations: {}'.format(all_combinations))
print('lesser combinations: {}'.format(lesser_combinations))
print('n_combinations: {}'.format(n_combinations))
print('probability: {}'.format(prob))
return prob
def series_win_probability(n, p=0.6, verbose=False):
"""Return the probability of winning a series requiring `n` wins with a base probability of `p`.
"""
return sum([series_win_probability_of_length(n, m, p=p, verbose=verbose) for m in range(n, 2*n)])
series_win_probability_of_length(1, 1, p=0.6, verbose=True)
series_win_probability_of_length(1, 1, p=0.4, verbose=True)
"""
Explanation: How long a series to maximize expected winnings
Riddler Classic 2017-07-14
https://fivethirtyeight.com/features/can-you-eat-more-pizza-than-your-siblings/
Congratulations! The Acme Axegrinders, which you own, are the regular season champions of the National Squishyball League (NSL). Your team will now play a championship series against the Boondocks Barbarians, which had the second-best regular season record. You feel good about Acme’s chances in the series because Acme won exactly 60 percent of the hundreds of games it played against Boondocks this season. (The NSL has an incredibly long regular season.) The NSL has two special rules for the playoffs:
The owner of the top-seeded team (i.e., you) gets to select the length of the championship series in advance of the first game, so you could decide to play a single game, a best two out of three series, a three out of five series, etc., all the way up to a 50 out of 99 series.
The owner of the winning team gets \$1 million minus \$10,000 for each of the victories required to win the series, regardless of how many games the series lasts in total. Thus, if the top-seeded team’s owner selects a single-game championship, the winning owner will collect \$990,000. If he or she selects a 4 out of 7 series, the winning team’s owner will collect \$960,000. The owner of the losing team gets nothing.
Since Acme has a 60 percent chance of winning any individual game against Boondocks, Rule 1 encourages you to opt for a very long series to improve Acme’s chances of winning the series. But Rule 2 means that a long series will mean less winnings for you if Acme does take the series.
How long a series should you select in order to maximize your expected winnings? And how much money do you expect to win?
Let's first define some terms.
$S$: a series with some number of games required to win it.
$n$: the number of wins required for a series $S_n$.
$N$: the maximum number of games a series $S_n$ can go. $N = 2n-1$
$m$: the number of total games a particular instance of a series went before a winner was determined.
$w$: the number of wins in a particular instance of series.
$P_n$: the probability of winning a series requiring $n$ wins.
$P_{n,m}$: the probability of winning a series requiring $n$ wins in a total of $m$ games.
A series $S_n$ can be won in anywhere from $n$ to $2n-1$ games. We can calculate the probability of winning the overall series by adding up the probabilities of each series run length.
$P_n = \sum\limits_{m=n}^{2n-1}P_{n,m}$
But we need to take care that we don't overcount. That is, for a series $S_n$ that is won in $m$ games, we need to remove those combinations in which the $n$-th win occurs before game $m$ (because such a series would already have ended in $m-1$ or fewer games).
So there are a number of steps to calculate the overall probability.
1. Calculate the probability of winning a series requiring $n$ wins in $m$ games.
- This takes into consideration the combinations of wins and losses as well as the probabilities of wins and losses.
- Remove from this count those combinations that would have resulted in a shorter series.
2. Calculate the probability of winning the series by adding up the probabilities for all the possible series lengths.
End of explanation
"""
N = 2
p = 0.6
for m in range(N, 2*N):
print('\tP of {} wins in {} games: {}'.format(N, m, series_win_probability_of_length(N, m, p=p, verbose=True)))
N = 2
p = 0.4
for m in range(N, 2*N):
print('\tP of {} wins in {} games: {}'.format(N, m, series_win_probability_of_length(N, m, p=p, verbose=True)))
"""
Explanation: So this is a good sanity check.
Let's check that the probabilities for the series going to $m$ games make sense.
End of explanation
"""
N = 2
p = 0.6
print('\t{}'.format(series_win_probability(N, p=p)))
print('\t{}'.format(series_win_probability(N, p=1-p)))
print('\ntotal probability: {}'.format(series_win_probability(N, p=p) + series_win_probability(N, p=1-p)))
N = 3
p = 0.6
print('\t{}'.format(series_win_probability(N, p=p)))
print('\t{}'.format(series_win_probability(N, p=1-p)))
print('\ntotal probability: {}'.format(series_win_probability(N, p=p) + series_win_probability(N, p=1-p)))
N = 50
p = 0.6
print('\t{}'.format(series_win_probability(N, p=p)))
print('\t{}'.format(series_win_probability(N, p=1-p)))
print('\ntotal probability: {}'.format(series_win_probability(N, p=p) + series_win_probability(N, p=1-p)))
"""
Explanation: Let's also check that all the probabilities sum to 1.
End of explanation
"""
def expected_value(n, p=0.6, total_payout=10**6, win_cost=10**4, verbose=False):
"""Get the expected value for winning a series given the number of games required $n$
and the win probability $p$.
"""
p = series_win_probability(n, p=p, verbose=verbose)
payout = total_payout - n * win_cost
val = p * payout
return val
expected_value(1)
"""
Explanation: So now that we have the probability of the team winning the series given the number of required wins $n$, we can calculate the expected value $E$.
End of explanation
"""
p = 0.6
total_payout=10**6
win_cost = 10**4
wins_required = np.arange(1, 51)
# series_win_p = series_win_probability(wins_required, p=p, verbose=False)
series_win_p = [series_win_probability(w, p=p, verbose=False) for w in wins_required]
payout = [total_payout - w * win_cost for w in wins_required]
e_value = [expected_value(w, p=p) for w in wins_required]
df = pd.DataFrame(
data={
'series_win_probability': series_win_p,
'payout': payout,
'expected_value': e_value,
'wins_required': wins_required,
}
)
df.head()
df.plot.line(x='wins_required', y=['payout', 'expected_value'])
df.iloc[df['expected_value'].argmax()]
"""
Explanation: Calculate all the probabilities and payouts for series from 1 to 50 games. We'll put it in a DataFrame for convenient plotting.
End of explanation
"""
def calculate_and_plot(n_max=50, p=0.6, total_payout=10**6, win_cost=10**4):
"""Calculate and plot the probabilities and payouts for various lengths of series.
"""
wins_required = np.arange(1, n_max+1)
# series_win_p = series_win_probability(wins_required, p=p, verbose=False)
series_win_p = [series_win_probability(w, p=p, verbose=False) for w in wins_required]
payout = [total_payout - w * win_cost for w in wins_required]
e_value = [expected_value(w, p=p) for w in wins_required]
df = pd.DataFrame(
data={
'series_win_probability': series_win_p,
'payout': payout,
'expected_value': e_value,
'wins_required': wins_required,
}
)
max_row = df.iloc[df['expected_value'].argmax()]
max_index = max_row['wins_required']
max_value = max_row['expected_value']
fig, (ax0, ax1, ax2) = plt.subplots(nrows=1, ncols=3, figsize=(15,4))
df.plot.line(x='wins_required', y=['payout', 'expected_value'], ax=ax0)
ax0.axhline(y=max_value, linestyle='dotted', color='black', alpha=0.5)
ax0.axvline(x=max_index, linestyle='dotted', color='black', alpha=0.5)
ax0.set_title('Maximum payout and expected value', fontsize=14)
ax0.set_xlabel('number of games required to win the series')
df.plot.line(x='wins_required', y='series_win_probability', ax=ax1)
ax1.set_title('Probability of winning the series', fontsize=14)
ax1.set_xlabel('number of games required to win the series')
series_lengths = np.arange(n_max, 2*n_max)
p_win_at_length = [series_win_probability_of_length(n_max, m, p=p) for m in series_lengths]
ax2.plot(series_lengths, p_win_at_length)
ax2.set_title('Probability of winning the {}-game series'.format(n_max), fontsize=14)
ax2.set_xlabel('total number of games played')
calculate_and_plot(n_max=50, p=0.6)
"""
Explanation: Let's put this all together in an interactive form.
End of explanation
"""
|
JENkt4k/pynotes-general | D3Notes/D3.js Workbook.ipynb | gpl-3.0 | %%writefile tutorial.bar.html
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart div {
font: 10px sans-serif;
background-color: steelblue;
text-align: right;
padding: 3px;
margin: 1px;
color: white;
}
</style>
<div class="chart"></div>
<script src="https://d3js.org/d3.v3.min.js"></script>
<script>
var data = [4, 8, 15, 16, 23, 42];
var x = d3.scale.linear()
.domain([0, d3.max(data)])
.range([0, 420]);
d3.select(".chart")
.selectAll("div")
.data(data)
.enter().append("div")
.style("width", function(d) { return x(d) + "px"; })
.text(function(d) { return d; });
</script>
from IPython.display import IFrame
IFrame("tutorial.bar.html", width=850, height=150)
"""
Explanation: D3 Notes
Working through the tutorials from https://d3js.org/
Note: '%%HTML' causes a CORS exception. To fix this, simply use '%%writefile [filename]' and then the following Python code to load the file in an IFrame. (Optionally, you can also host the d3 '.js' file locally.)
python
from IPython.display import IFrame
IFrame("[filename]", width=850, height=150)
Tutorial #1 - Make a simple bar Chart
tutorial here
source code
End of explanation
"""
%%HTML
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart div {
font: 10px sans-serif;
background-color: steelblue;
text-align: right;
padding: 3px;
margin: 1px;
color: white;
}
</style>
<div class="chart"></div>
<script src="d3.v3.min.js"></script>
<script>
var data = [4, 8, 15, 16, 23, 42];
var x = d3.scale.linear()
.domain([0, d3.max(data)])
.range([0, 420]);
d3.select(".chart")
.selectAll("div")
.data(data)
.enter().append("div")
.style("width", function(d) { return x(d) + "px"; })
.text(function(d) { return d; });
</script>
"""
Explanation: Optional - Copy to local directory
```bash
wget https://d3js.org/d3.v3.js
or
wget https://d3js.org/d3.v3.min.js
```
more examples
Then change:
```html
<script src="https://d3js.org/d3.v3.min.js"></script>
orhtml
<script src="//d3js.org/d3.v3.min.js"></script>
```
to
```html
<script src="d3.v3.min.js"></script>
```
See below
End of explanation
"""
%%writefile tutorial.bar2.html
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.chart rect {
fill: steelblue;
}
.chart text {
fill: white;
font: 10px sans-serif;
text-anchor: end;
}
</style>
<svg class="chart"></svg>
<script src="//d3js.org/d3.v3.min.js"></script>
<script>
var width = 420,
barHeight = 20;
var x = d3.scale.linear()
.range([0, width]);
var chart = d3.select(".chart")
.attr("width", width);
d3.tsv("data.tsv", type, function(error, data) {
x.domain([0, d3.max(data, function(d) { return d.value; })]);
chart.attr("height", barHeight * data.length);
var bar = chart.selectAll("g")
.data(data)
.enter().append("g")
.attr("transform", function(d, i) { return "translate(0," + i * barHeight + ")"; });
bar.append("rect")
.attr("width", function(d) { return x(d.value); })
.attr("height", barHeight - 1);
bar.append("text")
.attr("x", function(d) { return x(d.value) - 3; })
.attr("y", barHeight / 2)
.attr("dy", ".35em")
.text(function(d) { return d.value; });
});
function type(d) {
d.value = +d.value; // coerce to number
return d;
}
</script>
%%writefile data.tsv
name value
Locke 4
Reyes 8
Ford 15
Jarrah 16
Shephard 23
Kwon 42
from IPython.display import IFrame
IFrame("tutorial.bar2.html", width=850, height=150)
"""
Explanation: Tutorial #2 - Load External Data
tutorial
source code
Load data from and external '.tsv' file
from source
"To use this data in a web browser, we need to download the file from a web server and then parse it, which converts the text of the file into usable JavaScript objects. Fortunately, these two tasks can be performed by a single function, d3.tsv.
Loading data introduces a new complexity: downloads are asynchronous. When you call d3.tsv, it returns immediately while the file downloads in the background. At some point in the future when the download finishes, your callback function is invoked with the new data, or an error if the download failed. In effect your code is evaluated out of order:"
```javascript
// 1. Code here runs first, before the download starts.
d3.tsv("data.tsv", function(error, data) {
// 3. Code here runs last, after the download finishes.
});
// 2. Code here runs second, while the file is downloading.
```
End of explanation
"""
%%writefile tutorial.bar3.vertical.html
<!DOCTYPE html>
<meta charset="utf-8">
<style>
.bar {
fill: steelblue;
}
.bar:hover {
fill: brown;
}
.axis--x path {
display: none;
}
</style>
<svg width="960" height="500"></svg>
<script src="https://d3js.org/d3.v4.min.js"></script>
<script>
var svg = d3.select("svg"),
margin = {top: 20, right: 20, bottom: 30, left: 40},
width = +svg.attr("width") - margin.left - margin.right,
height = +svg.attr("height") - margin.top - margin.bottom;
var x = d3.scaleBand().rangeRound([0, width]).padding(0.1),
y = d3.scaleLinear().rangeRound([height, 0]);
var g = svg.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
d3.tsv("data.tutorial3.tsv", function(d) {
d.frequency = +d.frequency;
return d;
}, function(error, data) {
if (error) throw error;
x.domain(data.map(function(d) { return d.letter; }));
y.domain([0, d3.max(data, function(d) { return d.frequency; })]);
g.append("g")
.attr("class", "axis axis--x")
.attr("transform", "translate(0," + height + ")")
.call(d3.axisBottom(x));
g.append("g")
.attr("class", "axis axis--y")
.call(d3.axisLeft(y).ticks(10, "%"))
.append("text")
.attr("transform", "rotate(-90)")
.attr("y", 6)
.attr("dy", "0.71em")
.attr("text-anchor", "end")
.text("Frequency");
g.selectAll(".bar")
.data(data)
.enter().append("rect")
.attr("class", "bar")
.attr("x", function(d) { return x(d.letter); })
.attr("y", function(d) { return y(d.frequency); })
.attr("width", x.bandwidth())
.attr("height", function(d) { return height - y(d.frequency); });
});
</script>
%%writefile data.tutorial3.tsv
letter frequency
A .08167
B .01492
C .02782
D .04253
E .12702
F .02288
G .02015
H .06094
I .06966
J .00153
K .00772
L .04025
M .02406
N .06749
O .07507
P .01929
Q .00095
R .05987
S .06327
T .09056
U .02758
V .00978
W .02360
X .00150
Y .01974
Z .00074
from IPython.display import IFrame
IFrame("tutorial.bar3.vertical.html", width=970, height=530)
"""
Explanation: Tutorial 3 - Rotating Columns
tutorial
source code
Important notes
Fixed width vs calculated width
from source
"We previously multiplied the var barHeight by the index of each data point (0, 1, 2, …) to produce fixed-height bars. The resulting chart’s height thus depended on the size of the dataset. But here the opposite behavior is desired: the chart width is fixed and the bar width variable. So rather than fix the barHeight, now we compute the barWidth by dividing the available chart width by the size of the dataset, data.length."
...
javascript
d3.tsv("data.tsv", type, function(error, data) {
y.domain([0, d3.max(data, function(d) { return d.value; })]);
javascript
var barWidth = width / data.length;
...
Loading Labels
...
javascript
chart.append("g")
.attr("class", "y axis")
.call(yAxis)
.append("text")
.attr("transform", "rotate(-90)")
.attr("y", 6)
.attr("dy", ".71em")
.style("text-anchor", "end")
.text("Frequency");
...
"Unit-appropriate number formatting also improves legibility by tailoring the display to your data. Since our chart displays relative frequency, percentages are more appropriate than the default behavior which shows a number between 0 and 1. A format string as the second argument to axis.ticks will customize the tick formatting, and the scale will automatically choose a precision appropriate to the tick interval."
...
javascript
var yAxis = d3.svg.axis()
.scale(y)
.orient("left")
.ticks(10, "%");
...
End of explanation
"""
# source: https://bost.ocks.org/mike/constancy/
from IPython.display import IFrame
IFrame("https://bost.ocks.org/mike/constancy/", width=970, height=530)
%pwd
from IPython.display import IFrame
IFrame("https://bost.ocks.org/mike/join/", width=970, height=530)
"""
Explanation: Tutorial 4 - Selecting Elements, Entering and Exiting
Selecting Elements
Entering Elements
To size, move, or otherwise style the new elements, operate on the enter selection.
from source
"...By appending to the enter selection, we can create new circles for any missing data."
```javascript
var svg = d3.select("svg");
var circle = svg.selectAll("circle")
.data([32, 57, 112, 293]);
var circleEnter = circle.enter().append("circle");
```
Modify attributes
javascript
circleEnter.attr("cy", 60);
circleEnter.attr("cx", function(d, i) { return i * 100 + 30; });
circleEnter.attr("r", function(d) { return Math.sqrt(d); });
Exiting Elements
from source
"... you have too many existing elements, and you want to remove some of them. Again you can select nodes and remove them manually, but the exit selection computed by a data join is more powerful."
```javascript
var circle = svg.selectAll("circle")
.data([32, 57]);
...
circle.exit().remove();
```
End of explanation
"""
%%HTML
<style>
@import url(../style.css?aea6f0a);
circle {
fill: none;
fill-opacity: .2;
stroke: black;
stroke-width: 1.5px;
}
</style>
<svg width="720" height="240">
<g transform="translate(0,128)">
<g transform="translate(300)">
<circle r="110" style="fill: rgb(49, 130, 189);"></circle>
<text y="-120" dy=".35em" text-anchor="middle" style="font-weight: bold;">Data</text>
<text x="-50" dy=".35em" text-anchor="middle">Enter</text>
</g>
<text x="360" dy=".35em" text-anchor="middle">Update</text>
<g transform="translate(420)">
<circle r="110" style="fill: rgb(230, 85, 13);"></circle>
<text y="-120" dy=".35em" text-anchor="middle" style="font-weight: bold;">Elements</text>
<text x="50" dy=".35em" text-anchor="middle">Exit</text>
</g>
</g>
</svg>
"""
Explanation: Nice Venn diagram, nothing D3 specific though
End of explanation
"""
|
jinzishuai/learn2deeplearn | deeplearning.ai/C5.SequenceModel/Week2_NLP_WordEmbeddings/assignment/Word Vector Representation/Operations on word vectors - v1.ipynb | gpl-3.0 | import numpy as np
from w2v_utils import *
"""
Explanation: Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
After this assignment you will be able to:
Load pre-trained word vectors, and measure similarity using cosine similarity
Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
End of explanation
"""
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
"""
Explanation: Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.
End of explanation
"""
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = None
# Compute the L2 norm of u (≈1 line)
norm_u = None
# Compute the L2 norm of v (≈1 line)
norm_v = None
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = None
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
"""
Explanation: You've loaded:
- words: set of words in the vocabulary.
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> Figure 1: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
Exercise: Implement the function cosine_similarity() to evaluate similarity between word vectors.
Reminder: The norm of $u$ is defined as $||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
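As a sanity check of formula (1) before you fill in the graded cell, here is a stand-alone sketch on toy vectors (illustrative only; the exercise asks you to write your own version in the graded cell):

```python
import numpy as np

def cosine_similarity_sketch(u, v):
    # formula (1): dot product over the product of the L2 norms
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 3.0])
print(cosine_similarity_sketch(u, 2 * u))  # parallel vectors: ~1.0
print(cosine_similarity_sketch(u, -u))     # opposite vectors: ~-1.0
```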
End of explanation
"""
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = None
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the combined_vector and the current word (≈1 line)
cosine_sim = None
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if None > None:
max_cosine_sim = None
best_word = None
### END CODE HERE ###
return best_word
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"a is to b as c is to ____"</font>. An example is <font color='brown'> 'man is to woman as king is to queen' </font>. In detail, we are trying to find a word d, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
Exercise: Complete the code below to be able to perform word analogies!
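Before implementing it over the full GloVe map, the search can be pictured on a tiny hand-made vocabulary (the 2-D vectors below are invented for illustration, not real GloVe values):

```python
import numpy as np

def cos_sim(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# invented 2-D embeddings: axis 0 ~ gender, axis 1 ~ royalty
toy_map = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
}

def toy_analogy(a, b, c, vec_map):
    e_a, e_b, e_c = vec_map[a], vec_map[b], vec_map[c]
    best_word, best_sim = None, -100
    for w, e_w in vec_map.items():
        if w in (a, b, c):
            continue  # skip the input words, as in the graded loop
        sim = cos_sim(e_b - e_a, e_w - e_c)
        if sim > best_sim:
            best_sim, best_word = sim, w
    return best_word

print(toy_analogy("man", "woman", "king", toy_map))  # -> queen
```

The graded function follows the same pattern, looping over the full word_to_vec_map.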
End of explanation
"""
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
"""
Explanation: Run the cell below to test your code, this may take 1-2 minutes.
End of explanation
"""
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you take a look too at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word woman, and $e_{man}$ is the word vector corresponding to the word man. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
End of explanation
"""
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
"""
Explanation: Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
End of explanation
"""
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
"""
Explanation: As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
End of explanation
"""
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = None
# Compute e_biascomponent using the formula give above. (≈ 1 line)
e_biascomponent = None
# Neutralize e by substracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = None
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
"""
Explanation: Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to Bolukbasi et al., 2016. Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> Figure 2: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
Exercise: Implement neutralize() to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e*g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
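A stand-alone numeric check of formulas (2) and (3) on toy vectors (an illustrative sketch, separate from the graded cell):

```python
import numpy as np

def neutralize_sketch(e, g):
    # formula (2): projection of e onto the bias direction g
    e_biascomponent = (np.dot(e, g) / np.sum(g * g)) * g
    # formula (3): subtract the projection
    return e - e_biascomponent

g = np.array([1.0, 0.0, 0.0])   # toy bias direction
e = np.array([0.5, 2.0, -1.0])  # toy word vector
e_debiased = neutralize_sketch(e, g)
print(np.dot(e_debiased, g))    # ~0: nothing left along g
```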
End of explanation
"""
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = None
e_w1, e_w2 = None
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = None
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = None
mu_orth = None
# Step 4: Set e1_orth and e2_orth to be equal to mu_orth (≈2 lines)
e1_orth = None
e2_orth = None
# Step 5: Adjust the Bias part of u1 and u2 using the formulas given in the figure above (≈2 lines)
e_w1B = None
e_w2B = None
# Step 6: Debias by equalizing u1 and u2 to the sum of their projections (≈2 lines)
e1 = None
e2 = None
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
"""
Explanation: Expected Output: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:**
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:**
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralization to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equi-distant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu * \text{bias\_axis}}{||\text{bias\_axis}||_2^2} *\text{bias\_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$e_{w1B} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{(e_{\text{w1}} - \mu_{\perp}) - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B|} \tag{7}$$
$$e_{w2B} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{(e_{\text{w2}} - \mu_{\perp}) - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B|} \tag{8}$$
$$e_1 = e_{w1B} + \mu_{\perp} \tag{9}$$
$$e_2 = e_{w2B} + \mu_{\perp} \tag{10}$$
Exercise: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
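One consistent numeric reading of equations (4)-(10), checked on invented 2-D vectors (an illustrative sketch; the graded cell walks through the same steps one line at a time):

```python
import numpy as np

def equalize_sketch(e_w1, e_w2, axis):
    def proj(v):  # projection of v onto the bias axis
        return (np.dot(v, axis) / np.sum(axis * axis)) * axis
    mu = (e_w1 + e_w2) / 2.0                       # (4)
    mu_B = proj(mu)                                # (5)
    mu_orth = mu - mu_B                            # (6)
    scale = np.sqrt(abs(1 - np.sum(mu_orth * mu_orth)))
    e_w1B = scale * (proj(e_w1) - mu_B) / np.linalg.norm((e_w1 - mu_orth) - mu_B)  # (7)
    e_w2B = scale * (proj(e_w2) - mu_B) / np.linalg.norm((e_w2 - mu_orth) - mu_B)  # (8)
    return e_w1B + mu_orth, e_w2B + mu_orth        # (9), (10)

axis = np.array([1.0, 0.0])
e1, e2 = equalize_sketch(np.array([1.0, 0.5]), np.array([-0.8, 0.4]), axis)
c1 = np.dot(e1, axis) / (np.linalg.norm(e1) * np.linalg.norm(axis))
c2 = np.dot(e2, axis) / (np.linalg.norm(e2) * np.linalg.norm(axis))
print(c1, c2)  # equal magnitude, opposite sign
```

After equalization the two vectors have cosine similarities with the bias axis that are equal in magnitude and opposite in sign.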
End of explanation
"""
|
DaanVanHauwermeiren/data-transformation-analysis | envision/MIME_to_csv.ipynb | mit | import pandas as pd
import os
import sys
import mimetypes
import email
import glob
"""
Explanation: Table of Contents
<p>
converting htm files in MIME format to csv files
load libraries
End of explanation
"""
mht_files = glob.glob(os.path.join(os.path.curdir, '*.mht'))
"""
Explanation: ref: http://stackoverflow.com/questions/7446284/read-mime-message-stored-in-text-file-with-python
Store your mht files in the same folder as this notebook.
The cell below gets the file names.
End of explanation
"""
for filepath in mht_files:
# get the name of the file, e.g. ./31521derp.mht -> 31521derp
filename_base = os.path.split(filepath)[-1].split('.mht')[0]
# open mht file
with open(filepath, 'r') as f:
msg = email.message_from_file(f)
# loop over the parts in the file
for i, part in enumerate(msg.walk(), start=1):
print('chunk %g is type: '%i + part.get_content_type())
if part.get_content_maintype() == 'multipart':
print('content type is multipart, skipping chunk %g'%i)
continue
ext = mimetypes.guess_extension(part.get_content_type())
filename = filename_base + '_part-%03d%s'%(i, ext)
filename = os.path.join(os.path.curdir, filename)
print(filename)
with open(filename, 'wb') as fp:
fp.write(part.get_payload(decode=True))
"""
Explanation: the next cell parses the mht-files, splits them by content type (html, jpg, etc.) and writes the output of the chunks to the hard disk
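The msg.walk() traversal used here can be illustrated on a tiny in-memory MIME message (the message text is invented for the example):

```python
import email

raw = (
    "MIME-Version: 1.0\n"
    "Content-Type: multipart/related; boundary=BOUND\n"
    "\n"
    "--BOUND\n"
    "Content-Type: text/html\n"
    "\n"
    "<html><body>hello</body></html>\n"
    "--BOUND--\n"
)
msg = email.message_from_string(raw)
# walk() yields the multipart container first, then each part
print([part.get_content_type() for part in msg.walk()])
# -> ['multipart/related', 'text/html']
```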
End of explanation
"""
html_files = glob.glob(os.path.join(os.path.curdir, '*part*.htm*'))
html_files
"""
Explanation: get the names of the stripped files with only html content
End of explanation
"""
for filepath in html_files:
filename_base = os.path.split(filepath)[-1].split('_')[0]
# read in html, result is a list of pandas dataframes
input_html = pd.read_html(filepath, thousands='')
# the data of interest appears every three dataframes, starting from index
# two, the end is at -6 to clip the unnecessary data at the end.
# processed_html = input_html[2:-6:3]
# this seems to work better, because it checks if a decimal separator (,)
# exists in the string
processed_html = [x for x in input_html if ',' in str(x[0][0])]
# remove the index from the dataframes
processed_html_values = [x.iloc[0] for x in processed_html]
# concat the dataframes
df_processed_data = pd.concat(processed_html_values, axis=1)
# DECREPATED: index is only needed if you need the first tabel.
# add the index: the values of the first column of any (here the first) df
# in processed_html
#df_processed_data.index = processed_html[0][0].values
# write to file:
#filepath_output = os.path.join(os.path.curdir, filename_base + '.csv')
#df_processed_data.to_csv(filepath_output, encoding='utf-8')
# write transposed to file:
filepath_output = os.path.join(os.path.curdir, filename_base + '_transposed.csv')
df_processed_data.T.to_csv(filepath_output, encoding='utf-8')
"""
Explanation: loop over files, clip the unnecessary data and store the csv files
End of explanation
"""
|
UDST/activitysim | activitysim/examples/example_mtc/notebooks/getting_started.ipynb | bsd-3-clause | !pip install activitysim
"""
Explanation: Getting Started with ActivitySim
This getting started guide is a Jupyter notebook. It is an interactive Python 3 environment that describes how to set up, run, and begin to analyze the results of ActivitySim modeling scenarios. It is assumed users of ActivitySim are familiar with the basic concepts of activity-based modeling. This tutorial covers:
Installation and setup
Setting up and running a base model
Inputs and outputs
Setting up and running an alternative scenario
Comparing results
Next steps and further reading
This notebook depends on Anaconda Python 3 64bit.
Install ActivitySim
The first step is to install activitysim from pypi (the Python package index). It also installs dependent packages such as tables for reading/writing HDF5, openmatrix for reading/writing OMX matrix, and pyyaml for yaml settings files.
End of explanation
"""
!activitysim create -e example_mtc -d example
%cd example
"""
Explanation: Creating an Example Setup
The example is included in the package and can be copied to a user defined location using the package's command line interface. The example includes all model steps. The command below copies the example_mtc example to a new example folder. It also changes into the new example folder so we can run the model from there.
End of explanation
"""
!activitysim run -c configs -d data -o output
"""
Explanation: Run the Example
The code below runs the example, which runs in a few minutes. The example consists of 100 synthetic households and the first 25 zones in the example model region. The full example (example_mtc_full) can be created and downloaded from the activitysim resources repository using activitysim's create command above. As the model runs, it logs information to the screen.
To run the example, use activitysim's built-in run command. As shown in the script help, the default settings assume a configs, data, and output folder in the current directory.
End of explanation
"""
import os
for root, dirs, files in os.walk(".", topdown=False):
for name in files:
print(os.path.join(root, name))
for name in dirs:
print(os.path.join(root, name))
"""
Explanation: Inputs and Outputs Overview
An ActivitySim model requires:
Configs: settings, model step expressions files, etc.
settings.yaml - main settings file for running the model
network_los.yaml - network level-of-service (skims) settings file
[model].yaml - configuration file for the model step (such as auto ownership)
[model].csv - expressions file for the model step
Data: input data - input data tables and skims
land_use.csv - zone data file
households.csv - synthetic households
persons.csv - synthetic persons
skims.omx - all skims in one open matrix file
Output: output data - output data, tables, tracing info, etc.
pipeline.h5 - data pipeline database file (all tables at each model step)
final_[table].csv - final household, person, tour, trip CSV tables
activitysim.log - console log file
trace.[model].csv - trace calculations for select households
simulation.py: main script to run the model
Run the command below to list the example folder contents.
End of explanation
"""
print("Load libraries.")
import pandas as pd
import openmatrix as omx
import yaml
import glob
print("Display the settings file.\n")
with open(r'configs/settings.yaml') as file:
file_contents = yaml.load(file, Loader=yaml.FullLoader)
print(yaml.dump(file_contents))
print("Display the network_los file.\n")
with open(r'configs/network_los.yaml') as file:
file_contents = yaml.load(file, Loader=yaml.FullLoader)
print(yaml.dump(file_contents))
print("Input land_use. Primary key: TAZ. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/land_use.csv")
print("Input households. Primary key: HHID. Foreign key: TAZ. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/households.csv")
print("Input persons. Primary key: PERID. Foreign key: household_id. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/persons.csv")
print("Skims. All skims are input via one OMX file. Required skims depend on the downstream submodels (and expression files).\n")
print(omx.open_file("data/skims.omx"))
"""
Explanation: Inputs
Run the commands below to:
* Load required Python libraries for reading data
* Display the settings.yaml, including the list of models to run
* Display the land_use, households, and persons tables
* Display the skims
End of explanation
"""
print("The output pipeline contains the state of each table after each model step.")
pipeline = pd.io.pytables.HDFStore('output/pipeline.h5')
pipeline.keys()
print("Households table after trip mode choice, which contains several calculated fields.")
pipeline['/households/joint_tour_frequency'] #watch out for key changes if not running all models
print("Final output households table to written to CSV, which is the same as the table in the pipeline.")
pd.read_csv("output/final_households.csv")
print("Final output persons table to written to CSV, which is the same as the table in the pipeline.")
pd.read_csv("output/final_persons.csv")
print("Final output tours table to written to CSV, which is the same as the table in the pipeline. Joint tours are stored as one record.")
pd.read_csv("output/final_tours.csv")
print("Final output trips table to written to CSV, which is the same as the table in the pipeline. Joint trips are stored as one record")
pd.read_csv("output/final_trips.csv")
"""
Explanation: Outputs
Run the commands below to:
* Display the output household and person tables
* Display the output tour and trip tables
End of explanation
"""
print("Final output accessibility table to written to CSV.")
pd.read_csv("output/final_accessibility.csv")
print("Joint tour participants table, which contains the person ids of joint tour participants.")
pipeline['joint_tour_participants/joint_tour_participation']
print("Destination choice sample logsums table for school location if want_dest_choice_sample_tables=True.")
if '/school_location_sample/school_location' in pipeline:
pipeline['/school_location_sample/school_location']
"""
Explanation: Other notable outputs
End of explanation
"""
print("trip matrices by time of day for assignment")
output_files = os.listdir("output")
for output_file in output_files:
if "omx" in output_file:
print(output_file)
"""
Explanation: Trip matrices
A write_trip_matrices step at the end of the model adds boolean indicator columns to the trip table in order to assign each trip into a trip matrix and then aggregates the trip counts and writes OD matrices to OMX (open matrix) files. The coding of trips into trip matrices is done via annotation expressions.
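Conceptually, writing a trip matrix is a group-by of the trip list into origin-destination counts; a toy NumPy sketch of that aggregation (invented data, not ActivitySim's internal code):

```python
import numpy as np

n_zones = 3
origins = np.array([0, 0, 1, 2, 2, 2])  # toy trip origins
dests = np.array([1, 1, 2, 0, 0, 1])    # toy trip destinations

od = np.zeros((n_zones, n_zones))
np.add.at(od, (origins, dests), 1)      # accumulate one count per trip
print(od)
```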
End of explanation
"""
print("All trace files.\n")
glob.glob("output/trace/*.csv")
print("Trace files for auto ownership.\n")
glob.glob("output/trace/auto_ownership*.csv")
print("Trace chooser data for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.choosers.csv")
print("Trace utility expression values for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.eval_utils.expression_values.csv")
print("Trace alternative total utilities for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.utilities.csv")
print("Trace alternative probabilities for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.probs.csv")
print("Trace random number for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.rands.csv")
print("Trace choice for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.choices.csv")
"""
Explanation: Tracing calculations
Tracing calculations is an important part of model setup and debugging. Oftentimes data issues, such as missing values in input data and/or incorrect submodel expression files, do not reveal themselves until a downstream submodel fails. There are two types of tracing in ActivitySim: household and origin-destination (OD) pair. If a household trace ID is specified via trace_hh_id, then ActivitySim will output a comprehensive set of trace files for all calculations for all household members. These trace files are listed below and explained.
End of explanation
"""
!activitysim run -c configs_mp -c configs -d data -o output
"""
Explanation: Run the Multiprocessor Example
The command below runs the multiprocessor example, which runs in a few minutes. It uses settings inheritance to override settings in the configs folder with settings in the configs_mp folder. This allows for re-using expression files and settings files in the single and multiprocessed setups. The multiprocessed example uses the following additional settings:
```
num_processes: 2
chunk_size: 0
multiprocess_steps:
- name: mp_initialize
begin: initialize_landuse
- name: mp_households
begin: school_location
slice:
tables:
- households
- persons
- name: mp_summarize
begin: write_data_dictionary
```
In brief, num_processes specifies the number of processors to use and a chunk_size of 0 means ActivitySim is free to use all the available RAM if needed. The multiprocess_steps specifies the beginning, middle, and end steps in multiprocessing. The mp_initialize step is single processed because there is no slice setting. It starts with the initialize_landuse submodel and runs until the submodel identified by the next multiprocess submodel starting point, school_location. The mp_households step is multiprocessed and the households and persons tables are sliced and allocated to processes using the chunking settings. The rest of the submodels are run multiprocessed until the final multiprocess step. The mp_summarize step is single processed because there is no slice setting and it writes outputs. See multiprocessing and chunk_size for more information.
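The household slicing in mp_households can be pictured as dealing household IDs out to processes (a toy sketch of the idea, not ActivitySim's actual apportioning logic):

```python
households = list(range(10))  # toy household ids
num_processes = 2
# deal households out so each process gets a roughly equal share
slices = [households[i::num_processes] for i in range(num_processes)]
print(slices)  # [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
```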
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/gls.ipynb | bsd-3-clause | import statsmodels.api as sm
"""
Explanation: Generalized Least Squares
End of explanation
"""
data = sm.datasets.longley.load(as_pandas=False)
data.exog = sm.add_constant(data.exog)
print(data.exog[:5])
"""
Explanation: The Longley dataset is a time series dataset:
End of explanation
"""
ols_resid = sm.OLS(data.endog, data.exog).fit().resid
"""
Explanation: Let's assume that the data is heteroskedastic and that we know
the nature of the heteroskedasticity. We can then define
sigma and use it to give us a GLS model
First we will obtain the residuals from an OLS fit
End of explanation
"""
resid_fit = sm.OLS(ols_resid[1:], sm.add_constant(ols_resid[:-1])).fit()
print(resid_fit.tvalues[1])
print(resid_fit.pvalues[1])
"""
Explanation: Assume that the error terms follow an AR(1) process with a trend:
$\epsilon_i = \beta_0 + \rho\epsilon_{i-1} + \eta_i$
where $\eta \sim N(0,\Sigma^2)$
and that $\rho$ is simply the correlation of the residuals. A consistent estimator for $\rho$ is to regress the residuals on the lagged residuals:
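The lag-1 regression idea can be verified on simulated data: generate an AR(1) series with a known $\rho$ and check that the regression slope recovers it (a self-contained sketch with an arbitrary seed and sample size):

```python
import numpy as np

rng = np.random.default_rng(0)
true_rho, n = 0.7, 5000
e = np.zeros(n)
for t in range(1, n):
    e[t] = true_rho * e[t - 1] + rng.normal()

# regress e_t on a constant and e_{t-1}; the slope estimates rho
X = np.column_stack([np.ones(n - 1), e[:-1]])
beta, *_ = np.linalg.lstsq(X, e[1:], rcond=None)
print(beta[1])  # close to 0.7
```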
End of explanation
"""
rho = resid_fit.params[1]
"""
Explanation: While we do not have strong evidence that the errors follow an AR(1)
process, we continue.
End of explanation
"""
from scipy.linalg import toeplitz
toeplitz(range(5))
order = toeplitz(range(len(ols_resid)))
"""
Explanation: As we know, an AR(1) process means that near neighbors have a stronger
relation, so we can encode this structure using a Toeplitz matrix
End of explanation
"""
sigma = rho**order
gls_model = sm.GLS(data.endog, data.exog, sigma=sigma)
gls_results = gls_model.fit()
"""
Explanation: so that our error covariance structure is actually rho**order
which defines an autocorrelation structure
End of explanation
"""
glsar_model = sm.GLSAR(data.endog, data.exog, 1)
glsar_results = glsar_model.iterative_fit(1)
print(glsar_results.summary())
"""
Explanation: Of course, the exact rho in this instance is not known, so it might make more sense to use feasible GLS, which currently only has experimental support.
We can use the GLSAR model with one lag, to get to a similar result:
End of explanation
"""
print(gls_results.params)
print(glsar_results.params)
print(gls_results.bse)
print(glsar_results.bse)
"""
Explanation: Comparing gls and glsar results, we see that there are some small
differences in the parameter estimates and the resulting standard
errors of the parameter estimates. This might be due to numerical
differences in the algorithms, e.g. the treatment of initial conditions,
given the small number of observations in the Longley dataset.
End of explanation
"""
|
terencezl/scientific-python-walkabout | scientific-python-walkabout.ipynb | mit | # copying a referecne vs copying as a new list
# copying a refernce
a = [3,4,5]
b = a
print(b is a)
b[2] = 555
print(a, b)
# slice copying as a new list
a = [3,4,5]
b = a[:] # meaning slicing all
print(b is a)
b[2] = 666
print(a, b)
# removing something from a list
# wrong
a = [1,2,3,3,3,3,4]
for i in a:
if i == 3:
a.remove(i)
print(a)
# right, because a[:] creates a copy
a = [1,2,3,3,3,3,4]
for i in a[:]:
if i == 3:
a.remove(i)
print(a)
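An idiomatic alternative (a suggested addition, not part of the original demo) is to build a filtered list with a comprehension instead of removing elements while iterating:

```python
a = [1, 2, 3, 3, 3, 3, 4]
# keep only the elements we want, rather than mutating during iteration
a = [i for i in a if i != 3]
print(a)  # [1, 2, 4]
```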
# iterate a list as index and value
a = [10,20,30,40]
for idx, value in enumerate(a):
print(idx, value)
# iterate a dict as key and value
b = {'x': 1, 'y': 2, 'z': 3}
for key, value in b.items():
print(key, value)
# use zip to iterate two lists
a = [1,2,3]
b = [4,5,6]
for i, j in zip(a, b):
print(i, j)
"""
Explanation: Scientific Python Walkabout
To use the most up-to-date version of this notebook, go to a safe and quiet directory and type in the command line interface
git clone https://github.com/terencezl/scientific-python-walkabout
and
ipython notebook
and click your way into it.
Overview
We are going to get to know the scientific python stack (only a little bit):
NumPy (data analysis foundation, your pythonic "MATLAB")
SciPy (more functionalities, e.g. integration, optimization, Fourier transforms, signal processing, linear algebra)
matplotlib (2D plotting tool, some 3D capabilities)
pandas (statistics, pretty viewing and flexible in/output, your pythonic "R")
(Optional) SymPy (symbolic computation, your pythonic "Mathematica")
Installation
If you are using macOS, your system already comes with a distribution of Python (but it is usually older than the current version, and is difficult to update). There are other routes, such as downloading the official Python installation package (this applies to Windows users too, though it is likewise hard to update once it gets old), or using Homebrew, a convenient package manager for macOS. Linux users can simply rely on the system package manager.
But all of the above still require the separate installation of the Python packages (NumPy, SciPy, matplotlib, pandas, etc.) that sit on top of Python itself, the very packages that make Python a powerful and versatile language. There are more integrated solutions, among which the Anaconda scientific Python distribution is a favorite of the scientific community. If you install Anaconda, it comes with the full scientific stack ready, and provides a very consistent way of updating the Python packages and even Python itself.
Go to the webpage and select what suits your OS. Let's select "I want Python 3" as well.
IPython Configuration Custom Setup
Go to the command line interface and call
ipython locate
It will return a directory. Enter that directory and find profile_default/ipythonrc.py. If it does not exist, create it, and copy the content below into it. You can of course modify it to your needs, and please remember the existence of this config file in case you want to change it afterwards.
```python
import os
import sys
# NumPy
try:
import numpy as np
print("NumPy is imported.")
except ImportError:
print("NumPy is not imported!")
# matplotlib
try:
import matplotlib as mpl
print("matplotlib is imported.")
import matplotlib.pyplot as plt
# turn on interactive mode
plt.ion()
print("Using matplotlib interactive mode.")
# try using custom style for prettier looks
try:
plt.style.use('ggplot')
print("Using custom ggplot style from matplotlib 1.4.")
except ValueError:
print("If matplotlib >= 1.4 is installed, styles will be used for better looks.")
except ImportError:
print("matplotlib is not imported!")
# pandas
try:
import pandas as pd
print("pandas is imported.")
except ImportError:
print("pandas is not imported!")
```
Basic Python
Tutorial in the official docs
Some rehash
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit
np.polyfit?
"""
Explanation: NumPy & SciPy
Very good tutorials and docs:
Tentative NumPy Tutorial
Scientific Python stack official docs
There are a few compilations to help MATLAB, IDL, and R users transition to Python/NumPy. HTML and PDF versions are both available. (Big thanks to Alex Mulia for bringing this to our attention!)
Thesaurus of Mathematical Languages, or MATLAB synonymous commands in Python/NumPy
End of explanation
"""
a0 = np.arange(6)
a = a0.reshape((2,3))
print(a.dtype, a.itemsize, a.size, a.shape, '\n')
print(a, '\n')
print(repr(a), '\n')
print(a.tolist())
b = a.astype(float)
print(b, '\n')
print(repr(b))
# re-define a0 and a
a0 = np.arange(6)
a = a0.reshape((2,3))
# get a slice of a to make c
c = a[:2, 1:3]
# a and c are both based on a0, the very initial storage space
print(c, '\n')
print(a.base, a.base is c.base)
# changing c will change a and a0
c[0, 0] = 1111
print('\n', c, '\n')
print(a)
# WAT??? This is different from the slice copy of a list, e.g. mylist[:]
# if you want to make a real copy, and re-allocate some RAM, use a.copy().
# Note that for NumPy arrays, a[:] is still only a view, unlike list slicing.
d = a[:]
e = a.copy()
print(d is a, e is a)
# Both print False, but d is merely a view of a, not an independent copy.
# Our reasoning still holds: change d and you'll change a; e is safe.
"""
Explanation: NumPy Arrays
End of explanation
"""
def f(x, a, b):
return a * np.exp(b * x)
x = np.linspace(0, 1, 1000)
y_ideal = f(x, 1, 2)
y = f(x, 1, 2) + np.random.randn(1000)
plt.plot(x, y)
plt.plot(x, y_ideal, lw=2)
popt, pcov = curve_fit(f, x, y)
# popt is the optimized parameters, and pcov is the covariance matrix.
# The diagonal entries np.diag(pcov) are the variances of each parameter,
# so np.sqrt(np.diag(pcov)) gives the standard deviations.
print(popt, '\n\n', pcov)
y_fit = f(x, popt[0], popt[1])
plt.plot(x, y_ideal, label='ideal')
plt.plot(x, y_fit, '--', label='fit')
plt.legend(loc=0, fontsize=14)
"""
Explanation: Now, we'll primarily demonstrate SciPy's capability of fitting.
Fitting a single variable simple function
$f(x) = a e^{b x}$
End of explanation
"""
from scipy.integrate import quad
def f(x, a, b, c, d):
# the integrand function should be within function f, because parameters a and b
# are available within.
def integrand(xx):
return a * xx + b
# if the upper/lower limit of the integral is our unknown variable x, x has to be
# iterated from an array to a single value, because the quad function only accepts
# a single value each time.
y = np.zeros(len(x))
for idx, value in enumerate(x):
y[idx] = c * quad(integrand, 0, value)[0] + d
return y
x = np.linspace(0, 1, 1000)
y_ideal = f(x, 1, 2, 3, 4)
y = f(x, 1, 2, 3, 4) + np.random.randn(1000)
plt.plot(x, y)
plt.plot(x, y_ideal, lw=2)
popt, pcov = curve_fit(f, x, y)
print(popt, '\n\n', pcov)
y_fit = f(x, popt[0], popt[1], popt[2], popt[3])
plt.plot(x, y_ideal, label='ideal')
plt.plot(x, y_fit, '--', label='fit')
plt.legend(loc=0, fontsize=14)
"""
Explanation: Fitting a single variable function containing an integral
$f(x) = c \int_o^x (a x' + b) dx' + d$
End of explanation
"""
def f(x, a, b, c):
return a * np.exp(b * x[0]) + np.exp(c * x[1])
x1 = np.linspace(0, 1, 1000)
x2 = np.linspace(0, 1, 1000)
x = [x1, x2]
y_ideal = f(x, 1, 2, 3)
y = f(x, 1, 2, 3) + np.random.randn(1000)
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[0], x[1], y, alpha=.1)
ax.plot(x[0], x[1], y_ideal, 'r', lw=2)
ax.view_init(30, 80)
popt, pcov = curve_fit(f, x, y)
print(popt, '\n\n', pcov)
fig = plt.figure(figsize=(10,8))
y_fit = f(x, popt[0], popt[1], popt[2])
ax = fig.add_subplot(111, projection='3d')
ax.plot(x[0], x[1], y_ideal, label='ideal')
ax.plot(x[0], x[1], y_fit, label='fit')
plt.legend(loc=0, fontsize=14)
ax.view_init(30, 80)
"""
Explanation: Fitting a 2 variable function
$f(x) = a e^{b x_1} + e^{c x_2}$
End of explanation
"""
# pyplot (plt) interface vs object oriented interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
# Looks like it automatically chose the right axes to plot on.
# how can I plot on the first graph?
# Either keep (well... kind of) using the convenient pyplot interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
# change the state of the focus by switching to the zeroth axes
plt.sca(axes[0])
plt.plot([3,2,1])
# Or use the object oriented interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
print(axes)
ax = axes[0]
ax.plot([1,2,3])
# if you are not using notebook, and have switched on interactive mode by plt.ion(),
# you need to explicitly say
plt.draw()
# But it doesn't hurt if you say it anyway.
# So there I said it.
# Similarly, if you have two figures and want to switch back and forth
# create figs
fig1 = plt.figure('Ha')
plt.plot([1,2,32])
fig2 = plt.figure(2)
plt.plot([32,2,1])
# switch back to fig 'Ha'
plt.figure('Ha')
plt.scatter([0,1,2], [3,4,5])
# add text and then delete
plt.plot([2,3,4])
plt.text(1, 2.5, r'This is $\frac{x}{x - 1} = 1$!', fontsize=14)
# to delete the text, first get the axes reference, and pop the just added text object out of the list
plt.plot([2,3,4])
plt.text(1, 2.5, r'This is $\frac{x}{x - 1} = 1$!', fontsize=14)
ax = plt.gca()
# print(ax.texts) will give you a list, with one element
ax.texts.pop()
# you have to redraw the figure
plt.draw()
# same can be applied to lines by `ax.lines.pop()`
# tight_layout() to automatically adjust the elements in a figure
plt.plot([35,3,54])
plt.xlabel('X')
plt.ylabel('Y')
plt.plot([35,3,54])
plt.xlabel('X')
plt.ylabel('Y')
plt.tight_layout()
# locator_params() to have more or less ticks
plt.plot([35,3,54])
plt.locator_params(nbins=10)
"""
Explanation: matplotlib
Some core concepts in http://matplotlib.org/faq/usage_faq.html regarding backends, (non-)interactive modes.
End of explanation
"""
pd.read_csv('https://raw.githubusercontent.com/pydata/pandas/master/doc/data/baseball.csv', index_col='id')
df = pd.read_excel('https://github.com/pydata/pandas/raw/master/doc/data/test.xls')
print(df)
"""
Explanation: pandas
A very good glimpse: 10 Minutes to pandas.
Read data from online files.
End of explanation
"""
# simple column selection by label
df['A']
# simple row slice by position, end not included
df[0:2]
# explicit row selection
df.loc['2000-01-03']
# explicit row slicing, end included
df.loc['2000-01-03':'2000-01-05']
# explicit column selection by label
df.loc[:, 'A']
# explicit element selection by label
df.loc['Jan 3, 2000', 'A']
# explicit row selection by position
df.iloc[0]
# explicit row slicing by position, end not included
df.iloc[0:2]
# explicit column selection by position
df.iloc[:, 0]
# explicit element selection by position
df.iloc[0, 0]
# mixed selection, row by position and column by label
# (note: .ix was deprecated in pandas 0.20 and later removed; prefer .loc / .iloc)
df.ix[0, 'A']
"""
Explanation: Ways of Indexing
Very confusing? See http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing
End of explanation
"""
|
milesgranger/cluster-clyde | examples/Demo.ipynb | mit | %matplotlib inline
# Hide info messages from paramiko
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.WARN)
import time
import random
import threading
import pandas as pd
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (2, 2)
from distributed import progress, Client
from pprint import pprint
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_digits
from cclyde.cluster import Cluster
"""
Explanation: Distributed Hyper-parameter searching
End of explanation
"""
cluster = Cluster(key_name='default_windows', n_nodes=16, cluster_name='default', instance_type='t2.medium')
cluster.configure()
cluster.launch_instances_nonblocking()
"""
Explanation: Create and launch AWS instances.
End of explanation
"""
X, y = load_digits(return_X_y=True)
X = np.asarray([x.flatten() for x in X])
for i in range(3):
plt.imshow(X[i].reshape((8, 8)), cmap='Greys_r')
plt.title('Digit: {}'.format(y[i]))
plt.show()
"""
Explanation: MNIST dataset
Grayscale hand-written digits
End of explanation
"""
pca = PCA(n_components=30)
print 'Features before: ', X.shape[1]
X = pca.fit_transform(X)
print 'Features after: ', X.shape[1]
print '{}% Explained Variance'.format(round(sum(pca.explained_variance_ratio_) * 100, 1))
"""
Explanation: Train a NN to predict the numbers (as simple as it gets)
This also demonstrates the time problem of adjusting hyper-parameters
End of explanation
"""
lr = MLPClassifier(hidden_layer_sizes=(10, 5), batch_size=10,
solver='sgd', learning_rate_init=0.01, early_stopping=True)
start = time.time()
scores = cross_val_score(estimator=lr,
X=X,
y=y,
cv=5)
print("\nAccuracy: {}% (+/- {})".format(round(scores.mean() * 100, 2), round(scores.std(), 3) * 2))
print('Finished in {}sec\n'.format(round(time.time() - start, 2)))
"""
Explanation: Train with some given parameters...
End of explanation
"""
lr = MLPClassifier(hidden_layer_sizes=(10, 10,), batch_size=100,
solver='sgd', learning_rate_init=0.01, early_stopping=True)
start = time.time()
scores = cross_val_score(estimator=lr,
X=X,
y=y,
cv=5)
print("\nAccuracy: {}% (+/- {})".format(round(scores.mean() * 100, 2), round(scores.std(), 3) * 2))
print('Finished in {}sec\n'.format(round(time.time() - start, 2)))
"""
Explanation: Alright, how about something else...
End of explanation
"""
lr = MLPClassifier(hidden_layer_sizes=(10, 10, 10,), batch_size=100,
solver='sgd', learning_rate_init=0.01, early_stopping=True)
start = time.time()
scores = cross_val_score(estimator=lr,
X=X,
y=y,
cv=5)
print("\nAccuracy: {}% (+/- {})".format(round(scores.mean() * 100, 2), round(scores.std(), 3) * 2))
print('Finished in {}sec\n'.format(round(time.time() - start, 2)))
"""
Explanation: and now something different than that..
End of explanation
"""
# Define hyper parameter ranges
batch_sizes = np.linspace(start=5, stop=750, num=50, dtype=np.int64)
n_layers = range(1, 8, 1)
# Make a list of all combinations
params = []
for batch_size in batch_sizes:
for n_layer in n_layers:
n_neuron = np.random.randint(low=5, high=200)
params.append({'batch_size': batch_size,
'hidden_layer_sizes': tuple(n_neuron for _ in range(n_layer)),
'solver': 'sgd',
'learning_rate_init': 0.01,
'early_stopping': True
})
print '{} different combinations.'.format(len(params))
pprint(params[:2])
"""
Explanation: Issue: What hyper params are best?
Train for all/most?
End of explanation
"""
print 'Launching thread is alive: ', cluster.instance_launching_thread.is_alive()
cluster.install_anaconda()
cluster.install_python_packages(['scikit-learn', 'numpy', 'pandas', 'dask', 'futures'], method='conda')
scheduler_address = cluster.launch_dask()
"""
Explanation: This will take a while, even if using all cores on a local machine; let's distribute the workload
Before executing the next few blocks, make sure the instances are ready to connect. If launched with the non-blocking thread, we can check if it's done with .is_alive()
End of explanation
"""
c = Client(address=scheduler_address)
c
"""
Explanation: Connect to the resulting scheduler
End of explanation
"""
def get_data(kwargs):
"""
Function which gets data and performs PCA on it.
"""
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
import numpy as np
X, y = load_digits(return_X_y=True)
X = np.asarray([x.flatten() for x in X])
pca = PCA(n_components=30)
X = pca.fit_transform(X)
return (kwargs, X, y)
def model_tester(package):
"""
Function which is mapped to cluster. Passes kwargs to model to be trained.
Returns score based on those kwargs.
"""
kwargs, X, y = package
import time
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
# Initialize model with given kwargs
lr = MLPClassifier(**kwargs)
scores = cross_val_score(estimator=lr,
X=X,
y=y,
cv=5)
return (kwargs, scores.mean(), scores.std())
def score_combiner(package):
"""
Not needed, but more functions == more pretty colors
"""
import time
import random
time.sleep(random.random())
kwargs, score_m, score_std = package
kwargs.update({'score': score_m, 'std': score_std})
return kwargs
def double(n):
'''
Useless worker function # 1
'''
import time
import random
import sklearn
time.sleep(random.random())
return n * 2, 2
def add_two(package):
"""
Useless worker function # 2
"""
n, n2 = package
import time
import random
time.sleep(random.random())
return n + n2
"""
Explanation: Define functions which will be distributed to workers...
End of explanation
"""
futures = c.map(double, range(250))
futures = c.map(add_two, futures)
progress(futures)
"""
Explanation: Run test functions...
End of explanation
"""
futures = c.map(get_data, params)
futures = c.map(model_tester, futures)
futures = c.map(score_combiner, futures)
progress(futures)
results = c.gather(futures)
df = pd.DataFrame(results)
df['n_layers'] = df.hidden_layer_sizes.map(lambda _tuple: len(_tuple))
df['n_neurons'] = df.hidden_layer_sizes.map(lambda _tuple: _tuple[0])
df.head()
df.n_layers.unique()
data = []
for n_layers in df.n_layers.unique():
temp = df[df.n_layers == n_layers]
trace = go.Scatter(
x = temp.n_neurons,
y = temp.n_layers,
mode='markers',
text=['{}%<br>Layers: {}'.format(round(v * 100, 2), l)
for v, l in zip(temp.score.values, temp.n_layers.values)],
name='{} layers'.format(n_layers),
marker=dict(
size=temp.batch_size / 20.0,
color = temp.score, #set color equal to a variable
colorscale='Viridis',
showscale=False
)
)
data.append(trace)
layout = dict(title = 'Best performing models.<br>(size = batch size)',
xaxis = dict(zeroline = False, title='Neuron Count'),
yaxis = dict(zeroline = False, title='Layer Count'),
)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='styled-scatter')
df.ix[df.score.argmax(), :]
"""
Explanation: Distribute the actual work
End of explanation
"""
from Queue import Queue
local_q = Queue()
remote_q = c.scatter(local_q)
def long_calc1(n):
import time
import random
time.sleep(random.random())
return n + 2
def long_calc2(n):
import time
import random
time.sleep(random.random())
return n * 2
def long_calc3(n):
import time
import random
time.sleep(random.random())
return n - 2
long_calc1_q = c.map(long_calc1, remote_q)
long_calc2_q = c.map(long_calc2, long_calc1_q)
long_calc3_q = c.map(long_calc3, long_calc2_q)
result_q = c.gather(long_calc3_q)
"""
Explanation: Also create a distributed queue system...
End of explanation
"""
result_q.qsize()
"""
Explanation: queue is currently empty...
End of explanation
"""
def start_jobs():
jobs = range(500)
for job in jobs:
time.sleep(random.random())
local_q.put(job)
return
thread = threading.Thread(target=start_jobs)
thread.start()
"""
Explanation: Start submitting jobs to the queue with a thread
End of explanation
"""
def get_jobs():
while True:
print result_q.get()
return
finish_thread = threading.Thread(target=get_jobs)
finish_thread.start()
cluster.terminate_cluster()
"""
Explanation: and begin receiving the results...
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
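For ENUM properties with Cardinality 1.N (such as this one), more than one choice can be recorded. A minimal sketch of the fill-in pattern, using a hypothetical stand-in class instead of the real pyesdoc `DOC` object (which is created by the notebook's initialisation cells, not shown here), and assuming repeated `set_value` calls accumulate choices:

```python
# Stand-in mock for illustration only; the real DOC object comes from
# the es-doc notebook initialisation and may behave differently.
class MockDoc:
    def __init__(self):
        self.values = {}       # property id -> list of recorded choices
        self.current_id = None

    def set_id(self, prop_id):
        # Select which property subsequent set_value calls apply to.
        self.current_id = prop_id

    def set_value(self, value):
        # For cardinality 1.N, each call appends one choice.
        self.values.setdefault(self.current_id, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
DOC.set_value("hybrid sigma-pressure")   # first choice
DOC.set_value("vertically lagrangian")   # additional choice (1.N allows several)
```

For Cardinality 1.1 properties, by contrast, exactly one `set_value` call with one of the listed valid choices is expected.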
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
ga7g08/ga7g08.github.io | _notebooks/2015-07-22-Setting-nice-axes-labels-in-matplotlib.ipynb | mit | x = np.linspace(0, 10, 1000)
y = 1e10 * np.sin(x)
fig, ax = plt.subplots()
ax.plot(x, y)
plt.show()
"""
Explanation: Setting nice axes labels in matplotlib
In this post I want to collect some ideas I had on setting nice labels in matplotlib. In particular, for scientific papers we usually want a label like "time [s]". Then, if the data values are very large, we may put the exponent next to the units, as in "time [$10^{-6}$s]"; better still is to use an SI prefix, e.g. "time [$\mu$s]".
The defaults
Matplotlib already has useful routines to format the ticks, but it usually puts the exponent somewhere near the top of the axis. Here is a typical example using the defaults:
End of explanation
"""
x = np.linspace(0, 10, 1000)
y = 1e8 + 1e6 * np.sin(x)
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(mpl.ticker.ScalarFormatter(useMathText=True, useOffset=False))
ax.plot(x, y)
plt.show()
"""
Explanation: Improving on the defaults
For a scientific publication we tend to use $\times10^{10}$ or $10^{10}$, this can easily be achieved with the ScalarFormatter. There is an example on the docs, but here is a simpler one:
End of explanation
"""
def update_label(old_label, exponent_text):
if exponent_text == "":
return old_label
try:
units = old_label[old_label.index("[") + 1:old_label.rindex("]")]
except ValueError:
units = ""
label = old_label.replace("[{}]".format(units), "")
exponent_text = exponent_text.replace("\\times", "")
return "{} [{} {}]".format(label, exponent_text, units)
def format_label_string_with_exponent(ax, axis='both'):
""" Format the label string with the exponent from the ScalarFormatter """
ax.ticklabel_format(axis=axis, style='sci')
axes_instances = []
if axis in ['x', 'both']:
axes_instances.append(ax.xaxis)
if axis in ['y', 'both']:
axes_instances.append(ax.yaxis)
for ax in axes_instances:
ax.major.formatter._useMathText = True
plt.draw() # Update the text
exponent_text = ax.get_offset_text().get_text()
label = ax.get_label().get_text()
ax.offsetText.set_visible(False)
ax.set_label_text(update_label(label, exponent_text))
"""
Explanation: Note there are two options set here: useMathText forces the scientific notation, while useOffset=False stops an offset being used; in my opinion the offset makes most plots illegible.
Customizing the results
At some point I ran into an issue with where the exponent was positioned: it was overlapping with other subplots. I did the usual trick and went to StackOverflow and was helpfully redirected to this post, which by chance was posted the same day. Building on the ideas posted there, I have developed the following:
End of explanation
"""
fig, ax = plt.subplots()
x = np.linspace(0, 1e8, 1000)
y = 1e10 * np.sin(1e-7*x)
ax.plot(x, y)
ax.set_ylabel("amplitude")
ax.set_xlabel("time [s]")
format_label_string_with_exponent(ax, axis='both')
plt.show()
"""
Explanation: Which can be used like this
End of explanation
"""
x = np.linspace(0, 10, 1000)
y = 1e10 * np.sin(x)
def plot(xlabel="time [s]", ylabel="amplitude [Watts]", axis="both"):
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_ylabel(ylabel)
ax.set_xlabel(xlabel)
format_label_string_with_exponent(ax, axis=axis)
plt.show()
from IPython.html.widgets import interact
interact(plot, axis=["x", "y", "both"])
"""
Explanation: As you will see, the label is edited and given the exponent, if units exist they are appended to that. While this isn't the most flexible solution, it is quite a standard notation.
An interactive widget to check it works in all the right ways
End of explanation
"""
|
IBMStreams/streamsx.topology | samples/python/topology/notebooks/MultiGraph/MultiGraph.ipynb | apache-2.0 | from streamsx.topology.topology import Topology
from streamsx.topology import context
from some_module import jsonRandomWalk, movingAverage
#from streamsx import rest
import json
# Define operators
rw = jsonRandomWalk()
ma_150 = movingAverage(150)
ma_50 = movingAverage(50)
# Define topology & submit
top = Topology("myTop")
ticker_price = top.source(rw)
ma_150_stream = ticker_price.map(ma_150)
ma_50_stream = ticker_price.map(ma_50)
"""
Explanation: MultiGraph
Create 3 views:
* A view on a randomly generated stock price
* A view on a moving average of the last 50 stock prices
* A view on a moving average of the last 150 stock prices
End of explanation
"""
ticker_view = ticker_price.view()
ma_150_view = ma_150_stream.view()
ma_50_view = ma_50_stream.view()
"""
Explanation: Code the user can supply to view the streaming data
Given the ticker price stream and the two moving-average streams, create 3 separate views to obtain the data.
End of explanation
"""
context.submit("DISTRIBUTED", top.graph, username = "streamsadmin", password = "passw0rd")
"""
Explanation: Submit To Distributed Streams Install
End of explanation
"""
%matplotlib inline
%matplotlib notebook
from streamsx.rest import multi_graph_every
l = [ticker_view, ma_150_view, ma_50_view]
multi_graph_every(l, 'val', 1.0)
"""
Explanation: Graph The Stock Price & Moving Averages
End of explanation
"""
|
lisitsyn/shogun | doc/ipython-notebooks/neuralnets/autoencoders.ipynb | bsd-3-clause | %pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from shogun import features, MulticlassLabels, Math
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps dataset has the digits labeled from 1 to 10
# we'll subtract 1 to make them in the 0-9 range instead
Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1
# 4000 examples for training
Xtrain = features(Xall[:,0:4000])
Ytrain = MulticlassLabels(Yall[0:4000])
# the rest for testing
Xtest = features(Xall[:,4000:-1])
Ytest = MulticlassLabels(Yall[4000:-1])
# initialize the random number generator with a fixed seed, for repeatability
Math.init_random(10)
"""
Explanation: Deep Autoencoders
by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn
This notebook illustrates how to train and evaluate a deep autoencoder using Shogun. We'll look at both regular fully-connected autoencoders and convolutional autoencoders.
Introduction
A (single layer) autoencoder is a neural network that has three layers: an input layer, a hidden (encoding) layer, and a decoding layer. The network is trained to reconstruct its inputs, which forces the hidden layer to try to learn good representations of the inputs.
In order to encourage the hidden layer to learn good input representations, certain variations on the simple autoencoder exist. Shogun currently supports two of them: Denoising Autoencoders [1] and Contractive Autoencoders [2]. In this notebook we'll focus on denoising autoencoders.
For denoising autoencoders, each time a new training example is introduced to the network, it's randomly corrupted in some manner, and the target is set to the original example. The autoencoder will try to recover the original data from its noisy version, which is why it's called a denoising autoencoder. This process will force the hidden layer to learn a good representation of the input, one which is not affected by the corruption process.
A deep autoencoder is an autoencoder with multiple hidden layers. Training such autoencoders directly is usually difficult; however, they can be pre-trained as a stack of single-layer autoencoders. That is, we train the first hidden layer to reconstruct the input data, and then train the second hidden layer to reconstruct the states of the first hidden layer, and so on. After pre-training, we can train the entire deep autoencoder to fine-tune all the parameters together. We can also use the autoencoder to initialize a regular neural network and train it in a supervised manner.
In this notebook we'll apply deep autoencoders to the USPS dataset for handwritten digits. We'll start by loading the data and dividing it into a training set and a test set:
End of explanation
"""
from shogun import NeuralLayers, DeepAutoencoder
layers = NeuralLayers()
layers = layers.input(256).rectified_linear(512).rectified_linear(128).rectified_linear(512).linear(256).done()
ae = DeepAutoencoder(layers)
"""
Explanation: Creating the autoencoder
Similar to regular neural networks in Shogun, we create a deep autoencoder using an array of NeuralLayer-based classes, which can be created using the utility class NeuralLayers. However, for deep autoencoders there's a restriction that the layer sizes in the network have to be symmetric, that is, the first layer has to have the same size as the last layer, the second layer has to have the same size as the second-to-last layer, and so on. This restriction is necessary for pre-training to work. More details on that can found in the following section.
We'll create a 5-layer deep autoencoder with the following layer sizes: 256->512->128->512->256. We'll use rectified linear neurons for the hidden layers and linear neurons for the output layer.
End of explanation
"""
from shogun import AENT_DROPOUT, NNOM_GRADIENT_DESCENT
ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
ae.pt_noise_parameter.set_const(0.5) # each input has a 50% chance of being set to zero
ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
ae.pt_gd_learning_rate.set_const(0.01)
ae.pt_gd_mini_batch_size.set_const(128)
ae.pt_max_num_epochs.set_const(50)
ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing
# uncomment this line to allow the training progress to be printed on the console
#from shogun import MSG_INFO; ae.io.set_loglevel(MSG_INFO)
# start pre-training. this might take some time
ae.pre_train(Xtrain)
"""
Explanation: Pre-training
Now we can pre-train the network. To illustrate exactly what's going to happen, we'll give the layers some labels: L1 for the input layer, L2 for the first hidden layer, and so on up to L5 for the output layer.
In pre-training, an autoencoder will be formed for each encoding layer (layers up to the middle layer in the network). So here we'll have two autoencoders: L1->L2->L5, and L2->L3->L4. The first autoencoder will be trained on the raw data and used to initialize the weights and biases of layers L2 and L5 in the deep autoencoder. After the first autoencoder is trained, we use it to transform the raw data into the states of L2. These states will then be used to train the second autoencoder, which will be used to initialize the weights and biases of layers L3 and L4 in the deep autoencoder.
The operations described above are performed by the pre_train() function. Pre-training parameters for each autoencoder can be controlled using the pt_* public attributes of DeepAutoencoder. Each of those attributes is an SGVector whose length is the number of autoencoders in the deep autoencoder (2 in our case). It can be used to set the parameters for each autoencoder individually. SGVector's set_const() method can also be used to assign the same parameter value for all autoencoders.
Different noise types can be used to corrupt the inputs in a denoising autoencoder. Shogun currently supports two noise types: dropout noise, where a random portion of the inputs is set to zero at each training iteration, and Gaussian noise, where the inputs are corrupted with random Gaussian noise. The noise type and strength can be controlled using pt_noise_type and pt_noise_parameter. Here, we'll use dropout noise.
End of explanation
"""
ae.put('noise_type', AENT_DROPOUT) # same noise type we used for pre-training
ae.put('noise_parameter', 0.5)
ae.put('max_num_epochs', 50)
ae.put('optimization_method', NNOM_GRADIENT_DESCENT)
ae.put('gd_mini_batch_size', 128)
ae.put('gd_learning_rate', 0.0001)
ae.put('epsilon', 0.0)
# start fine-tuning. this might take some time
_ = ae.train(Xtrain)
"""
Explanation: Fine-tuning
After pre-training, we can train the autoencoder as a whole to fine-tune the parameters. Training the whole autoencoder is performed using the train() function. Training parameters are controlled through the public attributes, same as a regular neural network.
End of explanation
"""
# get a 50-example subset of the test set
subset = Xtest[:,0:50].copy()
# corrupt the first 25 examples with multiplicative noise
subset[:,0:25] *= (random.random((256,25))>0.5)
# corrupt the other 25 examples with additive noise
subset[:,25:50] += random.random((256,25))
# obtain the reconstructions
reconstructed_subset = ae.reconstruct(features(subset))
# plot the corrupted data and the reconstructions
figure(figsize=(10,10))
for i in range(50):
ax1=subplot(10,10,i*2+1)
ax1.imshow(subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax1.set_xticks([])
ax1.set_yticks([])
ax2=subplot(10,10,i*2+2)
ax2.imshow(reconstructed_subset[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax2.set_xticks([])
ax2.set_yticks([])
"""
Explanation: Evaluation
Now we can evaluate the autoencoder that we trained. We'll start by providing it with corrupted inputs and looking at how it will reconstruct them. The function reconstruct() is used to obtain the reconstructions:
End of explanation
"""
# obtain the weights matrix of the first hidden layer
# the 512 is the number of biases in the layer (512 neurons)
# the transpose is because numpy stores matrices in row-major format, and Shogun stores
# them in column major format
w1 = ae.get_layer_parameters(1)[512:].reshape(256,512).T
# visualize the weights between the first 100 neurons in the hidden layer
# and the neurons in the input layer
figure(figsize=(10,10))
for i in range(100):
ax1=subplot(10,10,i+1)
ax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)
ax1.set_xticks([])
ax1.set_yticks([])
"""
Explanation: The figure shows the corrupted examples and their reconstructions. The top half of the figure shows the ones corrupted with multiplicative noise, the bottom half shows the ones corrupted with additive noise. We can see that the autoencoders can provide decent reconstructions despite the heavy noise.
Next we'll look at the weights that the first hidden layer has learned. To obtain the weights, we can call the get_layer_parameters() function, which will return a vector containing both the weights and the biases of the layer. The biases are stored first in the array followed by the weights matrix in column-major format.
End of explanation
"""
from shogun import NeuralSoftmaxLayer
nn = ae.convert_to_neural_network(NeuralSoftmaxLayer(10))
nn.put('max_num_epochs', 50)
nn.put('labels', Ytrain)
_ = nn.train(Xtrain)
"""
Explanation: Now, we can use the autoencoder to initialize a supervised neural network. The network will have all the layers of the autoencoder up to (and including) the middle layer. We'll also add a softmax output layer. So, the network will look like: L1->L2->L3->Softmax. The network is obtained by calling convert_to_neural_network():
End of explanation
"""
from shogun import MulticlassAccuracy
predictions = nn.apply_multiclass(Xtest)
accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100
print("Classification accuracy on the test set =", accuracy, "%")
"""
Explanation: Next, we'll evaluate the accuracy on the test set:
End of explanation
"""
from shogun import DynamicObjectArray, NeuralInputLayer, NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR
conv_layers = DynamicObjectArray()
# 16x16 single channel images
conv_layers.append_element(NeuralInputLayer(16,16,1))
# the first encoding layer: 5 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 5 8x8 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2, 2, 2))
# the second encoding layer: 15 feature maps, filters with radius 2 (5x5 filters)
# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))
# the first decoding layer: same structure as the first encoding layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 5, 2, 2))
# the second decoding layer: same structure as the input layer
conv_layers.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 1, 2, 2))
conv_ae = DeepAutoencoder(conv_layers)
"""
Explanation: Convolutional Autoencoders
Convolutional autoencoders [3] are the adaptation of autoencoders to images (or other spatially-structured data). They are built with convolutional layers, where each layer consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. In this section we'll pre-train a convolutional network as a stacked autoencoder and use it for classification.
In Shogun, convolutional autoencoders are constructed and trained just like regular autoencoders, except that we build the autoencoder using CNeuralConvolutionalLayer objects:
End of explanation
"""
conv_ae.pt_noise_type.set_const(AENT_DROPOUT) # use dropout noise
conv_ae.pt_noise_parameter.set_const(0.3) # each input has a 30% chance of being set to zero
conv_ae.pt_optimization_method.set_const(NNOM_GRADIENT_DESCENT) # train using gradient descent
conv_ae.pt_gd_learning_rate.set_const(0.002)
conv_ae.pt_gd_mini_batch_size.set_const(100)
conv_ae.pt_max_num_epochs[0] = 30 # max number of epochs for pre-training the first encoding layer
conv_ae.pt_max_num_epochs[1] = 10 # max number of epochs for pre-training the second encoding layer
conv_ae.pt_epsilon.set_const(0.0) # disable automatic convergence testing
# start pre-training. this might take some time
conv_ae.pre_train(Xtrain)
"""
Explanation: Now we'll pre-train the autoencoder:
End of explanation
"""
conv_nn = conv_ae.convert_to_neural_network(NeuralSoftmaxLayer(10))
# train the network
conv_nn.put('epsilon', 0.0)
conv_nn.put('max_num_epochs', 50)
conv_nn.put('labels', Ytrain)
# start training. this might take some time
_ = conv_nn.train(Xtrain)
"""
Explanation: And then convert the autoencoder to a regular neural network for classification:
End of explanation
"""
predictions = conv_nn.apply_multiclass(Xtest)
accuracy = MulticlassAccuracy().evaluate(predictions, Ytest) * 100
print("Classification accuracy on the test set =", accuracy, "%")
"""
Explanation: And evaluate it on the test set:
End of explanation
"""
|