7,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pseudo Weight Pruning and Clustering
2017-05-10
Model
In this post, we use a trained AlexNet model (trained on the ImageNet dataset). AlexNet has 8 parameterized layers
Step1: The shape of each layer
Step2: Preprocessing
We analyze the statistical properties of each layer.
Step4: We use the ggplot (R) style for all the plots. There are 7 colors in the color cycle; we simply use a global variable i to cycle through all the colors. The function histogram here plots onto the axis ax.
Step6: The plots show that all weights in AlexNet seem to have zero mean and follow a normal (or gamma) distribution. Next, we use a violin plot to show the statistical properties of these weights in more detail.
Step8: Pruning
This section provides a simple pseudo pruning function. We call this pseudo pruning because there is no re-training involved, so the accuracy of the network is greatly decreased compared to a pruning-retraining scheme. The prun function here is merely used to create a fake sparse matrix for testing the compression packing.
Step9: Test the effect of pruning
Step10: Clustering
The next step after pruning is to cluster the weight values using k-means. For the convolutional layers k=256, and k=16 for the fully connected layers. First, we run clustering on the unpruned weight matrices.
Step11: Function quantize_kmeans performs k-means clustering on the weight values. The cluster_centers_ returned by k-means is the codebook, and the built-in function KMeans.predict is the encoder.
Step12: We manually run clustering for each layer. Convolutional layers get 256 centers each; fully connected layers get 16 centers each.
Step13: We plot the cluster center points on top of the histogram of weight values. For each cluster center, the x-axis represents its value and the y-axis represents its id. We plot in this manner to better observe the concentration of the cluster centers.
Step14: The encoded data can be stored using the codebooks (cb_*) and the encoded matrices (*_e). Since the matrices can be represented as 8-bit data, we save them as PNG images.
Step15: We check if there is any problem (data loss) while loading the images
Step18: Saving data as compressed format
We now encode the data using a list of non-zero indices and the encoded values. | Python Code:
import numpy as np
import os
import sys
weights_path = '/'.join(os.getcwd().split('/')[:-1]) + '/local-trained/alexnet/weights/'
print(weights_path)
os.listdir(weights_path)
keys = ['conv1', 'conv2', 'conv3', 'conv4', 'conv5', 'fc6', 'fc7', 'fc8']
weights = {}
for k in keys:
    weights[k] = np.load(weights_path + k + '.npy')
Explanation: Pseudo Weight Pruning and Clustering
2017-05-10
Model
In this post, we use a trained AlexNet model (trained on the ImageNet dataset). AlexNet has 8 parameterized layers: 5 convolutional and 3 fully connected:
conv1: 96 11x11-kernels - 3 channels
conv2: 256 5x5-kernels - 48 channels
conv3: 384 3x3-kernels - 256 channels
conv4: 384 3x3-kernels - 192 channels
conv5: 256 3x3-kernels - 192 channels
fc6: 4096x9216 matrix
fc7: 4096x4096 matrix
fc8: 1000x4096 matrix
Each of these layers is saved as a numpy 2D array.
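As a sanity check on the list above, the parameter count per layer can be computed directly. This is a small sketch; the kernel/channel counts below are copied from the list rather than read from the weight files, and biases are omitted:

```python
import numpy as np

# Layer shapes as listed above (kernels, channels, height, width for conv;
# rows, cols for fc). Biases are left out for simplicity.
shapes = {
    'conv1': (96, 3, 11, 11),
    'conv2': (256, 48, 5, 5),
    'conv3': (384, 256, 3, 3),
    'conv4': (384, 192, 3, 3),
    'conv5': (256, 192, 3, 3),
    'fc6': (4096, 9216),
    'fc7': (4096, 4096),
    'fc8': (1000, 4096),
}

counts = {k: int(np.prod(s)) for k, s in shapes.items()}
total = sum(counts.values())
for k in shapes:
    print(k, counts[k])
print('total weights:', total)
```

Most of the parameters live in the fully connected layers, which is why they get the more aggressive 4-bit codes later on.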
End of explanation
for k in keys:
    print("Layer " + k + ": " + str(weights[k].shape))
Explanation: The shape of each layer:
End of explanation
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
Explanation: Preprocessing
We analyze the statistical properties of each layer.
End of explanation
plt.style.use('ggplot')
i = 0
def histogram(ax, x, num_bins=1000):
    """Plot a histogram onto ax."""
    global i
    i = (i + 1) % 7
    clr = list(plt.rcParams['axes.prop_cycle'])[i]['color']
    # density=True replaces the deprecated normed=1 argument
    return ax.hist(x, num_bins, density=True, color=clr, alpha=0.8)
# Create figure and 8 axes (4-by-2)
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(12.8,19.2))
# Flatten each layer
conv1_f = weights['conv1'].flatten()
conv2_f = weights['conv2'].flatten()
conv3_f = weights['conv3'].flatten()
conv4_f = weights['conv4'].flatten()
conv5_f = weights['conv5'].flatten()
fc6_f = weights['fc6'].flatten()
fc7_f = weights['fc7'].flatten()
fc8_f = weights['fc8'].flatten()
# Plot histogram
histogram(ax[0,0], conv1_f)
ax[0,0].set_title("conv1")
histogram(ax[0,1], conv2_f)
ax[0,1].set_title("conv2")
histogram(ax[1,0], conv3_f)
ax[1,0].set_title("conv3")
histogram(ax[1,1], conv4_f)
ax[1,1].set_title("conv4")
histogram(ax[2,0], conv5_f)
ax[2,0].set_title("conv5")
histogram(ax[2,1], fc6_f)
ax[2,1].set_title("fc6")
histogram(ax[3,0], fc7_f)
ax[3,0].set_title("fc7")
histogram(ax[3,1], fc8_f)
ax[3,1].set_title("fc8")
fig.tight_layout()
plt.show()
plt.close()
Explanation: We use the ggplot (R) style for all the plots. There are 7 colors in the color cycle; we simply use a global variable i to cycle through all the colors. The function histogram here plots onto the axis ax.
End of explanation
def violin(ax, x, pos):
    """Plot a violin plot onto ax."""
    ax.violinplot(x, showmeans=True, showextrema=True, showmedians=True, positions=[pos])
# Create a single figure
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12.8,6.4))
# Plot violin
violin(ax, conv1_f, pos=0)
violin(ax, conv2_f, pos=1)
violin(ax, conv3_f, pos=2)
violin(ax, conv4_f, pos=3)
violin(ax, conv5_f, pos=4)
violin(ax, fc6_f, pos=5)
violin(ax, fc7_f, pos=6)
violin(ax, fc8_f, pos=7)
# Labels
ax.set_xticks(np.arange(0, len(keys)))
ax.set_xticklabels(keys)
fig.tight_layout()
plt.show()
fig.savefig('violin_alexnet.pdf')
plt.close()
Explanation: The plots show that all weights in AlexNet seem to have zero mean and follow a normal (or gamma) distribution. Next, we use a violin plot to show the statistical properties of these weights in more detail.
End of explanation
def prun(o_weights, thres=None, percentile=0.8):
    """Set weights to zero according to the threshold.

    If the threshold is not provided, `thres` is
    inferred from `percentile`.
    """
    w_weights = o_weights.reshape(1, -1)
    if thres is None:
        args = w_weights[0].argsort()
        thres = w_weights[0][args[int((len(args) - 1) * (1 - percentile))]]
    for i, val in enumerate(w_weights[0]):
        if abs(val) <= thres:
            w_weights[0][i] = 0.0
Explanation: Pruning
This section provides a simple pseudo pruning function. We call this pseudo pruning because there is no re-training involved, so the accuracy of the network is greatly decreased compared to a pruning-retraining scheme. The prun function here is merely used to create a fake sparse matrix for testing the compression packing.
End of explanation
print("Before pruning:")
for layer_name in keys:
    print(layer_name + " total size: " + str(weights[layer_name].size))
    print(layer_name + " non-zero count: " + str(np.count_nonzero(weights[layer_name])))
    print("Density: " + str(float(np.count_nonzero(weights[layer_name])) / weights[layer_name].size))
print("Cloning layers...")
clone_w = {}
for layer_name in keys:
    clone_w[layer_name] = weights[layer_name].copy()
keep_per = 0.3
print("Prunning... Keeping " + str(keep_per*100) + "%")
for layer_name in keys:
    prun(clone_w[layer_name], percentile=keep_per)
    print(layer_name + " total size: " + str(clone_w[layer_name].size))
    print(layer_name + " non-zero count: " + str(np.count_nonzero(clone_w[layer_name])))
    print("Density: " + str(float(np.count_nonzero(clone_w[layer_name]) * 1.0) / clone_w[layer_name].size))
Explanation: Test the effect of pruning:
End of explanation
from sklearn.cluster import KMeans
Explanation: Clustering
The next step after pruning is to cluster the weight values using k-means. For the convolutional layers k=256, and k=16 for the fully connected layers. First, we run clustering on the unpruned weight matrices.
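The choice of k fixes the code length per weight, which is what makes clustering attractive here. A quick sketch of the resulting index-only compression ratio versus 32-bit floats (this ignores codebook storage, which is negligible next to the weight matrices themselves):

```python
import numpy as np

def bits_per_weight(ncluster):
    # Each weight is replaced by a cluster id of ceil(log2(k)) bits
    return int(np.ceil(np.log2(ncluster)))

# k = 256 for convolutional layers, k = 16 for fully connected layers
conv_bits = bits_per_weight(256)
fc_bits = bits_per_weight(16)
print('conv: %d bits per weight (%.0fx smaller than float32)' % (conv_bits, 32 / conv_bits))
print('fc:   %d bits per weight (%.0fx smaller than float32)' % (fc_bits, 32 / fc_bits))
```

So conv layers shrink 4x and fc layers 8x before any index encoding is applied.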
End of explanation
def quantize_kmeans(weight, ncluster=256, rs=0):
    org_shape = weight.shape
    km = KMeans(n_clusters=ncluster, random_state=rs).fit(weight.reshape(-1, 1))
    num_bits = int(np.ceil(np.log2(ncluster)))
    codebook = km.cluster_centers_
    # Encode all weights in one call; KMeans.predict expects a 2D array
    encoded = km.predict(weight.reshape(-1, 1)).astype(np.int32)
    return num_bits, codebook, encoded.reshape(org_shape)
Explanation: Function quantize_kmeans performs k-means clustering on the weight values. The cluster_centers_ returned by k-means is the codebook, and the built-in function KMeans.predict is the encoder.
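Decoding is then a single table lookup: indexing the codebook with the encoded ids reconstructs an approximation of the original weights. A small self-contained sketch, with random data standing in for a real layer:

```python
import numpy as np
from sklearn.cluster import KMeans

# Random data stands in for a real weight matrix
rng = np.random.RandomState(0)
w = rng.normal(0, 1, size=(8, 8))

km = KMeans(n_clusters=4, random_state=0, n_init=10).fit(w.reshape(-1, 1))
codebook = km.cluster_centers_.ravel()
encoded = km.predict(w.reshape(-1, 1)).reshape(w.shape)

# Decoding is just codebook[id] for every encoded entry
decoded = codebook[encoded]
print('max quantization error:', np.abs(decoded - w).max())
```

The reconstruction error is bounded by how finely the codebook covers the weight distribution, which is why the conv layers get 256 centers.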
End of explanation
print("Clustering conv1 ...")
conv1_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv1'].reshape(-1,1))
print("Clustering conv2 ...")
conv2_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv2'].reshape(-1,1))
print("Clustering conv3 ...")
conv3_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv3'].reshape(-1,1))
print("Clustering conv4 ...")
conv4_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv4'].reshape(-1,1))
print("Clustering conv5 ...")
conv5_k = KMeans(n_clusters=256, random_state=0).fit(weights['conv5'].reshape(-1,1))
print("Clustering fc6 ...")
fc6_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc6'].reshape(-1,1))
print("Clustering fc7 ...")
fc7_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc7'].reshape(-1,1))
print("Clustering fc8 ...")
fc8_k = KMeans(n_clusters=16, random_state=0).fit(weights['fc8'].reshape(-1,1))
Explanation: We manually run clustering for each layer. Convolutional layers get 256 centers each; fully connected layers get 16 centers each.
End of explanation
def histogram_kmeans(ax, flat, kmeans, norm=20):
    histogram(ax, flat)
    tmp = np.ones_like(kmeans.cluster_centers_)
    idx = ((np.cumsum(tmp)) - 1) / norm
    ax.scatter(sorted(kmeans.cluster_centers_), idx, s=16, alpha=0.6)
plt.close()
fig, ax = plt.subplots(nrows=4, ncols=2, figsize=(12.8,19.2))
histogram_kmeans(ax[0,0], conv1_f, conv1_k)
ax[0,0].set_title("conv1")
histogram_kmeans(ax[0,1], conv2_f, conv2_k)
ax[0,1].set_title("conv2")
histogram_kmeans(ax[1,0], conv3_f, conv3_k)
ax[1,0].set_title("conv3")
histogram_kmeans(ax[1,1], conv4_f, conv4_k)
ax[1,1].set_title("conv4")
histogram_kmeans(ax[2,0], conv5_f, conv5_k)
ax[2,0].set_title("conv5")
histogram_kmeans(ax[2,1], fc6_f, fc6_k, norm=0.5)
ax[2,1].set_title("fc6")
histogram_kmeans(ax[3,0], fc7_f, fc7_k, norm=0.5)
ax[3,0].set_title("fc7")
histogram_kmeans(ax[3,1], fc8_f, fc8_k, norm=0.5)
ax[3,1].set_title("fc8")
fig.tight_layout()
plt.show()
plt.close()
def encode_kmeans(kmeans, weights):
    w = weights.reshape(-1, 1)
    codebook = kmeans.cluster_centers_
    encoded = kmeans.predict(w)
    return codebook, encoded.reshape(weights.shape)
cb_conv1, conv1_e = encode_kmeans(conv1_k, weights['conv1'])
cb_conv2, conv2_e = encode_kmeans(conv2_k, weights['conv2'])
cb_conv3, conv3_e = encode_kmeans(conv3_k, weights['conv3'])
cb_conv4, conv4_e = encode_kmeans(conv4_k, weights['conv4'])
cb_conv5, conv5_e = encode_kmeans(conv5_k, weights['conv5'])
cb_fc6, fc6_e = encode_kmeans(fc6_k, weights['fc6'])
cb_fc7, fc7_e = encode_kmeans(fc7_k, weights['fc7'])
cb_fc8, fc8_e = encode_kmeans(fc8_k, weights['fc8'])
Explanation: We plot the cluster center points on top of the histogram of weight values. For each cluster center, the x-axis represents its value and the y-axis represents its id. We plot in this manner to better observe the concentration of the cluster centers.
End of explanation
from scipy import misc
def save_image(encoded_w, name, ext='.png'):
    encoded_w = encoded_w.reshape(encoded_w.shape[0], -1)
    misc.imsave('./' + name + ext, encoded_w)
save_image(conv1_e, 'conv1')
save_image(conv2_e, 'conv2')
save_image(conv3_e, 'conv3')
save_image(conv4_e, 'conv4')
save_image(conv5_e, 'conv5')
save_image(fc6_e, 'fc6')
save_image(fc7_e, 'fc7')
save_image(fc8_e, 'fc8')
Explanation: The encoded data can be stored using the codebooks (cb_*) and the encoded matrices (*_e). Since the matrices can be represented as 8-bit data, we save them as PNG images.
End of explanation
def check_image(img_name, encoded, is_fc=False):
    data = misc.imread(img_name)
    if is_fc:
        data = data // 17  # Quick hack for 4-bit data: imsave rescales ids 0..15 to 0..255
    print(np.all(data == encoded.reshape(encoded.shape[0], -1)))
check_image('conv1.png', conv1_e)
check_image('conv2.png', conv2_e)
check_image('conv3.png', conv3_e)
check_image('conv4.png', conv4_e)
check_image('conv5.png', conv5_e)
check_image('fc6.png', fc6_e, is_fc=True)
check_image('fc7.png', fc7_e, is_fc=True)
check_image('fc8.png', fc8_e, is_fc=True)
Explanation: We check if there is any problem (data loss) while loading the images:
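The division by 17 in check_image deserves a note: when the saved ids only span 0..15, imsave rescales them to the full 0..255 range, so each 4-bit id comes back multiplied by 255/15 = 17 and dividing by 17 recovers it. A quick arithmetic check of this assumption, with no image round-trip:

```python
import numpy as np

ids = np.arange(16, dtype=np.uint8)       # 4-bit cluster ids 0..15
scaled = (ids * 17).astype(np.uint8)      # what an autoscaled 8-bit image stores (15 * 17 = 255)
recovered = scaled // 17                  # the "quick hack" from check_image
print(np.array_equal(recovered, ids))
```

Note this only holds when ids 0 and 15 both actually occur in the layer, so the autoscaling maps the full 4-bit range.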
End of explanation
def encode_index(nz_index, bits=4):
    """Encode nonzero indices as relative offsets using `bits` bits each."""
    max_val = 2**bits
    if bits == 4 or bits == 8:
        data_type = np.uint8
    elif bits == 16:
        data_type = np.uint16
    else:
        print("Unimplemented index encoding with " + str(bits) + " bits.")
        sys.exit(1)
    code = np.zeros_like(nz_index, dtype=np.uint32)
    adv = 0
    # Encode relative to the array index (number of zeros skipped)
    for i, val in enumerate(nz_index):
        cur_i = i + adv
        code[i] = val - cur_i
        if (val - cur_i != 0):
            adv += val - cur_i
    # Check if there is overflow
    if (code.max() >= max_val):
        print("Overflow in index code. Unimplemented handling.")
        sys.exit(1)
    # Special case of 4-bit encoding: pack two offsets per byte
    if (bits == 4):
        if code.size % 2 == 1:
            code = np.append(code, 0)  # pad so the offsets pair up
        code_4bit = code[0::2] * (2**bits) + code[1::2]
        return np.asarray(code_4bit, dtype=data_type)
    return np.asarray(code, dtype=data_type)
def decode_index(encoded_ind, org_size=None, bits=4):
    """Decode nonzero indices from their relative encoding."""
    if org_size is None:
        print("Original size must be specified.")
        sys.exit(1)
    decode = np.zeros(org_size, dtype=np.uint32)
    if (bits == 4):
        decode[np.arange(0, org_size, 2)] = encoded_ind // 2**bits
        decode[np.arange(1, org_size, 2)] = encoded_ind % 2**bits
    decode = np.cumsum(decode + 1) - 1
    return np.asarray(decode, dtype=np.uint32)
# For real data these should be the nonzero indices; these are pseudo weights
conv1_ind = np.arange(weights['conv1'].size)
conv2_ind = np.arange(weights['conv2'].size)
conv3_ind = np.arange(weights['conv3'].size)
conv4_ind = np.arange(weights['conv4'].size)
conv5_ind = np.arange(weights['conv5'].size)
fc6_ind = np.arange(weights['fc6'].size)
fc7_ind = np.arange(weights['fc7'].size)
fc8_ind = np.arange(weights['fc8'].size)
# Encode the indices
conv1_ie = encode_index(conv1_ind)
conv2_ie = encode_index(conv2_ind)
conv3_ie = encode_index(conv3_ind)
conv4_ie = encode_index(conv4_ind)
conv5_ie = encode_index(conv5_ind)
fc6_ie = encode_index(fc6_ind)
fc7_ie = encode_index(fc7_ind)
fc8_ie = encode_index(fc8_ind)
Explanation: Saving data as compressed format
We now encode the data using a list of non-zero indices and the encoded values.
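The relative encoding in encode_index stores, for each nonzero entry, the number of zeros skipped since the previous one rather than its absolute position. A tiny worked example of the forward transform (mirroring the loop in encode_index) and its inverse:

```python
import numpy as np

# Absolute positions of nonzero entries in a length-10 vector
nz_index = np.array([0, 1, 4, 5, 9])

code = np.zeros_like(nz_index)
adv = 0
for i, val in enumerate(nz_index):
    code[i] = val - (i + adv)   # zeros skipped since the previous nonzero entry
    adv += code[i]
print(code)     # [0 0 2 0 3]

# Decoding: cumulative sum of (gap + 1), shifted back to 0-based positions
decoded = np.cumsum(code + 1) - 1
print(decoded)  # [0 1 4 5 9]
```

Because most gaps are small after pruning, these offsets usually fit in 4 bits, which is what makes the packed storage work.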
End of explanation |
7,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Long-Short Equity Strategies
By Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series
Step1: Now that we have factor values and returns, we can see what would happen if we ranked our equities based on factor values, and then entered the long and short positions.
Step2: Let's compute the returns if we go long the top basket and short the bottom basket.
Step3: Market Neutrality is Built-In
The nice thing about making money based on the spread of the ranking is that it is unaffected by what the market does. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# We'll generate a random factor
current_factor_values = np.random.normal(0, 1, 10000)
equity_names = ['Equity ' + str(x) for x in range(10000)]
# Put it into a dataframe
factor_data = pd.Series(current_factor_values, index = equity_names)
factor_data = pd.DataFrame(factor_data, columns=['Factor Value'])
# Take a look at the dataframe
factor_data.head(10)
# Now let's say our future returns are dependent on our factor values
future_returns = current_factor_values + np.random.normal(0, 1, 10000)
returns_data = pd.Series(future_returns, index=equity_names)
returns_data = pd.DataFrame(returns_data, columns=['Returns'])
# Put both the factor values and returns into one dataframe
data = returns_data.join(factor_data)
# Take a look
data.head(10)
Explanation: Long-Short Equity Strategies
By Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
https://github.com/quantopian/research_public
Long-short equity refers to the fact that the strategy is both long and short in the equity market. This is a rather general statement, but has over time grown to mean a specific family of strategies. These strategies rank all stocks in the market using some model. The strategy then goes long (buys) the top $n$ equities of the ranking, and goes short on (sells) the bottom $n$ while maintaining equal dollar volume between the long and short positions. This has the advantage of being statistically robust, as by ranking stocks and entering hundreds or thousands of positions, you are making many bets on your ranking model rather than just a few risky bets. You are also betting purely on the quality of your ranking scheme, as the equal dollar volume long and short positions ensure that the strategy will remain market neutral (immune to market movements).
Ranking Scheme
A ranking scheme is any model that can assign each stock a number, where higher is better or worse. Examples could be value factors, technical indicators, pricing models, or a combination of all of the above. The Ranking Universes by Factors lecture will cover ranking schemes in more detail. Ranking schemes are the secret sauce of any long-short equity strategy, so developing them is nontrivial.
Making a Bet on the Ranking Scheme
Once we have determined a ranking scheme, we would like to be able to profit from it. We do this by investing an equal amount of money long into the top of the ranking, and short into the bottom. This ensures that the strategy will make money proportionally to the quality of the ranking only, and will be market neutral.
Long and Short Baskets
If you are ranking $m$ equities, have $d$ dollars to invest, and your total target number of positions to hold is $2n$, then the long and short baskets are created as follows. For each equity in spots $1, \dots, n$ in the ranking, sell $\frac{1}{2n} * d$ dollars of that equity. For each equity in spots $m - n, \dots, m$ in the ranking, buy $\frac{1}{2n} * d$ dollars of that equity.
Friction Because of Prices
Because equity prices will not always divide $\frac{1}{2n} * d$ evenly, and equities must be bought in integer amounts, there will be some imprecision and the algorithm should get as close as it can to this number. Most algorithms will have access to some leverage during execution, so it is fine to buy slightly more than $\frac{1}{2n} * d$ dollars per equity. This does, however, cause some friction at low capital amounts. For a strategy running $d = 100000$, and $n = 500$, we see that
$$\frac{1}{2n} * d = \frac{1}{1000} * 100000 = 100$$
This will cause big problems for expensive equities, and cause the algorithm to be overlevered. This is alleviated by trading fewer equities or increasing the capital, $d$. Luckily, long-short equity strategies tend to be very high capacity, so there is for most purposes no ceiling on the amount of money one can invest. For more information on algorithm capacities, refer to the algorithm capacity lecture when it is released.
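The basket construction and per-position sizing described above can be sketched in a few lines. The helper name long_short_baskets is illustrative (it is not part of the lecture code), and it assumes the ranking is sorted worst-to-best, matching the example code further below:

```python
def long_short_baskets(ranking, n, d):
    # ranking: equity names sorted worst-to-best by factor value
    shorts = ranking[:n]       # bottom of the ranking: sell
    longs = ranking[-n:]       # top of the ranking: buy
    return shorts, longs, d / (2 * n)   # equal dollar volume over 2n positions

names = ['Equity %d' % i for i in range(10)]
shorts, longs, per_position = long_short_baskets(names, 2, 1000)
print(shorts, longs, per_position)  # bottom 2 and top 2 equities, $250 each
```

With d = 100000 and n = 500 this reproduces the $100-per-position figure from the text.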
Returns Come From The Ranking Spread
The returns of a long-short equity strategy are dependent on how well the ranking spreads out the high and low returns. To see how this works, consider this hypothetical example.
End of explanation
# Rank the equities
ranked_data = data.sort_values('Factor Value')  # DataFrame.sort was removed from pandas
# Compute the returns of each basket
# Baskets of size 500, so we create an empty array of shape (10000/500)
number_of_baskets = 10000 // 500
basket_returns = np.zeros(number_of_baskets)
for i in range(number_of_baskets):
    start = i * 500
    end = i * 500 + 500
    basket_returns[i] = ranked_data[start:end]['Returns'].mean()
# Plot the returns of each basket
plt.bar(range(number_of_baskets), basket_returns)
plt.ylabel('Returns')
plt.xlabel('Basket')
plt.legend(['Returns of Each Basket']);
Explanation: Now that we have factor values and returns, we can see what would happen if we ranked our equities based on factor values, and then entered the long and short positions.
End of explanation
basket_returns[number_of_baskets-1] - basket_returns[0]
Explanation: Let's compute the returns if we go long the top basket and short the bottom basket.
End of explanation
# We'll generate a random factor
current_factor_values = np.random.normal(0, 1, 10000)
equity_names = ['Equity ' + str(x) for x in range(10000)]
# Put it into a dataframe
factor_data = pd.Series(current_factor_values, index = equity_names)
factor_data = pd.DataFrame(factor_data, columns=['Factor Value'])
# Now let's say our future returns are dependent on our factor values
future_returns = -10 + current_factor_values + np.random.normal(0, 1, 10000)
returns_data = pd.Series(future_returns, index=equity_names)
returns_data = pd.DataFrame(returns_data, columns=['Returns'])
# Put both the factor values and returns into one dataframe
data = returns_data.join(factor_data)
# Rank the equities
ranked_data = data.sort_values('Factor Value')  # DataFrame.sort was removed from pandas
# Compute the returns of each basket
# Baskets of size 500, so we create an empty array of shape (10000/500)
number_of_baskets = 10000 // 500
basket_returns = np.zeros(number_of_baskets)
for i in range(number_of_baskets):
    start = i * 500
    end = i * 500 + 500
    basket_returns[i] = ranked_data[start:end]['Returns'].mean()
basket_returns[number_of_baskets-1] - basket_returns[0]
Explanation: Market Neutrality is Built-In
The nice thing about making money based on the spread of the ranking is that it is unaffected by what the market does.
End of explanation |
7,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named
Step1: Lab 1
Step2: $\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\bI}{\mathbf{I}}$
Part 1
Step3: 1.2 Polynomial regression (10 points)
Write a method fit_polynomial(x, t, M) that finds the maximum-likelihood solution of an unregularized $M$-th order polynomial for some dataset x. The error function to minimize w.r.t. $\bw$ is
Step4: 1.3 Plot (5 points)
Sample a dataset with $N=10$, and fit four polynomials with $M \in (0, 2, 4, 8)$.
For each value of $M$, plot the prediction function, along with the data and the original cosine function. The resulting figure should look similar to fig 1.4 of the Bishop's book. Note that you can use matplotlib's plt.pyplot(.) functionality for creating grids of figures.
Step5: 1.4 Regularized linear regression (10 points)
Write a method fit_polynomial_reg(x, t, M, lamb) that fits a regularized $M$-th order polynomial to the periodic data, as discussed in the lectures, where lamb is the regularization term lambda. (Note that 'lambda' cannot be used as a variable name in Python since it has a special meaning). The error function to minimize w.r.t. $\bw$
Step6: 1.5 Model selection by cross-validation (15 points)
Use cross-validation to find a good choice of $M$ and $\lambda$, given a dataset of $N=10$ datapoints generated with gen_cosine(20). You should write a function that tries (loops over) a reasonable range of choices of $M$ and $\lambda$, and returns the choice with the best cross-validation error. In this case you use $K=5$ folds.
You can let $M \in (0, 1, ..., 10)$, and let $\lambda \in (e^{-10}, e^{-9}, ..., e^{0})$.
a) (5 points) First of all, write a method pred_error(x_train, x_valid, t_train, t_valid, M, lamb) that compares the prediction of your method fit_polynomial_reg for a given set of parameters $M$ and $\lambda$ to t_valid. It should return the prediction error for a single fold.
Step7: b) (10 points) Now write a method find_best_m_and_lamb(x, t) that finds the best values for $M$ and $\lambda$. The method should return the best $M$ and $\lambda$. To get you started, here is a method you can use to generate indices of cross-validation folds.
Step8: 1.7 Plot best cross-validated fit (5 points)
For some dataset with $N = 10$, plot the model with the optimal $M$ and $\lambda$ according to the cross-validation error, using the method you just wrote. In addition, the plot should show the dataset itself and the function that we try to approximate. Let the plot make clear which $M$ and $\lambda$ were found.
Step10: Part 2
Step12: 2.2 Compute Posterior (15 points)
You're going to implement a Bayesian linear regression model, and fit it to the periodic data. Your regression model has a zero-mean isotropic Gaussian prior over the parameters, governed by a single (scalar) precision parameter $\alpha$, i.e.
Step14: 2.3 Prediction (10 points)
The predictive distribution of Bayesian linear regression is
Step15: 2.4 Plot predictive distribution (10 points)
a) (5 points) Generate 10 datapoints with gen_cosine2(10). Compute the posterior mean and covariance for a Bayesian polynomial regression model with $M=4$, $\alpha=\frac{1}{2}$ and $\beta=\frac{1}{0.2^2}$.
Plot the Bayesian predictive distribution, where you plot (for $x$ between 0 and $2 \pi$) $t$'s predictive mean and a 1-sigma predictive variance using plt.fill_between(..., alpha=0.1) (the alpha argument induces transparency).
Include the datapoints in your plot.
Step16: b) (5 points) For a second plot, draw 100 samples from the parameters' posterior distribution. Each of these samples is a certain choice of parameters for 4-th order polynomial regression.
Display each of these 100 polynomials. | Python Code:
NAME = "Michelle Appel"
NAME2 = "Verna Dankers"
NAME3 = "Yves van Montfort"
EMAIL = "michelle.appel@student.uva.nl"
EMAIL2 = "verna.dankers@student.uva.nl"
EMAIL3 = "yves.vanmontfort@student.uva.nl"
Explanation: Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named:
12301230_3434343_1238938934_lab1.ipynb.
This will be parsed by a regexp, so please double check your filename.
Before you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your names and email adresses below.
End of explanation
%pylab inline
plt.rcParams["figure.figsize"] = [20,10]
Explanation: Lab 1: Linear Regression and Overfitting
Machine Learning 1, September 2017
Notes on implementation:
You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.
Please write your answers right below the questions.
Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
Refer to last week's lab notes, i.e. http://docs.scipy.org/doc/, if you are unsure about what function to use. There are different correct ways to implement each problem!
For this lab, your regression solutions should be in closed form, i.e., should not perform iterative gradient-based optimization but find the exact optimum directly.
use the provided test boxes to check if your answers are correct
End of explanation
import numpy as np
import matplotlib.pyplot as plt
lims = (0, 2*np.pi)
def gen_cosine(n):
    x = np.linspace(lims[0], lims[1], num=n, endpoint=True)
    s = 0.2
    mu = np.cos(x)
    t = np.random.normal(loc=mu, scale=s)
    return x, t
### Test your function
np.random.seed(5)
N = 10
x, t = gen_cosine(N)
assert x.shape == (N,), "the shape of x is incorrect"
assert t.shape == (N,), "the shape of t is incorrect"
Explanation: $\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\bI}{\mathbf{I}}$
Part 1: Polynomial Regression
1.1. Generate periodic data (5 points)
Write a method gen_cosine(N) that generates toy data like in fig 1.2 of Bishop's book. The method should have a parameter $N$, and should return $N$-dimensional vectors $\bx$ and $\bt$, where $\bx$ contains evenly spaced values from 0 to (including) 2$\pi$, and the elements $t_i$ of $\bt$ are distributed according to:
$$t_i \sim \mathcal{N}(\mu_i, \sigma^2)$$
where $x_i$ is the $i$-th elements of $\bf{x}$, the mean $\mu_i = \cos(x_i)$ and the standard deviation $\sigma = 0.2$.
End of explanation
def designmatrix(x, M):  # it is highly recommended to write a helper function that computes Phi
    Phi = []
    for i in range(M + 1):
        Phi.append(np.power(x, i))
    Phi = np.matrix(Phi).transpose()
    return Phi
def fit_polynomial(x, t, M):
    Phi = designmatrix(x, M)
    w_ml = (np.linalg.inv(Phi.T * Phi) * Phi.T) * np.matrix(t).T
    return np.squeeze(np.asarray(w_ml)), Phi
### Test your function
N = 10
x = np.square((np.linspace(-1, 1, N)))
t = 0.5*x + 1.5
m = 2
w, Phi = fit_polynomial(x,t,m)
assert w.shape == (m+1,), "The shape of w is incorrect"
assert Phi.shape == (N, m+1), "The shape of Phi is incorrect"
Explanation: 1.2 Polynomial regression (10 points)
Write a method fit_polynomial(x, t, M) that finds the maximum-likelihood solution of an unregularized $M$-th order polynomial for some dataset x. The error function to minimize w.r.t. $\bw$ is:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt)$
where $\bPhi$ is the feature matrix (or design matrix) as explained in Bishop's book at section 3.1.1, $\bt$ is the vector of target values. Your method should return a vector $\bw$ with the maximum-likelihood parameter estimates, as well as the feature matrix $\bPhi$.
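As an aside (not required by the assignment), the explicit inverse in the normal-equations solve can become ill-conditioned for large $M$; numpy's lstsq minimizes the same objective $\frac{1}{2}\|\bPhi\bw - \bt\|^2$ more stably. A quick equivalence check on toy data, where the design matrix is built inline as a stand-in for the designmatrix helper:

```python
import numpy as np

# Illustrative data; Phi here is a local stand-in for designmatrix(x, M)
x = np.linspace(0, 1, 10)
t = 0.5 * x + 1.5
M = 2
Phi = np.power(x[:, None], np.arange(M + 1))

# Normal equations vs. lstsq -- both minimize ||Phi w - t||^2
w_normal = np.linalg.inv(Phi.T @ Phi) @ Phi.T @ t
w_lstsq, *_ = np.linalg.lstsq(Phi, t, rcond=None)
print(np.allclose(w_normal, w_lstsq))
```

For the small $M$ used in this lab both give essentially identical answers.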
End of explanation
np.random.seed(5)
M = (0, 2, 4, 8)
N = 10
Nc = 1000
x, t = gen_cosine(N)
xc = np.linspace(lims[0], lims[1], Nc)
tc = np.cos(xc)
plt.figure()
for i, m in enumerate(M):
    plt.subplot(2, 2, i + 1)
    w, Phi = fit_polynomial(x, t, m)
    tf = w * designmatrix(xc, m).T
    plt.plot(xc, tf.T, color='r')
    plt.plot(xc, tc, color='g')
    plt.scatter(x, t, marker='x')
plt.show()
Explanation: 1.3 Plot (5 points)
Sample a dataset with $N=10$, and fit four polynomials with $M \in (0, 2, 4, 8)$.
For each value of $M$, plot the prediction function, along with the data and the original cosine function. The resulting figure should look similar to fig 1.4 of the Bishop's book. Note that you can use matplotlib's plt.pyplot(.) functionality for creating grids of figures.
End of explanation
def fit_polynomial_reg(x, t, m, lamb):
    Phi = designmatrix(x, m)
    w_ml = (np.linalg.inv(lamb * np.identity(m + 1) + Phi.T * Phi) * Phi.T) * np.matrix(t).T
    return np.squeeze(np.asarray(w_ml)), Phi
### Test your function
N = 10
x = np.square((np.linspace(-1, 1, N)))
t = 0.5*x + 1.5
m = 2
lamb = 0.1
w, Phi = fit_polynomial_reg(x,t,m, lamb)
assert w.shape == (m+1,), "The shape of w is incorrect"
assert Phi.shape == (N, m+1), "The shape of w is incorrect"
Explanation: 1.4 Regularized linear regression (10 points)
Write a method fit_polynomial_reg(x, t, M, lamb) that fits a regularized $M$-th order polynomial to the periodic data, as discussed in the lectures, where lamb is the regularization term lambda. (Note that 'lambda' cannot be used as a variable name in Python since it has a special meaning). The error function to minimize w.r.t. $\bw$:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt) + \frac{\lambda}{2} \mathbf{w}^T \mathbf{w}$
For background, see section 3.1.4 of Bishop's book.
The function should return $\bw$ and $\bPhi$.
End of explanation
def pred_error(x_train, x_valid, t_train, t_valid, M, reg):
w_train, Phi_train = fit_polynomial_reg(x_train, t_train, M, reg)
Phi_valid = designmatrix(x_valid, M)
w_train = np.matrix(w_train).T
err_t = Phi_valid * w_train - np.matrix(t_valid).T
pred_err = err_t.T * err_t
return pred_err
### Test your function
N = 10
x = np.linspace(-1, 1, N)
t = 0.5*np.square(x) + 1.5
M = 2
reg = 0.1
pred_err = pred_error(x[:-2], x[-2:], t[:-2], t[-2:], M, reg)
assert pred_err < 0.01, "pred_err is too big"
Explanation: 1.5 Model selection by cross-validation (15 points)
Use cross-validation to find a good choice of $M$ and $\lambda$, given a dataset of $N=10$ datapoints generated with gen_cosine(10). You should write a function that tries (loops over) a reasonable range of choices of $M$ and $\lambda$, and returns the choice with the best cross-validation error. In this case you use $K=5$ folds.
You can let $M \in (0, 1, ..., 10)$, and let $\lambda \in (e^{-10}, e^{-9}, ..., e^{0})$.
a) (5 points) First of all, write a method pred_error(x_train, x_valid, t_train, t_valid, M, lamb) that compares the prediction of your method fit_polynomial_reg for a given set of parameters $M$ and $\lambda$ to t_valid. It should return the prediction error for a single fold.
End of explanation
def kfold_indices(N, k):
all_indices = np.arange(N,dtype=int)
np.random.shuffle(all_indices)
idx = [int(i) for i in np.floor(np.linspace(0,N,k+1))]
train_folds = []
valid_folds = []
for fold in range(k):
valid_indices = all_indices[idx[fold]:idx[fold+1]]
valid_folds.append(valid_indices)
train_folds.append(np.setdiff1d(all_indices, valid_indices))
return train_folds, valid_folds
def find_best_m_and_lamb(x, t, Ms, lambs, K):
n = np.size(x)
folds = kfold_indices(n, K)
Mv, lambv = np.meshgrid(Ms, lambs)
errs = np.zeros(Mv.shape)  # accumulated with +=, so start from zero (np.empty is uninitialized)
for i in np.ndindex(Mv.shape):
for k in range(K):
ftr = folds[0][k]
fva = folds[1][k]
errs[i] += pred_error(x[ftr], x[fva], t[ftr], t[fva], Mv[i], lambv[i])
best_idx = np.unravel_index(np.argmin(errs), errs.shape)
return Mv[best_idx], lambv[best_idx]
### If you want you can write your own test here
Explanation: b) (10 points) Now write a method find_best_m_and_lamb(x, t) that finds the best values for $M$ and $\lambda$. The method should return the best $M$ and $\lambda$. To get you started, here is a method you can use to generate indices of cross-validation folds.
End of explanation
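As a sanity check, the fold generator can be verified directly: every validation fold is disjoint from its training fold, and the validation folds together cover each index exactly once. The snippet repeats kfold_indices so it runs on its own:

```python
import numpy as np

def kfold_indices(N, k):
    # Shuffle 0..N-1 and cut into k roughly equal validation folds;
    # the training fold is everything outside the validation fold.
    all_indices = np.arange(N, dtype=int)
    np.random.shuffle(all_indices)
    idx = [int(i) for i in np.floor(np.linspace(0, N, k + 1))]
    train_folds, valid_folds = [], []
    for fold in range(k):
        valid = all_indices[idx[fold]:idx[fold + 1]]
        valid_folds.append(valid)
        train_folds.append(np.setdiff1d(all_indices, valid))
    return train_folds, valid_folds

np.random.seed(0)
train, valid = kfold_indices(10, 5)
# Disjointness and coverage of the folds:
print(all(np.intersect1d(tr, va).size == 0 for tr, va in zip(train, valid)))  # True
print(sorted(np.concatenate(valid).tolist()) == list(range(10)))              # True
```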
np.random.seed(5)
N = 10
Nc = 1000
M = 10
k = 5
lamb_p = 10
Ms = np.arange(M+1)
lambs = np.exp(-np.arange(lamb_p + 1)[::-1])
x, t = gen_cosine(N)
xc = np.linspace(lims[0], lims[1], Nc)
tc = np.cos(xc)
M_best, lamb_best = find_best_m_and_lamb(x, t, Ms, lambs, k)
print('The best values are M = %i and lambda = %.6f' % (M_best, lamb_best))
w, Phi = fit_polynomial_reg(x, t, M_best, lamb_best)
tf = w*designmatrix(xc, M_best).T
plt.figure()
plt.scatter(x, t, marker='x')
plt.plot(xc, tc, color='g')
plt.plot(xc, tf.T, color='r')
plt.show()
Explanation: 1.7 Plot best cross-validated fit (5 points)
For some dataset with $N = 10$, plot the model with the optimal $M$ and $\lambda$ according to the cross-validation error, using the method you just wrote. In addition, the plot should show the dataset itself and the function that we try to approximate. Let the plot make clear which $M$ and $\lambda$ were found.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
start = 0
stop = 2*np.pi
N = 1000
def gen_cosine2(n):
"""Generate x-data from a uniform distribution between 0 and 2pi."""
x = np.random.uniform(0,2*np.pi, (n))
sigma = 0.2
mu = np.cos(x)
t = np.random.normal(loc=mu, scale=sigma)
return x, t
x2, t2 = gen_cosine2(10)
# plt.scatter(x2, t2)
# plt.show()
### Test your function
np.random.seed(5)
N = 10
x, t = gen_cosine2(N)
assert x.shape == (N,), "the shape of x is incorrect"
assert t.shape == (N,), "the shape of t is incorrect"
Explanation: Part 2: Bayesian Linear (Polynomial) Regression
2.1 Cosine 2 (5 points)
Write a function gen_cosine2(N) that behaves identically to gen_cosine(N) except that the generated values $x_i$ are not linearly spaced, but drawn from a uniform distribution between $0$ and $2 \pi$.
End of explanation
import matplotlib.pyplot as plt
def fit_polynomial_bayes(x, t, M, alpha, beta):
"""Fit a polynomial to data x with corresponding targets t.
M indicates the order of the polynomial, alpha is the precision of the
prior over the weights and beta is the noise precision."""
# Calculate S and m
Phi = designmatrix(x, M)
S = np.linalg.inv(alpha * np.identity(M+1) + beta * Phi.T * Phi)
m = np.array((beta * S * Phi.T * np.matrix(t).T).T)[0]
# Check answer through a plot, can be removed before submission
# _ = plt.scatter(x, t)
# _ = plt.plot(x, m[0]+m[1]*x+m[2]*(x*x),'--')
# plt.show()
return m, S, Phi
### Test your function
N = 10
x = np.linspace(-1, 1, N)
t = 0.5*np.square(x) + 1.5
M = 2
alpha = 0.5
beta = 25
m, S, Phi = fit_polynomial_bayes(x, t, M, alpha, beta)
assert m.shape == (M+1,), "the shape of m is incorrect"
assert S.shape == (M+1, M+1), "the shape of S is incorrect"
assert Phi.shape == (N, M+1), "the shape of Phi is incorrect"
Explanation: 2.2 Compute Posterior (15 points)
You're going to implement a Bayesian linear regression model, and fit it to the periodic data. Your regression model has a zero-mean isotropic Gaussian prior over the parameters, governed by a single (scalar) precision parameter $\alpha$, i.e.:
$$p(\bw \;|\; \alpha) = \mathcal{N}(\bw \;|\; 0, \alpha^{-1} \bI)$$
The covariance and mean of the posterior are given by:
$$\bS_N= \left( \alpha \bI + \beta \bPhi^T \bPhi \right)^{-1} $$
$$\bm_N = \beta\; \bS_N \bPhi^T \bt$$
where $\alpha$ is the precision of the prior over the parameters, and $\beta$ is the noise precision.
See MLPR chapter 3.3 for background.
Write a method fit_polynomial_bayes(x, t, M, alpha, beta) that returns the mean $\bm_N$ and covariance $\bS_N$ of the posterior for a $M$-th order polynomial. In addition it should return the design matrix $\bPhi$. The arguments x, t and M have the same meaning as in question 1.2.
End of explanation
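The two posterior formulas can also be verified numerically with plain numpy arrays; a small sketch under the same definitions (the basis construction is an assumption mirroring designmatrix):

```python
import numpy as np

def posterior(Phi, t, alpha, beta):
    # S_N = (alpha I + beta Phi^T Phi)^(-1),  m_N = beta S_N Phi^T t
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m = beta * S @ Phi.T @ t
    return m, S

x = np.linspace(-1.0, 1.0, 10)
Phi = np.stack([x**j for j in range(3)], axis=1)  # quadratic basis
t = 0.5 * x**2 + 1.5
m, S = posterior(Phi, t, alpha=0.5, beta=25.0)
print(m.shape, S.shape)     # (3,) (3, 3)
print(np.allclose(S, S.T))  # True: a covariance matrix is symmetric
```

Since $\alpha > 0$, the matrix being inverted is positive definite, so $\bS_N$ is a valid (symmetric, positive definite) covariance.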
def predict_polynomial_bayes(x, m, S, beta):
"""Predict the target values for input x
and return the predictions, the predictive variance and Phi."""
Phi = designmatrix(x, len(m)-1)
sigma = [(1 / beta + (Phi[i] * S * Phi[i].T).item())
for i in range(len(x))]
mean = [(m * Phi[i].T).item() for i in range(len(x))]  # np.asscalar was removed in NumPy 1.23
# scipy.stats.norm(0, 1).pdf(0)
# # Check answer through a plot, can be removed before submission
# _ = plt.scatter(x, mean)
# plt.show()
return np.array(mean), np.array(sigma), Phi
### Test your function
np.random.seed(5)
N = 10
x = np.linspace(-1, 1, N)
m = np.empty(3)
S = np.empty((3, 3))
beta = 25
mean, sigma, Phi = predict_polynomial_bayes(x, m, S, beta)
assert mean.shape == (N,), "the shape of mean is incorrect"
assert sigma.shape == (N,), "the shape of sigma is incorrect"
assert Phi.shape == (N, m.shape[0]), "the shape of Phi is incorrect"
Explanation: 2.3 Prediction (10 points)
The predictive distribution of Bayesian linear regression is:
$$ p(t \;|\; \bx, \bt, \alpha, \beta) = \mathcal{N}(t \;|\; \bm_N^T \phi(\bx), \sigma_N^2(\bx))$$
$$ \sigma_N^2 = \frac{1}{\beta} + \phi(\bx)^T \bS_N \phi(\bx) $$
where $\phi(\bx)$ are the computed features for a new datapoint $\bx$, and $t$ is the predicted variable for datapoint $\bx$.
Write a function predict_polynomial_bayes(x, m, S, beta) that returns the predictive mean, variance and design matrix $\bPhi$ given a new datapoint x, posterior mean m, posterior covariance S and a choice of noise precision beta.
End of explanation
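The predictive mean and variance can also be computed without a Python loop; a vectorized sketch of the same two formulas:

```python
import numpy as np

def predict(Phi, m, S, beta):
    # mean_i = m^T phi(x_i);  var_i = 1/beta + phi(x_i)^T S phi(x_i)
    mean = Phi @ m
    var = 1.0 / beta + np.einsum('ij,jk,ik->i', Phi, S, Phi)
    return mean, var

rng = np.random.default_rng(0)
Phi = rng.normal(size=(5, 3))
m = np.array([1.0, 2.0, 3.0])
beta = 25.0

# With zero posterior covariance, only the noise term 1/beta remains.
mean, var = predict(Phi, m, np.zeros((3, 3)), beta)
print(np.allclose(var, 1.0 / beta))  # True
print(np.allclose(mean, Phi @ m))    # True
```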
import matplotlib.pyplot as plt
# Generate 10 datapoints
x3, t3 = gen_cosine2(10)
# Compute posterior mean and covariance
alpha = 1/2
beta = 1/(0.2*0.2)
M = 4
posterior_mean, covariance, Phi = fit_polynomial_bayes(x3, t3, M, alpha, beta)
# Get Bayesian predictive distribution
mean, sigma, Phi = predict_polynomial_bayes(x3, posterior_mean, covariance, beta)
# Plot the predictive mean
x = np.arange(0.0, 2*np.pi, 0.01)
p1 = plt.plot(x, posterior_mean[0] + posterior_mean[1]*x + posterior_mean[2]*(x*x) + \
posterior_mean[3]*np.power(x,3) + posterior_mean[4]*np.power(x,4), label="Predictive mean")
# Plot the predictive variance
mean, sigma, Phi = predict_polynomial_bayes(x, posterior_mean, covariance, beta)
p2 = plt.fill_between(x, mean-(np.sqrt(sigma)), mean+(np.sqrt(sigma)), alpha=0.1, label="Predictive variance")
# Include the datapoints in your plot
p3 = plt.scatter(x3, t3, label="Datapoints")
# Control layout
axes = plt.gca()
axes.set_xlim([0, 2*np.pi])
axes.set_ylim([-1.5, 1.5])
plt.xlabel("x")
plt.ylabel("target")
plt.legend()
plt.show()
Explanation: 2.4 Plot predictive distribution (10 points)
a) (5 points) Generate 10 datapoints with gen_cosine2(10). Compute the posterior mean and covariance for a Bayesian polynomial regression model with $M=4$, $\alpha=\frac{1}{2}$ and $\beta=\frac{1}{0.2^2}$.
Plot the Bayesian predictive distribution, where you plot (for $x$ between 0 and $2 \pi$) $t$'s predictive mean and a 1-sigma predictive variance using plt.fill_between(..., alpha=0.1) (the alpha argument induces transparency).
Include the datapoints in your plot.
End of explanation
# Draw 100 samples from the parameters' posterior distribution
samples = np.random.multivariate_normal(posterior_mean, covariance, size=100)
# Plot every sample on a fine grid over [0, 2*pi)
x = np.arange(0.0, 2*np.pi, 0.01)
for s in samples:
plt.plot(x, s[0] + s[1]*x + s[2]*(x*x) + s[3]*np.power(x,3) + s[4]*np.power(x,4))
# Control layout
plt.xlabel("x")
plt.ylabel("target")
axes = plt.gca()
axes.set_xlim([0,2*np.pi])
axes.set_ylim([-1.5,1.5])
plt.show()
Explanation: b) (5 points) For a second plot, draw 100 samples from the parameters' posterior distribution. Each of these samples is a certain choice of parameters for 4-th order polynomial regression.
Display each of these 100 polynomials.
End of explanation |
7,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting a Non-Deterministic <span style="font-variant
Step1: In order to transform a non-deterministic <span style="font-variant
Step2: The function $\delta^$ maps a state into a set of states. Since the <span style="font-variant
Step3: The function allStates takes three arguments
Step4: Now we are ready to formally define how the deterministic <span style="font-variant | Python Code:
def epsClosure(s, delta):
Result = { s }
while True:
NewStates = { p for q in Result
for p in delta.get((q, ''), set())
}
if NewStates <= Result:
return frozenset(Result)
Result |= NewStates
Explanation: Converting a Non-Deterministic <span style="font-variant:small-caps;">Fsm</span> into a Deterministic <span style="font-variant:small-caps;">Fsm</span>
In this notebook we show how a non-deterministic <span style="font-variant:small-caps;">Fsm</span>
$$ F = \langle Q, \Sigma, \delta, q_0, A \rangle $$
can be transformed into a deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ such that both <span style="font-variant:small-caps;">Fsm</span>s accept the
same language, that is we have
$$ L(F) = L\bigl(\texttt{det}(F)\bigr). $$
The idea behind this transformation is that the <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ has to
compute the set of all states that the <span style="font-variant:small-caps;">Fsm</span> $F$ could be in.
Hence the states of the deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ are
sets of states of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$. A set of these states contains all those states that the non-deterministic <span style="font-variant:small-caps;">Fsm</span>
$F$ could have reached. Furthermore, a set $M$ of states of the <span style="font-variant:small-caps;">Fsm</span> $F$ is an accepting state of the <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ if the set $M$ contains an accepting state of the <span style="font-variant:small-caps;">Fsm</span> $F$.
<hr style="height:5px;background-color:blue">
In order to present the construction of $\texttt{det}(F)$ we first have to define two auxiliary functions.
We start with the <em style="color:blue">$\varepsilon$-closure</em> of a given state.
The function epsClosure takes two arguments:
- s is a state,
- delta is the transition function of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$.
The function computes the set of all those states that can be reached from the state
s via $\varepsilon$-transitions.
Formally, the set $\texttt{epsClosure}(s)$ is defined inductively:
- $s \in \texttt{epsClosure}(s)$.
- $p \in \texttt{epsClosure}(s) \wedge r \in \delta(p, \varepsilon) \;\rightarrow\; r \in \texttt{epsClosure}(s)$.
If the state $p$ is an element of the $\varepsilon$-closure of the state $s$
and there is an $\varepsilon$-transition from $p$ to some state $r$, then $r$
is also an element of the $\varepsilon$-transition of $s$.
End of explanation
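A quick check of the fixed-point iteration on a toy transition function; the function is repeated here (as eps_closure) so the snippet runs standalone, with '' marking an $\varepsilon$-transition:

```python
def eps_closure(s, delta):
    # Fixed-point iteration: keep adding states reachable via '' moves.
    result = {s}
    while True:
        new = {p for q in result for p in delta.get((q, ''), set())}
        if new <= result:
            return frozenset(result)
        result |= new

delta = {(0, ''): {1}, (1, ''): {2}, (1, 'a'): {0}}
print(eps_closure(0, delta))  # frozenset({0, 1, 2})
print(eps_closure(2, delta))  # frozenset({2})
```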
def deltaStar(s, c, delta):
return { p for q in delta.get((s, c), set())
for p in epsClosure(q, delta)
}
Explanation: In order to transform a non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$ into a deterministic
<span style="font-variant:small-caps;">Fsm</span>
$\texttt{det}(F)$ we have to extend the function $\delta:Q \times \Sigma \rightarrow 2^Q$ into the function
$$\delta^*: Q \times \Sigma \rightarrow 2^Q. $$
The idea is that given a state $q$ and a character $c$, the value of $\delta^*(q,c)$ is the set of all states that the
<span style="font-variant:small-caps;">Fsm</span> $F$ could reach when it reads the character $c$ in state $q$ and then performs an arbitrary number of $\varepsilon$-transitions. Formally, the definition of $\delta^*$ is as follows:
$$ \delta^*(q_1, c) := \bigcup \bigl\{ \texttt{epsClosure}(q_2) \bigm| q_2 \in \delta(q_1, c) \bigr\}. $$
This formula is to be read as follows:
- For every state $q_2 \in Q$ that can be reached from the state $q_1$ by reading the character $c$ we
compute $\texttt{epsClosure}(q_2)$.
- Then we take the union of all these sets $\texttt{epsClosure}(q_2)$.
The function $\delta^*$ is implemented as the function deltaStar, which takes three arguments:
- s is a state,
- c is a character,
- delta is the transition function of the non-deterministic
<span style="font-variant:small-caps;">Fsm</span>.
This function computes the set of all those states that can be reached
from s when we first have a transition from state s to some state p
on reading the character c followed by any number of $\varepsilon$-transitions
starting in p.
End of explanation
def capitalDelta(M, c, delta):
Result = { s for q in M
for s in deltaStar(q, c, delta)
}
return frozenset(Result)
Explanation: The function $\delta^*$ maps a state into a set of states. Since the <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ uses sets of states of the <span style="font-variant:small-caps;">Fsm</span> $F$ as its states we need a function that maps sets of states of the <span style="font-variant:small-caps;">Fsm</span> $F$ into sets of states. Hence we generalize
the function $\delta^*$ to the function
$$ \Delta: 2^Q \times \Sigma \rightarrow 2^Q $$
such that for a set $M$ of states and a character $c$ the expression $\Delta(M, c)$
computes the set of all those states that the <span style="font-variant:small-caps;">Fsm</span> $F$ could be in if it is in a state from the set $M$, then
reads the character $c$, and finally makes some $\varepsilon$-transitions.
The formal definition is as follows:
$$ \Delta(M,c) := \bigcup \bigl\{ \delta^*(q,c) \bigm| q \in M \bigr\}. $$
This formula is easy to understand: For every state $q \in M$ we compute the set of states that the
<span style="font-variant:small-caps;">Fsm</span> could be in after reading the character $c$ and doing some
$\varepsilon$-transitions. Then we take the union of these sets.
End of explanation
def allStates(Q, delta, Sigma):
Result = { Q }
while True:
NewStates = { capitalDelta(M, c, delta) for M in Result
for c in Sigma
}
if NewStates <= Result:
return Result
Result |= NewStates
Explanation: The function allStates takes three arguments:
- $Q$ is $\texttt{epsClosure}(q_0)$, where $q_0$ is the start state of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$,
- $\delta$ is the transition function of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$, and
- $\Sigma$ is the alphabet of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$.
The function allStates computes the set of all states of the deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$
that can be reached from the start state.
End of explanation
def nfa2dfa(nfa):
States, Sigma, delta, q0, Final = nfa
newStart = epsClosure(q0, delta)
NewStates = allStates(newStart, delta, Sigma)
newDelta = { (M, c): capitalDelta(M, c, delta) for M in NewStates
for c in Sigma
}
NewFinal = { M for M in NewStates
if M & Final != set()
}
return NewStates, Sigma, newDelta, newStart, NewFinal
Explanation: Now we are ready to formally define how the deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$
is computed from the non-deterministic <span style="font-variant:small-caps;">Fsm</span>
$F = \bigl\langle Q, \Sigma, \delta, q_0, A \bigr\rangle$.
We define:
$$ \texttt{det}(F) := \bigl\langle \texttt{allStates}(\texttt{epsClosure}(q_0)), \Sigma, \Delta, \texttt{epsClosure}(q_0), \widehat{A} \bigr\rangle $$
where the components of this tuple are given as follows:
- The set of states of $\texttt{det}(F)$ is the set of all states that can be reached from the set $\texttt{epsClosure}(q_0)$.
- The input alphabet $\Sigma$ does not change when going from $F$ to $\texttt{det}(F)$.
After all, the deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ has to recognize the same language as the non-deterministic
<span style="font-variant:small-caps;">Fsm</span> $F$.
- The function $\Delta$, that has been defined previously, specified how the set of states change when a
character is read.
- The start state $\texttt{epsClosure}(q_0)$ of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $\texttt{det}(F)$ is the set of all states
that can be reached from the start state $q_0$ of the non-deterministic <span style="font-variant:small-caps;">Fsm</span> $F$
via $\varepsilon$-transitions.
- The set of accepting states $\widehat{A}$ is the set of those subsets of $Q$ that contain an accepting
state of the <span style="font-variant:small-caps;">Fsm</span> $F$:
$$\widehat{A} := \bigl\{ M \in 2^Q \mid M \cap A \not= \{\} \bigr\}. $$
End of explanation |
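To see the construction end-to-end, here is a compact standalone sketch of the subset construction on a small NFA without $\varepsilon$-transitions; nfa_to_dfa is a simplified re-implementation of the pipeline above, not the notebook's nfa2dfa itself:

```python
def nfa_to_dfa(sigma, delta, q0, final):
    # Subset construction: DFA states are frozensets of NFA states,
    # and only the states reachable from the start set are generated.
    start = frozenset({q0})
    states, frontier, new_delta = {start}, [start], {}
    while frontier:
        M = frontier.pop()
        for c in sigma:
            N = frozenset(p for q in M for p in delta.get((q, c), set()))
            new_delta[(M, c)] = N
            if N not in states:
                states.add(N)
                frontier.append(N)
    accepting = {M for M in states if M & final}
    return states, new_delta, start, accepting

# NFA accepting strings over {a, b} that contain the substring "ab".
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2},
         (2, 'a'): {2}, (2, 'b'): {2}}
states, D, start, accepting = nfa_to_dfa({'a', 'b'}, delta, 0, {2})

def accepts(word):
    M = start
    for c in word:
        M = D[(M, c)]
    return M in accepting

print(len(states), accepts('aab'), accepts('ba'))  # 4 True False
```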
7,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage reference guide for haystack-reverse
this is an example of every haystack-reverse commands.
The zeus.vmem.856.dump is there https
Step1: First we need to generate the analysis for the process memory dump.
Step2: Then we can start to use some of the cli
Step3: Mhh interesting string... I wonder what memory chunk was that allocated in.
Step4: Ah, that make sense.. It's a classic utf16 string. The whole allocated memory chunk is being used for a string.
Lets look at the bytes behind the scene.
Step5: I wonder if this record was referenced in some other record...
Maybe we can find a parent record that points to this string...
Step6: Tough luck... What about the others ?
Step7: That looks interesting. A record made of 82x 4-bytes pointers and some trailings zeroes/padding.
Let's see if we can check that out with haystack CLI.
Step8: So, due to a little monkey patching, there is a CString ctypes types available in the haystack ctypes module.
Step9: Oh, that is pretty good...
but it seems the first few strings are not quite right..
Step10: Mhh, it seems the first few strings are utf16 strings. Lets try with a Wide char string type. | Python Code:
!haystack-reverse --help
Explanation: Usage reference guide for haystack-reverse
this is an example of every haystack-reverse commands.
The zeus.vmem.856.dump is there https://dl.dropboxusercontent.com/u/10222931/HAYSTACK/zeus.vmem.856.dump.tgz
It was extracted from pid 856 from the zeus.img image from http://malwarecookbook.googlecode.com/svn-history/r26/trunk/17/1/zeus.vmem.zip
End of explanation
!haystack-reverse ../test/dumps/vol/zeus.vmem.856.dump
Explanation: First we need to generate the analysis for the process memory dump.
End of explanation
!ls -al ../test/dumps/vol/zeus.vmem.856.dump/cache/
!cat ../test/dumps/vol/zeus.vmem.856.dump/cache/*.strings| grep -a http
Explanation: Then we can start to use some of the cli
End of explanation
!haystack-reverse-show ../test/dumps/vol/zeus.vmem.856.dump 0xc64e8
Explanation: Mhh interesting string... I wonder what memory chunk was that allocated in.
End of explanation
!haystack-reverse-hex ../test/dumps/vol/zeus.vmem.856.dump 0xc64e8
Explanation: Ah, that make sense.. It's a classic utf16 string. The whole allocated memory chunk is being used for a string.
Lets look at the bytes behind the scene.
End of explanation
!haystack-reverse-parents ../test/dumps/vol/zeus.vmem.856.dump 0xc64e8
Explanation: I wonder if this record was referenced in some other record...
Maybe we can find a parent record that points to this string...
End of explanation
!haystack-reverse-parents ../test/dumps/vol/zeus.vmem.856.dump 0xc32d98
!haystack-reverse-parents ../test/dumps/vol/zeus.vmem.856.dump 0xc329f8
Explanation: Tough luck... What about the others ?
End of explanation
!cat ../test/structures/zeus/records.py
Explanation: That looks interesting. A record made of 82x 4-bytes pointers and some trailings zeroes/padding.
Let's see if we can check that out with haystack CLI.
End of explanation
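For orientation, a ctypes record of the kind defined in records.py might look like the following. This is a hypothetical sketch matching the 82 pointer slots observed above, not the actual array_of_pointers definition:

```python
import ctypes

class ArrayOfPointers(ctypes.Structure):
    # 82 four-byte pointer slots, matching the 82x 4-byte chunk seen above
    # (hypothetical sketch, not the real records.py definition).
    _fields_ = [('ptrs', ctypes.c_uint32 * 82)]

print(ctypes.sizeof(ArrayOfPointers))  # 328
```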
!haystack-show ../test/dumps/vol/zeus.vmem.856.dump test.structures.zeus.records.array_of_pointers 0xc31e90
Explanation: So, due to a little monkey patching, there is a CString ctypes types available in the haystack ctypes module.
End of explanation
!haystack-reverse-hex ../test/dumps/vol/zeus.vmem.856.dump 0x00c32000
!haystack-reverse-show ../test/dumps/vol/zeus.vmem.856.dump 0x00c32000
Explanation: Oh, that is pretty good...
but it seems the first few strings are not quite right..
End of explanation
!haystack-show ../test/dumps/vol/zeus.vmem.856.dump test.structures.zeus.records.array_of_wcharp 0xc31e90
Explanation: Mhh, it seems the first few strings are utf16 strings. Lets try with a Wide char string type.
End of explanation |
7,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MAT245 Lab 10
Multi-Layer Perceptrons
Structure & Flow of Information
Multi-layer perceptrons (MLPs) are a simple class of neural networks. It's easiest to understand how an MLP works by examining one directly. Below we have a diagram of a basic MLP
<img src="mlp.png">
There are four input neurons, three hidden neurons, and two output neurons. We can label these as follows
Step1: Training a multi-layer perceptron
Suppose $\textbf{x}_1, \dots, \textbf{x}_n$ are samples of data. Each data point $\textbf{x}_i$ is associated to a target value $\textbf{y}_i$, and we want to train a neural network to approximate the mapping $\textbf{x}_i \mapsto \textbf{y}_i$. We do so by choosing the weights $\textbf{w}$ that minimize some error function. For example, if $E$ is the mean squared error function,
$$
E(\textbf{w}) = \frac{1}{n-1} \sum_{i=1}^n \| F_{\mathrm{net}}(\textbf{x}_i, \textbf{w}) - \textbf{y}_i \|^2.
$$
In practice we usually have to approximate the optimal weights using a process like gradient descent.
Classification using a multi-layer perceptron
In a classification task, we try to assign each input $\textbf{x}_i$ to one of $n$ different classes/categories. Essentially, we're trying to predict the correct label for the data point. Here's how an MLP is typically used to perform a classification task
Step2: Neural networks tend to perform better when the inputs are scaled to have zero mean and unit variance. Use sklearn.preprocessing.StandardScaler to appropriately scale the training and test sets. Note | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import expit
from sklearn import datasets, mixture
xs = np.linspace(-5, 5)
fig = plt.figure(figsize=(20, 5))
## Plot relu
ax1 = fig.add_subplot(1, 3, 1)
ax1.plot(xs, np.maximum(0, xs))
## Plot sigmoid
ax2 = fig.add_subplot(1, 3, 2)
ax2.plot(xs, expit(xs))
## Plot tanh
ax3 = fig.add_subplot(1, 3, 3)
ax3.plot(xs, np.tanh(xs))
plt.show()
Explanation: MAT245 Lab 10
Multi-Layer Perceptrons
Structure & Flow of Information
Multi-layer perceptrons (MLPs) are a simple class of neural networks. It's easiest to understand how an MLP works by examining one directly. Below we have a diagram of a basic MLP
<img src="mlp.png">
There are four input neurons, three hidden neurons, and two output neurons. We can label these as follows:
inputs: $i_1, \dots, i_4$
hiddens: $h_1, h_2, h_3$
outputs: $o_1, o_2$.
Each line between two neurons represents a connection and has an associated weight. For instance, $i_1$ connects into $h_2$; the weight of the connection between these two is denoted $w_{i_1, h_2}$.
A network like the above gives a mapping $\mathbb{R}^4 \to \mathbb{R}^2$. Let's see how a sample input vector $(x_1, x_2, x_3, x_4)$ flows through the network.
Assign each entry in the input vector to the corresponding input neuron, i.e. set $i_1 = x_1, \dots, i_4 = x_4$.
Compute the activations of the hidden neurons via the formula
\begin{align}
h_i &= f(w_{i_1,h_i} i_1 + \dots + w_{i_4,h_i} i_4) \\
&= f(\textbf{w}^{h_i} \cdot \textbf{i})
\end{align}
where $f$ is the network's activation function. The most common activation functions are the relu, sigmoid and tanh functions (pictured below). Notice that we simply compute the dot product of the incoming weights with the inputs and plug the result into the activation function $f$.
Compute the activations of the output neurons via a similar formula:
\begin{align}
o_i &= f(w_{h_1,o_i} h_1 + w_{h_2,o_i} h_2 + w_{h_3,o_i} h_3) \\
&= f(\textbf{w}^{o_i} \cdot \textbf{h}).
\end{align}
So the neural network implements the map $(\textbf{x}, \textbf{w}) \mapsto F_{\mathrm{net}}(\textbf{x}, \textbf{w}) = (o_1, o_2)$
where the outputs $(o_1, o_2)$ are obtained from the process above.
Graphs of common activations
End of explanation
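The flow of information described above is just two matrix-vector products with an activation after each. A minimal numpy sketch of the 4-3-2 network with random placeholder weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_ih, W_ho, f=sigmoid):
    # One matrix-vector product per layer, activation after each:
    # h = f(W_ih x), o = f(W_ho h).
    h = f(W_ih @ x)
    return f(W_ho @ h)

rng = np.random.default_rng(0)
W_ih = rng.normal(size=(3, 4))  # weights into the 3 hidden neurons
W_ho = rng.normal(size=(2, 3))  # weights into the 2 output neurons
out = forward(np.array([1.0, 2.0, 3.0, 4.0]), W_ih, W_ho)
print(out.shape)  # (2,)
```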
iris = datasets.load_iris()
xs = iris.data[:, 0:2]
ys = iris.target
Explanation: Training a multi-layer perceptron
Suppose $\textbf{x}_1, \dots, \textbf{x}_n$ are samples of data. Each data point $\textbf{x}_i$ is associated to a target value $\textbf{y}_i$, and we want to train a neural network to approximate the mapping $\textbf{x}_i \mapsto \textbf{y}_i$. We do so by choosing the weights $\textbf{w}$ that minimize some error function. For example, if $E$ is the mean squared error function,
$$
E(\textbf{w}) = \frac{1}{n-1} \sum_{i=1}^n \| F_{\mathrm{net}}(\textbf{x}_i, \textbf{w}) - \textbf{y}_i \|^2.
$$
In practice we usually have to approximate the optimal weights using a process like gradient descent.
Classification using a multi-layer perceptron
In a classification task, we try to assign each input $\textbf{x}_i$ to one of $n$ different classes/categories. Essentially, we're trying to predict the correct label for the data point. Here's how an MLP is typically used to perform a classification task:
The data
For a classification problem with $n$ classes, the targets $\textbf{y}_i$ are usually one-hot encoded. That is, if sample $\textbf{x}_i$ belongs to the $k^{th}$ class, the corresponding $\textbf{y}_i$ is a zero vector except for a single entry of 1.0 in the $k^{th}$ position.
$$
\textbf{y}_i = \left( 0.0, 0.0, \dots, 1.0, \dots, 0.0\right).
$$
We interpret this $\textbf{y}_i$ as a probability distribution that assigns all of the probability to the true class.
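One-hot targets are easy to build by indexing the identity matrix; a small sketch:

```python
import numpy as np

def one_hot(labels, n_classes):
    # Row k of the identity matrix is the one-hot code for class k.
    return np.eye(n_classes)[labels]

print(one_hot(np.array([0, 2, 1]), 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```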
Converting outputs to probabilities
A standard MLP outputs a vector in $\mathbb{R}^n$. This output needs to be converted into a probability distribution over the $n$ classes. To do so we can apply the $\mathrm{softmax}$ function. Recall that the $\textrm{softmax}$ of a vector $\textbf{x} \in \mathbb{R}^n$ is
$$
\textrm{softmax}(\textbf{x}) = (\sigma(\textbf{x})_1, \dots, \sigma(\textbf{x})_n) = \left( \frac{e^{x_1}}{\sum_{j=1}^n e^{x_j}}, \dots, \frac{e^{x_n}}{\sum_{j=1}^n e^{x_j}} \right).
$$
Since $\sum_{j=1}^n \sigma(F_\textrm{net}(\textbf{x}, \textbf{w}))_j = 1$, applying softmax turns the output of an MLP into a probability distribution.
Choosing an appropriate error function
A common loss function for probability distributions is the cross-entropy loss. The cross entropy between two distributions $\textbf{p} = (p_1, p_2, \dots, p_n)$ and $\textbf{q} = (q_1, q_2, \dots, q_n)$ is given by the formula
$$
\textrm{cross-entropy}(\textbf{p}, \textbf{q}) = -\sum_{i=1}^n p_i \log(q_i).
$$
Here we interpret $\textbf{p}$ as the "true" distribution, and $\textbf{q}$ is the approximatation we want to evaluate.
Lower cross-entropy scores are better. For our application, we want to measure the cross-entropy between the true distribution $\textbf{y}_i$ and the prediction $\mathrm{softmax}(F_\textrm{net}(\textbf{x}_i, \textbf{w}))$. The formula in this case is:
$$
\textrm{cross-entropy}(\textbf{y}_i, \sigma(F_\textrm{net}(\textbf{x}_i, \textbf{w}))) = -\sum_{j=1}^n y_{i,j} \log \bigl( \sigma (F_\textrm{net}(\textbf{x}_i, \textbf{w}))_j \bigr)
$$
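Softmax and cross-entropy are a few lines of numpy; a sketch (the max-subtraction is a standard numerical-stability trick, not something the lab requires):

```python
import numpy as np

def softmax(z):
    # Subtracting the max is the standard overflow-safe implementation.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p_true, q_pred):
    # -sum_i p_i log(q_i), with p_true the target distribution.
    return -np.sum(p_true * np.log(q_pred))

q = softmax(np.array([2.0, 1.0, -1.0]))
y = np.array([1.0, 0.0, 0.0])  # one-hot target for class 0
print(np.isclose(q.sum(), 1.0))  # True: softmax yields a distribution
# The loss is smaller when the predicted mass sits on the true class.
print(cross_entropy(y, q) < cross_entropy(np.array([0.0, 1.0, 0.0]), q))  # True
```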
Goals (1)
The iris dataset consists of measurements of the sepal length, sepal width, petal length, and petal width of three different kinds of iris flowers. Our first goal is to build an MLP using sklearn to classify the irises. For visualization reasons, we will only use two of the input dimensions.
Load the sklearn iris dataset, selecting any two dimensions from the sample data for inputs into the neural network, and partition the data into 70% training and 30% validation sets. For example, to load the first two columns:
End of explanation
X, y = datasets.make_classification(n_samples=250, n_features=2, n_informative=2, n_redundant=0, n_classes=3, n_clusters_per_class=1)
gmm = mixture.GaussianMixture(n_components=3).fit(X, y)
xx, yy = np.meshgrid(np.arange(-5, 5, 0.2), np.arange(-5, 5, 0.2))
Z = gmm.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Accent, marker='.')
plt.contourf(xx, yy, Z, cmap=plt.cm.Accent, alpha=.6)
plt.show()
Explanation: Neural networks tend to perform better when the inputs are scaled to have zero mean and unit variance. Use sklearn.preprocessing.StandardScaler to appropriately scale the training and test sets. Note: While you have to transform both the training and test data, be sure to fit the scaler using the training data only.
Use sklearn.neural_network.MLPClassifier with cross-entropy error (the default and only choice) to train an MLP on the subset of the iris data you selected. Note that sklearn will automatically one-hot encode the iris classes for the training process.
Make a scatter plot of your validation set. Colour code the points based on the type of iris they represent.
Make a filled contour plot (see matplotlib.pyplot.contourf) to visualize the decision boundaries of your classifier. (See below for an example filled contour plot). How do the decision boundaries compare to your scatter plot above?
Repeat the plots above for two different choices of activation function. So if you used sigmoid activations above, plot the results of using relu and tanh activations also.
Example of a filled contour plot
End of explanation |
7,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: カスタムフェデレーテッドアルゴリズム、パート 1
Step2: フェデレーテッドデータ
TFF の際立った特徴の 1 つは、フェデレーテッドデータに関する TensorFlow ベースの計算をコンパクトに表現できることです。本チュートリアルで使用するフェデレーテッドデータという用語は、分散システム内のデバイスのグループにまたがってホストされるデータアイテムの集まりを指します。例えば、モバイルデバイスで実行するアプリケーションはデータを収集し、中央の場所にはアップロードせずローカルに保存します。あるいは、分散センサーのアレイがその場所の温度測定値を収集して保存する場合などがあります。
上記の例のようなフェデレーテッドデータは、TFF では第一級オブジェクトとして扱います。つまり、それらは関数のパラメータおよび結果として表示され、型を持ちます。この概念を強化するために、フェデレーテッドデータセットをフェデレーテッド値、またはフェデレーテッド型の値と呼びます。
理解しておくべき重要な点は、すべてのデバイスにまたがるデータアイテムのコレクション全体(例えば、分散アレイ内の全センサーの温度測定値のコレクション全体)を単一のフェデレーテッド値としてモデル化するということです。
例として、クライアントデバイスのグループがホストするフェデレーテッド float 型を TFF で定義する方法を以下に示します。分散センサーのアレイにまたがってマテリアライズする温度測定値のコレクションは、このフェデレーテッドタイプの値としてモデル化されます。
Step3: より一般的には、TFF のフェデレーテッド型は、そのメンバ構成要素のT型を指定して定義します。これには個々のデバイスに存在するデータのアイテムと、この型のフェデレーテッド値がホストされるデバイスのグループG(それに加えて後で説明するオプションの 3 つ目の情報)があります。フェデレーテッド値をホストするデバイスのグループGを、その値の配置と呼びます。したがって、<code>tff.CLIENTS</code>は配置の一例です。
Step4: 以下に示すように、メンバ構成要素Tと配置Gを持つフェデレーテッド型は、コンパクトに{T}@Gと表すことができます。
Step5: この簡潔な表記法である中括弧{}には、例えば温度センサーの測定値のようにメンバ構成要素(異なるデバイス上のデータのアイテム)が異なる場合があるので、クライアントがグループとして<code>T</code>型のアイテムのマルチセットを共同でホストし、それらが一緒になってフェデレーテッド値を構成することを思い出させる役割があります。
重要な点として、フェデレーテッド値のメンバ構成要素は一般にプログラマには不透明だということがあります。つまり、フェデレーテッド値をシステム内のデバイスの識別子によってキー付けされる単純なdictであると考えるべきではありません。これらの値は、さまざまな種類の分散通信プロトコル(集約など)を抽象的に表現するフェデレーテッド演算子によってのみ、集合的に変換されるように意図されています。これが抽象的に見えたとしても心配は不要です。これについては、後ほど具体的な例を用いて説明します。
TFF のフェデレーテッド型には 2 つの種類があります。フェデレーテッド値のメンバ構成要素が(上記のように)異なる可能性があるものと、それらが全て等しいと分かっているものです。これは、tff.FederatedTypeコンストラクタの 3 番目のオプションであるall_equalパラメータによって制御されます(デフォルトは False)。
Step6: A federated type with a placement G in which all of the T-typed member constituents are known to be equal can be compactly represented as T@G (as opposed to {T}@G, i.e., with the curly braces dropped to reflect the fact that the multi-set of member constituents consists of a single item).
Step7: One example of a federated value of this type that might arise in a practical scenario is a hyperparameter (such as a learning rate or a clipping norm) that a server broadcasts to a group of devices participating in federated training.
Another example is a set of parameters for a machine learning model pre-trained at the server, which is then broadcast to a group of client devices where it can be personalized for each user.
For example, suppose we have a pair of float32 parameters a and b for a simple one-dimensional linear regression model. We can construct the (non-federated) type of such models for use in TFF as follows. The angle braces <> in the printed type string are a compact TFF notation for named or unnamed tuples.
Step8: Note that we are only specifying dtypes above. Non-scalar types are also supported. In the code above, tf.float32 is a shortcut notation for the more general tff.TensorType(dtype=tf.float32, shape=[]).
When this model is broadcast to clients, the type of the resulting federated value can be represented as shown below.
Step9: Per symmetry with the federated float above, we will refer to such a type as a federated tuple. More generally, we'll often use the term federated XYZ to refer to a federated value whose member constituents are XYZ-like. Thus, we will talk about things like federated tuples, federated sequences, federated models, and so on.
Now, coming back to float32@CLIENTS - while it appears replicated across multiple devices, it is actually a single float32, since all the members are the same. In general, you may think of any all-equal federated type, i.e., one of the form T@G, as isomorphic to a non-federated type T, since in both cases, there's actually only a single (albeit potentially replicated) item of type T.
Given the isomorphism between T and T@G, you may wonder what purpose, if any, the latter types might serve. Read on.
Placements
Design overview
In the preceding section, we introduced the concept of placements - groups of system participants that might jointly host a federated value - and we demonstrated the use of tff.CLIENTS as an example specification of a placement.
To explain why the notion of a placement is so fundamental that it had to be incorporated into the TFF type system, recall what we mentioned at the beginning of this tutorial about the intended uses of TFF.
Although in this tutorial you will only see TFF code executed locally in a simulated environment, the goal of TFF is to enable writing code that could be deployed for execution on groups of physical devices in a distributed system, potentially including mobile or embedded devices running Android. Each of those devices would receive a separate set of instructions to execute locally, depending on the role it plays in the system (an end-user device, a centralized coordinator, an intermediate layer in a multi-tiered architecture, etc.). It is important to be able to reason about which subsets of devices execute what code, and where different portions of the data might physically materialize.
This is especially important when dealing with, e.g., application data on mobile devices. Since the data is private and can be sensitive, we need the ability to statically verify that this data will never leave the device (and to prove facts about how the data is being processed). Placement specifications are one of the mechanisms designed to support this.
TFF has been designed as a data-centric programming environment, and as such, unlike some existing frameworks that focus on operations and where those operations might run, TFF focuses on data, where that data materializes, and how it is transformed. Consequently, placement is modeled in TFF as a property of data, rather than as a property of operations on data. Indeed, as you're about to see in the next section, some of the TFF operations span across locations and run "in the network", so to speak, rather than being executed by a single machine or a group of machines.
Representing the type of a certain value as T@G or {T}@G (as opposed to just T) makes data placement decisions explicit, and together with a static analysis of programs written in TFF, it can serve as a foundation for providing formal privacy guarantees for sensitive on-device data.
An important thing to note at this point, however, is that while we encourage TFF users to be explicit about the groups of participating devices that host the data (the placements), the programmer will never deal with the raw data or the identities of individual participants.
Within the body of TFF code, by design, there's no way to enumerate the devices that constitute the group represented by tff.CLIENTS, or to probe for the existence of a specific device in the group. There's no concept of a device or client identity anywhere in the Federated Core API, the underlying set of architectural abstractions, or the core runtime infrastructure we provide to support simulations. All the computation logic you write is expressed as operations on the entire group of clients.
Recall here what we said earlier about values of federated types being unlike a Python dict, in that one cannot simply enumerate their member constituents. Think of the values that your TFF program logic manipulates as being associated with placements (groups), rather than with individual participants.
Placements are designed to be first-class citizens in TFF as well, and can appear as parameters and results of the placement type (represented by tff.PlacementType in the API). In the future, we plan to provide a variety of operators to transform or combine placements, but this is outside the scope of this tutorial. For now, it suffices to think of placement as an opaque primitive built-in type in TFF, similar to how int and bool are opaque built-in types in Python, with tff.CLIENTS being a constant literal of this type, not unlike 1 being a constant literal of type int.
Specifying placements
TFF provides two basic placement literals, tff.CLIENTS and tff.SERVER, to make it easy to express the rich variety of practical scenarios that are naturally modeled as client-server architectures, with multiple client devices (mobile phones, embedded devices, distributed databases, sensors, etc.) orchestrated by a single centralized server coordinator. TFF is designed to also support custom placements, multiple client groups, multi-tiered and other, more general distributed architectures, but discussing them is outside the scope of this tutorial.
TFF does not prescribe what either tff.CLIENTS or tff.SERVER actually represents.
In particular, tff.SERVER may be a single physical device (a member of a singleton group), but it might just as well be a group of replicas in a fault-tolerant cluster running state machine replication. Rather, we use the all_equal bit mentioned in the preceding section to express the fact that we're generally dealing with only a single item of data at the server.
Likewise, tff.CLIENTS in some applications might represent all clients in the system - what in the context of federated learning we sometimes refer to as the population, but e.g., in production implementations of Federated Averaging, it may represent a cohort - a subset of the clients selected for participation in a particular round of training. The abstractly defined placements are given concrete meaning when a computation in which they appear is deployed for execution (or simply invoked like a Python function in a simulated environment, as is demonstrated in this tutorial). In our local simulations, the group of clients is determined by the federated data supplied as input.
Federated computations
Declaring federated computations
TFF is designed as a strongly-typed functional programming environment that supports modular development.
The basic unit of composition in TFF is a federated computation - a section of logic that may accept federated values as input and return federated values as output. Here's how you can define a computation that calculates the average of the temperatures reported by the sensor array from our previous example.
Step10: Looking at the code above, at this point you might be asking - aren't there already decorator constructs to define composable units, such as tf.function in TensorFlow, and if so, why introduce yet another one, and how is it different?
The short answer is that the code generated by the tff.federated_computation wrapper is neither TensorFlow nor Python - it's a specification of a distributed system in an internal platform-independent glue language. This will undoubtedly sound cryptic at this point, but please keep in mind this intuitive interpretation of a federated computation as an abstract specification of a distributed system. We'll explain it in a minute.
First, let's play with the definition a bit. TFF computations are generally modeled as functions - with or without parameters, but with well-defined type signatures. You can print the type signature of a computation by querying its type_signature property, as shown below.
Step11: The type signature tells us that the computation accepts a collection of different sensor readings on client devices and returns a single average on the server.
The input and output of this computation reside in different places (on CLIENTS vs. at the SERVER). Before we go any further, let's reflect on this for a minute - recall what we said in the preceding section on placements about TFF operations spanning across locations and running in the network, and what we just said about federated computations representing abstract specifications of distributed systems. We have just defined one such computation - a simple distributed system in which data is consumed at client devices, and the aggregate result emerges at the server.
In many practical scenarios, the computations that represent top-level tasks tend to accept their inputs and report their outputs at the server - this reflects the idea that computations might be triggered by queries that originate and terminate at the server.
However, the FC API does not impose this assumption, and many of the building blocks used internally (including the numerous tff.federated_... operators you may find in the API) have inputs and outputs with distinct placements, so in general, you should not think of a federated computation as something that runs on a server or is executed by a server. The server is just one type of participant in a federated computation. In thinking about the mechanics of such computations, it's best to always default to the global network-wide perspective, rather than the perspective of a single centralized coordinator.
In general, functional type signatures are compactly represented as (T -> U) for types T and U of inputs and outputs, respectively. The type of the formal parameter (such as sensor_readings in this case) is specified as the argument to the decorator. You don't need to specify the type of the result - it's determined automatically.
While TFF does offer limited forms of polymorphism, programmers are strongly encouraged to be explicit about the types of data they work with, as that makes understanding, debugging, and formally verifying properties of your code easier. In some cases, explicitly specifying types is a requirement (e.g., polymorphic computations are currently not directly executable).
Executing federated computations
In order to support development and debugging, TFF allows you to directly invoke computations defined this way as Python functions, as shown below. Where a computation expects a value of a federated type with the all_equal bit set to False, you can feed it as a plain list in Python, and for federated types with the all_equal bit set to True, you can just directly feed the (single) member constituent. This is also how the results are reported back to you.
Step12: When running a computation like this in simulation mode, you act as an external observer with a system-wide view, who has the ability to supply inputs and consume outputs at any location in the network - as is indeed the case here: we supplied client values at the input, and consumed the server result.
Now, let's return to the note we made earlier about the tff.federated_computation decorator emitting code in a glue language. Although the logic of TFF computations can be expressed as ordinary functions in Python (you just need to decorate them with tff.federated_computation as we've done above), and you can directly invoke them with Python arguments just like any other Python functions in this notebook, behind the scenes, as we noted earlier, TFF computations are actually not Python.
What we mean by this is that when the Python interpreter encounters a function decorated with tff.federated_computation, it traces the statements in this function's body once (at definition time), and then constructs a serialized representation of the computation's logic for future use - whether for execution, or to be incorporated as a sub-component into another computation.
You can verify this by adding a print statement, as follows.
Step13: You can think of the Python code that defines a federated computation similarly to how you would think of Python code that builds a TensorFlow graph in a non-eager context (if you're not familiar with the non-eager uses of TensorFlow, think of your Python code defining a graph of operations to be executed later, but not actually running them on the fly). The non-eager graph-building code in TensorFlow is Python, but the TensorFlow graph constructed by this code is platform-independent and serializable.
Likewise, TFF computations are defined in Python, but the Python statements in their bodies, such as tff.federated_mean in the example we've just shown, are compiled under the hood into a portable, platform-independent, serializable representation.
As a developer, you don't need to concern yourself with the details of this representation, as you will never work with it directly, but you should be aware that TFF computations are fundamentally non-eager and cannot capture arbitrary Python state. Python code contained in a TFF computation's body is executed at definition time, when the body of the Python function decorated with tff.federated_computation is traced before getting serialized. It's not retraced again at invocation time (except when the function is polymorphic; please refer to the documentation pages for details).
You may wonder why we've chosen to introduce a dedicated internal non-Python representation. One reason is that ultimately, TFF computations are intended to be deployable to real physical environments, and hosted on mobile or embedded devices, where Python may not be available.
Another reason is that TFF computations express the global behavior of distributed systems, as opposed to Python programs, which express the local behavior of individual participants. You can see that in the simple example above, with the special operator tff.federated_mean that accepts data on client devices, but deposits the result on the server.
The operator tff.federated_mean cannot be easily modeled as an ordinary operator in Python, since it doesn't execute locally - as noted earlier, it represents a distributed system that coordinates the behavior of multiple system participants. We refer to such operators as federated operators, to distinguish them from ordinary (local) operators in Python.
The TFF type system, and the fundamental set of operations supported in TFF's language, thus deviate significantly from those in Python, necessitating the use of a dedicated representation.
Composing federated computations
As noted above, federated computations and their constituents are best understood as models of distributed systems, and you can think of composing federated computations as composing more complex distributed systems from simpler ones. You can think of the tff.federated_mean operator as a kind of built-in template federated computation with the type signature ({T}@CLIENTS -> T@SERVER) (indeed, just like computations you write, this operator also has a complex structure - under the hood, TFF breaks it down into simpler operators).
The same is true of composing federated computations. The computation get_average_temperature may be invoked in the body of another Python function decorated with tff.federated_computation - doing so will cause it to be embedded in the body of the parent, much in the same way tff.federated_mean was embedded in its own body earlier.
An important restriction to be aware of is that bodies of Python functions decorated with tff.federated_computation must consist only of federated operators, i.e., they cannot directly contain TensorFlow operations. For example, you cannot directly use tf.nest interfaces to add a pair of federated values. TensorFlow code must be confined to blocks of code decorated with tff.tf_computation, discussed in the following section. Only when wrapped in this manner can the wrapped TensorFlow code be invoked in the body of a tff.federated_computation.
The reasons for this separation are technical (it's hard to trick operators such as tf.add into working with non-tensors) as well as architectural. The language of federated computations (i.e., the logic constructed from serialized bodies of Python functions decorated with tff.federated_computation) is designed to serve as a platform-independent glue language. This glue language is currently used to build distributed systems from embedded sections of TensorFlow code (confined to tff.tf_computation blocks). In the fullness of time, we anticipate the need to embed sections of other, non-TensorFlow logic, such as relational database queries representing input pipelines, all connected together using the same glue language (the tff.federated_computation blocks).
TensorFlow logic
Declaring TensorFlow computations
TFF is designed for use with TensorFlow. As such, the bulk of the code you will write in TFF is likely to be ordinary (i.e., locally-executing) TensorFlow code. In order to use such code with TFF, as noted above, it just needs to be decorated with tff.tf_computation.
For example, here's how we could implement a function that takes a number and adds 0.5 to it.
Step14: Once again, looking at this, you may be wondering why we should define another decorator tff.tf_computation instead of simply using an existing mechanism such as tf.function. Unlike in the preceding section, here we are dealing with an ordinary block of TensorFlow code.
There are a few reasons for this, the full treatment of which goes beyond the scope of this tutorial, but it's worth naming the main ones:
In order to embed reusable building blocks implemented with TensorFlow code in the bodies of federated computations, they need to satisfy certain properties - such as getting traced and serialized at definition time and having type signatures. This generally requires some form of a decorator.
In general, we recommend using TensorFlow's native mechanisms for composition, such as tf.function, wherever possible, as the exact manner in which TFF's decorator interacts with eager functions can be expected to evolve.
Now, coming back to the example code snippet above, the computation add_half we just defined can be treated by TFF just like any other TFF computation. In particular, it has a TFF type signature.
Step15: Note that this type signature does not have placements. TensorFlow computations cannot consume or return federated types.
You can now also use add_half as a building block in other computations. For example, here's how you can use the tff.federated_map operator to apply add_half pointwise to all member constituents of a federated float on client devices.
Step16: Executing TensorFlow computations
Execution of computations defined with tff.tf_computation follows the same rules as those we described for tff.federated_computation. They can be invoked as ordinary Python callables, as follows.
Step17: Once again, it is worth noting that invoking the computation add_half_on_clients in this manner simulates a distributed process. Data is consumed on clients, and returned on clients. Indeed, this computation has each client perform a local action. There is no tff.SERVER explicitly mentioned in this system (even if in practice, orchestrating such processing might involve one). Think of a computation defined this way as conceptually analogous to the Map stage in MapReduce.
Also, keep in mind that what we said in the preceding section about TFF computations getting serialized at definition time remains true for tff.tf_computation code as well - the Python body of add_half_on_clients gets traced once at definition time. On subsequent invocations, TFF uses its serialized representation.
The only difference between Python methods decorated with tff.federated_computation and those decorated with tff.tf_computation is that the latter are serialized as TensorFlow graphs (whereas the former are not allowed to contain TensorFlow code directly embedded in them).
Under the hood, each method decorated with tff.tf_computation temporarily disables eager execution in order to allow the computation's structure to be captured. While eager execution is locally disabled, you are welcome to use eager TensorFlow, AutoGraph, TensorFlow 2.0 constructs, etc., so long as you write the logic of your computation in a manner such that it can be correctly serialized.
For example, the following code will fail:
Step18: The above fails because constant_10 has already been constructed during the serialization process, outside of the graph that tff.tf_computation constructs internally in the body of add_ten.
On the other hand, invoking Python functions that modify the current graph when called inside a tff.tf_computation is fine:
Step19: Note that the serialization mechanics of TensorFlow are evolving, and we expect the details of how TFF serializes computations to evolve as well.
Working with tf.data.Datasets
As noted earlier, a unique feature of tff.tf_computation is that it allows you to work with tf.data.Datasets defined abstractly as formal parameters by your code. Parameters to be represented in TensorFlow as data sets need to be declared using the tff.SequenceType constructor.
For example, the type specification tff.SequenceType(tf.float32) defines an abstract sequence of float elements in TFF. Sequences can contain either tensors or complex nested structures (we'll see examples of those later). The concise representation of a sequence of T-typed items is T*.
Step20: 温度センサーの例では、各センサーが単一の温度測定値ではなく、それぞれ複数の温度測定値を保持しているとします。ここではtf.data.Dataset.reduce演算子を使用して、TensorFlow で 単一のローカルデータセットの平均温度を計算する TFF 計算の定義方法を説明します。
Step21: tff.tf_computationで装飾されたメソッドの本体では、TFF のシーケンス型の形式的なパラメータは、tf.data.Datasetのように振る舞うオブジェクトとして単純に表現されます。つまり、同じプロパティとメソッドをサポートします。(現在はその型のサブクラスとして実装されていませんが、これは TensorFlow のデータセットのサポートの進化に伴って変更される可能性があります。)
以下のようにして簡単に確認することができます。
Step22: Keep in mind that, unlike ordinary tf.data.Datasets, these dataset-like objects are placeholders. They don't contain any elements, since they represent abstract sequence-typed parameters, to be bound to concrete data when used in a concrete context. Support for abstractly-defined placeholder data sets is still somewhat limited at this point, and in the early days of TFF you may encounter certain restrictions, but we won't need to worry about them in this tutorial (please refer to the documentation pages for details).
When locally executing a computation that accepts a sequence in simulation mode, as in this tutorial, you can feed the sequence as a Python list, as below (among other ways, e.g., as a tf.data.Dataset in eager mode, but for now, we'll keep it simple).
Step23: Like all other TFF types, sequences like those defined above can use the tff.StructType constructor to define nested structures. For example, here's how one could declare a computation that accepts a sequence of pairs A and B, and returns the sum of their products. We include tracing statements in the body of the computation, so that you can see how the TFF type signature translates into the dataset's output_types and output_shapes.
Step24: Support for using tf.data.Datasets as formal parameters is still somewhat limited and evolving, although functional in simple scenarios such as those used in this tutorial.
Putting it all together
Now, let's try again to use our TensorFlow computation in a federated setting. Suppose we have a group of sensors that each have a local sequence of temperature readings. We can compute the global temperature average by averaging the sensors' local averages, as follows.
Step25: Note that this isn't a simple average across all the local temperature readings from all clients, as that would require weighing the local averages from different clients by the number of readings they locally maintain. Updating the above code accordingly is left as an exercise for the reader; the tff.federated_mean operator accepts the weight as an optional second argument (expected to be a federated float).
Also note that the input to get_global_temperature_average now becomes a federated float sequence. Federated sequences are how we will typically represent on-device data in federated learning, with sequence elements typically representing data batches (you will see examples of this shortly).
Step26: Here's how we can execute our computation locally on a sample of data in Python. Notice the manner in which we supply the input - as a list of lists. The outer list iterates over the devices in the group represented by tff.CLIENTS, and the inner ones over the elements in each device's local sequence.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
Explanation: Custom Federated Algorithms, Part 1: Introduction to the Federated Core
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/custom_federated_algorithms_1.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/federated/tutorials/custom_federated_algorithms_1.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/federated/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial is the first part of a two-part series that demonstrates how to implement custom types of federated algorithms in TensorFlow Federated (TFF) using the Federated Core (FC) - a set of lower-level interfaces that serve as a foundation for the implementation of the Federated Learning (FL) layer.
This first part is more conceptual; we introduce some of the key concepts and programming abstractions used in TFF, and demonstrate their use on a very simple example with a distributed array of temperature sensors. In the second part of this series, we use the mechanisms introduced here to implement a simple version of federated training and evaluation algorithms. As a follow-up, we encourage you to study the <a>implementation</a> of federated averaging in <code>tff.learning</code>.
By the end of this series, you should be able to recognize that the applications of the Federated Core (FC) are not necessarily limited to learning. The programming abstractions it offers are quite generic, and could be used, e.g., to implement analytics and other custom types of computations over distributed data.
Although this tutorial is designed to be self-contained, we encourage you to first read the tutorials on image classification and text generation, as they provide a higher-level and gentler introduction to the TensorFlow Federated framework and the Federated Learning API (tff.learning), and will help you put the concepts described here in context.
Intended uses
In a nutshell, the Federated Core (FC) is a development environment that makes it possible to compactly express program logic that combines TensorFlow code with distributed communication operators, such as those used in Federated Averaging - computing distributed sums, averages, and other types of distributed aggregations over a set of client devices in the system, broadcasting models and parameters to those devices, and so on.
You may be aware of tf.contrib.distribute, and a natural question to ask at this point is: in what ways does this framework differ? Both frameworks attempt to make TensorFlow computations distributed, after all.
One way to think about it is that, whereas the stated goal of tf.contrib.distribute is to allow users to use existing models and training code with minimal changes to enable distributed training, the goal of TFF's Federated Core is to give researchers and practitioners explicit control over the specific patterns of distributed communication they will use in their systems. The focus of FC is on providing a flexible and extensible language for expressing distributed data flow algorithms, rather than a concrete set of implemented distributed training capabilities.
One of the primary target audiences for TFF's FC API is researchers and practitioners who want to experiment with new federated learning algorithms and evaluate the consequences of subtle design choices that affect the manner in which the flow of data in the distributed system is orchestrated, without getting bogged down by system implementation details. The level of abstraction the FC API aims for roughly corresponds to the pseudocode used to describe the mechanics of a federated learning algorithm in a research publication - what data exists in the system and how it is transformed - without dropping to the level of individual point-to-point network message exchanges.
TFF as a whole targets scenarios in which data is distributed and must remain so, e.g., for privacy reasons, and where collecting all data at a centralized location may not be a viable option. This has implications for the implementation of machine learning algorithms that require an increased degree of explicit control, as compared to scenarios in which all data can be accumulated at a centralized location in a data center.
Before we start
Before we dive into the code, please try to run the following "Hello World" example to make sure your environment is set up correctly. If it doesn't work, please refer to the Installation guide for instructions.
End of explanation
federated_float_on_clients = tff.type_at_clients(tf.float32)
Explanation: Federated data
One of the distinguishing features of TFF is that it allows you to compactly express TensorFlow-based computations on federated data. We will be using the term federated data in this tutorial to refer to a collection of data items hosted across a group of devices in a distributed system. For example, applications running on mobile devices may collect data and store it locally, without uploading it to a centralized location. Or, an array of distributed sensors may collect and store temperature readings at their locations.
Federated data like those in the above examples are treated in TFF as first-class citizens, i.e., they may appear as parameters and results of functions, and they have types. To reinforce this notion, we will refer to federated data sets as federated values, or as values of federated types.
The important point to understand is that we model the entire collection of data items across all devices (e.g., the entire collection of temperature readings from all sensors in a distributed array) as a single federated value.
For example, here's how one would define in TFF the type of a federated float hosted by a group of client devices. A collection of temperature readings that materialize across an array of distributed sensors could be modeled as a value of this federated type.
End of explanation
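Since member constituents are opaque, a federated value can be thought of abstractly as a multiset tagged with a placement, observable only through aggregate operators. The following plain-Python sketch is purely illustrative - the class and method names are ours, not part of the TFF API:

```python
# Illustrative only: a toy model of a federated value as an opaque,
# placement-tagged multiset. TFF's real representation is not a dict or list,
# and its members can only be transformed via federated operators.
class FakeFederatedValue:
    def __init__(self, members, placement):
        self._members = list(members)  # hidden member constituents
        self.placement = placement

    def federated_mean(self):
        # Stand-in for the tff.federated_mean operator: the only way
        # to observe the members here is through an aggregate.
        return sum(self._members) / len(self._members)

readings = FakeFederatedValue([68.5, 70.3, 69.8], placement='CLIENTS')
print(readings.placement)  # CLIENTS
print(readings.federated_mean())
```

The point of the sketch is that client code never enumerates `_members` directly; it only ever sees the aggregate.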
str(federated_float_on_clients.member)
str(federated_float_on_clients.placement)
Explanation: More generally, a federated type in TFF is defined by specifying the type T of its member constituents - the items of data that reside on individual devices - and the group G of devices on which federated values of this type are hosted (plus a third, optional bit of information we'll mention shortly). We refer to the group G of devices hosting a federated value as the value's placement. Thus, <code>tff.CLIENTS</code> is an example of a placement.
End of explanation
str(federated_float_on_clients)
Explanation: A federated type with member constituents T and placement G can be represented compactly as {T}@G, as shown below.
End of explanation
federated_float_on_clients.all_equal
Explanation: The curly braces {} in this concise notation serve as a reminder that the member constituents (items of data on different devices) may differ, as you would expect of, e.g., temperature sensor readings, so the clients as a group are jointly hosting a multi-set of <code>T</code>-typed items that together constitute the federated value.
It is important to note that the member constituents of a federated value are generally opaque to the programmer, i.e., a federated value should not be thought of as a simple dict keyed by an identifier of a device in the system - these values are intended to be collectively transformed only by federated operators that abstractly represent various kinds of distributed communication protocols (such as aggregation). If this sounds too abstract, don't worry - we will return to this shortly and illustrate it with concrete examples.
Federated types in TFF come in two flavors: those where the member constituents of a federated value may differ (as just seen above), and those where they are known to be all equal. This is controlled by the third, optional all_equal parameter in the tff.FederatedType constructor (it defaults to False).
End of explanation
str(tff.type_at_clients(tf.float32, all_equal=True))
Explanation: A federated type with a placement G in which all of the T-typed member constituents are known to be equal can be compactly represented as T@G (as opposed to {T}@G, i.e., with the curly braces dropped to reflect the fact that the multi-set of member constituents consists of a single item).
End of explanation
simple_regression_model_type = (
tff.StructType([('a', tf.float32), ('b', tf.float32)]))
str(simple_regression_model_type)
Explanation: One example of a federated value of this type that might arise in a practical scenario is a hyperparameter (such as a learning rate or a clipping norm) that a server broadcasts to a group of devices participating in federated training.
Another example is a set of parameters for a machine learning model pre-trained at the server, which is then broadcast to a group of client devices where it can be personalized for each user.
For example, suppose we have a pair of float32 parameters a and b for a simple one-dimensional linear regression model. We can construct the (non-federated) type of such models for use in TFF as follows. The angle braces <> in the printed type string are a compact TFF notation for named or unnamed tuples.
End of explanation
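As a reminder of what the two parameters in the `<a=float32,b=float32>` tuple represent, a one-dimensional linear model simply computes y = a * x + b. A minimal plain-Python sketch (the function name and the dict representation are ours, for illustration only, not TFF code):

```python
# A toy stand-in for the <a=float32,b=float32> model type above:
# the model predicts y = a * x + b for a scalar input x.
def predict(model, x):
    return model['a'] * x + model['b']

model = {'a': 2.0, 'b': 0.5}
print(predict(model, 3.0))  # 6.5
```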
str(tff.type_at_clients(
simple_regression_model_type, all_equal=True))
Explanation: Note that we are only specifying dtypes above. Non-scalar types are also supported. In the code above, tf.float32 is a shortcut notation for the more general tff.TensorType(dtype=tf.float32, shape=[]).
When this model is broadcast to clients, the type of the resulting federated value can be represented as shown below.
End of explanation
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
return tff.federated_mean(sensor_readings)
Explanation: Per symmetry with the federated float above, we will refer to such a type as a federated tuple. More generally, we'll often use the term federated XYZ to refer to a federated value whose member constituents are XYZ-like. Thus, we will talk about things like federated tuples, federated sequences, federated models, and so on.
Now, coming back to float32@CLIENTS - while it appears replicated across multiple devices, it is actually a single float32, since all the members are the same. In general, you may think of any all-equal federated type, i.e., one of the form T@G, as isomorphic to a non-federated type T, since in both cases, there's actually only a single (albeit potentially replicated) item of type T.
Given the isomorphism between T and T@G, you may wonder what purpose, if any, the latter types might serve. Read on.
Placements
Design overview
In the preceding section, we introduced the concept of placements - groups of system participants that might jointly host a federated value - and we demonstrated the use of tff.CLIENTS as an example specification of a placement.
To explain why the notion of a placement is so fundamental that it had to be incorporated into the TFF type system, recall what we mentioned at the beginning of this tutorial about the intended uses of TFF.
Although in this tutorial you will only see TFF code executed locally in a simulated environment, the goal of TFF is to enable writing code that could be deployed for execution on groups of physical devices in a distributed system, potentially including mobile or embedded devices running Android. Each of those devices would receive a separate set of instructions to execute locally, depending on the role it plays in the system (an end-user device, a centralized coordinator, an intermediate layer in a multi-tiered architecture, etc.). It is important to be able to reason about which subsets of devices execute what code, and where different portions of the data might physically materialize.
This is especially important when dealing with, e.g., application data on mobile devices. Since the data is private and can be sensitive, we need the ability to statically verify that this data will never leave the device (and to prove facts about how the data is being processed). Placement specifications are one of the mechanisms designed to support this.
TFF has been designed as a data-centric programming environment, and as such, unlike some existing frameworks that focus on operations and where those operations might run, TFF focuses on data, where that data materializes, and how it is transformed. Consequently, placement is modeled in TFF as a property of data, rather than as a property of operations on data. Indeed, as you're about to see in the next section, some of the TFF operations span across locations and run "in the network", so to speak, rather than being executed by a single machine or a group of machines.
Representing the type of a certain value as T@G or {T}@G (as opposed to just T) makes data placement decisions explicit, and together with a static analysis of programs written in TFF, it can serve as a foundation for providing formal privacy guarantees for sensitive on-device data.
An important thing to note at this point, however, is that while we encourage TFF users to be explicit about the groups of participating devices that host the data (the placements), the programmer will never deal with the raw data or the identities of individual participants.
Within the body of TFF code, by design, there's no way to enumerate the devices that constitute the group represented by tff.CLIENTS, or to probe for the existence of a specific device in the group. There's no concept of a device or client identity anywhere in the Federated Core API, the underlying set of architectural abstractions, or the core runtime infrastructure we provide to support simulations. All the computation logic you write is expressed as operations on the entire group of clients.
Recall here what we said earlier about values of federated types being unlike a Python dict, in that one cannot simply enumerate their member constituents. Think of the values that your TFF program logic manipulates as being associated with placements (groups), rather than with individual participants.
Placements are designed to be first-class citizens in TFF as well, and can appear as parameters and results of the placement type (represented by tff.PlacementType in the API). In the future, we plan to provide a variety of operators to transform or combine placements, but this is outside the scope of this tutorial. For now, it suffices to think of placement as an opaque primitive built-in type in TFF, similar to how int and bool are opaque built-in types in Python, with tff.CLIENTS being a constant literal of this type, not unlike 1 being a constant literal of type int.
Specifying placements
TFF provides two basic placement literals, tff.CLIENTS and tff.SERVER, to make it easy to express the rich variety of practical scenarios that are naturally modeled as client-server architectures, with multiple client devices (mobile phones, embedded devices, distributed databases, sensors, etc.) orchestrated by a single centralized server coordinator. TFF is designed to also support custom placements, multiple client groups, multi-tiered and other, more general distributed architectures, but discussing them is outside the scope of this tutorial.
TFF does not prescribe what either tff.CLIENTS or tff.SERVER actually represents.
In particular, tff.SERVER may be a single physical device (a member of a singleton group), but it might just as well be a group of replicas in a fault-tolerant cluster running state machine replication. Rather, we use the all_equal bit mentioned in the preceding section to express the fact that we're generally dealing with only a single item of data at the server.
Likewise, tff.CLIENTS in some applications might represent all clients in the system - what in the context of federated learning we sometimes refer to as the population, but e.g., in production implementations of Federated Averaging, it may represent a cohort - a subset of the clients selected for participation in a particular round of training. The abstractly defined placements are given concrete meaning when a computation in which they appear is deployed for execution (or simply invoked like a Python function in a simulated environment, as is demonstrated in this tutorial). In our local simulations, the group of clients is determined by the federated data supplied as input.
Federated computations
Declaring federated computations
TFF is designed as a strongly-typed functional programming environment that supports modular development.
The basic unit of composition in TFF is a federated computation - a section of logic that may accept federated values as input and return federated values as output. Here's how you can define a computation that calculates the average of the temperatures reported by the sensor array from our previous example.
End of explanation
str(get_average_temperature.type_signature)
Explanation: Looking at the code above, at this point you might be asking - aren't there already decorator constructs to define composable units, such as tf.function in TensorFlow, and if so, why introduce yet another one, and how is it different?
The short answer is that the code generated by the tff.federated_computation wrapper is neither TensorFlow nor Python - it's a specification of a distributed system in an internal platform-independent glue language. This will undoubtedly sound cryptic at this point, but please keep in mind this intuitive interpretation of a federated computation as an abstract specification of a distributed system. We'll explain it in a minute.
First, let's play with the definition a bit. TFF computations are generally modeled as functions - with or without parameters, but with well-defined type signatures. You can print the type signature of a computation by querying its type_signature property, as shown below.
End of explanation
get_average_temperature([68.5, 70.3, 69.8])
Explanation: The type signature tells us that the computation accepts a collection of different sensor readings on client devices and returns a single average on the server.
The input and output of this computation reside in different places (on CLIENTS vs. at the SERVER). Before we go any further, let's reflect on this for a minute - recall what we said in the preceding section on placements about TFF operations spanning across locations and running in the network, and what we just said about federated computations representing abstract specifications of distributed systems. We have just defined one such computation - a simple distributed system in which data is consumed at client devices, and the aggregate result emerges at the server.
In many practical scenarios, the computations that represent top-level tasks tend to accept their inputs and report their outputs at the server - this reflects the idea that computations might be triggered by queries that originate and terminate at the server.
However, the FC API does not impose this assumption, and many of the building blocks used internally (including the numerous tff.federated_... operators you may find in the API) have inputs and outputs with distinct placements, so in general, you should not think of a federated computation as something that runs on a server or is executed by a server. The server is just one type of participant in a federated computation. In thinking about the mechanics of such computations, it's best to always default to the global network-wide perspective, rather than the perspective of a single centralized coordinator.
In general, functional type signatures are compactly represented as (T -> U) for types T and U of inputs and outputs, respectively. The type of the formal parameter (such as sensor_readings in this case) is specified as the argument to the decorator. You don't need to specify the type of the result - it's determined automatically.
While TFF does offer limited forms of polymorphism, programmers are strongly encouraged to be explicit about the types of data they work with, as that makes understanding, debugging, and formally verifying properties of your code easier. In some cases, explicitly specifying types is a requirement (e.g., polymorphic computations are currently not directly executable).
Executing federated computations
In order to support development and debugging, TFF allows you to directly invoke computations defined this way as Python functions, as shown below. Where a computation expects a value of a federated type with the all_equal bit set to False, you can feed it as a plain list in Python, and for federated types with the all_equal bit set to True, you can just directly feed the (single) member constituent. This is also how the results are reported back to you.
End of explanation
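For intuition, the simulated invocation above is numerically just the unweighted mean of the supplied client list - something we can check with plain Python (this sketch is our own illustration, not TFF code):

```python
# Plain-Python check of what the simulated federated mean computes
# for the client readings fed in above.
def local_mean(values):
    return sum(values) / len(values)

client_readings = [68.5, 70.3, 69.8]
print(local_mean(client_readings))  # roughly 69.53
```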
@tff.federated_computation(tff.type_at_clients(tf.float32))
def get_average_temperature(sensor_readings):
print ('Getting traced, the argument is "{}".'.format(
type(sensor_readings).__name__))
return tff.federated_mean(sensor_readings)
Explanation: When running a computation like this in simulation mode, you act as an external observer with a system-wide view, who has the ability to supply inputs and consume outputs at any location in the network - as is indeed the case here: we supplied client values at the input, and consumed the server result.
Now, let's return to the note we made earlier about the tff.federated_computation decorator emitting code in a glue language. Although the logic of TFF computations can be expressed as ordinary functions in Python (you just need to decorate them with tff.federated_computation as we've done above), and you can directly invoke them with Python arguments just like any other Python functions in this notebook, behind the scenes, as we noted earlier, TFF computations are actually not Python.
What we mean by this is that when the Python interpreter encounters a function decorated with tff.federated_computation, it traces the statements in this function's body once (at definition time), and then constructs a serialized representation of the computation's logic for future use - whether for execution, or to be incorporated as a sub-component into another computation.
You can verify this by adding a print statement, as follows.
End of explanation
@tff.tf_computation(tf.float32)
def add_half(x):
return tf.add(x, 0.5)
Explanation: You can think of the Python code that defines a federated computation similarly to how you would think of Python code that builds a TensorFlow graph in a non-eager context (if you're not familiar with the non-eager uses of TensorFlow, think of your Python code defining a graph of operations to be executed later, but not actually running them on the fly). The non-eager graph-building code in TensorFlow is Python, but the TensorFlow graph constructed by this code is platform-independent and serializable.
Likewise, TFF computations are defined in Python, but the Python statements in their bodies, such as tff.federated_mean in the example we've just shown, are compiled under the hood into a portable, platform-independent, serializable representation.
As a developer, you don't need to concern yourself with the details of this representation, as you will never work with it directly, but you should be aware that TFF computations are fundamentally non-eager and cannot capture arbitrary Python state. Python code contained in a TFF computation's body is executed at definition time, when the body of the Python function decorated with tff.federated_computation is traced before getting serialized. It's not retraced again at invocation time (except when the function is polymorphic; please refer to the documentation pages for details).
You may wonder why we've chosen to introduce a dedicated internal non-Python representation. One reason is that ultimately, TFF computations are intended to be deployable to real physical environments, and hosted on mobile or embedded devices, where Python may not be available.
Another reason is that TFF computations express the global behavior of distributed systems, as opposed to Python programs, which express the local behavior of individual participants. You can see that in the simple example above, with the special operator tff.federated_mean that accepts data on client devices, but deposits the result on the server.
The operator tff.federated_mean cannot be easily modeled as an ordinary operator in Python, since it doesn't execute locally - as noted earlier, it represents a distributed system that coordinates the behavior of multiple system participants. We refer to such operators as federated operators, to distinguish them from ordinary (local) operators in Python.
The TFF type system, and the fundamental set of operations supported in TFF's language, thus deviate significantly from those in Python, necessitating the use of a dedicated representation.
Composing federated computations
As noted above, federated computations and their constituents are best understood as models of distributed systems, and you can think of composing federated computations as composing more complex distributed systems from simpler ones. You can think of the tff.federated_mean operator as a kind of built-in template federated computation with the type signature ({T}@CLIENTS -> T@SERVER) (indeed, just like computations you write, this operator also has a complex structure - under the hood, TFF breaks it down into simpler operators).
The same is true of composing federated computations. The computation get_average_temperature may be invoked in the body of another Python function decorated with tff.federated_computation - doing so will cause it to be embedded in the body of the parent, much in the same way tff.federated_mean was embedded in its own body earlier.
An important restriction to be aware of is that bodies of Python functions decorated with tff.federated_computation must consist only of federated operators, i.e., they cannot directly contain TensorFlow operations. For example, you cannot directly use tf.nest interfaces to add a pair of federated values. TensorFlow code must be confined to blocks of code decorated with tff.tf_computation, discussed in the following section. Only when wrapped in this manner can the wrapped TensorFlow code be invoked in the body of a tff.federated_computation.
The reasons for this separation are technical (it's hard to trick operators such as tf.add into working with non-tensors) as well as architectural. The language of federated computations (i.e., the logic constructed from serialized bodies of Python functions decorated with tff.federated_computation) is designed to serve as a platform-independent glue language. This glue language is currently used to build distributed systems from embedded sections of TensorFlow code (confined to tff.tf_computation blocks). In the fullness of time, we anticipate the need to embed sections of other, non-TensorFlow logic, such as relational database queries representing input pipelines, all connected together using the same glue language (the tff.federated_computation blocks).
TensorFlow logic
Declaring TensorFlow computations
TFF is designed for use with TensorFlow. As such, the bulk of the code you will write in TFF is likely to be ordinary (i.e., locally-executing) TensorFlow code. In order to use such code with TFF, as noted above, it just needs to be decorated with tff.tf_computation.
For example, here's how we could implement a function that takes a number and adds 0.5 to it.
End of explanation
str(add_half.type_signature)
Explanation: Looking at this again, you may be wondering why we need to define another decorator tff.tf_computation instead of simply using an existing mechanism such as tf.function. Unlike in the preceding section, here we are dealing with an ordinary block of TensorFlow code.
There are a few reasons for this, the full treatment of which goes beyond the scope of this tutorial, but the main ones are worth naming:
In order to embed reusable building blocks implemented with TensorFlow code in the bodies of federated computations, they need to satisfy certain properties, such as being traced and serialized at definition time and having type signatures. This generally requires some form of a decorator.
In general, we recommend using TensorFlow's native mechanisms, such as tf.function, wherever possible, as the exact manner in which TFF's decorator interacts with eager functions can be expected to evolve.
Coming back to the example code snippet above, the computation add_half we just defined can be treated by TFF just like any other TFF computation. In particular, it has a TFF type signature.
End of explanation
@tff.federated_computation(tff.type_at_clients(tf.float32))
def add_half_on_clients(x):
return tff.federated_map(add_half, x)
str(add_half_on_clients.type_signature)
Explanation: Notice that this type signature has no placements; TensorFlow computations cannot consume or return federated types.
You can now also use add_half as a building block in other computations. For example, here is how you can use the tff.federated_map operator to apply add_half pointwise to all member constituents of a federated float on client devices.
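The pointwise semantics of tff.federated_map can likewise be mimicked locally (hypothetical helpers for illustration only; the real operator works on placed, serialized computations):

```python
def add_half(x):
    # Plain-Python stand-in for the tff.tf_computation of the same name.
    return x + 0.5

def simulate_federated_map(fn, client_values):
    # Apply a local computation pointwise to each client's member
    # constituent of a federated value; placement stays at CLIENTS.
    return [fn(v) for v in client_values]
```

Mapping add_half over client values [1.0, 3.0, 2.0] yields [1.5, 3.5, 2.5], matching add_half_on_clients([1.0, 3.0, 2.0]).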
End of explanation
add_half_on_clients([1.0, 3.0, 2.0])
Explanation: Executing TensorFlow computations
Executing computations defined with tff.tf_computation follows the same rules as those described for tff.federated_computation. They can be invoked as ordinary Python callables, as follows.
End of explanation
try:
# Eager mode
constant_10 = tf.constant(10.)
@tff.tf_computation(tf.float32)
def add_ten(x):
return x + constant_10
except Exception as err:
print (err)
Explanation: Again, note that invoking the computation add_half_on_clients in this manner simulates a distributed process: data is consumed on the clients, and returned on the clients. Indeed, this computation has each client perform a local action. There is no tff.SERVER explicitly mentioned in this system (even if, in practice, orchestrating such processing might involve one). Think of a computation defined this way as conceptually analogous to the Map stage in MapReduce.
Also, keep in mind that what we said in the preceding section about TFF computations getting serialized at definition time applies to tff.tf_computation code as well: the Python body of add_half_on_clients gets traced once at definition time, and on subsequent invocations, TFF uses its serialized representation.
The only difference between Python methods decorated with tff.federated_computation and those decorated with tff.tf_computation is that the latter are serialized as TensorFlow graphs (whereas the former are not allowed to contain TensorFlow code directly embedded in them).
Under the hood, each method decorated with tff.tf_computation temporarily disables eager execution in order to allow the structure of the computation to be captured. While eager execution is locally disabled, you are welcome to use eager TensorFlow, AutoGraph, TensorFlow 2.0 constructs, and so on, as long as you write the logic of your computation in a manner such that it can get correctly serialized.
For example, the following code will fail:
End of explanation
def get_constant_10():
return tf.constant(10.)
@tff.tf_computation(tf.float32)
def add_ten(x):
return x + get_constant_10()
add_ten(5.0)
Explanation: The above fails because constant_10 has already been constructed during serialization, outside of the graph that tff.tf_computation constructs internally in the body of add_ten.
On the other hand, invoking Python functions that modify the current graph when called inside a tff.tf_computation is fine:
End of explanation
float32_sequence = tff.SequenceType(tf.float32)
str(float32_sequence)
Explanation: Note that we expect the details of how TFF computations are serialized to evolve, as TensorFlow's own serialization mechanisms evolve.
Working with tf.data.Dataset
As noted earlier, a unique feature of tff.tf_computation is that it allows you to work with tf.data.Dataset parameters defined abstractly as formal parameters by your code. Parameters to be represented in TensorFlow as datasets need to be declared using the tff.SequenceType constructor.
For example, the type specification tff.SequenceType(tf.float32) defines an abstract sequence of float elements in TFF. Sequences can contain either tensors or complex nested structures (we will see examples of those later). The concise representation of a sequence of T-typed items is T*.
End of explanation
@tff.tf_computation(tff.SequenceType(tf.float32))
def get_local_temperature_average(local_temperatures):
sum_and_count = (
local_temperatures.reduce((0.0, 0), lambda x, y: (x[0] + y, x[1] + 1)))
return sum_and_count[0] / tf.cast(sum_and_count[1], tf.float32)
str(get_local_temperature_average.type_signature)
Explanation: In the temperature sensor example, suppose each sensor holds not just one but multiple temperature readings. Here is how to define a TFF computation in TensorFlow that calculates the average temperature in a single local dataset, using the tf.data.Dataset.reduce operator.
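The (sum, count) fold used here can be mirrored in ordinary Python, which may help in reading the dataset.reduce call (a hypothetical helper, purely illustrative):

```python
from functools import reduce

def local_temperature_average(readings):
    # Fold the readings into a (sum, count) pair, then divide:
    # the same shape as local_temperatures.reduce((0.0, 0), ...).
    total, count = reduce(lambda acc, t: (acc[0] + t, acc[1] + 1),
                          readings, (0.0, 0))
    return total / count
```

On the sample [68.5, 70.3, 69.8] this gives roughly 69.53, the same result as the TFF computation invoked later with those readings.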
End of explanation
@tff.tf_computation(tff.SequenceType(tf.int32))
def foo(x):
return x.reduce(np.int32(0), lambda x, y: x + y)
foo([1, 2, 3])
Explanation: In the body of a method decorated with tff.tf_computation, formal parameters of a TFF sequence type are represented simply as objects that behave like tf.data.Dataset; that is, they support the same properties and methods. (They are currently not implemented as subclasses of that type, although this may change as support for datasets in TensorFlow evolves.)
You can easily verify this as follows.
End of explanation
get_local_temperature_average([68.5, 70.3, 69.8])
Explanation: Keep in mind that, unlike an ordinary tf.data.Dataset, these dataset-like objects are placeholders. Since they represent abstract sequence-typed parameters, they contain no elements at all; they are bound to concrete data only when used in a concrete context. Support for abstractly defined placeholder datasets is still somewhat limited at this point, and in the early days of TFF you may encounter certain restrictions, but we will not need to worry about them in this tutorial (see the documentation pages for details).
When locally executing a computation that accepts sequences in simulation mode, as in this tutorial, you can feed sequences as Python lists, as below (other ways work too, e.g., a tf.data.Dataset in eager mode, but for now we will keep it simple).
End of explanation
@tff.tf_computation(tff.SequenceType(collections.OrderedDict([('A', tf.int32), ('B', tf.int32)])))
def foo(ds):
print('element_structure = {}'.format(ds.element_spec))
return ds.reduce(np.int32(0), lambda total, x: total + x['A'] * x['B'])
str(foo.type_signature)
foo([{'A': 2, 'B': 3}, {'A': 4, 'B': 5}])
Explanation: Like all other TFF types, sequences such as those defined above can use the tff.StructType constructor to define nested structures. For example, here is how one could declare a computation that accepts a sequence of pairs A, B and returns the sum of their products. We include tracing statements in the body of the computation so that you can see how the TFF type signature translates into the dataset's output_types and output_shapes.
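The fold in that body amounts to a sum of products, which can be stated in plain Python (hypothetical helper, for illustration):

```python
def sum_of_products(pairs):
    # Same accumulation as ds.reduce(0, total + x['A'] * x['B']),
    # expressed over plain dicts instead of dataset elements.
    return sum(p['A'] * p['B'] for p in pairs)
```

For the sample input [{'A': 2, 'B': 3}, {'A': 4, 'B': 5}] this yields 2*3 + 4*5 = 26, matching the foo(...) invocation shown with the computation.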
End of explanation
@tff.federated_computation(
tff.type_at_clients(tff.SequenceType(tf.float32)))
def get_global_temperature_average(sensor_readings):
return tff.federated_mean(
tff.federated_map(get_local_temperature_average, sensor_readings))
Explanation: The support for using tf.data.Datasets as formal parameters is still somewhat limited and evolving, although it is functional in simple scenarios such as those used in this tutorial.
Putting it all together
Now, let's try once again to use a TensorFlow computation in a federated setting. Suppose we have a group of sensors that each have a local sequence of temperature readings. We can compute the global average temperature by averaging the sensors' local averages, as follows.
End of explanation
str(get_global_temperature_average.type_signature)
Explanation: Note that this is not a simple average across all local temperature readings from all clients, as that would require weighting readings from different clients by the number of readings each client locally maintains. Updating the above code is left as an exercise for the reader; the tff.federated_mean operator accepts the weight as an optional second argument (expected to be a federated float).
Also note that the input to get_global_temperature_average is now a federated float sequence. Federated sequences are how we will typically represent on-device data in federated learning, with sequence elements typically representing data batches (you will see examples of this shortly).
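To see concretely what the weighting exercise changes, here is a plain-Python sketch of both variants (hypothetical helper names, illustration only; neither is TFF code):

```python
def unweighted_global_average(readings_per_client):
    # What get_global_temperature_average computes: a simple mean
    # of the per-client local means.
    means = [sum(r) / len(r) for r in readings_per_client]
    return sum(means) / len(means)

def weighted_global_average(readings_per_client):
    # The exercise's target: weight each local mean by its reading
    # count, which equals the mean over all readings pooled together.
    means = [sum(r) / len(r) for r in readings_per_client]
    weights = [len(r) for r in readings_per_client]
    return sum(m * w for m, w in zip(means, weights)) / sum(weights)
```

With readings [[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]], the unweighted version returns 70.0 while the weighted one returns 419/6, about 69.83; the gap is exactly what the optional weight argument to tff.federated_mean corrects.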
End of explanation
get_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])
Explanation: Here is how we can execute the computation locally on a sample of Python data. Notice that the input is supplied as a list of lists: the outer list iterates over the devices in the group represented by tff.CLIENTS, and the inner lists iterate over the elements in each device's local sequence.
End of explanation |
7,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-fold cross validation - Regression Model
Based on the Ludwig regression example
Data set
This example demonstrates the following
Step1: Constants
Step2: Clean out previous results
Step3: Retrieve data from UCI Machine Learning Repository
Download required data
Step4: Create Pandas DataFrame from downloaded data
Step5: Create train/test split
Step6: Setup Ludwig config
Step7: Create Ludwig input_features
Step8: Create Ludwig output features
Step9: Perform K-fold Cross Validation analysis
Step10: Train model and assess model performance
Step11: Compare K-fold Cross Validation metrics against hold-out test metrics
Hold-out Test Metrics
Step12: K-fold Cross Validation Metrics | Python Code:
import logging
import os
import os.path
import shutil
import tempfile
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import scipy.stats as stats
import seaborn as sns
from sklearn.model_selection import train_test_split
from ludwig.api import kfold_cross_validate, LudwigModel
Explanation: K-fold cross validation - Regression Model
Based on the Ludwig regression example
Data set
This example demonstrates the following:
Download a data set and create a pandas dataframe
Create training and hold-out test data sets
Create a Ludwig config data structure from the pandas dataframe
Run a 5-fold cross validation analysis with the training data
Use Ludwig APIs to train and assess model performance on the hold-out test data set
End of explanation
DATA_SET_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
DATA_SET = 'auto_mpg.data'
RESULTS_DIR = 'results'
Explanation: Constants
End of explanation
if os.path.isfile(DATA_SET):
os.remove(DATA_SET)
shutil.rmtree(RESULTS_DIR, ignore_errors=True)
Explanation: Clean out previous results
End of explanation
r = requests.get(DATA_SET_URL)
if r.status_code == 200:
with open(DATA_SET,'w') as f:
f.write(r.content.decode("utf-8"))
Explanation: Retrieve data from UCI Machine Learning Repository
Download required data
End of explanation
raw_df = pd.read_csv(DATA_SET,
header=None,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
raw_df.columns = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'ModelYear', 'Origin']
raw_df.shape
raw_df.head()
Explanation: Create Pandas DataFrame from downloaded data
End of explanation
train_df, test_df = train_test_split(raw_df, train_size=0.8, random_state=17)
print(train_df.shape)
print(test_df.shape)
Explanation: Create train/test split
End of explanation
num_features = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'ModelYear']
cat_features = ['Origin']
Explanation: Setup Ludwig config
End of explanation
input_features = []
# setup input features for numerical variables
for p in num_features:
a_feature = {'name': p, 'type': 'numerical',
'preprocessing': {'missing_value_strategy': 'fill_with_mean', 'normalization': 'zscore'}}
input_features.append(a_feature)
# set up input features for categorical variables
for p in cat_features:
a_feature = {'name': p, 'type': 'category'}
input_features.append(a_feature)
Explanation: Create Ludwig input_features
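What the 'fill_with_mean' and 'zscore' preprocessing settings above amount to can be sketched in plain Python (an illustration only; Ludwig computes its statistics internally, typically on the training split, and may differ in detail):

```python
def zscore_fill(values):
    # Replace missing values with the column mean, then standardize
    # to zero mean and unit (population) standard deviation.
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in values]
    std = (sum((v - mean) ** 2 for v in filled) / len(filled)) ** 0.5
    return [(v - mean) / std for v in filled]
```

A value filled with the mean standardizes to exactly zero, which is why mean-imputation plus z-scoring is a benign default for a numerical feature such as Horsepower.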
End of explanation
output_features =[
{
'name': 'MPG',
'type': 'numerical',
'num_fc_layers': 2,
'fc_size': 64
}
]
config = {
'input_features' : input_features,
'output_features': output_features,
'training' :{
'epochs': 100,
'batch_size': 32
}
}
config
Explanation: Create Ludwig output features
End of explanation
%%time
with tempfile.TemporaryDirectory() as tmpdir:
data_csv_fp = os.path.join(tmpdir,'train.csv')
train_df.to_csv(data_csv_fp, index=False)
(
kfold_cv_stats,
kfold_split_indices
) = kfold_cross_validate(
num_folds=5,
config=config,
dataset=data_csv_fp,
output_directory=tmpdir,
logging_level=logging.ERROR
)
kfold_cv_stats['overall']['MPG']
Explanation: Perform K-fold Cross Validation analysis
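The fold construction behind kfold_cross_validate can be sketched as follows (a stdlib illustration of k-fold splitting, not Ludwig's internal code):

```python
def kfold_indices(n, k):
    # Partition indices 0..n-1 into k contiguous test folds; each fold's
    # training set is everything outside that fold.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        folds.append((train, test))
        start += size
    return folds
```

With num_folds=5, every row serves as held-out data exactly once, and the reported 'overall' metrics aggregate the five per-fold evaluations.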
End of explanation
model = LudwigModel(
config=config,
logging_level=logging.ERROR
)
%%time
training_stats = model.train(
training_set=train_df,
output_directory=RESULTS_DIR,
)
test_stats, mpg_hat_df, _ = model.evaluate(dataset=test_df, collect_predictions=True, collect_overall_stats=True)
test_stats
a = plt.axes(aspect='equal')
sns.scatterplot(test_df['MPG'].values, mpg_hat_df['MPG_predictions'].values,
s=50)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
lims = [0, 50]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
Explanation: Train model and assess model performance
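The headline numbers in test_stats can be reproduced from first principles; a plain-Python sketch (illustrative only, not how Ludwig computes them internally):

```python
def regression_metrics(y_true, y_pred):
    # Mean squared error plus the coefficient of determination (R^2).
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return {'mean_squared_error': mse, 'r2': 1 - mse * n / ss_tot}
```

Points lying on the diagonal reference line in the scatter plot correspond to zero error per sample, i.e. an MSE of 0 and an R^2 of 1.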
End of explanation
test_stats['MPG']
Explanation: Compare K-fold Cross Validation metrics against hold-out test metrics
Hold-out Test Metrics
End of explanation
kfold_cv_stats['overall']['MPG']
Explanation: K-fold Cross Validation Metrics
End of explanation |
7,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
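For intuition about the "Operator splitting" choice, first-order (Lie) splitting advances the state by applying each process operator in turn over the same timestep; the result depends on the call order, which is what section 4 documents. A schematic stdlib sketch (purely illustrative, not tied to any model code):

```python
def operator_split_step(state, operators, dt):
    # Lie splitting: each process operator sees the output of the
    # previous one over the same timestep dt.
    for op in operators:
        state = op(state, dt)
    return state
```

Swapping the order of two toy operators (a constant source and a linear decay) changes the result, which is why the per-process call order is recorded in section 4.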
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmopsheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
7,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hubs
Step1: Exercise
Can you create a ranked list of the importance of each individual, based on the number of neighbors they have?
Hint
Step2: If you inspect the dictionary closely, you will find that node 19 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.
There are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.
The NetworkX API docs that document the centrality measures are here
Step3: Paths in a Network
Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths that connect two nodes in the network.
Using the synthetic social network, we will figure out how to answer the following questions
Step5: Let's say we wanted to find the shortest path between two nodes. How would we approach this? One approach is what one would call a breadth-first search (http
Step6: And testing the function on a few test cases
Step7: Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to always implement this on our own.
Step8: NetworkX also has other shortest path algorithms implemented.
https
Step9: nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes. (Not all paths are guaranteed to be found.)
Step11: Incidentally, the node list is in order as well - we will travel through 19 and 17 in that order to get from 4 to 14.
Exercise
Write a function that extracts the edges in the shortest path between two nodes and puts them into a new graph, and draws it to the screen. It should also return an error if there is no path between the two nodes. (~5 min)
Hint
Step13: Exercise
Since we've been drawing some graphs to screen, we might as well draw a few other things while we're on a roll.
Write a function that extracts only node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to screen. (~5 min.)
Step15: Challenge Exercises (optional)
Let's try some other problems that build on the NetworkX API. (10 min.)
Refer to the following for the relevant functions
Step16: Hubs Revisited
It looks like individual 19 is an important person of some sorts - if a message has to be passed through the network in the shortest time possible, then usually it'll go through person 19. Such a person has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.
http
Step17: Exercise
Plot betweenness centrality against degree centrality for the synthetic social network above.
Think about it...
From the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of a reason why?
What would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below? | Python Code:
# Let's find out the number of neighbors that individual #7 has.
G.neighbors(7)
Explanation: Hubs: How do we evaluate the importance of some individuals in a network?
Within a social network, there will be certain individuals which perform certain important functions. For example, there may be hyper-connected individuals who are connected to many, many more people. They would be of use in the spreading of information. Alternatively, if this were a disease contact network, identifying them would be useful in stopping the spread of diseases. How would one identify these people?
Approach 1: Neighbors
One way we could compute this is to find out the number of people an individual is connected to. NetworkX lets us do this by giving us a G.neighbors(node) function.
End of explanation
nx.degree_centrality(G)
Explanation: Exercise
Can you create a ranked list of the importance of each individual, based on the number of neighbors they have?
Hint: One suggested output would be a list of tuples, where the first element in each tuple is the node ID (an integer number), and the second element is a list of its neighbors.
Hint: Python's sorted(iterable, key=lambda x:...., reverse=True) function may be of help here.
Approach 2: Degree Centrality
The number of other nodes that one node is connected to is a measure of its centrality. NetworkX implements a degree centrality, which is defined as the number of neighbors that a node has normalized to the number of individuals it could be connected to in the entire graph. This is accessed by using nx.degree_centrality(G)
End of explanation
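One possible answer to the ranking exercise above, sketched on a small stand-in graph. The graph below is only illustrative; in the notebook you would reuse the synthetic social network G instead of rebuilding one.

```python
import networkx as nx

# Stand-in graph; the notebook's synthetic social network G would be used in practice.
G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)])

# Rank (node, neighbors) tuples by neighbor count, most-connected first,
# matching the suggested output format from the hint.
ranked = sorted(
    ((n, list(G.neighbors(n))) for n in G.nodes()),
    key=lambda x: len(x[1]),
    reverse=True,
)
print(ranked[0])  # → (0, [1, 2, 3])

# Degree centrality gives the same ordering, normalized by (number of nodes - 1).
centrality = nx.degree_centrality(G)
top_node = max(centrality, key=centrality.get)
```

Because degree centrality is just the neighbor count divided by n - 1, ranking by either quantity picks out the same hub.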
# Your answer here.
Explanation: If you inspect the dictionary closely, you will find that node 19 is the one that has the highest degree centrality, just as we had measured by counting the number of neighbors.
There are other measures of centrality, namely betweenness centrality, flow centrality and load centrality. You can take a look at their definitions on the NetworkX API docs and their cited references. You can also define your own measures if those don't fit your needs, but that is an advanced topic that won't be dealt with here.
The NetworkX API docs that document the centrality measures are here: http://networkx.github.io/documentation/networkx-1.9.1/reference/algorithms.centrality.html
Exercises
Can you create a histogram of the distribution of degree centralities? (1-2 min)
Can you create a histogram of the distribution of number of neighbors? (1-2 min)
Can you create a scatterplot of the degree centralities against number of neighbors? (1-2 min)
If I have n nodes, then how many possible edges are there in total, assuming self-edges are allowed? What if self-edges are not allowed?
Time: 3-6 min.
Hint: You may want to use:
plt.hist(list_of_values)
and
plt.scatter(x_values, y_values)
If you know the Matplotlib API, feel free to get fancy :).
End of explanation
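A sketch of the plotting exercises, using a random graph as a stand-in for the notebook's G. The Agg backend is selected so the snippet runs without a display; the availability of matplotlib is an assumption of this sketch.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import networkx as nx

# Stand-in graph; in the notebook you would reuse the synthetic social network G.
G = nx.erdos_renyi_graph(30, 0.15, seed=42)

centralities = [nx.degree_centrality(G)[n] for n in G.nodes()]
num_neighbors = [len(list(G.neighbors(n))) for n in G.nodes()]

plt.hist(centralities)                    # distribution of degree centralities
plt.figure()
plt.hist(num_neighbors)                   # distribution of neighbor counts
plt.figure()
plt.scatter(num_neighbors, centralities)  # these two fall on a straight line
```

Since degree centrality is the neighbor count divided by n - 1, the scatter plot is a straight line. For the last exercise: an undirected graph on n nodes has n(n+1)/2 possible edges if self-edges are allowed, and n(n-1)/2 if they are not.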
nx.draw(G, with_labels=True)
Explanation: Paths in a Network
Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths that connect two nodes in the network.
Using the synthetic social network, we will figure out how to answer the following questions:
How long will it take for a message to spread through this group of friends? (making some assumptions, of course)
How do we find the shortest path to get from individual A to individual B?
Shortest Path
End of explanation
def path_exists(node1, node2, G):
    """
    This function checks whether a path exists between two nodes (node1, node2) in graph G.
    """
Explanation: Let's say we wanted to find the shortest path between two nodes. How would we approach this? One approach is what one would call a breadth-first search (http://en.wikipedia.org/wiki/Breadth-first_search). While not necessarily the fastest, it is the easiest to conceptualize.
The approach is essentially as such:
Begin with a queue of the starting node.
Add the neighbors of that node to the queue.
If destination node is present in the queue, end.
If destination node is not present, proceed.
For each node in the queue:
Remove node from the queue.
Add neighbors of the node to the queue. Check if destination node is present or not.
If destination node is present, break.
If destination node is not present, repeat step 3.
Exercise
Try implementing this algorithm in a function called path_exists(node1, node2, G).
The function should take in two nodes, node1 and node2, and the graph G that they belong to, and return a Boolean that indicates whether a path exists between those two nodes or not.
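One possible answer is a direct transcription of the queue-based steps above; it only assumes that G exposes NetworkX's `G.neighbors(node)` method:

```python
from collections import deque

def path_exists(node1, node2, G):
    """Breadth-first search: True if node2 is reachable from node1 in G."""
    visited = {node1}
    queue = deque([node1])
    while queue:
        node = queue.popleft()
        for neighbor in G.neighbors(node):
            if neighbor == node2:
                return True
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False
```

The `visited` set is what keeps the search from looping forever on cycles; without it, step 3 of the algorithm would revisit the same nodes indefinitely.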
End of explanation
path_exists(18, 5, G)
path_exists(29, 26, G)
Explanation: And testing the function on a few test cases:
18 and any other node (should return False)
29 and 26 (should return True)
End of explanation
nx.has_path(G, 18, 5)
Explanation: Meanwhile... thankfully, NetworkX has a function for us to use, titled has_path, so we don't have to always implement this on our own. :-)
http://networkx.lanl.gov/reference/generated/networkx.algorithms.shortest_paths.generic.has_path.html#networkx.algorithms.shortest_paths.generic.has_path
End of explanation
nx.draw(G, with_labels=True)
Explanation: NetworkX also has other shortest path algorithms implemented.
https://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.shortest_paths.unweighted.predecessor.html#networkx.algorithms.shortest_paths.unweighted.predecessor
We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another.
End of explanation
nx.shortest_path(G, 4, 14)
Explanation: nx.shortest_path(G, source, target) gives us a list of nodes that lie along one of the shortest paths between the two nodes. (If several equally short paths exist, only one of them is returned.)
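Under the hood this is again breadth-first search, now tracking each node's predecessor so that one shortest path can be reconstructed at the end. A plain-Python sketch over a hypothetical adjacency dict (rather than a NetworkX graph):

```python
from collections import deque

def shortest_path_bfs(adj, source, target):
    # BFS with predecessor tracking; returns one shortest path, or None.
    pred = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = pred[node]
            return path[::-1]
        for neighbor in adj[node]:
            if neighbor not in pred:
                pred[neighbor] = node
                queue.append(neighbor)
    return None
```

On an adjacency dict mirroring the example here (4 connected to 19, 19 to 17, 17 to 14), this returns [4, 19, 17, 14], matching the node order described below.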
End of explanation
# Possible Answer:
def extract_path_edges(G, source, target):
    """One possible answer: error out if no path exists, else return the shortest-path subgraph."""
    if not nx.has_path(G, source, target):
        raise Exception('Path does not exist between nodes {0} and {1}.'.format(source, target))
    nodes = nx.shortest_path(G, source, target)
    return G.subgraph(nodes)
# Test your function with the following block of code.
newG = extract_path_edges(G, 1, 14)
nx.draw(newG, with_labels=True)
Explanation: Incidentally, the node list is in order as well - we will travel through 19 and 17 in that order to get from 4 to 14.
Exercise
Write a function that extracts the edges in the shortest path between two nodes and puts them into a new graph, and draws it to the screen. It should also raise an error if there is no path between the two nodes. (~5 min)
Hint: You may want to use G.subgraph(iterable_of_nodes) to extract just the nodes and edges of interest from the graph G. One coding pattern to consider is this:
newG = G.subgraph(nodes_of_interest)
newG will be comprised of the nodes of interest and the edges that connect them.
End of explanation
# Possible Answer
def extract_neighbor_edges(G, node):
    """One possible answer: a new graph with just the node, its neighbors, and those edges."""
    # Note: G.subgraph would also keep neighbor-to-neighbor edges,
    # which the exercise asks us to exclude.
    newG = nx.Graph()
    for neighbor in G.neighbors(node):
        newG.add_edge(node, neighbor)
    return newG
# Test your function with the following block of code.
fig = plt.figure(0)
newG = extract_neighbor_edges(G, 19)
nx.draw(newG, with_labels=True)
Explanation: Exercise
Since we've been drawing some graphs to screen, we might as well draw a few other things while we're on a roll.
Write a function that extracts only a node, its neighbors, and the edges between that node and its neighbors as a new graph. Then, draw the new graph to screen. (~5 min.)
End of explanation
# Your answer to Question 1:
# All we need here is the length of the path.
def compute_transmission_time(G, source, target):
    """One possible answer: with day counts 1, 2, ..., n over n steps,
    the total is the triangular number n * (n + 1) / 2."""
    n = len(nx.shortest_path(G, source, target)) - 1  # number of steps (edges)
    return n * (n + 1) / 2
# Test with the following line of code.
compute_transmission_time(G, 14, 4)
# Your answer to Question 2:
# We need to know the length of every single shortest path between every pair of nodes.
# If we don't put a source and target into the nx.shortest_path_length(G) function call, then
# we get a dictionary of dictionaries, where all source-->target-->lengths are shown.
# Your answer to Question 3:
# You may want to use the Counter object from collections, as well as combinations from itertools.
from collections import Counter
from itertools import combinations
# Your answer to Question 4:
# Hint: You may want to use bar graphs or histograms.
plt.bar(list(totals.keys()), list(totals.values()))
Explanation: Challenge Exercises (optional)
Let's try some other problems that build on the NetworkX API. (10 min.)
Refer to the following for the relevant functions:
https://networkx.github.io/documentation/latest/reference/algorithms.shortest_paths.html
If we want a message to go from one person to another person, and we assume that the message takes 1 day for the initial step and 1 additional day per step in the transmission chain (i.e. the first step takes 1 day, the second step takes 2 days etc.), how long will the message take to spread from any two given individuals? Write a function to compute this.
What is the distribution of message spread times from person to person? What about chain lengths?
Are there certain individuals who consistently show up in the chain? (Hint: you might wish to use the following functions/objects:
Counter object from the collections module
combinations function from the itertools module.
all_shortest_paths(G, node1, node2) which is part of the networkX algorithms.
As a bonus, if you were able to compute the answer to question 3, can you plot a histogram of the number of times each node shows up in a connecting path?
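For question 3, once the connecting paths are in hand, Counter does the bookkeeping. A sketch over a hypothetical list of paths (each path is a list of node ids; endpoints are excluded so only "connector" nodes are counted):

```python
from collections import Counter
from itertools import chain

def connector_counts(paths):
    # Count how often each node appears strictly inside a connecting path.
    return Counter(chain.from_iterable(p[1:-1] for p in paths))
```

Feeding in all shortest paths between all pairs of nodes (for example from nx.all_shortest_paths) then directly answers "who consistently shows up in the chain".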
End of explanation
btws = nx.betweenness_centrality(G, normalized=False)
plt.bar(list(btws.keys()), list(btws.values()))
Explanation: Hubs Revisited
It looks like individual 19 is an important person of some sorts - if a message has to be passed through the network in the shortest time possible, then usually it'll go through person 19. Such a person has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms. Check out the Wikipedia page for a further description.
http://en.wikipedia.org/wiki/Betweenness_centrality
End of explanation
nx.draw(nx.barbell_graph(5, 1))
Explanation: Exercise
Plot betweenness centrality against degree centrality for the synthetic social network above.
Think about it...
From the scatter plot, we can see that the dots don't all fall on the same line. Degree centrality and betweenness centrality don't necessarily correlate. Can you think of a reason why?
What would be the degree centrality and betweenness centrality of the middle connecting node in the barbell graph below?
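For that last question, nx.barbell_graph(5, 1) has 11 nodes, and the middle connecting node has degree 2, so its degree centrality is 2/(11-1) = 0.2, while every shortest path between the two bells must cross it, giving a raw (unnormalized) betweenness of 5 x 5 = 25. A brute-force check in plain Python (the adjacency dict in the usage below hand-codes the barbell, with node 5 as the middle node, mirroring NetworkX's node numbering as an assumption):

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest path from s to t: BFS layering, then backtracking."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    if t not in dist:
        return []
    paths = []
    def back(node, suffix):
        if node == s:
            paths.append([s] + suffix)
            return
        for p in adj[node]:
            if dist.get(p) == dist[node] - 1:
                back(p, [node] + suffix)
    back(t, [])
    return paths

def raw_betweenness(adj, v):
    """Unnormalized betweenness: for each pair, the fraction of shortest paths through v."""
    total = 0.0
    for s, t in combinations(adj, 2):
        if v in (s, t):
            continue
        paths = all_shortest_paths(adj, s, t)
        if paths:
            total += sum(v in p[1:-1] for p in paths) / len(paths)
    return total
```

So the middle node has a modest degree centrality but the highest betweenness in the graph, which is exactly why the two measures need not correlate.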
End of explanation |
Description:
Copyright 2020 The TensorFlow Hub Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: You will use the AdamW optimizer from tensorflow/models to fine-tune BERT, which you will install as well.
Step3: Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU.
Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error
Step4: Connect to the TPU worker
The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's TPU guide for more information.
Step5: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune.
There are multiple BERT models available to choose from.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT
Step6: Preprocess the text
On the Classify text with BERT colab the preprocessing model is used directly embedded with the BERT encoder.
This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs.
TPU requirements aside, it can help performance to have preprocessing done asynchronously in an input pipeline (you can learn more in the tf.data performance guide).
This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT.
Let's demonstrate the preprocessing model.
Step7: Each preprocessing model also provides a method, .bert_pack_inputs(tensors, seq_length), which takes a list of tokens (like tok above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model.
Step9: Here are some details to pay attention to
Step10: Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input
Step11: Let's take a look at the model's structure, paying attention to the two inputs you just defined.
Step12: To apply the preprocessing in all the inputs from the dataset, you will use the map function from the dataset. The result is then cached for performance.
Step13: Define your model
You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization.
Step14: Let's try running the model on some preprocessed inputs.
Step15: Choose a task from GLUE
You are going to use a TensorFlow DataSet from the GLUE benchmark suite.
Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime.
For bigger datasets, you'll need to create your own Google Cloud Storage bucket and have the TPU worker read the data from there. You can learn more in the TPU guide.
It's recommended to start with the CoLA dataset (for single sentence) or MRPC (for multi sentence) since these are small and don't take long to fine-tune.
Step16: The dataset also determines the problem type (classification or regression) and the appropriate loss function for training.
Step17: Train your model
Finally, you can train the model end-to-end on the dataset you chose.
Distribution
Recall the set-up code at the top, which has connected the colab runtime to
a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see Distributed training with Keras.)
Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to Model.fit() will take care of distributing the passed-in dataset to the model replicas.
Note
Step18: Export for inference
You will create a final model that has the preprocessing part and the fine-tuned BERT we've just created.
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export.
This final assembly is what will be saved.
You are going to save the model on colab and later you can download to keep it for the future (View -> Table of contents -> Files).
Step19: Test the model
The final step is testing the results of your exported model.
Just to make some comparison, let's reload the model and test it using some inputs from the test split of the dataset.
Note
Step20: Test
Step21: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Hub Authors.
End of explanation
!pip install -q -U "tensorflow-text==2.8.*"
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/bert_glue"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Solve GLUE tasks using BERT on TPU
BERT can be used to solve many problems in natural language processing. You will learn how to fine-tune BERT for many tasks from the GLUE benchmark:
CoLA (Corpus of Linguistic Acceptability): Is the sentence grammatically correct?
SST-2 (Stanford Sentiment Treebank): The task is to predict the sentiment of a given sentence.
MRPC (Microsoft Research Paraphrase Corpus): Determine whether a pair of sentences are semantically equivalent.
QQP (Quora Question Pairs2): Determine whether a pair of questions are semantically equivalent.
MNLI (Multi-Genre Natural Language Inference): Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral).
QNLI(Question-answering Natural Language Inference): The task is to determine whether the context sentence contains the answer to the question.
RTE(Recognizing Textual Entailment): Determine if a sentence entails a given hypothesis or not.
WNLI(Winograd Natural Language Inference): The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence.
This tutorial contains complete end-to-end code to train these models on a TPU. You can also run this notebook on a GPU, by changing one line (described below).
In this notebook, you will:
Load a BERT model from TensorFlow Hub
Choose one of GLUE tasks and download the dataset
Preprocess the text
Fine-tune BERT (examples are given for single-sentence and multi-sentence datasets)
Save the trained model and use it
Key point: The model you develop will be end-to-end. The preprocessing logic will be included in the model itself, making it capable of accepting raw strings as input.
Note: This notebook should be run using a TPU. In Colab, choose Runtime -> Change runtime type and verify that a TPU is selected.
Setup
You will use a separate model to preprocess text before using it to fine-tune BERT. This model depends on tensorflow/text, which you will install below.
End of explanation
!pip install -q -U tf-models-official==2.7.0
!pip install -U tfds-nightly
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import tensorflow_text as text # A dependency of the preprocessing model
import tensorflow_addons as tfa
from official.nlp import optimization
import numpy as np
tf.get_logger().setLevel('ERROR')
Explanation: You will use the AdamW optimizer from tensorflow/models to fine-tune BERT, which you will install as well.
End of explanation
os.environ["TFHUB_MODEL_LOAD_FORMAT"]="UNCOMPRESSED"
Explanation: Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU.
Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error:
InvalidArgumentError: Unimplemented: File system scheme '[local]' not implemented
This is because the TPU can only read directly from Cloud Storage buckets.
Note: This setting is automatic in Colab.
End of explanation
import os
if os.environ.get('COLAB_TPU_ADDR'):  # .get avoids a KeyError on non-TPU runtimes
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print('Using TPU')
elif tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('Running on CPU is not recommended.')
Explanation: Connect to the TPU worker
The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's TPU guide for more information.
End of explanation
#@title Choose a BERT model to fine-tune
bert_model_name = 'bert_en_uncased_L-12_H-768_A-12' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_en_cased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "albert_en_large", "albert_en_xlarge", "albert_en_xxlarge", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base", "talking-heads_large"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_large/2',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_xlarge/2',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_xxlarge/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
'talking-heads_large':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_large':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print('BERT model selected :', tfhub_handle_encoder)
print('Preprocessing model auto-selected:', tfhub_handle_preprocess)
Explanation: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune.
There are multiple BERT models available to choose from.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.
Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).
BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture.
See the model documentation linked above for more details.
In this tutorial, you will start with BERT-base. You can use larger and more recent models for higher accuracy, or smaller models for faster training times. To change the model, you only need to switch a single line of code (shown below). All the differences are encapsulated in the SavedModel you will download from TensorFlow Hub.
End of explanation
bert_preprocess = hub.load(tfhub_handle_preprocess)
tok = bert_preprocess.tokenize(tf.constant(['Hello TensorFlow!']))
print(tok)
Explanation: Preprocess the text
On the Classify text with BERT colab the preprocessing model is used directly embedded with the BERT encoder.
This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs.
TPU requirements aside, it can help performance to have preprocessing done asynchronously in an input pipeline (you can learn more in the tf.data performance guide).
This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT.
Let's demonstrate the preprocessing model.
End of explanation
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
Explanation: Each preprocessing model also provides a method, .bert_pack_inputs(tensors, seq_length), which takes a list of tokens (like tok above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model.
End of explanation
def make_bert_preprocess_model(sentence_features, seq_length=128):
  """Returns Model mapping string features to BERT inputs.

  Args:
    sentence_features: a list with the names of string-valued features.
    seq_length: an integer that defines the sequence length of BERT inputs.

  Returns:
    A Keras Model that can be called on a list or dict of string Tensors
    (with the order or names, resp., given by sentence_features) and
    returns a dict of tensors for input to BERT.
  """
input_segments = [
tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft)
for ft in sentence_features]
# Tokenize the text to word pieces.
bert_preprocess = hub.load(tfhub_handle_preprocess)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer')
segments = [tokenizer(s) for s in input_segments]
# Optional: Trim segments in a smart way to fit seq_length.
# Simple cases (like this example) can skip this step and let
# the next step apply a default truncation to approximately equal lengths.
truncated_segments = segments
# Pack inputs. The details (start/end token ids, dict of output tensors)
# are model-dependent, so this gets loaded from the SavedModel.
packer = hub.KerasLayer(bert_preprocess.bert_pack_inputs,
arguments=dict(seq_length=seq_length),
name='packer')
model_inputs = packer(truncated_segments)
return tf.keras.Model(input_segments, model_inputs)
Explanation: Here are some details to pay attention to:
- input_mask The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding.
- input_type_ids has the same shape as input_mask, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.
Next, you will create a preprocessing model that encapsulates all this logic. Your model will take strings as input, and return appropriately formatted objects which can be passed to BERT.
Each BERT model has a specific preprocessing model, make sure to use the proper one described on the BERT's model documentation.
Note: BERT adds a "position embedding" to the token embedding of each input, and these come from a fixed-size lookup table. That imposes a max seq length of 512 (which is also a practical limit, due to the quadratic growth of attention computation). For this Colab 128 is good enough.
End of explanation
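To make the packing convention concrete, here is a toy pure-Python sketch of how `input_word_ids`, `input_mask` and `input_type_ids` relate to each other for a sentence pair. The token id values (`101`/`102` standing in for `[CLS]`/`[SEP]`) and the helper name are illustrative only; the real packing is done by the hub preprocessing model.

```python
def pack_pair(ids_a, ids_b, seq_length, cls_id=101, sep_id=102, pad_id=0):
    """Toy sketch of BERT packing: [CLS] A [SEP] B [SEP] plus padding."""
    word_ids = [cls_id] + ids_a + [sep_id] + ids_b + [sep_id]
    type_ids = [0] * (len(ids_a) + 2) + [1] * (len(ids_b) + 1)
    mask = [1] * len(word_ids)          # 1 wherever there is real content
    pad = seq_length - len(word_ids)
    return (word_ids + [pad_id] * pad,
            mask + [0] * pad,
            type_ids + [0] * pad)

ids, mask, types = pack_pair([7, 8], [9], seq_length=8)
# ids   -> [101, 7, 8, 102, 9, 102, 0, 0]
# mask  -> [1, 1, 1, 1, 1, 1, 0, 0]
# types -> [0, 0, 0, 0, 1, 1, 0, 0]
```

Note how the mask is 1 exactly where the word ids are non-padding, and the type ids flip to 1 for the second segment.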
test_preprocess_model = make_bert_preprocess_model(['my_input1', 'my_input2'])
test_text = [np.array(['some random test sentence']),
np.array(['another sentence'])]
text_preprocessed = test_preprocess_model(test_text)
print('Keys : ', list(text_preprocessed.keys()))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
Explanation: Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input: input_word_ids, input_masks and input_type_ids.
End of explanation
tf.keras.utils.plot_model(test_preprocess_model, show_shapes=True, show_dtype=True)
Explanation: Let's take a look at the model's structure, paying attention to the two inputs you just defined.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
def load_dataset_from_tfds(in_memory_ds, info, split, batch_size,
bert_preprocess_model):
is_training = split.startswith('train')
dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[split])
num_examples = info.splits[split].num_examples
if is_training:
dataset = dataset.shuffle(num_examples)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda ex: (bert_preprocess_model(ex), ex['label']))
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
return dataset, num_examples
Explanation: To apply the preprocessing in all the inputs from the dataset, you will use the map function from the dataset. The result is then cached for performance.
End of explanation
def build_classifier_model(num_classes):
class Classifier(tf.keras.Model):
def __init__(self, num_classes):
super(Classifier, self).__init__(name="prediction")
self.encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True)
self.dropout = tf.keras.layers.Dropout(0.1)
self.dense = tf.keras.layers.Dense(num_classes)
def call(self, preprocessed_text):
encoder_outputs = self.encoder(preprocessed_text)
pooled_output = encoder_outputs["pooled_output"]
x = self.dropout(pooled_output)
x = self.dense(x)
return x
model = Classifier(num_classes)
return model
Explanation: Define your model
You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization.
End of explanation
test_classifier_model = build_classifier_model(2)
bert_raw_result = test_classifier_model(text_preprocessed)
print(tf.sigmoid(bert_raw_result))
Explanation: Let's try running the model on some preprocessed inputs.
End of explanation
tfds_name = 'glue/cola' #@param ['glue/cola', 'glue/sst2', 'glue/mrpc', 'glue/qqp', 'glue/mnli', 'glue/qnli', 'glue/rte', 'glue/wnli']
tfds_info = tfds.builder(tfds_name).info
sentence_features = list(tfds_info.features.keys())
sentence_features.remove('idx')
sentence_features.remove('label')
available_splits = list(tfds_info.splits.keys())
train_split = 'train'
validation_split = 'validation'
test_split = 'test'
if tfds_name == 'glue/mnli':
validation_split = 'validation_matched'
test_split = 'test_matched'
num_classes = tfds_info.features['label'].num_classes
num_examples = tfds_info.splits.total_num_examples
print(f'Using {tfds_name} from TFDS')
print(f'This dataset has {num_examples} examples')
print(f'Number of classes: {num_classes}')
print(f'Features {sentence_features}')
print(f'Splits {available_splits}')
with tf.device('/job:localhost'):
# batch_size=-1 is a way to load the dataset into memory
in_memory_ds = tfds.load(tfds_name, batch_size=-1, shuffle_files=True)
# The code below is just to show some samples from the selected dataset
print(f'Here are some sample rows from {tfds_name} dataset')
sample_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[train_split])
labels_names = tfds_info.features['label'].names
print(labels_names)
print()
sample_i = 1
for sample_row in sample_dataset.take(5):
samples = [sample_row[feature] for feature in sentence_features]
print(f'sample row {sample_i}')
for sample in samples:
print(sample.numpy())
sample_label = sample_row['label']
print(f'label: {sample_label} ({labels_names[sample_label]})')
print()
sample_i += 1
Explanation: Choose a task from GLUE
You are going to use a TensorFlow DataSet from the GLUE benchmark suite.
Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime.
For bigger datasets, you'll need to create your own Google Cloud Storage bucket and have the TPU worker read the data from there. You can learn more in the TPU guide.
It's recommended to start with the CoLA dataset (for single sentences) or MRPC (for sentence pairs), since these are small and don't take long to fine-tune.
End of explanation
def get_configuration(glue_task):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
if glue_task == 'glue/cola':
metrics = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=2)
else:
metrics = tf.keras.metrics.SparseCategoricalAccuracy(
'accuracy', dtype=tf.float32)
return metrics, loss
Explanation: The dataset also determines the problem type (classification or regression) and the appropriate loss function for training.
End of explanation
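As a side note on what `from_logits=True` means: the loss applies the softmax itself, so the model can output raw logits. A minimal pure-Python sketch of that computation for a single example (an illustration, not the Keras implementation):

```python
import math

def sparse_ce_from_logits(logits, label):
    """Softmax cross-entropy of one example computed directly from logits."""
    m = max(logits)                               # subtract max for stability
    log_norm = m + math.log(sum(math.exp(z - m) for z in logits))
    return -(logits[label] - log_norm)

# Two equal logits -> uniform probabilities -> loss is ln(2) ~ 0.6931
sparse_ce_from_logits([0.0, 0.0], 0)
```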
epochs = 3
batch_size = 32
init_lr = 2e-5
print(f'Fine tuning {tfhub_handle_encoder} model')
bert_preprocess_model = make_bert_preprocess_model(sentence_features)
with strategy.scope():
  # metrics have to be created inside the strategy scope
metrics, loss = get_configuration(tfds_name)
train_dataset, train_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, train_split, batch_size, bert_preprocess_model)
steps_per_epoch = train_data_size // batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = num_train_steps // 10
validation_dataset, validation_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, validation_split, batch_size,
bert_preprocess_model)
validation_steps = validation_data_size // batch_size
classifier_model = build_classifier_model(num_classes)
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
classifier_model.compile(optimizer=optimizer, loss=loss, metrics=[metrics])
classifier_model.fit(
x=train_dataset,
validation_data=validation_dataset,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_steps=validation_steps)
Explanation: Train your model
Finally, you can train the model end-to-end on the dataset you chose.
Distribution
Recall the set-up code at the top, which has connected the colab runtime to
a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see Distributed training with Keras.)
Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to Model.fit() will take care of distributing the passed-in dataset to the model replicas.
Note: The single TPU worker host already has the resource objects (think: a lookup table) needed for tokenization. Scaling up to multiple workers requires use of Strategy.experimental_distribute_datasets_from_function with a function that loads the preprocessing model separately onto each worker.
Optimizer
Fine-tuning follows the optimizer set-up from BERT pre-training (as in Classify text with BERT): It uses the AdamW optimizer with a linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).
End of explanation
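The warm-up/decay behaviour described above can be sketched in a few lines. This shows the shape of the schedule only, under the assumption of a plain linear decay to zero; it is not the internals of `optimization.create_optimizer`:

```python
def lr_at_step(step, init_lr, num_train_steps, num_warmup_steps):
    """Linear warm-up to init_lr, then linear decay to zero."""
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    remaining = num_train_steps - step
    return init_lr * max(0.0, remaining / (num_train_steps - num_warmup_steps))

# ramps up over the first 10% of steps, then decays:
# lr_at_step(0, 2e-5, 100, 10)  -> 0.0
# lr_at_step(10, 2e-5, 100, 10) -> 2e-5
```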
main_save_path = './my_models'
bert_type = tfhub_handle_encoder.split('/')[-2]
saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}'
saved_model_path = os.path.join(main_save_path, saved_model_name)
preprocess_inputs = bert_preprocess_model.inputs
bert_encoder_inputs = bert_preprocess_model(preprocess_inputs)
bert_outputs = classifier_model(bert_encoder_inputs)
model_for_export = tf.keras.Model(preprocess_inputs, bert_outputs)
print('Saving', saved_model_path)
# Save everything on the Colab host (even the variables from TPU memory)
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model_for_export.save(saved_model_path, include_optimizer=False,
options=save_options)
Explanation: Export for inference
You will create a final model that has the preprocessing part and the fine-tuned BERT we've just created.
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export.
This final assembly is what will be saved.
You are going to save the model on colab and later you can download to keep it for the future (View -> Table of contents -> Files).
End of explanation
with tf.device('/job:localhost'):
reloaded_model = tf.saved_model.load(saved_model_path)
#@title Utility methods
def prepare(record):
model_inputs = [[record[ft]] for ft in sentence_features]
return model_inputs
def prepare_serving(record):
model_inputs = {ft: record[ft] for ft in sentence_features}
return model_inputs
def print_bert_results(test, bert_result, dataset_name):
bert_result_class = tf.argmax(bert_result, axis=1)[0]
if dataset_name == 'glue/cola':
print('sentence:', test[0].numpy())
if bert_result_class == 1:
print('This sentence is acceptable')
else:
print('This sentence is unacceptable')
elif dataset_name == 'glue/sst2':
print('sentence:', test[0])
if bert_result_class == 1:
print('This sentence has POSITIVE sentiment')
else:
print('This sentence has NEGATIVE sentiment')
elif dataset_name == 'glue/mrpc':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
print('Are a paraphrase')
else:
print('Are NOT a paraphrase')
elif dataset_name == 'glue/qqp':
print('question1:', test[0])
print('question2:', test[1])
if bert_result_class == 1:
print('Questions are similar')
else:
print('Questions are NOT similar')
elif dataset_name == 'glue/mnli':
print('premise :', test[0])
print('hypothesis:', test[1])
if bert_result_class == 1:
print('This premise is NEUTRAL to the hypothesis')
elif bert_result_class == 2:
print('This premise CONTRADICTS the hypothesis')
else:
print('This premise ENTAILS the hypothesis')
elif dataset_name == 'glue/qnli':
print('question:', test[0])
print('sentence:', test[1])
if bert_result_class == 1:
print('The question is NOT answerable by the sentence')
else:
print('The question is answerable by the sentence')
elif dataset_name == 'glue/rte':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
      print('Sentence1 DOES NOT entail sentence2')
else:
print('Sentence1 entails sentence2')
elif dataset_name == 'glue/wnli':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
      print('Sentence1 DOES NOT entail sentence2')
else:
print('Sentence1 entails sentence2')
print('BERT raw results:', bert_result[0])
print()
Explanation: Test the model
The final step is testing the results of your exported model.
Just to make some comparison, let's reload the model and test it using some inputs from the test split from the dataset.
Note: The test is done on the colab host, not the TPU worker that it has connected to, so it appears below with explicit device placements. You can omit those when loading the SavedModel elsewhere.
End of explanation
with tf.device('/job:localhost'):
test_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[test_split])
for test_row in test_dataset.shuffle(1000).map(prepare).take(5):
if len(sentence_features) == 1:
result = reloaded_model(test_row[0])
else:
result = reloaded_model(list(test_row))
print_bert_results(test_row, result, tfds_name)
Explanation: Test
End of explanation
with tf.device('/job:localhost'):
serving_model = reloaded_model.signatures['serving_default']
for test_row in test_dataset.shuffle(1000).map(prepare_serving).take(5):
result = serving_model(**test_row)
# The 'prediction' key is the classifier's defined model name.
print_bert_results(list(test_row.values()), result['prediction'], tfds_name)
Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows:
End of explanation |
Description:
Working with the pyCNN package
The pyCNN package is intended for neural-network processing on the CPU, and is particularly suited for NLP applications. It is a python-wrapper for the CNN package written by Chris Dyer.
There are two modes of operation
Step1: The first block creates a model and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
Step2: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
Step3: To use the trainer, we need to
Step4: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
Step5: We now feed each question / answer pair to the network, and try to minimize the loss.
Step6: Our network is now trained. Let's verify that it indeed learned the xor function
Step7: In case we are curious about the parameter values, we can query them
Step8: To summarize
Here is a complete program
Step9: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks. | Python Code:
# create a model and add the parameters.
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
renew_cg() # new computation graph. not strictly needed here, but good practice.
# associate the parameters with cg Expressions
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
#b[1:-1].value()
b.value()
Explanation: Working with the pyCNN package
The pyCNN package is intended for neural-network processing on the CPU, and is particularly suited for NLP applications. It is a python-wrapper for the CNN package written by Chris Dyer.
There are two modes of operation:
Static networks, in which a network is built and then being fed with different inputs/outputs. Most NN packages work this way.
Dynamic networks, in which a new network is built for each training example (sharing parameters with the networks of other training examples). This approach is what makes pyCNN unique, and where most of its power comes from.
We will describe both of these modes.
Package Fundamentals
The main piece of pyCNN is the ComputationGraph, which is what essentially defines a neural network.
The ComputationGraph is composed of expressions, which relate to the inputs and outputs of the network,
as well as the Parameters of the network. The parameters are the things in the network that are optimized over time, and all of the parameters sit inside a Model. There are trainers (for example SimpleSGDTrainer) that are in charge of setting the parameter values.
We will not be using the ComputationGraph directly, but it is there in the background, as a singleton object.
When pycnn is imported, a new ComputationGraph is created. We can then reset the computation graph to a new state
by calling renew_cg().
Static Networks
The life-cycle of a pyCNN program is:
1. Create a Model, and populate it with Parameters.
2. Renew the computation graph, and create Expression representing the network
(the network will include the Expressions for the Parameters defined in the model).
3. Optimize the model for the objective of the network.
As an example, consider a model for solving the "xor" problem. The network has two inputs, which can be 0 or 1, and a single output which should be the xor of the two inputs.
We will model this as a multi-layer perceptron with a single hidden node.
Let $x = x_1, x_2$ be our input. We will have a hidden layer of 8 nodes, and an output layer of a single node. The activation on the hidden layer will be a $\tanh$. Our network will then be:
$\sigma(V(\tanh(Wx+b)))$
Where $W$ is an $8 \times 2$ matrix, $V$ is a $1 \times 8$ matrix, and $b$ is an 8-dim vector.
We want the output to be either 0 or 1, so we take the output layer to be the logistic-sigmoid function, $\sigma(x)$, that takes values between $-\infty$ and $+\infty$ and returns numbers in $[0,1]$.
We will begin by defining the model and the computation graph.
End of explanation
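Before wiring this up in pyCNN, the forward computation itself can be written out in plain Python. This is purely illustrative; the weight layout is an assumption (W as a list of 8 rows of length 2, V as the single row of the 1x8 matrix):

```python
import math

def mlp_forward(x, W, b, V):
    """sigma(V * tanh(W x + b)) for the xor network described above."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
              for row, bi in zip(W, b)]
    z = sum(v * h for v, h in zip(V, hidden))
    return 1.0 / (1.0 + math.exp(-z))    # logistic sigmoid

# With all-zero weights the sigmoid of 0 gives exactly 0.5:
mlp_forward([0, 1], [[0, 0]] * 8, [0] * 8, [0] * 8)   # 0.5
```

Training is then just a matter of finding weights for which this output matches the xor truth table.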
x = vecInput(2) # an input vector of size 2. Also an expression.
output = logistic(V*(tanh((W*x)+b)))
# we can now query our network
x.set([0,0])
output.value()
# we want to be able to define a loss, so we need an input expression to work against.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
x.set([1,0])
y.set(0)
print loss.value()
y.set(1)
print loss.value()
Explanation: The first block creates a model and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
End of explanation
trainer = SimpleSGDTrainer(m)
Explanation: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
End of explanation
x.set([1,0])
y.set(1)
loss_value = loss.value() # this performs a forward through the network.
print "the loss before step is:",loss_value
# now do an optimization step
loss.backward() # compute the gradients
trainer.update()
# see how it affected the loss:
loss_value = loss.value(recalculate=True) # recalculate=True means "don't use precomputed value"
print "the loss after step is:",loss_value
Explanation: To use the trainer, we need to:
* call the forward_scalar method of ComputationGraph. This will run a forward pass through the network, calculating all the intermediate values until the last one (loss, in our case), and then convert the value to a scalar. The final output of our network must be a single scalar value. However, if we do not care about the value, we can just use cg.forward() instead of cg.forward_scalar().
* call the backward method of ComputationGraph. This will run a backward pass from the last node, calculating the gradients with respect to minimizing the last expression (in our case we want to minimize the loss). The gradients are stored in the model, and we can now let the trainer take care of the optimization step.
* call trainer.update() to optimize the values with respect to the latest gradients.
End of explanation
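Conceptually, the last step is plain gradient descent. A tiny sketch of what `trainer.update()` does to each parameter (ignoring learning-rate schedules and the fact that the real update happens in place):

```python
def sgd_update(params, grads, lr=0.1):
    """Move each parameter a small step against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

sgd_update([1.0, -2.0], [2.0, -2.0], lr=0.5)   # [0.0, -1.0]
```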
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
Explanation: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
End of explanation
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: We now feed each question / answer pair to the network, and try to minimize the loss.
End of explanation
x.set([0,1])
print "0,1",output.value()
x.set([1,0])
print "1,0",output.value()
x.set([0,0])
print "0,0",output.value()
x.set([1,1])
print "1,1",output.value()
Explanation: Our network is now trained. Let's verify that it indeed learned the xor function:
End of explanation
W.value()
V.value()
b.value()
Explanation: In case we are curious about the parameter values, we can query them:
End of explanation
# define the parameters
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
# renew the computation graph
renew_cg()
# add the parameters to the graph
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
# create the network
x = vecInput(2) # an input vector of size 2.
output = logistic(V*(tanh((W*x)+b)))
# define the loss with respect to an output y.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
# create training instances
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# train the network
trainer = SimpleSGDTrainer(m)
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: To summarize
Here is a complete program:
End of explanation
# create training instances, as before
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# create a network for the xor problem given input and output
def create_xor_network(model, inputs, expected_answer):
renew_cg()
W = parameter(model["W"])
V = parameter(model["V"])
b = parameter(model["b"])
x = vecInput(len(inputs))
x.set(inputs)
y = scalarInput(expected_answer)
output = logistic(V*(tanh((W*x)+b)))
loss = binary_log_loss(output, y)
return loss
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
trainer = SimpleSGDTrainer(m)
seen_instances = 0
total_loss = 0
for question, answer in zip(questions, answers):
loss = create_xor_network(m, question, answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks.
End of explanation |
Description:
Welcome to another Pyladies meetup!!
In this session we will learn how to create our own functions in Python. But first, what are functions?
A function in Python is an organized, reusable block of code that performs a task. Remember the functions we have used in Python: for example, when we wanted to know how many elements there are in a list, we used the len function. Python already comes with a large collection of functions you can use (so we don't have to reinvent the wheel every time we need something), and here is a list of the functions built into Python.
Using functions in Python
As I have mentioned on several occasions, all functions in Python have the same structure, as illustrated below
Step1: Exercise 1.
Each of you will pick one of the functions built into Python and explain it to the rest of the group.
Creating your own functions in Python
Now that you are more familiar with Python's ready-made functions, you will notice that there won't always be one that does what you need. So, how can I write my own functions?
In Python the way to do it is the following
Step2: Pause for questions
3 ..
2..
1
Great! Now it's your turn
Step3: Now test your function with these percentage values
Step4: Now let's see what happens when we call the function
Step5: This does not mean that the function I just wrote is final and that I cannot modify it to compute powers with other numbers. As we will see below, the function can take any number. We just have to make it explicit this time...
animales = ['perro', 'gato', 'perico']
len(animales)
animales[1]
x = 4
type(int('43'))
Explanation: Welcome to another Pyladies meetup!!
In this session we will learn how to create our own functions in Python. But first, what are functions?
A function in Python is an organized, reusable block of code that performs a task. Remember the functions we have used in Python: for example, when we wanted to know how many elements there are in a list, we used the len function. Python already comes with a large collection of functions you can use (so we don't have to reinvent the wheel every time we need something), and here is a list of the functions built into Python.
Using functions in Python
As I have mentioned on several occasions, all functions in Python have the same structure, as illustrated below:
name + parentheses + arguments
In the case of len, the structure would be the following:
len(list)
len takes as its argument the list or array whose length you want to know. Once the function is executed, it returns an object (which, of course, will be what we asked it for).
End of explanation
def cuadrado(numero):
    '''Function that returns the square of a number.
    You need a number as the argument.'''
resultado = numero**2
return resultado
# Let's test the function
cuadrado(8)
cuadrado(8.0)
cuadrado(-8)
Explanation: Exercise 1.
Each of you will pick one of the functions built into Python and explain it to the rest of the group.
Creating your own functions in Python
Now that you are more familiar with Python's ready-made functions, you will notice that there won't always be one that does what you need. So, how can I write my own functions?
In Python the way to do it is the following:
First you have to make it clear to Python that the block of code (or small program) you are about to write is a function; for this you write def, which is short for define.
Then you have to come up with a name for your function. In theory you can call it whatever you want; however, it is good practice in Python to name your functions in such a way that when you read them months or years later you can clearly remember what they do.
After writing def and the function name comes something crucial for creating functions. Based on the structure of the functions that already come with Python, what do you think it is...
... Exactly!! The arguments!!
This part is crucial because this is where you will get the information needed to produce a result. We will see this further on.
Then comes the block of code you want to execute, which can consist of complex operations and data transformations.
Finally, so that it is clear to Python what it should give back at the end of the function, you need to write a return followed by whatever the function's result will be.
The structure for defining functions ends up as follows:
def function_name(argument 1, argument 2, ... , argument n):
    operation 1
    operation 2
    result = operation 1 + operation 2
    return result
Let's write a small function as an example.
End of explanation
def barras(porcentaje):
gatos = (porcentaje*20)//100
guiones = 20 - gatos
print('['+'#'* gatos + '-' * guiones + ']'+str(porcentaje)+'%')
barras(167)
gatos = (35*20)/100
Explanation: Pause for questions
3 ..
2..
1
Great! Now it's your turn :)
Exercise 2
Write a function that draws a loading bar for a given percentage. Let's say we want to draw 35%; then the result of running the function would be:
[#######-------------] 35%
End of explanation
def exponente(numero=4, exponente=2):
    '''Takes a number and raises it to the power of another.'''
resultado = numero**exponente
return resultado
Explanation: Now test your function with these percentage values:
* 12.5%
* 167%
* -20%
Exercise 3
Write a function that tells you how many consonants there are in a word. Example: the word "carroza" has 4 consonants
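One possible solution sketch for this exercise. The function name contar_consonantes and the treatment of accented vowels are my own choices, not something given in the course:

```python
def contar_consonantes(palabra):
    '''Count the consonants in a word; accented vowels count as vowels.'''
    vocales = "aeiouáéíóú"
    return sum(1 for letra in palabra.lower()
               if letra.isalpha() and letra not in vocales)

print(contar_consonantes("carroza"))  # 4
```

Non-letter characters are ignored, so the function also works for phrases with spaces.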
Default arguments
There are occasions when the arguments of a function we are about to create will be used the same way day after day, or simply make more sense as given values. To avoid having to write them every time we call the function, we can define them at the moment we create the function.
Let's assume I want to make a function that raises a number x to the power n. Say that, in my experience, most people want to know the square of 4. What I do, then, is write a function whose default arguments are 4 and 2... Let's look at the example
End of explanation
exponente()
Explanation: Now let's see what happens when we call the function
End of explanation
exponente(4, 0.5)
exponente(5, -1)
exponente(0.5, 2)
Explanation: This does not mean that the function I have just written is final and that I cannot modify it to compute powers with other numbers. As we will see next, the function can take any numbers; we just have to make them explicit this time...
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning In-Depth
Step1: Motivating Random Forests
Step2: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in
Step3: We have some convenience functions in the repository that help
Step4: Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits
Step5: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question
Step6: The details of the classifications are completely different! That is an indication of over-fitting
Step7: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer
Step8: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note
Step9: As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!
Example
Step10: To remind us what we're looking at, we'll visualize the first few data points
Step11: We can quickly classify the digits using a decision tree as follows
Step12: We can check the accuracy of this classifier
Step13: and for good measure, plot the confusion matrix
Step14: Exercise
Repeat this classification task with sklearn.ensemble.RandomForestClassifier. How does the max_depth, max_features, and n_estimators affect the results?
Try this classification with sklearn.svm.SVC, adjusting kernel, C, and gamma. Which classifier performs optimally?
Try a few sets of parameters for each model and check the F1 score (sklearn.metrics.f1_score) on your results. What's the best F1 score you can reach?
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
Explanation: Supervised Learning In-Depth: Random Forests
Previously we saw a powerful discriminative classifier, Support Vector Machines.
Here we'll take a look at motivating another powerful algorithm. This one is a non-parametric algorithm called Random Forests.
End of explanation
import fig_code
fig_code.plot_example_decision_tree()
Explanation: Motivating Random Forests: Decision Trees
Random forests are an example of an ensemble learner built on decision trees.
For this reason we'll start by discussing decision trees themselves.
Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification:
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
Explanation: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information.
Creating a Decision Tree
Here's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:
End of explanation
from fig_code import visualize_tree, plot_tree_interactive
Explanation: We have some convenience functions in the repository that help
End of explanation
plot_tree_interactive(X, y);
Explanation: Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:
End of explanation
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
Explanation: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question: Do you see any problems with this?
Decision Trees and over-fitting
One issue with decision trees is that it is very easy to create trees which over-fit the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:
End of explanation
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from IPython.html.widgets import interact
interact(fit_randomized_tree, random_state=[0, 100]);
Explanation: The details of the classifications are completely different! That is an indication of over-fitting: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.
Ensembles of Estimators: Random Forests
One possible way to address over-fitting is to use an Ensemble Method: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!
One of the most common ensemble methods is the Random Forest, in which the ensemble is made up of many decision trees which are in some way perturbed.
There are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
Explanation: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
End of explanation
from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
Explanation: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the scikit-learn documentation)
Quick Example: Moving to Regression
Above we were considering random forests within the context of classification.
Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is sklearn.ensemble.RandomForestRegressor.
Let's quickly demonstrate how this can be used:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
Explanation: As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!
Example: Random Forest for Classifying Digits
We previously saw the hand-written digits data. Let's use that here to test the efficacy of the SVM and Random Forest classifiers.
End of explanation
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
Explanation: To remind us what we're looking at, we'll visualize the first few data points:
End of explanation
from sklearn.model_selection import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
Explanation: We can quickly classify the digits using a decision tree as follows:
End of explanation
metrics.accuracy_score(ypred, ytest)
Explanation: We can check the accuracy of this classifier:
End of explanation
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
Explanation: and for good measure, plot the confusion matrix:
End of explanation
clf = RandomForestClassifier(max_depth=100)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
Explanation: Exercise
Repeat this classification task with sklearn.ensemble.RandomForestClassifier. How does the max_depth, max_features, and n_estimators affect the results?
Try this classification with sklearn.svm.SVC, adjusting kernel, C, and gamma. Which classifier performs optimally?
Try a few sets of parameters for each model and check the F1 score (sklearn.metrics.f1_score) on your results. What's the best F1 score you can reach?
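As a hedged starting point for the SVC part of this exercise (the hyperparameters below, C=100 and gamma=0.001, are illustrative guesses, not tuned answers):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

digits = load_digits()
Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target,
                                                random_state=0)

# Illustrative RBF-kernel parameters; try other C/gamma values yourself.
svc = SVC(kernel='rbf', C=100, gamma=0.001)
svc.fit(Xtrain, ytrain)
score = f1_score(ytest, svc.predict(Xtest), average='macro')
print("SVC macro F1: {:.3f}".format(score))
```

Repeating the same pattern with RandomForestClassifier while varying max_depth, max_features and n_estimators lets you compare the two models on an equal footing.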
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimuon spectrum
<hr style="border-top-width: 4px; border-top-color: #34609b;">
Step1: A little extra
Step2: Convert to ROOT format and analyse
First of all we convert the CSV file into ROOT format, i.e. we fill a TTree data structure. But before that, we fetch the file if it is not already available locally.
Step3: Now we create a histogram to hold the invariant mass values. In order to loop over the TTree rows, we use the TTree
Step4: That might have been too fast. We now make the analysis above more explicit producing a plot also for the J/Psi particle.
Step5: Now time to draw our plot
Python Code:
import ROOT
Explanation: Dimuon spectrum
<hr style="border-top-width: 4px; border-top-color: #34609b;">
This ROOTbook produces a plot of the dimuon spectrum starting from a subset of the CMS collision events of Run2010B.
Dataset Reference:<br>
McCauley, T. (2014). Dimuon event information derived from the Run2010B public Mu dataset. CERN Open Data Portal. DOI: 10.7483/OPENDATA.CMS.CB8H.MFFA.
End of explanation
%jsroot on
Explanation: A little extra: JavaScript visualisation. This command will become a magic very soon.
End of explanation
inputFileName = 'MuRun2010B.csv'
import os
if not os.path.exists(inputFileName):
import urllib2
response = urllib2.urlopen('https://raw.githubusercontent.com/dpiparo/swanExamples/master/notebooks/MuRun2010B.csv')
filecontent = response.read()
with open(inputFileName,"w") as f_out:
f_out.write(filecontent)
dimuons = ROOT.TTree("MuonPairs","MuonPairs")
dimuons.ReadFile(inputFileName)
Explanation: Convert to ROOT format and analyse
First of all we convert the CSV file into ROOT format, i.e. we fill a TTree data structure. But before that, we fetch the file if it is not already available locally.
End of explanation
invMass = ROOT.TH1F("invMass","CMS Opendata: #mu#mu mass;#mu#mu mass [GeV];Events",512, 2, 110)
invMassFormula = "sqrt((E1 + E2)^2 - ((px1 + px2)^2 + (py1 + py2)^2 + (pz1 + pz2)^2))"
cut = "Q1*Q2==-1"
c = ROOT.TCanvas()
dimuons.Draw(invMassFormula + " >> invMass",cut,"hist")
c.SetLogx()
c.SetLogy()
c.Draw()
Explanation: Now we create a histogram to hold the invariant-mass values. In order to loop over the TTree rows, we use the TTree::Draw method: this is the most straightforward way in which you can loop over an N-tuple in ROOT.
Notice that the plot is an interactive, JavaScript-based visualisation: you can zoom in on the resonances to inspect the result more closely.
End of explanation
from math import sqrt
invMass = ROOT.TH1F("Spectrum","Subset of CMS Run 2010B;#mu#mu mass [GeV];Events",1024, 2, 110)
jpsiLow = 2.95
jpsiHigh = 3.25
jpsi = ROOT.TH1F("jpsi","Subset of CMS Run 2010B: J/#psi window;#mu#mu mass [GeV];Events",128, jpsiLow, jpsiHigh)
for e in dimuons:  # a loop on the events
    if e.Q1 * e.Q2 != -1:
        continue  # keep only opposite-charge muon pairs
    m2 = (e.E1 + e.E2)**2 - ((e.px1 + e.px2)**2 + (e.py1 + e.py2)**2 + (e.pz1 + e.pz2)**2)
    m = sqrt(m2)
    invMass.Fill(m)
    if m < jpsiHigh and m > jpsiLow:
        jpsi.Fill(m)
Explanation: That might have been too fast. We now make the analysis above more explicit, producing a plot for the J/psi particle as well.
End of explanation
dualCanvas = ROOT.TCanvas("DualCanvas","DualCanvas",800,512)
dualCanvas.Divide(2,1)
leftPad = dualCanvas.cd(1)
leftPad.SetLogx()
leftPad.SetLogy()
invMass.Draw("Hist")
dualCanvas.cd(2)
jpsi.Draw("HistP")
dualCanvas.Draw()
Explanation: Now it is time to draw our plot: this time we will inline an image in the notebook. On the same canvas we will plot the full spectrum and the zoom on the J/psi particle.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project
Step1: <a id='wrangling'></a>
Data Wrangling
General Properties
Step3: Data Cleaning
As is evident from the data, the cast of each movie is stored as a string separated by the | symbol. This needs to be converted into a more suitable type so that it can be consumed properly later.
Step4: Convert cast, genres, director and production_companies columns to array
Step5: <a id='eda'></a>
Exploratory Data Analysis
Research Question 1
Step8: Research Question 2
Step9: Research Question 3
Step10: Find common genres in highest grossing movies
Step11: Popularity of highest grossing movies
Step13: Directors of highest grossing movies
Step14: Cast of highest grossing movies
Step15: Production companies of highest grossing movies
Step18: Highest grossing budget
Research Question 4
Step19: Research Question 5
Python Code:
# import necessary libraries
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
Explanation: Project: Investigate TMDb Movie Data
Table of Contents
<ul>
<li><a href="#intro">Introduction</a></li>
<li><a href="#wrangling">Data Wrangling</a></li>
<li><a href="#eda">Exploratory Data Analysis</a></li>
<li><a href="#conclusions">Conclusions</a></li>
</ul>
<a id='intro'></a>
Introduction
The TMDb data provides details of 10,000 movies. The data include details like cast, revenue, budget, popularity, etc. These data can be analysed to find interesting patterns across movies.
We can use the data to try to answer the following questions:
What is the yearly revenue change?
Which genres are most popular from year to year?
What kinds of properties are associated with movies that have high revenues?
Who are top 15 highest grossing directors?
Who are top 15 highest grossing actors?
End of explanation
# Load TMDb data and print out a few lines. Perform operations to inspect data
# types and look for instances of missing or possibly errant data.
tmdb_movies = pd.read_csv('tmdb-movies.csv')
tmdb_movies.head()
tmdb_movies.describe()
Explanation: <a id='wrangling'></a>
Data Wrangling
General Properties
End of explanation
# Pandas read empty string value as nan, make it empty string
tmdb_movies.cast.fillna('', inplace=True)
tmdb_movies.genres.fillna('', inplace=True)
tmdb_movies.director.fillna('', inplace=True)
tmdb_movies.production_companies.fillna('', inplace=True)
def string_to_array(data):
    '''Split the given string on the `|` separator and return the result as a list.'''
    return data.split('|')
Explanation: Data Cleaning
As is evident from the data, the cast of each movie is stored as a string separated by the | symbol. This needs to be converted into a more suitable type so that it can be consumed properly later.
End of explanation
tmdb_movies.cast = tmdb_movies.cast.apply(string_to_array)
tmdb_movies.genres = tmdb_movies.genres.apply(string_to_array)
tmdb_movies.director = tmdb_movies.director.apply(string_to_array)
tmdb_movies.production_companies = tmdb_movies.production_companies.apply(string_to_array)
Explanation: Convert cast, genres, director and production_companies columns to array
End of explanation
def yearly_growth(mean_revenue):
return mean_revenue - mean_revenue.shift(1).fillna(0)
# Show change in mean revenue over years, considering only movies for which we have revenue data
movies_with_budget = tmdb_movies[tmdb_movies.budget_adj > 0]
movies_with_revenue = movies_with_budget[movies_with_budget.revenue_adj > 0]
revenues_over_years = movies_with_revenue.groupby('release_year').sum()
revenues_over_years.apply(yearly_growth)['revenue'].plot()
revenues_over_years[['budget_adj', 'revenue_adj']].plot()
def log(data):
return np.log(data)
movies_with_revenue[['budget_adj', 'revenue_adj']].apply(log) \
.sort_values(by='budget_adj').set_index('budget_adj')['revenue_adj'].plot(figsize=(20,6))
Explanation: <a id='eda'></a>
Exploratory Data Analysis
Research Question 1: What is the yearly revenue change?
It is evident from the observations below that there is no clear trend in the change in mean revenue over the years.
Mean revenue from year to year is quite unstable. This can be attributed to the number of movies released each year and to how many of them have high or low revenue.
The gap between budget and revenue has widened after 2000. This can be attributed to the worldwide circulation of movies compared to earlier days.
There seems to be a correlation between gross budget and gross revenue over the years.
When the log of revenue_adj is plotted against the log of budget_adj, we can see a clear correlation between a movie's revenue and its budget.
End of explanation
def popular_movies(movies):
return movies[movies['vote_average']>=7]
def group_by_genre(data):
    '''Take genres indexed by (release_year, position) and return a dictionary
    with release_year as key and, as value, a dictionary mapping each movie
    genre to its frequency in that year.'''
genres_by_year = {}
for (year, position), genres in data.items():
for genre in genres:
if year in genres_by_year:
if genre in genres_by_year[year]:
genres_by_year[year][genre] += 1
else:
genres_by_year[year][genre] = 1
else:
genres_by_year[year] = {}
genres_by_year[year][genre] = 1
return genres_by_year
def plot(genres_by_year):
    '''Iterate over the per-year genre counts and plot only years divisible
    by 5, to avoid producing too many graphs.'''
for year, genres in genres_by_year.items():
if year%5 == 0:
pd.DataFrame(grouped_genres[year], index=[year]).plot(kind='bar', figsize=(20, 6))
# Group movies by genre for each year and try to find the correlations
# of genres over years.
grouped_genres = group_by_genre(tmdb_movies.groupby('release_year').apply(popular_movies).genres)
plot(grouped_genres)
Explanation: Research Question 2: Which genres are most popular from year to year?
Since the popularity column indicates the all-time popularity of a movie, it might not be the right metric for measuring popularity over the years. We can instead measure the popularity of a movie by its average vote; here a movie is considered popular if vote_average >= 7.
On analysing the popular movies since 1960 (see the illustrations below), the following observations can be made:
Almost all popular movies include the Drama genre.
Over the years Comedy, Action and Adventure became popular.
In recent years, Documentary, Action and Animation movies gained more popularity.
End of explanation
highest_grossing_movies = tmdb_movies[tmdb_movies['revenue_adj'] >= 1000000000]\
.sort_values(by='revenue_adj', ascending=False)
highest_grossing_movies.head()
Explanation: Research Question 3: What kinds of properties are associated with movies that have high revenues?
We can consider those movies with at least 1 billion in revenue and see what properties they have in common.
Using this criterion, and based on the illustrations below, we can make the following observations about the highest-grossing movies:
Adventure and Action are the most common genres among these movies, followed by Science Fiction, Fantasy and Family.
Most of the movies have an average vote above 7; some have less than 7, but that is because of a smaller number of total votes. This means the highest-grossing movies are popular as well.
Steven Spielberg and Peter Jackson are the directors with the highest number of movies earning at least 1 billion in revenue.
Most directors have only one movie earning at least a billion, hence there seems to be no correlation between highest-grossing movies and directors.
Most cast members appear in only one movie earning at least a billion.
Warner Bros., Walt Disney, Fox Film and Universal Pictures seem to have figured out the secret of the highest-grossing movies: they have the highest number of billion-plus-revenue movies. This does not mean all of their movies have very high revenue.
End of explanation
def count_frequency(data):
frequency_count = {}
for items in data:
for item in items:
if item in frequency_count:
frequency_count[item] += 1
else:
frequency_count[item] = 1
return frequency_count
highest_grossing_genres = count_frequency(highest_grossing_movies.genres)
print(highest_grossing_genres)
pd.DataFrame(highest_grossing_genres, index=['Genres']).plot(kind='bar', figsize=(20, 8))
Explanation: Find common genres in highest grossing movies
End of explanation
highest_grossing_movies.vote_average.hist()
Explanation: Popularity of highest grossing movies
End of explanation
def list_to_dict(data, label):
    '''Build the statistics dict (under the given label) and the index list
    for a DataFrame from a list of (name, value) pairs.'''
statistics = {label: []}
index = []
for item in data:
statistics[label].append(item[1])
index.append(item[0])
return statistics, index
import operator
high_grossing_dirs = count_frequency(highest_grossing_movies.director)
revenues, indexes = list_to_dict(sorted(high_grossing_dirs.items(), key=operator.itemgetter(1), reverse=True)[:20], 'revenue')
pd.DataFrame(revenues, index=indexes).plot(kind='bar', figsize=(20, 5))
Explanation: Directors of highest grossing movies
End of explanation
high_grossing_cast = count_frequency(highest_grossing_movies.cast)
revenues, index = list_to_dict(sorted(high_grossing_cast.items(), key=operator.itemgetter(1), reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
Explanation: Cast of highest grossing movies
End of explanation
high_grossing_prod_comps = count_frequency(highest_grossing_movies.production_companies)
revenues, index = list_to_dict(sorted(high_grossing_prod_comps.items(), key=operator.itemgetter(1), reverse=True)[:30]\
, 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
Explanation: Production companies of highest grossing movies
End of explanation
def grossing(movies, by):
    '''Return a dict mapping each value of the `by` column to the list of
    revenues of that value's movies.'''
revenues = {}
for id, movie in movies.iterrows():
for key in movie[by]:
if key in revenues:
revenues[key].append(movie.revenue_adj)
else:
revenues[key] = [movie.revenue_adj]
return revenues
def gross_revenue(data):
    '''Compute the sum of each value list in the dictionary and return a new
    dictionary with the same keys but cumulative values.'''
gross = {}
for key, revenues in data.items():
gross[key] = np.sum(revenues)
return gross
gross_by_dirs = grossing(movies=movies_with_revenue, by='director')
director_gross_revenue = gross_revenue(gross_by_dirs)
top_15_directors = sorted(director_gross_revenue.items(), key=operator.itemgetter(1), reverse=True)[:15]
revenues, index = list_to_dict(top_15_directors, 'director')
pd.DataFrame(data=revenues, index=index).plot(kind='bar', figsize=(15, 9))
Explanation: Highest grossing budget
Research Question 4: Who are top 15 highest grossing directors?
We can see the top 15 highest-grossing directors in the bar chart below.
It seems Steven Spielberg surpasses the other directors in gross revenue.
End of explanation
gross_by_actors = grossing(movies=tmdb_movies, by='cast')
actors_gross_revenue = gross_revenue(gross_by_actors)
top_15_actors = sorted(actors_gross_revenue.items(), key=operator.itemgetter(1), reverse=True)[:15]
revenues, indexes = list_to_dict(top_15_actors, 'actors')
pd.DataFrame(data=revenues, index=indexes).plot(kind='bar', figsize=(15, 9))
Explanation: Research Question 5: Who are top 15 highest grossing actors?
We can find the top 15 actors by gross revenue as shown below.
As we can see, Harrison Ford tops the chart with the highest gross revenue.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bootstrap replicates
in other words, repeating the same experiment a given number of times.
Step1: For further experiments the Sepal Length variable will be used. Additionally, we will focus on the bootstrap technique applied to the mean.
Step2: Use of numpy.random.choice for bootstrapping
For the bootstrapping technique we will use the numpy.random.choice function, which randomly selects data from a given variable, forming a new data set. Then a function of choice can be used to calculate the required statistic. In our case it will be the mean value.
The code of the function
Step3: It would be interesting to repeat the call of the bootstrap_replicate_1d function many times to see how the mean value changes. For that reason another function is defined
Step5: Repeating the same exercise for Sepal Width
Python Code:
# importing required modules
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# running the functions script
%run stats_func.py
# loading the iris dataset
df = pd.read_csv('iris.csv')
df.head()
# extracting sepal length and sepal width for further analysis
sepalLength = np.array(df['SepalLengthCm'])
sepalWidth = np.array(df['SepalWidthCm'])
Explanation: Bootstrap replicates
in other words, repeating the same experiment a given number of times.
End of explanation
# calculating the mean from data
np.mean(sepalLength)
Explanation: For further experiments the Sepal Length variable will be used. Additionally, we will focus on the bootstrap technique applied to the mean.
End of explanation
bootstrap_replicate_1d(sepalLength, np.mean)
bootstrap_replicate_1d(sepalLength, np.mean)
bootstrap_replicate_1d(sepalLength, np.mean)
Explanation: Use of numpy.random.choice for bootstrapping
For the bootstrapping technique we will use the numpy.random.choice function, which randomly selects data from a given variable, forming a new data set. Then a function of choice can be used to calculate the required statistic. In our case it will be the mean value.
The code of the function:
def bootstrap_replicate_1d(data, func):
'''Generate bootstrap replicate of 1D data'''
bs_sample = np.random.choice(data, len(data))
return func(bs_sample)
To show the impact of the function on the mean value, the function is called three times in the cells below:
End of explanation
bs_replicates = draw_bs_reps(sepalLength, np.mean, size=10000)
# histogram plot will show us how the mean value of sepal length changes when the experiment is repeated 10,000 times
plt.hist(bs_replicates, bins=30, normed=True, edgecolor='black')
plt.xlabel("Mean sepal length [cm]")
plt.ylabel('PDF');
# calculating 95% confidence interval for the mean based on boostrap technique
conf_intervals = np.percentile(bs_replicates, [2.5, 97.5])
conf_intervals
Explanation: It would be interesting to repeat the call of the bootstrap_replicate_1d function many times to see how the mean value changes. For that reason another function is defined:
def draw_bs_reps(data, func, size=1):
    '''Draw bootstrap replicates.'''
# Initialize array of replicates: bs_replicates
bs_replicates = np.empty(size)
# Generate replicates
for i in range(size):
bs_replicates[i] = bootstrap_replicate_1d(data, func)
return bs_replicates
In the example we will call the bootstrap_replicate_1d function 10,000 times.
End of explanation
bs_replicates2 = draw_bs_reps(sepalWidth, np.mean, size=10000)
# histogram plot will show us how the mean value of sepal width changes when the experiment is repeated 10,000 times
plt.hist(bs_replicates2, bins=30, normed=True, edgecolor='black')
plt.xlabel("Mean sepal width [cm]")
plt.ylabel('PDF');
# calculating 95% confidence interval for the mean based on boostrap technique
conf_intervals2 = np.percentile(bs_replicates2, [2.5, 97.5])
conf_intervals2
Explanation: Repeating the same exercise for Sepal Width
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
More solving the problem with some code before I write the code.
This post is a continuation on solving a problem before writing the code.
Defining the problem.
2 objects I want to understand are unfamiliar to me
Step1: The running instance of MongoDB was created using Docker files from dm-wyncode/docker-mongo-flpd-hackathon-data.
I wrote an idiomatic Python package so I could easily reuse code. The repository is dm-wyncode/flpd_helper
import objects needed to talk to a running instance of MongoDb
Step2: I created constants that avoid repetition and make the aggregation pipelines easier to read.
You can see their values in this Python module.
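A hedged illustration of the kind of constants meant here; these names and values are my own guesses for readability, not the contents of the author's module:

```python
# Hypothetical pipeline constants; the real ones live in the flpd_helper package.
DATE_OCCURRED = "$Date Occurred"
GROUP = "$group"
SUM = "$sum"
COUNT = "count"

# They let an aggregation stage read as plain names instead of repeated quoted strings:
stage = {GROUP: {"_id": DATE_OCCURRED, COUNT: {SUM: 1}}}
print(stage)
```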
Step3: Create a logger for logging debugging info and displaying it in this notebook.
Step4: Sanity check
Step5: The actual string that makes up the pipeline looks like this. Recall that I created constants rather than repeatedly typing the same quoted strings.
Step6: Note that there are 1071 records where the date is blank.
Step7: Creating a new and better collection called "valid_citations".
Another problem I discovered with the raw data was that the "Date Occurred" field held a text string in USA-styled date notation. While this date notation may be idiomatically comfortable to USAmericans, the reversal of the day and month makes it impossible to sort date strings in code.
I decided to go one step further and insert datetime objects into the "Date Occurred" field.
The code to remove the blank entries and insert valid records with datetime objects is here in the 'load_valid_date_data' function.
Check valid_citations collection has documents.
Step8: Notice that the document count for the valid_citations collection is less than the document count for the citations collection because the invalid entries were removed.
Step9: This aggregate results in a citations count per date.
Here datetime.datetime.utcfromtimestamp(0) will be fed into the pipeline as a BSON Date representing "epoch". When you $subtract one BSON Date from another the difference in milliseconds is returned. This allows you to "round" the date to the current day by again subtracting the $mod result to get the remainder of milliseconds difference from a day.
The same is true of $add where "adding" a BSON Date to a numeric value will result in a BSON Date.
Taken from datetime aggregation how-to.
Step10: This aggregate results in a citations count per date using substrings rather than datetime objects.
Step11: Using Python to get the max and min.
TODO
Step12: How many empty dates are there?
Using Python to do the counting with a simple find.
I'll trust this simple method as the gold standard against which to compare a more complex MongoDB query.
I wanted a method in which I am confident against which I could compare an aggregate method from MongoDB. Since I am learning the aggregate method I wanted to test my answer. Since I do not have canonical answers yet about this data set I have to make two educated guesses and compare them.
Step13: Number of documents removed from the database.
Step14: Using a MongoDB aggregate to do the counting.
Step15: Testing my two educated guesses. | Python Code:
import logging
from pprint import pprint
from itertools import chain
from datetime import datetime
Explanation: More solving the problem with some code before I write the code.
This post is a continuation on solving a problem before writing the code.
Defining the problem.
2 objects I want to understand are unfamiliar to me:
MongoDB
Data provided by the City of Fort Lauderdale at the Code for Fort Lauderdale first annual hackathon 2016.
I ultimately have a goal of building an interactive website with the police data.
I decided to first tackle the problem of learning how to query MongoDB.
In the process of looking at the police data, I discovered that it needed some cleaning up.
As a user of the police data, I do not expect to see entries with blank or impossible values.
Of course finding impossible or illogical data is not unusual. Stuff happens!
So the task at hand is to clean up the data and do some sanity checks.
Sanity checking the data.
First, import some Python built-ins that will be needed to get the job done.
End of explanation
from flpd_helper import (
citations, # a MongoDB collection created from CSV data
valid_citations, # a MongoDB collection where validated data will go
db, # a database instance
log_collection_counts, # function that prints collection counts
)
# importing from flpd_helper results in info being logged from that package
Explanation: The running instance of MongoDB was created using Docker files from dm-wyncode/docker-mongo-flpd-hackathon-data.
I wrote an idiomatic Python package so I could easily reuse code. The repository is dm-wyncode/flpd_helper
import objects needed to talk to a running instance of MongoDB
End of explanation
from bson import SON
from flpd_helper.constants import * # import almost all the constants.
from flpd_helper.constants import ( # underscores are not imported with import * syntax
_DATE_OCCURRED,
_ID,
)
Explanation: I created constants that avoid repetition and make the aggregation pipelines easier to read.
You can see their values in this Python module.
End of explanation
from logging_utility import get_logger
logger = logging.getLogger(__name__)
logger = get_logger(logger)
logger.debug("The logger is working.")
Explanation: Create a logger for logging debugging info and displaying it in this notebook.
End of explanation
pipeline = [
{GROUP: {_ID: _DATE_OCCURRED, COUNT: {SUM: 1}}},
{SORT: SON([(COUNT, -1), (_ID, -1), ])},
{LIMIT: 10}
]
Explanation: Sanity check: Are there blank values in the citation data entries?
Create a pipeline which is an array of dictionaries that contain MongoDB query instructions.
This one aggregates the field "Date Occurred" and then counts and sorts the data with a limit of 10 items.
End of explanation
pprint(pipeline)
Explanation: The actual string that makes up the pipeline looks like this. Recall that I created constants rather than repeatedly typing the same quoted strings.
End of explanation
list(citations.aggregate(pipeline))
Explanation: Note that there are 1071 records where the date is blank.
End of explanation
log_collection_counts((valid_citations, ))
Explanation: Creating a new and better collection called "valid_citations".
Another problem I discovered with the raw data was that the "Date Occurred" field held a text string in USA-styled date notation. While this date notation may be idiomatically comfortable to USAmericans, the reversal of the day and month makes it impossible to sort date strings in code.
I decided to go one step further and insert datetime objects into the "Date Occurred" field.
The code to remove the blank entries and insert valid records with datetime objects is here in the 'load_valid_date_data' function.
Check valid_citations collection has documents.
End of explanation
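A minimal sketch of what load_valid_date_data presumably does: skip documents whose "Date Occurred" is blank and parse the US-style date strings into datetime objects. The MM/DD/YYYY format string is an assumption, as is the helper name:

```python
from datetime import datetime

def to_valid(doc, field="Date Occurred", fmt="%m/%d/%Y"):
    """Return a copy of doc with the date string parsed, or None if blank/invalid.

    Hypothetical helper; the real load_valid_date_data may differ.
    """
    raw = doc.get(field, "")
    if not raw:
        return None
    try:
        parsed = datetime.strptime(raw, fmt)
    except ValueError:
        return None
    valid = dict(doc)
    valid[field] = parsed
    return valid

docs = [{"Date Occurred": "03/05/2014"}, {"Date Occurred": ""}]
valid = [d for d in (to_valid(doc) for doc in docs) if d is not None]
```

The blank document is dropped and the surviving one carries a real datetime, which is what makes the later date arithmetic and sorting possible.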
log_collection_counts((citations, ))
Explanation: Notice that the document count for the valid_citations collection is less than the document count for the citations collection because the invalid entries were removed.
End of explanation
# give a citation count based on date and time
pipeline = [
{ GROUP: {
_ID: {
ADD: [
{ SUBTRACT: [
{ SUBTRACT: [ _DATE_OCCURRED, datetime.utcfromtimestamp(0) ] },
{ MOD: [
{ SUBTRACT: [ _DATE_OCCURRED, datetime.utcfromtimestamp(0) ] },
1000 * 60 * 60 * 24
]}
]},
datetime.utcfromtimestamp(0)
]
},
COUNT: { SUM: 1 }
}},
{ SORT: { _ID: 1 } },
]
logger.info(pipeline)
limited_pipeline = pipeline[:] # copy pipeline
limited_pipeline.append({LIMIT: 15})
list(valid_citations.aggregate(limited_pipeline))
Explanation: This aggregate results in a citations count per date.
Here datetime.datetime.utcfromtimestamp(0) will be fed into the pipeline as a BSON Date representing "epoch". When you $subtract one BSON Date from another the difference in milliseconds is returned. This allows you to "round" the date to the current day by again subtracting the $mod result to get the remainder of milliseconds difference from a day.
The same is true of $add where "adding" a BSON Date to a numeric value will result in a BSON Date.
Taken from datetime aggregation how-to.
End of explanation
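The same epoch arithmetic can be checked in plain Python: subtract the epoch, drop the remainder modulo one day (in milliseconds), and add the epoch back, which truncates any datetime to midnight:

```python
from datetime import datetime, timedelta

def round_to_day(dt, epoch=datetime.utcfromtimestamp(0)):
    """Mirror the $subtract/$mod/$add pipeline: truncate dt to midnight."""
    ms = int((dt - epoch).total_seconds() * 1000)   # $subtract
    day_ms = 1000 * 60 * 60 * 24
    return epoch + timedelta(milliseconds=ms - ms % day_ms)  # $add back after $mod
```

This is exactly the "rounding" the pipeline performs before grouping citations per day.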
# padding with zeros, number of citations by date
pipeline = [
{GROUP: {
_ID : { CONCAT: [ # join the year, month, day with a dash
{SUBSTR: [{YEAR: _DATE_OCCURRED}, 0, 4 ]},
DASH,
{ COND: [ # pad the month with a leading "0" if less than 9
{ LESS_THAN_OR_EQUAL: [ { MONTH: _DATE_OCCURRED }, 9 ] },
{ CONCAT: [
ZERO_STRING, { SUBSTR: [ { MONTH: _DATE_OCCURRED }, 0, 2 ] }
]},
{ SUBSTR: [ { MONTH: _DATE_OCCURRED }, 0, 2 ] }
]},
DASH,
{ COND: [ # pad the day of month with a leading "0" if less than 9
{ LESS_THAN_OR_EQUAL: [ { DAYOFMONTH: _DATE_OCCURRED }, 9 ] },
{ CONCAT: [
ZERO_STRING, { SUBSTR: [ { DAYOFMONTH: _DATE_OCCURRED }, 0, 2 ] }
]},
{ SUBSTR: [ { DAYOFMONTH: _DATE_OCCURRED }, 0, 2 ] }
]},
]},
COUNT: {SUM: 1 } # count how many records for each YYYY-MM-DD entry
}},
{SORT: { _ID: 1 }} # sort on YYYY-MM-DD
]
logger.info(pipeline)
limited_pipeline = pipeline[:] # copy pipeline
limited_pipeline.append({LIMIT: 15})
list(valid_citations.aggregate(limited_pipeline))
Explanation: This aggregate results in a citations count per date using substrings rather than datetime objects.
End of explanation
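The concatenate-and-pad pipeline above is equivalent to formatting each date as a zero-padded YYYY-MM-DD string, which sorts correctly as text. The pure-Python counterpart:

```python
from datetime import datetime

def date_key(dt):
    """Zero-padded YYYY-MM-DD key, matching the $concat/$substr/$cond pipeline."""
    return f"{dt.year:04d}-{dt.month:02d}-{dt.day:02d}"  # same as dt.strftime("%Y-%m-%d")
```

The conditional padding in the pipeline is what `:02d` does here: without it, "2014-3-5" would sort before "2014-12-1" lexicographically.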
logger.info(min(date[_ID] for date in valid_citations.aggregate(pipeline)))
logger.info(max(date[_ID] for date in valid_citations.aggregate(pipeline)))
Explanation: Using Python to get the max and min.
TODO: Learn how to do it with MongoDB.
max and min dates of the citations data in the citations collection.
End of explanation
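One way to tackle the TODO: a single $group stage with $min/$max accumulators should return both bounds in one pass. The pipeline below is only constructed and inspected here, not run against a database, and the field name "Date Occurred" is assumed:

```python
# Hypothetical pipeline; field name "Date Occurred" is assumed,
# and this is not executed against MongoDB here.
minmax_pipeline = [
    {"$group": {
        "_id": None,
        "earliest": {"$min": "$Date Occurred"},
        "latest": {"$max": "$Date Occurred"},
    }}
]
# Would be executed as: list(valid_citations.aggregate(minmax_pipeline))
```

Grouping on `_id: None` collapses every document into one group, so the accumulators scan the whole collection.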
from IPython.core.display import display, HTML
display(HTML('<span id="line-count"></span>'))
Explanation: How many empty dates are there?
Using Python to do the counting with a simple find.
I'll trust this simple method as the gold standard against which to compare a more complex MongoDB query.
I wanted a method in which I am confident against which I could compare an aggregate method from MongoDB. Since I am learning the aggregate method I wanted to test my answer. Since I do not have canonical answers yet about this data set I have to make two educated guesses and compare them.
End of explanation
# plain search
# Python is doing the counting
blanks = list(citations.find(
{ # query filter
DATE_OCCURRED: ''
},
))
logger.info(len(blanks))
Explanation: Number of documents removed from the database.
End of explanation
# doing the count with MongoDB
pipeline = [
{ GROUP: {
_ID: {
EQUALS: [ _DATE_OCCURRED, EMPTY_STRING ],
},
COUNT: { SUM: 1 },
}},
{ SORT: { _ID: 1 } },
]
logger.info(pipeline)
counts = list(citations.aggregate(pipeline))
pprint(counts)
Explanation: Using a MongoDB aggregate to do the counting.
End of explanation
all_counts = empties, not_empties = list(chain(*[[count[COUNT] for count in counts
if count[_ID] is criteria]
for criteria in (True, False)]))
pipeline_sum = sum(all_counts)
citations_total = citations.count()
delimiter = ': '
labels = (
"empties count",
"not empties count",
'sum of empties and non-empties',
'total count',
'two methods equal',
)
data = (
empties,
not_empties,
pipeline_sum,
citations_total,
citations_total == pipeline_sum,
)
data = zip(labels, (str(datum) for datum in data))
for datum in data:
logger.info(delimiter.join(datum))
Explanation: Testing my two educated guesses.
End of explanation |
7,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Kernel Learning
By Saurabh Mahindre - <a href="https
Step1: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$. <br/>
Selecting the kernel function
$k()$ and its parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data. Choosing one kernel means to select exactly one such aspect; combining such aspects is therefore often better than selecting just one.
In shogun the CMKL is the base class for MKL. We can do classifications
Step2: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers X-Y grid and is not too easy to classify.
Step3: Generating Kernel weights
Just to help us visualize, let's use two Gaussian kernels (CGaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e. the $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
Step4: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see this by plotting the individual subkernel outputs alongside the MKL classification output. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of calling mkl.apply to generate outputs.
Step5: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
Step6: As we can see the multiple kernel output seems just about right. Kernel 1 gives a sort of overfitting output while kernel 2 seems not so accurate. The kernel weights are hence adjusted to get a refined output. We can have a look at the errors made by these subkernels for more food for thought. Most of the time, the MKL error is lower as it incorporates aspects of both kernels: one of them is strict while the other is lenient, and MKL finds a balance between the two.
Step7: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundary of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
Step8: These are the type of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
Step9: As we can see the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights. The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels with different widths (1,5,7,10).
Step10: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The courses of the kernel weightings reflect the development of the learning problem
Step11: Let's plot five of the examples to get a feel of the dataset.
Step12: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration, but we can see here how MKL is working behind the scenes. What we have is two kernels with significantly different properties. The Gaussian kernel defines a function space that is a lot larger than that of the linear kernel or the polynomial kernel. The Gaussian kernel has a low width, so it will be able to represent more and more complex relationships between the training data. But it requires enough data to train on. The number of training examples here is 1000, which seems a bit low given that there are 10000 examples in total. We hope the polynomial kernel can counter this problem, since it will fit the polynomial using a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
Step13: The misclassified examples are surely pretty tough to predict. As seen from the accuracy, MKL seems to work a shade better in this case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using CMKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
Step14: Now that everything is initialized, let's see MKLOneclass in action by applying it on the test data and on the X-Y grid. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all shogun classes
from shogun import *
Explanation: Multiple Kernel Learning
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a>
This notebook is about multiple kernel learning in shogun. We will see how to construct a combined kernel, determine optimal kernel weights using MKL and use it for different types of classification and novelty detection.
Introduction
Mathematical formulation
Using a Combined kernel
Example: Toy Data
Generating Kernel weights
Binary classification using MKL
MKL for knowledge discovery
Multiclass classification using MKL
One-class classification using MKL
End of explanation
kernel = CombinedKernel()
Explanation: Introduction
<em>Multiple kernel learning</em> (MKL) is about using a combined kernel i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well.
Kernel based methods such as support vector machines (SVMs) employ a so-called kernel function $k(x_{i},x_{j})$ which intuitively computes the similarity between two examples $x_{i}$ and $x_{j}$. <br/>
Selecting the kernel function
$k()$ and its parameters is an important issue in training. Kernels designed by humans usually capture one aspect of data. Choosing one kernel means to select exactly one such aspect; combining such aspects is therefore often better than selecting just one.
In shogun, CMKL is the base class for MKL. We can do classification (binary, one-class, multiclass) and regression.
Mathematical formulation (skip if you just want code examples)
<br/>In an SVM, defined as:
$$f({\bf x})=\text{sign} \left(\sum_{i=0}^{N-1} \alpha_i k({\bf x}, {\bf x_i})+b\right)$$<br/>
where ${\bf x_i}$, $i = 1,\ldots,N$, are labeled training examples ($y_i \in \{\pm 1\}$).
One could make a combination of kernels like:
$${\bf k}(x_i,x_j)=\sum_{k=0}^{K} \beta_k {\bf k_k}(x_i, x_j)$$
where $\beta_k > 0$ and $\sum_{k=0}^{K} \beta_k = 1$
In the multiple kernel learning problem for binary classification one is given $N$ data points ($x_i, y_i$ )
($y_i \in \{\pm 1\}$), where $x_i$ is translated via $K$ mappings $\phi_k(x) \rightarrow R^{D_k}$, $k=1,\ldots,K$, from the input into $K$ feature spaces $(\phi_1(x_i),...,\phi_K(x_i))$, where $D_k$ denotes the dimensionality of the $k$-th feature space.
In MKL $\alpha_i$,$\beta$ and bias are determined by solving the following optimization program. For details see [1].
$$\mbox{min} \hspace{4mm} \gamma-\sum_{i=1}^N\alpha_i$$
$$ \mbox{w.r.t.} \hspace{4mm} \gamma\in R, \alpha\in R^N \nonumber$$
$$\mbox {s.t.} \hspace{4mm} {\bf 0}\leq\alpha\leq{\bf 1}C,\;\;\sum_{i=1}^N \alpha_i y_i=0 \nonumber$$
$$ {\frac{1}{2}\sum_{i,j=1}^N \alpha_i \alpha_j y_i y_j \leq \gamma}, \forall k=1,\ldots,K\nonumber\
$$
Here C is a pre-specified regularization parameter.
Within shogun this optimization problem is solved using semi-infinite programming. For 1-norm MKL one of the two approaches described in [1] is used.
The first approach (also called the wrapper algorithm) wraps around a single kernel SVMs, alternatingly solving for $\alpha$ and $\beta$. It is using a traditional SVM to generate new violated constraints and thus requires a single kernel SVM and any of the SVMs contained in shogun can be used. In the MKL step either a linear program is solved via glpk or cplex or analytically or a newton (for norms>1) step is performed.
The second much faster but also more memory demanding approach performing interleaved optimization, is integrated into the chunking-based SVMlight.
Using a Combined kernel
Shogun provides an easy way to make combination of kernels using the CombinedKernel class, to which we can append any kernel from the many options shogun provides. It is especially useful to combine kernels working on different domains and to combine kernels looking at independent features and requires CombinedFeatures to be used. Similarly the CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object
End of explanation
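The combined kernel from the formulation above, $k = \sum_k \beta_k k_k$, can be sketched in plain NumPy for two Gaussian sub-kernels. The convention $k(x,y) = \exp(-\|x-y\|^2/\text{width})$ is assumed to match shogun's GaussianKernel width parameter:

```python
import numpy as np

def gaussian_kernel(X, Y, width):
    """k(x, y) = exp(-||x - y||^2 / width); width convention assumed from shogun."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / width)

def combined_kernel(X, Y, widths, betas):
    """Weighted sum of sub-kernels: k = sum_k beta_k * k_k."""
    return sum(b * gaussian_kernel(X, Y, w) for b, w in zip(betas, widths))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
K = combined_kernel(X, X, widths=[0.5, 25.0], betas=[0.7, 0.3])
```

Because each Gaussian Gram matrix is symmetric positive semi-definite and the $\beta_k$ are non-negative, the combination is again a valid kernel matrix.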
num=30;
num_components=4
means=zeros((num_components, 2))
means[0]=[-1,1]
means[1]=[2,-1.5]
means[2]=[-1,-3]
means[3]=[2,1]
covs=array([[1.0,0.0],[0.0,1.0]])
gmm=GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.set_coef(array([1.0,0.0,0.0,0.0]))
xntr=array([gmm.sample() for i in range(num)]).T
xnte=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,1.0,0.0,0.0]))
xntr1=array([gmm.sample() for i in range(num)]).T
xnte1=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,0.0,1.0,0.0]))
xptr=array([gmm.sample() for i in range(num)]).T
xpte=array([gmm.sample() for i in range(5000)]).T
gmm.set_coef(array([0.0,0.0,0.0,1.0]))
xptr1=array([gmm.sample() for i in range(num)]).T
xpte1=array([gmm.sample() for i in range(5000)]).T
traindata=concatenate((xntr,xntr1,xptr,xptr1), axis=1)
trainlab=concatenate((-ones(2*num), ones(2*num)))
testdata=concatenate((xnte,xnte1,xpte,xpte1), axis=1)
testlab=concatenate((-ones(10000), ones(10000)))
#convert to shogun features and generate labels for data
feats_train=features(traindata)
labels=BinaryLabels(trainlab)
_=jet()
figure(figsize=(18,5))
subplot(121)
# plot train data
_=scatter(traindata[0,:], traindata[1,:], c=trainlab, s=100)
title('Toy data for classification')
axis('equal')
colors=["blue","blue","red","red"]
# a tool for visualisation
from matplotlib.patches import Ellipse
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
for i in range(num_components):
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs, color=colors[i]))
Explanation: Prediction on toy data
In order to see the prediction capabilities, let us generate some data using the GMM class. The data is sampled by setting means (GMM notebook) such that it sufficiently covers X-Y grid and is not too easy to classify.
End of explanation
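Shogun's GMM class is only used for sampling here; an equivalent sketch using NumPy alone (identity covariances, one active component per batch, mirroring the cell above) would be:

```python
import numpy as np

def sample_component(mean, cov, n, rng):
    """Draw n points from one 2-D Gaussian component; returns shape (2, n)."""
    return rng.multivariate_normal(mean, cov, size=n).T

rng = np.random.default_rng(0)
cov = np.eye(2)
means = [(-1, 1), (2, -1.5), (-1, -3), (2, 1)]       # same means as the cell above
parts = [sample_component(np.array(m), cov, 30, rng) for m in means]
traindata = np.concatenate(parts, axis=1)             # shape (2, 120)
trainlab = np.concatenate((-np.ones(60), np.ones(60)))
```

Setting the mixture coefficient to pick one component at a time, as the shogun code does, is the same as sampling each component's batch separately and concatenating.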
width0=0.5
kernel0=GaussianKernel(feats_train, feats_train, width0)
width1=25
kernel1=GaussianKernel(feats_train, feats_train, width1)
#combine kernels
kernel.append_kernel(kernel0)
kernel.append_kernel(kernel1)
kernel.init(feats_train, feats_train)
mkl = MKLClassification()
#set the norm, weights sum to 1.
mkl.set_mkl_norm(1)
mkl.set_C(1, 1)
mkl.set_kernel(kernel)
mkl.set_labels(labels)
#train to get weights
mkl.train()
w=kernel.get_subkernel_weights()
print(w)
Explanation: Generating Kernel weights
Just to help us visualize, let's use two Gaussian kernels (CGaussianKernel) with considerably different widths. As required in MKL, we need to append them to the Combined kernel. To generate the optimal weights (i.e. the $\beta$s in the above equation), training of MKL is required. This generates the weights as seen in this example.
End of explanation
size=100
x1=linspace(-5, 5, size)
x2=linspace(-5, 5, size)
x, y=meshgrid(x1, x2)
#Generate X-Y grid test data
grid=features(array((ravel(x), ravel(y))))
kernel0t=GaussianKernel(feats_train, grid, width0)
kernel1t=GaussianKernel(feats_train, grid, width1)
kernelt=CombinedKernel()
kernelt.append_kernel(kernel0t)
kernelt.append_kernel(kernel1t)
#initailize with test grid
kernelt.init(feats_train, grid)
mkl.set_kernel(kernelt)
#prediction
grid_out=mkl.apply()
z=grid_out.get_values().reshape((size, size))
figure(figsize=(10,5))
title("Classification using MKL")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black')
_=colorbar(c)
Explanation: Binary classification using MKL
Now with the data ready and training done, we can do the binary classification. The weights generated can be intuitively understood. We will see this by plotting the individual subkernel outputs alongside the MKL classification output. To apply on test features, we need to reinitialize the kernel with kernel.init and pass the test features. After that it's just a matter of calling mkl.apply to generate outputs.
End of explanation
z=grid_out.get_labels().reshape((size, size))
# MKL
figure(figsize=(20,5))
subplot(131, title="Multiple Kernels combined")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black')
_=colorbar(c)
comb_ker0=CombinedKernel()
comb_ker0.append_kernel(kernel0)
comb_ker0.init(feats_train, feats_train)
mkl.set_kernel(comb_ker0)
mkl.train()
comb_ker0t=CombinedKernel()
comb_ker0t.append_kernel(kernel0)
comb_ker0t.init(feats_train, grid)
mkl.set_kernel(comb_ker0t)
out0=mkl.apply()
# subkernel 1
z=out0.get_labels().reshape((size, size))
subplot(132, title="Kernel 1")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black')
_=colorbar(c)
comb_ker1=CombinedKernel()
comb_ker1.append_kernel(kernel1)
comb_ker1.init(feats_train, feats_train)
mkl.set_kernel(comb_ker1)
mkl.train()
comb_ker1t=CombinedKernel()
comb_ker1t.append_kernel(kernel1)
comb_ker1t.init(feats_train, grid)
mkl.set_kernel(comb_ker1t)
out1=mkl.apply()
# subkernel 2
z=out1.get_labels().reshape((size, size))
subplot(133, title="kernel 2")
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black')
_=colorbar(c)
Explanation: To justify the weights, let's train and compare two subkernels with the MKL classification output. Training MKL classifier with a single kernel appended to a combined kernel makes no sense and is just like normal single kernel based classification, but let's do it for comparison.
End of explanation
kernelt.init(feats_train, features(testdata))
mkl.set_kernel(kernelt)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :MKL" % (100*evaluator.evaluate(out,BinaryLabels(testlab))))
comb_ker0t.init(feats_train,features(testdata))
mkl.set_kernel(comb_ker0t)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :Subkernel1"% (100*evaluator.evaluate(out,BinaryLabels(testlab))))
comb_ker1t.init(feats_train, features(testdata))
mkl.set_kernel(comb_ker1t)
out=mkl.apply()
evaluator=ErrorRateMeasure()
print("Test error is %2.2f%% :subkernel2" % (100*evaluator.evaluate(out,BinaryLabels(testlab))))
Explanation: As we can see the multiple kernel output seems just about right. Kernel 1 gives a sort of overfitting output while kernel 2 seems not so accurate. The kernel weights are hence adjusted to get a refined output. We can have a look at the errors made by these subkernels for more food for thought. Most of the time, the MKL error is lower as it incorporates aspects of both kernels: one of them is strict while the other is lenient, and MKL finds a balance between the two.
End of explanation
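ErrorRateMeasure reports the fraction of mismatched labels; the NumPy equivalent is a one-liner, shown here to make the comparison above explicit:

```python
import numpy as np

def error_rate(predicted, true):
    """Fraction of misclassified examples, as ErrorRateMeasure reports."""
    predicted = np.asarray(predicted)
    true = np.asarray(true)
    return np.mean(predicted != true)
```

Multiplying by 100 gives the percentage printed in the cell above.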
def circle(x, radius, neg):
y=sqrt(square(radius)-square(x))
if neg:
return[x, -y]
else:
return [x,y]
def get_circle(radius):
neg=False
range0=linspace(-radius,radius,100)
pos_a=array([circle(i, radius, neg) for i in range0]).T
neg=True
neg_a=array([circle(i, radius, neg) for i in range0]).T
c=concatenate((neg_a,pos_a), axis=1)
return c
def get_data(r1, r2):
c1=get_circle(r1)
c2=get_circle(r2)
c=concatenate((c1, c2), axis=1)
feats_tr=features(c)
return c, feats_tr
l=concatenate((-ones(200),ones(200)))
lab=BinaryLabels(l)
#get two circles with radius 2 and 4
c, feats_tr=get_data(2,4)
c1, feats_tr1=get_data(2,3)
_=gray()
figure(figsize=(10,5))
subplot(121)
title("Circles with different separation")
p=scatter(c[0,:], c[1,:], c=lab)
subplot(122)
q=scatter(c1[0,:], c1[1,:], c=lab)
Explanation: MKL for knowledge discovery
MKL can recover information about the problem at hand. Let us see this with a binary classification problem. The task is to separate two concentric classes shaped like circles. By varying the distance between the boundary of the circles we can control the separability of the problem. Starting with an almost non-separable scenario, the data quickly becomes separable as the distance between the circles increases.
End of explanation
def train_mkl(circles, feats_tr):
#Four kernels with different widths
kernel0=GaussianKernel(feats_tr, feats_tr, 1)
kernel1=GaussianKernel(feats_tr, feats_tr, 5)
kernel2=GaussianKernel(feats_tr, feats_tr, 7)
kernel3=GaussianKernel(feats_tr, feats_tr, 10)
kernel = CombinedKernel()
kernel.append_kernel(kernel0)
kernel.append_kernel(kernel1)
kernel.append_kernel(kernel2)
kernel.append_kernel(kernel3)
kernel.init(feats_tr, feats_tr)
mkl = MKLClassification()
mkl.set_mkl_norm(1)
mkl.set_C(1, 1)
mkl.set_kernel(kernel)
mkl.set_labels(lab)
mkl.train()
w=kernel.get_subkernel_weights()
return w, mkl
def test_mkl(mkl, grid):
kernel0t=GaussianKernel(feats_tr, grid, 1)
kernel1t=GaussianKernel(feats_tr, grid, 5)
kernel2t=GaussianKernel(feats_tr, grid, 7)
kernel3t=GaussianKernel(feats_tr, grid, 10)
kernelt = CombinedKernel()
kernelt.append_kernel(kernel0t)
kernelt.append_kernel(kernel1t)
kernelt.append_kernel(kernel2t)
kernelt.append_kernel(kernel3t)
kernelt.init(feats_tr, grid)
mkl.set_kernel(kernelt)
out=mkl.apply()
return out
size=50
x1=linspace(-10, 10, size)
x2=linspace(-10, 10, size)
x, y=meshgrid(x1, x2)
grid=features(array((ravel(x), ravel(y))))
w, mkl=train_mkl(c, feats_tr)
print(w)
out=test_mkl(mkl,grid)
z=out.get_values().reshape((size, size))
figure(figsize=(5,5))
c=pcolor(x, y, z)
_=contour(x, y, z, linewidths=1, colors='black')
title('classification with constant separation')
_=colorbar(c)
Explanation: These are the type of circles we want to distinguish between. We can try classification with a constant separation between the circles first.
End of explanation
range1=linspace(5.5,7.5,50)
x=linspace(1.5,3.5,50)
temp=[]
for i in range1:
#vary separation between circles
c, feats=get_data(4,i)
w, mkl=train_mkl(c, feats)
temp.append(w)
y=array([temp[i] for i in range(0,50)]).T
figure(figsize=(20,5))
_=plot(x, y[0,:], color='k', linewidth=2)
_=plot(x, y[1,:], color='r', linewidth=2)
_=plot(x, y[2,:], color='g', linewidth=2)
_=plot(x, y[3,:], color='y', linewidth=2)
title("Comparison between kernel widths and weights")
ylabel("Weight")
xlabel("Distance between circles")
_=legend(["1","5","7","10"])
Explanation: As we can see the MKL classifier classifies them as expected. Now let's vary the separation and see how it affects the weights. The choice of the kernel width of the Gaussian kernel used for classification is expected to depend on the separation distance of the learning problem. An increased distance between the circles will correspond to a larger optimal kernel width. This effect should be visible in the results of the MKL, where we used MKL-SVMs with four kernels with different widths (1,5,7,10).
End of explanation
from scipy.io import loadmat, savemat
from os import path, sep
mat = loadmat(sep.join(['..','..','..','data','multiclass', 'usps.mat']))
Xall = mat['data']
Yall = array(mat['label'].squeeze(), dtype=double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
#get first 1000 examples
Xtrain = Xall[:, subset[:1000]]
Ytrain = Yall[subset[:1000]]
Nsplit = 2
all_ks = range(1, 21)
print(Xall.shape)
print(Xtrain.shape)
Explanation: In the above plot we see the kernel weightings obtained for the four kernels. Every line shows one weighting. The courses of the kernel weightings reflect the development of the learning problem: as long as the problem is difficult the best separation can be obtained when using the kernel with smallest width. The low width kernel loses importance when the distance between the circles increases and larger kernel widths obtain a larger weight in MKL. Increasing the distance between the circles, kernels with greater widths are used.
Multiclass classification using MKL
MKL can be used for multiclass classification using the MKLMulticlass class. It is based on the GMNPSVM Multiclass SVM. Its termination criterion is set by set_mkl_epsilon(float64_t eps ) and the maximal number of MKL iterations is set by set_max_num_mkliters(int32_t maxnum). The epsilon termination criterion is the L2 norm between the current MKL weights and their counterpart from the previous iteration. We set it to 0.001 as we want pretty accurate weights.
To see this in action let us compare it to the normal GMNPSVM example as in the KNN notebook, just to see how MKL fares in object recognition. We use the USPS digit recognition dataset.
End of explanation
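The mkl_epsilon stopping rule described above — the L2 norm between successive weight vectors — can be sketched in a few lines. The weight vectors below are hypothetical, purely for illustration; this is not Shogun's internal code:

```python
import numpy as np

def mkl_converged(w_new, w_old, mkl_eps=0.001):
    # Converged when the L2 norm between successive kernel-weight
    # vectors drops below mkl_epsilon.
    return float(np.linalg.norm(np.asarray(w_new) - np.asarray(w_old))) < mkl_eps

print(mkl_converged([0.7, 0.3], [0.69995, 0.30005]))  # tiny change -> converged
print(mkl_converged([0.7, 0.3], [0.5, 0.5]))          # weights still moving
```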
def plot_example(dat, lab):
for i in range(5):
ax=subplot(1,5,i+1)
title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=figure(figsize=(17,6))
gray()
plot_example(Xtrain, Ytrain)
Explanation: Let's plot five of the examples to get a feel of the dataset.
End of explanation
# MKL training and output
labels = MulticlassLabels(Ytrain)
feats = features(Xtrain)
#get test data from 5500 onwards
Xrem=Xall[:,subset[5500:]]
Yrem=Yall[subset[5500:]]
#test features not used in training
feats_rem=features(Xrem)
labels_rem=MulticlassLabels(Yrem)
kernel = CombinedKernel()
feats_train = CombinedFeatures()
feats_test = CombinedFeatures()
#append gaussian kernel
subkernel = GaussianKernel(10,15)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_rem)
kernel.append_kernel(subkernel)
#append PolyKernel
feats = features(Xtrain)
subkernel = PolyKernel(10,2)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_rem)
kernel.append_kernel(subkernel)
kernel.init(feats_train, feats_train)
mkl = MKLMulticlass(1.2, kernel, labels)
mkl.set_epsilon(1e-2)
mkl.set_mkl_epsilon(0.001)
mkl.set_mkl_norm(1)
mkl.train()
#initialize with test features
kernel.init(feats_train, feats_test)
out = mkl.apply()
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
w=kernel.get_subkernel_weights()
print(w)
# Single kernel:PolyKernel
C=1
pk=PolyKernel(10,2)
svm=GMNPSVM(C, pk, labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
#Single Kernel:Gaussian kernel
width=15
C=1
gk=GaussianKernel()
gk.set_width(width)
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print("Accuracy = %2.2f%%" % (100*accuracy))
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=figure(figsize=(17,6))
gray()
plot_example(Xbad, Ybad)
Explanation: We combine a Gaussian kernel and a PolyKernel. To test, examples not included in training data are used.
This is just a demonstration, but we can see here how MKL works behind the scenes. What we have is two kernels with significantly different properties. The Gaussian kernel defines a much larger function space than the linear or polynomial kernel. Because it has a low width, it can represent increasingly complex relationships between the training data, but it requires enough data to train on. The number of training examples here is 1000, which is fairly small given the 10000 total examples. We hope the polynomial kernel can counter this problem, since it can fit a polynomial with a lot less data than the squared exponential. The kernel weights are printed below to add some insight.
End of explanation
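The CombinedKernel used above effectively forms a weighted sum of the subkernel Gram matrices. A minimal numpy sketch on toy data — the weights here are illustrative, not the ones Shogun learns, and the degree-2 polynomial is assumed to be the inhomogeneous form:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3)                       # 5 toy examples, 3 features

# Subkernel Gram matrices on the same data
sq = np.sum(X ** 2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
K_gauss = np.exp(-d2 / 15.0)              # Gaussian kernel, width 15
K_poly = (X @ X.T + 1.0) ** 2             # polynomial kernel, degree 2

w = np.array([0.8, 0.2])                  # illustrative MKL weights
K_combined = w[0] * K_gauss + w[1] * K_poly
print(K_combined.shape)                   # (5, 5)
```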
X = -0.3 * random.randn(100,2)
traindata=r_[X + 2, X - 2].T
X = -0.3 * random.randn(20, 2)
testdata = r_[X + 2, X - 2].T
trainlab=concatenate((ones(199),-ones(1)))  # one label per training point (traindata has 200 columns)
#convert to shogun features and generate labels for data
feats=features(traindata)
labels=BinaryLabels(trainlab)
xx, yy = meshgrid(linspace(-5, 5, 500), linspace(-5, 5, 500))
grid=features(array((ravel(xx), ravel(yy))))
#test features
feats_t=features(testdata)
x_out=(random.uniform(low=-4, high=4, size=(20, 2))).T
feats_out=features(x_out)
kernel=CombinedKernel()
feats_train=CombinedFeatures()
feats_test=CombinedFeatures()
feats_test_out=CombinedFeatures()
feats_grid=CombinedFeatures()
#append gaussian kernel
subkernel=GaussianKernel(10,8)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_t)
feats_test_out.append_feature_obj(feats_out)
feats_grid.append_feature_obj(grid)
kernel.append_kernel(subkernel)
#append PolyKernel
feats = features(traindata)
subkernel = PolyKernel(10,3)
feats_train.append_feature_obj(feats)
feats_test.append_feature_obj(feats_t)
feats_test_out.append_feature_obj(feats_out)
feats_grid.append_feature_obj(grid)
kernel.append_kernel(subkernel)
kernel.init(feats_train, feats_train)
mkl = MKLOneClass()
mkl.set_kernel(kernel)
mkl.set_labels(labels)
mkl.set_interleaved_optimization_enabled(False)
mkl.set_epsilon(1e-2)
mkl.put('mkl_epsilon', 0.1)
mkl.set_mkl_norm(1)
Explanation: The misclassified examples are surely pretty tough to predict. As seen from the accuracy, MKL seems to work a shade better in this case. One could try this out with more and different types of kernels too.
One-class classification using MKL
One-class classification can be done using MKL in shogun. This is demonstrated in the following simple example using CMKLOneClass. We will see how abnormal data is detected. This is also known as novelty detection. Below we generate some toy data and initialize combined kernels and features.
End of explanation
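The novelty-detection idea can be illustrated without Shogun: score each test point by its mean Gaussian-kernel similarity to the training set, so points far from both clusters get low scores. This is a crude stand-in for intuition only, not the one-class SVM algorithm, and the data below is a toy reconstruction of the two clusters:

```python
import numpy as np

rng = np.random.RandomState(0)
train = np.r_[0.3 * rng.randn(100, 2) + 2, 0.3 * rng.randn(100, 2) - 2]

def novelty_score(x, train, width=8.0):
    # Mean Gaussian-kernel similarity to the training set:
    # high near the training clusters, low far away from them.
    d2 = np.sum((train - np.asarray(x)) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / width)))

print(novelty_score([2.1, 1.9], train))   # near a training cluster -> high
print(novelty_score([0.0, 4.0], train))   # far from both clusters -> low
```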
mkl.train()
print("Weights:")
w=kernel.get_subkernel_weights()
print(w)
#initialize with test features
kernel.init(feats_train, feats_test)
normal_out = mkl.apply()
#test on abnormally generated data
kernel.init(feats_train, feats_test_out)
abnormal_out = mkl.apply()
#test on X-Y grid
kernel.init(feats_train, feats_grid)
grid_out=mkl.apply()
z=grid_out.get_values().reshape((500,500))
z_lab=grid_out.get_labels().reshape((500,500))
a=abnormal_out.get_labels()
n=normal_out.get_labels()
#check for normal and abnormal classified data
idx=where(normal_out.get_labels() != 1)[0]
abnormal=testdata[:,idx]
idx=where(normal_out.get_labels() == 1)[0]
normal=testdata[:,idx]
figure(figsize=(15,6))
pl =subplot(121)
title("One-class classification using MKL")
_=pink()
c=pcolor(xx, yy, z)
_=contour(xx, yy, z_lab, linewidths=1, colors='black')
_=colorbar(c)
p1=pl.scatter(traindata[0, :], traindata[1,:], cmap=gray(), s=100)
p2=pl.scatter(normal[0,:], normal[1,:], c="red", s=100)
p3=pl.scatter(abnormal[0,:], abnormal[1,:], c="blue", s=100)
p4=pl.scatter(x_out[0,:], x_out[1,:], c=a, cmap=jet(), s=100)
_=pl.legend((p1, p2, p3), ["Training samples", "normal samples", "abnormal samples"], loc=2)
subplot(122)
c=pcolor(xx, yy, z)
title("One-class classification output")
_=gray()
_=contour(xx, yy, z, linewidths=1, colors='black')
_=colorbar(c)
Explanation: Now that everything is initialized, let's see MKLOneclass in action by applying it on the test data and on the X-Y grid.
End of explanation |
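The grid evaluation above follows a common pattern: flatten a meshgrid into per-point features, score every point, then reshape the scores back to the grid shape for pcolor/contour. A small self-contained version of that reshape step, with a toy radial score standing in for the MKL output:

```python
import numpy as np

xx, yy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
grid = np.c_[xx.ravel(), yy.ravel()]        # (2500, 2) points, one row per pixel

# Any per-point score works here; a toy radial score stands in for mkl.apply()
scores = np.exp(-np.sum(grid ** 2, axis=1) / 8.0)

z = scores.reshape(xx.shape)                # back to (50, 50) for pcolor/contour
print(z.shape)
```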
7,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1g', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: GISS-E2-1G
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
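The cells above all follow the same two-call pattern: `DOC.set_id(...)` selects a property, then `DOC.set_value(...)` records its value, which for ENUM properties must be one of the listed Valid Choices. The sketch below illustrates that pattern with a hypothetical `MockDoc` stand-in; it is not the real pyesdoc `DOC` object, and the validation logic is an assumption made only to show how ENUM choices could be checked.

```python
# Hypothetical stand-in for the notebook's pyesdoc DOC object, sketched
# only to illustrate the set_id / set_value pattern used in these cells.
class MockDoc:
    def __init__(self):
        self.values = {}       # property id -> list of recorded values
        self.current_id = None
        # Assumed validation table, mirroring one ENUM cell above.
        self.valid_choices = {
            'cmip6.land.soil.hydrology.method': {
                'Bucket', 'Force-restore', 'Choisnel', 'Explicit diffusion',
            },
        }

    def set_id(self, property_id):
        """Select the property that subsequent set_value calls target."""
        self.current_id = property_id

    def set_value(self, value):
        """Record a value, rejecting invalid ENUM choices."""
        choices = self.valid_choices.get(self.current_id)
        if choices is not None and value not in choices:
            raise ValueError(
                f'{value!r} is not a valid choice for {self.current_id}')
        self.values.setdefault(self.current_id, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.land.soil.hydrology.method')
DOC.set_value('Explicit diffusion')
```

Properties with cardinality `1.N` or `0.N` (marked "PROPERTY VALUE(S)" above) can record several values by repeating the `set_value` call under the same `set_id`.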
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
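Note the quoting convention in the cell comments: INTEGER and BOOLEAN properties are set with an unquoted `DOC.set_value(value)`, while STRING and ENUM properties use a quoted `DOC.set_value("value")`. A minimal sketch of that typing rule, using a hypothetical checker that is not part of the real pyesdoc API:

```python
# Hypothetical helper illustrating the quoting convention in the cell
# comments: INTEGER/BOOLEAN properties take Python ints/bools, while
# STRING/ENUM properties take str values.
EXPECTED_TYPES = {
    'INTEGER': int,
    'BOOLEAN': bool,
    'STRING': str,
    'ENUM': str,
}

def check_value_type(property_type, value):
    """Return True if `value` matches the declared property type."""
    expected = EXPECTED_TYPES[property_type]
    # bool is a subclass of int in Python, so reject bools
    # when an INTEGER (e.g. number_of_snow_layers) is expected.
    if property_type == 'INTEGER' and isinstance(value, bool):
        return False
    return isinstance(value, expected)
```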
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
7,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy 2016 Scikit-learn Tutorial
Unsupervised Learning Part 1 -- Transformation
Many instances of unsupervised learning, such as dimensionality reduction, manifold learning, and feature extraction, find a new representation of the input data without any additional input. (In contrast to supervised learning, unsupervised algorithms don't require or consider target variables as in the previous classification and regression examples).
<img src="figures/unsupervised_workflow.svg" width="100%">
A very basic example is the rescaling of our data, which is a requirement for many machine learning algorithms as they are not scale-invariant -- rescaling falls into the category of data pre-processing and can barely be called learning. There exist many different rescaling techniques, and in the following example, we will take a look at a particular method that is commonly called "standardization." Here, we will rescale the data so that each feature is centered at zero (mean = 0) with unit variance (standard deviation = 1).
For example, if we have a 1D dataset with the values [1, 2, 3, 4, 5], the standardized values are
1 -> -1.41
2 -> -0.71
3 -> 0.0
4 -> 0.71
5 -> 1.41
computed via the equation $x_{standardized} = \frac{x - \mu_x}{\sigma_x}$,
where $\mu$ is the sample mean, and $\sigma$ the standard deviation, respectively.
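The worked example above can be checked directly with NumPy; a minimal sketch (the values match the standardized list to two decimals):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Standardize: subtract the sample mean, divide by the standard deviation.
x_std = (x - x.mean()) / x.std()
print(x_std)  # approximately [-1.414, -0.707, 0.0, 0.707, 1.414]
```

After this transformation the array has mean 0 and standard deviation 1, which is exactly what the equation promises.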
Step1: Although standardization is one of the most basic preprocessing procedures -- as we've seen in the code snippet above -- scikit-learn implements a StandardScaler class for this computation. And in later sections, we will see why and when the scikit-learn interface comes in handy over the code snippet we executed above.
Applying such a preprocessing has a very similar interface to the supervised learning algorithms we saw so far.
To get some more practice with scikit-learn's "Transformer" interface, let's start by loading the iris dataset and rescale it
Step2: The iris dataset is not "centered"; that is, it has a non-zero mean and the standard deviation differs for each component
Step3: To use a preprocessing method, we first import the estimator, here StandardScaler and instantiate it
Step4: As with the classification and regression algorithms, we call fit to learn the model from the data. As this is an unsupervised model, we only pass X, not y. This simply estimates mean and standard deviation.
Step5: Now we can rescale our data by applying the transform (not predict) method
Step6: X_train_scaled has the same number of samples and features, but the mean was subtracted and all features were scaled to have unit standard deviation
Step7: To summarize
Step8: It is important for the training and test data to be transformed in exactly the same way, for the following processing steps to make sense of the data, as is illustrated in the figure below
Step9: There are several common ways to scale the data. The most common one is the StandardScaler we just introduced, but rescaling the data to a fixed minimum and maximum value with MinMaxScaler (usually between 0 and 1), or using more robust statistics like the median and quantiles, instead of the mean and standard deviation (with RobustScaler), are also useful.
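As a minimal sketch of the min-max idea -- the formula behind MinMaxScaler, written out in plain NumPy rather than scikit-learn:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Min-max scaling: map the observed range [min, max] onto [0, 1].
x_minmax = (x - x.min()) / (x.max() - x.min())
print(x_minmax)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Note that a single extreme outlier stretches the denominator and squashes all the remaining values toward each other, which is why the median/quantile statistics used by RobustScaler can be preferable on data with outliers.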
Step10: Principal Component Analysis
An unsupervised transformation that is somewhat more interesting is Principal Component Analysis (PCA).
It is a technique to reduce the dimensionality of the data, by creating a linear projection.
That is, we find new features to represent the data that are a linear combination of the old data (i.e. we rotate it). Thus, we can think of PCA as a projection of our data onto a new feature space.
The way PCA finds these new directions is by looking for the directions of maximum variance.
Usually only a few components that explain most of the variance in the data are kept. Here, the premise is to reduce the size (dimensionality) of a dataset while capturing most of its information. There are many reasons why dimensionality reduction can be useful
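To make the "directions of maximum variance" idea concrete, here is a minimal NumPy sketch of the underlying computation -- an eigendecomposition of the data's covariance matrix (scikit-learn's PCA class arrives at the same components via an SVD); the data below is hypothetical:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 2)) @ np.array([[2.0, 1.0], [0.0, 0.5]])  # correlated 2D data

Xc = X - X.mean(axis=0)                        # 1. center the data
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(eigvals)[::-1]              # 2. sort directions by variance, largest first
components = eigvecs[:, order]
X_pca = Xc @ components                        # 3. project onto the principal components

# After this rotation the new features are uncorrelated, and most of the
# variance is concentrated in the first component.
print(np.cov(X_pca.T).round(2))
```

The covariance matrix of the projected data is (numerically) diagonal, with the variances sorted in decreasing order -- exactly the property PCA is built to produce.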
Step11: Now let's go through all the steps in more detail
Step12: As always, we instantiate our PCA model. By default all directions are kept.
Step13: Then we fit the PCA model with our data. As PCA is an unsupervised algorithm, there is no output y.
Step14: Then we can transform the data, projected on the principal components
Step15: On the left of the plot you can see the four points that were on the top right before. PCA found the first component to be along the diagonal, and the second to be perpendicular to it. As PCA finds a rotation, the principal components are always at right angles ("orthogonal") to each other.
Dimensionality Reduction for Visualization with PCA
Consider the digits dataset. It cannot be visualized in a single 2D plot, as it has 64 features. We are going to extract 2 dimensions to visualize it in, using the example from the sklearn examples here
Step16: Note that this projection was determined without any information about the
labels (represented by the colors) | Python Code:
ary = np.array([1, 2, 3, 4, 5])
ary_standardized = (ary - ary.mean()) / ary.std()
ary_standardized
Explanation: SciPy 2016 Scikit-learn Tutorial
Unsupervised Learning Part 1 -- Transformation
Many instances of unsupervised learning, such as dimensionality reduction, manifold learning, and feature extraction, find a new representation of the input data without any additional input. (In contrast to supervised learning, unsupervised algorithms don't require or consider target variables as in the previous classification and regression examples).
<img src="figures/unsupervised_workflow.svg" width="100%">
A very basic example is the rescaling of our data, which is a requirement for many machine learning algorithms as they are not scale-invariant -- rescaling falls into the category of data pre-processing and can barely be called learning. There exist many different rescaling techniques, and in the following example, we will take a look at a particular method that is commonly called "standardization." Here, we will rescale the data so that each feature is centered at zero (mean = 0) with unit variance (standard deviation = 1).
For example, if we have a 1D dataset with the values [1, 2, 3, 4, 5], the standardized values are
1 -> -1.41
2 -> -0.71
3 -> 0.0
4 -> 0.71
5 -> 1.41
computed via the equation $x_{standardized} = \frac{x - \mu_x}{\sigma_x}$,
where $\mu$ is the sample mean, and $\sigma$ the standard deviation, respectively.
End of explanation
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
print(X_train.shape)
Explanation: Although standardization is one of the most basic preprocessing procedures -- as we've seen in the code snippet above -- scikit-learn implements a StandardScaler class for this computation. And in later sections, we will see why and when the scikit-learn interface comes in handy over the code snippet we executed above.
Applying such a preprocessing has a very similar interface to the supervised learning algorithms we saw so far.
To get some more practice with scikit-learn's "Transformer" interface, let's start by loading the iris dataset and rescale it:
End of explanation
print("mean : %s " % X_train.mean(axis=0))
print("standard deviation : %s " % X_train.std(axis=0))
Explanation: The iris dataset is not "centered"; that is, it has a non-zero mean and the standard deviation differs for each component:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
Explanation: To use a preprocessing method, we first import the estimator, here StandardScaler and instantiate it:
End of explanation
scaler.fit(X_train)
Explanation: As with the classification and regression algorithms, we call fit to learn the model from the data. As this is an unsupervised model, we only pass X, not y. This simply estimates mean and standard deviation.
End of explanation
X_train_scaled = scaler.transform(X_train)
Explanation: Now we can rescale our data by applying the transform (not predict) method:
End of explanation
print(X_train_scaled.shape)
print("mean : %s " % X_train_scaled.mean(axis=0))
print("standard deviation : %s " % X_train_scaled.std(axis=0))
Explanation: X_train_scaled has the same number of samples and features, but the mean was subtracted and all features were scaled to have unit standard deviation:
End of explanation
X_test_scaled = scaler.transform(X_test)
print("mean test data: %s" % X_test_scaled.mean(axis=0))
Explanation: To summarize: Via the fit method, the estimator is fitted to the data we provide. In this step, the estimator estimates the parameters from the data (here: mean and standard deviation). Then, if we transform data, these parameters are used to transform a dataset. (Please note that the transform method does not update these parameters).
It's important to note that the same transformation is applied to the training and the test set. That has the consequence that usually the mean of the test data is not zero after scaling:
End of explanation
from figures import plot_relative_scaling
plot_relative_scaling()
Explanation: It is important for the training and test data to be transformed in exactly the same way, for the following processing steps to make sense of the data, as is illustrated in the figure below:
End of explanation
from figures import plot_scaling
plot_scaling()
Explanation: There are several common ways to scale the data. The most common one is the StandardScaler we just introduced, but rescaling the data to a fixed minimum and maximum value with MinMaxScaler (usually between 0 and 1), or using more robust statistics like the median and quantiles, instead of the mean and standard deviation (with RobustScaler), are also useful.
End of explanation
from figures import plot_pca_illustration
plot_pca_illustration()
Explanation: Principal Component Analysis
An unsupervised transformation that is somewhat more interesting is Principal Component Analysis (PCA).
It is a technique to reduce the dimensionality of the data, by creating a linear projection.
That is, we find new features to represent the data that are a linear combination of the old data (i.e. we rotate it). Thus, we can think of PCA as a projection of our data onto a new feature space.
The way PCA finds these new directions is by looking for the directions of maximum variance.
Usually only a few components that explain most of the variance in the data are kept. Here, the premise is to reduce the size (dimensionality) of a dataset while capturing most of its information. There are many reasons why dimensionality reduction can be useful: It can reduce the computational cost when running learning algorithms, decrease the storage space, and may help with the so-called "curse of dimensionality," which we will discuss in greater detail later.
To illustrate what a rotation might look like, we first show it on two-dimensional data and keep both principal components. Here is an illustration:
End of explanation
rnd = np.random.RandomState(5)
X_ = rnd.normal(size=(300, 2))
X_blob = np.dot(X_, rnd.normal(size=(2, 2))) + rnd.normal(size=2)
y = X_[:, 0] > 0
plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("feature 1")
plt.ylabel("feature 2");
Explanation: Now let's go through all the steps in more detail:
We create a Gaussian blob that is rotated:
End of explanation
from sklearn.decomposition import PCA
pca = PCA()
Explanation: As always, we instantiate our PCA model. By default all directions are kept.
End of explanation
pca.fit(X_blob)
Explanation: Then we fit the PCA model with our data. As PCA is an unsupervised algorithm, there is no output y.
End of explanation
X_pca = pca.transform(X_blob)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30)
plt.xlabel("first principal component")
plt.ylabel("second principal component");
Explanation: Then we can transform the data, projected on the principal components:
End of explanation
from figures import digits_plot
digits_plot()
Explanation: On the left of the plot you can see the four points that were on the top right before. PCA found the first component to be along the diagonal, and the second to be perpendicular to it. As PCA finds a rotation, the principal components are always at right angles ("orthogonal") to each other.
Dimensionality Reduction for Visualization with PCA
Consider the digits dataset. It cannot be visualized in a single 2D plot, as it has 64 features. We are going to extract 2 dimensions to visualize it in, using the example from the sklearn examples here
End of explanation
# %load solutions/07A_iris-pca.py
Explanation: Note that this projection was determined without any information about the
labels (represented by the colors): this is the sense in which the learning
is unsupervised. Nevertheless, we see that the projection gives us insight
into the distribution of the different digits in parameter space.
Exercises
Visualize the iris dataset using the first two principal components, and compare this visualization to using two of the original features.
End of explanation |
7,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery basics
BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime. This page shows you how to get started with the Google BigQuery API using the Python client library.
Import the libraries used in this tutorial
Step1: Initialize a client
To use the BigQuery Python client library, start by initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
Client project
The bigquery.Client object uses your default project. Alternatively, you can specify a project in the Client constructor. For more information about how the default project is determined, see the google-auth documentation.
Client location
Locations are required for certain BigQuery operations such as creating a dataset. If a location is provided to the client when it is initialized, it will be the default location for jobs, datasets, and tables.
Run the following to create a client with your default project
Step2: To explicitly specify a project when constructing the client, set the project parameter
Step4: Run a query on a public dataset
The following example queries the BigQuery usa_names public dataset to find the 10 most popular names. usa_names is a Social Security Administration dataset that contains all names from Social Security card applications for births that occurred in the United States after 1879.
Use the Client.query method to run the query, and the QueryJob.to_dataframe method to return the results as a pandas DataFrame.
Step6: Run a parameterized query
BigQuery supports query parameters to help prevent SQL injection when you construct a query with user input. Query parameters are only available with standard SQL syntax. Query parameters can be used as substitutes for arbitrary expressions. Parameters cannot be used as substitutes for identifiers, column names, table names, or other parts of the query.
To specify a parameter, use the @ character followed by an identifier, such as @param_name. For example, the following query finds all the words in a specific Shakespeare corpus with counts that are at least the specified value.
For more information, see Running parameterized queries in the BigQuery documentation.
Step7: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
Step9: Write query results to a destination table
For more information, see Writing query results in the BigQuery documentation.
Step10: Load data from a pandas DataFrame to a new table
Step11: Load data from a local file to a table
The following example demonstrates how to load a local CSV file into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Loading Data into BigQuery from a local data source in the BigQuery documentation.
Step12: Load data from Cloud Storage to a table
The following example demonstrates how to load a CSV file from Cloud Storage into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
Step13: Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset. | Python Code:
import pandas
from google.cloud import bigquery
Explanation: BigQuery basics
BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime. This page shows you how to get started with the Google BigQuery API using the Python client library.
Import the libraries used in this tutorial
End of explanation
client = bigquery.Client(location="US")
print("Client creating using default project: {}".format(client.project))
Explanation: Initialize a client
To use the BigQuery Python client library, start by initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API.
Client project
The bigquery.Client object uses your default project. Alternatively, you can specify a project in the Client constructor. For more information about how the default project is determined, see the google-auth documentation.
Client location
Locations are required for certain BigQuery operations such as creating a dataset. If a location is provided to the client when it is initialized, it will be the default location for jobs, datasets, and tables.
Run the following to create a client with your default project:
End of explanation
# client = bigquery.Client(location="US", project="your-project-id")
Explanation: To explicitly specify a project when constructing the client, set the project parameter:
End of explanation
query =
SELECT name, SUM(number) as total
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY total DESC
LIMIT 10
query_job = client.query(
query,
# Location must match that of the dataset(s) referenced in the query.
location="US",
) # API request - starts the query
df = query_job.to_dataframe()
df
Explanation: Run a query on a public dataset
The following example queries the BigQuery usa_names public dataset to find the 10 most popular names. usa_names is a Social Security Administration dataset that contains all names from Social Security card applications for births that occurred in the United States after 1879.
Use the Client.query method to run the query, and the QueryJob.to_dataframe method to return the results as a pandas DataFrame.
End of explanation
# Define the query
sql =
SELECT word, word_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE corpus = @corpus
AND word_count >= @min_word_count
ORDER BY word_count DESC;
# Define the parameter values in a query job configuration
job_config = bigquery.QueryJobConfig(
query_parameters=[
bigquery.ScalarQueryParameter("corpus", "STRING", "romeoandjuliet"),
bigquery.ScalarQueryParameter("min_word_count", "INT64", 250),
]
)
# Start the query job
query_job = client.query(sql, location="US", job_config=job_config)
# Return the results as a pandas DataFrame
query_job.to_dataframe()
Explanation: Run a parameterized query
BigQuery supports query parameters to help prevent SQL injection when you construct a query with user input. Query parameters are only available with standard SQL syntax. Query parameters can be used as substitutes for arbitrary expressions. Parameters cannot be used as substitutes for identifiers, column names, table names, or other parts of the query.
To specify a parameter, use the @ character followed by an identifier, such as @param_name. For example, the following query finds all the words in a specific Shakespeare corpus with counts that are at least the specified value.
For more information, see Running parameterized queries in the BigQuery documentation.
End of explanation
# Define a name for the new dataset.
dataset_id = "your_new_dataset"
# The project defaults to the Client's project if not specified.
dataset = client.create_dataset(dataset_id) # API request
Explanation: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
End of explanation
sql = """
SELECT corpus
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY corpus;
"""
table_ref = dataset.table("your_new_table_id")
job_config = bigquery.QueryJobConfig(destination=table_ref)
# Start the query, passing in the extra configuration.
query_job = client.query(sql, location="US", job_config=job_config)
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
Explanation: Write query results to a destination table
For more information, see Writing query results in the BigQuery documentation.
End of explanation
records = [
{"title": "The Meaning of Life", "release_year": 1983},
{"title": "Monty Python and the Holy Grail", "release_year": 1975},
{"title": "Life of Brian", "release_year": 1979},
{"title": "And Now for Something Completely Different", "release_year": 1971},
]
# Optionally set explicit indices.
# If indices are not specified, a column will be created for the default
# indices created by pandas.
index = ["Q24980", "Q25043", "Q24953", "Q16403"]
df = pandas.DataFrame(records, index=pandas.Index(index, name="wikidata_id"))
table_ref = dataset.table("monty_python")
job = client.load_table_from_dataframe(df, table_ref, location="US")
job.result() # Waits for table load to complete.
print("Loaded dataframe to {}".format(table_ref.path))
Explanation: Load data from a pandas DataFrame to a new table
End of explanation
source_filename = "resources/us-states.csv"
table_ref = dataset.table("us_states_from_local_file")
job_config = bigquery.LoadJobConfig(
source_format=bigquery.SourceFormat.CSV, skip_leading_rows=1, autodetect=True
)
with open(source_filename, "rb") as source_file:
job = client.load_table_from_file(
source_file,
table_ref,
location="US", # Must match the destination dataset location.
job_config=job_config,
) # API request
job.result() # Waits for table load to complete.
print("Loaded {} rows into {}:{}.".format(job.output_rows, dataset_id, table_ref.path))
Explanation: Load data from a local file to a table
The following example demonstrates how to load a local CSV file into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Loading Data into BigQuery from a local data source in the BigQuery documentation.
End of explanation
# Configure the load job
job_config = bigquery.LoadJobConfig(
schema=[
bigquery.SchemaField("name", "STRING"),
bigquery.SchemaField("post_abbr", "STRING"),
],
skip_leading_rows=1,
# The source format defaults to CSV. The line below is optional.
source_format=bigquery.SourceFormat.CSV,
)
uri = "gs://cloud-samples-data/bigquery/us-states/us-states.csv"
destination_table_ref = dataset.table("us_states_from_gcs")
# Start the load job
load_job = client.load_table_from_uri(uri, destination_table_ref, job_config=job_config)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
# Retrieve the destination table
destination_table = client.get_table(destination_table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
Explanation: Load data from Cloud Storage to a table
The following example demonstrates how to load a CSV file from Cloud Storage into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
End of explanation
# Retrieve the dataset from the API
dataset = client.get_dataset(client.dataset(dataset_id))
# Delete the dataset and its contents
client.delete_dataset(dataset, delete_contents=True)
print("Deleted dataset: {}".format(dataset.path))
Explanation: Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset.
End of explanation |
7,822 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have many duplicate records - some of them have a bank account. I want to keep the records with a bank account. | Problem:
import pandas as pd
import numpy as np
df = pd.DataFrame({'firstname': ['foo Bar', 'Bar Bar', 'Foo Bar'],
'lastname': ['Foo Bar', 'Bar', 'Foo Bar'],
'email': ['Foo bar', 'Bar', 'Foo Bar'],
'bank': [np.nan, 'abc', 'xyz']})
def g(df):
uniq_indx = (df.sort_values(by="bank", na_position='last').dropna(subset=['firstname', 'lastname', 'email'])
.applymap(lambda s: s.lower() if type(s) == str else s)
.applymap(lambda x: x.replace(" ", "") if type(x) == str else x)
.drop_duplicates(subset=['firstname', 'lastname', 'email'], keep='first')).index
return df.loc[uniq_indx]
result = g(df.copy()) |
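For intuition, the same keep-the-record-with-a-bank-account idea can be sketched with plain dictionaries; the field names mirror the DataFrame above, and `dedup_prefer_bank` is a hypothetical helper, not pandas API:

```python
def dedup_prefer_bank(records):
    # Key on the normalised (lowercased, space-stripped) name/email fields;
    # when two records collide, prefer the one that has a bank account.
    keep = {}
    for rec in records:
        key = tuple(rec[f].lower().replace(" ", "")
                    for f in ("firstname", "lastname", "email"))
        cur = keep.get(key)
        if cur is None or (cur.get("bank") is None and rec.get("bank")):
            keep[key] = rec
    return list(keep.values())

rows = [
    {"firstname": "foo Bar", "lastname": "Foo Bar", "email": "Foo bar", "bank": None},
    {"firstname": "Foo Bar", "lastname": "Foo Bar", "email": "Foo Bar", "bank": "xyz"},
]
deduped = dedup_prefer_bank(rows)
print(len(deduped), deduped[0]["bank"])   # 1 xyz
```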
7,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook for preprocessing NYT op-ed data
Goal: Emily & Greg go through NLP preprocessing pipeline for two data sets in parallel
Step1: 1. Read in the Data
Step2: This dataset has 11,648 op-eds from the NY Times. We have additional information for each article (title, author, number of comments, etc.) but for now we will just focus on the text data.
2. Tokenize
For my analysis, I plan to consider each article as a separate document. For my purposes, I do not need to retain punctuation information, so I plan to remove punctuation in my preprocessing | Python Code:
import pandas as pd
import numpy as np
import nltk
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
Explanation: Notebook for preprocessing NYT op-ed data
Goal: Emily & Greg go through NLP preprocessing pipeline for two data sets in parallel
End of explanation
names = range(1,14)
df_list = []
for name in names:
csvfile = '/Users/emilyhalket/Desktop/NLP_NYT/datafiles/{0}_100.csv'.format(name)
df = pd.read_csv(csvfile)
df_list.append(df)
article_df = pd.concat(df_list)
article_df = article_df[pd.notnull(article_df['full_text'])]
article_df.shape
Explanation: 1. Read in the Data
End of explanation
def preprocess_article_content(text_df):
print 'preprocessing article text...'
# text_df is data frame from SQL query, column 'content' contains text content from each article
article_list = []
tokenizer = RegexpTokenizer(r'\w+')
stop_words = set(stopwords.words('english')) # can add more stop words to this set
stemmer = SnowballStemmer('english')
kept_rows = [] # keep track of rows that have unusable articles
for row, article in enumerate(text_df['full_text']):
cleaned_tokens = []
tokens = tokenizer.tokenize(article.decode('utf-8').lower())
for token in tokens:
if token not in stop_words:
                if len(token) > 0 and len(token) < 20: # removes empty and over-long tokens
if not token[0].isdigit() and not token[-1].isdigit(): # removes numbers
stemmed_tokens = stemmer.stem(token)
cleaned_tokens.append(stemmed_tokens)
print 'success for row %d' % row
article_list.append(' '.join(wd for wd in cleaned_tokens))
kept_rows.append(row)
print 'preprocessed content for %d articles' % len(article_list)
return article_list, kept_rows
article_df = article_df[pd.notnull(article_df['full_text'])]
article_df.shape
article_list, kept_rows = preprocess_article_content(article_df)
len(article_list)
article_list[2000]
Explanation: This dataset has 11,648 op-eds from the NY Times. We have additional information for each article (title, author, number of comments, etc.) but for now we will just focus on the text data.
2. Tokenize
For my analysis, I plan to consider each article as a separate document. For my purposes, I do not need to retain punctuation information, so I plan to remove punctuation in my preprocessing
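As a stand-alone illustration of the same cleaning steps, here is a minimal sketch using only the standard library; the stopword set is a tiny stand-in for NLTK's list, and stemming is omitted:

```python
import re

STOP = {"the", "a", "of", "and", "to", "in"}   # tiny stand-in stopword list

def tokenize(article):
    tokens = re.findall(r"\w+", article.lower())   # drops punctuation
    return [t for t in tokens
            if t not in STOP
            and 0 < len(t) < 20                    # drops over-long junk
            and not t[0].isdigit() and not t[-1].isdigit()]  # drops numbers

print(tokenize("The Op-Ed pages of 2016: a test, in brief!"))
# ['op', 'ed', 'pages', 'test', 'brief']
```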
End of explanation |
7,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The :class:Epochs <mne.Epochs> data structure: epoched data
Step1:
Step2: Now, we can create an :class:mne.Epochs object with the events we've extracted.
Step3: Epochs behave similarly to :class:mne.io.Raw objects.
Step4: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs> object directly.
Step5: It is also possible to iterate through :class:Epochs <mne.Epochs> objects in this way.
Step6: You can manually remove epochs from the Epochs object by using :func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
Step7: If you wish to save the epochs as a file, you can do it with :func:mne.Epochs.save.
Step8: Later on you can read the epochs with :func:mne.read_epochs.
Step9: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. | Python Code:
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
:class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of shape
(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
End of explanation
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
Explanation: :class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.Raw> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
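To see what the (n_events, n_channels, n_times) layout means, here is a NumPy-only sketch of cutting fixed windows out of a continuous recording around event samples (`make_epochs` is an illustrative helper, not part of MNE):

```python
import numpy as np

def make_epochs(data, event_samples, n_before, n_after):
    # data: (n_channels, n_times); one window per event sample.
    wins = [data[:, s - n_before:s + n_after] for s in event_samples]
    return np.stack(wins)          # (n_events, n_channels, n_times)

cont = np.arange(2 * 100).reshape(2, 100)        # 2 channels, 100 samples
ep = make_epochs(cont, [20, 50, 80], n_before=5, n_after=10)
print(ep.shape)                                  # (3, 2, 15)
```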
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True)
print(epochs)
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
print(epochs.events[:3])
print(epochs.event_id)
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
print(epochs[1:5])
print(epochs['Auditory/Right'])
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
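The rejection rule behind reject is simple to state: an epoch is dropped when any channel's peak-to-peak amplitude exceeds its threshold (flat is the analogous lower bound). Here is a NumPy sketch of just the peak-to-peak check; `bad_epoch` is illustrative, not MNE code:

```python
import numpy as np

def bad_epoch(epoch, threshold):
    ptp = epoch.max(axis=1) - epoch.min(axis=1)   # per-channel peak-to-peak
    return bool((ptp > threshold).any())

quiet = np.array([[0.0, 1.0, 0.5], [0.2, 0.1, 0.3]])
noisy = np.array([[0.0, 9.0, 0.5], [0.2, 0.1, 0.3]])
print(bad_epoch(quiet, 2.0), bad_epoch(noisy, 2.0))   # False True
```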
End of explanation
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname)
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
epochs = mne.read_epochs(epochs_fname, preload=False)
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False)
_ = ev_right.plot(axes=axs[:, 1], show=False)
plt.tight_layout()
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
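Conceptually, averaging epochs into an evoked response is just a mean over the trial axis of the (n_events, n_channels, n_times) array; a NumPy sketch (MNE's average also carries channel and timing metadata, so this is only the arithmetic):

```python
import numpy as np

epochs_arr = np.arange(2 * 3 * 4.0).reshape(2, 3, 4)   # 2 trials
evoked_arr = epochs_arr.mean(axis=0)                   # (n_channels, n_times)
print(evoked_arr.shape, evoked_arr[0, 0])              # (3, 4) 6.0
```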
End of explanation |
7,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to ML Deployment
Deploying models created using python in a Turi Predictive Service is very easy. This notebook walks you through the step-by-step process.
<img src='images/predictive_services_overview.png'></img>
Deployment Steps
The notebook has three sections
Step1: We can expose the trained model as a REST endpoint. This will allow other applications to consume the predictions from the model.
In order to do that, we wrap the model object in a Python function and add it to the Predictive Service. In the function you may add your own logic for transforming input to the model, ensembling different models, or manipulating output before returning. Check out the user guide for more details.
The result of the function needs to be a JSON serializable object.
Step2: 2. Create a Predictive Service (One time) <a id='create'></a>
This section shows you how to deploy a Predictive Service to EC2. The EC2 instances used by the Predictive Service will be launched in your own AWS account, so you will be responsible for the cost.
<img src="images/middle.png"></img>
To create a Predictive Service in Amazon AWS, we first configure the EC2 Config object, which contains the configuration parameters required for launching a Predictive Service cluster in EC2. These fields are optional and include the region, instance type, CIDR rules etc. Predictive Service uses this configuration for service creation.
Having configured our EC2 Config object, we're ready to launch a Predictive Service Deployment. There are a few aspects of the Predictive Service that can be customized
Step3: Load an already created service
Step4: Query the model <a id='query'></a>
You may do a test query before really deploying it to production. This will help detect errors in the function before deploying it to the Predictive Service.
<img src="images/right.png"></img>
Step5: Query from external applications via REST
Now other applications can interact with our model! In the next section we will illustrate how to consume the model. We can also use other APIs like ps.update() to update a model, ps.remove() to remove a model.
The model query is exposed through REST API. The path is | Python Code:
# In order to run this code, you need an already trained model (see the accompanying notebook)
import graphlab as gl
model = gl.load_model('pattern_mining_model.gl')
model
Explanation: Introduction to ML Deployment
Deploying models created using python in a Turi Predictive Service is very easy. This notebook walks you through the step-by-step process.
<img src='images/predictive_services_overview.png'></img>
Deployment Steps
The notebook has three sections:
<a href='#cpo'>Create a model</a>
<a href='#create'>Create a predictive service</a>
<a href='#query'>Query the model</a>
If you are deploying a model in an existing Predictive Service instance you can go to step (2) directly.
1. Create a model <a id='cpo'></a>
Let's train a simple pattern mining model
<img src="images/left.png"></img>
End of explanation
def predict(x):
# Construct an SFrame
sf = gl.SFrame(x)
# Add your own business logic here
# Call the predict method on the model.
predictions = model.predict(sf)
return predictions['prediction']
Explanation: We can expose the trained model as a REST endpoint. This will allow other applications to consume the predictions from the model.
In order to do that, we wrap the model object in a Python function and add it to the Predictive Service. In the function you may add your own logic for transforming input to the model, ensembling different models, or manipulating output before returning. Check out the user guide for more details.
The result of the function needs to be a JSON serializable object.
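A common pitfall with that requirement: NumPy containers are not JSON serializable as-is, so convert them (e.g. with .tolist()) before returning. A small sketch:

```python
import json
import numpy as np

preds = np.array([0, 1, 1])
try:
    json.dumps({"prediction": preds})      # ndarray -> TypeError
except TypeError:
    pass
print(json.dumps({"prediction": preds.tolist()}))   # {"prediction": [0, 1, 1]}
```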
End of explanation
import graphlab as gl
# Replace with your path.
ps_state_path = 's3://<your-bucket-name>/predictive_service/ps'
# Set your AWS credentials.
gl.aws.set_credentials(<key>, <secret>)
# Create an EC2 config
ec2_config = gl.deploy.Ec2Config()
# Launch a predictive service
ps = gl.deploy.predictive_service.create(name = 'sklearn-predictive-service',
ec2_config = ec2_config, state_path = ps_state_path, num_hosts = 1)
Explanation: 2. Create a Predictive Service (One time) <a id='create'></a>
This section shows you how to deploy a Predictive Service to EC2. The EC2 instances used by the Predictive Service will be launched in your own AWS account, so you will be responsible for the cost.
<img src="images/middle.png"></img>
To create a Predictive Service in Amazon AWS, we first configure the EC2 Config object, which contains the configuration parameters required for launching a Predictive Service cluster in EC2. These fields are optional and include the region, instance type, CIDR rules etc. Predictive Service uses this configuration for service creation.
Having configured our EC2 Config object, we're ready to launch a Predictive Service Deployment. There are a few aspects of the Predictive Service that can be customized:
* Number of nodes in the service - By default the number of hosts (num_hosts) is 1. To obtain good cache utility and high availability, we recommended setting num_hosts to at least 3.
* State path to persist service state and service logs. This is a s3 location.
* Port to be used by the server.
* Other settings, such as SSL credentials etc.
The following code snippet shows you how to create a Predictive Service. You will have to replace the ps_state_path and credentials for your Predictive Service.
End of explanation
import graphlab as gl
ps = gl.deploy.predictive_service.load('s3://gl-demo-usw2/predictive_service/demolab/ps-1.6')
ps
# ps.add('pattern-mining', predict) (When you add this for the first time)
ps.update('pattern-mining', predict)
ps.apply_changes()
Explanation: Load an already created service
End of explanation
# test query to make sure the model works fine
ps.query('pattern-mining', x={'Receipt': [1], 'StoreNum': [2], 'Item': ['CherryTart']})
Explanation: Query the model <a id='query'></a>
You may do a test query before really deploying it to production. This will help detect errors in the function before deploying it to the Predictive Service.
<img src="images/right.png"></img>
End of explanation
import json
import requests
def restful_query(x):
headers = {'content-type': 'application/json'}
payload = {'api_key':'b437e588-0f2b-45e1-81c8-ce3acfa81ade', "data":{"x": x}}
end_point = 'http://demolab-one-six-2015364754.us-west-2.elb.amazonaws.com/query/pattern-mining'
return requests.post(end_point, json.dumps(payload), headers=headers).json()
restful_query({'Receipt': [1], 'StoreNum': [2], 'Item': ['CherryTart']})
Explanation: Query from external applications via REST
Now other applications can interact with our model! In the next section we will illustrate how to consume the model. We can also use other APIs like ps.update() to update a mode, ps.remove() to remove a model.
The model query is exposed through REST API. The path is:
http(s)://<your-ps-endpoint>/data/<model-name>
And the payload is a JSON serialized string in the following format:
{"api_key": <api key>,
"data": <data-passed-to-custom-query>}
Here the 'api key' may be obtained through ps.api_key, and data is the actual data passed to the custom predictive object in the Predictive Service. It will be passed to the query function in **kwargs format, so the keys of data must match the function's parameter names.
Here is a sample curl command to query your model:
curl -X POST -d '{"api_key":"b437e588-0f2b-45e1-81c8-ce3acfa81ade", "data":{"x":{"Receipt": [1], "StoreNum": [2], "Item": ["CherryTart"]}}}' http://demolab-one-six-2015364754.us-west-2.elb.amazonaws.com/query/pattern-mining
You can also query through Python using the requests module
Query through Python
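The payload itself can be assembled with the standard library alone. Because data is expanded in **kwargs form, its keys must match the deployed function's parameter names (here x, matching predict(x)); the api_key below is the placeholder value from the curl example above, not a live key:

```python
import json

payload = json.dumps({
    "api_key": "b437e588-0f2b-45e1-81c8-ce3acfa81ade",   # placeholder key
    "data": {"x": {"Receipt": [1], "StoreNum": [2], "Item": ["CherryTart"]}},
})
body = json.loads(payload)
print(body["data"]["x"]["Item"])   # ['CherryTart']
```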
End of explanation |
7,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model summary
Run done with a model with three convolutional layers, two fully connected layers and a final softmax layer, with a constant 48 channels per convolutional layer. Initially run with dropout in the two fully connected layers and minor random augmentation (4 rotations and flip); when learning appeared to stop, this run was halted, dropout was removed and more significant random augmentation applied (random arbitrary rotations, shunting, scaling and flipping). This gave a further gain in performance, with an eventual best of 0.848 NLL on the validation set. Various manual changes to the learning rate etc. at this point did not seem to give any further gain in performance.
Step1: Train and valid set NLL trace
The discontinuity at just over 80 epochs is due to resuming without dropout and with more augmentation.
Step2: Visualising first layer weights
Quite nice features appear to have been learned with some kernels appearing to have been learned at various rotations. Some quite small scale features appear to have been learned too.
Step3: Learning rate
Initially a linear decay learning rate schedule was used together with a monitor-based adjuster. It turns out these don't play well together, as the linear decay schedule overwrites any adjustments made by the monitor-based extension at the next epoch. After resuming, the initial learning rate was manually reduced and the learning rate schedule was set exclusively with the monitor-based adjuster.
Step4: Update norm monitoring
The ratio of parameter norms to update norms across epochs is plotted for different layers, to give an idea of how the learning rate schedule is performing.
print('## Model structure summary\n')
print(model)
params = model.get_params()
n_params = {p.name : p.get_value().size for p in params}
total_params = sum(n_params.values())
print('\n## Number of parameters\n')
print(' ' + '\n '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100.*v/total_params)
for k, v in sorted(n_params.items(), key=lambda x: x[0])]))
print('\nTotal : {0}'.format(total_params))
Explanation: Model summary
Run done with a model with three convolutional layers, two fully connected layers and a final softmax layer, with a constant 48 channels per convolutional layer. Initially run with dropout in the two fully connected layers and minor random augmentation (4 rotations and flip); when learning appeared to stop, this run was halted, dropout was removed and more significant random augmentation applied (random arbitrary rotations, shunting, scaling and flipping). This gave a further gain in performance, with an eventual best of 0.848 NLL on the validation set. Various manual changes to the learning rate etc. at this point did not seem to give any further gain in performance.
End of explanation
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)
ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0,tr.shape[0],20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')
print("Minimum validation set NLL {0}".format(min(model.monitor.channels['valid_y_y_1_nll'].val_record)))
Explanation: Train and valid set NLL trace
The discontinuity at just over 80 epochs is due to resuming without dropout and with more augmentation.
End of explanation
pv = get_weights_report(model=model)
img = pv.get_img()
img = img.resize((8*img.size[0], 8*img.size[1]))
img_data = io.BytesIO()
img.save(img_data, format='png')
display(Image(data=img_data.getvalue(), format='png'))
Explanation: Visualising first layer weights
Quite nice features appear to have been learned with some kernels appearing to have been learned at various rotations. Some quite small scale features appear to have been learned too.
End of explanation
plt.plot(model.monitor.channels['learning_rate'].val_record)
Explanation: Learning rate
Initially a linear decay learning rate schedule was used together with a monitor-based adjuster. It turns out these don't play well together, as the linear decay schedule overwrites any adjustments made by the monitor-based extension at the next epoch. After resuming, the initial learning rate was manually reduced and the learning rate schedule was set exclusively with the monitor-based adjuster.
End of explanation
h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record])
h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record])
plt.plot(h1_W_norms / h1_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record)
h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record])
h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record])
plt.plot(h2_W_norms / h2_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record)
h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record])
h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record])
plt.plot(h3_W_norms / h3_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record)
h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_col_norm_mean'].val_record])
h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_col_norms_mean'].val_record])
plt.plot(h4_W_norms / h4_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h4_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h4_col_norms_max'].val_record)
h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_col_norm_mean'].val_record])
h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_col_norms_mean'].val_record])
plt.plot(h5_W_norms / h5_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h5_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h5_col_norms_max'].val_record)
Explanation: Update norm monitoring
The ratio of parameter norms to update norms across epochs is plotted for different layers, to give an idea of how the learning rate schedule is performing.
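The plotted quantity is just the parameter norm divided by the update norm (large values mean small relative steps); a toy NumPy sketch with made-up numbers:

```python
import numpy as np

W = np.ones((4, 4))                 # pretend weight matrix
update = 0.01 * np.ones((4, 4))     # pretend SGD update
ratio = np.linalg.norm(W) / np.linalg.norm(update)
print(ratio)                        # 100.0
```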
End of explanation |
7,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Factorial HMM
Example synthetic data
Step1: Test out learned distribution inside of SMC
We'll compare it against a baseline of "bootstrap" SMC, which proposes from the transition dynamics of the individual HMMs.
Step2: Look at rate of path coalescence | Python Code:
devices = factorial_hmm.gen_devices()
T = 50
np.random.seed(20)
X, Y = factorial_hmm.gen_dataset(devices, T)
plt.figure(figsize=(15,3.5))
plt.plot(Y)
plt.figure(figsize=(15,10))
plt.imshow((X*devices).T, interpolation='None', aspect=1);
plt.yticks(np.arange(len(devices)), devices);
print len(devices), 2**len(devices)
trace_train = []
trace_validation = []
dist_est = cde.ConditionalBinaryMADE(len(devices)+1, len(devices), H=300, num_layers=4)
if USE_GPU:
dist_est.cuda()
dist_est.load_state_dict(torch.load('../saved/trained_hmm_params.rar'))
Explanation: Factorial HMM
Example synthetic data: 20 different "devices", each with different power consumptions, turning on and off following separate Markov models
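For intuition, a single "device" is just a two-state (off/on) Markov chain; here is a stdlib sketch with made-up transition probabilities (`simulate_device` is illustrative, not the notebook's `gen_devices`):

```python
import random

def simulate_device(T, p_on=0.1, p_off=0.3, seed=0):
    rng = random.Random(seed)
    state, states = 0, []
    for _ in range(T):
        p_turn_on = p_on if state == 0 else 1.0 - p_off   # P(next state is on)
        state = 1 if rng.random() < p_turn_on else 0
        states.append(state)
    return states

x = simulate_device(50)
print(set(x) <= {0, 1}, len(x))   # True 50
```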
End of explanation
X_hat_bootstrap, ancestry_bootstrap, ESS_bootstrap = \
factorial_hmm.run_smc(devices, Y, 500, factorial_hmm.baseline_proposal, verbose=False)
Y_hat_bootstrap = np.dot(X_hat_bootstrap, devices)
nn_proposal = factorial_hmm.make_nn_proposal(dist_est)
X_hat_nn, ancestry_nn, ESS_nn = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
Y_hat_nn = np.dot(X_hat_nn, devices)
plt.hist(ESS_bootstrap, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgecolor='k')
plt.hist(ESS_nn, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgecolor='k')
plt.xlim([0,plt.xlim()[1]])
plt.legend(['bootstrap', 'nnsmc'])
plt.title('Histogram of effective sample size of SMC filtering distribution');
plt.figure(figsize=(16,4))
plt.title('Ancestral paths for bootstrap proposals (blue) and nn (green)')
plt.plot(ancestry_bootstrap.T, color=sns.color_palette()[0]);
plt.plot(ancestry_nn.T, color=sns.color_palette()[1]);
plt.ylim(0,ancestry_nn.shape[0])
plt.xlim(0,T-1);
plt.figure(figsize=(14,3.25))
plt.plot(np.dot(X_hat_nn, devices).T, color=sns.color_palette()[1], alpha=0.1)
plt.plot(np.arange(len(Y)), Y,'k--')
plt.xlim([0,T-1])
plt.xlabel('Time step')
plt.ylabel('Total energy usage')
Explanation: Test out learned distribution inside of SMC
We'll compare it against a baseline of "bootstrap" SMC, which proposes from the transition dynamics of the individual HMMs.
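The effective sample size used in the comparison is typically computed from normalised particle weights as ESS = 1 / sum(w_i^2); a stdlib sketch of the formula:

```python
def ess(weights):
    total = float(sum(weights))
    return 1.0 / sum((w / total) ** 2 for w in weights)

print(ess([1.0] * 100))          # 100.0 : uniform weights, no degeneracy
print(ess([1.0] + [0.0] * 99))   # 1.0   : all mass on one particle
```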
End of explanation
ANC_PRIOR = []
ANC_NN = []
def count_uniques(ancestry):
K, T = ancestry.shape
counts = np.empty((T,), dtype=int)
for t in xrange(T):
counts[t] = len(np.unique(ancestry[:,t]))
return counts
def run_iter():
X,Y = factorial_hmm.gen_dataset(devices, T=30)
X_particles_baseline, ancestry_baseline, _ = \
factorial_hmm.run_smc(devices, Y, 100, factorial_hmm.baseline_proposal, verbose=False)
print "smc complete"
X_particles, ancestry_nnsmc, _ = \
factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
print "nn complete"
ANC_PRIOR.append(count_uniques(ancestry_baseline))
ANC_NN.append(count_uniques(ancestry_nnsmc))
return X,Y
for i in xrange(10):
print "iteration", i+1
X_tmp, Y_tmp = run_iter()
plt.figure(figsize=(8,3.5))
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_PRIOR, 0));
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_NN, 0));
plt.legend(['Bootstrap SMC', 'NN-SMC'], loc='upper left')
pm = np.mean(ANC_PRIOR, 0)
psd = np.std(ANC_PRIOR, 0)
safe_lb = (pm - psd) * (pm - psd > 1.0) + (pm - psd <= 1.0)
plt.fill_between(np.arange(len(X_tmp)), safe_lb, pm+psd, alpha=0.25, color=sns.color_palette()[0]);
pm = np.mean(ANC_NN, 0)
psd = np.std(ANC_NN, 0)
plt.fill_between(np.arange(len(X_tmp)), pm-psd, pm+psd, alpha=0.25, color=sns.color_palette()[1]);
plt.semilogy();
plt.xlabel('Time step')
plt.ylabel('Surviving paths')
plt.ylim(1, 100)
plt.xlim(0, len(X_tmp)-1)
plt.tight_layout()
Explanation: Look at rate of path coalescence
End of explanation |
7,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Text Classification using TensorFlow/Keras on Cloud ML Engine </h1>
This notebook illustrates
Step2: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset
Step4: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https
Step8: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
Step9: Finally we will save our data, which is currently in-memory, to disk.
Step10: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>
Step11: Train on the Cloud
Let's first copy our training data to the cloud
Step12: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Results
What accuracy did you get?
Deploy trained model
Once your training completes you will see your exported models in the output directory specified in Google Cloud Storage.
You should see one model for each training checkpoint (default is every 1000 steps).
Step13: We will take the last export and deploy it as a REST API using Google Cloud Machine Learning Engine
Step14: Get Predictions
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
Step15: Our serving input function expects the already tokenized representations of the headlines, so we do that pre-processing in the code before calling the REST API.
Note
Step16: How many of your predictions were correct?
Rerun with Pre-trained Embedding
In the previous model we trained our word embedding from scratch. Often times we get better performance and/or converge faster by leveraging a pre-trained embedding. This is a similar concept to transfer learning during image classification.
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.14'
import tensorflow as tf
print(tf.__version__)
Explanation: <h1> Text Classification using TensorFlow/Keras on Cloud ML Engine </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using BigQuery
<li> Creating a text classification model using the Estimator API with a Keras model
<li> Training on Cloud ML Engine
<li> Deploying the model
<li> Predicting with model
<li> Rerun with pre-trained embedding
</ol>
End of explanation
import google.datalab.bigquery as bq
query=
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
LIMIT 10
df = bq.Query(query).execute().result().to_dataframe()
df
Explanation: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
query=
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 10
df = bq.Query(query).execute().result().to_dataframe()
df
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
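For a quick local sanity check of that parsing logic, here is a rough Python equivalent of the BigQuery expression (the `extract_source` helper is ours, not part of the lab):

```python
import re

def extract_source(url):
    # Mimic the BigQuery logic: capture the hostname, split on '.',
    # and take the second label from the right
    # (e.g. 'nytimes' from 'mobile.nytimes.com').
    match = re.search(r'.*://([^/]+)/', url)
    if match is None:
        return None
    parts = match.group(1).split('.')
    return parts[-2] if len(parts) >= 2 else parts[0]

print(extract_source('http://mobile.nytimes.com/2015/10/some-article'))  # nytimes
```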
query=
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
(SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
df = bq.Query(query + " LIMIT 10").execute().result().to_dataframe()
df.head()
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
traindf = bq.Query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").execute().result().to_dataframe()
evaldf = bq.Query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").execute().result().to_dataframe()
Explanation: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).
End of explanation
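The same hash-based split can be reproduced locally for data already in memory; a minimal sketch (the `hash_bucket` helper is ours, and hashlib's md5 stands in for BigQuery's FARM_FINGERPRINT, so exact bucket assignments will differ):

```python
import hashlib

def hash_bucket(text, num_buckets=4):
    # md5 is stable across runs and processes, unlike Python's built-in
    # hash(), so the resulting split is repeatable.
    digest = hashlib.md5(text.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets

titles = ['show hn: my project', 'a new york times story', 'some headline']
train = [t for t in titles if hash_bucket(t) > 0]   # roughly 75% of rows
evald = [t for t in titles if hash_bucket(t) == 0]  # roughly 25% of rows
```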
traindf['source'].value_counts()
evaldf['source'].value_counts()
Explanation: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
End of explanation
import os, shutil
DATADIR='data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
!head -3 data/txtcls/train.tsv
!wc -l data/txtcls/*.tsv
Explanation: Finally we will save our data, which is currently in-memory, to disk.
End of explanation
%%bash
## Make sure we have the latest version of Google Cloud Storage package
pip install --upgrade google-cloud-storage
rm -rf txtcls_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
-- \
--output_dir=${PWD}/txtcls_trained \
--train_data_path=${PWD}/data/txtcls/train.tsv \
--eval_data_path=${PWD}/data/txtcls/eval.tsv \
--num_epochs=0.1
Explanation: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job.
There are some TODOs in the model.py, make sure to complete the TODOs before proceeding!
Run Locally
Let's make sure the code compiles by running locally for a fraction of an epoch
End of explanation
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--num_epochs=5
Explanation: Train on the Cloud
Let's first copy our training data to the cloud:
End of explanation
%%bash
gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/
Explanation: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Results
What accuracy did you get?
Deploy trained model
Once your training completes you will see your exported models in the output directory specified in Google Cloud Storage.
You should see one model for each training checkpoint (default is every 1000 steps).
End of explanation
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1_fromscratch"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/ | tail -1)
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME} --quiet
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: We will take the last export and deploy it as a REST API using Google Cloud Machine Learning Engine
End of explanation
techcrunch=[
'Uber shuts down self-driving trucks unit',
'Grover raises €37M Series A to offer latest tech products as a subscription',
'Tech companies can now bid on the Pentagon’s $10B cloud contract'
]
nytimes=[
'‘Lopping,’ ‘Tips’ and the ‘Z-List’: Bias Lawsuit Explores Harvard’s Admissions',
'A $3B Plan to Turn Hoover Dam into a Giant Battery',
'A MeToo Reckoning in China’s Workplace Amid Wave of Accusations'
]
github=[
'Show HN: Moon – 3kb JavaScript UI compiler',
'Show HN: Hello, a CLI tool for managing social media',
'Firefox Nightly added support for time-travel debugging'
]
Explanation: Get Predictions
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
End of explanation
import pickle
from tensorflow.python.keras.preprocessing import sequence
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
requests = techcrunch+nytimes+github
# Tokenize and pad sentences using same mapping used in the deployed model
tokenizer = pickle.load( open( "txtclsmodel/tokenizer.pickled", "rb" ) )
requests_tokenized = tokenizer.texts_to_sequences(requests)
requests_tokenized = sequence.pad_sequences(requests_tokenized,maxlen=50)
# JSON format the requests
request_data = {'instances':requests_tokenized.tolist()}
# Authenticate and call CMLE prediction API
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
parent = 'projects/%s/models/%s' % (PROJECT, 'txtcls') #version is not specified so uses default
response = api.projects().predict(body=request_data, name=parent).execute()
# Format and print response
for i in range(len(requests)):
print('\n{}'.format(requests[i]))
print(' github : {}'.format(response['predictions'][i]['dense'][0]))
print(' nytimes : {}'.format(response['predictions'][i]['dense'][1]))
print(' techcrunch: {}'.format(response['predictions'][i]['dense'][2]))
Explanation: Our serving input function expects the already tokenized representations of the headlines, so we do that pre-processing in the code before calling the REST API.
Note: Ideally we would do these transformation in the tensorflow graph directly instead of relying on separate client pre-processing code (see: training-serving skew), howevever the pre-processing functions we're using are python functions so cannot be embedded in a tensorflow graph.
See the <a href="../text_classification_native.ipynb">text_classification_native</a> notebook for a solution to this.
End of explanation
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
Explanation: How many of your predictions were correct?
Rerun with Pre-trained Embedding
In the previous model we trained our word embedding from scratch. Often times we get better performance and/or converge faster by leveraging a pre-trained embedding. This is a similar concept to transfer learning during image classification.
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/
You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed.
End of explanation |
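Once the embedding file is available, it can be turned into a matrix aligned with the tokenizer's word index; a rough sketch of the usual approach (the helper name, file path, and dimensions here are ours, not from the lab code):

```python
import numpy as np

def build_embedding_matrix(glove_path, word_index, embedding_dim=200):
    # Each line of a GloVe file is: word v1 v2 ... vN
    embeddings = {}
    with open(glove_path, encoding='utf-8') as f:
        for line in f:
            values = line.split()
            embeddings[values[0]] = np.asarray(values[1:], dtype='float32')
    # Row i holds the vector for the word with tokenizer index i;
    # words missing from GloVe keep a zero vector.
    matrix = np.zeros((len(word_index) + 1, embedding_dim))
    for word, i in word_index.items():
        vector = embeddings.get(word)
        if vector is not None:
            matrix[i] = vector
    return matrix
```

The resulting matrix would typically be handed to a Keras `Embedding` layer via `weights=[matrix]`, often with `trainable=False` so the pre-trained vectors stay frozen.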
7,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
word2vec example with gensim
In the following cell, we import the necessary libraries and configure the log messages.
Step1: Training a model
I implement a Corpus class with an iterator over a directory containing text files. I will use a Corpus instance to process a collection more efficiently, without having to load it into memory beforehand.
Step2: CORPUSDIR contains a collection of news articles in Spanish (previously normalized to lowercase and stripped of punctuation) with around 150 million words. We train a model in a single pass, ignoring tokens that appear fewer than 10 times in order to discard typos.
Step3: Once training is complete (after almost 30 minutes), we save the model to disk.
Step4: In the future, we will be able to use this model by loading it into memory with the instruction
Step5: Testing our model
The model object contains an enormous matrix of numbers
Step6: Each term in the vocabulary is represented as a vector with 150 dimensions
Step7: These vectors do not tell us much, other than that they contain very small numbers
Step8: We can pick the term that does not fit from a given list of terms using the doesnt_match method
Step9: We can find the most similar terms using our model's most_similar method
Step10: With the same most_similar method we can combine word vectors, playing with the semantic features of each one to discover new relationships. | Python Code:
import gensim, logging, os
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: word2vec example with gensim
In the following cell, we import the necessary libraries and configure the log messages.
End of explanation
class Corpus(object):
'''Corpus class that reads a directory of text documents sequentially'''
def __init__(self, directorio):
self.directory = directorio
def __iter__(self):
for fichero in os.listdir(self.directory):
for linea in open(os.path.join(self.directory, fichero)):
yield linea.split()
Explanation: Training a model
I implement a Corpus class with an iterator over a directory containing text files. I will use a Corpus instance to process a collection more efficiently, without having to load it into memory beforehand.
End of explanation
CORPUSDIR = '/opt/textos/efe/txt/'
oraciones = Corpus(CORPUSDIR)
#model = gensim.models.Word2Vec(oraciones, min_count=10, size=150, workers=2)
# the model can also be trained in two successive but separate steps
#model = gensim.models.Word2Vec() # empty model
#model.build_vocab(oraciones) # first pass, to build the vocabulary
#model.train(other_sentences) # second pass, to compute the vectors
Explanation: CORPUSDIR contains a collection of news articles in Spanish (previously normalized to lowercase and stripped of punctuation) with around 150 million words. We train a model in a single pass, ignoring tokens that appear fewer than 10 times in order to discard typos.
End of explanation
#model.save('/opt/textos/efe/efe.model.w2v')
Explanation: Once training is complete (after almost 30 minutes), we save the model to disk.
End of explanation
model = gensim.models.Word2Vec.load('/opt/textos/efe/efe.model.w2v')
Explanation: In the future, we will be able to use this model by loading it into memory with the instruction:
End of explanation
print(model.corpus_count)
Explanation: Testing our model
The model object contains an enormous matrix of numbers: a table where each row is one of the terms in the recognized vocabulary and each column is one of the features used to model the meaning of that term.
In our model, as trained, we have more than 26 million terms:
End of explanation
print(model['azul'], '\n')
print(model['verde'], '\n')
print(model['microsoft'])
Explanation: Each term in the vocabulary is represented as a vector with 150 dimensions, i.e. 150 features. We can access the vector of a specific term:
End of explanation
print('hombre - mujer', model.similarity('hombre', 'mujer'))
print('madrid - parís', model.similarity('madrid', 'parís'))
print('perro - gato', model.similarity('perro', 'gato'))
print('gato - periódico', model.similarity('gato', 'periódico'))
Explanation: These vectors do not tell us much, other than that they contain very small numbers :-/
The same model object gives access to a number of built-in capabilities that let us evaluate the model both formally and informally. For now, we will settle for the latter: let's visually inspect the meanings our model has learned on its own.
We can compute the semantic similarity between two terms using the similarity method, which returns a number between 0 and 1:
End of explanation
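Under the hood, `similarity` is just the cosine of the angle between the two word vectors; a minimal standalone sketch with numpy (independent of the trained model):

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 for parallel,
    # 0.0 for orthogonal, -1.0 for opposite directions.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# e.g. the equivalent of model.similarity('hombre', 'mujer') would be
# cosine_similarity(model['hombre'], model['mujer'])
print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707
```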
lista1 = 'madrid barcelona gonzález washington'.split()
print('in the list', ' '.join(lista1), 'the odd one out is:', model.doesnt_match(lista1))
lista2 = 'psoe pp ciu epi'.split()
print('in the list', ' '.join(lista2), 'the odd one out is:', model.doesnt_match(lista2))
lista3 = 'publicaron declararon soy negaron'.split()
print('in the list', ' '.join(lista3), 'the odd one out is:', model.doesnt_match(lista3))
lista3 = 'homero saturno cervantes shakespeare cela'.split()
print('in the list', ' '.join(lista3), 'the odd one out is:', model.doesnt_match(lista3))
Explanation: We can pick the term that does not fit from a given list of terms using the doesnt_match method:
End of explanation
terminos = 'psoe chicago sevilla aznar podemos estuvieron'.split()
for t in terminos:
print(t, '==>', model.most_similar(t), '\n')
Explanation: We can find the most similar terms using our model's most_similar method:
End of explanation
print('==> alcalde + mujer - hombre')
most_similar = model.most_similar(positive=['alcalde', 'mujer'], negative=['hombre'], topn=3)
for item in most_similar:
print(item)
print('==> madrid + filipinas - españa')
most_similar = model.most_similar(positive=['madrid', 'filipinas'], negative=['españa'], topn=3)
for item in most_similar:
print(item)
print('==> michel + fútbol + argentina - españa')
most_similar = model.most_similar(positive=['michel', 'fútbol', 'argentina'], negative=['españa'], topn=3)
for item in most_similar:
print(item)
Explanation: With the same most_similar method we can combine word vectors, playing with the semantic features of each one to discover new relationships.
End of explanation |
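The `positive`/`negative` arithmetic above can be reproduced by hand: add and subtract the raw vectors, then rank the vocabulary by cosine similarity to the result. A minimal sketch over a toy, hand-made vocabulary (the trained model is not needed here, and the vectors are invented for illustration):

```python
import numpy as np

def most_similar_to(query, vocab_vectors, topn=3):
    # Rank vocabulary words by cosine similarity to the query vector.
    scores = {}
    for word, vec in vocab_vectors.items():
        scores[word] = np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
    return sorted(scores, key=scores.get, reverse=True)[:topn]

# toy vectors chosen so that 'alcaldesa' ~ 'alcalde' - 'hombre' + 'mujer'
vocab = {
    'alcalde':   np.array([1.0, 1.0, 0.0]),
    'hombre':    np.array([0.0, 1.0, 0.0]),
    'mujer':     np.array([0.0, 0.0, 1.0]),
    'alcaldesa': np.array([1.0, 0.0, 1.0]),
}
query = vocab['alcalde'] - vocab['hombre'] + vocab['mujer']
print(most_similar_to(query, vocab, topn=1))  # ['alcaldesa']
```

gensim's most_similar does essentially this over the full embedding matrix, while also excluding the input words from the results.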
7,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Visualizing High-Performance Gradient Boosting with XGBoost and Yellowbrick
In this post we'll explore how to evaluate the performance of a gradient boosting classifier from the xgboost library on the poker hand dataset using visual diagnostic tools from Yellowbrick. Even though Yellowbrick is designed to work with scikit-learn, it turns out that it works well with any machine learning library that provides a sklearn wrapper module.
Image credit
Step2: Now that the data is downloaded and has been read into a dataframe, I'm going to start by fitting a preliminary XGBClassifier() model from the xgboost.sklearn module, and using the Yellowbrick ClassBalance plot to evaluate whether there are any class imbalance issues that may effect the modeling processing. Anecdotally, I know that certain poker hands are pretty rare, so we should be expecting to see at least some imbalance; the ClassBalance report will tell us just how severe it is.
Class Balance Plot
First we instantiate the ClassBalance visualizer, passing in the xgboost estimator, desired figure size in pixels, and class names. We then call fit on the visualizer, which will also call the xgboost (or sklearn) model's internal fit method. The Yellowbrick score method calls Scikit-Learn classification scoring means to evaluate the internal model, and poof shows the plot.
Step3: One issue we can observe from the above ClassBalance report is that several of our classes - such as the Royal Flush and Straight Flush - are so rare that Scikit-Learn raises a warning that "Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples." This is means that our classifier will be unlikely to successfully predict those hands, no matter how much we try to scale complexity.
As a result we'll use Pandas to convert these rare classes into a single class that includes Flush or better.
Step4: Now we'll break our data into training and test splits, so that we evaluate our fitted model on data that it wasn't trained on. This will allow us to see how well our xgboost model is balancing the bias/variance tradeoff.
Step5: Now that our model is fitted, let's evaluate its performance using some of Yellowbrick's visualizers for classification.
ROCAUC
Receiver Operating Characteristic (ROC) curves are a measure of a classifier’s predictive quality that compares and visualizes the tradeoff between the models’ sensitivity and specificity. The ROC curve displays the true positive rate on the Y axis and the false positive rate on the X axis on both a global average and per-class basis. The ideal point is therefore the top-left corner of the plot
Step6: Classification Report Heatmap
The classification report displays the precision, recall, and F1 scores for the model. In order to support easier interpretation and problem detection, Yellowbrick's implementation of ClassificationReport integrates numerical scores with a color-coded heatmap.
The classification report shows a representation of the main classification metrics on a per-class basis. This gives a deeper intuition of the classifier behavior over global accuracy which can mask functional weaknesses in one class of a multiclass problem. Visual classification reports are used to compare classification models to select models that are “redder”, e.g. have stronger classification metrics or that are more balanced. (Note that Yellowbrick also makes it pretty easy to change the colormap if needed.)
Step7: Class Prediction Error
The Yellowbrick Class Prediction Error chart that shows the support for each class in the fitted classification model displayed as a stacked bar. Each bar is segmented to show the distribution of predicted classes for each class. It is initialized with a fitted model and generates a class prediction error chart on draw. For my part, I find ClassPredictionError a convenient and easier-to-interpret alternative to the standard confusion matrix (which Yellowbrick also has a visualizer for). | Python Code:
%matplotlib inline
import os
import requests
import pandas as pd
import matplotlib.pyplot as plt
from xgboost.sklearn import XGBClassifier
from sklearn.model_selection import train_test_split as tts
from yellowbrick.classifier import ClassBalance, ROCAUC, ClassificationReport, ClassPredictionError
def download_data(data_url, dir_path):
Convenience function that uses the requests library to retrieve data
given a url to the dataset and a directory folder on your computer.
if not os.path.exists(dir_path):
os.mkdir(dir_path)
response = requests.get(data_url)
name = os.path.basename(data_url)
with open(os.path.join(dir_path, name), 'wb') as f:
f.write(response.content)
return name
# Specify the directory where I want to store the data on my machine, and the URL for the data
base_folder = 'data'
poker = 'http://archive.ics.uci.edu/ml/machine-learning-databases/poker/poker-hand-training-true.data'
dataset_name = download_data(poker, base_folder)
# Read the data from disk into a Pandas dataframe
poker_df = pd.read_csv(os.path.join(base_folder, dataset_name))
# Manually label the columns and classes based on the dataset description from the UCI Repository
poker_df.columns = ['first_suit', 'first_rank', 'second_suit', 'second_rank', 'third_suit', 'third_rank',
'fourth_suit', 'fourth_rank', 'fifth_suit', 'fifth_rank', 'hand']
classes = ['zilch', 'one_pair', 'two_pair', 'three_of_a_kind', 'straight', 'flush', 'full_house',
'four_of_a_kind', 'straight_flush', 'royal_flush']
# Separate the data into features (X) and targets (y)
X = poker_df.iloc[:,0:9]
y = poker_df['hand']
Explanation: Visualizing High-Performance Gradient Boosting with XGBoost and Yellowbrick
In this post we'll explore how to evaluate the performance of a gradient boosting classifier from the xgboost library on the poker hand dataset using visual diagnostic tools from Yellowbrick. Even though Yellowbrick is designed to work with scikit-learn, it turns out that it works well with any machine learning library that provides a sklearn wrapper module.
Image credit: "cards" by bl0ndeeo2, Creative Commons License
What is Boosting?
In supervised machine learning, [gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) is an additive training technique for iteratively ensembling weak models into stronger ones. Traditional tree-based methods allow us to scale complexity with increasingly deep trees and more complex branching, but have a tendency to overfit to the training data. Gradient boosting enables us to build complex models using ensembles of shallow trees, which individually have low variance and high bias, but together can incrementally decrease bias via gradient descent. A preliminary, naive model is fit, its error is computed, and the error is used to train the next model, and so on. In this way, the algorithm aims to optimize the loss function over function space by iteratively fitting models that point in the negative gradient direction. Gradient boosting models are also invariant to scaling inputs, meaning that they do not require careful feature normalization, as do some models (like k-nearest neighbors and support vector machines).
In general, when I am prototyping a machine learning application, I leverage the Scikit-Learn API to compare many estimators and a few different hyperparameters. Because I work in a biweekly scrum cycle context, I'm less concerned with optimization (at least at the outset), and more focused on proving out whether or not a new dataset is well-suited to prediction. Scikit-Learn does have an implementation of gradient boosting, but in this post, I'll be using xgboost, which provides implementations of parallel tree boosting and also has a sklearn wrapper.
Conveniently, there are an increasing number of Python libraries that expose convenient Scikit-Learn wrappers for custom estimators (e.g. Gensim, Keras, etc). This means that they can be used inside Scikit-Learn pipelines (if you've never experimented with pipelines or feature unions, here's great post by Zac Stewart on leveraging them in your ML workflow).
Compared to the Scikit-Learn implementation (which tend to be geared towards small-to-medium-sized datasets), xgboost is unique in that it is written to scale seamlessly to large datasets and a Spark context (it evolved out of a research project by Tianqi Chen and Carlos Guestrin). This enables us to quickly compare not only the standard Scikit-Learn estimators but also ones that may more easily scale, should we find good preliminary success with our prototype.
What is Yellowbrick?
The Yellowbrick library is a new Python visualization package that extends the Scikit-Learn API to incorporate visual diagnostics into machine learning -- and has over the last few years become an essential part of my own ML workflow. It's my go-to tool for determining whether I'm on track through my feature analysis -> model selection -> hyperparameter tuning cycles, and for deciding what to do next based on my current results.
About the Data
The dataset we'll be exploring in this post is the Poker Hand data from the UCI Machine Learning Repository. The premise is that given some features of a hand of cards in a poker game, we should be able to predict the type of hand.
Each record in the dataset is an example of a hand consisting of five playing cards drawn from a standard deck of 52. Each card is described using two attributes (suit and rank), for a total of 10 predictive attributes. The target column describes the hand, with the possibilities being:
0: Nothing in hand; not a recognized poker hand
1: One pair; one pair of equal ranks within five cards
2: Two pairs; two pairs of equal ranks within five cards
3: Three of a kind; three equal ranks within five cards
4: Straight; five cards, sequentially ranked with no gaps
5: Flush; five cards with the same suit
6: Full house; pair + different rank three of a kind
7: Four of a kind; four equal ranks within five cards
8: Straight flush; straight + flush
9: Royal flush; {Ace, King, Queen, Jack, Ten} + flush
The order of cards is important, which is why there are 480 possible Royal Flush hands as compared to 4 (one for each suit).
End of explanation
balance = ClassBalance(XGBClassifier(), size=(1080, 720), classes=classes)
balance.fit(X, y)
balance.score(X, y)
b = balance.poof()
Explanation: Now that the data is downloaded and has been read into a dataframe, I'm going to start by fitting a preliminary XGBClassifier() model from the xgboost.sklearn module, and using the Yellowbrick ClassBalance plot to evaluate whether there are any class imbalance issues that may affect the modeling process. Anecdotally, I know that certain poker hands are pretty rare, so we should be expecting to see at least some imbalance; the ClassBalance report will tell us just how severe it is.
Class Balance Plot
First we instantiate the ClassBalance visualizer, passing in the xgboost estimator, desired figure size in pixels, and class names. We then call fit on the visualizer, which will also call the xgboost (or sklearn) model's internal fit method. The Yellowbrick score method calls the underlying Scikit-Learn classification scoring methods to evaluate the internal model, and poof shows the plot.
End of explanation
poker_df.loc[poker_df['hand'] >= 5, 'hand'] = 8
y = poker_df['hand']
classes = ['zilch', 'one_pair', 'two_pair', 'three_of_a_kind', 'straight', 'flush_or_better']
Explanation: One issue we can observe from the above ClassBalance report is that several of our classes - such as the Royal Flush and Straight Flush - are so rare that Scikit-Learn raises a warning that "Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples." This means that our classifier will be unlikely to successfully predict those hands, no matter how much we try to scale complexity.
As a result we'll use Pandas to convert these rare classes into a single class that includes Flush or better.
End of explanation
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.5)
clf = XGBClassifier(max_depth=5)
clf.fit(X_train, y_train)
Explanation: Now we'll break our data into training and test splits, so that we evaluate our fitted model on data that it wasn't trained on. This will allow us to see how well our xgboost model is balancing the bias/variance tradeoff.
End of explanation
rocauc = ROCAUC(clf, size=(1080, 720), classes=classes)
rocauc.score(X_test, y_test)
r = rocauc.poof()
Explanation: Now that our model is fitted, let's evaluate its performance using some of Yellowbrick's visualizers for classification.
ROCAUC
Receiver Operating Characteristic (ROC) curves are a measure of a classifier’s predictive quality that compares and visualizes the tradeoff between the model’s sensitivity and specificity. The ROC curve displays the true positive rate on the Y axis and the false positive rate on the X axis on both a global average and per-class basis. The ideal point is therefore the top-left corner of the plot: false positives are zero and true positives are one.
This leads to another metric, area under the curve (AUC), a computation of the relationship between false positives and true positives. The higher the AUC, the better the model generally is. However, it is also important to inspect the “steepness” of the curve, as this describes the maximization of the true positive rate while minimizing the false positive rate. Generalizing “steepness” usually leads to discussions about convexity, which we do not get into here.
The cool thing about Yellowbrick's implementation of ROCAUC is that we can evaluate a multi-class classifier. Yellowbrick does this by plotting the ROCAUC curve for each class as though it were its own binary classifier, all on one plot.
End of explanation
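The two rates plotted on the ROC axes come straight from binary confusion counts; a minimal sketch, treating one class as the "positive" class (the counts below are made-up for illustration):

```python
def tpr_fpr(tp, fn, fp, tn):
    """True/false positive rates from binary confusion counts."""
    tpr = tp / (tp + fn)  # sensitivity: fraction of actual positives caught
    fpr = fp / (fp + tn)  # fall-out: fraction of actual negatives mislabeled
    return tpr, fpr

# e.g. 80 of 100 positives caught, 10 of 100 negatives mislabeled
print(tpr_fpr(tp=80, fn=20, fp=10, tn=90))  # (0.8, 0.1)
```

Sweeping the classifier's decision threshold and re-computing these two numbers at each setting traces out the per-class curves that ROCAUC draws.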
report = ClassificationReport(clf, size=(1080, 720), classes=classes)
report.score(X_test, y_test)
c = report.poof()
Explanation: Classification Report Heatmap
The classification report displays the precision, recall, and F1 scores for the model. In order to support easier interpretation and problem detection, Yellowbrick's implementation of ClassificationReport integrates numerical scores with a color-coded heatmap.
The classification report shows a representation of the main classification metrics on a per-class basis. This gives a deeper intuition of the classifier behavior over global accuracy which can mask functional weaknesses in one class of a multiclass problem. Visual classification reports are used to compare classification models to select models that are “redder”, e.g. have stronger classification metrics or that are more balanced. (Note that Yellowbrick also makes it pretty easy to change the colormap if needed.)
End of explanation
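For reference, the three scores shown in the classification report heatmap come from the same confusion counts; a minimal sketch with made-up counts:

```python
def classification_scores(tp, fp, fn):
    """Precision, recall, and F1 from binary confusion counts."""
    precision = tp / (tp + fp)  # of predicted positives, how many were right
    recall = tp / (tp + fn)     # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = classification_scores(tp=80, fp=20, fn=20)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.8 0.8 0.8
```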
error = ClassPredictionError(clf, size=(1080, 720), classes=classes)
error.score(X_test, y_test)
e = error.poof()
Explanation: Class Prediction Error
The Yellowbrick Class Prediction Error chart shows the support for each class in the fitted classification model as a stacked bar. Each bar is segmented to show the distribution of predicted classes for each class. It is initialized with a fitted model and generates a class prediction error chart on draw. For my part, I find ClassPredictionError a convenient and easier-to-interpret alternative to the standard confusion matrix (which Yellowbrick also has a visualizer for).
End of explanation |
7,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
7,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 2
Imports
Step1: Violations of graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.
CNN
Fox News
Time
Upload the image for the visualization to this directory and display the image inline in this notebook. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 2
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='graph2.JPG')
Explanation: Violations of graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.
CNN
Fox News
Time
Upload the image for the visualization to this directory and display the image inline in this notebook.
End of explanation |
7,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We see above that there is an entry called "TOTAL". That obviously cannot be a person's name, so we will need to remove it from the dataset. Before we do, let's confirm it is what the name suggests.
Step1: We see that the computed total matches the salary of the "TOTAL" record in the dataset.
print "print out some values of the observation 'TOTAL'"
for name, person in data_dict.iteritems():
if name == 'TOTAL':
print person
salary = []
for name, person in data_dict.iteritems():
if float(person['salary']) > 0:
salary.append(float(person['salary']))
# The loop above also picks up the 'TOTAL' record itself, whose salary equals
# the sum of everyone else's, so the list sums to twice the true total.
print "the sum of salary of all other persons is: ", np.sum(salary)/2
Explanation: We see above that there is an entry called "TOTAL". That obviously cannot be a person's name, so we will need to remove it from the dataset. Before we do, let's confirm it is what the name suggests.
End of explanation
# Let's remove this TOTAL record.
data_dict.pop('TOTAL')
# There is a also a record which belongs to "THE TRAVEL AGENCY IN THE PARK".
# This is not a person and hence should be removed.
data_dict.pop("THE TRAVEL AGENCY IN THE PARK")
# No of records after removal of TOTAL & THE TRAVEL AGENCY IN THE PARK
print "No of records after removal of TOTAL: ", len(data_dict)
### Task 3: Create new feature(s)
### Store to my_dataset for easy export below.
my_dataset = data_dict
print "we create two new features here 'to_poi_message_ratio' and 'from_poi_message_ratio' "
for person in my_dataset.values():
person['to_poi_message_ratio'] = 0
person['from_poi_message_ratio'] = 0
if float(person['from_messages']) > 0:
person['to_poi_message_ratio'] = float(person['from_this_person_to_poi'])/float(person['from_messages'])
if float(person['to_messages']) > 0:
person['from_poi_message_ratio'] = float(person['from_poi_to_this_person'])/float(person['to_messages'])
features_list.extend(['to_poi_message_ratio', 'from_poi_message_ratio'])
### Extract features and labels from dataset for local testing
data = featureFormat(my_dataset, features_list)
labels, features = targetFeatureSplit(data)
### Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html
# Provided to give you a starting point. Try a variety of classifiers.
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf = DecisionTreeClassifier(min_samples_split=6, random_state=10)
test_classifier(clf, my_dataset, features_list)
#clf = ensemble.RandomForestClassifier(criterion='gini', n_estimators=14, max_depth=7,
# max_features=None, random_state=42, min_samples_split=1)
#clf = AdaBoostClassifier(algorithm='SAMME')
#params = dict(reduce_dim__n_components=[1, 2, 3], tree__min_samples_split=[2, 4, 6, 8 10])
#clf = GridSearchCV(clf, param_grid=params, n_jobs=-1, scoring='recall')
#test_classifier(clf, my_dataset, features_list)
### Task 5: Tune your classifier to achieve better than .3 precision and recall
### using our testing script. Check the tester.py script in the final project
### folder for details on the evaluation method, especially the test_classifier
### function. Because of the small size of the dataset, the script uses
### stratified shuffle split cross validation. For more info:
### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html
# Example starting point. Try investigating other evaluation techniques!
from sklearn.cross_validation import train_test_split
features_train, features_test, labels_train, labels_test = \
train_test_split(features, labels, test_size=0.3, random_state=42)
### Task 6: Dump your classifier, dataset, and features_list so anyone can
### check your results. You do not need to change anything below, but make sure
### that the version of poi_id.py that you submit can be run on its own and
### generates the necessary .pkl files for validating your results.
dump_classifier_and_data(clf, my_dataset, features_list)
Explanation: We see that the computed total matches the salary of the "TOTAL" record in the dataset.
End of explanation |
7,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Triggers" data-toc-modified-id="Triggers-1"><span class="toc-item-num">1 </span>Triggers</a></span></li></ul></div>
Triggers
If large signal traces are captured, it's useful to be able to find portions where particular things happen.
Triggers are used for this.
These are analogous to the trigger function on logic analyzers and oscilloscopes.
Triggers are created by performing operations on Trace objects.
A Trace is just a list of values for a signal and the times at which they occurred.
Every Peeker object contains a Trace to record the values observed by the Peeker.
Arithmetic (+, -, *, %, /, //, **, <<, >>, abs), logical (&, |, ^, ~), and comparison (==, !=, <, >, <=, >=) operations
can be performed on Trace and Peeker objects to create a new Trace as a result
Step1: A common trigger is to look for a positive-going edge on a signal.
This can be done by logically-ANDing the signal to a time-delayed and inverted version of itself
Step2: Naturally, there are convenience functions for these common cases
Step3: Another common trigger is to look for when a bus has a certain value
Step4: Or detect when a bus is between two values
Step5: Or trigger when several bits of a bus have a certain value
Step6: Complicated triggers are possible. Here's one that triggers when consecutive bus values differ by more than 10
Step7: Since the output of operations on Traces is another Trace, it's possible to
concatenate several operations to get a concise trigger expression | Python Code:
from random import randrange
from myhdlpeek import *
setup(use_wavedrom=True, use_jupyter=True)
def create_random_trace(name, num_bits, num_samples):
trace = Trace()
trace.name = name
trace.num_bits = num_bits
for i in range(num_samples):
trace.append(Sample(i, randrange(0,2**num_bits)))
return trace
trc1 = create_random_trace('Trc1', 4, 100)
trc2 = create_random_trace('Trc2', 1, 100)
show_traces(trc1, trc2, stop_time=20)
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Triggers" data-toc-modified-id="Triggers-1"><span class="toc-item-num">1 </span>Triggers</a></span></li></ul></div>
Triggers
If large signal traces are captured, it's useful to be able to find portions where particular things happen.
Triggers are used for this.
These are analogous to the trigger function on logic analyzers and oscilloscopes.
Triggers are created by performing operations on Trace objects.
A Trace is just a list of values for a signal and the times at which they occurred.
Every Peeker object contains a Trace to record the values observed by the Peeker.
Arithmetic (+, -, *, %, /, //, **, <<, >>, abs), logical (&, |, ^, ~), and comparison (==, !=, <, >, <=, >=) operations
can be performed on Trace and Peeker objects to create a new Trace as a result:
<Trace|Peeker> op <Trace|Peeker|integer|float> => Trace
There is also a delay() method for creating a time-shifted version of a Trace (useful for
detecting edges):
<Trace|Peeker>.delay(<integer>) => Trace
The times at which the resulting trace is True (i.e., non-zero) can be extracted as a list:
<Trace|Peeker>.trig_times() => [t0, t1, ...]
These time values can be used to set the start_time parameter when displaying waveforms or tables:
trigs = trc.trig_times()
show_waveforms(..., start_time=trigs[0])
I'll demonstrate the creation of triggers using Trace objects containing random values.
(These same operations can be performed on Peeker objects;
it's just easier to create a Trace.)
First, I'll create some random traces:
End of explanation
posedge_trc = trc2 & ~trc2.delay(1) # Create trigger trace that is 1 whenever trc2 has a rising edge.
posedge_trc.name = '+Trc2'
trigs = posedge_trc.trig_times() # Get times at which the trigger trace is 1.
print('Trigger times:', trigs)
start_time = trigs[0] # Start waveform display at the first trigger.
stop_time = start_time+20 # Stop waveform display after 20 time units.
show_traces(trc2, posedge_trc, start_time=start_time, stop_time=stop_time, tock=True)
Explanation: A common trigger is to look for a positive-going edge on a signal.
This can be done by logically-ANDing the signal to a time-delayed and inverted version of itself:
End of explanation
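The AND-with-delayed-inverse idea is easy to see in plain Python, independent of myhdlpeek's Trace operators (here on a simple list of 0/1 samples):

```python
def posedge(samples):
    """1 wherever the signal goes 0 -> 1 relative to the previous sample."""
    # samples[i-1] plays the role of the one-step-delayed trace; the first
    # sample has no predecessor, so it can never be a rising edge.
    return [int(i > 0 and samples[i] and not samples[i - 1])
            for i in range(len(samples))]

print(posedge([0, 1, 1, 0, 1]))  # [0, 1, 0, 0, 1]
```

Swapping the operands (`not samples[i] and samples[i - 1]`) gives the falling-edge version.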
posedge_trc = trc2.posedge(); posedge_trc.name = '+Trc2'
negedge_trc = trc2.negedge(); negedge_trc.name = '-Trc2'
anyedge_trc = trc2.anyedge(); anyedge_trc.name = '+/-Trc2'
show_traces(trc2, posedge_trc, negedge_trc, anyedge_trc, stop_time=20)
Explanation: Naturally, there are convenience functions for these common cases: posedge(), negedge() and anyedge():
End of explanation
val_trc = trc1 == 7 # Create a trigger trace that is 1 whenever trc1 has the value 7.
trigs = val_trc.trig_times()
show_traces(trc1, val_trc, start_time=trigs[0]-5, stop_time=trigs[0]+15)
Explanation: Another common trigger is to look for when a bus has a certain value:
End of explanation
val_trc = (5 <= trc1) & (trc1 <= 8) # Trigger when trace value is in range [5,8].
trigs = val_trc.trig_times()
show_traces(trc1, val_trc, start_time=trigs[0], stop_time=trigs[0]+20)
Explanation: Or detect when a bus is between two values:
End of explanation
bit_val_trc = ((trc1 & 0b0110)>>1) == 3 # Trigger when bits 1 and 2 are both on (trc1 is 6, 7, 14 or 15).
trig = bit_val_trc.trig_times()[0]
show_traces(trc1, bit_val_trc, start_time=trig-5, stop_time=trig+15)
Explanation: Or trigger when several bits of a bus have a certain value:
End of explanation
diff_trc = (abs(trc1 - trc1.delay(1))) > 10
trig = diff_trc.trig_times()[0]
show_traces(trc1, diff_trc, start_time=trig-5, stop_time=trig+15)
Explanation: Complicated triggers are possible. Here's one that triggers when consecutive bus values differ by more than 10:
End of explanation
trig = ((5 <= trc1) & (trc1 <= 8)).trig_times()[0]
show_traces(trc1, val_trc, start_time=trig, stop_time=trig+20)
Explanation: Since the output of operations on Traces is another Trace, it's possible to
concatenate several operations to get a concise trigger expression:
End of explanation |
7,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Prep
Load in cleaned experiment data, generated from this notebook.
Step1: Grab the min and max submission dates for filtering main_summary.
Step2: Load in main_summary, filtered to the min date of the experiment, and (7 * N_WEEKS) days beyond its compleition (max_date) to allow for the specified n-week Retention Analysis. We then join main_summary with the experiment data.
Step3: Assign a branch label to each (client_id, submission_date) tuple
Filtering on non-None branches here ensures that the client was seen in the TAAR study since the broadcasted client_id sets are based on client data from the cleaned TAAR application logs.
Step4: Calculate Retention Data
Perform day over day retention analysis.
Step5: Write to s3 since this job is quite expensive and should only be run once.
Step6: Load processed Retention Data
This section loads the data generated above without having to re-run the entire notebook. | Python Code:
S3_PATH = "s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/cleaned_data/"
# Select essential columns.
clean_data = sqlContext.read.parquet(S3_PATH).select('client_id', 'locale', 'branch', 'submission_date_s3')
# Display number of rows per branch.
clean_data.groupBy('branch').count().collect()
Explanation: Data Prep
Load in cleaned experiment data, generated from this notebook.
End of explanation
min_date = clean_data.select(F.min('submission_date_s3').alias('min_d')).collect()[0].min_d
max_date = clean_data.select(F.max('submission_date_s3').alias('max_d')).collect()[0].max_d
print("min date: " + str(min_date))
print("max date: " + str(max_date))
Explanation: Grab the min and max submission dates for filtering main_summary.
End of explanation
# Get distinct client_ids that were observed in the TAAR experiment.
ensemble_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'ensemble-taar').map(lambda x: x['client_id']).distinct()
linear_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'linear-taar').map(lambda x: x['client_id']).distinct()
control_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'control').map(lambda x: x['client_id']).distinct()
# Reduce redundant Rows to a set of client_ids per branch observed in TAAR.
local_ensemble_ids = set(ensemble_ids.collect())
local_linear_ids = set(linear_ids.collect())
local_control_ids = set(control_ids.collect())
# Sanity check that there are no elements in the set intersection between branches.
print(set.intersection(*[local_ensemble_ids, local_linear_ids, local_control_ids]))
# Broadcast the sets of ids for fast filtering on Main Summary.
bc_ensemble_ids = sc.broadcast(local_ensemble_ids)
bc_linear_ids = sc.broadcast(local_linear_ids)
bc_control_ids = sc.broadcast(local_control_ids)
# print(len(local_ensemble_ids))
# print(len(local_linear_ids))
# print(len(local_control_ids))
ms = (
sqlContext.read.option("mergeSchema", True)
.parquet("s3://telemetry-parquet/main_summary/v4")
.filter("submission_date_s3 >= '{}'".format(min_date))
.filter("normalized_channel = 'release'")
.filter("app_name = 'Firefox'")
.select('client_id', 'active_addons', 'locale', 'subsession_start_date', 'submission_date', 'submission_date_s3')
)
Explanation: Load in main_summary, filtered to the min date of the experiment, and (7 * N_WEEKS) days beyond its completion (max_date) to allow for the specified n-week Retention Analysis. We then join main_summary with the experiment data.
End of explanation
# branches_col = ms.rdd.map(lambda p: (p['client_id'], count_addons(p['active_addons']), assign_branch(p['client_id']), p['submission_date_s3']))
branches_col = ms.withColumn("branch", assign_branch("client_id"))
branches_col = branches_col.filter(branches_col.branch != "None")
branches_col.take(1)
# Double group by and count distinct should leave us with a manageable Pandas DF containing:
# date string in %Y%m%d format (sortable), branch: {ensemble, linear, control} and distinct_client_count
# Everything we need for a day over day retention analysis for only clients observed in the TAAR study
# spanning from the earliest study date to latest available ping.
df_daily_grouped = branches_col.groupby("submission_date_s3", "branch")
retention_pd = df_daily_grouped.agg(F.countDistinct('client_id')).toPandas()
ret_df = retention_pd.sort_values('submission_date_s3', ascending=True)
Explanation: Assign a branch label to each (client_id, submission_date) tuple
Filtering on non-None branches here ensures that the client was seen in the TAAR study since the broadcasted client_id sets are based on client data from the cleaned TAAR application logs.
End of explanation
ret_df.to_csv("taar_v2_retention-alternate.csv", index=False)
Explanation: Calculate Retention Data
Perform day over day retention analysis.
End of explanation
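The Spark aggregation above needs a cluster to run; the day-over-day distinct-client count itself is a plain groupby, sketched here in pandas on fabricated ids, branches and dates (none of which come from the real study):

```python
import pandas as pd

# Toy stand-in for the (client_id, branch, submission_date) tuples:
# count distinct clients per (date, branch), as in the Spark job above.
df = pd.DataFrame({
    "client_id":          ["a", "a", "b", "c", "b", "c"],
    "branch":             ["control", "control", "ensemble-taar",
                           "ensemble-taar", "ensemble-taar", "ensemble-taar"],
    "submission_date_s3": ["20180101", "20180102", "20180101",
                           "20180101", "20180102", "20180102"],
})
retention = (df.groupby(["submission_date_s3", "branch"])["client_id"]
               .nunique()
               .reset_index(name="distinct_clients")
               .sort_values("submission_date_s3"))
print(retention)
```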
%%bash
aws s3 cp taar_v2_retention-alternate.csv s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/
Explanation: Write to s3 since this job is quite expensive and should only be run once.
End of explanation
%%bash
aws s3 cp s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/taar_v2_retention-alternate.csv .
ret = pd.read_csv("taar_v2_retention-alternate.csv")
plt.rcParams['figure.figsize'] = (12, 6)
fig, ax = plt.subplots()
for group, data in ret.groupby("branch"):
(data.sort_values("submission_date_s3")
.plot(x='submission_date_s3',
y='count(DISTINCT client_id)',
ax=ax,
label=group))
plt.ylabel("Retention")
plt.xlabel("submission date ")
plt.title("Day-over-day Retention by Branch")
plt.show()
ret
Explanation: Load processed Retention Data
This section loads the data generated above without having to re-run the entire notebook.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using an RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Import tensorflow and enable eager execution.
Step2: Download the dataset
In this example, we will use the shakespeare dataset. You can use any other dataset that you like.
Step3: Read the dataset
Step4: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
Step5: Creating the input and output tensors
Vectorizing the input and the target text because our model cannot understand strings only numbers.
But first, we need to create the input and output vectors.
Remember the max_length we set above, we will use it here. We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.
For example, consider that the string = 'tensorflow' and the max_length is 9
So, the input = 'tensorflo' and output = 'ensorflow'
After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.
Step6: Creating batches and shuffling them using tf.data
Step7: Creating the model
We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model.
Embedding layer
GRU layer (you can use an LSTM layer here)
Fully connected layer
Step8: Call the model and set the optimizer and the loss function
Step9: Train the model
Here we will use a custom training loop with the help of GradientTape()
We initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input)
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note
Step10: Predicting using our trained model
The below code block is used to generated the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use a multinomial distribution to calculate the index of the predicted word. We use this predicted word as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one word. After we predict the next word, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
If you see the predictions, the model knows when to capitalize, make paragraphs and the text follows a shakespeare style of writing which is pretty awesome! | Python Code:
!pip install unidecode
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using an RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table>
This notebook demonstrates how to generate text with an RNN using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level implementation that's useful to understand as prework before diving in to deeper examples in a similar vein, like Neural Machine Translation with Attention.
This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. The notebook will train a model, and use it to generate sample output.
Here is the output(with start string='w') after training a single layer GRU for 30 epochs with the default settings below:
```
were to the death of him
And nothing of the field in the view of hell,
When I said, banish him, I will not burn thee that would live.
HENRY BOLINGBROKE:
My gracious uncle--
DUKE OF YORK:
As much disgraced to the court, the gods them speak,
And now in peace himself excuse thee in the world.
HORTENSIO:
Madam, 'tis not the cause of the counterfeit of the earth,
And leave me to the sun that set them on the earth
And leave the world and are revenged for thee.
GLOUCESTER:
I would they were talking with the very name of means
To make a puppet of a guest, and therefore, good Grumio,
Nor arm'd to prison, o' the clouds, of the whole field,
With the admire
With the feeding of thy chair, and we have heard it so,
I thank you, sir, he is a visor friendship with your silly your bed.
SAMPSON:
I do desire to live, I pray: some stand of the minds, make thee remedies
With the enemies of my soul.
MENENIUS:
I'll keep the cause of my mistress.
POLIXENES:
My brother Marcius!
Second Servant:
Will't ple
```
Of course, while some of the sentences are grammatical, most do not make sense. But, consider:
Our model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text).
The structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text). Sentences generally end with a period. If you look at the text from a distance (or don't read the individual words too closely), it appears as if it's an excerpt from a play.
As a next step, you can experiment training the model on a different dataset - any large text file(ASCII) will do, and you can modify a single line of code below to make that change. Have fun!
Install unidecode library
A helpful library to convert unicode to ASCII.
End of explanation
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
# Note: Once you enable eager execution, it cannot be disabled.
tf.enable_eager_execution()
import numpy as np
import re
import random
import unidecode
import time
Explanation: Import tensorflow and enable eager execution.
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Download the dataset
In this example, we will use the shakespeare dataset. You can use any other dataset that you like.
End of explanation
text = unidecode.unidecode(open(path_to_file).read())
# length of text is the number of characters in it
print (len(text))
Explanation: Read the dataset
End of explanation
# unique contains all the unique characters in the file
unique = sorted(set(text))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
# setting the maximum length sentence we want for a single input in characters
max_length = 100
# length of the vocabulary in chars
vocab_size = len(unique)
# the embedding dimension
embedding_dim = 256
# number of RNN (here GRU) units
units = 1024
# batch size
BATCH_SIZE = 64
# buffer size to shuffle our dataset
BUFFER_SIZE = 10000
Explanation: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
End of explanation
input_text = []
target_text = []
for f in range(0, len(text)-max_length, max_length):
inps = text[f:f+max_length]
targ = text[f+1:f+1+max_length]
input_text.append([char2idx[i] for i in inps])
target_text.append([char2idx[t] for t in targ])
print (np.array(input_text).shape)
print (np.array(target_text).shape)
Explanation: Creating the input and output tensors
Vectorizing the input and the target text because our model cannot understand strings only numbers.
But first, we need to create the input and output vectors.
Remember the max_length we set above, we will use it here. We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.
For example, consider that the string = 'tensorflow' and the max_length is 9
So, the input = 'tensorflo' and output = 'ensorflow'
After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.
End of explanation
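A tiny worked version of the chunking rule, using the word from the text (max_length = 9): the input drops the last character of the chunk and the target drops the first.

```python
# Worked example of the input/target split described above.
text = "tensorflow"
max_length = 9
chunk = text[0:max_length + 1]
inp, targ = chunk[:-1], chunk[1:]
print(inp, targ)  # tensorflo ensorflow
```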
dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
Explanation: Creating batches and shuffling them using tf.data
End of explanation
class Model(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, units, batch_size):
super(Model, self).__init__()
self.units = units
self.batch_sz = batch_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
if tf.test.is_gpu_available():
self.gru = tf.keras.layers.CuDNNGRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
x = self.embedding(x)
# output shape == (batch_size, max_length, hidden_size)
# states shape == (batch_size, hidden_size)
# states variable to preserve the state of the model
# this will be used to pass at every step to the model while training
output, states = self.gru(x, initial_state=hidden)
# reshaping the output so that we can pass it to the Dense layer
# after reshaping the shape is (batch_size * max_length, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# The dense layer will output predictions for every time_steps(max_length)
# output shape after the dense layer == (max_length * batch_size, vocab_size)
x = self.fc(output)
return x, states
Explanation: Creating the model
We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model.
Embedding layer
GRU layer (you can use an LSTM layer here)
Fully connected layer
End of explanation
model = Model(vocab_size, embedding_dim, units, BATCH_SIZE)
optimizer = tf.train.AdamOptimizer()
# using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors
def loss_function(real, preds):
return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds)
Explanation: Call the model and set the optimizer and the loss function
End of explanation
# Training step
EPOCHS = 30
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
hidden = model.reset_states()
for (batch, (inp, target)) in enumerate(dataset):
with tf.GradientTape() as tape:
# feeding the hidden state back into the model
# This is the interesting step
predictions, hidden = model(inp, hidden)
# reshaping the target because that's how the
# loss function expects it
target = tf.reshape(target, (-1,))
loss = loss_function(target, predictions)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables), global_step=tf.train.get_or_create_global_step())
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1,
batch,
loss))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Train the model
Here we will use a custom training loop with the help of GradientTape()
We initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input)
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note: If you are running this notebook in Colab, which has a Tesla K80 GPU, it takes about 23 seconds per epoch.
End of explanation
# Evaluation step(generating text using the model learned)
# number of characters to generate
num_generate = 1000
# You can change the start string to experiment
start_string = 'Q'
# converting our start string to numbers(vectorizing!)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# empty string to store our results
text_generated = ''
# low temperatures result in more predictable text.
# higher temperatures result in more surprising text
# experiment to find the best setting
temperature = 1.0
# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
predictions, hidden = model(input_eval, hidden)
# using a multinomial distribution to predict the word returned by the model
predictions = predictions / temperature
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated += idx2char[predicted_id]
print (start_string + text_generated)
Explanation: Predicting using our trained model
The below code block is used to generated the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use a multinomial distribution to calculate the index of the predicted word. We use this predicted word as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one word. After we predict the next word, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
If you see the predictions, the model knows when to capitalize, make paragraphs and the text follows a shakespeare style of writing which is pretty awesome!
End of explanation
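The temperature-scaled multinomial step can be sketched in plain numpy, independent of TensorFlow; the logits and temperature value below are made up for illustration:

```python
import numpy as np

# Divide logits by the temperature, turn them into probabilities with a
# softmax, and draw a character index -- mirroring tf.multinomial above.
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.1])
temperature = 0.5                      # < 1.0 sharpens the distribution
scaled = logits / temperature
probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
probs /= probs.sum()
predicted_id = rng.choice(len(probs), p=probs)
print(predicted_id, probs.round(3))
```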
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
This notebook analyses lag-frequency spectrums of the light curves simulated through impulse response approach. First, a simple case with delta impulse response is covered. Subsequently, an energy-dependent impulse response scenario is analysed.
Setup
Import some useful libraries.
Step1: Import relevant stingray libraries.
Step2: Initializing
Instantiate a simulator object and define a variability signal.
Step3: For ease of analysis, define a simple delta impulse response with width 1. Here, start parameter refers to the lag delay, which we will soon see.
Step4: Finally, simulate a filtered light curve. Here, filtered means that the initial lag delay portion is cut.
Step5: Analysis
Compute the cross-spectrum.
Step6: Rebin the cross-spectrum for ease of visualization.
Step7: Calculate time lag.
Step8: Plot lag.
Step9: According to Uttley et al (2014), the lag-frequency spectrum shows a constant delay until the frequency (1/2*time_delay) which is represented by the green vertical line in the above figure. After this point, the phase wraps and the lag becomes negative.
Energy Dependent Impulse Responses
In practical situations, different channels may have different impulse responses and hence would react differently to incoming light curves. To account for this, stingray provides an option to simulate light curves and add them to corresponding energy channels.
Below, we analyse the lag-frequency spectrum in such cases.
We define two delta impulse responses with same intensity but varying positions, each applicable on different energy channels (say '3.5-4.5 keV' and '4.5-5.5 keV' energy ranges).
Step10: Now, we create two energy channels to simulate light curves for these two impulse responses.
Step11: Compute cross-spectrum for each channel.
Step12: Calculate lags.
Step13: Get cut-off points.
Step14: Plot lag-frequency spectrums. | Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Contents
This notebook analyses lag-frequency spectrums of the light curves simulated through impulse response approach. First, a simple case with delta impulse response is covered. Subsequently, an energy-dependent impulse response scenario is analysed.
Setup
Import some useful libraries.
End of explanation
from stingray import Lightcurve, Crossspectrum, sampledata
from stingray.simulator import simulator, models
Explanation: Import relevant stingray libraries.
End of explanation
var = sampledata.sample_data()
# Beware: set tstart here, or nothing will work!
sim = simulator.Simulator(N=1024, mean=0.5, dt=0.125, rms=0.4, tstart=var.tstart)
Explanation: Initializing
Instantiate a simulator object and define a variability signal.
End of explanation
delay = 10
s_ir = sim.simple_ir(start=delay, width=1)
Explanation: For ease of analysis, define a simple delta impulse response with width 1. Here, start parameter refers to the lag delay, which we will soon see.
End of explanation
lc = sim.simulate(var.counts, s_ir)
plt.plot(lc.time, lc.counts)
plt.plot(var.time, var.counts)
Explanation: Finally, simulate a filtered light curve. Here, filtered means that the initial lag delay portion is cut.
End of explanation
cross = Crossspectrum(var, lc)
Explanation: Analysis
Compute the cross-spectrum.
End of explanation
cross = cross.rebin(0.0050)
Explanation: Rebin the cross-spectrum for ease of visualization.
End of explanation
lag = cross.time_lag()
Explanation: Calculate time lag.
End of explanation
plt.figure()
# Plot lag-frequency spectrum.
plt.plot(cross.freq, lag, 'r')
# Find cutoff points
v_cutoff = 1.0/(2*delay)
h_cutoff = lag[int((v_cutoff-0.0050)*1/0.0050)]
plt.axvline(v_cutoff, color='g',linestyle='--')
plt.axhline(h_cutoff, color='g', linestyle='-.')
# Define axis
plt.axis([0,0.2,-20,20])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Lag')
plt.title('Lag-frequency Spectrum')
plt.show()
Explanation: Plot lag.
End of explanation
delays = [10,20]
h1 = sim.simple_ir(start=delays[0], width=1)
h2 = sim.simple_ir(start=delays[1], width=1)
Explanation: According to Uttley et al (2014), the lag-frequency spectrum shows a constant delay until the frequency (1/2*time_delay) which is represented by the green vertical line in the above figure. After this point, the phase wraps and the lag becomes negative.
Energy Dependent Impulse Responses
In practical situations, different channels may have different impulse responses and hence would react differently to incoming light curves. To account for this, stingray provides an option to simulate light curves and add them to corresponding energy channels.
Below, we analyse the lag-frequency spectrum in such cases.
We define two delta impulse responses with same intensity but varying positions, each applicable on different energy channels (say '3.5-4.5 keV' and '4.5-5.5 keV' energy ranges).
End of explanation
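The constant-lag-then-wrap behaviour described above can be verified without stingray: delay a random signal by a known number of samples and read the lag off the cross-spectrum phase. The `conj(X)·Y` sign convention here is an assumption, chosen so that a positive lag means y trails x:

```python
import numpy as np

# Delay a random signal by `delay` samples and recover the time lag
# from the cross-spectrum phase; below 1/(2*delay) the lag is flat.
rng = np.random.default_rng(1)
n, dt, delay = 4096, 1.0, 10
x = rng.standard_normal(n)
y = np.roll(x, delay)                      # y is x delayed by `delay` samples
freqs = np.fft.rfftfreq(n, d=dt)[1:]       # drop the zero frequency
cross = np.conj(np.fft.rfft(x)[1:]) * np.fft.rfft(y)[1:]
time_lag = -np.angle(cross) / (2 * np.pi * freqs)
low = freqs < 1.0 / (2 * delay)            # below the phase-wrap frequency
print(time_lag[low].mean())                # ~= delay
```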
sim.simulate_channel('3.5-4.5', var, h1)
sim.simulate_channel('4.5-5.5', var, h2)
Explanation: Now, we create two energy channels to simulate light curves for these two impulse responses.
End of explanation
cross = [Crossspectrum(var, lc).rebin(0.005) for lc in sim.get_channels(['3.5-4.5', '4.5-5.5'])]
Explanation: Compute cross-spectrum for each channel.
End of explanation
lags = [c.time_lag() for c in cross]
Explanation: Calculate lags.
End of explanation
v_cuts = [1.0/(2*d) for d in delays]
h_cuts = [lag[int((v_cut - 0.005) * 1/0.005)] for lag, v_cut in zip(lags, v_cuts)]  # use each branch's own cut-off, not the single-delay v_cutoff from earlier
Explanation: Get cut-off points.
End of explanation
plt.figure()
plots = []
colors = ['r','g']
energies = ['3.5-4.5 keV', '4.5-5.5 keV']
# Plot lag-frequency spectrum
for i in range(0,len(lags)):
plots += plt.plot(cross[i].freq, lags[i], colors[i], label=energies[i])
plt.axvline(v_cuts[i],color=colors[i],linestyle='--')
plt.axhline(h_cuts[i], color=colors[i], linestyle='-.')
# Define axes and add labels
plt.axis([0,0.2,-20,20])
plt.legend()
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Lags')
plt.title('Energy Dependent Frequency-lag Spectrum')
plt.show()
Explanation: Plot lag-frequency spectrums.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
First off, an acknowledgement of the usefulness of the Software Carpentry materials, and particularly data, in preparing this session - this course is not, however, affiliated with or endorsed by the Software Carpentry team. Nonetheless, if you would like a more in-depth walkthrough, relevant to this arthritis example, I recommend you explore their resources, particularly the python-novice-inflammation workshop.
A New Treatment for Arthritis
Starting out with some data
In this task, we will examine some data representing trials of an arthritis treatment. Our data is in the form of an ASCII file, where each row represents a patient and each column a consecutive day's measurement of inflammation.
The Scientific Python Trifecta
numpy - handling numbers
matplotlib - plotting
scipy - general scientific methods
numpy has flexible routines for loading raw data...
In the same directory as your notebooks, there is a folder with some data, called data.
Step1: This construct, a with statement, addresses the age-old problem of cleaning up file descriptors. In general, a with context expects the object being used to have some open and close routines that will be called at the entrance and exit of the block, respectively. Note that we don't have scoping here - the snippet variable exists outside the block, making it easy to load data in a short block and handle it later.
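Since the workshop's data folder isn't bundled here, the sketch below writes a tiny CSV to a temporary directory first; the filename mimics the Software Carpentry inflammation files, which is an assumption about the real data:

```python
import os
import tempfile

import numpy as np

# Create a tiny stand-in for data/inflammation-01.csv.
path = os.path.join(tempfile.mkdtemp(), "inflammation-01.csv")
with open(path, "w") as f:
    f.write("0,1,2\n0,2,1\n")

# The `with` pattern described above: the file closes itself on exit,
# but `snippet` lives on outside the block.
with open(path) as f:
    snippet = f.readline()
print(snippet.strip())
print(f.closed)          # True: the context manager closed it for us

# numpy's loader handles the whole file in one call.
data = np.loadtxt(fname=path, delimiter=",")
print(data.shape)        # (2, 3)
```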
We could use more traditional syntax
Step2: While this has kindly been rendered for us as if it were a list of lists, in reality it is something much more useful...
Step3: This type is numpy's N-dimensional matrix class. This means we get a wide swathe of attributes and methods applicable to matrices, and a whole load of functions inside numpy and other libraries that will happily take it as an argument. One particularly useful property is shape
Step4: This tells us that there are 60 patients (rows), each with 40 days of inflammation measurements (columns).
We can get an idea of the wide variety of methods available on the data object...
Step5: A lot of the magic methods here, those with double underscores on either side, allow Python to treat this object more like a built-in. For instance, the __neg__ method will get called when we enter -data. If we try data1 < data2, then data1.__lt__ will be called with data2 as an argument, expecting a boolean return. The behaviour when the user writes data.previously_unknown_attr is defined by the __getattr__ method - this need not be an error. In some ways, you can think of this as an expansion of the concept of operator overloading.
The ndarray class uses those methods very effectively, to help us write clear and efficient code.
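A quick demonstration of that overloading on a small array; each operator line notes the magic method it dispatches to:

```python
import numpy as np

# __mul__, __neg__ and __lt__ all act element-wise on an ndarray,
# which is what makes `data * 2` "just work".
data = np.array([[1.0, 2.0], [3.0, 4.0]])
doubled = data * 2      # data.__mul__(2)
negated = -data         # data.__neg__()
mask = data < 3         # data.__lt__(3) -> boolean array
print(doubled)
print(mask)
```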
Step6: This works because numpy has overridden the __mul__ method.
We can also do our slicing in multiple dimensions... let's get the top-left corner...
Step7: ...or more succinctly...
Try grabbing the bottom left corner (3x3)...
Step8: A few more practical uses of ndarray
Step9: This could also be written data[3,
Step10: Suppose we want the max for every day...
We could create a new list and write a loop to go through it, but it seems there should be a more transparent, succinct, Pythonic approach
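numpy's reductions provide exactly that succinct approach via an `axis` argument; a sketch on a small stand-in array (the real data is 60x40):

```python
import numpy as np

# Collapse the patient axis (axis 0), leaving one max value per day.
data = np.array([[0.0, 1.0, 0.5],
                 [0.2, 0.9, 0.4]])
per_day_max = data.max(axis=0)
print(per_day_max)
```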
Step11: As you can see, the second day does indeed have max of 1.0. Notice that it returns an array (this is a superclass of ndarray), so we can do...
An aside
Step12: This shouldn't be surprising - we take the average over all patients for each of 40 days, so we end up with a 40 element array.
Try using the following cell to get the mean inflammation over all patients for each of the first three days...
(you should use slicing and end up with a 3 element array)
Step13: RIGHT
Enough text.
Programming isn't about writing reams of code, it's about getting stuff done and having fun while doing it. Right? Right. Right. RIGHT.
In any case, when examining data, it's a whole lot more pleasant and efficient to see visualizations, so lets get started with some plotting.
Plotting
Like we're Brutus the Younger
First, we use a magic method that works in Jupyter (strictly, in its IPython backend)
Step14: Now for inline plotting we can use the matplotlib module, the go-to for Python plotting (but not in the Dijkstra sense)...
Step15: MMMM. Pretty. Red shows greater inflammation, and what we may have guessed from the bits of CSV we saw, inflammation starts low, grows and reduces. Patients along the side, days along the bottom.
Time for a challenge... plot the heat map (as above) for the fourth, fifth and sixth patients, transposed with days up the side. The easiest way is to start with Google... But first, put up your stars so I can see them!
Step16: Reproducible Plots
If you come from a MATLAB background, or use other plotting tools, you will be used to saving/re-opening a plot...
...in Python you normally have a plotting script that you run each time (and save the image).
Cons
You have no standard GUI for designing plots
If your plot involves lots of processing, it's slow to open
Pros
Changes are trivially easy
Every time you open the plot, it contains current data
Con-workarounds
There are libraries for adding GUI elements to plots
You can split into two scripts
output processed data
plot
Example plotting script
Step17: Now if I want to modify some part... I can do so and have the updated plot straight-away
This particular syntax is for fairly basic use, with only one set of axis and so forth, but it's probably quite familiar if you are coming from MATLAB. If we want to expand a bit more, first we use a more classic matplotlib approach, and neater programming approach, by setting properties directly on specific objects, such as axes...
Step18: Not particularly more complicated, but more flexible... say we have a series of plots...
Step19: Challenge
Put up your stars!!
Modify the previous cell to give the max line with a dash, the blue line with dots and the red line solid.
Even better, add a title at the top.
When done, swap to the arrow. If you find this very easy and are waiting, leave your arrow up and have a go at plotting all the patients in very light grey behind your three coloured lines. See how long it takes to run for those 63 lines.
RMV
Step20: If this works, you should see an icon like a rainbow-coloured camera shutter.
Step21: Notice the syntax is quite similar to matplotlib but a little different, so make sure you don't get them mixed up. For example, plot has become specifically line and we now explicitly provide the days along the bottom. Note that numpy.arange is just like range but it returns a numpy array.
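To make the `range` comparison concrete, here is a minimal sketch showing that `numpy.arange` produces a real `ndarray` rather than a lazy sequence:

```python
import numpy as np

days = np.arange(5)
print(days)        # [0 1 2 3 4]
print(type(days))  # <class 'numpy.ndarray'>
```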
Challenge II
Reproduce our three-line matplotlib plot in Bokeh
Step22: Don't forget to put up your star!
Start with the Bokeh documentation and the Bokeh example above...
Step23: Interactive plots
But really, this still isn't very interactive... let's try adding a drop-down to pick which function (average, max, min) that we want to use.
Step24: First off, note that we do not have to tell the function to import the global variable, we can just use it. Now, examine the function - this is callback that we will use to update the plot when we change a drop-down menu choice of function.
We create a source object, an instance of the ColumnDataSource class, and when this function is called, it updates the 'y' item in source.data. Inside the function, we check what has been picked (this is the argument to the function), and update the source "y" array on that basis. Finally, we push this change to the notebook.
Step25: Now we need a widget to use this...
Step26: This is a very basic approach - the widget tool guesses you want a drop-down because you pass a list. Alternatively, you could pass a tuple with two floats as limits and get a slider back.
It seems you do have to output the image before the widgets, if you define the callback in Python, at least. However, there are a lot of other interactors - you can pass a JS callback (as a string), assign URLs to various objects in your plot, animate it, etc. etc. etc.
Final Challenge
Add a slider for variable N from 1 to 60 that reduces the dataset from all Patients to Patients 1 to N. Unless you're feeling particularly ambitious, you can replace the drop-down from the previous example with this slider.
Step27: Don't forget your star! | Python Code:
with open('data/inflammation-01.csv', 'r') as f:
snippet = f.readlines()[:3]
print(*snippet)
Explanation: First off, an acknowledgement of the usefulness of the Software Carpentry materials, and particularly data, in preparing this session - this course is not, however, affiliated with or endorsed by the Software Carpentry team. Nonetheless, if you would like a more in-depth walkthrough, relevant to this arthritis example, I recommend you explore their resources, particularly the python-novice-inflammation workshop.
A New Treatment for Arthritis
Starting out with some data
In this task, we will examine some data representing trials of an arthritis treatment. Our data is in the form of an ASCII file, where each row represents a patient and each column a consecutive day's measurement of inflammation.
The Scientific Python Trifecta
numpy - handling numbers
matplotlib - plotting
scipy - general scientific methods
numpy has flexible routines for loading raw data...
In the same directory as your notebooks, there is a folder with some data, called data.
End of explanation
import numpy as np
data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',') # Comma-separated...
print(data)
Explanation: This construct, a with statement, addresses the age-old problem of cleaning up file descriptors. In general, a with context expects the object being used to have some open and close routines that will be called at the entrance and exit of the block, respectively. Note that we don't have scoping here - the snippet variable exists outside the block, making it easy to load data in a short block and handle it later.
We could use more traditional syntax:
python
f = open('foo.csv', 'r')
...
f.close()
but this succinct approach has the benefit of containing use within a protected block.
It can be used for much more than just simple files, such as RMV.
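As a sketch of how an object can plug into a `with` block, here is a hypothetical `Timer` class (not part of any library) whose `__enter__` and `__exit__` methods run at the start and end of the block:

```python
import time

class Timer:
    """A hypothetical context manager that times the block it wraps."""
    def __enter__(self):
        self.start = time.time()
        return self          # becomes the `as` target

    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.time() - self.start
        return False         # do not suppress exceptions

with Timer() as t:
    total = sum(range(100000))

print(t.elapsed >= 0)        # the elapsed time is recorded after the block
```

Just like the file example, the cleanup in `__exit__` runs even if the block raises.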
We also note that a new piece of Python syntax has appeared - a star, indicating that an iterable (here, a list) is used to fill up arguments to a function. This is especially useful with print, and is equivalent to:
python
print(snippet[0], snippet[1], snippet[2])
However, there are places it really shines. In fact, you can also name arguments to a function - in that case you can additionally provide a dict that fills them out (with two stars to indicate the expansion).
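A minimal sketch of both expansions together - the names here are made up for illustration:

```python
def describe(name, age=0, city="unknown"):
    return f"{name}, {age}, {city}"

args = ["Alice"]                      # fills positional arguments via *
kwargs = {"age": 30, "city": "Cork"}  # fills named arguments via **

print(describe(*args, **kwargs))      # Alice, 30, Cork
```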
Looking at the printed data, you can see we have a large number of columns (equal for each row), data in the range 0-20 and it is comma-separated. Remember, each row represents a patient and each column, the measurement of inflammation on a given day.
Import the data
End of explanation
type(data)
Explanation: While this has kindly been rendered for us as if it were a list of lists, in reality it is something much more useful...
End of explanation
data.shape
Explanation: This type is numpy's N-dimensional matrix class. This means we get a wide swathe of attributes and methods applicable to matrices, and a whole load of functions inside numpy and other libraries that will happily take it as an argument. One particularly useful property is shape:
End of explanation
", ".join(dir(data))
Explanation: This tells us that there are 60 patients (rows), each with 40 days of inflammation measurements (columns).
We can get an idea of the wide variety of methods available on the data object...
End of explanation
print(data * 2)
Explanation: A lot of the magic methods here, those with double underscores on either side, allow Python to treat this object more like a built-in. For instance, the __neg__ method will get called when we enter -data. If we try data1 < data2, then data1.__lt__ will be called with data2 as an argument, expecting a boolean return. The behaviour when the user writes data.previously_unknown_attr is defined by the __getattr__ method - this need not be an error. In some ways, you can think of this as an expansion of the concept of operator overloading.
The ndarray class uses those methods very effectively, to help us write clear and efficient code.
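As an illustration of the same machinery, here is a toy class (nothing to do with numpy's actual implementation) that overloads a few of these hooks:

```python
class Celsius:
    """Toy wrapper around a number, overloading a few magic methods."""
    def __init__(self, degrees):
        self.degrees = degrees

    def __neg__(self):            # called for -c
        return Celsius(-self.degrees)

    def __lt__(self, other):      # called for c1 < c2
        return self.degrees < other.degrees

    def __mul__(self, factor):    # called for c * 2
        return Celsius(self.degrees * factor)

c = Celsius(21.0)
print((-c).degrees)           # -21.0
print(c < Celsius(30.0))      # True
print((c * 2).degrees)        # 42.0
```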
End of explanation
data[0:3, 0:3]
data[:3,:3]
Explanation: This works because numpy has overridden the __mul__ method.
We can also do our slicing in multiple dimensions... let's get the top-left corner...
End of explanation
# Use this cell!
Explanation: ...or more succinctly...
Try grabbing the bottom left corner (3x3)...
End of explanation
data.mean(), data.max(), data.min()
data[3].max() # Max inflammation for 4th patient
Explanation: A few more practical uses of nparray:
End of explanation
data[:,1].max() # Max infl for 2nd day
Explanation: This could also be written data[3,:] - colon on its own just signifies all entries in that axis, from first to last
End of explanation
data.max(axis=0)
Explanation: Suppose we want the max for every day...
We could create a new list and write a loop to go through it, but it seems there should be a more transparent, succinct, Pythonic approach
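Before applying it to the full dataset, here is a minimal illustration of what the `axis` argument means, on a tiny array:

```python
import numpy as np

small = np.array([[1, 2, 3],
                  [4, 5, 6]])

print(small.max(axis=0))  # max down each column: [4 5 6]
print(small.max(axis=1))  # max along each row:   [3 6]
```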
End of explanation
data.max(axis=0).shape
Explanation: As you can see, the second day does indeed have max of 1.0. Notice that it returns an array (this is a superclass of ndarray), so we can do...
An aside: this is the first time you have seen a named argument. In an argument list of a function, you can supply a name for arguments:
python
def some_function(arg1, info=38, axis=None):
...
To write a function like this, you must provide a default for the argument. You can then call the function only passing a subset of arguments, as you have just seen.
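A short sketch of defaults in action, with made-up names for illustration:

```python
def scale(values, factor=2, offset=0):
    return [v * factor + offset for v in values]

print(scale([1, 2, 3]))             # [2, 4, 6]   - defaults used
print(scale([1, 2, 3], offset=10))  # [12, 14, 16] - only offset overridden
print(scale([1, 2, 3], factor=3))   # [3, 6, 9]   - only factor overridden
```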
End of explanation
# Use this cell!
Explanation: This shouldn't be surprising - we take the average over all patients for each of 40 days, so we end up with a 40 element array.
Try using the following cell to get the mean inflammation over all patients for each of the first three days...
(you should use slicing and end up with a 3 element array)
End of explanation
# Switch on the joy
%matplotlib inline
Explanation: RIGHT
Enough text.
Programming isn't about writing reams of code, it's about getting stuff done and having fun while doing it. Right? Right. Right. RIGHT.
In any case, when examining data, it's a whole lot more pleasant and efficient to see visualizations, so lets get started with some plotting.
Plotting
Like we're Brutus the Younger
First, we use a magic command that works in Jupyter (strictly, in its IPython backend)
End of explanation
import matplotlib
pretty_pic = matplotlib.pyplot.imshow(data)
matplotlib.pyplot.show(pretty_pic)
Explanation: Now for inline plotting we can use the matplotlib module, the go-to for Python plotting (but not in the Dijkstra sense)...
End of explanation
# Use this cell!
Explanation: MMMM. Pretty. Red shows greater inflammation, and what we may have guessed from the bits of CSV we saw, inflammation starts low, grows and reduces. Patients along the side, days along the bottom.
Time for a challenge... plot the heat map (as above) for the fourth, fifth and sixth patients, transposed with days up the side. The easiest way is to start with Google... But first, put up your stars so I can see them!
End of explanation
import numpy as np
from matplotlib import pyplot
data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
pyplot.figure(figsize=(5.0, 3.0))
pyplot.xlabel('Day')
pyplot.ylabel('Inflammation')
pyplot.plot(data.mean(axis=0), label='Average')
pyplot.plot(data.max(axis=0), label='Max')
pyplot.plot(data.min(axis=0), label='Min')
pyplot.legend()
pyplot.show()
exec(In[42]) # Cheeky way to reshow output... forget immediately.
Explanation: Reproducible Plots
If you come from a MATLAB background, or use other plotting tools, you will be used to saving/re-opening a plot...
...in Python you normally have a plotting script that you run each time (and save the image).
Cons
You have no standard GUI for designing plots
If your plot involves lots of processing, it's slow to open
Pros
Changes are trivially easy
Every time you open the plot, it contains current data
Con-workarounds
There are libraries for adding GUI elements to plots
You can split into two scripts
output processed data
plot
Example plotting script
End of explanation
# This is the whole figure, possibly
# with multiple subplots
fig = pyplot.figure(figsize=(5.0, 3.0))
# This is a specific set of axes
axes = fig.add_subplot(1, 1, 1)
axes.set_xlabel('Day')
axes.set_ylabel('Inflammation')
axes.plot(data.mean(axis=0), label='Average')
axes.plot(data.max(axis=0), label='Max')
axes.plot(data.min(axis=0), label='Min')
axes.legend()
pyplot.show()
Explanation: Now if I want to modify some part... I can do so and have the updated plot straight-away
This particular syntax is for fairly basic use, with only one set of axis and so forth, but it's probably quite familiar if you are coming from MATLAB. If we want to expand a bit more, first we use a more classic matplotlib approach, and neater programming approach, by setting properties directly on specific objects, such as axes...
End of explanation
fig = pyplot.figure(figsize=(10.0, 3.0))
axes = [] # Blank list
for i in range(1, 4):
ax = fig.add_subplot(1, 3, i)
ax.set_xlabel('Day')
axes.append(ax)
axes[0].set_ylabel('Average')
axes[1].set_ylabel('Max')
axes[2].set_ylabel('Min')
axes[0].plot(data.mean(axis=0))
axes[1].plot(data.max(axis=0))
axes[2].plot(data.min(axis=0))
fig.tight_layout()
pyplot.show(fig)
Explanation: Not particularly more complicated, but more flexible... say we have a series of plots...
End of explanation
import bokeh.plotting as bplot
from bokeh.io import output_notebook
output_notebook()
Explanation: Challenge
Put up your stars!!
Modify the previous cell to give the max line with a dash, the blue line with dots and the red line solid.
Even better, add a title at the top.
When done, swap to the arrow. If you find this very easy and are waiting, leave your arrow up and have a go at plotting all the patients in very light grey behind your three coloured lines. See how long it takes to run for those 63 lines.
RMV: To get you started, Google pyplot line style and pick the pylab_examples example code link...
Bear in mind, and this is particularly the case with matplotlib, that there are multiple similar tools with slightly different configurations and long lists of technical documentation - try not to get lost in them and use Etherpad to flags things up. However, short examples are findable via Google, especially via Stack Overflow, that get you going fast.
But...
Not so interactive
If we are using Jupyter, it is probably because we want to have something interactive. Fixed plots in a web-page make it seem like something is missing.
Bokeh
Dynamic plots
End of explanation
fig = bplot.figure()
days = np.arange(data.shape[1])
fig.line(days, data.mean(axis=0))
fig.xaxis.axis_label = "Day"
bplot.show(fig)
Explanation: If this works, you should see an icon like a rainbow-coloured camera shutter.
End of explanation
# Here's a cell for you...
Explanation: Notice the syntax is quite similar to matplotlib but a little different, so make sure you don't get them mixed up. For example, plot has become specifically line and we now explicitly provide the days along the bottom. Note that numpy.arange is just like range but it returns a numpy array.
Challenge II
Reproduce our three-line matplotlib plot in Bokeh
End of explanation
# RMV
# This is the whole figure, possibly
# with multiple subplots
fig = bplot.figure()
days = np.arange(data.shape[1])
# This is a specific set of axes
fig.xaxis.axis_label = 'Day'
fig.yaxis.axis_label = 'Inflammation'
fig.line(days, data.mean(axis=0), legend='Average', color='green')
fig.line(days, data.max(axis=0), legend='Max', color='blue')
fig.line(days, data.min(axis=0), legend='Min', color='red')
bplot.show(fig)
Explanation: Don't forget to put up your star!
Start with the Bokeh documentation and the Bokeh example above...
End of explanation
from bokeh.models import ColumnDataSource
from bokeh.io import push_notebook
# Start out with days vs average
initial_coordinates = {'x': days, 'y': data.mean(axis=0)}
source = ColumnDataSource(initial_coordinates)
# Define a callback to update the plot when we
# pick something else
def update_plot_statistic(statistic):
if statistic == "Average":
source.data['y'] = data.mean(axis=0)
elif statistic == "Max":
source.data['y'] = data.max(axis=0)
elif statistic == "Min":
source.data['y'] = data.min(axis=0)
push_notebook()
Explanation: Interactive plots
But really, this still isn't very interactive... let's try adding a drop-down to pick which function (average, max, min) that we want to use.
End of explanation
fig = bplot.figure()
days = np.arange(data.shape[1])
fig.xaxis.axis_label = 'Day'
fig.yaxis.axis_label = 'Inflammation'
fig.line(initial_coordinates['x'], initial_coordinates['y'], source=source)
bplot.show(fig)
Explanation: First off, note that we do not have to tell the function to import the global variable, we can just use it. Now, examine the function - this is callback that we will use to update the plot when we change a drop-down menu choice of function.
We create a source object, an instance of the ColumnDataSource class, and when this function is called, it updates the 'y' item in source.data. Inside the function, we check what has been picked (this is the argument to the function), and update the source "y" array on that basis. Finally, we push this change to the notebook.
End of explanation
from ipywidgets import interact
interact(update_plot_statistic, statistic=["Average", "Max", "Min"])
Explanation: Now we need a widget to use this...
End of explanation
# Use this cell for the plot
# And this for the one line `interact` call afterwards
Explanation: This is a very basic approach - the widget tool guesses you want a drop-down because you pass a list. Alternatively, you could pass a tuple with two floats as limits and get a slider back.
It seems you do have to output the image before the widgets, if you define the callback in Python, at least. However, there are a lot of other interactors - you can pass a JS callback (as a string), assign URLs to various objects in your plot, animate it, etc. etc. etc.
Final Challenge
Add a slider for variable N from 1 to 60 that reduces the dataset from all Patients to Patients 1 to N. Unless you're feeling particularly ambitious, you can replace the drop-down from the previous example with this slider.
End of explanation
# RMV
# Start out with days vs average
initial_coordinates = {'x': days, 'y': data.mean(axis=0)}
source = ColumnDataSource(initial_coordinates)
# Define a callback to update the plot when we
# pick something else
def update_plot(N):
    # Recompute the mean over only the first N patients;
    # x (the days) stays the same, so the column lengths still match
    source.data['y'] = data[:N].mean(axis=0)
    push_notebook()
fig = bplot.figure()
days = np.arange(data.shape[1])
fig.xaxis.axis_label = 'Day'
fig.yaxis.axis_label = 'Inflammation'
fig.line(initial_coordinates['x'], initial_coordinates['y'], source=source)
bplot.show(fig)
interact(update_plot, N=(1, 60, 1))
Explanation: Don't forget your star!
End of explanation |
7,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part-of-Speech Tagging using NLTK
One task in NLP has been to reliably identify a word's part of speech. This can help us with the ever-present task of identifying content words, but can be used in a variety of analyses. Part-of-speech tagging is a specific instance in the larger category of word tagging, or placing words in pre-determined categories.
Today we'll learn how to identify a word's part of speech and think through reasons we may want to do this.
Learning Goals
Step1: Now comes more complicated code. Stay with me. The above output is a list of tuples. A tuple is a sequence of Python objects. In this case, each of these tuples is a sequence of strings. To loop through tuples is intuitively the same as looping through a list, but slightly different syntax.
Note that this is not a list of lists, as we saw in our lesson on Pandas. This is a list of tuples.
Let's pull out the part-of-speech tag from each tuple above and save that to a list. Notice the order stays exactly the same.
Step2: Question
Step3: This sentence contains a lot of adjectives. So let's first look at the adjectives. Notice the syntax here.
Step4: Let's do the same for nouns.
Step5: And now verbs. | Python Code:
import nltk
from nltk import word_tokenize
sentence = "For me it has to do with the work that gets done at the crossroads of \
digital media and traditional humanistic study. And that happens in two different ways. \
On the one hand, it's bringing the tools and techniques of digital media to bear \
on traditional humanistic questions; on the other, it's also bringing humanistic modes \
of inquiry to bear on digital media."
sentence_tokens = word_tokenize(sentence)
#check we did everything correctly
sentence_tokens
#use the nltk pos function to tag the tokens
tagged_sentence_tokens = nltk.pos_tag(sentence_tokens)
#view tagged sentence
tagged_sentence_tokens
Explanation: Part-of-Speech Tagging using NLTK
One task in NLP has been to reliably identify a word's part of speech. This can help us with the ever-present task of identifying content words, but can be used in a variety of analyses. Part-of-speech tagging is a specific instance in the larger category of word tagging, or placing words in pre-determined categories.
Today we'll learn how to identify a word's part of speech and think through reasons we may want to do this.
Learning Goals:
Understand the intuition behind tagging and information extraction
Use NLTK to tag the part of speech of each word
Count most frequent words based on their part of speech
Outline
Part-of-Speech Tagging
Counting words based on their part of speech
Key Terms
part-of-speech tagging:
the process of marking up a word in a text as corresponding to a particular part of speech, based on both its definition and its context
named entity recognition:
a subtask of information extraction that seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc
tree
data structure made up of nodes or vertices and edges without having any cycle.
treebank:
a parsed text corpus that annotates syntactic or semantic sentence structure
tuple:
a sequence of immutable Python objects
Further Resources
For more information on information extraction using NLTK, see chapter 7: http://www.nltk.org/book/ch07.html
<a id='pos'></a>
Part-of-Speech Tagging
You may have noticed that stop words are typically short function words. Intuitively, if we could identify the part of speech of a word, we would have another way of identifying content words. NLTK can do that too!
NLTK has a function that will tag the part of speech of every token in a text. For this, we will re-create our original tokenized text sentence from the previous tutorial, with the stop words and punctuation intact.
NLTK uses the Penn Treebank Project to tag the part-of-speech of the words. The NLTK algoritm is deterministic - it assigns the most common part of speech for each word, as found in the Penn Treebank. You can find a list of all the part-of-speech tags here:
https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
End of explanation
word_tags = [tag for (word, tag) in tagged_sentence_tokens]
print(word_tags)
Explanation: Now comes more complicated code. Stay with me. The above output is a list of tuples. A tuple is a sequence of Python objects. In this case, each of these tuples is a sequence of strings. To loop through tuples is intuitively the same as looping through a list, but slightly different syntax.
Note that this is not a list of lists, as we saw in our lesson on Pandas. This is a list of tuples.
Let's pull out the part-of-speech tag from each tuple above and save that to a list. Notice the order stays exactly the same.
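A tiny standalone sketch of tuple unpacking, using a few made-up word/tag pairs:

```python
pairs = [("for", "IN"), ("me", "PRP"), ("work", "NN")]

# Unpack each tuple directly in the loop header
for word, tag in pairs:
    print(word, "->", tag)

# The same unpacking drives the list comprehension used below
tags = [tag for (word, tag) in pairs]
print(tags)  # ['IN', 'PRP', 'NN']
```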
End of explanation
tagged_frequency = nltk.FreqDist(word_tags)
tagged_frequency.most_common()
Explanation: Question: What is the difference in syntax for the above code compared to our standard list comprehension code?
<a id='counting'></a>
Counting words based on their part of speech
We can count the part-of-speech tags in a similar way we counted words, to output the most frequent types of words in our text. We can also count words based on their part of speech.
First, we count the frequency of each part-of-speech tag.
End of explanation
adjectives = [word for (word,pos) in tagged_sentence_tokens if pos == 'JJ' or pos=='JJR' or pos=='JJS']
#print all of the adjectives
print(adjectives)
Explanation: This sentence contains a lot of adjectives. So let's first look at the adjectives. Notice the syntax here.
End of explanation
nouns = [word for (word,pos) in tagged_sentence_tokens if pos=='NN' or pos=='NNS']
#print all of the nouns
print(nouns)
Explanation: Let's do the same for nouns.
End of explanation
#verbs = [word for (word,pos) in tagged_sentence_tokens if pos == 'VB' or pos=='VBD' or pos=='VBG' or pos=='VBN' or pos=='VBP' or pos=='VBZ']
verbs = [word for (word,pos) in tagged_sentence_tokens if pos in ['VB', 'VBD','VBG','VBN','VBP','VBZ']]
#print all of the verbs
print(verbs)
##Ex: Print the most frequent nouns, adjective, and verbs in the sentence
######What does this tell us?
######Compare this to what we did earlier with removing stop words.
##Ex: Compare the most frequent part-of-speech used in two of the texts in our data folder
Explanation: And now verbs.
End of explanation |
7,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Controlling gradient recording
The automatic differentiation guide showed how to control which variables and tensors the tape watches while building the gradient calculation.
The tape also has methods to manipulate the recording.
If you wish to stop recording gradients, you can use GradientTape.stop_recording() to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. This could include calculating a metric or an intermediate result.
Step3: If you wish to start over entirely, use reset(). Simply exiting the gradient tape block and restarting is usually easier to read, but you can use reset when exiting the tape block is difficult or impossible.
Step4: Stop gradient
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself.
Step5: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include:
There is no defined gradient for a new op you are writing.
The default calculations are numerically unstable.
You wish to cache an expensive computation from the forward pass.
You want to modify a value without modifying the gradient (for example
Step6: See the tf.custom_gradient decorator for more details.
Multiple tapes
Multiple tapes interact seamlessly. For example, here each tape watches a different set of tensors.
Step7: Higher-order gradients
Operations inside the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
Step8: While that does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian, see the Hessian example under the Jacobian section.
"Nested calls to GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient, and then the resulting scalar acts as a source for a second gradient calculation, as in the following example.
Example
Step9: Jacobians
All the previous examples took the gradients of a scalar target with respect to some source tensor(s).
The Jacobian matrix represents the gradients of a vector-valued function. Each row contains the gradient of one of the vector's elements.
The GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note
Step10: When you take the Jacobian with respect to a scalar, the result has the shape of the target, and gives the gradient of each element with respect to the source.
Step11: Tensor source
Whether the input is a scalar or a tensor, GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target(s).
For example, the output of this layer has a shape of (10, 7).
Step12: And the layer's kernel's shape is (5, 10).
Step13: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together.
Step14: If you sum over the target's dimensions, you're left with the gradient of the sum that would have been calculated by GradientTape.gradient.
Step15: <a id="hessian"> </a>
Example
Step16: To use this Hessian for a Newton's method step, you would first flatten out its axes into a matrix, and flatten out the gradient into a vector.
Step17: The Hessian matrix should be symmetric.
Step18: The Newton's method update step is shown below.
Step19: Note
Step20: While this is relatively simple for a single tf.Variable, applying this to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want the Jacobian of each of a stack of targets with respect to a stack of sources, where the Jacobians for each target-source pair are independent.
For example, here the input x has shape (batch, ins) and the output y has shape (batch, outs).
Step21: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs).
Step22: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix.
Step23: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum.
Step24: It would be much more efficient to do the calculation without the extra dimension in the first place. The GradientTape.batch_jacobian method does exactly that.
Step25: Caution
Step26: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 6)
Explanation: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/advanced_autodiff"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/advanced_autodiff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/advanced_autodiff.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/advanced_autodiff.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
The automatic differentiation guide includes everything required to calculate gradients. This guide focuses on the deeper, less common features of the tf.GradientTape API.
Setup
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
x_sq = x * x
with t.stop_recording():
y_sq = y * y
z = x_sq + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Controlling gradient recording
The automatic differentiation guide showed how to control which variables and tensors the tape watches while building the gradient calculation.
The tape also has methods to manipulate the recording.
If you wish to stop recording gradients, you can use GradientTape.stop_recording() to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. This could include calculating a metric or an intermediate result.
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
reset = True
with tf.GradientTape() as t:
y_sq = y * y
if reset:
# Throw out all the tape recorded so far
t.reset()
z = x * x + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: If you wish to start over entirely, use reset(). Simply exiting the gradient tape block and restarting is usually easier to read, but you can use reset when exiting the tape block is difficult or impossible.
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
y_sq = y**2
z = x**2 + tf.stop_gradient(y_sq)
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Stop gradient
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself.
End of explanation
# Establish an identity operation, but clip during the gradient pass
@tf.custom_gradient
def clip_gradients(y):
def backward(dy):
return tf.clip_by_norm(dy, 0.5)
return y, backward
v = tf.Variable(2.0)
with tf.GradientTape() as t:
output = clip_gradients(v * v)
print(t.gradient(output, v)) # calls "backward", which clips 4 to 2
Explanation: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include:
There is no defined gradient for a new op you are writing.
The default calculations are numerically unstable.
You wish to cache an expensive computation from the forward pass.
You want to modify a value (for example, using tf.clip_by_value or tf.math.round) without modifying the gradient.
To write a new op, you can use tf.RegisterGradient to set your own. See that page for details. (Note that the gradient registry is global, so change it with caution.)
For the latter three cases, you can use tf.custom_gradient.
Here is an example that applies tf.clip_by_norm to the intermediate gradient.
End of explanation
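The numerically-unstable case in the list above can also be illustrated with the classic log(1 + exp(x)) function. This sketch is an illustrative assumption (the function name log1pexp and the stable-gradient rewrite are not part of this guide's code): the default gradient would compute exp(x)/(1 + exp(x)), which becomes inf/inf = nan for large x, while the custom gradient stays finite.

```python
import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
    # Forward pass; `e` is cached in the closure and reused in backward.
    e = tf.exp(x)
    def grad(upstream):
        # Algebraically equal to e / (1 + e), but avoids inf / inf for large x.
        return upstream * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

x = tf.constant(100.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = log1pexp(x)
g = tape.gradient(y, x)
print(g.numpy())  # 1.0 rather than nan
```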
x0 = tf.constant(0.0)
x1 = tf.constant(0.0)
with tf.GradientTape() as tape0, tf.GradientTape() as tape1:
tape0.watch(x0)
tape1.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.sigmoid(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
tape0.gradient(ys, x0).numpy() # cos(x) => 1.0
tape1.gradient(ys, x1).numpy() # sigmoid(x1)*(1-sigmoid(x1)) => 0.25
Explanation: Refer to the tf.custom_gradient decorator for more details.
Multiple tapes
Multiple tapes interact seamlessly. For example, here each tape watches a different set of tensors.
End of explanation
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
y = x * x * x
# Compute the gradient inside the outer `t2` context manager
# which means the gradient computation is differentiable as well.
dy_dx = t1.gradient(y, x)
d2y_dx2 = t2.gradient(dy_dx, x)
print('dy_dx:', dy_dx.numpy()) # 3 * x**2 => 3.0
print('d2y_dx2:', d2y_dx2.numpy()) # 6 * x => 6.0
Explanation: Higher-order gradients
Operations inside the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape() as t2:
# The inner tape only takes the gradient with respect to the input,
# not the variables.
with tf.GradientTape(watch_accessed_variables=False) as t1:
t1.watch(x)
y = layer(x)
out = tf.reduce_sum(layer(x)**2)
# 1. Calculate the input gradient.
g1 = t1.gradient(out, x)
# 2. Calculate the magnitude of the input gradient.
g1_mag = tf.norm(g1)
# 3. Calculate the gradient of the magnitude with respect to the model.
dg1_mag = t2.gradient(g1_mag, layer.trainable_variables)
[var.shape for var in dg1_mag]
Explanation: While that does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian, refer to the Hessian example in the Jacobian section.
"Nested calls to GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient: the resulting scalar then acts as a source for a second gradient calculation, as in the following example.
Example: Input gradient regularization
Many models are susceptible to "adversarial examples". This collection of techniques modifies the model's input to confuse the model's output. The simplest implementation takes a single step along the gradient of the output with respect to the input: the "input gradient".
One technique to increase robustness to adversarial examples is input gradient regularization, which attempts to minimize the magnitude of the input gradient. If the input gradient is small, then the change in the output should be small too.
Below is a naive implementation of input gradient regularization. The implementation:
Uses an inner tape to calculate the gradient of the output with respect to the input.
Calculates the magnitude of that input gradient.
Calculates the gradient of that magnitude with respect to the model.
End of explanation
x = tf.linspace(-10.0, 10.0, 200+1)
delta = tf.Variable(0.0)
with tf.GradientTape() as tape:
y = tf.nn.sigmoid(x+delta)
dy_dx = tape.jacobian(y, delta)
Explanation: Jacobians
All the previous examples took the gradients of a scalar target with respect to some source tensor(s).
A Jacobian matrix represents the gradients of a vector-valued function. Each row contains the gradient of one of the vector's elements.
The GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note that:
Like gradient: the sources argument can be a tensor or a container of tensors.
Unlike gradient: the target tensor must be a single tensor.
Scalar source
The first example is the Jacobian of a vector-target with respect to a scalar-source.
End of explanation
print(y.shape)
print(dy_dx.shape)
plt.plot(x.numpy(), y, label='y')
plt.plot(x.numpy(), dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
Explanation: When you take the Jacobian with respect to a scalar, the result has the shape of the target, and gives the gradient of each element with respect to the source.
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape(persistent=True) as tape:
y = layer(x)
y.shape
Explanation: Tensor source
Whether the input is a scalar or a tensor, GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target(s).
For example, the output of this layer has a shape of (7, 10):
End of explanation
layer.kernel.shape
Explanation: The shape of the layer's kernel is (5, 10):
End of explanation
j = tape.jacobian(y, layer.kernel)
j.shape
Explanation: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together:
End of explanation
g = tape.gradient(y, layer.kernel)
print('g.shape:', g.shape)
j_sum = tf.reduce_sum(j, axis=[0, 1])
delta = tf.reduce_max(abs(g - j_sum)).numpy()
assert delta < 1e-3
print('delta:', delta)
Explanation: If you sum over the target's dimensions, you're left with the gradient of the sum that would have been calculated by GradientTape.gradient:
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.relu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.relu)
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
x = layer1(x)
x = layer2(x)
loss = tf.reduce_mean(x**2)
g = t1.gradient(loss, layer1.kernel)
h = t2.jacobian(g, layer1.kernel)
print(f'layer.kernel.shape: {layer1.kernel.shape}')
print(f'h.shape: {h.shape}')
Explanation: <a id="hessian"> </a>
Example: Hessian
While tf.GradientTape doesn't provide an explicit method for constructing a Hessian matrix, it's possible to build one using the GradientTape.jacobian method.
Note: The Hessian matrix contains N**2 parameters. For this and other reasons, it is not practical for most models. This example is included mainly as a demonstration of how to use the GradientTape.jacobian method, and is not an endorsement of direct Hessian-based optimization. A Hessian-vector product can be calculated efficiently with nested tapes, and is a much more efficient approach to second-order optimization.
End of explanation
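The Hessian-vector product mentioned in the note above can be sketched with nested tapes. The cubic toy loss and the direction vector here are illustrative assumptions, not part of the guide:

```python
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
v = tf.constant([1.0, 0.0])  # direction to multiply the Hessian by

with tf.GradientTape() as t2:
    with tf.GradientTape() as t1:
        y = tf.reduce_sum(x ** 3)   # toy scalar loss
    g = t1.gradient(y, x)           # dy/dx = 3 * x**2
    gv = tf.reduce_sum(g * v)       # scalar g . v, recorded by the outer tape
hvp = t2.gradient(gv, x)            # H @ v, without ever materializing H

print(hvp.numpy())  # Hessian is diag(6 * x), so H @ v = [6., 0.]
```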
n_params = tf.reduce_prod(layer1.kernel.shape)
g_vec = tf.reshape(g, [n_params, 1])
h_mat = tf.reshape(h, [n_params, n_params])
Explanation: To use this Hessian for a Newton's method step, you would first flatten out its axes into a matrix, and flatten out the gradient into a vector:
End of explanation
def imshow_zero_center(image, **kwargs):
lim = tf.reduce_max(abs(image))
plt.imshow(image, vmin=-lim, vmax=lim, cmap='seismic', **kwargs)
plt.colorbar()
imshow_zero_center(h_mat)
Explanation: The Hessian matrix should be symmetric:
End of explanation
eps = 1e-3
eye_eps = tf.eye(h_mat.shape[0])*eps
Explanation: The Newton's method update step is shown below:
End of explanation
# X(k+1) = X(k) - (∇²f(X(k)))^-1 @ ∇f(X(k))
# h_mat = ∇²f(X(k))
# g_vec = ∇f(X(k))
update = tf.linalg.solve(h_mat + eye_eps, g_vec)
# Reshape the update and apply it to the variable.
_ = layer1.kernel.assign_sub(tf.reshape(update, layer1.kernel.shape))
Explanation: Note: Don't actually invert the matrix.
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = layer2(y)
y.shape
Explanation: While this is relatively simple for a single tf.Variable, applying it to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want to take the Jacobian of each item in a stack of targets with respect to a stack of sources, where the Jacobians for each target-source pair are independent.
For example, here the input x is shaped (batch, ins) and the output y is shaped (batch, outs):
End of explanation
j = tape.jacobian(y, x)
j.shape
Explanation: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs):
End of explanation
imshow_zero_center(j[:, 0, :, 0])
_ = plt.title('A (batch, batch) slice')
def plot_as_patches(j):
# Reorder axes so the diagonals will each form a contiguous patch.
j = tf.transpose(j, [1, 0, 3, 2])
# Pad in between each patch.
lim = tf.reduce_max(abs(j))
j = tf.pad(j, [[0, 0], [1, 1], [0, 0], [1, 1]],
constant_values=-lim)
# Reshape to form a single image.
s = j.shape
j = tf.reshape(j, [s[0]*s[1], s[2]*s[3]])
imshow_zero_center(j, extent=[-0.5, s[2]-0.5, s[0]-0.5, -0.5])
plot_as_patches(j)
_ = plt.title('All (batch, batch) slices are diagonal')
Explanation: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix:
End of explanation
j_sum = tf.reduce_sum(j, axis=2)
print(j_sum.shape)
j_select = tf.einsum('bxby->bxy', j)
print(j_select.shape)
Explanation: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum:
End of explanation
jb = tape.batch_jacobian(y, x)
jb.shape
error = tf.reduce_max(abs(jb - j_sum))
assert error < 1e-3
print(error.numpy())
Explanation: It would be much more efficient to do the calculation without the extra dimension in the first place. The GradientTape.batch_jacobian method does exactly that:
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
bn = tf.keras.layers.BatchNormalization()
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = bn(y, training=True)
y = layer2(y)
j = tape.jacobian(y, x)
print(f'j.shape: {j.shape}')
plot_as_patches(j)
_ = plt.title('These slices are not diagonal')
_ = plt.xlabel("Don't use `batch_jacobian`")
Explanation: Caution: GradientTape.batch_jacobian only verifies that the first dimensions of the source and target match. It doesn't check that the gradients are actually independent. It's up to the user to make sure they only use batch_jacobian where it makes sense. For example, adding a layers.BatchNormalization destroys the independence, since it normalizes across the batch dimension:
End of explanation
jb = tape.batch_jacobian(y, x)
print(f'jb.shape: {jb.shape}')
Explanation: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning:
End of explanation |
7,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quicksort
Summary
| Performance | Complexity |
|-----------------------------|------------------|
|Worst-case | $O(n^2)$ |
|Best-case | $O(n\log{n})$ |
|Average | $O(n\log{n})$ |
|Worst-case space | $O(n)$ |
Algorithm
Pick a pivot point, and move all the items in the list greater than the pivot to the right, and all items less than the pivot to the left. Afterwards, quicksort is recursively applied to the sublists.
The partition function works by moving everything less than the pivot point to the left and everything greater than the pivot point to the right. This can be accomplished by keeping two indices starting from both sides of the array. The left index is incremented until it finds a value greater than the pivot point, and the right index is decremented until it finds a value less than the pivot point.
Intuition
Given an unsorted list $u$
Step1: To see the code in action (arrow denotes pivot point, highlighted cells indicate values to be swapped)
Step2: As can be seen from the diagram above, everything to the left of the pivot will be less than the pivot, and everything to the right will be greater than the pivot. We can now recursively apply the partition to those values
Step3: Finally, we can see the code in action | Python Code:
def partition(lst, start, end):
# in this formulation, the pivot point is the first item
pivot = lst[start]
# start partitioning after the pivot point
first = start + 1
last = end
# keep going until we covered the entire list
while first <= last:
# find the next element that is less than pivot
while first <= last and lst[first] <= pivot:
first += 1
# find the next element that is greater than pivot
while last >= first and lst[last] > pivot:
last -= 1
# and swap their values
if first < last:
lst[first], lst[last] = lst[last], lst[first]
# finally, swap the pivot point with the last point
lst[start], lst[last] = lst[last], lst[start]
return last
Explanation: Quicksort
Summary
| Performance | Complexity |
|-----------------------------|------------------|
|Worst-case | $O(n^2)$ |
|Best-case | $O(n\log{n})$ |
|Average | $O(n\log{n})$ |
|Worst-case space | $O(n)$ |
Algorithm
Pick a pivot point, and move all the items in the list greater than the pivot to the right, and all items less than the pivot to the left. Afterwards, quicksort is recursively applied to the sublists.
The partition function works by moving everything less than the pivot point to the left and everything greater than the pivot point to the right. This can be accomplished by keeping two indices starting from both sides of the array. The left index is incremented until it finds a value greater than the pivot point, and the right index is decremented until it finds a value less than the pivot point.
Intuition
Given an unsorted list $u$:
$$ u = [5, 3, 10, 7, 15, 1, 4, 2] $$
pick a pivot $p$ from $u$. For this example we'll use the first item in $u$ ($p = 5$). Create two new lists: $l$ and $r$, where $l$ consists of all elements less than $p$ and $r$ consists of all elements greater than $p$:
$$ l = [3, 1, 4, 2] $$
$$ r = [10, 7, 15] $$
We can construct an ordered list with: qsort(l) + p + qsort(r). The left side qsort(l) will again result in two lists:
$$ p = 3 $$
$$ l = [1, 2] $$
$$ r = [4] $$
putting these together we get: [1, 2] + [3] + [4] $\rightarrow$ [1, 2, 3, 4]. Evaluating the right side we get:
$$ p = 10 $$
$$ l = [7] $$
$$ r = [15] $$
putting these together we get: [7] + [10] + [15] $\rightarrow$ [7, 10, 15]. Going up one level we now have: [1, 2, 3, 4] + [5] + [7, 10, 15] $\rightarrow$ [1, 2, 3, 4, 5, 7, 10, 15], which results in the sorted list.
However, constructing the list in this manner is not efficient in space. Instead we can create a partition function that functionally does the same thing, but instead sorts the elements in place.
End of explanation
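The intuition above (qsort(l) + p + qsort(r)) can also be written directly as a functional, non-in-place quicksort. This version is only a sketch of the list-building idea; unlike the partition approach, it allocates new lists on every call:

```python
def qsort(u):
    # Base case: a list of zero or one items is already sorted.
    if len(u) <= 1:
        return u
    p, rest = u[0], u[1:]            # pivot is the first item, as above
    l = [x for x in rest if x <= p]  # everything less than (or equal to) p
    r = [x for x in rest if x > p]   # everything greater than p
    return qsort(l) + [p] + qsort(r)

print(qsort([5, 3, 10, 7, 15, 1, 4, 2]))  # [1, 2, 3, 4, 5, 7, 10, 15]
```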
reload(quicksort)
contents = [5,3,10,7,15,1,4,2] #np.random.randint(100, size=20)
partition_example(contents, 0)
Explanation: To see the code in action (arrow denotes pivot point, highlighted cells indicate values to be swapped):
End of explanation
def quicksort(lst, first=None, last=None):
    if first is None: first = 0
    if last is None: last = len(lst) - 1
if first < last:
# sort list so everything less than `pivot` is to the left of the pivot
# and everything to the right is greater than the `pivot`
p = partition(lst, first, last)
# recursively apply `quicksort` to the items on the left of the pivot
quicksort(lst, first, p-1)
# recursively apply `quicksort` to the items on the right of the pivot
quicksort(lst, p+1, last)
lst = [19, 92, 97, 34, 70, 26, 51, 97, 1, 42, 79, 34]
quicksort(lst)
print("sorted: ", lst)
Explanation: As can be seen from the diagram above, everything to the left of the pivot will be less than the pivot, and everything to the right will be greater than the pivot. We can now recursively apply the partition to those values:
End of explanation
reload(quicksort)
quicksort_example([19, 92, 97, 34, 70, 26, 51, 97, 1, 42, 79, 34])
def quicksort(myList, start, end):
if start < end:
# partition the list
pivot = partition(myList, start, end)
# sort both halves
quicksort(myList, start, pivot-1)
quicksort(myList, pivot+1, end)
return myList
def partition(myList, start, end):
pivot = myList[start]
left = start+1
right = end
done = False
while not done:
while left <= right and myList[left] <= pivot:
left = left + 1
while myList[right] >= pivot and right >=left:
right = right -1
if right < left:
done= True
else:
# swap places
temp=myList[left]
myList[left]=myList[right]
myList[right]=temp
# swap start with myList[right]
temp=myList[start]
myList[start]=myList[right]
myList[right]=temp
return right
lst = [19, 92, 97, 34, 70, 26, 51, 97, 1, 42, 79, 34]
quicksort(lst, 0, len(lst)-1)
print(lst)
Explanation: Finally, we can see the code in action:
End of explanation |
7,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python design patterns
Covering only a small portion of what exists.
import this
creating main functions
Take a look at this guide
Step1: Explicit is better than implicit
Step2: Simple is better than complex
Step3: Take a look at the main example in the folder
Below the main is imported and used. On the command line it is also use. | Python Code:
import this as t
print(t)
Explanation: Python design patterns
Covering only a small portion of what exists.
import this
creating main functions
Take a look at this guide: http://docs.python-guide.org/en/latest/writing/style/
Take a look at this "Zen of Python" by example: http://artifex.org/~hblanks/talks/2011/pep20_by_example.html
End of explanation
fruits = ['apple', 'pear', 'cranberry']
# Good
for fruit in fruits:
print(fruit)
# Bad
for i in fruits:
print(i)
# okay
for index in range(10):
print(index)
# Questionable
for i in range(10):
for j in range(11):
for k in range(11):
pass
Explanation: Explicit is better than implicit
End of explanation
class Person:
def __init__(self, name):
self.name = name
self.is_hungry = True
self.is_tired = False
def __str__(self):
return '{name} {id}'.format(
name=self.name,
id=id(self)
)
def fed(self, big_meal=False):
self.is_hungry = False
if big_meal:
self.is_tired = True
def sleep(self):
self.is_hungry = True
self.is_tired = False
bill = Person('Bill')
sammy = Person('Sammy')
jim = Person('Jim')
sue = Person('Sue')
persons = [bill, sammy, jim, sue]
jim.fed(True)
sue.fed(True)
# Bad check -- because it's complex and hides logic
if persons[0].is_hungry and persons[1].is_hungry and persons[2].is_tired and persons[3].is_tired:
for person in persons:
print('I have a person', person)
print()
print()
# Better check -- it doesn't hide meaning, but is verbose
def all_are_hungry(persons):
for person in persons:
if not person.is_hungry:
return False
return True
def all_are_tired(persons):
for person in persons:
if not person.is_tired:
return False
return True
hungry_people_group = persons[:2]
tired_people = persons[2:]
if all_are_hungry(hungry_people_group) and all_are_tired(tired_people):
for person in persons:
print('I have a person', person)
print()
print()
# Good/Best check -- doesn't hide meaning, and isn't verbose
has_hungry = all([person.is_hungry for person in persons[:2]])
has_tired = all([person.is_tired for person in persons[2:]])
if has_hungry and has_tired:
for person in persons:
print('I have a person', person)
print(range(10))
Explanation: Simple is better than complex
End of explanation
import echo
echo.main('I am testing my main function')
import exponent
_ = exponent.main(2, 5)
_ = exponent.main(2, 5, quiet=True)
_ = exponent.main(2, 5, verbose=True)
_ = exponent.main(2, 5, quiet=True, verbose=True)
Explanation: Take a look at the main example in the folder
Below the main is imported and used. On the command line it is also used.
End of explanation |
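The echo.py and exponent.py files referenced above are not shown here, but a module like exponent.py could plausibly follow the standard importable-main pattern sketched below. The argument names and flags are assumptions inferred from how main is called above, not the actual file contents:

```python
import argparse

def main(base, power, quiet=False, verbose=False):
    # Importable entry point: callable from other code (as in the notebook)
    # and from the command line via the __main__ guard below.
    result = base ** power
    if quiet:
        pass  # suppress output; --quiet wins over --verbose
    elif verbose:
        print('{0} ** {1} = {2}'.format(base, power, result))
    else:
        print(result)
    return result

def cli(argv=None):
    parser = argparse.ArgumentParser(description='Raise a base to a power.')
    parser.add_argument('base', type=int)
    parser.add_argument('power', type=int)
    parser.add_argument('-q', '--quiet', action='store_true')
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args(argv)
    return main(args.base, args.power, quiet=args.quiet, verbose=args.verbose)

if __name__ == '__main__':
    # Simulated argv for demonstration; a real script would call cli() with
    # no argument so argparse reads sys.argv instead.
    cli(['2', '5', '--verbose'])
```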
7,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 1
Imports
Step3: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic
Step5: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
Step7: Write a function sort_word_counts that returns a list of sorted word counts
Step8: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt
Step9: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research... | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
Explanation: Algorithms Exercise 1
Imports
End of explanation
things = "hello!"
def ispuct(char, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
return (not (char in punctuation))
#x = list(filter(ispuct, things))
#a = ''
#a.join(x)
#print(new_things)
def tokenize(s, stop_words = '', punctuation='`~!@#$%^&*()_+={[}]|\:;"<,>.?/}\t'):
m = []
s = s.replace("-", " ")
stop = stop_words
def is_stop(word, stop_words = stop):
return not (word in stop_words)
def is_space(word, space = ['']):
return not (word in space)
for line in s.splitlines():
raw = line.lower().split(' ' or '.')
y = list()
for w in raw:
x = list(filter(ispuct, w))
            y.append(''.join(x))
words = list(filter(is_space, y))
words = list(filter(is_stop, words))
m += words
return m
tokenize("This, is the way; that things will hi--end", stop_words = 'is the')
#ispuct('!')
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
tokenize(wasteland, stop_words='is the of and')
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """
APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
#tokenize(wasteland, stop_words='is the of and')
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
Explanation: Word counting
Write a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurences of the words in the list.
If stop_words is a space delimeted string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
def count_words(data):
    """Return a word count dictionary from the list of words in data."""
dictionary = {}
for n in data:
dictionary[n]= data.count(n)
return dictionary
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
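For comparison, the same dictionary can be built in a single pass with collections.Counter, avoiding the repeated list.count scans in the loop above. This alternative is a sketch, not part of the exercise solution:

```python
from collections import Counter

def count_words_fast(data):
    # Counter tallies all the words in one O(n) pass.
    return dict(Counter(data))

print(count_words_fast(['this', 'and', 'the', 'this', 'from', 'and', 'a', 'a', 'a']))
```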
def sort_word_counts(wc):
    """Return a list of 2-tuples of (word, count), sorted by count descending."""
l = [(i,wc[i]) for i in wc]
return sorted(l, key = lambda x:x[1], reverse = True)
print(sort_word_counts(count_words(tokenize('this and a the this this and a a a'))))
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
Explanation: Write a function sort_word_counts that returns a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the higest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
txt = open('mobydick_chapter1.txt', 'r')
x = txt.read()
swc = sort_word_counts(count_words(tokenize(s = x, stop_words = ['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a'])))
string = ''
x = (tokenize(s = x, stop_words = ['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a']))
for things in x:
string = string + things + " "
print(len(swc))
#print(swc)
print(string)
punchfactor = 4
assert swc[0]==('i',43)
assert len(swc)==848 - punchfactor #4 is the punchfactor, ranked out of 4
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, the sort and save the result in a variable named swc.
End of explanation
x = np.array(swc)
plt.plot(x[0:50,1], range(50),'o')
assert True # use this for grading the dotplot
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation |
7,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embedded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-ESM2-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
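ENUM properties such as this one accept only the listed choices, plus the "Other: [Please specify]" escape hatch that most ES-DOC ENUMs include. A minimal local check of a candidate answer before passing it to `DOC.set_value(...)` could look like the following (a hypothetical helper, not part of the pyesdoc API):

```python
# Hypothetical helper (not part of pyesdoc): check a candidate answer
# against an ENUM property's valid choices before calling DOC.set_value().
def is_valid_choice(value, valid_choices):
    if value in valid_choices:
        return True
    # Most ES-DOC ENUMs allow a free-text "Other: ..." entry.
    allows_other = any(c.startswith("Other:") for c in valid_choices)
    return allows_other and value.startswith("Other:")

# Valid choices for 1.3 Model Family, copied from the cell above.
model_family_choices = [
    "OGCM",
    "slab ocean",
    "mixed layer ocean",
    "Other: [Please specify]",
]

print(is_valid_choice("OGCM", model_family_choices))                # True
print(is_valid_choice("Other: hybrid slab", model_family_choices))  # True
print(is_valid_choice("atmosphere", model_family_choices))          # False
```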
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
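The Cardinality codes used throughout this notebook (1.1, 1.N, 0.1, 0.N) encode how many values a property requires: the first digit is the minimum number of values. A sketch of that rule, as a hypothetical helper rather than pyesdoc behaviour:

```python
# Hypothetical sketch (not pyesdoc API): the leading digit of a cardinality
# code gives the minimum number of values a property must receive.
def satisfies_cardinality(values, cardinality):
    min_n = int(cardinality.split(".")[0])  # "1.N" -> 1, "0.N" -> 0
    return len(values) >= min_n

print(satisfies_cardinality(["Potential temperature", "Salinity", "SSH"], "1.N"))  # True
print(satisfies_cardinality([], "1.N"))  # False
print(satisfies_cardinality([], "0.N"))  # True
```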
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
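For orientation only: one classic approximation for the freezing point of seawater is the UNESCO formula after Millero (1978), which is not necessarily the TEOS-2010 formulation that this property records. The property above should document the formulation the model actually uses.

```python
# Freezing point of seawater (deg C) as a function of practical salinity S
# and pressure p (dbar), after Millero (1978) / UNESCO. Shown for
# orientation only; not necessarily the formulation selected above.
def freezing_point(S, p=0.0):
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p)

print(round(freezing_point(35.0), 2))  # about -1.92 deg C at the surface
```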
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
7,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial Value Problems
A paper by Jones and Underwood suggests a model for the temperature behaviour $T(t)$ of a PV cell in terms of a nonlinear differential equation. Here we extract the key features as
\begin{equation}
\frac{\text{d}T}{\text{d}t} = f(t, T) = c_{1} \left( c_{2} T_{\text{ambient}}^4 - T^4 \right) + c_{3} - \frac{c_4}{T} - c_5 ( T - T_{\text{ambient}} ),
\end{equation}
where the various $c_{1, \dots, 5}$ are constant parameters, and the cell is assumed to relax back to the ambient temperature fast enough to treat $T_{\text{ambient}}$ as a constant as well.
If we're given the values of the parameters together with a temperature value at time $t=0$, we can solve this initial value problem numerically.
Solution by integration
We've solved lots of problems by integration already. A good scientist is a lazy scientist, so we can try to solve this one by integration as well.
Assume we know the solution at $t_j$ and want to compute the solution at $t_{j+1} = t_j + \Delta t$. We write
\begin{equation}
\int_{t_j}^{t_{j+1}} \text{d}t \, \frac{\text{d}T}{\text{d}t} = T \left( t_{j+1} \right) - T \left( t_{j} \right).
\end{equation}
Using the differential equation we therefore get
\begin{equation}
T \left( t_{j+1} \right) = T \left( t_{j} \right) + \int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T).
\end{equation}
If we can solve the integral, we can move from the solution at $t_j$ to the solution at $t_{j+1}$.
The simplest solution of the integral was the Riemann integral approximation. The width of the interval is $t_{j+1} - t_j = \Delta t$. We know the value of $T(t_j)$. Therefore we can approximate
\begin{equation}
\int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T) \approx \Delta t \, \, f \left( t_j, T(t_j) \right),
\end{equation}
leading to Euler's method
\begin{equation}
T \left( t_{j+1} \right) = T \left( t_{j} \right) + \Delta t \, \, f \left( t_j, T(t_j) \right),
\end{equation}
which in more compact notation is
\begin{equation}
T_{j+1} = T_{j} + \Delta t \, \, f_j.
\end{equation}
Euler's method
Let's implement this where the ambient temperature is $290$K, the $c$ parameters are
\begin{align}
c_1 &= 10^{-5} \\
c_2 &= 0.9 \\
c_3 &= 0 \\
c_4 &= 10^{-2} \\
c_5 &= 1
\end{align}
and $T(0) = 300$K. We'll solve up to $t=10^{-2}$ hours (it relaxes very fast!).
Note
Step1: As with all integration problems, we expect accuracy (and computation time!) to increase as we increase the number of steps. Euler's method, like the Riemann integral on which it's built, is first order.
Exercise
Try modifying the number of steps. Plot your solutions to check the solution remains reasonable. What happens when the number of steps is very small?
Solution by differentiation
A different way of thinking about Euler's method shows explicitly that it's first order. Take the original differential equation
\begin{equation}
\frac{\text{d}T}{\text{d}t} = f(t, T).
\end{equation}
We can directly replace the derivative by using finite differencing. By using Taylor expansion we have
\begin{align}
T \left( t_{j+1} \right) &= T \left( t_j \right) + \left( t_{j+1} - t_{j} \right) \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( t_{j+1} - t_{j} \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \\
&= T \left( t_j \right) + \Delta t \, \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{align}
By re-arranging we get
\begin{equation}
\left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} = \frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{equation}
This is the forward difference approximation to the first derivative.
By evaluating the original differential equation at $t=t_j$ we get
\begin{equation}
\frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots = f \left( t_j, T(t_j) \right).
\end{equation}
This shows that the difference between this approximation from the finite differencing, and the original differential equation, goes as $(\Delta t)^1$ - it is first order. This approximation can be re-arranged to give
\begin{equation}
T_{j+1} = T_j + \Delta t \, f_j + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{equation}
By ignoring the higher order terms, we see that this is just Euler's method again.
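The first-order claim can be checked numerically with a self-convergence test. The sketch below (not part of the original notebook) applies Euler's method to the simple model problem $\text{d}T/\text{d}t = -T$, $T(0)=1$, whose exact solution is $e^{-t}$; halving the step size should roughly halve the error:

```python
import numpy

def euler_solve(n_steps, t_end=1.0):
    # Euler's method for dT/dt = -T with T(0) = 1
    dt = t_end / n_steps
    T = 1.0
    for _ in range(n_steps):
        T = T + dt * (-T)   # Euler step with f(t, T) = -T
    return T

exact = numpy.exp(-1.0)
errors = [abs(euler_solve(n) - exact) for n in (100, 200, 400)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(ratios)  # each ratio should be close to 2 for a first-order method
```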
Runge-Kutta methods
We can now imagine how to get higher order methods for IVPs
Step2: The solution looks pretty much identical to that from Euler's method, as this problem is well behaved. In general, the benefits of higher order methods (RK4 is pretty standard) massively outweigh the slight additional effort in implementing them.
A system of IVPs
Of course, a PV cell is not one component with one temperature, but different materials coupled together. Let's assume it's made of three components, as in the Jones and Underwood paper
Step3: Exercise
Check that you get similar results using RK2. Try RK4 as well.
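For the RK4 part of the exercise, one classical fourth-order scheme can be sketched as follows (assuming the same `f(t, T, parameters)` signature and `parameters` dictionary used by the notebook's `euler_step` and `rk2_step`):

```python
def rk4_step(f, t, T, dt, parameters):
    # Classical fourth-order Runge-Kutta step: four slope estimates,
    # combined with weights 1/6, 1/3, 1/3, 1/6.
    k1 = dt * f(t, T, parameters)
    k2 = dt * f(t + 0.5 * dt, T + 0.5 * k1, parameters)
    k3 = dt * f(t + 0.5 * dt, T + 0.5 * k2, parameters)
    k4 = dt * f(t + dt, T + k3, parameters)
    return T + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

It drops into the same time-stepping loop as `euler_step`, with no other changes.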
Stochastic case
This is quite a bit more complex
Step4: In a fluctuating problem like this, a single simulation doesn't tell you very much. Instead we should perform many simulations and average the result. Let's run this 1000 times | Python Code:
from __future__ import division
import numpy
%matplotlib notebook
from matplotlib import pyplot
parameters = { "T_ambient" : 290.0,
"c1" : 1.0e-5,
"c2" : 0.9,
"c3" : 0.0,
"c4" : 1.0e-2,
"c5" : 1.0}
T_initial = 300.0
t_end = 1e-2
def f(t, T, parameters):
T_ambient = parameters["T_ambient"]
c1 = parameters["c1"]
c2 = parameters["c2"]
c3 = parameters["c3"]
c4 = parameters["c4"]
c5 = parameters["c5"]
return c1 * (c2 * T_ambient**4 - T**4) + c3 - c4 / T - c5 * (T - T_ambient)
def euler_step(f, t, T, dt, parameters):
return T + dt * f(t, T, parameters)
Nsteps = 100
T = numpy.zeros((Nsteps+1,))
T[0] = T_initial
dt = t_end / Nsteps
t = numpy.linspace(0, t_end, Nsteps+1)
for j in range(Nsteps):
T[j+1] = euler_step(f, t[j], T[j], dt, parameters)
pyplot.figure(figsize=(10,6))
pyplot.plot(t, T)
pyplot.xlabel(r"$t$")
pyplot.ylabel(r"$T$")
pyplot.show()
Explanation: Initial Value Problems
A paper by Jones and Underwood suggests a model for the temperature behaviour $T(t)$ of a PV cell in terms of a nonlinear differential equation. Here we extract the key features as
\begin{equation}
\frac{\text{d}T}{\text{d}t} = f(t, T) = c_{1} \left( c_{2} T_{\text{ambient}}^4 - T^4 \right) + c_{3} - \frac{c_4}{T} - c_5 ( T - T_{\text{ambient}} ),
\end{equation}
where the various $c_{1, \dots, 5}$ are constant parameters, and the cell is assumed to relax back to the ambient temperature fast enough to treat $T_{\text{ambient}}$ as a constant as well.
If we're given the values of the parameters together with a temperature value at time $t=0$, we can solve this initial value problem numerically.
Solution by integration
We've solved lots of problems by integration already. A good scientist is a lazy scientist, so we can try to solve this one by integration as well.
Assume we know the solution at $t_j$ and want to compute the solution at $t_{j+1} = t_j + \Delta t$. We write
\begin{equation}
\int_{t_j}^{t_{j+1}} \text{d}t \, \frac{\text{d}T}{\text{d}t} = T \left( t_{j+1} \right) - T \left( t_{j} \right).
\end{equation}
Using the differential equation we therefore get
\begin{equation}
T \left( t_{j+1} \right) = T \left( t_{j} \right) + \int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T).
\end{equation}
If we can solve the integral, we can move from the solution at $t_j$ to the solution at $t_{j+1}$.
The simplest solution of the integral was the Riemann integral approximation. The width of the interval is $t_{j+1} - t_j = \Delta t$. We know the value of $T(t_j)$. Therefore we can approximate
\begin{equation}
\int_{t_j}^{t_{j+1}} \text{d}t \, f(t, T) \approx \Delta t \, \, f \left( t_j, T(t_j) \right),
\end{equation}
leading to Euler's method
\begin{equation}
T \left( t_{j+1} \right) = T \left( t_{j} \right) + \Delta t \, \, f \left( t_j, T(t_j) \right),
\end{equation}
which in more compact notation is
\begin{equation}
T_{j+1} = T_{j} + \Delta t \, \, f_j.
\end{equation}
Euler's method
Let's implement this where the ambient temperature is $290$K, the $c$ parameters are
\begin{align}
c_1 &= 10^{-5} \\
c_2 &= 0.9 \\
c_3 &= 0 \\
c_4 &= 10^{-2} \\
c_5 &= 1
\end{align}
and $T(0) = 300$K. We'll solve up to $t=10^{-2}$ hours (it relaxes very fast!).
Note: we're going to pass in all the parameter values using a Python dictionary. These are a little like lists - they hold multiple things. However, the index is not an integer, but something constant - the key - that you specify. They're defined using curly braces {}, with the key followed by a colon and then the value.
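For example, an illustrative sketch of defining and accessing a dictionary (not one of the notebook's own cells):

```python
# Dictionaries map constant keys to values; access uses the key
# instead of an integer index.
params = {"T_ambient": 290.0, "c1": 1.0e-5}
print(params["T_ambient"])  # → 290.0
params["c1"] = 2.0e-5       # values can be updated in place
```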
End of explanation
def rk2_step(f, t, T, dt, parameters):
k1 = dt * f(t, T, parameters)
k2 = dt * f(t + 0.5*dt, T + 0.5*k1, parameters)
return T + k2
Nsteps = 100
T = numpy.zeros((Nsteps+1,))
T[0] = T_initial
dt = t_end / Nsteps
t = numpy.linspace(0, t_end, Nsteps+1)
for j in range(Nsteps):
T[j+1] = rk2_step(f, t[j], T[j], dt, parameters)
pyplot.figure(figsize=(10,6))
pyplot.plot(t, T)
pyplot.xlabel(r"$t$")
pyplot.ylabel(r"$T$")
pyplot.show()
Explanation: As with all integration problems, we expect accuracy (and computation time!) to increase as we increase the number of steps. Euler's method, like the Riemann integral on which it's built, is first order.
Exercise
Try modifying the number of steps. Plot your solutions to check the solution remains reasonable. What happens when the number of steps is very small?
Solution by differentiation
A different way of thinking about Euler's method shows explicitly that it's first order. Take the original differential equation
\begin{equation}
\frac{\text{d}T}{\text{d}t} = f(t, T).
\end{equation}
We can directly replace the derivative by using finite differencing. By using Taylor expansion we have
\begin{align}
T \left( t_{j+1} \right) &= T \left( t_j \right) + \left( t_{j+1} - t_{j} \right) \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( t_{j+1} - t_{j} \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots \\
&= T \left( t_j \right) + \Delta t \, \left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{align}
By re-arranging we get
\begin{equation}
\left. \frac{\text{d}T}{\text{d}t} \right|_{t = t_{j}} = \frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{equation}
This is the forward difference approximation to the first derivative.
By evaluating the original differential equation at $t=t_j$ we get
\begin{equation}
\frac{T_{j+1} - T_j}{\Delta t} - \frac{\Delta t}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots = f \left( t_j, T(t_j) \right).
\end{equation}
This shows that the difference between this approximation from the finite differencing, and the original differential equation, goes as $(\Delta t)^1$ - it is first order. This approximation can be re-arranged to give
\begin{equation}
T_{j+1} = T_j + \Delta t \, f_j + \frac{\left( \Delta t \right)^2}{2!} \left. \frac{\text{d}^2T}{\text{d}t^2} \right|_{t = t_{j}} + \dots
\end{equation}
By ignoring the higher order terms, we see that this is just Euler's method again.
Runge-Kutta methods
We can now imagine how to get higher order methods for IVPs: by constructing a higher order approximation to the derivative. A standard approximation is the central difference approximation
\begin{equation}
\frac{\text{d}T}{\text{d}t} = \frac{T(t_{j+1}) - T(t_{j-1})}{2 \Delta t} + {\cal O}\left( (\Delta t)^2 \right),
\end{equation}
which we will use later with PDEs. However, it isn't so useful for ODEs directly. Instead we see it as a suggestion: combine different differencing approximations to get a better method. Standard Runge-Kutta methods do this by repeatedly constructing approximations to the derivative, which are combined. These combinations are chosen so that the Taylor expansion of the algorithm matches the original equation to higher and higher orders.
A second order Runge-Kutta method is
\begin{align}
k_{1} &= \Delta t \, f \left( t_j, T_j \right), \\
k_{2} &= \Delta t \, f \left( t_j + \frac{\Delta t}{2}, T_j + \frac{k_{1}}{2} \right), \\
T_{j+1} &= T_j + k_{2}.
\end{align}
Let's implement that on our problem above:
End of explanation
parameters_system = { "T_ambient" : 290.0,
"c1" : 1.0e-5,
"c2" : 0.9,
"c3" : 0.0,
"c4" : 1.0e-2,
"c5" : 1.0,
"c6" : 200.0}
T_initial = [300.0, 302.0, 304.0]
t_end = 1e-2
def f_system(t, T, parameters):
T_ambient = parameters["T_ambient"]
c1 = parameters["c1"]
c2 = parameters["c2"]
c3 = parameters["c3"]
c4 = parameters["c4"]
c5 = parameters["c5"]
c6 = parameters["c6"]
f = numpy.zeros_like(T)
f[0] = c1 * (c2 * T_ambient**4 - T[0]**4) + c3 - c4 / T[0] - c5 * (T[0] - T_ambient)
f[1] = - c5 * (T[1] - T_ambient) - c6 * (T[1] - T[0])
f[2] = - c5 * (T[2] - T_ambient) - c6 * (T[2] - T[0])
return f
Nsteps = 100
T = numpy.zeros((3, Nsteps+1))
T[:, 0] = T_initial
dt = t_end / Nsteps
t = numpy.linspace(0, t_end, Nsteps+1)
for j in range(Nsteps):
T[:, j+1] = euler_step(f_system, t[j], T[:, j], dt, parameters_system)
pyplot.figure(figsize=(10,6))
pyplot.plot(t, T[0,:], label="Silicon")
pyplot.plot(t, T[1,:], label="Trilaminate")
pyplot.plot(t, T[2,:], label="Glass")
pyplot.legend()
pyplot.xlabel(r"$t$")
pyplot.ylabel(r"$T$")
pyplot.show()
Explanation: The solution looks pretty much identical to that from Euler's method, as this problem is well behaved. In general, the benefits of higher order methods (RK4 is pretty standard) massively outweigh the slight additional effort in implementing them.
A system of IVPs
Of course, a PV cell is not one component with one temperature, but different materials coupled together. Let's assume it's made of three components, as in the Jones and Underwood paper: $T_{(1)}(t)$ is the temperature of the silicon cells, $T_{(2)}(t)$ the temperature of the trilaminate, and $T_{(3)}(t)$ the temperature of the glass face. We can write the temperature behaviour as the system of differential equations
\begin{equation}
\frac{\text{d}{\bf T}}{\text{d}t} = {\bf f} \left( t, {\bf T} \right), \quad {\bf T}(0) = {\bf T}_0.
\end{equation}
Here the vector function ${\bf T}(t) = \left( T_{(1)}(t), T_{(2)}(t), T_{(3)}(t) \right)^T$.
To be concrete let's assume that the silicon behaves as in the single equation model above,
\begin{equation}
\frac{\text{d}T_{(1)}}{\text{d}t} = f_{(1)}(t, {\bf T}) = c_{1} \left( c_{2} T_{\text{ambient}}^4 - T_{(1)}^4 \right) + c_{3} - \frac{c_4}{T_{(1)}} - c_5 ( T_{(1)} - T_{\text{ambient}} ),
\end{equation}
whilst the trilaminate and the glass face try to relax to the temperature of the silicon and the ambient,
\begin{equation}
\frac{\text{d}T_{(k)}}{\text{d}t} = f_{(k)}(t, {\bf T}) = - c_5 ( T_{(k)} - T_{\text{ambient}} ) - c_6 ( T_{(k)} - T_{(1)} ), \quad k = 2, 3.
\end{equation}
We'll use the same parameters as above, and couple the materials using $c_6 = 200$. We'll start the different components at temperatures ${\bf T}_0 = (300, 302, 304)^T$.
The crucial point for numerical methods: nothing conceptually changes. We extend our methods from the scalar to the vector case directly. Where before we had $T(t_j) = T_j$ we now have ${\bf T}(t_j) = {\bf T}_j$, and we can write Euler's method, for example, as
\begin{equation}
{\bf T}_{j+1} = {\bf T}_j + \Delta t \, {\bf f} \left( t_j, {\bf T}_j \right).
\end{equation}
Even better, the code implementation needs no alteration:
End of explanation
from numpy.random import randn
def g_stochastic(t, T, parameters):
T_ambient = parameters["T_ambient"]
return (T - T_ambient)**2
def euler_maruyama_step(f, g, t, T, dt, dW, parameters):
return T + dt * f(t, T, parameters) + g(t, T, parameters) * dW
parameters = { "T_ambient" : 290.0,
"c1" : 1.0e-5,
"c2" : 0.9,
"c3" : 0.0,
"c4" : 1.0e-2,
"c5" : 1.0}
T_initial = 300.0
t_end = 1e-2
Nsteps = 100
T = numpy.zeros((Nsteps+1,))
T[0] = T_initial
dt = t_end / Nsteps
t = numpy.linspace(0, t_end, Nsteps+1)
dW = numpy.sqrt(dt) * randn(Nsteps+1)
for j in range(Nsteps):
T[j+1] = euler_maruyama_step(f, g_stochastic, t[j], T[j], dt, dW[j], parameters)
pyplot.figure(figsize=(10,6))
pyplot.plot(t, T)
pyplot.xlabel(r"$t$")
pyplot.ylabel(r"$T$")
pyplot.show()
Explanation: Exercise
Check that you get similar results using RK2. Try RK4 as well.
Stochastic case
This is quite a bit more complex: see D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001) for more details.
Let's suppose that there's some fluctuating heat source in the cell that we can't explicitly model. Going back to the single cell case, let's write it as
\begin{equation}
\frac{\text{d}T}{\text{d}t} = f(t, T) + g(T) \frac{\text{d}W}{\text{d}t}.
\end{equation}
Here $W(t)$ is a random, or Brownian, or Wiener process. It's going to represent the random fluctuating heat source that we can't explicitly model: its values will be drawn from a normal distribution with mean zero. The values of the random process can jump effectively instantly, but over a timestep $\Delta t$ will average to zero, with standard deviation $\sqrt{\Delta t}$.
Because of this extreme behaviour, the derivative doesn't really make sense: instead we should use the integral form.
In our integral form we get
\begin{equation}
T_{j+1} = T_j + \Delta t \, f_j + \int_{t_j}^{t_{j+1}} \text{d}t \, g(T) \frac{\text{d}W}{\text{d}t}.
\end{equation}
We approximate this final integral at the left edge $t_j$ as
\begin{equation}
\int_{t_j}^{t_{j+1}} \text{d}t \, g(T) \frac{\text{d}W}{\text{d}t} \approx g(T_j) \, \text{d}W_j,
\end{equation}
where $\text{d}W_j$ is the random process over the interval $[t_j, t_{j+1}]$: this is a random number drawn from a normal distribution with mean zero and standard deviation $\sqrt{\Delta t}$.
This is the Euler-Maruyama method.
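As a quick sanity check on the statistics of these increments (a sketch, not part of the original notebook), drawing many of them with NumPy gives a sample mean close to zero and a sample standard deviation close to $\sqrt{\Delta t}$:

```python
import numpy
from numpy.random import randn

numpy.random.seed(0)                  # for a reproducible check
dt = 1e-4
dW = numpy.sqrt(dt) * randn(100000)   # 100000 Brownian increments
print(dW.mean())                      # close to 0
print(dW.std())                       # close to numpy.sqrt(dt) = 0.01
```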
Let's take our original single temperature model and add a temperature dependent fluctuation $g(T) = (T - T_{\text{ambient}})^2$.
End of explanation
Nruns = 1000
T = numpy.zeros((Nruns, Nsteps+1))
T[:,0] = T_initial
dt = t_end / Nsteps
t = numpy.linspace(0, t_end, Nsteps+1)
for n in range(Nruns):
dW = numpy.sqrt(dt) * randn(Nsteps+1)
for j in range(Nsteps):
T[n, j+1] = euler_maruyama_step(f, g_stochastic, t[j], T[n, j], dt, dW[j], parameters)
T_average = numpy.mean(T, axis=0)
pyplot.figure(figsize=(10,6))
pyplot.plot(t, T[0,:], label="First run")
pyplot.plot(t, T[99,:], label="Hundredth run")
pyplot.plot(t, T_average, label="Average")
pyplot.legend()
pyplot.xlabel(r"$t$")
pyplot.ylabel(r"$T$")
pyplot.show()
Explanation: In a fluctuating problem like this, a single simulation doesn't tell you very much. Instead we should perform many simulations and average the result. Let's run this 1000 times:
End of explanation |
7,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
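The three steps above can be sketched as follows (an illustrative sketch with a toy vocabulary — the notebook's real `vocab_to_int` comes from the preprocessed data):

```python
def sentence_to_seq(sentence, vocab_to_int):
    # lowercase the sentence, then map each word to its id,
    # falling back to the <UNK> id for out-of-vocabulary words
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a shiny truck', toy_vocab))  # → [1, 2, 3, 0, 4]
```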
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
#target_text
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
#view_sentence_range = (0, 10)
view_sentence_range = (31, 40)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = source_text.split('\n')
target_sentences = target_text.split('\n')
#print(source_vocab_to_int)
source_id_text = []
for sentence in source_sentences:
words = sentence.split()
mysentence = []
for word in words:
mysentence.append(source_vocab_to_int.get(word,0)) # return 0 if the word is not in the dict
#mysentence.append(source_vocab_to_int[word])
#print(source_vocab_to_int[word])
#print(source_vocab_to_int.get(word,0))
source_id_text.append(mysentence)
target_id_text = []
for sentence in target_sentences:
words = sentence.split()
mysentence = []
for word in words:
mysentence.append(target_vocab_to_int.get(word,0)) # return 0 if the word doesn't exist in the dict
mysentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(mysentence)
# print(source_id_text[0])
# print(target_id_text[0])
#
# use list comprehension is more efficient
#
#target_ids = [[target_vocab_to_int.get(word) for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='input')
targets = tf.placeholder(dtype = tf.int32,
shape=(None, None), name='targets')
learning_rate = tf.placeholder(dtype = tf.float32,
name='learning_rate')
keep_prob = tf.placeholder(dtype = tf.float32,
name='keep_prob')
return (inputs, targets, learning_rate, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
newbatch = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
newtarget = tf.concat([tf.fill([batch_size, 1],
target_vocab_to_int['<GO>']),
newbatch], 1)
return newtarget
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) # lstm cell
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob = keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)
output, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell,
decoder_fn,
inputs = dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope)
training_logits = output_fn(outputs)
# add additional dropout
# tf.nn.dropout(training_logits, keep_prob)
return training_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
num_decoder_symbols = vocab_size,
dtype = tf.int32)
dp_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob = keep_prob)
outputs, state, context = tf.contrib.seq2seq.dynamic_rnn_decoder(dp_cell,
infer_fn,
sequence_length=maximum_length,
scope=decoding_scope)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_symb, end_symb = target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>']
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
stack_lstm = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)
# decoding_scope is late-bound: the lambda is only called inside the variable scope defined below
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, activation_fn=None, scope=decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
training_logits = decoding_layer_train(encoder_state,
stack_lstm,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state,
stack_lstm,
dec_embeddings,
start_symb,
end_symb,
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob)
# option 2: more concise
# decoding_scope.reuse_variables()
# infer_logits = decoding_layer_infer(encoder_state,
# stack_lstm,
# dec_embeddings,
# start_symb,
# end_symb,
# sequence_length,
# vocab_size,
# decoding_scope,
# output_fn,
# keep_prob)
return (training_logits, infer_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encode = encoding_layer(enc_embed, rnn_size, num_layers, keep_prob)
dec_process = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_input = tf.nn.embedding_lookup(dec_embed, dec_process)
train_logits, infer_logits = decoding_layer(dec_input,
dec_embed,
encode,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob)
return (train_logits, infer_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
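Both embedding steps above come down to the same operation: integer ids indexing rows of a trainable matrix. A small NumPy illustration with toy sizes:

```python
import numpy as np

rng = np.random.RandomState(0)
vocab_size, embed_dim = 6, 4
embedding = rng.rand(vocab_size, embed_dim)   # the trainable lookup table

ids = np.array([[3, 1, 4],                    # a batch of 2 sequences, 3 steps each
                [0, 2, 5]])
embedded = embedding[ids]                     # row indexing == embedding lookup
print(embedded.shape)                         # (2, 3, 4): batch x time x embed_dim
```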
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
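The gradient-clipping step above bounds every gradient element to [-1, 1] before the weights are updated, so a single exploding gradient cannot derail training. The same arithmetic in NumPy (illustrative only; `tf.clip_by_value` applies it per tensor inside the graph):

```python
import numpy as np

def clip_by_value(grad, lo=-1.0, hi=1.0):
    # element-wise: anything below lo becomes lo, anything above hi becomes hi
    return np.minimum(np.maximum(grad, lo), hi)

g = np.array([-3.7, -0.2, 0.0, 0.9, 42.0])
print(clip_by_value(g))   # [-1.  -0.2  0.   0.9  1. ]
```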
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
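One subtlety in `get_accuracy` above: the target batch and the inference logits can disagree on sequence length, so the shorter array is zero-padded before comparing argmax predictions to targets. A standalone NumPy version of that padding logic, taking predicted ids instead of raw logits (toy data):

```python
import numpy as np

def padded_accuracy(target, pred_ids):
    """Pad the shorter of target/pred_ids (both batch x time) with zeros,
    then score element-wise equality -- mirroring get_accuracy above."""
    max_seq = max(target.shape[1], pred_ids.shape[1])
    target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    pred_ids = np.pad(pred_ids, [(0, 0), (0, max_seq - pred_ids.shape[1])], 'constant')
    return np.mean(target == pred_ids)

t = np.array([[5, 6, 0]])        # length 3 (already ends in padding id 0)
p = np.array([[5, 6]])           # model stopped after 2 steps
print(padded_accuracy(t, p))     # 1.0 -- the padded position also matches
```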
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
wid_list = []
for word in sentence.lower().split():
wid_list.append(vocab_to_int.get(word, vocab_to_int['<UNK>']))
return wid_list
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
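With a made-up miniature vocabulary, the lookup behaves like this (the real ids come from `helper.load_preprocess()`; `to_seq` below just restates the same lowercase/split/`<UNK>` logic so the sketch is self-contained):

```python
# Hypothetical miniature vocabulary -- for illustration only
toy_vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '.': 14}

def to_seq(sentence, vocab_to_int):
    # same logic as sentence_to_seq above: lowercase, split, default to <UNK>
    return [vocab_to_int.get(w, vocab_to_int['<UNK>']) for w in sentence.lower().split()]

ids = to_seq('He saw a YELLOW truck .', toy_vocab_to_int)
print(ids)  # [10, 11, 12, 2, 13, 14] -- 'yellow' falls back to <UNK>
```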
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
7,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 07
ML Models
Step1: We'll import the DecisionTreeClassifier and use all of the default values except for the random_state. We'll provide that so that the output is consistent run-to-run. The decision tree classifier uses the random number generator to make decisions about branching, so if we don't set this, we'll get different results every time we run the algorithm.
Step2: Take a look at the decision boundary for this classifier
Step3: So our decision boundary is cleaned up significantly and we got a bump in the test performance of the model. Let's check one more value to see if we can do any better.
Step4: We got an MCC of 0.894 with a fairly simple decision boundary. That's good! There are, perhaps, a few too many wiggles in the boundary, but overall it is looking pretty good. Note that all of the boundaries are straight lines- that is because the decision tree is choosing cutoff values of "Grade" and "Bumpiness" to split the dataset along those lines. Overall this isn't too bad.
Ensemble Methods
The decision tree did a reasonable job of modeling our data but we only used one tree and one set of random values. What if we could do this many times and average the results. There are tools to do that! One of the strategies that ensemble methods will use is to scramble which of the training features it uses for each trial run. Let's take a quick look at that method, called a "bootstrap" sample.
We will start with 100 data points in our training sample. The ensemble model will break this up into 10 chunks of 10 data points each (each chunk labeled A-J). For the first model it takes chunks A-I and trains on them, then validates that model with chunk J. The next model will take chunks A-H and J, leaving chunk I for validation. It repeats this as many times as it needs. Thus the ensemble is doing training and validation all on the same set of data!
Data Snooping Warning
Although the ensemble is doing its own validation, that doesn't mean you can train with all of your data. You still need to keep the test data locked away and not used for training. This means we can compare the ensemble model to the other models without cheating ourselves.
We'll try out the simplest version of this first, called the RandomForestClassifier.
Step5: We see that the ensemble does a reasonable job- perhaps not better, in this case, than the decision tree by itself. However, there is something else that we get out of using the ensemble
Step6: Both features (Grade and Bumpiness) have just about the same importance in our model (about 50% each). That isn't too surprising since we faked the data to begin with...
Let's try some other ensemble methods to see how they work.
AdaBoost Classifier
This is another ensemble classifier that iteratively learns using a series of weights.
Step7: XGBoost
This last ensemble method is new enough to not be a part of the regular sklearn toolbox yet. However, it has made a fairly big splash in the machine learning community for its performance on real-world data. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
#Note the new use of the dtype option here. We can directly tell pandas to use the Speed column as a category in one step.
speeddf = pd.read_csv("../Class04/Class04_speed_data.csv",dtype={'Speed':'category'})
#We'll use a different tool to plot the data now that we know how to group the data by a category. This will help us make better combined plots later on.
groups = speeddf.groupby('Speed')
# Plot
trainfig, ax = plt.subplots()
ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.set_aspect(1)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
Explanation: Class 07
ML Models: Decision Trees
We will cover a new type of machine learning algorithm in this class: decision trees. We will also talk about ensemble methods and how we can use them to improve the performance of our machine learner.
Classification Decision Trees
We'll start by using a decision tree classifier. We'll use the same set of data as we used in Class 06. Again, that will allow us to compare the algorithm head-to-head with the other classifiers we've used previously. A decision tree works by splitting the data into pieces while trying to maximize the uniformity of each piece. Although we won't dive deeply into how the algorithm works, you can read a great tutorial here.
For example we start with a group of 10 people, half who identify as male and half as female. The most uniform split will be to divide the group into two sub-groups known as nodes. We can cleanly split the group so that each sub-group is uniformly populated. The tree builds a set of decision nodes to split the group so as to end up with the best set of rules to predict the output labels.
The tree will continue to split until it reaches a point where it can't split the data anymore. These end points are called leaf nodes. The number of data points allowed to be in a leaf node is one of the hyperparameters we have to tune. Going back to our example, if we set the minimum size of the leaf node to 5 people, the decision tree will end after doing a single split. However, if we let the leaf nodes be smaller, it may split up the sub-groups by age, height, or by other features.
This will make more sense as we try it out, so let's get started.
End of explanation
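The "most uniform split" idea can be made concrete: for each candidate threshold on a feature, the tree scores the two resulting nodes with an impurity measure and keeps the threshold that minimizes it. A toy pure-Python version of that search using Gini impurity (a sketch of the idea, not sklearn's actual implementation):

```python
def gini(labels):
    # Gini impurity: 1 - sum of squared class proportions (0 == perfectly uniform node)
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_threshold(xs, ys):
    """Try a split at every midpoint; return the one with the lowest
    size-weighted impurity of the two child nodes."""
    pairs = sorted(zip(xs, ys))
    best = (float('inf'), None)
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        best = min(best, (score, t))
    return best[1]

# 'slow' below grade 0.5, 'fast' above: the search recovers a cutoff near 0.5
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = ['slow', 'slow', 'slow', 'fast', 'fast', 'fast']
print(best_threshold(xs, ys))
```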
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeClassifier
# Create our decision boundary mesh
# point in the mesh
x_min = 0.0; x_max = 1.0 # Mesh x size
y_min = 0.0; y_max = 1.0 # Mesh y size
h = .01 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max+h, h), np.arange(y_min, y_max+h, h))
# Split the data into training and testing sets and prepare the features and labels
train, test = train_test_split(speeddf, test_size=0.2, random_state=23)
features_train = train[['Grade','Bumpiness']].values
labels_train = train['Speed'].values
features_test = test[['Grade','Bumpiness']].values
labels_test = test['Speed'].values
class_labels = ["slow", "fast"]
# Load the model and fit the data
dtmodel = DecisionTreeClassifier(random_state=32)
dtmodel.fit(features_train,labels_train)
y_pred = dtmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(dtmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
import sklearn.metrics as metrics
recall_score = metrics.recall_score(labels_test, y_pred,labels=class_labels,average=None)
prec_score = metrics.precision_score(labels_test, y_pred,labels=class_labels,average=None)
f1_score = metrics.f1_score(labels_test, y_pred,labels=class_labels,average=None)
acc_score = metrics.accuracy_score(labels_test, y_pred)
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Class-dependent Metrics")
print("Sensitivity/Recall Score: {}".format(recall_score))
print("Precision Score: {}".format(prec_score))
print("F1 Score: {}".format(f1_score))
print("\nClass-independent Metrics")
print("Accuracy Score: {}".format(acc_score))
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: We'll import the DecisionTreeClassifier and use all of the default values except for the random_state. We'll provide that so that the output is consistent run-to-run. The decision tree classifier uses the random number generator to make decisions about branching, so if we don't set this, we'll get different results every time we run the algorithm.
End of explanation
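The role of `random_state` is easy to demonstrate with NumPy's generator (sklearn's `random_state` wraps the same machinery): the same seed reproduces the same draws, while a different seed gives a different stream.

```python
import numpy as np

a = np.random.RandomState(32).rand(3)
b = np.random.RandomState(32).rand(3)   # same seed: identical sequence
c = np.random.RandomState(33).rand(3)   # different seed: different sequence
print(np.allclose(a, b), np.allclose(a, c))   # True False
```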
# Load the model and fit the data
dtmodel = DecisionTreeClassifier(min_samples_leaf=10,random_state=32)
dtmodel.fit(features_train,labels_train)
y_pred = dtmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(dtmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: Take a look at the decision boundary for this classifier: it is all over the place! The tree tries to account for every point and so it creates branches where there shouldn't be branches. We have a classic case of overfitting! And the model performance isn't great either with an MCC of 0.84. It is time to tune the hyperparameters to see if we can do better. Let's start by tuning the minimum number of samples in the leaf nodes of the tree.
End of explanation
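The effect of `min_samples_leaf` shows up directly in the size of the fitted tree. A quick sketch (assuming scikit-learn is available; synthetic noisy data stands in for the class data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(32)
X = rng.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # true boundary: the diagonal
flip = rng.rand(200) < 0.10               # ...plus ~10% label noise
y[flip] = 1 - y[flip]

deep = DecisionTreeClassifier(random_state=32).fit(X, y)
pruned = DecisionTreeClassifier(min_samples_leaf=10, random_state=32).fit(X, y)

# The unconstrained tree grows extra branches to chase every noisy label;
# requiring 10 samples per leaf keeps the tree much smaller.
print(deep.tree_.node_count, pruned.tree_.node_count)
```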
# Load the model and fit the data
dtmodel = DecisionTreeClassifier(min_samples_leaf=5,random_state=32)
dtmodel.fit(features_train,labels_train)
y_pred = dtmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(dtmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: So our decision boundary is cleaned up significantly and we got a bump in the test performance of the model. Let's check one more value to see if we can do any better.
End of explanation
# Load the model and fit the data
from sklearn.ensemble import RandomForestClassifier
rfmodel = RandomForestClassifier(n_estimators=100,random_state=32)
rfmodel.fit(features_train,labels_train)
y_pred = rfmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(rfmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: We got an MCC of 0.894 with a fairly simple decision boundary. That's good! There are, perhaps, a few too many wiggles in the boundary, but overall it is looking pretty good. Note that all of the boundaries are straight lines- that is because the decision tree is choosing cutoff values of "Grade" and "Bumpiness" to split the dataset along those lines. Overall this isn't too bad.
Ensemble Methods
The decision tree did a reasonable job of modeling our data but we only used one tree and one set of random values. What if we could do this many times and average the results. There are tools to do that! One of the strategies that ensemble methods will use is to scramble which of the training features it uses for each trial run. Let's take a quick look at that method, called a "bootstrap" sample.
We will start with 100 data points in our training sample. The ensemble model will break this up into 10 chunks of 10 data points each (each chunk labeled A-J). For the first model it takes chunks A-I and trains on them, then validates that model with chunk J. The next model will take chunks A-H and J, leaving chunk I for validation. It repeats this as many times as it needs. Thus the ensemble is doing training and validation all on the same set of data!
Data Snooping Warning
Although the ensemble is doing its own validation, that doesn't mean you can train with all of your data. You still need to keep the test data locked away and not used for training. This means we can compare the ensemble model to the other models without cheating ourselves.
We'll try out the simplest version of this first, called the RandomForestClassifier.
End of explanation
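The resampling behind the forest is easy to sketch: a bootstrap sample draws n points with replacement, and the points that were never drawn (the "out-of-bag" set) are what the ensemble can validate against. In NumPy:

```python
import numpy as np

rng = np.random.RandomState(32)
n = 100
indices = np.arange(n)

boot = rng.choice(indices, size=n, replace=True)   # sample WITH replacement
oob = np.setdiff1d(indices, boot)                  # out-of-bag: never drawn

# 100 draws, but only ~63 distinct points on average; the rest are out-of-bag
print(len(boot), len(np.unique(boot)), len(oob))
```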
rfmodel.feature_importances_
Explanation: We see that the ensemble does a reasonable job- perhaps not better, in this case, than the decision tree by itself. However, there is something else that we get out of using the ensemble: it will tell us the relative importance of the different features it used in making the decision boundary. The list of feature importances are given in terms of percentage importance of each feature. This can be helpful in deciding which features to use as inputs to the model. If the ensemble says that a feature is not very important, you may be able to drop it and simplify your model.
Let's look at our feature importances:
End of explanation
# Load the model and fit the data
from sklearn.ensemble import AdaBoostClassifier
abcmodel = AdaBoostClassifier(n_estimators=100,random_state=32)
abcmodel.fit(features_train,labels_train)
y_pred = abcmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(abcmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: Both features (Grade and Bumpiness) have just about the same importance in our model (about 50% each). That isn't too surprising since we faked the data to begin with...
Let's try some other ensemble methods to see how they work.
AdaBoost Classifier
This is another ensemble classifier that iteratively learns using a series of weights.
End of explanation
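That "series of weights" works roughly like this: after each weak learner, AdaBoost up-weights the samples the learner got wrong, so the next learner concentrates on them. A toy single round of the classic reweighting update (a sketch of the textbook formula, not sklearn's internals):

```python
import numpy as np

def adaboost_reweight(weights, correct):
    """One AdaBoost round: compute the learner's weighted error, then
    up-weight the misclassified samples and renormalize."""
    err = np.sum(weights[~correct])                    # weighted error rate
    alpha = 0.5 * np.log((1 - err) / err)              # learner's vote weight
    weights = weights * np.exp(alpha * np.where(correct, -1.0, 1.0))
    return weights / weights.sum(), alpha

w = np.full(4, 0.25)                      # start uniform
correct = np.array([True, True, True, False])
w, alpha = adaboost_reweight(w, correct)
print(np.round(w, 3))   # the one misclassified sample now carries half the total weight
```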
import xgboost
xgbmodel = xgboost.XGBClassifier(n_estimators=100, seed=32)
xgbmodel.fit(features_train,labels_train)
y_pred = xgbmodel.predict(features_test)
# Predict the boundary
Z = pd.Series(xgbmodel.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values.reshape(xx.shape)
# First plot our points
testfig1, ax = plt.subplots()
plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1,axes=ax)
ax.set_aspect(1)
# Plot test points
groups = test.groupby('Speed')
# The next step is to cycle through the groups (based on our categories) and plot each one on the same axis.
for name, group in groups:
ax.plot(group['Grade'], group['Bumpiness'], marker='o', linestyle='', ms=8, label=name)
ax.legend(bbox_to_anchor=(1.2,0.5))
ax.set_xlabel('Grade')
ax.set_ylabel('Bumpiness')
matt_score = metrics.matthews_corrcoef(labels_test, y_pred)
print("Matthews Correlation Coefficient (MCC): {}".format(matt_score))
Explanation: XGBoost
This last ensemble method is new enough to not be a part of the regular sklearn toolbox yet. However, it has made a fairly big splash in the machine learning community for its performance on real-world data.
End of explanation |
7,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>Question 3 Is there any difference in app quality for free apps with in-app purchases?</b>
Step1: <p>First, the data set is split into two parts: one is apps without in-app purchases and the other is apps with in-app purchases. Then density plots are made for the two subsets, and from the plots we can see that the current rating of paid apps is generally higher than that of free apps. Some specific tests still need to be performed.</p>
Step2: <p>I perform a t test here. We have two samples, one of free apps and another of apps with in-app purchases, and I want to test whether the mean current ratings of the two samples differ.</p>
<p>The null hypothesis is that the mean current rating for free apps and the mean current rating for apps with in-app purchases are the same; the alternative hypothesis is that they are not.</p>
<p>From the result we can see that the p value is 2.5715670717150474e-38, which is smaller than 0.05, so we should reject the null hypothesis at significance level 0.05. That is, we conclude that the mean current ratings of the two samples are not the same, and that offering in-app purchases does influence the rating of an app.</p>
Step3: <p>I also perform a one-way ANOVA test here.</p>
<p>The null hypothesis is that the mean current rating for free apps and the mean current rating for apps with in-app purchases are the same; the alternative hypothesis is that they are not.</p>
<p>From the result we can see that the p value is 9.7392843155192399e-37, which is smaller than 0.05, so we should reject null hypothesis at significance level 0.05, that is, we should conclude that the mean of current rating for these two samples are not the same and with in-app purchases or not do influent the rating of an app.</p> | Python Code:
data_q3['is_InAppPurcased'].value_counts()
free = data_q3.loc[data_q3['is_InAppPurcased'] == 0]
paid = data_q3.loc[data_q3['is_InAppPurcased'] == 1]
free['current_rating'].plot(kind = "density")
paid['current_rating'].plot(kind = "density")
plt.xlabel('Current Rating')
plt.legend(labels = ['free','paid'], loc='upper right')
plt.title('Distribution of current rating among free/paid apps')
plt.show()
Explanation: <b>Question 3 Is there any difference in app quality for free apps with in-app purchases?</b>
End of explanation
import scipy.stats
free = list(free['current_rating'])
paid = list(paid['current_rating'])
print(np.mean(free))
print(np.mean(paid))
scipy.stats.ttest_ind(free, paid, equal_var = False)
Explanation: <p>First, the data set is split into two parts: one is apps without in-app purchases and the other is apps with in-app purchases. Then density plots are made for the two subsets, and from the plots we can see that the current rating of paid apps is generally higher than that of free apps. Some specific tests still need to be performed.</p>
End of explanation
scipy.stats.f_oneway(free, paid)
Explanation: <p>I perform a t test here. We have two samples, one of free apps and another of apps with in-app purchases, and I want to test whether the mean current ratings of the two samples differ.</p>
<p>The null hypothesis is that the mean current rating for free apps and the mean current rating for apps with in-app purchases are the same; the alternative hypothesis is that they are not.</p>
<p>From the result we can see that the p value is 2.5715670717150474e-38, which is smaller than 0.05, so we should reject the null hypothesis at significance level 0.05. That is, we conclude that the mean current ratings of the two samples are not the same, and that offering in-app purchases does influence the rating of an app.</p>
End of explanation
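As a sanity check of the test machinery itself (on synthetic data, not the app ratings): two samples drawn with genuinely different means should yield a small p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
free_sim = rng.normal(loc=4.0, scale=0.5, size=200)   # stand-in for free-app ratings
paid_sim = rng.normal(loc=4.3, scale=0.5, size=200)   # stand-in for paid-app ratings

t_stat, p_value = stats.ttest_ind(free_sim, paid_sim, equal_var=False)  # Welch's t-test
print(p_value < 0.05)   # True: a 0.3 gap at n=200 is easily detected
```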
scipy.stats.kruskal(free, paid)
Explanation: <p>I also perform one-way ANOVA test here.</p>
<p>The null hypothesis is mean current rating for free apps and mean overall rating for apps with in-app purchases are the same and the alternative hypothesis is that the mean current rating for these two samples are not the same.</p>
<p>The p-value is 9.7392843155192399e-37, which is smaller than 0.05, so we again reject the null hypothesis at the 0.05 significance level: the mean current ratings of the two samples are not the same, and the presence of in-app purchases does influence an app's rating.</p>
End of explanation |
7,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Mixture Models (GMM)
KDE centers each bin (or kernel rather) at each point. In a mixture model we don't use a kernel for each data point, but rather we fit for the locations of the kernels--in addition to the width. So a mixture model is sort of a hybrid between an $N$-D histogram and KDE. Using lots of kernels (maybe even more than the BIC score suggests) may make sense if you just want to provide an accurate description of the data (as in density estimation). Using fewer kernels makes mixture models more like clustering, where the suggestion is still to use many kernels in order to divide the sample into real clusters and "background".
Gaussians are the most commonly used components for mixture models. So, the pdf is modeled by a sum of Gaussians
Step1: A typical call to the Gaussian Mixture Model algorithm looks like this
Step2: Let's start with the 1-D example given in Ivezic, Figure 6.8, which compares a Mixture Model to KDE.
[Note that the version at astroML.org has some bugs!]
Step3: Hmm, that doesn't look so great for the 5000 point distribution. Plot the BIC values and see if anything looks awry.
What do the individual components look like? Make a plot of those. Careful with the shapes of the arrays!
Can you figure out something that you can do to improve the results?
Ivezic, Figure 6.6 shows a 2-D example. In the first panel, we have the raw data. In the second panel we have a density plot (essentially a 2-D histogram). We then try to represent the data with a series of Gaussians. We allow up to 14 Gaussians and use the AIC/BIC to determine the best choice for this number. This is shown in the third panel. Finally, the fourth panel shows the chosen Gaussians with their centroids and 1-$\sigma$ contours.
In this case 7 components are required for the best fit. While it looks like we could do a pretty good job with just 2 components, there does appear to be some "background" that is a high enough level to justify further components.
Step4: That said, I'd say that there are too many components here. So, I'd be inclined to explore this a bit further if it were my data.
Lastly, let's look at a 2-D case where we are using GMM more to characterize the data than to find clusters. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("B36fzChfyGU")
Explanation: Gaussian Mixture Models (GMM)
KDE centers each bin (or kernel rather) at each point. In a mixture model we don't use a kernel for each data point, but rather we fit for the locations of the kernels--in addition to the width. So a mixture model is sort of a hybrid between an $N$-D histogram and KDE. Using lots of kernels (maybe even more than the BIC score suggests) may make sense if you just want to provide an accurate description of the data (as in density estimation). Using fewer kernels makes mixture models more like clustering, where the suggestion is still to use many kernels in order to divide the sample into real clusters and "background".
Gaussians are the most commonly used components for mixture models. So, the pdf is modeled by a sum of Gaussians:
$$p(x) = \sum_{k=1}^N \alpha_k \mathscr{N}(x|\mu_k,\Sigma_k),$$
where $\alpha_k$ is the "mixing coefficient" with $0\le \alpha_k \le 1$ and $\sum_{k=1}^N \alpha_k = 1$.
We can solve for the parameters using maximum likelihood analysis as we have discussed previously.
However, this can be complicated in multiple dimensions, requiring the use of Expectation Maximization (EM) methods.
Expectation Maximization (ultra simplified version)
(Note: all explanations of EM are far more complicated than seems necessary for our purposes, so here is my overly simplified explanation.)
This may make more sense in terms of our earlier Bayesian analyses if we write this as
$$p(z=c) = \alpha_c,$$
and
$$p(x|z=c) = \mathscr{N}(x|\mu_c,\Sigma_c),$$
where $z$ is a "hidden" variable related to which "component" each point is assigned to.
In the Expectation step, we hold $\mu_k, \Sigma_k$, and $\alpha_k$ fixed and compute the probability that each $x_i$ belongs to component, $c$.
In the Maximization step, we hold the probability of the components fixed and maximize $\mu_k, \Sigma_k,$ and $\alpha_k$.
Note that $\alpha$ is the relative weight of each Gaussian component and not the probability of each point belonging to a specific component.
We can use the following animation to illustrate the process.
We start with a 2-component GMM, where the initial components can be randomly determined.
The points that are closest to the centroid of a component will be more probable under that distribution in the "E" step and will pull the centroid towards them in the "M" step. Iteration between the "E" and "M" step eventually leads to convergence.
In this particular example, 3 components better describe the data, and the fit similarly converges. Note that the process is not that sensitive to how the components are first initialized. We pretty much get the same result in the end.
End of explanation
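The E/M iteration described above can be sketched in a few lines for a 1-D, two-component mixture (a toy illustration, not scikit-learn's implementation):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.RandomState(42)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

mu = np.array([-1.0, 1.0])      # initial guesses
sigma = np.array([1.0, 1.0])
alpha = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = alpha * norm.pdf(x[:, None], mu, sigma)     # shape (N, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: refit mu, sigma, alpha with the responsibilities held fixed
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    alpha = Nk / len(x)

print(mu.round(2), alpha.round(2))   # centroids near -2 and 3
```

The recovered `alpha` also lands near the true mixing fractions (0.6 and 0.4), matching the "relative weight" interpretation above.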
# Execute this cell
import numpy as np
from sklearn.mixture import GMM
X = np.random.normal(size=(1000,2)) #1000 points in 2D
gmm = GMM(3) #three components
gmm.fit(X)
log_dens = gmm.score(X)
BIC = gmm.bic(X)
Explanation: A typical call to the Gaussian Mixture Model algorithm looks like this:
End of explanation
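Note that the `GMM` class used throughout these examples comes from an old scikit-learn release; in modern scikit-learn (0.18 and later) it was replaced by `GaussianMixture`, with some methods renamed. A roughly equivalent call looks like this sketch:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.normal(size=(1000, 2))      # 1000 points in 2D
gmm = GaussianMixture(n_components=3)     # three components
gmm.fit(X)
log_dens = gmm.score_samples(X)           # per-sample log-likelihood
BIC = gmm.bic(X)
```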
# Execute this cell
# Ivezic, Figure 6.8
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from astroML.plotting import hist
from sklearn.mixture import GMM
from sklearn.neighbors import KernelDensity
#------------------------------------------------------------
# Generate our data: a mix of several Cauchy distributions
# this is the same data used in the Bayesian Blocks figure
np.random.seed(0)
N = 10000
mu_gamma_f = [(5, 1.0, 0.1),
(7, 0.5, 0.5),
(9, 0.1, 0.1),
(12, 0.5, 0.2),
(14, 1.0, 0.1)]
true_pdf = lambda x: sum([f * stats.cauchy(mu, gamma).pdf(x)
for (mu, gamma, f) in mu_gamma_f])
x = np.concatenate([stats.cauchy(mu, gamma).rvs(int(f * N))
for (mu, gamma, f) in mu_gamma_f])
np.random.shuffle(x)
x = x[x > -10]
x = x[x < 30]
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(bottom=0.08, top=0.95, right=0.95, hspace=0.1)
N_values = (500, 5000)
subplots = (211, 212)
k_values = (10, 100)
for N, k, subplot in zip(N_values, k_values, subplots):
ax = fig.add_subplot(subplot)
xN = x[:N]
t = np.linspace(-10, 30, 1000)
kde = KernelDensity(0.1, kernel='gaussian')
kde.fit(xN[:, None])
dens_kde = np.exp(kde.score_samples(t[:, None]))
# Compute density via Gaussian Mixtures
# we'll try several numbers of clusters
n_components = np.arange(3, 16)
gmms = [GMM(n_components=n).fit(xN[:,None]) for n in n_components]
BICs = [gmm.bic(xN[:,None]) for gmm in gmms]
i_min = np.argmin(BICs)
t = np.linspace(-10, 30, 1000)
logprob, responsibilities = gmms[i_min].score_samples(t[:,None])
# plot the results
ax.plot(t, true_pdf(t), ':', color='black', zorder=3,
label="Generating Distribution")
ax.plot(xN, -0.005 * np.ones(len(xN)), '|k', lw=1.5)
ax.plot(t, np.exp(logprob), '-', color='gray',
label="Mixture Model\n(%i components)" % n_components[i_min])
ax.plot(t, dens_kde, '-', color='black', zorder=3,
label="Kernel Density $(h=0.1)$")
# label the plot
ax.text(0.02, 0.95, "%i points" % N, ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel('$p(x)$')
ax.legend(loc='upper right')
if subplot == 212:
ax.set_xlabel('$x$')
ax.set_xlim(0, 20)
ax.set_ylim(-0.01, 0.4001)
plt.show()
Explanation: Let's start with the 1-D example given in Ivezic, Figure 6.8, which compares a Mixture Model to KDE.
[Note that the version at astroML.org has some bugs!]
End of explanation
# Execute this cell
# Ivezic, Figure 6.6
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
from sklearn.mixture import GMM
from astroML.datasets import fetch_sdss_sspp
from astroML.decorators import pickle_results
from astroML.plotting.tools import draw_ellipse
#------------------------------------------------------------
# Get the Segue Stellar Parameters Pipeline data
data = fetch_sdss_sspp(cleaned=True)
# Note how X was created from two columns of data
X = np.vstack([data['FeH'], data['alphFe']]).T
# truncate dataset for speed
X = X[::5]
#------------------------------------------------------------
# Compute GMM models & AIC/BIC
N = np.arange(1, 14)
#@pickle_results("GMM_metallicity.pkl")
def compute_GMM(N, covariance_type='full', n_iter=1000):
models = [None for n in N]
for i in range(len(N)):
#print N[i]
models[i] = GMM(n_components=N[i], n_iter=n_iter, covariance_type=covariance_type)
models[i].fit(X)
return models
models = compute_GMM(N)
AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]
i_best = np.argmin(BIC)
gmm_best = models[i_best]
print "best fit converged:", gmm_best.converged_
print "BIC: n_components = %i" % N[i_best]
#------------------------------------------------------------
# compute 2D density
FeH_bins = 51
alphFe_bins = 51
H, FeH_bins, alphFe_bins = np.histogram2d(data['FeH'], data['alphFe'], (FeH_bins, alphFe_bins))
Xgrid = np.array(map(np.ravel,
np.meshgrid(0.5 * (FeH_bins[:-1]
+ FeH_bins[1:]),
0.5 * (alphFe_bins[:-1]
+ alphFe_bins[1:])))).T
log_dens = gmm_best.score(Xgrid).reshape((51, 51))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(12, 5))
fig.subplots_adjust(wspace=0.45, bottom=0.25, top=0.9, left=0.1, right=0.97)
# plot data
ax = fig.add_subplot(141)
ax.scatter(data['FeH'][::10],data['alphFe'][::10],marker=".",color='k',edgecolors='None')
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Input",
va='top', ha='right', transform=ax.transAxes)
# plot density
ax = fig.add_subplot(142)
ax.imshow(H.T, origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Density",
va='top', ha='right', transform=ax.transAxes)
# plot AIC/BIC
ax = fig.add_subplot(143)
ax.plot(N, AIC, '-k', label='AIC')
ax.plot(N, BIC, ':k', label='BIC')
ax.legend(loc=1)
ax.set_xlabel('N components')
plt.setp(ax.get_yticklabels(), fontsize=7)
# plot best configurations for AIC and BIC
ax = fig.add_subplot(144)
ax.imshow(np.exp(log_dens),
origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.scatter(gmm_best.means_[:, 0], gmm_best.means_[:, 1], c='w')
for mu, C, w in zip(gmm_best.means_, gmm_best.covars_, gmm_best.weights_):
draw_ellipse(mu, C, scales=[1], ax=ax, fc='none', ec='k')
ax.text(0.93, 0.93, "Converged",
va='top', ha='right', transform=ax.transAxes)
ax.set_xlim(-1.101, 0.101)
ax.set_ylim(alphFe_bins[0], alphFe_bins[-1])
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
plt.show()
Explanation: Hmm, that doesn't look so great for the 5000 point distribution. Plot the BIC values and see if anything looks awry.
What do the individual components look like? Make a plot of those. Careful with the shapes of the arrays!
Can you figure out something that you can do to improve the results?
Ivezic, Figure 6.6 shows a 2-D example. In the first panel, we have the raw data. In the second panel we have a density plot (essentially a 2-D histogram). We then try to represent the data with a series of Gaussians. We allow up to 14 Gaussians and use the AIC/BIC to determine the best choice for this number. This is shown in the third panel. Finally, the fourth panel shows the chosen Gaussians with their centroids and 1-$\sigma$ contours.
In this case 7 components are required for the best fit. While it looks like we could do a pretty good job with just 2 components, there does appear to be some "background" that is a high enough level to justify further components.
End of explanation
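As one possible sketch for the BIC exercise above (synthetic, well-separated 1-D data and the modern `GaussianMixture` API are assumed), tabulate BIC against the number of components and take the minimum:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(1)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(6, 1, 400)])[:, None]

n_components = range(1, 7)
bics = [GaussianMixture(n, random_state=0).fit(x).bic(x) for n in n_components]
best = n_components[int(np.argmin(bics))]
print(best)   # two well-separated blobs should favor 2 components
```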
# Execute this cell
# Ivezic, Figure 6.7
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.mixture import GMM
from astroML.datasets import fetch_great_wall
from astroML.decorators import pickle_results
#------------------------------------------------------------
# load great wall data
X = fetch_great_wall()
#------------------------------------------------------------
# Create a function which will save the results to a pickle file
# for large number of clusters, computation will take a long time!
#@pickle_results('great_wall_GMM.pkl')
def compute_GMM(n_clusters, n_iter=1000, min_covar=3, covariance_type='full'):
clf = GMM(n_clusters, covariance_type=covariance_type,
n_iter=n_iter, min_covar=min_covar)
clf.fit(X)
print "converged:", clf.converged_
return clf
#------------------------------------------------------------
# Compute a grid on which to evaluate the result
Nx = 100
Ny = 250
xmin, xmax = (-375, -175)
ymin, ymax = (-300, 200)
Xgrid = np.vstack(map(np.ravel, np.meshgrid(np.linspace(xmin, xmax, Nx),
np.linspace(ymin, ymax, Ny)))).T
#------------------------------------------------------------
# Compute the results
#
# we'll use 100 clusters. In practice, one should cross-validate
# with AIC and BIC to settle on the correct number of clusters.
clf = compute_GMM(n_clusters=100)
log_dens = clf.score(Xgrid).reshape(Ny, Nx)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(hspace=0, left=0.08, right=0.95, bottom=0.13, top=0.9)
ax = fig.add_subplot(211, aspect='equal')
ax.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k')
ax.set_xlim(ymin, ymax)
ax.set_ylim(xmin, xmax)
ax.xaxis.set_major_formatter(plt.NullFormatter())
plt.ylabel(r'$x\ {\rm (Mpc)}$')
ax = fig.add_subplot(212, aspect='equal')
ax.imshow(np.exp(log_dens.T), origin='lower', cmap=plt.cm.binary,
extent=[ymin, ymax, xmin, xmax])
ax.set_xlabel(r'$y\ {\rm (Mpc)}$')
ax.set_ylabel(r'$x\ {\rm (Mpc)}$')
plt.show()
Explanation: That said, I'd say that there are too many components here. So, I'd be inclined to explore this a bit further if it were my data.
Lastly, let's look at a 2-D case where we are using GMM more to characterize the data than to find clusters.
End of explanation |
7,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instacart
This workbook is about feature generation
The generated features are stored as CSV files, so they can be loaded from subsequent pages
TODO
Combine the feature generation for train and test.
1. Load data
Step1: Load Products
Step2: 2. Prepare data
Concat prior and train
To be used for final predictions
Step3: Join with orders table to get the user id
This gets us the user_id and other order features
Step4: Make a data frame of user and previous product list
Step6: Create train eval
Keep 25% of the train data outside for cross validation
Step7: 3. Feature Generation
*Features built *
Order features
Current Order - u'order_dow', u'order_hour_of_day', u'days_since_prior_order'
Prev Order - u'prev_order_dow',u'prev_order_hour_of_day', u'prev_days_since_prior_order'
User Order Features
u'avg_days_since_prior_order'
Product Features
u'aisle_id', u'department_id'
u'p_orders', u'p_reorders', u'p_reorder_rate'
User Features
u'tot_orders', u'tot_prods', u'avg_basket', u'avg_reorder', u'std_basket'
User Product Features
u'up_orders', u'up_reorder', u'up_reorder_rate'
User Order features
How much time, on average, does a user leave between orders?
Step8: User id last order feature
Features of the user's last order.
Step9: Product Features
Features of the product, including the number of orders in which it has appeared, the number of reorders, and its reorder rate
Step10: User features
Step11: User Product Features
Step12: Product days since last ordered
Create a feature dict
Step13: 4. Join the features
Step14: Prepare Y Variable from train
Here we use the train table as our y variable. We create a dataset with user_id, product_id, is_product_in_train columns.
We can then use this to merge with user, product and user/product features.
This will serve as our testing data before we actually run our model on real data.
Step15: Order Features
Attributes of the order we are about to predict
Step16: Prepare y variable
Step17: Prepare Y Variable from eval
Step18: Prepare Y Variable from test
Step19: Order Feature
Step21: Vowpalwabbit - Experimental | Python Code:
import numpy as np
import pandas as pd
import time
from tqdm import tqdm
import gc
print('loading prior')
priors = pd.read_csv('./data/order_products__prior.csv')
print('loading train')
train_all = pd.read_csv('./data/order_products__train.csv')
## Have split the train data into two sets, train and eval
train = pd.read_csv('./data/train_new.csv')
train_eval = pd.read_csv('./data/train_eval.csv')
print('loading orders')
orders = pd.read_csv('./data/orders.csv')
print('priors {}: {}'.format(priors.shape, ', '.join(priors.columns)))
print('orders {}: {}'.format(orders.shape, ', '.join(orders.columns)))
print('train {}: {}'.format(train.shape, ', '.join(train.columns)))
###
# some memory measures for kaggle kernel
print('optimize memory')
orders.order_dow = orders.order_dow.astype(np.int8)
orders.order_hour_of_day = orders.order_hour_of_day.astype(np.int8)
orders.order_number = orders.order_number.astype(np.int16)
orders.order_id = orders.order_id.astype(np.int32)
orders.user_id = orders.user_id.astype(np.int32)
orders.days_since_prior_order = orders.days_since_prior_order.astype(np.float32)
train.reordered = train.reordered.astype(np.int8)
train.add_to_cart_order = train.add_to_cart_order.astype(np.int16)
train_eval.reordered = train_eval.reordered.astype(np.int8)
train_eval.add_to_cart_order = train_eval.add_to_cart_order.astype(np.int16)
train_all.reordered = train_all.reordered.astype(np.int8)
train_all.add_to_cart_order = train_all.add_to_cart_order.astype(np.int16)
priors.order_id = priors.order_id.astype(np.int32)
priors.add_to_cart_order = priors.add_to_cart_order.astype(np.int16)
priors.reordered = priors.reordered.astype(np.int8)
priors.product_id = priors.product_id.astype(np.int32)
Explanation: Instacart
This workbook is about feature generation
The generated features are stored as CSV files, so they can be loaded from subsequent pages
TODO
Combine the feature generation for train and test.
1. Load data
End of explanation
print('loading products')
products = pd.read_csv('./data/products.csv')
products.drop(['product_name'], axis=1, inplace=True)
products.aisle_id = products.aisle_id.astype(np.int8)
products.department_id = products.department_id.astype(np.int8)
products.product_id = products.product_id.astype(np.int32)
Explanation: Load Products
End of explanation
prior_train = pd.concat([priors, train_all], axis = 0)
print prior_train.shape
if ( priors.shape[0] + train_all.shape[0] ) == prior_train.shape[0]:
print "concat successful"
Explanation: 2. Prepare data
Concat prior and train
To be used for final predictions
End of explanation
## set the index, for the join
orders.set_index('order_id', inplace=True, drop=False)
## Join with prior_train
print "Joining orders with prior_train"
prior_train = prior_train.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
prior_train.drop('order_id_', inplace = True, axis = 1)
## Repeat the same only for prior
print "Joining orders with priors"
priors = priors.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
priors.drop('order_id_', inplace = True, axis = 1)
## Joining orders with train
print "Joining orders with train"
train = train.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
train.drop('order_id_', inplace = True, axis = 1)
## Joining orders with train_eval
print "Joining orders with train_eval"
train_eval = train_eval.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
train_eval.drop('order_id_', inplace = True, axis = 1)
## reset the order table index
orders.reset_index(inplace=True, drop=True)
orders.head()
Explanation: Join with orders table to get the user id
This gets us the user_id and other order features
End of explanation
## Using prior and train data
users_prior_all = pd.DataFrame()
users_prior_all['prod_list'] = prior_train.groupby('user_id')['product_id'].apply(set)
users_prior_all.reset_index(inplace = True, drop = False)
## Using only prior data
users_prior = pd.DataFrame()
users_prior['prod_list'] = priors.groupby('user_id')['product_id'].apply(set)
users_prior.reset_index(inplace = True, drop = False)
users_prior.head()
Explanation: Make a data frame of user and previous product list
End of explanation
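The user-to-product-set mapping above boils down to a `groupby(...).apply(set)`; a toy sketch with made-up ids:

```python
import pandas as pd

toy = pd.DataFrame({'user_id': [1, 1, 2], 'product_id': [10, 11, 10]})
prod_sets = toy.groupby('user_id')['product_id'].apply(set)
print(prod_sets.loc[1])   # {10, 11}
```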
import random
train.reset_index(drop = True, inplace = True)
order_ids = list(train['order_id'].unique())
sample_size = int(0.25 * len(order_ids))
sample_orders = random.sample(order_ids, sample_size)
train_eval = train[train['order_id'].isin(sample_orders)]
train_new = train[~train['order_id'].isin(sample_orders)]
train_eval.to_csv('./data/train_eval.csv', index = False)
train_new.to_csv('./data/train_new.csv', index = False)
del train
train = train_new
del train_new
train.head()
Explanation: Create train eval
Keep 25% of the train data outside for cross validation
End of explanation
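An alternative sketch for the same order-level hold-out using scikit-learn's `GroupShuffleSplit` (hypothetical toy frame; the real `train` table would be grouped the same way on `order_id`):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

toy = pd.DataFrame({'order_id':   [1, 1, 2, 2, 3, 3, 4, 4],
                    'product_id': [10, 11, 12, 13, 14, 15, 16, 17]})
gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, eval_idx = next(gss.split(toy, groups=toy['order_id']))
# no order_id ever straddles the two splits
train_orders = set(toy.loc[train_idx, 'order_id'])
eval_orders = set(toy.loc[eval_idx, 'order_id'])
print(train_orders.isdisjoint(eval_orders))   # True
```

Splitting on whole orders, as both approaches do, keeps all products of one basket on the same side of the split.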
## using prior stats
user_order_prior = pd.DataFrame()
user_order_prior['avg_days_since_prior_order'] = priors.groupby('user_id')['days_since_prior_order'].agg('mean')
## using prior train stats
user_order_all = pd.DataFrame()
user_order_all['avg_days_since_prior_order'] = prior_train.groupby('user_id')['days_since_prior_order'].agg('mean')
user_order_prior.reset_index(drop = False, inplace = True)
user_order_all.reset_index(drop = False, inplace = True)
print user_order_prior.head()
print
print user_order_all.head()
Explanation: 3. Feature Generation
*Features built *
Order features
Current Order - u'order_dow', u'order_hour_of_day', u'days_since_prior_order'
Prev Order - u'prev_order_dow',u'prev_order_hour_of_day', u'prev_days_since_prior_order'
User Order Features
u'avg_days_since_prior_order'
Product Features
u'aisle_id', u'department_id'
u'p_orders', u'p_reorders', u'p_reorder_rate'
User Features
u'tot_orders', u'tot_prods', u'avg_basket', u'avg_reorder', u'std_basket'
User Product Features
u'up_orders', u'up_reorder', u'up_reorder_rate'
User Order features
How much time, on average, does a user leave between orders?
End of explanation
def last_order_features(priors):
max_order = pd.DataFrame()
max_order['max_order'] = priors.groupby(['user_id'])['order_number'].agg('max')
max_order = max_order.rename(columns = {"max_order":"order_number"})
max_order.reset_index(inplace = True, drop = False)
max_order.set_index(['user_id','order_number'], drop = False, inplace = True)
priors.set_index(['user_id','order_number'], drop = False, inplace = True)
max_order = max_order.join(priors[['user_id', 'order_id', 'order_number','order_dow','order_hour_of_day','days_since_prior_order']], rsuffix="_")
max_order.reset_index(drop =True, inplace = True)
priors.reset_index(drop = True, inplace = True)
max_order.drop('user_id_', inplace = True, axis =1)
max_order.drop('order_number_', inplace = True, axis = 1)
max_order.drop('order_number', inplace = True, axis = 1)
max_order = max_order.rename(columns = {"order_id":"prev_order_id","order_dow":"prev_order_dow"
,"order_hour_of_day":"prev_order_hour_of_day"
,"days_since_prior_order":"prev_days_since_prior_order"})
max_order.drop_duplicates(inplace = True)
return max_order
## Stats from prior
max_order = last_order_features(priors)
print max_order.head()
print
## Stats from prior and train
max_order_all = last_order_features(prior_train)
print max_order_all.head()
Explanation: User id last order feature
Features of the user's last order.
End of explanation
def product_features(priors):
prods = pd.DataFrame()
prods['p_orders'] = priors.groupby(priors.product_id).size().astype(np.float32)
prods['p_reorders'] = priors['reordered'].groupby(priors.product_id).sum().astype(np.float32)
prods['p_reorder_rate'] = (prods.p_reorders / prods.p_orders).astype(np.float32)
## set the index for products
products.set_index('product_id', inplace = True, drop = False)
products_prior = products.join(prods, rsuffix="_")
## Reset the index
products_prior.reset_index(inplace = True, drop = True)
del prods
return products_prior
### Stats from prior and train
products_all = product_features(prior_train)
print products_all.head()
print
## Stats from prior
products_prior = product_features(priors)
print products_prior.head()
Explanation: Product Features
Features of the product, including the number of orders in which it has appeared, the number of reorders, and its reorder rate
End of explanation
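A toy sketch of the per-product statistics computed above (made-up ids; the named-aggregation syntax assumes pandas 0.25 or later):

```python
import pandas as pd

toy = pd.DataFrame({'product_id': [1, 1, 1, 2, 2],
                    'reordered':  [0, 1, 1, 0, 0]})
stats = toy.groupby('product_id').agg(p_orders=('reordered', 'size'),
                                      p_reorders=('reordered', 'sum'))
stats['p_reorder_rate'] = stats.p_reorders / stats.p_orders
print(stats)
# product 1: 3 orders, 2 reorders -> rate 2/3; product 2 -> rate 0
```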
def user_features(priors):
prod_count_prior = pd.DataFrame()
prod_count_prior['basket_size'] = priors.groupby(['user_id','order_id'])['product_id'].size().astype(np.int32)
prod_count_prior['reorder_size'] = priors.groupby(['user_id','order_id'])['reordered'].agg('sum').astype(np.int32)
# reset / set index
prod_count_prior = prod_count_prior.reset_index()
prod_count_prior.set_index('user_id', inplace = True, drop =False)
prod_count_prior['tot_orders'] = prod_count_prior.groupby(['user_id']).size().astype(np.int32)
prod_count_prior['tot_prods'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['sum'])
prod_count_prior['avg_basket'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['mean'])
prod_count_prior['avg_reorder'] = prod_count_prior.groupby(['user_id'])['reorder_size'].agg(['mean'])
prod_count_prior['std_basket'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['std'])
prod_count_prior.drop(['order_id','basket_size','reorder_size'], inplace=True, axis=1)
prod_count_prior.drop_duplicates(inplace = True)
prod_count_prior = prod_count_prior.reset_index(level = 'user_id', drop = True)
return prod_count_prior
## Stats from only prior
prod_count_prior = user_features(priors)
## Stats from all
prod_count_all = user_features(prior_train)
print prod_count_prior.head()
print
print prod_count_all.head()
Explanation: User features
End of explanation
def user_prod_features(priors):
user_prod_prior = pd.DataFrame()
## Number of user's order where product id is present
user_prod_prior['up_orders'] = priors.groupby(['user_id','product_id'])['order_id'].size()
user_prod_prior.reset_index(inplace = True, drop = False)
user_prod_prior.set_index(['user_id', 'product_id'], inplace = True, drop = False)
## Number of times the product was re-ordered by the user
user_prod_prior['up_reorder'] = priors.groupby(['user_id', 'product_id'])['reordered'].agg(['sum'])
user_prod_prior['up_reorder_rate'] = user_prod_prior.up_reorder / user_prod_prior.up_orders
user_prod_prior.reset_index(drop = True, inplace= True)
return user_prod_prior
## Stats from prior
user_prod_prior = user_prod_features(priors)
user_prod_all = user_prod_features(prior_train)
print user_prod_prior.head()
print
print user_prod_all.head()
Explanation: User Product Features
End of explanation
feature_dict_prior = {}
feature_dict_prior[1] = {"name":"user_order_feature","obj":user_order_prior,"index":['user_id']}
feature_dict_prior[2] = {"name":"last_order_feature","obj":max_order,"index":['user_id']}
feature_dict_prior[3] = {"name":"product_feature","obj":products_prior,"index":['product_id']}
feature_dict_prior[4] = {"name":"user_feature","obj":prod_count_prior,"index":['user_id']}
feature_dict_prior[5] = {"name":"user_pro_feature","obj":user_prod_prior,"index":['user_id','product_id']}
feature_dict_all = {}
feature_dict_all[1] = {"name":"user_order_feature","obj":user_order_all,"index":['user_id']}
feature_dict_all[2] = {"name":"last_order_feature","obj":max_order_all,"index":['user_id']}
feature_dict_all[3] = {"name":"product_feature","obj":products_all,"index":['product_id']}
feature_dict_all[4] = {"name":"user_feature","obj":prod_count_all,"index":['user_id']}
feature_dict_all[5] = {"name":"user_prod_feature","obj":user_prod_all,"index":['user_id','product_id']}
Explanation: Product days since last ordered
Create a feature dict
End of explanation
def join_features(feature_dict, features):
for k,v in feature_dict.items():
print "Joining {} feature".format(v['name'])
obj = v['obj']
index = v['index']
features.set_index(index, drop = False, inplace = True)
obj.set_index(index, drop = False, inplace = True)
features = features.join(obj ,on =index, rsuffix='_')
index_ = [idx + '_' for idx in index]
features.drop(index_, inplace = True, axis = 1)
features.reset_index(drop = True, inplace = True)
obj.reset_index(drop = True, inplace = True)
features.drop( ['prev_order_id'], inplace = True, axis = 1 )
return features
## Join train
train.head()
Explanation: 4. Join the features
End of explanation
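The collision-handling trick inside `join_features` (join with `rsuffix`, then drop the suffixed duplicate key columns) in isolation, on a toy frame:

```python
import pandas as pd

left = pd.DataFrame({'user_id': [1, 2], 'x': [10, 20]}).set_index('user_id', drop=False)
right = pd.DataFrame({'user_id': [1, 2], 'y': [0.1, 0.2]}).set_index('user_id', drop=False)

joined = left.join(right, rsuffix='_')      # right's duplicate 'user_id' -> 'user_id_'
joined = joined.drop(columns=['user_id_'])  # drop the suffixed duplicate
print(list(joined.columns))                 # ['user_id', 'x', 'y']
```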
## This block needs to be run only once
## Later the output of this block is stored in the ./features/features.csv file
## The next block reads that file, so it is enough to run the next block subsequently
## We could have got this from order_id
## However, since we have separated our train data into two sets,
## we need to iterate over the train_new (aka train) data we have created.
train.reset_index(inplace = True, drop = True)
train_list = pd.DataFrame()
train_list['ignore'] = train.groupby(['user_id','order_id'], group_keys = True).size()
train_list.reset_index(inplace = True, drop = False)
train.set_index(['order_id', 'product_id'], inplace = True, drop = False)
print "features"
count = 0
order_list = []
product_list = []
user_list = []
labels = []
for user_record in train_list.itertuples():
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
product_list+= prev_products
order_list+=[order_id] * len(prev_products)
user_list+=[user_id] * len(prev_products)
labels+=[(order_id, product) in train.index for product in prev_products]
feature_df = pd.DataFrame({'user_id':user_list,'product_id':product_list,'order_id':order_list,'in_next_order':labels}, dtype=np.int32)
print feature_df.head()
feature_df.to_csv('./features/features.csv', index = False)
features = pd.read_csv('./features/features.csv')
features.head()
Explanation: Prepare Y Variable from train
Here we use the train table as our y variable. We create a dataset with user_id, product_id, is_product_in_train columns.
We can then use this to merge with user, product and user/product features.
This will serve as our testing data before we actually run our model on real data.
End of explanation
print "Order features"
features.set_index('order_id',inplace = True, drop = False)
orders.set_index('order_id', inplace = True, drop = False)
features = pd.merge(features, orders, left_on = 'order_id', right_on = 'order_id')
#features.drop('order_id', inplace = True, axis =1)
features.drop('eval_set', inplace = True, axis =1)
features.drop('user_id_y', inplace = True, axis =1)
features.drop('order_number', inplace = True, axis =1)
features = features.rename(columns={"user_id_x":"user_id"})
features.reset_index(drop = True, inplace= True)
train.reset_index(drop = True, inplace = True)
features.head()
features = join_features(feature_dict_prior, features)
features.head()
features.to_csv('./features/features_train.csv', index = False)
Explanation: Order Features
Attributes of the order we are about to predict
End of explanation
def get_y(test_list, users_prior):
feature = []
count = 0
for user_record in (test_list.itertuples()):
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
for p_p in prev_products:
feature.append((order_id, user_id ,p_p))
test_df = pd.DataFrame(data = feature, columns =['order_id','user_id','product_id'])
return test_df
train_eval.head()
Explanation: Prepare y variable
End of explanation
train_eval.reset_index(inplace = True, drop = True)
train_eval_list = pd.DataFrame()
train_eval_list['ignore'] = train_eval.groupby(['user_id','order_id'], group_keys = True).size()
train_eval_list.reset_index(inplace = True, drop = False)
test_df = get_y(train_eval_list, users_prior)
test_df.head()
test_df = join_features(feature_dict_prior, test_df)
test_df.head()
test_df.to_csv('./features/features_eval.csv',index = False)
Explanation: Prepare Y Variable from eval
End of explanation
test_list = orders[orders.eval_set == 'test']
feature = []
count = 0
for user_record in (test_list.itertuples()):
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
for p_p in prev_products:
feature.append((order_id, user_id ,p_p))
test_df = pd.DataFrame(data = feature, columns =['order_id','user_id','product_id'])
print test_df.head()
Explanation: Prepare Y Variable from test
End of explanation
## Order features
print "Order features"
test_df.set_index('order_id',inplace = True, drop = False)
orders.set_index('order_id', inplace = True, drop = False)
test_df = pd.merge(test_df, orders, left_on = 'order_id', right_on = 'order_id')
test_df.drop('eval_set', inplace = True, axis =1)
test_df.drop('user_id_y', inplace = True, axis =1)
test_df.drop('order_number', inplace = True, axis =1)
test_df = test_df.rename(columns={"user_id_x":"user_id"})
test_df.reset_index(drop = True, inplace= True)
train.reset_index(drop = True, inplace = True)
test_df.head()
test_df = join_features(feature_dict_all, test_df)
test_df.head()
test_df.to_csv('./features/features_test.csv',index = False)
Explanation: Order Feature
End of explanation
## Numpy savetxt is extremely slow
#VW_train = np.column_stack((y_train, X_train))
#print "Save"
#np.savetxt('./data/vw_train.csv', VW_train)
#print "done"
VW_train = pd.concat([Y, X], axis=1)
print VW_train.shape
VW_train.head()
print "VW_train"
VW_train.to_csv('./data/vw_train.csv', index = False)
#python csv2vw.py ./data/vw_train.csv ./data/vw_train.txt 0 1
#python csv2vw.py ./data/vw_test.csv ./data/vw_test.txt 0 1
### Vowpal wabbit baseline model
#time vw ./data/vw_train.txt --predictions vwpred_train.out
vw_pred_train = pd.read_csv('vwpred_train.out', names=['y_p'])
vw_pred_train['y_pp']= vw_pred_train['y_p'].apply(lambda x: 1.0 if x > 0.35 else 0.0)
y_p3 = vw_pred_train['y_pp'].values
print "Vowpal wabbit accuracy {0:.2f}, precision {1:.2f}, recall {2:.2f}, f1-score {3:.2f}".format(
accuracy_score(y_train, y_p3),
precision_score(y_train, y_p3),
recall_score(y_train, y_p3),
f1_score(y_train, y_p3))
print confusion_matrix(y_train, y_p3)
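One detail worth noting in the metric printout: `str.format` positional fields are indexed, so `{0}` always refers to the first argument, and printing four different metrics needs four distinct indices (`{0}` through `{3}`). A quick illustration:

```python
# A repeated index reuses the same argument; distinct indices map one-to-one.
repeated = "{0:.2f}, {0:.2f}".format(1.0, 2.0)
distinct = "{0:.2f}, {1:.2f}".format(1.0, 2.0)
```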
Explanation: Vowpal Wabbit - Experimental
End of explanation |
7,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
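As a warm-up for the exercise, here is how `Counter` builds a bag of words from a token list (the sentence is made up):

```python
from collections import Counter

text = "the cat sat on the mat"
bag = Counter(text.split(' '))  # maps each token to its frequency
```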
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
```
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for review in reviews[0]:
for word in review.split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
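For reference, `to_categorical(labels, 2)` one-hot encodes the integer labels; an equivalent numpy sketch (illustrative, not TFLearn's own implementation):

```python
import numpy as np

labels_demo = np.array([0, 1, 1, 0])
one_hot = np.eye(2)[labels_demo]  # row i is the basis vector for class labels_demo[i]
```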
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
7,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Complications
Author
Step1: Now we plot a single example of both classes, to show what the data looks like. First the pulsar example.
Step2: It is clear that the peak is not in the centre. For most examples it is, but not for all. How about for the non-pulsar examples?
Step4: The non-pulsar example doesn't appear to be correctly centred either. So we centre the data using a simple function. We define this function below
Step5: Now we execute this centering function.
Step7: Now the data is correctly loaded and centred, we can move on.
Importance of the i.i.d Assumption
Today students are subjected to many formal examinations throughout their time in education. To pass those exams, students must prepare, typically by studying the material to be covered in the exam. If the student works hard and learns the subject material, they'll likely do well. Machine learning algorithms are not much different. They are extremely studious when it comes to learning from the material they are given.
Occasionally an exam board and/or school makes a mistake when preparing for an exam. Sometimes students are given the wrong exam paper, other times they are taught the wrong subject material. The outcome is usually not good for anyone - especially for the students, as their performance will be limited. This outcome does not make them bad students. Rather, the circumstances they find themselves in are less than favourable. What we have described here via a simple analogy, is a violation of the i.i.d assumption. The i.i.d assumption implies that so long as the information used to train a machine learning classifier, is similar to the information it will be tested on, it will do well. Otherwise, it will likely perform poorly.
Much like for students, violations in the i.i.d assumption do not imply that a given learning algorithm is poor, or that the wrong algorithm was chosen for the job. We cannot know this, as we have simply given the algorithm the wrong information to learn from. Given the right information, the same algorithm could perform extremely well. It may even be the best algorithm for a particular problem.
However, whereas it is easy to realise that students have been given the incorrect exam/subject material, it is much harder to know when we've poorly trained our algorithms. This is because differences between training data and real data can often be so subtle, that they are impossible to spot via a cursory analysis. Can you spot subtle distributional differences in an n-dimensional dataset - because I usually can't! To mitigate these issues we have to be diligent teachers. We have to be sure we are giving our algorithms the best chance to learn the concepts we're trying to teach them. This means we must understand our data first, to guard against fundamental i.i.d violations. When the i.i.d assumption is violated, no machine learning classifier can be expected to perform well.
There are three ways the i.i.d. assumption is commonly violated. These are explored in the sections that follow, with specific reference to the pulsar candidate classification problem.
When i.i.d is Violated
The i.i.d assumption holds when the feature data supplied in the training set, is independent from the feature data in the test set, yet identically distributed. The i.i.d assumption is violated when,
The data used to train a classifier comes from a different source/data distribution, to the data the classifier will be applied to. For example, suppose a classifier is trained to recognise RFI using data collected at the Parkes telescope, but is deployed upon data obtained at the Green Bank Telescope.
The data used to train and test a classifier, initially comes from the same source/data distribution. However over time, the distribution of the data being classified changes - either slowly, or abruptly. For example, the RFI environment surrounding most radio telescopes is subject to change over varying time-scales due to human activity. Any change causes a violation of the i.i.d. assumption. In the machine learning research literature, this problem is known more widely as distributional drift, or concept drift.
The data used to train and test a classifier, initially comes from the same source/data distribution. However the data is post-processed in a different way to the data collected post-training. This could happen during a survey. For example, if a problem is spotted in the pre-processing, and a correction made which alters the data distributions.
These violations can easily occur, and perhaps unwittingly. We'll perform some experiments here that show how, in the following sections.
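One practical guard against the first two violations is to compare the empirical distribution of each training feature with newly collected data; below is a minimal two-sample Kolmogorov-Smirnov statistic in numpy (a sketch of the idea, not something the notebook itself does):

```python
import numpy as np

def ks_statistic(a, b):
    """Maximum absolute difference between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side='right') / float(len(a))
    cdf_b = np.searchsorted(b, grid, side='right') / float(len(b))
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Identical samples agree perfectly; disjoint samples disagree maximally.
same = ks_statistic(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
shifted = ks_statistic(np.array([0.0, 1.0]), np.array([10.0, 11.0]))
```

A large statistic on live data relative to the training sample is a hint that the i.i.d assumption may no longer hold.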
Next, we introduce some simple code used to extract the machine learning features we'll use in our experiments. These are the features which will be subjected to distributional change. The features are as follows
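Those features are the four statistical moments of the profile (mean, standard deviation, skewness and excess kurtosis, as named later in this section); a self-contained numpy sketch of how they can be computed (the notebook's own implementation may differ):

```python
import numpy as np

def moment_features(x):
    """Return (mean, std, skewness, excess kurtosis) of a 1-D sample."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    m2 = np.mean(d ** 2)       # population variance
    m3 = np.mean(d ** 3)
    m4 = np.mean(d ** 4)
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # excess kurtosis is 0 for a Gaussian
    return mu, np.sqrt(m2), skew, kurt
```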
Step8: It's clear that the function is producing values very close to those expected from the theory. It is also clear that our function is giving the same answers as the numpy function. So it appears to be working well. Now for another test, this time on the uniform distribution.
The Impacts of i.i.d Violations
We now provide three examples to show why we must guard against violations of the i.i.d assumption.
Different Training & Test Distributions
This experiment shows how classifier performance degrades when data distributions change, causing violations in the i.i.d. assumption. To illustrate just how bad the effects can be, we do not use real data here. Instead, we use some idealised (and simple) Gaussian distribution data. The approach is as follows
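That approach can be sketched directly with numpy: draw two classes from Gaussians that differ only in their means (the specific means, spreads and sample sizes below are illustrative):

```python
import numpy as np

rng = np.random.RandomState(42)
n, n_features = 1000, 3

# Non-target class centred at 0, target class centred at 2, equal spread.
non_target = rng.normal(loc=0.0, scale=1.0, size=(n, n_features))
target = rng.normal(loc=2.0, scale=1.0, size=(n, n_features))

X = np.vstack([non_target, target])
y = np.concatenate([np.zeros(n), np.ones(n)])
```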
Step9: The data looks good, so we can continue. Next we split the data into test and training samples. This is important so we can build and test a machine learning classifier on the data. To do this, we use functions built in to the scikit-learn machine learning library.
Step10: Now that we have generated some example data, let's see what happens when we build a classifier to operate on this data. We will use a simple Bayesian classifier for this data.
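A Gaussian naive Bayes classifier fits one Gaussian per class and per feature; a compact numpy sketch of the idea (the notebook presumably uses a library implementation such as scikit-learn's `GaussianNB`):

```python
import numpy as np

class TinyGaussianNB(object):
    """Gaussian naive Bayes: independent per-class, per-feature normal densities."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # Log posterior (up to a constant) per class, features assumed independent.
        scores = []
        for k in range(len(self.classes_)):
            log_lik = -0.5 * np.sum(np.log(2.0 * np.pi * self.var_[k])
                                    + (X - self.theta_[k]) ** 2 / self.var_[k], axis=1)
            scores.append(np.log(self.prior_[k]) + log_lik)
        return self.classes_[np.argmax(np.array(scores), axis=0)]

# Demo on trivially separable 1-D data.
X_demo = np.array([[0.0], [0.1], [5.0], [5.1]])
y_demo = np.array([0, 0, 1, 1])
pred = TinyGaussianNB().fit(X_demo, y_demo).predict(np.array([[0.05], [5.05]]))
```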
Step11: Now let's see what happens as the data distributions diverge, causing i.i.d violations. Here we shift the target class distributions only, in some new test data. In particular we change only the mean of the target class feature distributions in this data. This code can take around 30 seconds to execute.
Step12: The plot above shows how a change in the mean of the target class feature distributions (for $f^{1}_{i}, f^{2}_{i}, f^{3}_{i}$), alters classification performance. Performance can drop below 50%, which is no better than random guessing! This is clearly not good. But what happens if both classes experience change in their feature distributions? Suppose the non-target class now experiences change in $\sigma$ in a new independent sample.
Step14: As you can see, the result is even more stark, with accuracy being heavily impacted as the i.i.d assumption is violated. So you may wonder is this really a significant problem? Whilst what we have shown here is somewhat contrived, the effects are real. The i.i.d assumption is often violated in real-world problems.
This example shows the impact of using training examples with a different distribution to real-world data. For example, when using data obtained at one telescope to train a classifier which will be applied to data collected by another. It also shows the impact on classification performance when real-world data changes over time. This is especially relevant for astronomers, where local RFI environments often change.
Same Data Distribution Different Pre-processing
Data collected at the same telescope can still result in i.i.d violations, if it has been pre-processed in different ways. Here we use the example profile data loaded earlier, to show how this can happen. The example data is scaled to the range [0,255]. Suppose a classifier is trained on features extracted from this information. Let's use the 4 features defined earlier (mean, standard deviation, skew and excess kurtosis).
Suppose the same trained classifier is then asked to classify profiles from the same distribution, but which are described in the range [0,1]. The features extracted from this second set of profiles are now computed over a completely different data range. What's the impact on classification performance?
First we define the function we'll use to rescale our data
Step15: Next we observe the performance when the data isn't scaled differently. The following cell takes approximately 20-30 seconds to execute.
Step16: Now we build and test the classifier.
Step17: We can see that when the data belongs to the same distribution/data ranges, even a simple classifier can do well using only 4 features. Now we use the scale function to convert some of the data into different data ranges. We then re-run this experiment.
Step18: Now train and test as before.
Step19: We can see that accuracy has degraded. This example shows that it is important to ensure our data is always pre-processed in the same way. Indeed, even if data is produced at different telescopes, it can still help if the data is pre-processed in the same way. Doing this is certainly better than doing nothing at all. Indeed, in Lyon et. al. 2016, the authors were able to train an accurate classifier for the LOFAR telescope, using only data from the Parkes telescope. Whilst the results were far from perfect, they were very good.
It isn't just the data ranges that impact the classification outcome. The number of bins used in the integrated pulse profile are important too. This can be shown via a further experiment. Suppose we down-sample the profiles used during testing, from 128 to 64 bins - are the results affected? This time we keep all data in the range [0,1].
Step20: Now with the data sets created, let's train, then test as before.
# Import the libraries to be used throughout.
%pylab inline
import matplotlib.pyplot as plt
# The HTRU 2 profile data is split - one file containing the real pulsar
# profiles, one file containing noise/interference profiles. We load both
# these data sources here. First we construct relative paths to the files.
data_dir = 'data/HTRU2'
pulsar_file = data_dir + '/HTRU2_pulsar.csv'
nonpulsar_file = data_dir + '/HTRU2_nonpulsar.csv'
# Now simply load the data.
pulsar_data = genfromtxt(pulsar_file, dtype=int, delimiter=',')
non_pulsar_data = genfromtxt(nonpulsar_file, dtype=int, delimiter=',')
# Print overview details.
print ('\n\nTotal number of pulsar profiles: ', len(pulsar_data))
print ('Total number of noise/RFI profiles: ', len(non_pulsar_data))
Explanation: Machine Learning Complications
Author: Dr. Robert Lyon
Contact: robert.lyon@manchester.ac.uk
Institution: University of Manchester
Affiliation: SKA Group, Time Domain Team (TDT)
Version: 1.0
Date: 30/08/2017
Acknowledgements: This notebook utilises data obtained by the High Time Resolution Universe Collaboration using the Parkes Observatory, funded by the Commonwealth of Australia and managed by the CSIRO. The data was originally processed by Dr. Daniel Thornton & Dr. Samuel Bates, and I gratefully acknowledge their efforts.
Introduction
This notebook explores the main issues which reduce the accuracy of Machine Learning (ML) algorithms, used for candidate classification. It was written to support a talk delivered at IAU Symposium No. 337, Pulsar Astrophysics: The Next Fifty Years (2017). The notebook covers three problems in particular:
Violations of the Independent and Identically Distributed (i.i.d) assumption (see Bishop, 2006). This is fundamental to the success of all ML classifiers.
Distributional drift occuring in survey data, which causes i.i.d violations (see Gama et. al. 2014) reducing classifier accuracy.
The data pre-processing steps which cause i.i.d violations. This is especially problematic when trying to share data obtained during different surveys.
The notebook explores these problems through interactive Python code (Python version 2.7). The code requires the numpy, scipy, and scikit-learn libraries to function. The notebook assumes some basic background knowledge in statistics and machine learning (yet references to relevant work are provided).
Code & License
The code and the contents of this notebook are released under the GNU GENERAL PUBLIC LICENSE, Version 3, 29 June 2007.
Citation Request
We kindly request that if you make use of the notebook, please cite the work using the following bibtex source.
Input Data
Real Data
The input data consists of integrated pulse profiles, collected during the Medium Latitude portion of the High Time Resolution Universe Survey (HTRU) (see Thornton 2013 and Bates et. al. 2012). The data is comprised of $1,586$ pulsar and $8,852$ non-pulsar candidate profiles. Each profile contains exactly 128 phase bins. The data contains 725 of the known 1,108 pulsars (known at the time) in the Medium Latitude survey region (see Levin 2012), along with re-detections and harmonics. The data also contains noise examples, along with strong and weak forms of Radio Frequency Interference (RFI). This data is not to be confused with the HTRU 2 feature data made available by Lyon et. al. (2016) - the feature data contains only machine learning features extracted from candidates, whilst this data set is made up of integrated pulse profiles only.
Artificial Data
In some cases it is simpler to generate fake data, then to try and illustrate issues using real data. Where appropriate example data is generated using the random number generators provided by the numpy and scipy libraries. Where possible we stick to simple Gaussian distributions.
Loading the Data
Here we simply load in the integrated pulse profile data, from files in the provided distribution. There are two files to be read in. The first contains integrated profiles for pulsars, the second contains noise and RFI profiles.
End of explanation
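genfromtxt also accepts any file-like object, which makes it easy to test the loading logic without touching the HTRU files (a small sketch of my own, not part of the original notebook):

```python
import io
import numpy as np

# A tiny in-memory CSV stands in for the HTRU files.
csv_text = u"1,2,3\n4,5,6\n"
arr = np.genfromtxt(io.StringIO(csv_text), dtype=int, delimiter=',')
print(arr.shape)   # (2, 3)
```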
figure(1)
plot(pulsar_data[7], 'r')
xlabel('Bin')
ylabel('Normalised Intensity')
title('Example Integrated Profile for a pulsar')
show()
Explanation: Now we plot a single example of both classes, to show what the data looks like. First the pulsar example.
End of explanation
figure(2)
plot(non_pulsar_data[0], 'b')
xlabel('Bin')
ylabel('Normalised Intensity')
title('Example Integrated Profile for a non-pulsar')
show()
Explanation: It is clear that the peak is not in the centre. For most examples it is, but not for all. How about for the non-pulsar examples?
End of explanation
import operator
def centre_on_peak(data):
    """
    Centre the data such that the maximum y-axis value is in the
    centre of the data.

    Parameters
    ----------
    :param data: the data to be centred.

    Returns
    ----------
    :return: the centred data array.
    """
# Stores the centred data.
centred_data = []
# Get the index of the maximum value.
index, value = max(enumerate(data), key=operator.itemgetter(1))
# Find midpoint of the data.
midpoint = int(len(data)/2)
# Figure out the shift required to centre the data (put max value in centre bin).
n = midpoint - index # N gives the number of bins the data should be shifted.
a = n % len(data)
# Apply the correction.
centred_data = numpy.concatenate([data[-a:],data[:-a]])
return centred_data
Explanation: The non-pulsar example doesn't appear to be correctly centred either. So we centre the data using a simple function. We define this function below:
End of explanation
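The concatenate-based shift in centre_on_peak can be expressed more compactly with numpy.roll; a small equivalent sketch (my own, not from the original notebook):

```python
import numpy as np

def centre_with_roll(data):
    """Shift the array so its maximum lands in the centre bin (np.roll version)."""
    data = np.asarray(data)
    shift = len(data) // 2 - int(np.argmax(data))
    return np.roll(data, shift)

profile = np.array([0, 1, 5, 2, 0, 0, 0, 0])  # peak at index 2
centred = centre_with_roll(profile)
print(centred)   # [0 0 0 1 5 2 0 0] - peak now at index 4, the midpoint of 8 bins
```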
# Here we simply loop over each item in the data arrays,
# and update their values.
for i in range(0, len(pulsar_data)):
pulsar_data[i] = centre_on_peak(pulsar_data[i])
for i in range(0, len(non_pulsar_data)):
non_pulsar_data[i] = centre_on_peak(non_pulsar_data[i])
figure(3)
plot(pulsar_data[7], 'r')
xlabel('Bin')
ylabel('Normalised Intensity')
title('Example Integrated Profile for a pulsar - Centred')
show()
figure(4)
plot(non_pulsar_data[0], 'b')
xlabel('Bin')
ylabel('Normalised Intensity')
title('Example Integrated Profile for a non-pulsar - Centred')
show()
Explanation: Now we execute this centering function.
End of explanation
def compute_features(data):
    """
    Computes machine learning feature values for the supplied data array.

    Parameters
    ----------
    :param data: a data array.

    Returns
    ----------
    :return: the computed machine learning features as a list [mean, stdev, skew, kurtosis].
    """
if data is not None: # Check data is not empty
if len(data) > 0:
# Sums computed during calculation.
mean_sum = 0
mean_subtracted_sum_power_2 = 0
mean_subtracted_sum_power_3 = 0
mean_subtracted_sum_power_4 = 0
# The number of data points in the array.
n = len(data)
# Necessary first loop to calculate the sum, min and max
for d in data:
mean_sum += float(d)
if mean_sum > 0 or mean_sum < 0: # If the mean is less than or greater than zero (should be)
# Update the mean value.
mean_value = mean_sum / float(n)
# Now try to compute the standard deviation, using
# the mean computed above... we also compute values in
# this loop required to compute the excess Kurtosis and
# standard deviation.
for d in data:
mean_subtracted_sum_power_2 += np.power((float(d) - mean_value), 2.0)
# Used to compute skew
mean_subtracted_sum_power_3 += np.power((float(d) - mean_value), 3.0)
# Used to compute Kurtosis
mean_subtracted_sum_power_4 += np.power((float(d) - mean_value), 4.0)
# Update the standard deviation value.
stdev = np.sqrt(mean_subtracted_sum_power_2 / (n - 1.0))
# Next try to calculate the excess Kurtosis and skew using the
# information gathered above.
one_over_n = 1.0 / n # Used multiple times...
kurt = ((one_over_n * mean_subtracted_sum_power_4) / np.power((one_over_n * mean_subtracted_sum_power_2), 2) ) - 3
skew = (one_over_n * mean_subtracted_sum_power_3) / np.power(np.sqrt(one_over_n * mean_subtracted_sum_power_2), 3)
return [mean_value, stdev, skew, kurt]
else: # Data sums to zero, i.e. no data!
return [0,0,0,0]
else: # Data empty for some reason...
return [0,0,0,0]
Explanation: Now the data is correctly loaded and centred, we can move on.
Importance of the i.i.d Assumption
Today students are subjected to many formal examinations throughout their time in education. To pass those exams, students must prepare, typically by studying the material to be covered in the exam. If the student works hard and learns the subject material, they'll likely do well. Machine learning algorithms are not much different. They are extremely studious when it comes to learning from the material they are given.
Occasionally an exam board and/or school makes a mistake when preparing for an exam. Sometimes students are given the wrong exam paper, other times they are taught the wrong subject material. The outcome is usually not good for anyone - especially for the students, as their performance will be limited. This outcome does not make them bad students. Rather, the circumstances they find themselves in are less than favourable. What we have described here via a simple analogy, is a violation of the i.i.d assumption. The i.i.d assumption implies that so long as the information used to train a machine learning classifier, is similar to the information it will be tested on, it will do well. Otherwise, it will likely perform poorly.
Much like for students, violations in the i.i.d assumption do not imply that a given learning algorithm is poor, or that the wrong algorithm was chosen for the job. We cannot know this, as we have simply given the algorithm the wrong information to learn from. Given the right information, the same algorithm could perform extremely well. It may even be the best algorithm for a particular problem.
However, whereas it is easy to realise that students have been given the incorrect exam/subject material, it is much harder to know when we've poorly trained our algorithms. This is because differences between training data and real data can often be so subtle, that they are impossible to spot via a cursory analysis. Can you spot subtle distributional differences in an n-dimensional dataset - because I usually can't! To mitigate these issues we have to be diligent teachers. We have to be sure we are giving our algorithms the best chance to learn the concepts we're trying to teach them. This means we must understand our data first, to guard against fundamental i.i.d violations. When the i.i.d assumption is violated, no machine learning classifier can be expected to perform well.
There are three ways the i.i.d. assumption is commonly violated. These are explored in the sections that follow. Specifically, with reference to the pulsar candidate classification problem.
When i.i.d is Violated
The i.i.d assumption holds when the feature data supplied in the training set is independent of the feature data in the test set, yet identically distributed. The i.i.d assumption is violated when,
The data used to train a classifier comes from a different source/data distribution, to the data the classifier will be applied to. For example, suppose a classifier is trained to recognise RFI using data collected at the Parkes telescope, but is deployed upon data obtained at the Green Bank Telescope.
The data used to train and test a classifier, initially comes from the same source/data distribution. However over time, the distribution of the data being classified changes - either slowly, or abruptly. For example, the RFI environment surrounding most radio telescopes is subject to change over varying time-scales due to human activity. Any change causes a violation of the i.i.d. assumption. In the machine learning research literature, this problem is known more widely as distributional drift, or concept drift.
The data used to train and test a classifier, initially comes from the same source/data distribution. However the data is post-processed in a different way to the data collected post-training. This could happen during a survey. For example, if a problem is spotted in the pre-processing, and a correction made which alters the data distributions.
These violations can easily occur, and perhaps unwittingly. We'll perform some experiments here that show how, in the following sections.
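A lightweight guard against such violations (a sketch of my own, not part of the original notebook) is to compare each feature's distribution in the training sample against newly collected data with a two-sample Kolmogorov-Smirnov test; a tiny p-value signals that the i.i.d assumption is probably being violated:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(42)
train_feature = rng.normal(0.0, 1.0, 1000)   # feature values seen at training time
same_dist = rng.normal(0.0, 1.0, 1000)       # new data drawn from the same distribution
drifted = rng.normal(0.8, 1.0, 1000)         # new data after the mean has drifted

_, p_same = ks_2samp(train_feature, same_dist)
_, p_drifted = ks_2samp(train_feature, drifted)

print('p-value (same distribution): %.3f' % p_same)    # large: no evidence of drift
print('p-value (drifted mean): %.3g' % p_drifted)      # tiny: i.i.d assumption suspect
```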
Next, we introduce some simple code used to extract the machine learning features we'll use in our experiments. These are the features which will be subjected to distributional change. The features are as follows:
the mean
the standard deviation
the skew
the excess kurtosis,
all extracted from the integrated pulse profile. These features were first presented in Lyon et. al. 2016, where they were subjected to a rigorous statistical analysis.
Feature extraction code
The code provided below is optimised, so that it extracts the features in as few passes over the data as possible, whilst still giving accurate answers (not approximate answers).
End of explanation
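The hand-rolled moments above can be sanity-checked against scipy.stats (a verification sketch, not part of the original notebook). Note that scipy's defaults match the formulas above: stats.skew uses the biased (1/n) estimator, and stats.kurtosis returns Fisher (excess) kurtosis, i.e. it already subtracts 3:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
x = rng.normal(5.0, 2.0, 500)

mean_v = np.mean(x)
std_v = np.std(x, ddof=1)      # sample standard deviation (n - 1), as in compute_features
skew_v = stats.skew(x)         # biased (1/n) skew, matching the formula above
kurt_v = stats.kurtosis(x)     # Fisher (excess) kurtosis, matching the '- 3' above

print([round(float(v), 3) for v in (mean_v, std_v, skew_v, kurt_v)])
```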
# Import the random library again, just in case
# this notebook is executed out of order.
import random as rnd
# Set a simple seed value - ensures the results
# are reproducible.
np.random.seed(12345678)
X = [] # Stores the feature data.
Y = [] # Stores the class labels.
# Generate the feature data for the target class examples.
f1 = np.random.normal(0, 1.0, 1000)
f2 = np.random.normal(0, 1.0, 1000)
f3 = np.random.normal(0, 1.0, 1000)
# Now show how the data looks...
figure(5)
count, bins, ignored = hist(f1, 50, density=True)
# Since we know what the mu and sigma values are, we
# plot a theoretical curve. We can then compare the
# distribution to this curve.
mu = 0.0
sigma = 1.0
# Plot theoretical curve
plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
ylabel('Probability Density')
xlabel('Bin')
title('Distribution of feature 1 data - Target class (mu=0.0, sigma=1.0)')
show()
# Now store the feature values and labels, in the correct
# sets. Remember X contains the feature data, Y the true
# class labels. Here the true class label is always 1, as
# this data represents the target class.
for x, y, z in zip(f1, f2, f3):
X.append([x,y,z])
Y.append(1)
# Now generate the non-target data.
f1 = np.random.normal(0.1, 2.0, 1000)
f2 = np.random.normal(0.2, 2.5, 1000)
f3 = np.random.normal(0.3, 3.0, 1000)
for x, y, z in zip(f1, f2, f3):
X.append([x,y,z])
Y.append(0)
# Now show how the data looks...
figure(6)
count, bins, ignored = hist(f1, 50, density=True)
# Since we know what the mu and sigma values are, we
# plot a theoretical curve. We can then compare the
# distribution to this curve.
mu = 0.1
sigma = 2.0
# Plot theoretical curve
plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ),linewidth=2, color='r')
ylabel('Probability Density')
xlabel('Bin')
title('Distribution of feature 1 data - Non-target class (mu=0.1, sigma=2.0)')
show()
# Some cleanup
f1 = None
f2 = None
f3 = None
mu = None
sigma = None
Explanation: The sampled histograms lie close to the theoretical Gaussian curves plotted over them, so the generated feature data behaves as expected.
The Impacts of i.i.d Violations
We now provide three examples to show why we must guard against violations of the i.i.d assumption.
Different Training & Test Distributions
This experiment shows how classifier performance degrades when data distributions change, causing violations in the i.i.d. assumption. To illustrate just how bad the effects can be, we do not use real data here. Instead, we use some idealised (and simple) Gaussian distribution data. The approach is as follows:
Generate a collection of 1000 target class examples, comprised of exactly 3 features each. Here the 'target' class simply means the class we consider most important, i.e. the positive class. For simplicity, we assume each of our three features is normally distributed.
To create the features which comprise the target class examples, we create normally distributed artificial feature vectors $f^{1}$, $f^{2}$, and $f^{3}$. All three vectors have a $\mu=0$ and $\sigma=1.0$. We use the numpy random number generator to populate these vectors.
To create each example $x_{i}$, we simply assign it the values at position $i$ in each of the feature vectors. Thus each $x_{i} = \lbrace f^{1}_{i}, f^{2}_{i}, f^{3}_{i} \rbrace$.
We add the newly created examples to a set, $X$.
We assign a label $y_{i}$ to each $x_{i}$. We do this by adding labels to the column vector $Y$. It contains the true class label for each $x_{i}$.
Repeat steps 1 to 5, but for the creation of 1000 non-target class examples. For these $f^{1}$ has a $\mu=0.1$ and $\sigma=2.0$, $f^{2}$ has a $\mu=0.2$ and $\sigma=2.5$, and $f^{3}$ has a $\mu=0.3$ and $\sigma=3.0$.
End of explanation
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.5)
print ('Examples in training set: ' , str(len(x_train)))
print ('Examples in testing set: ' , str(len(x_test)))
Explanation: The data looks good, so we can continue. Next we split the data into test and training samples. This is important so we can build and test a machine learning classifier on the data. To do this, we use functions built in to the scikit-learn machine learning library.
End of explanation
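One detail worth noting (not shown in the cell above): train_test_split shuffles randomly, so passing random_state makes the split, and hence any downstream results, reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(20).reshape(10, 2)
labels = np.arange(10)

# Two calls with the same seed produce identical splits.
xa_tr, xa_te, ya_tr, ya_te = train_test_split(data, labels, test_size=0.5, random_state=7)
xb_tr, xb_te, yb_tr, yb_te = train_test_split(data, labels, test_size=0.5, random_state=7)

print(np.array_equal(xa_tr, xb_tr))   # True
```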
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# First train the classifier with call to fit.
classifier.fit(x_train, y_train)
# Now obtain the classifiers 'score'
accuracy = classifier.score(x_test, y_test)
print ("Naive Bayes Classifier accuracy: ", (100* accuracy), "%.")
Explanation: Now that we have generated some example data, let's see what happens when we build a classifier to operate on this data. We will use a simple Bayesian classifier for this data.
End of explanation
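Under the hood, GaussianNB combines a class prior with per-feature Gaussian log-likelihoods. The minimal re-implementation below is an illustrative sketch (not sklearn's actual code, which also adds a small variance-smoothing term) that reproduces sklearn's predictions on well-separated toy data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X, y)

def manual_predict(X, y, X_new):
    preds = []
    classes = np.unique(y)
    for point in X_new:
        scores = []
        for c in classes:
            Xc = X[y == c]
            mu, var = Xc.mean(axis=0), Xc.var(axis=0)
            # Log prior plus the sum of per-feature Gaussian log-likelihoods.
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (point - mu) ** 2 / var)
            scores.append(np.log(len(Xc) / float(len(X))) + log_lik)
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

X_new = rng.normal(1.5, 2, (20, 2))
print(np.array_equal(manual_predict(X, y, X_new), clf.predict(X_new)))
```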
# Store the accuracy recorded for each
# distributional change.
recorded_accuracies = []
# Some simple displacements we'll apply
# to the feature distributions the classifier
# was trained upon.
displacements = np.arange(-2.0,2.0,0.05)
# Stores the non-i.i.d used during this experiment.
x_test_non_iid = []
y_test_non_iid = []
# For each displacement
for d in displacements:
# Used to compute classifier accuracy after each run.
aggregate_sum = 0.0
aggregate_accuracy = 0.0
n = 25
# For n iterations...
for x in np.arange(0,n,1.0):
x_test_non_iid = []
y_test_non_iid = []
# Generate some new example data using the
# displacement values to move the feature distributions.
f1 = np.random.normal(0.1+d, 1.0, 1000)
f2 = np.random.normal(0.1+d, 1.0, 1000)
f3 = np.random.normal(0.1+d, 1.0, 1000)
for x, y, z in zip(f1, f2, f3):
x_test_non_iid.append([x,y,z])
y_test_non_iid.append(1)
# Now generate the non-target data.
f1 = np.random.normal(0.1, 2.0, 1000)
f2 = np.random.normal(0.2, 2.5, 1000)
f3 = np.random.normal(0.3, 3.0, 1000)
#noise_1 = np.random.normal(0.4+d, 2.0, 1000)
#noise_2 = np.random.normal(0.5+d, 2.5, 1000)
#noise_3 = np.random.normal(0.6+d, 3.0, 1000)
for x, y, z in zip(f1, f2, f3):
x_test_non_iid.append([x,y,z])
y_test_non_iid.append(0)
accuracy = classifier.score(x_test_non_iid, y_test_non_iid)
aggregate_sum += accuracy
        #print("NB accuracy: ", accuracy) # Uncomment if you wish to see the values
recorded_accuracies.append(aggregate_sum/float(n))
# Some cleanup
f1 = None
f2 = None
f3 = None
x_test_non_iid = None
y_test_non_iid = None
# Now plot the change observed in classifier accuracy over time.
plt.plot(recorded_accuracies,label='Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('n')
plt.title('Accuracy as test distribution drifts (changing target class mu)')
plt.legend(loc='upper left')
plt.show()
Explanation: Now let's see what happens as the data distributions diverge, causing i.i.d violations. Here we shift the target class distributions only, in some new test data. In particular we change only the mean of the target class feature distributions in this data. This code can take around 30 seconds to execute.
End of explanation
# Store the accuracy recorded for each
# distributional change.
recorded_accuracies = []
# Some simple displacements we'll apply
# to the feature distributions the classifier
# was trained upon.
displacements = np.arange(-2.0,2.0,0.05)
# Stores the non-i.i.d used during this experiment.
x_test_non_iid = []
y_test_non_iid = []
# For each displacement
for d in displacements:
# Used to compute classifier accuracy after each run.
aggregate_sum = 0.0
aggregate_accuracy = 0.0
n = 25
# For n iterations...
for x in np.arange(0,n,1.0):
x_test_non_iid = []
y_test_non_iid = []
# Generate some new example data using the
# displacement values to move the feature distributions.
f1 = np.random.normal(0.1+d, 1.0, 1000)
f2 = np.random.normal(0.1+d, 1.0, 1000)
f3 = np.random.normal(0.1+d, 1.0, 1000)
for x, y, z in zip(f1, f2, f3):
x_test_non_iid.append([x,y,z])
y_test_non_iid.append(1)
# Now generate the non-target data.
f1 = np.random.normal(0.1, 2.0+d, 1000)
f2 = np.random.normal(0.2, 2.5+d, 1000)
f3 = np.random.normal(0.3, 3.0+d, 1000)
for x, y, z in zip(f1, f2, f3):
x_test_non_iid.append([x,y,z])
y_test_non_iid.append(0)
accuracy = classifier.score(x_test_non_iid, y_test_non_iid)
aggregate_sum += accuracy
        #print("NB accuracy: ", accuracy) # Uncomment if you wish to see the values
recorded_accuracies.append(aggregate_sum/float(n))
# Some cleanup
f1 = None
f2 = None
f3 = None
x_test_non_iid = None
y_test_non_iid = None
# Now plot the change observed in classifier accuracy over time.
plt.plot(recorded_accuracies,label='Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('n')
plt.title('Accuracy as test distribution drifts (changing target class mu, non-target class sigma)')
plt.legend(loc='upper left')
plt.show()
Explanation: The plot above shows how a change in the mean of the target class feature distributions (for $f^{1}_{i}, f^{2}_{i}, f^{3}_{i}$) alters classification performance. Performance can drop below 50%, which is no better than random guessing! This is clearly not good. But what happens if both classes experience change in their feature distributions? Suppose the non-target class now experiences change in $\sigma$ in a new independent sample.
End of explanation
def scale(data,new_min, new_max):
    """
    Scales data to within the range [new_min, new_max].

    Parameters
    ----------
    :param data: the data to scale.
    :param new_min: the new minimum value for the data range.
    :param new_max: the new maximum value for the data range.

    Returns
    ----------
    :return: a new array with the data scaled to within the range [new_min, new_max].
    """
min_ = min(data)
max_ = max(data)
new_data = []
for n in range(len(data)):
value = data[n]
x = (new_min * (1-( (value-min_) /( max_- min_ )))) + (new_max * ( (value-min_) /( max_- min_ ) ))
new_data.append(x)
return new_data
Explanation: As you can see, the result is even more stark, with accuracy being heavily impacted as the i.i.d assumption is violated. So you may wonder: is this really a significant problem? Whilst what we have shown here is somewhat contrived, the effects are real. The i.i.d assumption is often violated in real-world problems.
This example shows the impact of using training examples with a different distribution to real-world data. For example, when using data obtained at one telescope to train a classifier which will be applied to data collected by another. It also shows the impact on classification performance when real-world data changes over time. This is especially relevant for astronomers, where local RFI environments often change.
Same Data Distribution Different Pre-processing
Data collected at the same telescope can still result in i.i.d violations, if it has been pre-processed in different ways. Here we use the example profile data loaded earlier, to show how this can happen. The example data is scaled to the range [0,255]. Suppose a classifier is trained on features extracted from this information. Let's use the 4 features defined earlier (mean, standard deviation, skew and excess kurtosis).
Suppose the same trained classifier is then asked to classify profiles from the same distribution, but which are described in the range [0,1]. The features extracted from this second set of profiles are now computed over a completely different data range. What's the impact on classification performance?
First we define the function we'll use to rescale our data:
End of explanation
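A quick experiment with such rescaling (a sketch of my own, not from the original notebook) shows why mixing ranges hurts: the mean and standard deviation scale with the data range, while skew and excess kurtosis are invariant under this affine rescaling, so only two of the four features survive a range change intact:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(3)
profile = rng.gamma(2.0, 1.0, 128)

small = (profile - profile.min()) / (profile.max() - profile.min())   # range [0, 1]
large = 255.0 * small                                                 # range [0, 255]

print(np.mean(large) / np.mean(small))                # ~255: mean is range-dependent
print(np.std(large) / np.std(small))                  # ~255: so is the standard deviation
print(stats.skew(small) - stats.skew(large))          # ~0: skew is unaffected
print(stats.kurtosis(small) - stats.kurtosis(large))  # ~0: so is excess kurtosis
```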
# Now scale the first half of each data set to [0,1],
# and add to the test and training data sets.
from sklearn.model_selection import train_test_split
X = [] # Stores the feature data.
Y = [] # Stores the class labels.
# Add pulsar examples.
for i in range(0, len(pulsar_data)):
# Now here we extract the features with the call
# to compute_features().
X.append(compute_features(pulsar_data[i]))
Y.append(1)
# Add non-pulsar examples.
for i in range(0, len(non_pulsar_data)):
# Now here we extract the features with the call
# to compute_features().
X.append(compute_features(non_pulsar_data[i]))
Y.append(0)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.5)
print ('\nExamples in training set: ' , str(len(x_train)))
print ('Examples in testing set: ' , str(len(x_test)))
# There should be 4 features per example. Let's just check this is
# the case.
print ('Dimensions of training set: ' , str(np.asarray(x_train).shape))
print ('Dimensions of testing set: ' , str(np.asarray(x_test).shape))
Explanation: Next we observe the performance when the data isn't scaled differently. The following cell takes approximately 20-30 seconds to execute.
End of explanation
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# First train the classifier with call to fit.
classifier.fit(x_train, y_train)
# Now obtain the classifiers 'score'
accuracy = classifier.score(x_test, y_test)
print ("Naive Bayes Classifier accuracy: ", (100* accuracy), "%.")
Explanation: Now we build and test the classifier.
End of explanation
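One caveat when reading the accuracy above: the HTRU data is imbalanced (1,586 pulsar vs 8,852 non-pulsar profiles), so a useful sanity baseline (my own addition, not from the original notebook) is the accuracy of always predicting the majority class:

```python
# Majority-class baseline: always predict 'non-pulsar'.
n_pulsar, n_non_pulsar = 1586, 8852
baseline_accuracy = n_non_pulsar / float(n_pulsar + n_non_pulsar)
print('Baseline accuracy: %.1f%%' % (100 * baseline_accuracy))   # Baseline accuracy: 84.8%
```

Any classifier worth deploying on this data must comfortably beat this figure, not just 50%.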
# Get fresh data sets to prevent making mistakes
# Also keeps the cells modular.
X = [] # Stores the feature data.
Y = [] # Stores the class labels.
# Add pulsar examples.
for i in range(0, len(pulsar_data)):
X.append(pulsar_data[i])
Y.append(1)
# Add non-pulsar examples.
for i in range(0, len(non_pulsar_data)):
X.append(non_pulsar_data[i])
Y.append(0)
# Get a whole new split.
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.5)
# Now scale the training data set to [0,1],
# then the test data set to [0,255].
for i in range(0, len(x_train)):
x_train[i] = compute_features(scale(x_train[i],0,1))
for i in range(0, len(x_test)):
x_test[i] = compute_features(scale(x_test[i],0,255))
Explanation: We can see that when the data belongs to the same distribution/data ranges, even a simple classifier can do well using only 4 features. Now we use the scale function to convert some of the data into different data ranges. We then re-run this experiment.
End of explanation
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# First train the classifier with call to fit.
classifier.fit(x_train, y_train)
# Now obtain the classifiers 'score'
accuracy = classifier.score(x_test, y_test)
print ("Naive Bayes Classifier accuracy: ", (100* accuracy), "%.")
Explanation: Now train and test as before.
End of explanation
from scipy import signal
# Get fresh data sets to prevent making mistakes
# Also keeps the cells modular.
X = [] # Stores the feature data.
Y = [] # Stores the class labels.
# Add pulsar examples.
for i in range(0, len(pulsar_data)):
X.append(pulsar_data[i])
Y.append(1)
# Add non-pulsar examples.
for i in range(0, len(non_pulsar_data)):
X.append(non_pulsar_data[i])
Y.append(0)
# Get a whole new split.
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.5)
# Now scale the training data set to [0,1],
for i in range(0, len(x_train)):
x_train[i] = compute_features(scale(x_train[i],0,1))
for i in range(0, len(x_test)):
# First get the data back to [0,1]
x = scale(x_test[i],0,1)
x_downsampled = signal.resample(x, 64)
x_test[i] = compute_features(x_downsampled)
Explanation: We can see that accuracy has degraded. This example shows that it is important to ensure our data is always pre-processed in the same way. Indeed, even if data is produced at different telescopes, it can still help if the data is pre-processed in the same way. Doing this is certainly better than doing nothing at all. Indeed, in Lyon et. al. 2016, the authors were able to train an accurate classifier for the LOFAR telescope, using only data from the Parkes telescope. Whilst the results were far from perfect, they were very good.
It isn't just the data ranges that impact the classification outcome. The number of bins used in the integrated pulse profile are important too. This can be shown via a further experiment. Suppose we down-sample the profiles used during testing, from 128 to 64 bins - are the results affected? This time we keep all data in the range [0,1].
End of explanation
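scipy.signal.resample performs FFT-based resampling; the short sketch below (not from the original notebook) confirms that the bin count halves while a smooth pulse keeps its shape, so shape-based features change only slightly:

```python
import numpy as np
from scipy import signal

bins = np.arange(128)
profile = np.exp(-0.5 * ((bins - 64) / 5.0) ** 2)   # synthetic Gaussian pulse, 128 bins

downsampled = signal.resample(profile, 64)

print(len(profile), len(downsampled))   # 128 64
print(int(np.argmax(downsampled)))      # peak remains near the centre bin (~32)
```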
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
# First train the classifier with call to fit.
classifier.fit(x_train, y_train)
# Now obtain the classifiers 'score'
accuracy = classifier.score(x_test, y_test)
print ("Naive Bayes Classifier accuracy: ", (100* accuracy), "%.")
Explanation: Now with the data sets created, let's train, then test as before.
End of explanation |
7,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
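Every cell in this notebook follows the same `set_id` / `set_value` pattern against the `DOC` object. As a rough sketch of that pattern, the `MiniDoc` class below is a hypothetical stand-in, not the real pyesdoc `NotebookOutput` API, and the value filled in is a placeholder:

```python
class MiniDoc:
    # Hypothetical stand-in for pyesdoc's NotebookOutput: it records
    # values keyed by the most recently set property id.
    def __init__(self):
        self._current_id = None
        self.values = {}

    def set_id(self, property_id):
        # Select the property that subsequent set_value calls apply to.
        self._current_id = property_id
        self.values.setdefault(property_id, [])

    def set_value(self, value):
        # Properties with cardinality 1.N may receive several values.
        self.values[self._current_id].append(value)

doc = MiniDoc()
doc.set_id('cmip6.atmos.key_properties.overview.model_name')
doc.set_value('EXAMPLE-MODEL-1.0')  # placeholder, not a real model name
print(doc.values)
```

In the real notebook, `DOC.set_value` additionally validates the value against the property's type and the "Valid Choices" listed in each cell.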
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
7,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 5.6
Superposition in space, land strip sudden change same at both ends
IHE, Delft, transient groundwater
@T.N.Olsthoorn, 2019-01-02
Context
The 1D aquifer has a limited width equal to $L$. The head at $x=0$ changes suddenly at $t=0$ by the value $a$, while the head at $x=L$ remains fixed.
The solution for an infinite aquifer with sudden head change at $x=0$ reads
$$ s(x, t) = s(x, 0) \, \mathtt{erfc} \left(\sqrt{\frac {x^2 S} {4 kD t}} \right) $$
In this case the head change at $t=0$ is at both ends of the strip and equal to $a$.
For convenience and symmetry we choose $x=0$ in the center of the strip, so that the left end is at $x=-L/2$ and the right end at $x=+L/2$.
The head changes of $+a$ act at $x = \pm (2 i + \frac 1 2) L$ starting at $i=0$ (the $i=0$ pair being the real changes at $x = \pm L/2$), with compensating mirror changes equal to $-a$ at $x = \pm (2 i - \frac 1 2) L$ starting at $i=1$.
This superposition can, therefore, be written as
$$ s(x, t) = a \left[ \sum _0 ^\infty \left\{
\mathtt{erfc} \left(((2 i + \frac 1 2) L + x) \sqrt{ \frac S {4 kD t} } \right)
+
\mathtt{erfc} \left(((2 i + \frac 1 2) L - x) \sqrt{\frac S {4 kD t}} \right)
\right\}
-
\sum_1 ^\infty \left\{
\mathtt{erfc} \left(((2 i - \frac 1 2) L - x) \sqrt{\frac S {4 kD t}} \right)
+
\mathtt{erfc} \left(((2 i - \frac 1 2) L + x) \sqrt{\frac S {4 kD t}} \right)
\right\} \right] $$
Second solution, decay of a mound in a strip
A second solution shows the decline of the head in a strip due to bleeding towards the fixed heads at both ends.
$$ s(x, t) = a \frac 4 \pi \sum _{j=1} ^\infty \left\{
\frac {(-1)^{j-1}} {2 j - 1}
\cos \left[(2 j -1) \frac \pi 2 \frac x b \right]
\exp \left[ -(2 j - 1)^2 \left( \frac \pi 2 \right)^2 \frac {kD} {b^2 S} t \right]
\right\} $$
To make sure that both solutions reflect the decay of an initial head $a$ above the fixed heads at $x = \pm L/2 = \pm b$, we have to subtract the sudden-head-change solution from the initial head $a$.
Loading modules
Step1: Convenience function for setting up a graph
Step2: Superposition with $\mathtt{erfc}()$ subtracted from initial head $a$
Step3: Same thing, now using the analytical solution with the cos and the exp
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc
Explanation: Section 5.6
Superposition in space, land strip sudden change same at both ends
IHE, Delft, transient groundwater
@T.N.Olsthoorn, 2019-01-02
Context
The 1D aquifer has a limited width equal to $L$. The head at $x=0$ changes suddenly at $t=0$ by the value $a$, while the head at $x=L$ remains fixed.
The solution for an infinite aquifer with sudden head change at $x=0$ reads
$$ s(x, t) = s(x, 0) \, \mathtt{erfc} \left(\sqrt{\frac {x^2 S} {4 kD t}} \right) $$
In this case the head change at $t=0$ is at both ends of the strip and equal to $a$.
For convenience and symmetry we choose $x=0$ in the center of the strip, so that the left end is at $x=-L/2$ and the right end at $x=+L/2$.
The head changes of $+a$ act at $x = \pm (2 i + \frac 1 2) L$ starting at $i=0$ (the $i=0$ pair being the real changes at $x = \pm L/2$), with compensating mirror changes equal to $-a$ at $x = \pm (2 i - \frac 1 2) L$ starting at $i=1$.
This superposition can, therefore, be written as
$$ s(x, t) = a \left[ \sum _0 ^\infty \left\{
\mathtt{erfc} \left(((2 i + \frac 1 2) L + x) \sqrt{ \frac S {4 kD t} } \right)
+
\mathtt{erfc} \left(((2 i + \frac 1 2) L - x) \sqrt{\frac S {4 kD t}} \right)
\right\}
-
\sum_1 ^\infty \left\{
\mathtt{erfc} \left(((2 i - \frac 1 2) L - x) \sqrt{\frac S {4 kD t}} \right)
+
\mathtt{erfc} \left(((2 i - \frac 1 2) L + x) \sqrt{\frac S {4 kD t}} \right)
\right\} \right] $$
Second solution, decay of a mound in a strip
A second solution shows the decline of the head in a strip due to bleeding towards the fixed heads at both ends.
$$ s(x, t) = a \frac 4 \pi \sum _{j=1} ^\infty \left\{
\frac {(-1)^{j-1}} {2 j - 1}
\cos \left[(2 j -1) \frac \pi 2 \frac x b \right]
\exp \left[ -(2 j - 1)^2 \left( \frac \pi 2 \right)^2 \frac {kD} {b^2 S} t \right]
\right\} $$
To make sure that both solutions reflect the decay of an initial head $a$ above the fixed heads at $x = \pm L/2 = \pm b$, we have to subtract the sudden-head-change solution from the initial head $a$.
Loading modules
End of explanation
def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None,
xscale='linear', yscale='linear', size_inches=(14, 8)):
'''Setup a new axis for plotting'''
fig, ax = plt.subplots()
fig.set_size_inches(size_inches)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xscale(xscale)
ax.set_yscale(yscale)
if xlim is not None: ax.set_xlim(xlim)
if ylim is not None: ax.set_ylim(ylim)
ax.grid(True)
return ax
Explanation: Convenience function for setting up a graph
End of explanation
L = 150 # m (strip width)
x = np.linspace(-L/2, L/2, 201) # points, x = 0 at the center of the strip
kD = 600 # m2/d
S = 0.1 # [-]
a = 1.0 # m, sudden head change at x = -L/2
times = np.linspace(0, 0.5, 11)[1:] # d
ax = newfig('Decay from initial head $a$ to 0 at $x = -L/2$ and at $x = L/2$',
            '$x$ [m], $ -L/2 < x < L/2 $', 'head change [m]', xlim=(-L/2, L/2))
for t in times:
rt = np.sqrt(S / (4 * kD * t))
    s = np.zeros_like(x) + a # initial head
for i in range(20):
s -= a * erfc(((2 * i + 0.5) * L + x) * rt)
s -= a * erfc(((2 * i + 0.5) * L - x) * rt)
if i > 0:
s += a * erfc(((2 * i - 0.5) * L - x) * rt)
s += a * erfc(((2 * i - 0.5) * L + x) * rt)
ax.plot(x, s, label=f't = {t:5.2f} d')
ax.legend()
Explanation: Superposition with $\mathtt{erfc}()$ subtracted from initial head $a$
End of explanation
b = L/2
ax = newfig('Symmetric solution for head decay in strip', 'x [m]', 'head [m]', xlim=(-b, b))
for t in times:
h = np.zeros_like(x)
for j in range(1,20):
h += a * 4 / np.pi * ((-1)**(j-1) / (2 * j - 1) *
np.cos((2 * j - 1) * np.pi / 2 * x / b) *
np.exp(- (2 * j - 1)**2 * (np.pi / 2)**2 * kD /(b**2 * S) * t))
ax.plot(x, h, label=f't={t:.1f}')
ax.legend()
Explanation: Same thing, now using the analytical solution with the cos and the exp
End of explanation |
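As a sanity check, the two solutions above can be evaluated on the same grid and compared numerically; they are different series for the same problem and should agree to within truncation and rounding error. The sketch below simply reuses the parameter values chosen earlier in this notebook.

```python
import numpy as np
from scipy.special import erfc

L, kD, S, a = 150.0, 600.0, 0.1, 1.0   # same parameters as above
b = L / 2
t = 0.2                                # d, one of the times plotted above
x = np.linspace(-b, b, 201)

# erfc image superposition, subtracted from the initial head a
rt = np.sqrt(S / (4 * kD * t))
s = np.zeros_like(x) + a
for i in range(20):
    s -= a * (erfc(((2 * i + 0.5) * L + x) * rt) +
              erfc(((2 * i + 0.5) * L - x) * rt))
    if i > 0:
        s += a * (erfc(((2 * i - 0.5) * L - x) * rt) +
                  erfc(((2 * i - 0.5) * L + x) * rt))

# cosine/exponential series
h = np.zeros_like(x)
for j in range(1, 20):
    h += a * 4 / np.pi * ((-1) ** (j - 1) / (2 * j - 1) *
                          np.cos((2 * j - 1) * np.pi / 2 * x / b) *
                          np.exp(-(2 * j - 1) ** 2 * (np.pi / 2) ** 2 *
                                 kD / (b ** 2 * S) * t))

diff = np.max(np.abs(s - h))
print(f'max |s - h| = {diff:.2e}')
```

With 20 terms in each series the remaining terms are negligible, so the two curves coincide to roughly machine precision here.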
7,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Naive Bayes Classification
For most classification problems, it’s nice to have a simple, fast method to provide a quick baseline classification. If the simple and fast method is sufficient, then we don’t have to waste CPU cycles on more complex models. If not, we can use the results of the simple method to give us clues about our data.
One good method to keep in mind is Gaussian Naive Bayes (sklearn.naive_bayes.GaussianNB).
Gaussian Naive Bayes fits a Gaussian distribution to each training label independently on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.
Step1: Quantitative Measurement of Performance
Step2: We see that more than 80% of the 450 predictions match the input. But there are other more sophisticated metrics that can be used to judge the performance of a classifier
Step3: Another enlightening metric for this sort of multi-label classification is a confusion matrix | Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
print(len(X_train), len(X_test), y_train, y_test)
clf = GaussianNB()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
expected = y_test
print(predicted)
print(expected)
Explanation: Gaussian Naive Bayes Classification
For most classification problems, it’s nice to have a simple, fast method to provide a quick baseline classification. If the simple and fast method is sufficient, then we don’t have to waste CPU cycles on more complex models. If not, we can use the results of the simple method to give us clues about our data.
One good method to keep in mind is Gaussian Naive Bayes (sklearn.naive_bayes.GaussianNB).
Gaussian Naive Bayes fits a Gaussian distribution to each training label independently on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.
End of explanation
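What GaussianNB fits can be written out by hand: one prior per class plus a per-feature mean and variance, scored with the Gaussian log-density. A minimal stdlib sketch on made-up 2-D points (not the digits data; the tiny variance floor stands in for sklearn's var_smoothing):

```python
import math

def fit_gaussian_nb(X, y):
    """Per class: prior probability plus per-feature mean and variance."""
    params = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-9  # small floor, like var_smoothing
                     for col, m in zip(zip(*rows), means)]
        params[c] = (n / len(y), means, variances)
    return params

def predict(params, x):
    def log_likelihood(c):
        prior, means, variances = params[c]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, variances):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(params, key=log_likelihood)

X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
y = [0, 0, 1, 1]
model = fit_gaussian_nb(X, y)
print(predict(model, [0.1, 0.05]))  # -> 0
print(predict(model, [5.1, 5.0]))   # -> 1
```

Taking the max over the per-class log-likelihoods is exactly the "rough but fast" classification rule described above.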
matches = (predicted == expected)
print(matches)
print(matches.sum())
print(len(matches))
qmp = matches.sum() / float(len(matches))
print(qmp)
Explanation: Quantitative Measurement of Performance
End of explanation
from sklearn import metrics
print(metrics.classification_report(expected, predicted))
Explanation: We see that more than 80% of the 450 predictions match the input. But there are other more sophisticated metrics that can be used to judge the performance of a classifier: several are available in the sklearn.metrics submodule.
One of the most useful metrics is the classification_report, which combines several measures and prints a table with the results:
End of explanation
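The per-class numbers in that table come from simple counts of true/false positives and false negatives; a minimal sketch for a single class:

```python
def precision_recall(y_true, y_pred, positive):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN) for one class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall([0, 0, 1, 1, 1], [0, 1, 1, 1, 0], positive=1))  # (0.666..., 0.666...)
```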
print(metrics.confusion_matrix(expected, predicted))
Explanation: Another enlightening metric for this sort of multi-label classification is a confusion matrix: it helps us visualize which labels are being interchanged in the classification errors:
End of explanation |
7,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning IPython module. You need a personal licence to run this code.
Step1: Load data from CSV files
Step2: Data merging
Step3: Let's explore some data
Let's examine 3 different queries and products
Step4: 'angle bracket' search term is not contained in the body. 'angle' would be after stemming however 'bracket' is not.
Step5: only 'wood' is present from search term
Step6: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in description and title for ranked 3 documents
Ranked-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title?
Step7: Stemming
Step8: TF-IDF with linear regression | Python Code:
import graphlab as gl
from nltk.stem import *
Explanation: Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning IPython module. You need a personal licence to run this code.
End of explanation
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
Explanation: Load data from CSV files
End of explanation
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
Explanation: Data merging
End of explanation
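The join above is an ordinary left join on product_uid. In plain Python terms (with hypothetical rows standing in for the real CSV data):

```python
def left_join(left_rows, right_rows, key):
    """Left join two lists of dicts on `key`, like SFrame.join(..., how='left')."""
    lookup = {row[key]: row for row in right_rows}
    joined = []
    for row in left_rows:
        match = lookup.get(row[key], {})
        joined.append({**row, **{k: v for k, v in match.items() if k != key}})
    return joined

train_rows = [{"product_uid": 100001, "search_term": "angle bracket"}]
desc_rows = [{"product_uid": 100001, "product_description": "Not only do angles..."}]
print(left_join(train_rows, desc_rows, "product_uid"))
```

Rows without a matching product_uid simply keep their original columns, which is what `how='left'` guarantees.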
first_doc = train[0]
first_doc
Explanation: Let's explore some data
Let's examine 3 different queries and products:
* first from the training set
* somewhere in the middle of the training set
* the last one from the training set
End of explanation
middle_doc = train[37033]
middle_doc
Explanation: 'angle bracket' search term is not contained in the body. 'angle' would be after stemming however 'bracket' is not.
End of explanation
last_doc = train[-1]
last_doc
Explanation: only 'wood' is present from search term
End of explanation
train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term'])
ranked3doc = train[train['relevance'] == 3]
print ranked3doc.head()
len(ranked3doc)
words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True)
words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True)
words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True)
wordsdiff_desc = []
wordsdiff_title = []
puid = []
search_term = []
ws_count = []
ws_count_used_desc = []
ws_count_used_title = []
for item in xrange(len(ranked3doc)):
ws = words_search[item]
pd = words_description[item]
pt = words_title[item]
diff = set(ws) - set(pd)
if diff is None:
diff = 0
wordsdiff_desc.append(diff)
diff2 = set(ws) - set(pt)
if diff2 is None:
diff2 = 0
wordsdiff_title.append(diff2)
puid.append(ranked3doc[item]['product_uid'])
search_term.append(ranked3doc[item]['search_term'])
ws_count.append(len(ws))
ws_count_used_desc.append(len(ws) - len(diff))
ws_count_used_title.append(len(ws) - len(diff2))
differences = gl.SFrame({"puid" : puid,
"search term": search_term,
"diff desc" : wordsdiff_desc,
"diff title" : wordsdiff_title,
"ws count" : ws_count,
"ws count used desc" : ws_count_used_desc,
"ws count used title" : ws_count_used_title})
differences.sort(['ws count used desc', 'ws count used title'])
print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0]))
print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0]))
print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) &
(differences['ws count used title'] == 0)]))
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in description and title for ranked 3 documents
Ranked-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title?
End of explanation
#stemmer = SnowballStemmer("english")
stemmer = PorterStemmer()
def stem(word):
singles = [stemmer.stem(plural) for plural in unicode(word, errors='replace').split()]
text = ' '.join(singles)
return text
print "Starting stemming train search term..."
stemmed = train['search_term'].apply(stem)
train['stem_search_term'] = stemmed
print "Starting stemming train product description..."
stemmed = train['product_description'].apply(stem)
train['stem_product_description'] = stemmed
print "Starting stemming train product title..."
stemmed = train['product_title'].apply(stem)
train['stem_product_title'] = stemmed
print "Starting stemming test search term..."
stemmed = test['search_term'].apply(stem)
test['stem_search_term'] = stemmed
print "Starting stemming test product description..."
stemmed = test['product_description'].apply(stem)
test['stem_product_description'] = stemmed
print "Starting stemming test product title..."
stemmed = test['product_title'].apply(stem)
test['stem_product_title'] = stemmed
Explanation: Stemming
End of explanation
train['search_term_word_count'] = gl.text_analytics.count_words(train['stem_search_term'])
train_search_tfidf = gl.text_analytics.tf_idf(train['search_term_word_count'])
train['search_tfidf'] = train_search_tfidf
train['product_desc_word_count'] = gl.text_analytics.count_words(train['stem_product_description'])
train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count'])
train['desc_tfidf'] = train_desc_tfidf
train['product_title_word_count'] = gl.text_analytics.count_words(train['stem_product_title'])
train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count'])
train['title_tfidf'] = train_title_tfidf
train['distance_desc'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
#train['distance_desc_sqrt'] = train['distance_desc'] ** 2
train['distance_title'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
#train['distance_title_sqrt'] = train['distance_title'] ** 3
model1 = gl.random_forest_regression.create(train, target = 'relevance',
features = ['distance_desc', 'distance_title'],
num_trees = 500,
validation_set = None)
test['search_term_word_count'] = gl.text_analytics.count_words(test['stem_search_term'])
test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count'])
test['search_tfidf'] = test_search_tfidf
test['product_desc_word_count'] = gl.text_analytics.count_words(test['stem_product_description'])
test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count'])
test['desc_tfidf'] = test_desc_tfidf
test['product_title_word_count'] = gl.text_analytics.count_words(test['stem_product_title'])
test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count'])
test['title_tfidf'] = test_title_tfidf
test['distance_desc'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
#test['distance_desc_sqrt'] = test['distance_desc'] ** 2
test['distance_title'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
#test['distance_title_sqrt'] = test['distance_title'] ** 3
'''
predictions_test = model1.predict(test)
test_errors = predictions_test - test['relevance']
RSS_test = sum(test_errors * test_errors)
print RSS_test
'''
predictions_test = model1.predict(test)
predictions_test
#result = model1.evaluate(test)
#result
submission = gl.SFrame(test['id'])
submission.add_column(predictions_test)
submission.rename({'X1': 'id', 'X2':'relevance'})
submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: str(x['relevance']))
submission.export_csv('../data/submission2.csv', quote_level = 3)
#gl.canvas.set_target('ipynb')
Explanation: TF-IDF with linear regression
End of explanation |
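The TF-IDF and cosine-distance features used above do not require GraphLab; a dependency-free sketch (the +1-smoothed IDF is one common convention, not necessarily GraphLab's exact weighting):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """One sparse TF-IDF vector (dict) per document; idf = log(N/df) + 1."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for tokens in tokenized for term in set(tokens))
    return [{t: c * (math.log(N / df[t]) + 1) for t, c in Counter(tokens).items()}
            for tokens in tokenized]

def cosine_distance(u, v):
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 1.0
    return 1.0 - dot / (norm_u * norm_v)

search, title, other = tfidf_vectors([
    "sheer curtain", "sheer voile curtain panel", "wood angle bracket"])
print(cosine_distance(search, title) < cosine_distance(search, other))  # True
```

A search term is closer (smaller distance) to a title that shares its words, which is exactly the signal fed to the regression model above.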
7,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TBtrans is capable of calculating transport in $N\ge 1$ electrode systems. In this example we will explore a 4-terminal graphene GNR cross-bar (one zGNR, the other aGNR) system.
Step1: Create the two electrodes in $x$ and $y$ directions. We will force the systems to be nano-ribbons, i.e. only periodic along the ribbon. In sisl there are two ways of accomplishing this.
Explicitly set number of auxiliary supercells
Add vacuum beyond the orbital interaction ranges
The below code uses the first method.
Please see if you can change the creation of elec_x by adding vacuum.
HINT
Step2: Subsequently we create the electronic structure.
Step3: Now we have created the electronic structure for the electrodes. All that is needed is the electronic structure of the device region, i.e. the crossing nano-ribbons.
Step4: Remove any atoms that are duplicated, i.e. when we overlay these two geometries some atoms are the same.
Step5: Can you explain why set_nsc([1, 1, 1]) is called? And if so, is it necessary to do this step?
Ensure the lattice vectors are big enough for plotting.
Try and convince yourself that the lattice vectors are unimportant for tbtrans in this example.
HINT
Step6: Since this system has 4 electrodes we need to tell tbtrans where the 4 electrodes are in the device. The following lines print out the fdf-lines that are appropriate for each of the electrodes (RUN.fdf is already filled correctly)
Step7: Exercises
In this example we have more than 1 transmission path. Before you run the below code which plots all relevant transmissions ($T_{ij}$ for $j>i$), consider if there are any symmetries, and if so, determine how many different transmission spectra you should expect? Please plot the geometry using your favourite geometry viewer (molden, Jmol, ...). The answer is not so trivial.
Step8: Make easy function calls for plotting energy resolved quantities
Step9: In RUN.fdf we have added the flag TBT.T.All which tells tbtrans to calculate all transmissions, i.e. between all $i\to j$ for all $i,j \in {1,2,3,4}$. This flag is by default False, why?
Create 3 plots each with $T_{1j}$ and $T_{j1}$ for all $j\neq1$.
Step10: Considering symmetries, try to figure out which transmissions ($T_{ij}$) are unique?
Plot the bulk DOS for the 2 differing electrodes.
Plot the spectral DOS injected by all 4 electrodes.
Step11: Bulk density of states
Step12: Spectral density of states for all electrodes
Step13: For 2D structures one can easily plot the DOS per atom via a scatter plot in matplotlib. Here is the skeleton code for that: you should select an energy point and figure out how to extract the atom-resolved DOS (you will need to look up the documentation for the ADOS method to figure out which flag to use).
graphene = sisl.geom.graphene(orthogonal=True)
R = [0.1, 1.43]
hop = [0., -2.7]
Explanation: TBtrans is capable of calculating transport in $N\ge 1$ electrode systems. In this example we will explore a 4-terminal graphene GNR cross-bar (one zGNR, the other aGNR) system.
End of explanation
elec_y = graphene.tile(3, axis=0)
elec_y.set_nsc([1, 3, 1])
elec_y.write('elec_y.xyz')
elec_x = graphene.tile(5, axis=1)
elec_x.set_nsc([3, 1, 1])
elec_x.write('elec_x.xyz')
Explanation: Create the two electrodes in $x$ and $y$ directions. We will force the systems to be nano-ribbons, i.e. only periodic along the ribbon. In sisl there are two ways of accomplishing this.
Explicitly set number of auxiliary supercells
Add vacuum beyond the orbital interaction ranges
The below code uses the first method.
Please see if you can change the creation of elec_x by adding vacuum.
HINT: Look at the documentation for the sisl.Geometry and search for vacuum. To know the orbital distance look up maxR in the geometry class as well.
End of explanation
H_y = sisl.Hamiltonian(elec_y)
H_y.construct((R, hop))
H_y.write('ELEC_Y.nc')
H_x = sisl.Hamiltonian(elec_x)
H_x.construct((R, hop))
H_x.write('ELEC_X.nc')
Explanation: Subsequently we create the electronic structure.
End of explanation
dev_y = elec_y.tile(30, axis=1)
dev_y = dev_y.translate( -dev_y.center(what='xyz') )
dev_x = elec_x.tile(18, axis=0)
dev_x = dev_x.translate( -dev_x.center(what='xyz') )
Explanation: Now we have created the electronic structure for the electrodes. All that is needed is the electronic structure of the device region, i.e. the crossing nano-ribbons.
End of explanation
device = dev_y.add(dev_x)
device.set_nsc([1,1,1])
duplicates = []
for ia in dev_y:
idx = device.close(ia, 0.1)
if len(idx) > 1:
duplicates.append(idx[1])
device = device.remove(duplicates)
Explanation: Remove any atoms that are duplicated, i.e. when we overlay these two geometries some atoms are the same.
End of explanation
device = device.add_vacuum(70, 0).add_vacuum(20, 1)
device = device.translate( device.center(what='cell') - device.center(what='xyz') )
device.write('device.xyz')
Explanation: Can you explain why set_nsc([1, 1, 1]) is called? And if so, is it necessary to do this step?
Ensure the lattice vectors are big enough for plotting.
Try and convince yourself that the lattice vectors are unimportant for tbtrans in this example.
HINT: what is the periodicity?
End of explanation
print('elec-Y-1: semi-inf -A2: {}'.format(1))
print('elec-Y-2: semi-inf +A2: end {}'.format(len(dev_y)))
print('elec-X-1: semi-inf -A1: {}'.format(len(dev_y) + 1))
print('elec-X-2: semi-inf +A1: end {}'.format(-1))
H = sisl.Hamiltonian(device)
H.construct([R, hop])
H.write('DEVICE.nc')
Explanation: Since this system has 4 electrodes we need to tell tbtrans where the 4 electrodes are in the device. The following lines print out the fdf-lines that are appropriate for each of the electrodes (RUN.fdf is already filled correctly):
End of explanation
tbt = sisl.get_sile('siesta.TBT.nc')
Explanation: Exercises
In this example we have more than 1 transmission path. Before you run the below code which plots all relevant transmissions ($T_{ij}$ for $j>i$), consider if there are any symmetries, and if so, determine how many different transmission spectra you should expect? Please plot the geometry using your favourite geometry viewer (molden, Jmol, ...). The answer is not so trivial.
End of explanation
E = tbt.E
Eplot = partial(plt.plot, E)
# Make a shorthand version for the function (simplifies the below line)
T = tbt.transmission
t12, t13, t14, t23, t24, t34 = T(0, 1), T(0, 2), T(0, 3), T(1, 2), T(1, 3), T(2, 3)
Eplot(t12, label=r'$T_{12}$'); Eplot(t13, label=r'$T_{13}$'); Eplot(t14, label=r'$T_{14}$');
Eplot(t23, label=r'$T_{23}$'); Eplot(t24, label=r'$T_{24}$');
Eplot(t34, label=r'$T_{34}$');
plt.ylabel('Transmission'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();
Explanation: Make easy function calls for plotting energy resolved quantities:
End of explanation
# Insert plot of T12 and T21
# Insert plot of T13 and T31
# Insert plot of T14 and T41
Explanation: In RUN.fdf we have added the flag TBT.T.All which tells tbtrans to calculate all transmissions, i.e. between all $i\to j$ for all $i,j \in {1,2,3,4}$. This flag is by default False, why?
Create 3 plots each with $T_{1j}$ and $T_{j1}$ for all $j\neq1$.
End of explanation
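When comparing these plots, one relation is worth testing numerically against them (it holds here, assuming a real, spinless tight-binding Hamiltonian and no magnetic field — assumptions satisfied by this model):

$$T_{ij}(E) = T_{ji}(E)$$

Checking, e.g., $T_{12}$ against $T_{21}$ in the plots should confirm it.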
# Helper routines, this makes BDOS(...) == tbt.BDOS(..., norm='atom')
BDOS = partial(tbt.BDOS, norm='atom')
ADOS = partial(tbt.ADOS, norm='atom')
Explanation: Considering symmetries, try to figure out which transmissions ($T_{ij}$) are unique?
Plot the bulk DOS for the 2 differing electrodes.
Plot the spectral DOS injected by all 4 electrodes.
End of explanation
Eplot(..., label=r'$BDOS_1$');
Eplot(..., label=r'$BDOS_2$');
plt.ylabel('DOS [1/eV/N]'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();
Explanation: Bulk density of states:
End of explanation
Eplot(..., label=r'$ADOS_1$');
...
plt.ylabel('DOS [1/eV/N]'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();
Explanation: Spectral density of states for all electrodes:
- As a final exercise you can explore the details of the density of states for single atoms. Take for instance atom 205 (204 in Python index) which is in both GNRs at the crossing.
Feel free to play around with different atoms, subset of atoms (pass a list) etc.
End of explanation
Eidx = tbt.Eindex(...)
ADOS = [tbt.ADOS(i, ....) for i in range(4)]
f, axs = plt.subplots(2, 2, figsize=(10, 10))
a_xy = tbt.geometry.xyz[tbt.a_dev, :2]
for i in range(4):
A = ADOS[i]
A *= 100 / A.max() # normalize to maximum 100 (simply for plotting)
axs[i // 2][i % 2].scatter(a_xy[:, 0], a_xy[:, 1], A, c="bgrk"[i], alpha=.5);
plt.xlabel('x [Ang]'); plt.ylabel('y [Ang]'); plt.axis('equal');
Explanation: For 2D structures one can easily plot the DOS per atom via a scatter plot in matplotlib. Here is the skeleton code for that: you should select an energy point and figure out how to extract the atom-resolved DOS (you will need to look up the documentation for the ADOS method to figure out which flag to use).
End of explanation |
7,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create an interactive TFX pipeline
This notebook is the first of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to create and run a TFX pipeline that performs the following steps
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Instantiate the interactive context
Instantiate an interactive context so that you can execute the TFX pipeline components interactively in the notebook. The interactive context creates a local SQLite database in the LOCAL_MLMD_SQLLITE directory to use as its ML Metadata (MLMD) store.
Step5: Executing the pipeline steps
The components that implement the pipeline steps are in the tfx_pipeline/bq_components.py module.
Step6: Step 1
Step7: Step 2
Step8: Step 3
Step10: Step 4
Step11: Step 5
Step12: Read a sample embedding from the exported TFRecord files using the schema
Step13: Step 6
Step14: Run the stats_validator, which is an instance of the ExampleValidator component. This component validates the output statistics against the schema. It accepts the Statistics artifact produced by the stats_generator step and the Schema artifact produced by the schema_importer step, and produces Anomalies artifacts as output if any anomalies are found.
Step15: Step 7
Step16: Validate the lookup model
Use the TFX InfraValidator to make sure the created model is mechanically fine and can be loaded successfully.
Step17: Step 8
Step18: Step 9
Step19: Step 10
Step20: Step 11
Step21: Check the local MLMD store
Step22: View the model registry directory | Python Code:
%load_ext autoreload
%autoreload 2
!pip install -U -q tfx
Explanation: Create an interactive TFX pipeline
This notebook is the first of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to create and run a TFX pipeline that performs the following steps:
Compute PMI on item co-occurrence data by using a custom Python function component.
Train a BigQuery ML matrix factorization model on the PMI data to learn item embeddings by using a custom Python function component.
Extract the embeddings from the model to a BigQuery table by using a custom Python function component.
Export the embeddings in TFRecord format by using the standard BigQueryExampleGen component.
Import the schema for the embeddings by using the standard ImporterNode component.
Validate the embeddings against the imported schema by using the standard StatisticsGen and ExampleValidator components.
Create an embedding lookup SavedModel by using the standard Trainer component.
Push the embedding lookup model to a model registry directory by using the standard Pusher component.
Build the ScaNN index by using the standard Trainer component.
Evaluate and validate the ScaNN index latency and recall by implementing a TFX custom component.
Push the ScaNN index to a model registry directory by using the standard Pusher component.
The tfx_pipeline directory contains the source code for the TFX pipeline implementation.
Before starting this notebook, you must run the 00_prep_bq_procedures notebook to complete the solution prerequisites.
After completing this notebook, run the tfx02_deploy_run notebook to deploy the pipeline.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
End of explanation
import logging
import os
import numpy as np
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tfx
from tensorflow_transform.tf_metadata import schema_utils
logging.getLogger().setLevel(logging.INFO)
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
Explanation: Import libraries
End of explanation
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucket" # Change to the bucket you created.
BQ_DATASET_NAME = "recommendations"
ARTIFACT_STORE = f"gs://{BUCKET}/tfx_artifact_store"
LOCAL_MLMD_SQLLITE = "mlmd/mlmd.sqllite"
PIPELINE_NAME = "tfx_bqml_scann"
EMBEDDING_LOOKUP_MODEL_NAME = "embeddings_lookup"
SCANN_INDEX_MODEL_NAME = "embeddings_scann"
PIPELINE_ROOT = os.path.join(ARTIFACT_STORE, f"{PIPELINE_NAME}_interactive")
MODEL_REGISTRY_DIR = os.path.join(ARTIFACT_STORE, "model_registry_interactive")
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
CLEAN_ARTIFACTS = True
if CLEAN_ARTIFACTS:
if tf.io.gfile.exists(PIPELINE_ROOT):
print("Removing previous artifacts...")
tf.io.gfile.rmtree(PIPELINE_ROOT)
if tf.io.gfile.exists("mlmd"):
print("Removing local mlmd SQLite...")
tf.io.gfile.rmtree("mlmd")
if not tf.io.gfile.exists("mlmd"):
print("Creating mlmd directory...")
tf.io.gfile.mkdir("mlmd")
print(f"Pipeline artifacts directory: {PIPELINE_ROOT}")
print(f"Model registry directory: {MODEL_REGISTRY_DIR}")
print(f"Local metadata SQLite path: {LOCAL_MLMD_SQLLITE}")
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration.experimental.interactive.interactive_context import \
InteractiveContext
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = LOCAL_MLMD_SQLLITE
connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE
mlmd_store = mlmd.metadata_store.MetadataStore(connection_config)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=connection_config,
)
Explanation: Instantiate the interactive context
Instantiate an interactive context so that you can execute the TFX pipeline components interactively in the notebook. The interactive context creates a local SQLite database in the LOCAL_MLMD_SQLLITE directory to use as its ML Metadata (MLMD) store.
End of explanation
from tfx_pipeline import bq_components
Explanation: Executing the pipeline steps
The components that implement the pipeline steps are in the tfx_pipeline/bq_components.py module.
End of explanation
pmi_computer = bq_components.compute_pmi(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
min_item_frequency=15,
max_group_size=100,
)
context.run(pmi_computer)
pmi_computer.outputs.item_cooc.get()[0].get_string_custom_property("bq_result_table")
Explanation: Step 1: Compute PMI
Run the pmi_computer step, which is an instance of the compute_pmi custom Python function component. This component executes the sp_ComputePMI stored procedure in BigQuery and returns the name of the resulting table as a custom property.
End of explanation
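Conceptually, the stored procedure scores how much more often two items co-occur than chance predicts. A toy, stdlib-only sketch of pointwise mutual information (the exact marginals and frequency filters used by sp_ComputePMI may differ):

```python
import math
from collections import defaultdict

def compute_pmi(cooc):
    """cooc: {(item_i, item_j): co-occurrence count} -> {pair: PMI score}."""
    total = sum(cooc.values())
    marginal = defaultdict(int)
    for (i, j), count in cooc.items():
        marginal[i] += count
        marginal[j] += count
    pmi = {}
    for (i, j), count in cooc.items():
        p_ij = count / total
        p_i = marginal[i] / (2 * total)  # each pair contributes to two marginals
        p_j = marginal[j] / (2 * total)
        pmi[(i, j)] = math.log(p_ij / (p_i * p_j))
    return pmi

pmi = compute_pmi({("a", "b"): 8, ("a", "c"): 1, ("b", "c"): 1})
print(pmi[("a", "b")] > pmi[("a", "c")])  # frequent pair scores higher -> True
```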
bqml_trainer = bq_components.train_item_matching_model(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
item_cooc=pmi_computer.outputs.item_cooc,
dimensions=50,
)
context.run(bqml_trainer)
bqml_trainer.outputs.bq_model.get()[0].get_string_custom_property("bq_model_name")
Explanation: Step 2: Train the BigQuery ML matrix factorization model
Run the bqml_trainer step, which is an instance of the train_item_matching_model custom Python function component. This component executes the sp_TrainItemMatchingModel stored procedure in BigQuery and returns the name of the resulting model as a custom property.
End of explanation
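Schematically (a hedged sketch of the objective, not the exact BigQuery ML formulation), the factorization looks for item vectors and biases that reproduce the PMI scores:

$$\min_{U,V,b}\ \sum_{(i,j)} \big(\mathrm{pmi}(i,j) - \mathbf{u}_i^{\top}\mathbf{v}_j - b_i - b_j\big)^2 + \lambda\big(\lVert U\rVert_F^2 + \lVert V\rVert_F^2\big)$$

The per-item embedding vectors and biases that solve this are what the next step extracts to a BigQuery table.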
embeddings_extractor = bq_components.extract_embeddings(
project_id=PROJECT_ID,
bq_dataset=BQ_DATASET_NAME,
bq_model=bqml_trainer.outputs.bq_model,
)
context.run(embeddings_extractor)
embeddings_extractor.outputs.item_embeddings.get()[0].get_string_custom_property(
"bq_result_table"
)
Explanation: Step 3: Extract the trained embeddings
Run the embeddings_extractor step, which is an instance of the extract_embeddings custom Python function component. This component executes the sp_ExractEmbeddings stored procedure in BigQuery and returns the name of the resulting table as a custom property.
End of explanation
from tfx.extensions.google_cloud_big_query.example_gen.component import \
BigQueryExampleGen
from tfx.proto import example_gen_pb2
query = f"""
SELECT item_Id, embedding, bias,
FROM {BQ_DATASET_NAME}.item_embeddings
LIMIT 1000
"""
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=1)]
)
)
embeddings_exporter = BigQueryExampleGen(query=query, output_config=output_config)
beam_pipeline_args = [
"--runner=DirectRunner",
f"--project={PROJECT_ID}",
f"--temp_location=gs://{BUCKET}/bqml_scann/beam/temp",
]
context.run(embeddings_exporter, beam_pipeline_args=beam_pipeline_args)
Explanation: Step 4: Export the embeddings in TFRecord format
Run the embeddings_exporter step, which is an instance of the BigQueryExampleGen standard component. This component uses a SQL query to read the embedding records from BigQuery and produces an Examples artifact containing training and evaluation datasets as an output. It then exports these datasets in TFRecord format by using a Beam pipeline. This pipeline can be run using the DirectRunner or DataflowRunner. Note that in this interactive context, the number of embedding records read is limited to 1000, and the runner of the Beam pipeline is set to DirectRunner.
End of explanation
schema_importer = tfx.components.ImporterNode(
source_uri="tfx_pipeline/schema",
artifact_type=tfx.types.standard_artifacts.Schema,
instance_name="SchemaImporter",
)
context.run(schema_importer)
context.show(schema_importer.outputs.result)
Explanation: Step 5: Import the schema for the embeddings
Run the schema_importer step, which is an instance of the ImporterNode standard component. This component reads the schema.pbtxt file from the solution's schema directory, and produces a Schema artifact as an output. The schema is used to validate the embedding files exported from BigQuery, and to parse the embedding records in the TFRecord files when they are read in the training components.
End of explanation
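For orientation, a schema for the three exported features could look roughly like the following (a hypothetical sketch in TensorFlow Metadata text format — the authoritative file is the one shipped in tfx_pipeline/schema):

```proto
feature {
  name: "item_Id"
  type: BYTES
  presence { min_fraction: 1.0 }
}
feature {
  name: "embedding"
  type: FLOAT
  shape { dim { size: 50 } }
}
feature {
  name: "bias"
  type: FLOAT
}
```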
schema_file = schema_importer.outputs.result.get()[0].uri + "/schema.pbtxt"
schema = tfdv.load_schema_text(schema_file)
feature_sepc = schema_utils.schema_as_feature_spec(schema).feature_spec
data_uri = embeddings_exporter.outputs.examples.get()[0].uri + "/train/*"
def _gzip_reader_fn(filenames):
return tf.data.TFRecordDataset(filenames, compression_type="GZIP")
dataset = tf.data.experimental.make_batched_features_dataset(
data_uri,
batch_size=1,
num_epochs=1,
features=feature_sepc,
reader=_gzip_reader_fn,
shuffle=True,
)
counter = 0
for _ in dataset:
counter += 1
print(f"Number of records: {counter}")
print("")
for batch in dataset.take(1):
print(f'item: {batch["item_Id"].numpy()[0][0].decode()}')
print(f'embedding vector: {batch["embedding"].numpy()[0]}')
Explanation: Read a sample embedding from the exported TFRecord files using the schema
End of explanation
stats_generator = tfx.components.StatisticsGen(
examples=embeddings_exporter.outputs.examples,
)
context.run(stats_generator)
Explanation: Step 6: Validate the embeddings against the imported schema
Run the stats_generator step, which is an instance of the StatisticsGen standard component. This component accepts the output Examples artifact from the embeddings_exporter step and computes descriptive statistics for these examples by using an Apache Beam pipeline. The component produces a Statistics artifact as an output.
End of explanation
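As a rough stand-in for what "descriptive statistics" means in this step (StatisticsGen computes far richer per-feature statistics over Examples via Apache Beam; this is only the idea):

```python
# Minimal sketch of per-feature descriptive statistics.
import statistics

def describe(values):
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }

stats = describe([0.1, 0.2, 0.3, 0.4])
```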
stats_validator = tfx.components.ExampleValidator(
statistics=stats_generator.outputs.statistics,
schema=schema_importer.outputs.result,
)
context.run(stats_validator)
context.show(stats_validator.outputs.anomalies)
Explanation: Run the stats_validator, which is an instance of the ExampleValidator component. This component validates the output statistics against the schema. It accepts the Statistics artifact produced by the stats_generator step and the Schema artifact produced by the schema_importer step, and produces an Anomalies artifact as output if any anomalies are found.
End of explanation
from tfx.components.base import executor_spec
from tfx.components.trainer import executor as trainer_executor
_module_file = "tfx_pipeline/lookup_creator.py"
embedding_lookup_creator = tfx.components.Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=_module_file,
train_args={"splits": ["train"], "num_steps": 0},
eval_args={"splits": ["train"], "num_steps": 0},
schema=schema_importer.outputs.result,
examples=embeddings_exporter.outputs.examples,
)
context.run(embedding_lookup_creator)
Explanation: Step 7: Create an embedding lookup SavedModel
Runs the embedding_lookup_creator step, which is an instance of the Trainer standard component. This component accepts the Schema artifact from the schema_importer step and the Examples artifact from the embeddings_exporter step as inputs, executes the lookup_creator.py module, and produces an embedding lookup Model artifact as an output.
End of explanation
from tfx.proto import infra_validator_pb2
serving_config = infra_validator_pb2.ServingSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServing(tags=["latest"]),
local_docker=infra_validator_pb2.LocalDockerConfig(),
)
validation_config = infra_validator_pb2.ValidationSpec(
max_loading_time_seconds=60,
num_tries=3,
)
infra_validator = tfx.components.InfraValidator(
model=embedding_lookup_creator.outputs.model,
serving_spec=serving_config,
validation_spec=validation_config,
)
context.run(infra_validator)
tf.io.gfile.listdir(infra_validator.outputs.blessing.get()[0].uri)
Explanation: Validate the lookup model
Use the TFX InfraValidator to make sure the created model is mechanically fine and can be loaded successfully.
End of explanation
embedding_lookup_pusher = tfx.components.Pusher(
model=embedding_lookup_creator.outputs.model,
infra_blessing=infra_validator.outputs.blessing,
push_destination=tfx.proto.pusher_pb2.PushDestination(
filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(MODEL_REGISTRY_DIR, EMBEDDING_LOOKUP_MODEL_NAME)
)
),
)
context.run(embedding_lookup_pusher)
lookup_savedmodel_dir = embedding_lookup_pusher.outputs.pushed_model.get()[
0
].get_string_custom_property("pushed_destination")
!saved_model_cli show --dir {lookup_savedmodel_dir} --tag_set serve --signature_def serving_default
loaded_model = tf.saved_model.load(lookup_savedmodel_dir)
vocab = [
token.strip()
for token in tf.io.gfile.GFile(
loaded_model.vocabulary_file.asset_path.numpy().decode(), "r"
).readlines()
]
input_items = [vocab[0], " ".join([vocab[1], vocab[2]]), "abc123"]
print(input_items)
output = loaded_model(input_items)
print(f"Embeddings retrieved: {len(output)}")
for idx, embedding in enumerate(output):
print(f"{input_items[idx]}: {embedding[:5]}")
Explanation: Step 8: Push the embedding lookup model to the model registry
Run the embedding_lookup_pusher step, which is an instance of the Pusher standard component. This component accepts the embedding lookup Model artifact from the embedding_lookup_creator step, and stores the SavedModel in the location specified by the MODEL_REGISTRY_DIR variable.
End of explanation
from tfx.components.base import executor_spec
from tfx.components.trainer import executor as trainer_executor
_module_file = "tfx_pipeline/scann_indexer.py"
scann_indexer = tfx.components.Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=_module_file,
train_args={"splits": ["train"], "num_steps": 0},
eval_args={"splits": ["train"], "num_steps": 0},
schema=schema_importer.outputs.result,
examples=embeddings_exporter.outputs.examples,
)
context.run(scann_indexer)
Explanation: Step 9: Build the ScaNN index
Run the scann_indexer step, which is an instance of the Trainer standard component. This component accepts the Schema artifact from the schema_importer step and the Examples artifact from the embeddings_exporter step as inputs, executes the scann_indexer.py module, and produces the ScaNN index Model artifact as an output.
End of explanation
from tfx_pipeline import scann_evaluator
index_evaluator = scann_evaluator.IndexEvaluator(
examples=embeddings_exporter.outputs.examples,
model=scann_indexer.outputs.model,
schema=schema_importer.outputs.result,
min_recall=0.8,
max_latency=0.01,
)
context.run(index_evaluator)
Explanation: Step 10: Evaluate and validate the ScaNN index
Runs the index_evaluator step, which is an instance of the IndexEvaluator custom TFX component. This component accepts the Examples artifact from the embeddings_exporter step, the Schema artifact from the schema_importer step, and ScaNN index Model artifact from the scann_indexer step. The IndexEvaluator component completes the following tasks:
Uses the schema to parse the embedding records.
Evaluates the matching latency of the index.
Compares the recall of the produced matches with respect to the exact matches.
Validates the latency and recall against the max_latency and min_recall input parameters.
When it is finished, it produces a ModelBlessing artifact as output, which indicates whether the ScaNN index passed the validation criteria or not.
The IndexEvaluator custom component is implemented in the tfx_pipeline/scann_evaluator.py module.
End of explanation
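The recall comparison described above can be sketched in plain Python (this is a hypothetical illustration, not the solution's scann_evaluator module): for each query, count how many of the exact nearest neighbours the approximate index also returned, then average over queries.

```python
# Sketch of recall of approximate matches against exact matches.
def recall(approx_matches, exact_matches):
    hits = 0
    total = 0
    for approx, exact in zip(approx_matches, exact_matches):
        hits += len(set(approx) & set(exact))
        total += len(exact)
    return hits / total

# One query: the index returned {1, 2, 3}; exact search says {1, 2, 4}.
score = recall([[1, 2, 3]], [[1, 2, 4]])
```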
embedding_scann_pusher = tfx.components.Pusher(
model=scann_indexer.outputs.model,
model_blessing=index_evaluator.outputs.blessing,
push_destination=tfx.proto.pusher_pb2.PushDestination(
filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(
base_directory=os.path.join(MODEL_REGISTRY_DIR, SCANN_INDEX_MODEL_NAME)
)
),
)
context.run(embedding_scann_pusher)
import numpy as np

from index_server.matching import ScaNNMatcher
scann_index_dir = embedding_scann_pusher.outputs.pushed_model.get()[
    0
].get_string_custom_property("pushed_destination")
scann_matcher = ScaNNMatcher(scann_index_dir)
vector = np.random.rand(50)
scann_matcher.match(vector, 5)
Explanation: Step 11: Push the ScaNN index to the model registry
Runs the embedding_scann_pusher step, which is an instance of the Pusher standard component. This component accepts the ScaNN index Model artifact from the scann_indexer step and the ModelBlessing artifact from the index_evaluator step, and stores the SavedModel in the location specified by the MODEL_REGISTRY_DIR variable.
End of explanation
mlmd_store.get_artifacts()
Explanation: Check the local MLMD store
End of explanation
!gsutil ls {MODEL_REGISTRY_DIR}
Explanation: View the model registry directory
End of explanation |
7,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris introduction course
1. The Iris Cube
Learning Outcome
Step1: 1.1 Introduction to the Iris Cube<a id='intro_to_iris_cube'></a>
The top level object in Iris is called a Cube. A Cube contains data and metadata about a single phenomenon and is an implementation of the data model interpreted from the Climate and Forecast (CF) Metadata Conventions.
Each cube has
Step2: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise
Step3: We load in the filepath fname with iris.load.
Step4: iris.load returns an iris.cube.CubeList of all the cubes found in the file. From the above print out, we can see that we have loaded two cubes from the file, one representing the "total electron content" and the other representing "electron density". We can infer further detail about the returned cubes from this printout, such as the units, dimensions and shape.
<div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise
Step5: <b><font color="brown">SAMPLE SOLUTION
Step6: To see more detail about a specific cube, we can print out a single cube from the cubelist. We can select the second cube in the cubelist with indexing, and then print out what it returns.
Step7: As before, we have an overview of the cube's dimensions as well as the cube's name and units. We also have further detail on the cube's metadata, such as the Dimension Coordinates, Auxiliary Coordinates and Attributes.
In the printout, the dimension marker 'x' shows which dimensions apply to each coordinate. For example, we can see that the latitude Auxiliary Coordinate varies along the grid_latitude and grid_longitude dimensions.
Whilst the printout of a cube gives a nice overview of the cube's metadata, we can dig deeper by inspecting the attributes of our cube object, as covered in the next section.
1.3 Cube Attributes<a id='cube_attributes'></a>
We load in a different file (using the iris.sample_data_path utility function, as before, to give us the path of the file) and index out the first cube from the cubelist that is returned.
Step8: We can see that we have loaded and selected an air_temperature cube with time, latitude and longitude dimensions and the associated Dimension coordinates. We also have a forecast_period Auxiliary coordinate which maps the time dimension. Our cube also has two scalar coordinates
Step9: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise
Step10: The standard_name, long_name and to an extent var_name are all attributes to describe the phenomenon that the cube represents. The name() method is a convenience that looks at the name attributes in the order they are listed above, returning the first non-empty string.
Step11: standard_name is restricted to be a CF standard name (see the CF standard name table).
If there is not a suitable CF standard name, cube.standard_name is set to None and the long_name is used instead.
long_name is less restrictive and can be set to be any string.
var_name is the name of a netCDF file variable in the input file, or to be used in output. This is normally unimportant, as CF data is identified by 'standard_name' instead
Step12: When renaming a cube, Iris will initially try to set cube.standard_name.
If the name is not a standard name, cube.long_name is set instead.
<div class="alert alert-block alert-warning">
<b><font color="brown">Exercise
Step13: The units attribute on a cube tells us the units of the numbers held in the data array.
Step14: We can convert the cube to another unit using the convert_units method, which will automatically update the data array.
Step15: A cube also has a dictionary for extra general purpose attributes, which can be accessed with the cube.attributes attribute
Step16: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise
Step17: 1.4 Coordinates<a id='coordinates'></a>
As we've seen, cubes need coordinate information to help us describe the underlying phenomenon. Typically a cube's coordinates are accessed with the coords or coord methods. The latter must return exactly one coordinate for the given parameter filters, where the former returns a list of matching coordinates, possibly of length 0.
For example, to access the time coordinate, and print the first 4 times
Step18: The coordinate interface is very similar to that of a cube. The attributes that exist on both cubes and coordinates are
Step19: These numbers can be converted to datetime objects with the unit's num2date method. Dates can be converted back again with the date2num method
Step20: Another important attribute on a coordinate is its coordinate system. Coordinate systems may be None for trivial coordinates, but particularly for spatial coordinates, they may be complex definitions of things such as the projection, ellipse and/or datum.
Step21: In this case, the latitude's coordinate system is a simple geographic latitude on a spherical globe of radius 6371229 (meters).
1.5 Section Review Exercise<a id='exercise'></a>
1. Load the file in iris.sample_data_path('atlantic_profiles.nc') and print the cube list. Store these cubes in a variable called cubes.
Step22: 2. Loop through each of the cubes (e.g. for cube in cubes) and print the standard name of each.
Step23: 3. Index cubes to retrieve the sea_water_potential_temperature cube.
Note
Step24: 4. Get hold of the latitude coordinate on the sea_water_potential_temperature cube. Identify whether this coordinate has bounds. Print the minimum and maximum latitude points in the cube. | Python Code:
import iris
Explanation: Iris introduction course
1. The Iris Cube
Learning Outcome: by the end of this section, you will be able to explain the capabilities and functionality of Iris Cubes and Coordinates.
Duration: 1 hour
Overview:<br>
1.1 Introduction to the Iris Cube<br>
1.2 Working with a Cube<br>
1.3 Cube Attributes<br>
1.4 Coordinates<br>
1.5 Exercise<br>
1.6 Summary of the Section
End of explanation
fname = iris.sample_data_path('space_weather.nc')
Explanation: 1.1 Introduction to the Iris Cube<a id='intro_to_iris_cube'></a>
The top level object in Iris is called a Cube. A Cube contains data and metadata about a single phenomenon and is an implementation of the data model interpreted from the Climate and Forecast (CF) Metadata Conventions.
Each cube has:
A data array (typically a NumPy array).
A "name", preferably a CF "standard name" to describe the phenomenon that the cube represents.
A collection of coordinates to describe each of the dimensions of the data array. These coordinates are split into two types:
Dimension Coordinates are numeric, monotonic and represent a single dimension of the data array. There may be only one Dimension Coordinate per data dimension.
Auxilliary Coordinates can be of any type, including discrete values such as strings, and may represent more than one data dimension.
A fuller explanation is available in the Iris user guide.
Let's take a simple example to demonstrate the Cube concept.
Suppose we have a (3, 2, 4) NumPy array:
Where dimensions 0, 1, and 2 have lengths 3, 2 and 4 respectively.
The Iris Cube to represent this data may consist of:
a standard name of "air_temperature" and units of "kelvin"
a data array of shape (3, 2, 4)
a coordinate, mapping to dimension 0, consisting of:
a standard name of "height" and units of "meters"
an array of length 3 representing the 3 height points
a coordinate, mapping to dimension 1, consisting of:
a standard name of "latitude" and units of "degrees"
an array of length 2 representing the 2 latitude points
a coordinate system such that the latitude points could be fully located on the globe
a coordinate, mapping to dimension 2, consisting of:
a standard name of "longitude" and units of "degrees"
an array of length 4 representing the 4 longitude points
a coordinate system such that the longitude points could be fully located on the globe
Pictorially the Cube has taken on more information than a simple array:
1.2 Working with a Cube<a id='working_with_a_cube'></a>
To load in a Cube from a file, we make use of the iris.load function.
<div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Take a look at the above link to see how `iris.load` is called.</p>
</div>
For the purpose of this course, we will be using the sample data provided with Iris. We use the utility function iris.sample_data_path which returns the filepath of where the sample data is installed. We assign the output filepath returned by the iris.sample_data_path function to a variable called fname.
End of explanation
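The (3, 2, 4) cube described above can be pictured with a plain-Python stand-in (no Iris involved; the coordinate point values are made up for illustration):

```python
# Toy stand-in for the air_temperature cube: a data shape plus one
# dimension coordinate per data axis. Point values are illustrative.
toy_cube = {
    "standard_name": "air_temperature",
    "units": "kelvin",
    "shape": (3, 2, 4),
    "dim_coords": {
        0: {"standard_name": "height", "units": "meters", "points": [100, 200, 300]},
        1: {"standard_name": "latitude", "units": "degrees", "points": [-45, 45]},
        2: {"standard_name": "longitude", "units": "degrees", "points": [0, 90, 180, 270]},
    },
}

# A dimension coordinate's point count must match the data dimension it maps.
for dim, coord in toy_cube["dim_coords"].items():
    assert len(coord["points"]) == toy_cube["shape"][dim]
```

A real Iris Cube adds the data array itself, coordinate systems, bounds and much more, but the shape/coordinate bookkeeping is the same idea.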
#
# edit space for user code ...
#
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Try printing <b><font style='font-family: courier'>fname</font></b>, to see where the sample data is installed on your system.</p>
</div>
End of explanation
cubes = iris.load(fname)
print(cubes)
Explanation: We load in the filepath fname with iris.load.
End of explanation
#
# edit space for user notes ...
#
Explanation: iris.load returns an iris.cube.CubeList of all the cubes found in the file. From the above print out, we can see that we have loaded two cubes from the file, one representing the "total electron content" and the other representing "electron density". We can infer further detail about the returned cubes from this printout, such as the units, dimensions and shape.
<div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>What are the dimensions of the "total electron content" cube?
<br>What are the units of the "electron_density" cube?</p>
</div>
End of explanation
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.2a
Explanation: <b><font color="brown">SAMPLE SOLUTION:</font></b>
Un-comment and execute the following, to view a possible solution, and some code.
Then run it ...
End of explanation
air_pot_temp = cubes[1]
print(air_pot_temp)
Explanation: To see more detail about a specific cube, we can print out a single cube from the cubelist. We can select the second cube in the cubelist with indexing, and then print out what it returns.
End of explanation
fname = iris.sample_data_path('A1B_north_america.nc')
cubes = iris.load(fname)
cube = cubes[0]
print(cube)
Explanation: As before, we have an overview of the cube's dimensions as well as the cube's name and units. We also have further detail on the cube's metadata, such as the Dimension Coordinates, Auxiliary Coordinates and Attributes.
In the printout, the dimension marker 'x' shows which dimensions apply to each coordinate. For example, we can see that the latitude Auxiliary Coordinate varies along the grid_latitude and grid_longitude dimensions.
Whilst the printout of a cube gives a nice overview of the cube's metadata, we can dig deeper by inspecting the attributes of our cube object, as covered in the next section.
1.3 Cube Attributes<a id='cube_attributes'></a>
We load in a different file (using the iris.sample_data_path utility function, as before, to give us the path of the file) and index out the first cube from the cubelist that is returned.
End of explanation
print(cube.shape)
print(cube.ndim)
print(type(cube.data))
Explanation: We can see that we have loaded and selected an air_temperature cube with time, latitude and longitude dimensions and the associated Dimension coordinates. We also have a forecast_period Auxiliary coordinate which maps the time dimension. Our cube also has two scalar coordinates: forecast_reference_time and height, and a cell method of mean: time (6 hour) which means that the cube contains 6-hourly mean air temperatures.
To access the values of air temperature in the cube we use the data property. This is either a NumPy array or, in some cases, a NumPy masked array. It is very important to note that for most of the supported filetypes in Iris, the cube's data isn't actually loaded until you request it via this property (either directly or indirectly). After you've accessed the data once, it is stored on the cube and thus won't be loaded from disk again.
To find the shape of a cube's data it is possible to call cube.data.shape or cube.data.ndim, but this will trigger any unloaded data to be loaded. Therefore shape and ndim are properties available directly on the cube that do not unnecessarily load data.
End of explanation
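The load-once behaviour described above can be illustrated with a toy class (this is not Iris internals, just the pattern): shape is cheap metadata known up front, while data is fetched on first access and then cached.

```python
# Toy illustration of lazy, load-once data with eagerly known shape.
class LazyArray:
    def __init__(self, shape):
        self.shape = shape
        self.ndim = len(shape)
        self._data = None
        self.load_count = 0

    @property
    def data(self):
        if self._data is None:
            self.load_count += 1  # stands in for an expensive disk read
            size = 1
            for n in self.shape:
                size *= n
            self._data = [0.0] * size
        return self._data

arr = LazyArray((240, 37, 49))
arr.shape  # no load triggered
arr.data   # first access loads
arr.data   # cached; no second load
```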
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.3a
Explanation: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>From the above output we can see that cube.data is a masked numpy array.
<br>How would you find out the fill value of this masked array?</p>
</div>
End of explanation
print(cube.standard_name)
print(cube.long_name)
print(cube.var_name)
print(cube.name())
Explanation: The standard_name, long_name and to an extent var_name are all attributes to describe the phenomenon that the cube represents. The name() method is a convenience that looks at the name attributes in the order they are listed above, returning the first non-empty string.
End of explanation
cube.rename("A name that isn't a valid CF standard name")
print(cube.standard_name)
print(cube.long_name)
print(cube.var_name)
print(cube.name())
Explanation: standard_name is restricted to be a CF standard name (see the CF standard name table).
If there is not a suitable CF standard name, cube.standard_name is set to None and the long_name is used instead.
long_name is less restrictive and can be set to be any string.
var_name is the name of a netCDF file variable in the input file, or to be used in output. This is normally unimportant, as CF data is identified by 'standard_name' instead. (Note: although they are often the same, some standard names are not valid as netCDF variable names.)
To rename a cube, it is possible to set the attributes manually, but it is generally easier to use the rename() method.
Below we rename the cube to a string that we know is not a valid CF standard name.
End of explanation
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.3b
Explanation: When renaming a cube, Iris will initially try to set cube.standard_name.
If the name is not a standard name, cube.long_name is set instead.
<div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>Take a look at the <a href=http://cfconventions.org/standard-names.html> CF standard name table</a> and try renaming the cube to an accepted name.</p>
</div>
End of explanation
print(cube.units)
print(cube.data.max())
Explanation: The units attribute on a cube tells us the units of the numbers held in the data array.
End of explanation
cube.convert_units('Celsius')
print(cube.units)
print(cube.data.max())
Explanation: We can convert the cube to another unit using the convert_units method, which will automatically update the data array.
End of explanation
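As a plain-Python aside (not part of the notebook), the Kelvin-to-Celsius conversion performed above boils down to subtracting 273.15 from every value; convert_units additionally rewrites the cube's units metadata for us:

```python
# Sketch of the numeric part of the kelvin -> Celsius conversion.
def kelvin_to_celsius(values):
    return [v - 273.15 for v in values]

converted = kelvin_to_celsius([273.15, 300.0])
```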
print(cube.attributes)
Explanation: A cube also has a dictionary for extra general purpose attributes, which can be accessed with the cube.attributes attribute:
End of explanation
#
# edit space for user code ...
#
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.3c
Explanation: <div class="alert alert-block alert-warning">
<b><font color="brown">Exercise: </font></b>
<p>Update the `cube.attributes` dictionary with a new entry.
<br>For example <b><font face="courier" color="black">{'comment':'Original data had units of degrees celsius'}</font></b>.</p>
</div>
End of explanation
time = cube.coord('time')
print(time[:4])
Explanation: 1.4 Coordinates<a id='coordinates'></a>
As we've seen, cubes need coordinate information to help us describe the underlying phenomenon. Typically a cube's coordinates are accessed with the coords or coord methods. The latter must return exactly one coordinate for the given parameter filters, where the former returns a list of matching coordinates, possibly of length 0.
For example, to access the time coordinate, and print the first 4 times:
End of explanation
print(repr(time.units))
print(time.points[:4])
print(time.bounds[:4])
Explanation: The coordinate interface is very similar to that of a cube. The attributes that exist on both cubes and coordinates are: standard_name, long_name, var_name, units, attributes and shape. Similarly, the name(), rename() and convert_units() methods also exist on a coordinate.
A coordinate does not have data, instead it has points and bounds (bounds may be None). In Iris, time coordinates are currently represented as "a number since an epoch":
End of explanation
import datetime
print(time.units.num2date(time.points[:4]))
print(time.units.date2num(datetime.datetime(1970, 2, 1)))
Explanation: These numbers can be converted to datetime objects with the unit's num2date method. Dates can be converted back again with the date2num method:
End of explanation
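The "number since an epoch" representation can be sketched with the standard library alone (real Iris coordinates delegate this to cf_units, which handles arbitrary calendars and epochs; here the unit is hard-coded as "days since 1970-01-01"):

```python
# Stdlib sketch of num2date/date2num for "days since 1970-01-01".
import datetime

EPOCH = datetime.datetime(1970, 1, 1)

def toy_num2date(days):
    return EPOCH + datetime.timedelta(days=days)

def toy_date2num(date):
    return (date - EPOCH).days

feb_first = toy_num2date(31)
```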
lat = cube.coord('latitude')
print(lat.coord_system)
Explanation: Another important attribute on a coordinate is its coordinate system. Coordinate systems may be None for trivial coordinates, but particularly for spatial coordinates, they may be complex definitions of things such as the projection, ellipse and/or datum.
End of explanation
# EDIT for user code ...
# SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ...
# %load solutions/iris_exercise_1.5a
Explanation: In this case, the latitude's coordinate system is a simple geographic latitude on a spherical globe of radius 6371229 (meters).
1.5 Section Review Exercise<a id='exercise'></a>
1. Load the file in iris.sample_data_path('atlantic_profiles.nc') and print the cube list. Store these cubes in a variable called cubes.
End of explanation
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.5b
Explanation: 2. Loop through each of the cubes (e.g. for cube in cubes) and print the standard name of each.
End of explanation
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.5c
Explanation: 3. Index cubes to retrieve the sea_water_potential_temperature cube.
Note: that indexing to extract single cubes is useful for EDA, but it is better practice to use constraints (See 3. Cube Control and Subsetting.ipynb for more information).
End of explanation
# user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_1.5d
Explanation: 4. Get hold of the latitude coordinate on the sea_water_potential_temperature cube. Identify whether this coordinate has bounds. Print the minimum and maximum latitude points in the cube.
End of explanation |
7,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Displaying text on a PmodOLED
This demonstration shows how to display text on a PmodOLED using the board.
The Digilent Pmod OLED is required. In this example it should be connected to PMODA.
Step1: You should now see the text output on the OLED, so let's try another message. | Python Code:
from pynq.overlays.base import BaseOverlay
from pynq.lib import Pmod_OLED
base = BaseOverlay("base.bit")
pmod_oled = Pmod_OLED(base.PMODA)
pmod_oled.clear()
pmod_oled.write('Welcome to \nPYNQ!')
Explanation: Displaying text on a PmodOLED
This demonstration shows how to display text on a PmodOLED using the board.
The Digilent Pmod OLED is required. In this example it should be connected to PMODA.
End of explanation
pmod_oled.clear()
pmod_oled.write('Python and Zynq\nproductivity & performance')
Explanation: You should now see the text output on the OLED, so let's try another message.
End of explanation |
7,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Command mode vs Edit mode
By default we are in COMMAND mode
<li>Press **ENTER** to edit the current cell
<li>Press **ESC** to switch back to command mode
## Main command mode shortcuts
Notebook control
Step2: Access to documentation and Code completion
Step3: Local shell commands execution
We can use a "!" at the beginning of a line to execute that command in a local shell
Step4: We can also use variables as parameters by passing them wrapped in "{}"
Step5: Output of a local shell command can also be captured, for example to be post-processed in python | Python Code:
a = 1
b = 2
def my_simple_sum(a, b):
    """Simple addition

    :param a: first number
    :param b: second number
    """
    print "Sum is:", a+b
my_simple_sum(a,b)
# Further down in the code we do some changes
a = 100
# than we can go back and re-execute just the previous cell
Explanation: Command mode vs Edit mode
By default we are in COMMAND mode
<li>Press **ENTER** to edit the current cell
<li>Press **ESC** to switch back to command mode
## Main command mode shortcuts
Notebook control:
- **00** : Restart the notebook kernel
Cells control:
- **Up/Down arrows** : move up-down on cells
- **a** : add cell above
- **b** : add cell below
- **x** : delete current cell
Editing cells:
- **Return** : enter edit mode for current cell
- **Control+/** : Toggle code comment
- **Ctrl+Shift+-** : Split cell at cursor position
- **Esc** : return to command mode
Executing cells:
- **Shift+Return** : execute the content of the current cell
More shortcuts are listed under *"Help" => "Keyboard shortcuts"*
# Cells editing
Cells have a type, which can be changed using shortcuts or the dedicated dropdown menu.<br>
This is an example of a text cell, where you can use **Markdown** tags to format your text.
You can also highlight chunks of code in almost any language.
Example of Bash script:
```shell
#!/bin/bash
# A useless script
for i in $(seq 10); do
echo Hello World
done
```
Example of C fragment:
```c
/*
 * System energy normalization
 * Returns the normalized value, in the range [0..SCHED_LOAD_SCALE],
 * corresponding to the specified energy variation.
 */
static inline int
normalize_energy(int energy_diff)
{
	u32 normalized_nrg;
	int max_delta;

#ifdef CONFIG_SCHED_DEBUG
	/* Check for boundaries */
	max_delta = schedtune_target_nrg.max_power;
	max_delta -= schedtune_target_nrg.min_power;
	WARN_ON(abs(energy_diff) >= max_delta);
#endif

	/* Do scaling using positive numbers to increase the range */
	normalized_nrg = (energy_diff < 0) ? -energy_diff : energy_diff;

	/* Scale by energy magnitude */
	normalized_nrg <<= SCHED_LOAD_SHIFT;

	/* Normalize on max energy for target platform */
	normalized_nrg = reciprocal_divide(
		normalized_nrg, schedtune_target_nrg.rdiv);

	return (energy_diff < 0) ? -normalized_nrg : normalized_nrg;
}
```
## Code flow vs execution flow
Normally cells contain code, which is executed when **Shift+Return** is pressed
End of explanation
# Use TAB to complete the function name
# Use SHIFT+Tab after the '(' to access
my_simple_sum(2,3)
Explanation: Access to documentation and Code completion
End of explanation
!pwd
!date
Explanation: Local shell commands execution
We can use a "!" at the beginning of a line to execute that command in a local shell
End of explanation
folder = "../"
!ls -la {folder} | wc -l
Explanation: We can also use variables as parameters by passing them wrapped in "{}"
End of explanation
output = !find ../../ipynb/ -name "*.ipynb"
print "Available notebooks:"
for line in output:
print line.replace('../../ipynb/', ' ')
Explanation: Output of a local shell command can also be captured, for example to be post-processed in python
End of explanation |
7,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Retrain a speech recognition model with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Prepare the dataset
To train with the default speech dataset, just run all the code below as-is.
But if you want to train with your own speech dataset, follow these steps
Step3: Generate a background noise dataset
Whether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence).
Because the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence
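The splitting step can be pictured like this (a sketch over an in-memory list of samples; the notebook itself works on WAV files on disk, and the toy sample rate below is an assumption for illustration):

```python
# Sketch of chopping a long recording into fixed one-second clips,
# dropping the leftover tail.
def split_into_clips(samples, sample_rate, clip_seconds=1):
    clip_len = sample_rate * clip_seconds
    n_clips = len(samples) // clip_len
    return [samples[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

# A fake 2.5-second mono recording at a toy 8-samples-per-second rate.
clips = split_into_clips(list(range(20)), sample_rate=8)
```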
Step4: Note
Step5: Prepare the speech commands dataset
We already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model.
This dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. So the following code does a few things
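The pruning idea can be sketched in plain Python (the class names, cap, and seed below are illustrative assumptions, not the notebook's actual choices): keep only the chosen command classes and cap how many samples each one contributes.

```python
# Sketch of keeping selected classes and capping samples per class.
import random

def prune_dataset(files_by_class, keep_classes, max_per_class, seed=0):
    rng = random.Random(seed)
    pruned = {}
    for name in keep_classes:
        files = sorted(files_by_class[name])
        rng.shuffle(files)
        pruned[name] = files[:max_per_class]
    return pruned

demo = {"up": ["up_%d.wav" % i for i in range(2000)],
        "down": ["down_%d.wav" % i for i in range(2000)],
        "stop": ["stop_%d.wav" % i for i in range(2000)]}
small = prune_dataset(demo, keep_classes=["up", "down"], max_per_class=150)
```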
Step6: Prepare a custom dataset
If you want to train the model with our own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset
Step7: After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.
The following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.
Step8: Play a sample
To be sure the dataset looks correct, let's play a random sample from the test set
Step9: Define the model
When using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.
The model expects input as an audio sample that's 44.1 kHz, and just under a second long
Step10: Load your dataset
Now you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec.
We already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. Now we'll create a DataLoader for each split
Step11: Load a custom dataset
Note
Step12: Train the model
Now we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training.
If you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set.
Note
Step13: Review the model performance
Even if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here
Step15: View the confusion matrix
When training a classification model such as this one, it's also useful to inspect the confusion matrix. The confusion matrix gives you a detailed visual representation of how well your classifier performs for each classification in your test data.
Step16: Export the model
The last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.
When exporting a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)
Step19: Run inference with TF Lite model
Now your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.
Step20: To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.
Step21: Download the TF Lite model
Now you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from .tflite file metadata, as shown in the previous inferencing example. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
!sudo apt -y install libportaudio2
!pip install tflite-model-maker
import os
import glob
import random
import shutil
import librosa
import soundfile as sf
from IPython.display import Audio
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import tflite_model_maker as mm
from tflite_model_maker import audio_classifier
from tflite_model_maker.config import ExportFormat
print(f"TensorFlow Version: {tf.__version__}")
print(f"Model Maker Version: {mm.__version__}")
Explanation: Retrain a speech recognition model with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_speech_recognition"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_speech_recognition.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a speech recognition model that can classify spoken words or short phrases using one-second sound samples. The Model Maker library uses transfer learning to retrain an existing TensorFlow model with a new dataset, which reduces the amount of sample data and time required for training.
By default, this notebook retrains the model (BrowserFft, from the TFJS Speech Command Recognizer) using a subset of words from the speech commands dataset (such as "up," "down," "left," and "right"). Then it exports a TFLite model that you can run on a mobile device or embedded system (such as a Raspberry Pi). It also exports the trained model as a TensorFlow SavedModel.
This notebook is also designed to accept a custom dataset of WAV files, uploaded to Colab in a ZIP file. The more samples you have for each class, the better your accuracy will be, but because the transfer learning process uses feature embeddings from the pre-trained model, you can still get a fairly accurate model with only a few dozen samples in each of your classes.
Note: The model we'll be training is optimized for speech recognition with one-second samples. If you want to perform more generic audio classification (such as detecting different types of music), we suggest you instead follow this Colab to retrain an audio classifier.
If you want to run the notebook with the default speech dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. However, if you want to use your own dataset, then continue down to Prepare the dataset and follow the instructions there.
Import the required packages
You'll need TensorFlow, TFLite Model Maker, and some modules for audio manipulation, playback, and visualizations.
End of explanation
use_custom_dataset = False #@param ["False", "True"] {type:"raw"}
Explanation: Prepare the dataset
To train with the default speech dataset, just run all the code below as-is.
But if you want to train with your own speech dataset, follow these steps:
Note:
The model you'll retrain expects input data to be roughly one second of audio at 44.1 kHz. Model Maker performs automatic resampling for the training dataset, so there's no need to resample your dataset if it has a sample rate other than 44.1 kHz. But beware that audio samples longer than one second will be split into multiple one-second chunks, and the final chunk will be discarded if it's shorter than one second.
Be sure each sample in your dataset is in WAV file format, about one second long. Then create a ZIP file with all your WAV files, organized into separate subfolders for each classification. For example, each sample for a speech command "yes" should be in a subfolder named "yes". Even if you have only one class, the samples must be saved in a subdirectory with the class name as the directory name. (This script assumes your dataset is not split into train/validation/test sets and performs that split for you.)
Click the Files tab in the left panel and just drag-drop your ZIP file there to upload it.
Use the following drop-down option to set use_custom_dataset to True.
Then skip to Prepare a custom audio dataset to specify your ZIP filename and dataset directory name.
End of explanation
tf.keras.utils.get_file('speech_commands_v0.01.tar.gz',
'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz',
cache_dir='./',
cache_subdir='dataset-speech',
extract=True)
tf.keras.utils.get_file('background_audio.zip',
'https://storage.googleapis.com/download.tensorflow.org/models/tflite/sound_classification/background_audio.zip',
cache_dir='./',
cache_subdir='dataset-background',
extract=True)
Explanation: Generate a background noise dataset
Whether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence).
Because the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence:
End of explanation
# Create a list of all the background wav files
files = glob.glob(os.path.join('./dataset-speech/_background_noise_', '*.wav'))
files = files + glob.glob(os.path.join('./dataset-background', '*.wav'))
background_dir = './background'
os.makedirs(background_dir, exist_ok=True)
# Loop through all files and split each into several one-second wav files
for file in files:
    filename = os.path.basename(os.path.normpath(file))
    print('Splitting', filename)
    name = os.path.splitext(filename)[0]
    rate = librosa.get_samplerate(file)
    length = round(librosa.get_duration(filename=file))
    for i in range(length - 1):
        start = i * rate
        stop = (i * rate) + rate
        data, _ = sf.read(file, start=start, stop=stop)
        sf.write(os.path.join(background_dir, name + str(i) + '.wav'), data, rate)
Explanation: Note: Although there is a newer version available, we're using v0.01 of the speech commands dataset because it's a smaller download. v0.01 includes 30 commands, while v0.02 adds five more ("backward", "forward", "follow", "learn", and "visual").
End of explanation
if not use_custom_dataset:
    commands = [ "up", "down", "left", "right", "go", "stop", "on", "off", "background"]
    dataset_dir = './dataset-speech'
    test_dir = './dataset-test'
    # Move the processed background samples
    shutil.move(background_dir, os.path.join(dataset_dir, 'background'))
    # Delete all directories that are not in our commands list
    dirs = glob.glob(os.path.join(dataset_dir, '*/'))
    for dir in dirs:
        name = os.path.basename(os.path.normpath(dir))
        if name not in commands:
            shutil.rmtree(dir)
    # Count is per class
    sample_count = 150
    test_data_ratio = 0.2
    test_count = round(sample_count * test_data_ratio)
    # Loop through child directories (each class of wav files)
    dirs = glob.glob(os.path.join(dataset_dir, '*/'))
    for dir in dirs:
        files = glob.glob(os.path.join(dir, '*.wav'))
        random.seed(42)
        random.shuffle(files)
        # Move test samples:
        for file in files[sample_count:sample_count + test_count]:
            class_dir = os.path.basename(os.path.normpath(dir))
            os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)
            os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))
        # Delete remaining samples
        for file in files[sample_count + test_count:]:
            os.remove(file)
Explanation: Prepare the speech commands dataset
We already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model.
This dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. So the following code does a few things:
Specify which classifications we want to use, and delete the rest.
Keep only 150 samples of each class for training (to prove that transfer learning works well with smaller datasets and simply to reduce the training time).
Create a separate directory for a test dataset so we can easily run inference with them later.
End of explanation
if use_custom_dataset:
    # Specify the ZIP file you uploaded:
    !unzip YOUR-FILENAME.zip
    # Specify the unzipped path to your custom dataset
    # (this path contains all the subfolders with classification names):
    dataset_dir = './YOUR-DIRNAME'
Explanation: Prepare a custom dataset
If you want to train the model with our own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset:
End of explanation
def move_background_dataset(dataset_dir):
    dest_dir = os.path.join(dataset_dir, 'background')
    if os.path.exists(dest_dir):
        files = glob.glob(os.path.join(background_dir, '*.wav'))
        for file in files:
            shutil.move(file, dest_dir)
    else:
        shutil.move(background_dir, dest_dir)

if use_custom_dataset:
    # Move background samples into custom dataset
    move_background_dataset(dataset_dir)
    # Now we separate some of the files that we'll use for testing:
    test_dir = './dataset-test'
    test_data_ratio = 0.2
    dirs = glob.glob(os.path.join(dataset_dir, '*/'))
    for dir in dirs:
        files = glob.glob(os.path.join(dir, '*.wav'))
        test_count = round(len(files) * test_data_ratio)
        random.seed(42)
        random.shuffle(files)
        # Move test samples:
        for file in files[:test_count]:
            class_dir = os.path.basename(os.path.normpath(dir))
            os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)
            os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))
        print('Moved', test_count, 'samples from', class_dir)
Explanation: After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.
The following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.
End of explanation
def get_random_audio_file(samples_dir):
    files = os.path.abspath(os.path.join(samples_dir, '*/*.wav'))
    files_list = glob.glob(files)
    random_audio_path = random.choice(files_list)
    return random_audio_path

def show_sample(audio_path):
    audio_data, sample_rate = sf.read(audio_path)
    class_name = os.path.basename(os.path.dirname(audio_path))
    print(f'Class: {class_name}')
    print(f'File: {audio_path}')
    print(f'Sample rate: {sample_rate}')
    print(f'Sample length: {len(audio_data)}')
    plt.title(class_name)
    plt.plot(audio_data)
    display(Audio(audio_data, rate=sample_rate))
random_audio = get_random_audio_file(test_dir)
show_sample(random_audio)
Explanation: Play a sample
To be sure the dataset looks correct, let's play a random sample from the test set:
End of explanation
spec = audio_classifier.BrowserFftSpec()
Explanation: Define the model
When using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.
The model expects input as an audio sample that's 44.1 kHz, and just under a second long: the exact sample length must be 44034 frames.
You don't need to do any resampling with your training dataset. Model Maker takes care of that for you. But when you later run inference, you must be sure that your input matches that expected format.
All you need to do here is instantiate the BrowserFftSpec:
End of explanation
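Since the exact input length is fixed (44034 frames at 44.1 kHz, as stated above), a sample can be padded or truncated to fit before inference. A minimal sketch of that fitting step:

```python
import numpy as np

EXPECTED_SAMPLES = 44034  # input length expected by the BrowserFft model

def fit_to_model_input(audio):
    # Zero-pad short clips and truncate long ones so the shape matches.
    if len(audio) < EXPECTED_SAMPLES:
        audio = np.pad(audio, (0, EXPECTED_SAMPLES - len(audio)))
    return audio[:EXPECTED_SAMPLES]

print(fit_to_model_input(np.ones(10)).shape)     # (44034,)
print(fit_to_model_input(np.ones(50000)).shape)  # (44034,)
```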
if not use_custom_dataset:
    train_data_ratio = 0.8
    train_data = audio_classifier.DataLoader.from_folder(
        spec, dataset_dir, cache=True)
    train_data, validation_data = train_data.split(train_data_ratio)
    test_data = audio_classifier.DataLoader.from_folder(
        spec, test_dir, cache=True)
Explanation: Load your dataset
Now you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec.
We already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. Now we'll create a DataLoader for each split: the training set, the validation set, and the test set.
Load the speech commands dataset
End of explanation
if use_custom_dataset:
    train_data_ratio = 0.8
    train_data = audio_classifier.DataLoader.from_folder(
        spec, dataset_dir, cache=True)
    train_data, validation_data = train_data.split(train_data_ratio)
    test_data = audio_classifier.DataLoader.from_folder(
        spec, test_dir, cache=True)
Explanation: Load a custom dataset
Note: Setting cache=True is important to make training faster (especially when the dataset must be re-sampled) but it will also require more RAM to hold the data. If you use a very large custom dataset, caching might exceed your RAM capacity.
End of explanation
# If your dataset has fewer than 100 samples per class,
# you might want to try a smaller batch size
batch_size = 25
epochs = 25
model = audio_classifier.create(train_data, spec, validation_data, batch_size, epochs)
Explanation: Train the model
Now we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training.
If you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set.
Note: The first epoch takes longer because it must create the cache.
End of explanation
model.evaluate(test_data)
Explanation: Review the model performance
Even if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here:
End of explanation
def show_confusion_matrix(confusion, test_labels):
    """Compute confusion matrix and normalize."""
    confusion_normalized = confusion.astype("float") / confusion.sum(axis=1)[:, np.newaxis]
    sns.set(rc={'figure.figsize': (6, 6)})
    sns.heatmap(
        confusion_normalized, xticklabels=test_labels, yticklabels=test_labels,
        cmap='Blues', annot=True, fmt='.2f', square=True, cbar=False)
    plt.title("Confusion matrix")
    plt.ylabel("True label")
    plt.xlabel("Predicted label")
confusion_matrix = model.confusion_matrix(test_data)
show_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)
Explanation: View the confusion matrix
When training a classification model such as this one, it's also useful to inspect the confusion matrix. The confusion matrix gives you a detailed visual representation of how well your classifier performs for each classification in your test data.
End of explanation
TFLITE_FILENAME = 'browserfft-speech.tflite'
SAVE_PATH = './models'
print(f'Exporting the model to {SAVE_PATH}')
model.export(SAVE_PATH, tflite_filename=TFLITE_FILENAME)
model.export(SAVE_PATH, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])
Explanation: Export the model
The last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.
When exporting a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)
End of explanation
# This library provides the TFLite metadata API
! pip install -q tflite_support
from tflite_support import metadata
import json
def get_labels(model):
    """Returns a list of labels, extracted from the model metadata."""
    displayer = metadata.MetadataDisplayer.with_model_file(model)
    labels_file = displayer.get_packed_associated_file_list()[0]
    labels = displayer.get_associated_file_buffer(labels_file).decode()
    return [line for line in labels.split('\n')]

def get_input_sample_rate(model):
    """Returns the model's expected sample rate, from the model metadata."""
    displayer = metadata.MetadataDisplayer.with_model_file(model)
    metadata_json = json.loads(displayer.get_metadata_json())
    input_tensor_metadata = metadata_json['subgraph_metadata'][0][
        'input_tensor_metadata'][0]
    input_content_props = input_tensor_metadata['content']['content_properties']
    return input_content_props['sample_rate']
Explanation: Run inference with TF Lite model
Now your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.
End of explanation
# Get a WAV file for inference and list of labels from the model
tflite_file = os.path.join(SAVE_PATH, TFLITE_FILENAME)
labels = get_labels(tflite_file)
random_audio = get_random_audio_file(test_dir)
# Ensure the audio sample fits the model input
interpreter = tf.lite.Interpreter(tflite_file)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_size = input_details[0]['shape'][1]
sample_rate = get_input_sample_rate(tflite_file)
audio_data, _ = librosa.load(random_audio, sr=sample_rate)
if len(audio_data) < input_size:
    audio_data.resize(input_size)
audio_data = np.expand_dims(audio_data[:input_size], axis=0)
# Run inference
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], audio_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
# Display prediction and ground truth
top_index = np.argmax(output_data[0])
label = labels[top_index]
score = output_data[0][top_index]
print('---prediction---')
print(f'Class: {label}\nScore: {score}')
print('----truth----')
show_sample(random_audio)
Explanation: To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.
End of explanation
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download(tflite_file)
Explanation: Download the TF Lite model
Now you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from .tflite file metadata, as shown in the previous inferencing example.
End of explanation |
7,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example uses a mesh generated with GID. The mesh uses tetrahedral elements.
Step1: Setting the parameters
First, we set the parameters.
Step2: Loading the mesh
We load the mesh from an external file.
Step3: We export the mesh to PNG images with a Paraview script to check it.
Step4: Mesh views from the side and from above. We can see that this is the mesh file of a tripod.
Step5: Setting up the FEM and the volume quadrature method
From the mesh stored in the variable m, we create one variable for storing the displacement response and one for handling the data at each node. A distinctive feature of GetFEM++ is its wide choice of integration methods.
Step6: mfu and mfd hold the Lagrange elements $Q_3$ and $Q_1$, respectively.
Step7: We assign the classical Lagrange element $P_k$ as the FEM method.
| degree | dimension | d.o.f. number | class | vectorial | $\tau$-equivalent | Polynomial |
|
Step8: We use a degree-5 tetrahedron rule with 15 integration points as the volume quadrature method.
<img src="getfemlistintmethodtetrahedron5.png">
Step9: Setting the boundary conditions
Finally, we set the boundary conditions. Here we apply a NEUMANN condition on the top of the tripod and a DIRICHLET condition on the bottom.
Step10: We create the boundary regions.
Step11: Assembling the load vector and the stiffness matrix
Step12: We set the DIRICHLET (fixed-end) condition.
Step13: Post-processing
We output the displacements computed above as post-processing results. The following commands export a VTK file with the displacement results.
import getfem as gf
import numpy as np
Explanation: This example uses a mesh generated with GID. The mesh uses tetrahedral elements.
End of explanation
file_msh = 'tripod.GiD.msh'
degree = 2
linear = False
incompressible = False # ensure that degree > 1 when incompressible is on..
E = 1e3
Nu = 0.3
Lambda = E*Nu/((1+Nu)*(1-2*Nu))
Mu = E/(2*(1+Nu))
Explanation: Setting the parameters
First, we set the parameters.
End of explanation
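As a quick sanity check on the Lamé parameters computed above, we can invert the standard relations and recover Young's modulus $E$ and Poisson's ratio $\nu$:

```python
# Recover Young's modulus and Poisson's ratio from (Lambda, Mu):
#   E  = Mu * (3*Lambda + 2*Mu) / (Lambda + Mu)
#   Nu = Lambda / (2 * (Lambda + Mu))
E, Nu = 1e3, 0.3
Lambda = E * Nu / ((1 + Nu) * (1 - 2 * Nu))
Mu = E / (2 * (1 + Nu))

E_back = Mu * (3 * Lambda + 2 * Mu) / (Lambda + Mu)
Nu_back = Lambda / (2 * (Lambda + Mu))
print(round(E_back, 6), round(Nu_back, 6))  # 1000.0 0.3
```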
m = gf.Mesh('import','gid',file_msh)
m.set('optimize_structure')
Explanation: Loading the mesh
We load the mesh from an external file.
End of explanation
m.export_to_vtk('m.vtk')
%%writefile plot.py
try: paraview.simple
except: from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
m_vtk = LegacyVTKReader( FileNames=['m.vtk'] )
RenderView2 = CreateRenderView()
RenderView2.CompressorConfig = 'vtkSquirtCompressor 0 3'
RenderView2.UseLight = 1
RenderView2.LightSwitch = 0
RenderView2.Background = [0.31999694819562063, 0.3400015259021897, 0.4299992370489052]
RenderView2.CenterOfRotation = [9.392949104309082, 1.5, 0.0]
AnimationScene1 = GetAnimationScene()
AnimationScene1.ViewModules = RenderView2
DataRepresentation2 = Show()
DataRepresentation2.ScaleFactor = 8.50093994140625
DataRepresentation2.ScalarOpacityUnitDistance = 8.145737909998818
DataRepresentation2.EdgeColor = [0.0, 0.0, 0.5000076295109483]
RenderView2.CameraPosition = [9.392949104309082, 1.5, 183.2819944361333]
RenderView2.OrientationAxesVisibility = 0
RenderView2.CameraClippingRange = [96.86482207477977, 292.5054971187886]
RenderView2.RemoteRenderThreshold = 3.0
RenderView2.Background = [1.0, 1.0, 1.0]
RenderView2.CameraFocalPoint = [9.392949104309082, 1.5, 0.0]
RenderView2.CenterAxesVisibility = 0
RenderView2.CameraParallelScale = 57.398613649179126
RenderView2.OrientationAxesLabelColor = [0.0, 0.0, 0.0]
DataRepresentation2.EdgeColor = [0.0, 0.0, 0.0]
DataRepresentation2.DiffuseColor = [0.0, 0.0, 0.0]
DataRepresentation2.ColorArrayName = ('POINT_DATA', '')
DataRepresentation2.AmbientColor = [0.0, 0.0, 0.0]
DataRepresentation2.SelectionColor = [0.0, 0.0, 0.0]
DataRepresentation2.BackfaceDiffuseColor = [0.0, 0.0, 0.0]
DataRepresentation2.CubeAxesColor = [0.0, 0.0, 0.0]
DataRepresentation2.Representation = 'Wireframe'
a1_vtkEdgeFlags_PVLookupTable = GetLookupTableForArray( "vtkEdgeFlags", 1, RGBPoints=[0.0, 0.23, 0.299, 0.754, 0.5, 0.865, 0.865, 0.865, 1.0, 0.706, 0.016, 0.15], VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )
a1_vtkEdgeFlags_PiecewiseFunction = CreatePiecewiseFunction( Points=[0.0, 0.0, 0.5, 0.0, 1.0, 1.0, 0.5, 0.0] )
WriteImage('m1.png')
DataRepresentation2.ScalarOpacityFunction = a1_vtkEdgeFlags_PiecewiseFunction
DataRepresentation2.LookupTable = a1_vtkEdgeFlags_PVLookupTable
a1_vtkEdgeFlags_PVLookupTable.ScalarOpacityFunction = a1_vtkEdgeFlags_PiecewiseFunction
RenderView2.CameraViewUp = [0.0, 0.0, 1.0]
RenderView2.CameraPosition = [9.392949104309082, -220.27121326772138, 0.0]
RenderView2.CameraFocalPoint = [9.392949104309082, 1.5, 0.0]
RenderView2.CameraClippingRange = [196.66850113504415, 253.90528146673722]
WriteImage('m2.png')
Render()
!python plot.py
Explanation: We export the mesh to PNG images with a Paraview script to check it.
End of explanation
from IPython.core.display import Image
Image('m1.png')
from IPython.core.display import Image
Image('m2.png')
Explanation: Mesh views from the side and from above. We can see that this is the mesh file of a tripod.
End of explanation
mfu = gf.MeshFem(m,3) # displacement
mfd = gf.MeshFem(m,1) # data
Explanation: Setting up the FEM and the volume quadrature method
From the mesh stored in the variable m, we create one variable for storing the displacement response and one for handling the data at each node. A distinctive feature of GetFEM++ is its wide choice of integration methods.
End of explanation
print(mfu)
print(mfd)
Explanation: mfu and mfd hold the Lagrange elements $Q_3$ and $Q_1$, respectively.
End of explanation
degree = 1
mfu.set_fem(gf.Fem('FEM_PK(3,%d)' % (degree,)));
mfd.set_fem(gf.Fem('FEM_PK(3,0)'))
Explanation: We assign the classical Lagrange element $P_k$ as the FEM method.
| degree | dimension | d.o.f. number | class | vectorial | $\tau$-equivalent | Polynomial |
|:--------------------:|:--------------------:|:------------------------------------:|:------------:|:------------------------:|:------------------------:|:------------:|
| $K,0\leq K\leq255$ | $P,1\leq K\leq255$ | $\dfrac{\left(K+P\right)!}{K!P!}$ | $C^0$ | No$\left(Q=1\right)$ | Yes$\left(M=Id\right)$ | Yes |
End of explanation
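The d.o.f. count in the table above, $\frac{(K+P)!}{K!\,P!}$, can be checked directly with a few lines of Python:

```python
from math import factorial

def pk_dof(K, P):
    # Number of d.o.f. of the classical Lagrange P_K element in dimension P.
    return factorial(K + P) // (factorial(K) * factorial(P))

# Tetrahedra (P = 3):
print(pk_dof(1, 3))  # 4  (linear: one node per vertex)
print(pk_dof(2, 3))  # 10 (quadratic: vertices + edge midpoints)
```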
mim = gf.MeshIm(m,gf.Integ('IM_TETRAHEDRON(5)'))
Explanation: We use a degree-5 tetrahedron rule with 15 integration points as the volume quadrature method.
<img src="getfemlistintmethodtetrahedron5.png">
End of explanation
P = m.pts()
ctop = (abs(P[1,:] - 13) < 1e-6)
cbot = (abs(P[1,:] + 10) < 1e-6)
pidtop = np.compress(ctop,range(0,m.nbpts()))
pidbot = np.compress(cbot,range(0,m.nbpts()))
ftop = m.faces_from_pid(pidtop)
fbot = m.faces_from_pid(pidbot)
Explanation: Setting the boundary conditions
Finally, we set the boundary conditions. Here we apply a NEUMANN condition on the top of the tripod and a DIRICHLET condition on the bottom.
End of explanation
NEUMANN_BOUNDARY = 1
DIRICHLET_BOUNDARY = 2
m.set_region(NEUMANN_BOUNDARY,ftop)
m.set_region(DIRICHLET_BOUNDARY,fbot)
Explanation: We create the boundary regions.
End of explanation
nbd = mfd.nbdof()
F = gf.asm_boundary_source(NEUMANN_BOUNDARY, mim, mfu, mfd, np.repeat([[0],[-100],[0]],nbd,1))
K = gf.asm_linear_elasticity(mim, mfu, mfd, np.repeat([Lambda], nbd), np.repeat([Mu], nbd))
Explanation: Assembling the load vector and the stiffness matrix
End of explanation
(H,R) = gf.asm_dirichlet(DIRICHLET_BOUNDARY, mim, mfu, mfd, mfd.eval('[[1,0,0],[0,1,0],[0,0,1]]'), mfd.eval('[0,0,0]'))
(N,U0) = H.dirichlet_nullspace(R)
Nt = gf.Spmat('copy',N)
Nt.transpose()
KK = Nt*K*N
FF = Nt*F # FF = Nt*(F-K*U0)
# solve ...
P = gf.Precond('ildlt',KK)
UU = gf.linsolve_cg(KK,FF,P)
U = N*UU+U0
Explanation: We set the DIRICHLET (fixed-end) condition.
End of explanation
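The nullspace reduction used above (solve $Ku = F$ subject to $Hu = r$ by writing $u = Nv + u_0$ with $HN = 0$, then solving $(N^{T}KN)v = N^{T}(F - Ku_0)$) can be illustrated on a tiny dense system. This is a toy numpy sketch, not GetFEM++'s own routine:

```python
import numpy as np

K = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
F = np.array([0., 1., 0.])
# Constrain u[0] = 0, i.e. H = [1, 0, 0], r = 0:
N = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])   # basis of the nullspace of H
u0 = np.zeros(3)

KK = N.T @ K @ N           # reduced stiffness matrix
FF = N.T @ (F - K @ u0)    # reduced load vector
u = N @ np.linalg.solve(KK, FF) + u0
print(np.round(u, 3))      # the constrained d.o.f. u[0] stays exactly 0
```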
sl = gf.Slice(('boundary',), mfu, degree)
sl.export_to_vtk('tripod_ev.vtk', mfu, U, 'Displacement')
%%writefile plot.py
try: paraview.simple
except: from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
tripod_ev_vtk = LegacyVTKReader( FileNames=['tripod_ev.vtk'] )
RenderView2 = CreateRenderView()
RenderView2.CompressorConfig = 'vtkSquirtCompressor 0 3'
RenderView2.UseLight = 1
RenderView2.LightSwitch = 0
RenderView2.RemoteRenderThreshold = 3.0
RenderView2.CenterOfRotation = [9.39224910736084, 1.5, 0.0]
AnimationScene1 = GetAnimationScene()
AnimationScene1.ViewModules = RenderView2
DataRepresentation2 = Show()
DataRepresentation2.ScaleFactor = 8.50093994140625
DataRepresentation2.ScalarOpacityUnitDistance = 9.599809856069918
DataRepresentation2.SelectionPointFieldDataArrayName = 'Displacement'
DataRepresentation2.EdgeColor = [0.0, 0.0, 0.0]
RenderView2.CameraPosition = [9.39224910736084, 1.5, 221.76947835313635]
RenderView2.OrientationAxesVisibility = 0
RenderView2.CameraFocalPoint = [9.39224910736084, 1.5, 0.0]
RenderView2.CameraClippingRange = [134.9674311526128, 331.57029329454673]
RenderView2.Background = [1.0, 1.0, 1.0]
RenderView2.CameraParallelScale = 57.398164620242895
RenderView2.CenterAxesVisibility = 0
a3_Displacement_PVLookupTable = GetLookupTableForArray( "Displacement", 3, RGBPoints=[0.0, 0.23, 0.299, 0.754, 6.571885963503576, 0.865, 0.865, 0.865, 12.804577141990409, 0.706, 0.016, 0.15], VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )
a3_Displacement_PiecewiseFunction = CreatePiecewiseFunction( Points=[0.0, 0.0, 0.5, 0.0, 12.804577141990409, 1.0, 0.5, 0.0] )
DataRepresentation2.Representation = 'Surface With Edges'
DataRepresentation2.ScalarOpacityFunction = a3_Displacement_PiecewiseFunction
DataRepresentation2.ColorArrayName = ('POINT_DATA', 'Displacement')
DataRepresentation2.LookupTable = a3_Displacement_PVLookupTable
WriteImage('tripod1.png')
a3_Displacement_PVLookupTable.ScalarOpacityFunction = a3_Displacement_PiecewiseFunction
RenderView2.CameraViewUp = [0.0, 0.0, 1.0]
RenderView2.CameraPosition = [9.39224910736084, 223.26947835313635, 0.0]
RenderView2.OrientationAxesVisibility = 0
RenderView2.CameraClippingRange = [196.66678356960497, 253.9035205284334]
RenderView2.CameraFocalPoint = [9.39224910736084, 1.5, 0.0]
RenderView2.CenterAxesVisibility = 0
WriteImage('tripod2.png')
Render()
!python plot.py
from IPython.core.display import Image
Image('tripod1.png')
from IPython.core.display import Image
Image('tripod2.png')
Explanation: Post-processing
We output the displacement computed above as a post-processing result. The following command outputs a VTK file containing the displacement results of the analysis.
End of explanation |
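For reference, the tripod_ev.vtk file loaded by LegacyVTKReader above is a plain-text legacy VTK file. The sketch below writes a minimal file of that shape, with an invented three-point mesh and made-up displacement values (not the tripod results), just to illustrate the format:

```python
# Minimal legacy-VTK unstructured grid with one triangle and a vector
# point-data field named "Displacement" (the array name used above).
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
displacements = [(0.0, 0.0, 0.1), (0.0, 0.0, 0.2), (0.0, 0.0, 0.3)]

lines = [
    "# vtk DataFile Version 3.0",
    "toy displacement result",
    "ASCII",
    "DATASET UNSTRUCTURED_GRID",
    "POINTS %d float" % len(points),
]
lines += ["%g %g %g" % p for p in points]
lines += ["CELLS 1 4", "3 0 1 2", "CELL_TYPES 1", "5"]  # one triangle (VTK cell type 5)
lines += ["POINT_DATA %d" % len(points), "VECTORS Displacement float"]
lines += ["%g %g %g" % d for d in displacements]

vtk_text = "\n".join(lines) + "\n"
with open("toy_displacement.vtk", "w") as f:
    f.write(vtk_text)
print(vtk_text.splitlines()[0])
```

A file written this way can be opened with the same LegacyVTKReader call used above, with the Displacement array available for coloring.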
7,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Posterior inference for GGP graph model
In this notebook, we'll infer the posterior distribution of the yeast dataset using the generalised gamma process graph model.
Original source of the dataset with detailed description
Step1: Loading yeast dataset
Step2: Run MCMC sampler
Step3: The invalid values are carefully handled in the inference codes. It is safe to ignore the warning messages.
Trace plots of some variables of interest
Step4: When the sigma is less than 0, the inferred graph is dense. | Python Code:
import os
import pickle
import time
from collections import defaultdict
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
from sgp import GGPgraphmcmc
%matplotlib inline
Explanation: Posterior inference for GGP graph model
In this notebook, we'll infer the posterior distribution of the yeast dataset using the generalised gamma process graph model.
Original source of the dataset with detailed description: http://www.cise.ufl.edu/research/sparse/matrices/Pajek/yeast.html
End of explanation
mat = loadmat('../data/yeast/yeast.mat')
graph = mat['Problem'][0][0][2]
Explanation: Loading yeast dataset
End of explanation
modelparam = dict()
mcmcparam = dict()
modelparam['alpha'] = (0, 0)
modelparam['sigma'] = (0, 0)
modelparam['tau'] = (0, 0)
mcmcparam['niter'] = 500
mcmcparam['nburn'] = 1
mcmcparam['thin'] = 1
mcmcparam['leapfrog.L'] = 5
mcmcparam['leapfrog.epsilon'] = 0.1
mcmcparam['leapfrog.nadapt'] = 1
mcmcparam['latent.MH_nb'] = 1
mcmcparam['hyper.MH_nb'] = 2
mcmcparam['hyper.rw_std'] = [0.02, 0.02]
mcmcparam['store_w'] = True
typegraph='undirected' # or simple
samples, stats = GGPgraphmcmc(graph, modelparam, mcmcparam, typegraph, verbose=True)
Explanation: Run MCMC sampler
End of explanation
plt.plot(samples['sigma'])
plt.title('Trace plot of $\sigma$ variable')
Explanation: The invalid values are carefully handled in the inference codes. It is safe to ignore the warning messages.
Trace plots of some variables of interest
End of explanation
plt.plot(stats['w_rate'])
plt.title('MH acceptance rate for weight w')
plt.plot(stats['hyper_rate'])
plt.title('MH acceptance rate for hyper-params')
Explanation: When the sigma is less than 0, the inferred graph is dense.
End of explanation |
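The statement above can be checked numerically from the trace: in the GGP model, sigma below zero corresponds to a dense regime. A small sketch of that summary, shown on a toy trace (in the notebook you would pass samples['sigma'] instead):

```python
import numpy as np

def summarize_sigma(trace, burn_frac=0.5):
    # Drop burn-in, then report the posterior mean and the
    # posterior probability that sigma is negative (dense regime).
    trace = np.asarray(trace, dtype=float)
    kept = trace[int(len(trace) * burn_frac):]
    return float(kept.mean()), float((kept < 0).mean())

toy_trace = [-0.2, -0.1, 0.0, 0.05, -0.05, -0.1, -0.15, -0.2]
mean_sigma, p_negative = summarize_sigma(toy_trace)
print(mean_sigma, p_negative)
```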
7,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
eXamine Automation Tutorial
This case study demonstrates how to use the REST API of eXamine to study an annotated module in Cytoscape. The module that we study has 17 nodes and 18 edges and occurs within the KEGG mouse network consisting of 3863 nodes and 29293 edges. The module is annotated with sets from four different categories
Step1: Importing network and node-specific annotation
We start by importing the KEGG network directly from the eXamine repository on github.
Step2: We then import node-specific annotation directly from the eXamine repository on github. The imported file contains set membership information for each node. Note that it is important to ensure that set-membership information is imported as List of String, as indicated by sl. Additionally, note that the default list separator is a pipe character.
Step3: Import set-specific annotation
We now describe how to import the set-specific annotations. In order to do so, eXamine needs to generate group nodes for each of the sets present in the module. To do so, we need to select nodes present in the module; these nodes have the value small in column Module, which we do as follows.
Step4: Now that we have selected the nodes of the module, we can proceed with generating group nodes for each set (Process, Function, Component and Pathway).
Step5: We import set-specific annotation, again directly from github.
Step6: Set-based visualization using eXamine
We now describe how to visualize the current selection. First, we set the visualization options.
Step7: We then select five groups.
Step8: There are two options
Step9: The command below launches the eXamine window. If this window is blank, simply resize the window to force a redraw of the scene. | Python Code:
# HTTP Client for Python
import requests
# Cytoscape port number
PORT_NUMBER = 1234
BASE_URL = "https://raw.githubusercontent.com/ls-cwi/eXamine/master/data/"
# The Base path for the CyRest API
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'
#Helper command to call a command via HTTP POST
def executeRestCommand(namespace="", command="", args={}):
postString = BASE + "commands/" + namespace + "/" + command
res = requests.post(postString,json=args)
return res
Explanation: eXamine Automation Tutorial
This case study demonstrates how to use the REST API of eXamine to study an annotated module in Cytoscape. The module that we study has 17 nodes and 18 edges and occurs within the KEGG mouse network consisting of 3863 nodes and 29293 edges. The module is annotated with sets from four different categories: (1) KEGG pathways and the GO categories (2) molecular process, (3) biological function and (4) cellular component.
There are three steps for visualizing subnetwork modules with eXamine. In the following, we will describe and perform the steps using the Automation functionality of Cytoscape. We refer to tutorial.pdf for instructions using the Cytoscape GUI.
End of explanation
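Every call below posts to a CyREST endpoint of the form /v1/commands/&lt;namespace&gt;/&lt;command&gt;. A tiny sketch that builds the URL the same way executeRestCommand does, without contacting Cytoscape:

```python
PORT_NUMBER = 1234
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'

def command_url(namespace, command):
    # Mirrors the postString construction in executeRestCommand above.
    return BASE + "commands/" + namespace + "/" + command

url = command_url("examine", "interact")
print(url)
```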
# First we import our demo network
executeRestCommand("network", "import url", {"indexColumnSourceInteraction":"1",
"indexColumnTargetInteraction":"2",
"url": BASE_URL + "edges.txt"})
Explanation: Importing network and node-specific annotation
We start by importing the KEGG network directly from the eXamine repository on github.
End of explanation
# Next we import node annotations
executeRestCommand("table", "import url",
{"firstRowAsColumnNames":"true",
"keyColumnIndex" : "1",
"startLoadRow" : "1",
"dataTypeList":"s,s,f,f,f,s,s,s,sl,sl,sl,sl",
"url": BASE_URL + "nodes_induced.txt"})
Explanation: We then import node-specific annotation directly from the eXamine repository on github. The imported file contains set membership information for each node. Note that it is important to ensure that set-membership information is imported as List of String, as indicated by sl. Additionally, note that the default list separator is a pipe character.
End of explanation
executeRestCommand("network", "select", {"nodeList":"Module:small"})
Explanation: Import set-specific annotation
We now describe how to import the set-specific annotations. In order to do so, eXamine needs to generate group nodes for each of the sets present in the module. To do so, we need to select nodes present in the module; these nodes have the value small in column Module, which we do as follows.
End of explanation
executeRestCommand("examine", "generate groups",
{"selectedGroupColumns" : "Process,Function,Component,Pathway"})
Explanation: Now that we have selected the nodes of the module, we can proceed with generating group nodes for each set (Process, Function, Component and Pathway).
End of explanation
# Ok, time to enrich our newly created group nodes with some interesting annotations
executeRestCommand("table", "import url",
{"firstRowAsColumnNames":"true",
"keyColumnIndex" : "1",
"startLoadRow" : "1",
"url" : BASE_URL + "sets_induced.txt"})
Explanation: We import set-specific annotation, again directly from github.
End of explanation
# Adjust the visualization settings
executeRestCommand("examine", "update settings",
{"labelColumn" : "Symbol",
"urlColumn" : "URL",
"scoreColumn" : "Score",
"showScore" : "true",
"selectedGroupColumns" : "Function,Pathway"})
Explanation: Set-based visualization using eXamine
We now describe how to visualize the current selection. First, we set the visualization options.
End of explanation
# Select groups for demarcation in the visualization
executeRestCommand("examine", "select groups",
{"selectedGroups":"GO:0008013,GO:0008083,mmu04070,mmu05200,mmu04520"})
Explanation: We then select five groups.
End of explanation
# Launch the interactive eXamine visualization
executeRestCommand("examine", "interact", {})
Explanation: There are two options: either we launch the interactive eXamine visualization, or we directly generate an SVG.
End of explanation
# Export a graphic instead of interacting with it
# use absolute path; writes in Cytoscape directory if not changed
executeRestCommand("examine", "export", {"path": "your-path-here.svg"})
Explanation: The command below launches the eXamine window. If this window is blank, simply resize the window to force a redraw of the scene.
End of explanation |
7,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function Node
Satra once called the Function module the "do anything you want" card, which is a perfect description: it allows you to put any code you want into an empty node, which you can then place in your workflow exactly where it needs to be.
You might have already seen the Function module in the example section in the Node tutorial. Let's take a closer look at it again.
Step1: Trap 1
There are only two traps that you should be aware of when you're using the Function module. The first one is about naming the input variables. The variable name for the node input has to be exactly the same as the function input parameter; in this case that is x_input.
Otherwise you get the following error
Step2: Now, let's see what happens if we move the import of random outside the scope of get_random_array | Python Code:
# Import Node and Function module
from nipype import Node, Function
# Create a small example function
def add_two(x_input):
return x_input + 2
# Create Node
addtwo = Node(Function(input_names=["x_input"],
output_names=["val_output"],
function=add_two),
name='add_node')
addtwo.inputs.x_input =4
addtwo.run()
addtwo.result.outputs
Explanation: Function Node
Satra once called the Function module the "do anything you want" card, which is a perfect description: it allows you to put any code you want into an empty node, which you can then place in your workflow exactly where it needs to be.
You might have already seen the Function module in the example section in the Node tutorial. Let's take a closer look at it again.
End of explanation
from nipype import Node, Function
# Create the Function object
def get_random_array(array_shape):
# Import random function
from numpy.random import random
return random(array_shape)
# Create Function Node that executes get_random_array
rndArray = Node(Function(input_names=["array_shape"],
output_names=["random_array"],
function=get_random_array),
name='rndArray_node')
# Specify the array_shape of the random array
rndArray.inputs.array_shape = (3, 3)
# Run node
rndArray.run()
# Print output
print(rndArray.result.outputs)
Explanation: Trap 1
There are only two traps that you should be aware of when you're using the Function module. The first one is about naming the input variables. The variable name for the node input has to be exactly the same as the function input parameter; in this case that is x_input.
Otherwise you get the following error:
TypeError: add_two() got an unexpected keyword argument 'x_input'
Interface Function failed to run.
Note that in the current version of Nipype you don't have to provide input_names as an argument of Function.
Trap 2
If you want to use another module inside a function, you have to import it again inside the function. Let's take a look at the following example:
End of explanation
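Trap 1 comes down to ordinary keyword-argument matching, since nipype passes node inputs to the wrapped function by name. A nipype-free sketch of the failure mode behind the TypeError quoted above:

```python
def add_two(x_input):
    return x_input + 2

# Matching name: works, just like setting addtwo.inputs.x_input.
result = add_two(x_input=4)

# Mismatched name: raises the same kind of TypeError nipype reports.
try:
    add_two(wrong_name=4)
    error_message = None
except TypeError as exc:
    error_message = str(exc)

print(result, error_message)
```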
from nipype import Node, Function
# Import random function
from numpy.random import random
# Create the Function object
def get_random_array(array_shape):
return random(array_shape)
# Create Function Node that executes get_random_array
rndArray = Node(Function(input_names=["array_shape"],
output_names=["random_array"],
function=get_random_array),
name='rndArray_node')
# Specify the array_shape of the random array
rndArray.inputs.array_shape = (3, 3)
# Run node
rndArray.run()
# Print output
print(rndArray.result.outputs)
Explanation: Now, let's see what happens if we move the import of random outside the scope of get_random_array:
End of explanation |
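The reason the variant above breaks under nipype is that the Function interface re-creates the function from its source code in a fresh namespace, where module-level imports are not visible. A rough simulation of that mechanism (this mimics nipype's behaviour rather than calling its internals):

```python
# Source of a function that relies on a module-level import, as in the
# variant above; note that "random" is NOT imported inside the body.
func_source = '''
def get_random_value():
    return random()
'''

namespace = {}
exec(func_source, namespace)  # fresh namespace: no module-level imports

try:
    namespace["get_random_value"]()
    failure = None
except NameError as exc:
    failure = str(exc)

print(failure)
```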
7,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example for dimensionality reduction
Step1: Convert an integer into a binary string
Step2: We apply this to the data
We use enumerate to loop over the unique values, which gives us both the index position and the value, and then build a dictionary that maps each individual value to its binary string. | Python Code:
import pandas as pd
import numpy as np
my_data = pd.DataFrame({"third": [1, 2, 3]})
Explanation: Example for dimensionality reduction
End of explanation
def to_binary(value):
    return "{0:b}".format(value)
to_binary(5)
unique_values = my_data.third.unique()
Explanation: Convert an integer into a binary string
End of explanation
my_dict = {}
for index, val in enumerate(unique_values):
    my_dict[val] = to_binary(index)
my_data["third_binary"] = my_data.apply(lambda x: my_dict[x.third], axis=1)
Explanation: We apply this to the data
We use enumerate to loop over the unique values, which gives us both the index position and the value, and then build a dictionary that maps each individual value to its binary string.
End of explanation
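Putting the pieces together on a categorical column: with k distinct values, this binary encoding needs only about ceil(log2(k)) digits instead of the k columns a one-hot encoding would use. A self-contained sketch on toy data:

```python
import pandas as pd

def to_binary(value):
    return "{0:b}".format(value)

df = pd.DataFrame({"color": ["red", "blue", "green", "blue", "red"]})

# Map each unique category to the binary string of its index position.
mapping = {val: to_binary(idx) for idx, val in enumerate(df["color"].unique())}
df["color_binary"] = df["color"].map(mapping)

print(mapping)
```

In practice you would zero-pad to a fixed width, e.g. format(idx, "02b"), so that each binary digit can later become its own column.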
7,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
7,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Microsoft Emotion API Data
Images were submitted to the API by hand since there were so few. For the baseline data, this step was automated using the API.
Step1: Plotting sentiment for each image.
Step2: Plotting sentiment for each image | Python Code:
import glob
import json

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def read_jsons(f, candidate):
tmp_dict = {}
with open(f) as json_file:
data = json.load(json_file)
for i in data[0]['scores']:
if data[0]['scores'][i] > 0.55: # confidence score threshold.
tmp_dict[i] = data[0]['scores'][i]
else: tmp_dict[i] = np.nan
tmp_dict['image_file'] = f.split('/')[-1]
return tmp_dict
basefilepath = './MicrosoftEmotionAPI/'
def get_json(path, candidate):
for f in glob.glob(path + '*.json'):
#print(f)
if candidate in f:
row_list.append(read_jsons(f, candidate))
row_list = []
get_json(basefilepath, 'hillary_clinton')
HCDF = pd.DataFrame(row_list)
HCDF.head(11)
Explanation: Microsoft Emotion API Data
Images were submitted to the API by hand since there were so few. For the baseline data, this step was automated using the API.
End of explanation
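The parser above assumes each file holds a JSON list with one detected face, whose scores field maps the eight Emotion API emotion names to confidences. A sketch of that shape and of the 0.55 thresholding, using invented scores:

```python
import json

# Shape of a single-face Emotion API response (the scores are made up).
sample_response = [{
    "faceRectangle": {"left": 10, "top": 10, "width": 50, "height": 50},
    "scores": {
        "anger": 0.01, "contempt": 0.02, "disgust": 0.01, "fear": 0.0,
        "happiness": 0.9, "neutral": 0.05, "sadness": 0.0, "surprise": 0.01,
    },
}]

with open("sample_face.json", "w") as f:
    json.dump(sample_response, f)

with open("sample_face.json") as f:
    data = json.load(f)

# Keep only confident emotions, as read_jsons does with its 0.55 threshold.
confident = {k: v for k, v in data[0]["scores"].items() if v > 0.55}
print(confident)
```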
HCDF.plot(kind='bar', ylim=(0,1))
plt.legend(bbox_to_anchor=(1.1, 1))
row_list = []
get_json(basefilepath, 'donald_trump')
DTDF = pd.DataFrame(row_list)
DTDF.head(12)
Explanation: Plotting sentiment for each image.
End of explanation
DTDF.plot(kind='bar',ylim=(0,1))
plt.legend(bbox_to_anchor=(1.12, 1))
Explanation: Plotting sentiment for each image
End of explanation |
7,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quandl
Step1: Let's go over the columns
Step2: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows
Step3: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
Step4: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread
Step5: Taking what we've seen from above, let's see how we'd move that into the backtester. | Python Code:
# import the dataset
from quantopian.interactive.data.quandl import yahoo_index_vix as dataset
# Since this data is provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Explanation: Quandl: S&P 500 Volatility Index (VIX)
In this notebook, we'll take a look at this data set, available on Quantopian. This dataset spans from January 1990 through the current day. It contains the value for the index VIX, a measure of volatility in the S&P 500. We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.
To be clear, this is a single value for VIX each day.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
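The reduce-then-convert pattern above (limit the Blaze expression server side, then materialize it) can be sketched with plain pandas standing in for the remote expression. This is only an analogy, since Blaze/odo themselves are not needed to show the idea; the frame and the filter below are illustrative.

```python
import pandas as pd

# A pandas DataFrame plays the role of the large server-side Blaze expression
# (sizes here are illustrative, not the real VIX table).
big = pd.DataFrame({"asof_date": pd.date_range("1990-01-02", periods=1000),
                    "close": range(1000)})

# Reduce first (the Blaze-style query), then materialize the small result,
# the analogue of odo(expr, pd.DataFrame) once the expression is limited.
small = big[big["close"] > 990].reset_index(drop=True)
print(len(small))
```

The point is that only the reduced result crosses the wire; the bulk of the filtering happens before conversion.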
End of explanation
# Convert it over to a Pandas dataframe for easy charting
vix_df = odo(dataset, pd.DataFrame)
vix_df.plot(x='asof_date', y='close')
plt.xlabel("As of Date (asof_date)")
plt.ylabel("Close Price")
plt.axis([None, None, 0, 100])
plt.title("VIX")
plt.legend().set_visible(False)
Explanation: Let's go over the columns:
- asof_date: the timeframe to which this data applies
- timestamp: the simulated date upon which this data point is available to a backtest
- open: opening price for the day indicated on asof_date
- high: high price for the day indicated on asof_date
- low: lowest price for the day indicated by asof_date
- close: closing price for asof_date
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Let's go plot it for fun below. 6500 rows is small enough to just convert right over to Pandas.
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import yahoo_index_vix
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.quandl import yahoo_index_vix
Then in intialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(yahoo_index_vix.close, 'close')
End of explanation
print "Here is the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
for data in (yahoo_index_vix,):
_print_fields(data)
print "---------------------------------------------------\n"
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(yahoo_index_vix.open_.latest, 'open')
pipe.add(yahoo_index_vix.close.latest, 'close')
pipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')
pipe.add(yahoo_index_vix.high.latest, 'high')
pipe.add(yahoo_index_vix.low.latest, 'low')
pipe.add(yahoo_index_vix.volume.latest, 'volume')
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import yahoo_index_vix
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Add pipeline factors
pipe.add(yahoo_index_vix.open_.latest, 'open')
pipe.add(yahoo_index_vix.close.latest, 'close')
pipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')
pipe.add(yahoo_index_vix.high.latest, 'high')
pipe.add(yahoo_index_vix.low.latest, 'low')
pipe.add(yahoo_index_vix.volume.latest, 'volume')
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
Explanation: Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation |
7,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. important
Step1: Load the data
We use here EEG data from the BCI dataset.
Step2: Setup source space and compute forward | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import os.path as op
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = op.join(fs_dir, 'bem', 'fsaverage-trans.fif')
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
Explanation: EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. important:: Source reconstruction without an individual T1 MRI from the
subject will be less accurate. Do not over interpret
activity locations which can be off by multiple centimeters.
<div class="alert alert-info"><h4>Note</h4><p>`plot_montage` show all the standard montages in MNE-Python.</p></div>
End of explanation
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
ch_names = [c.replace('.', '') for c in raw.ch_names]
raw.rename_channels({old: new for old, new in zip(raw.ch_names, ch_names)})
# Read and set the EEG electrode locations
montage = mne.channels.read_montage('standard_1005', ch_names=raw.ch_names,
transform=True)
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes is correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans, dig=True)
Explanation: Load the data
We use here EEG data from the BCI dataset.
End of explanation
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
# for illustration purposes use fwd to compute the sensitivity map
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[5, 50, 100]))
Explanation: Setup source space and compute forward
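For intuition about what the sensitivity map above measures: roughly speaking, in 'fixed' mode it reflects the norm of each source's column in the gain (lead-field) matrix, normalized across sources. Here is a toy numpy sketch of that idea; the random matrix is a stand-in, not an actual MNE forward solution, and the sizes are made up.

```python
import numpy as np

# Toy gain (lead-field) matrix: n_sensors x n_sources.
rng = np.random.default_rng(0)
gain = rng.standard_normal((60, 500))

# Per-source sensitivity as the norm of its gain column, scaled to [0, 1],
# a rough analogue of the map plotted above.
sens = np.linalg.norm(gain, axis=0)
sens = sens / sens.max()
print(sens.shape, float(sens.max()))
```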
End of explanation |
7,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory Data Analysis
Step1: Data Cleaning
Step2: Random Forest
Step3: Random Forest Results
```
0.79426
['AgeSex', 'AgeSexFare', 'Fare', 'Sex', 'Pclass', 'Age']
create_submission(RandomForestClassifier(
bootstrap= True,
min_samples_leaf= 3,
n_estimators= 20,
min_samples_split= 9,
criterion= 'entropy',
max_features= 4,
max_depth= None)
0.78469
['AgeSex', 'AgeSexFare', 'Fare', 'Age', 'Pclass', 'Sex']
create_submission(RandomForestClassifier(50, min_samples_split=4, min_samples_leaf=2), \
df, df_test, predictors, "submission.csv")
0.76555
['AgeSex', 'AgeSexFare', 'Fare', 'Age']
create_submission(RandomForestClassifier(50, min_samples_split=4, min_samples_leaf=2), \
df, df_test, features, "submission.csv")
```
Step4: SVM
Step5: mean
Step6: Gradient Boosting
Step7: BEST PARAMS
{'learning_rate'
Step8: Adaptive Boosting | Python Code:
test.info()
train.describe()
# train.Cabin.str.split().str.get(-1).str[0]
# train.Cabin.str.split(expand=True)
# train.Ticket.str.split().str.get(0).str.extract
train.Ticket.str.split()[0:].str[0].head()
print train[train['Survived']==1]["Age"].mean(),
print train[train['Survived']==0]["Age"].mean(),
print test.Age.mean()
Explanation: Exploratory Data Analysis
End of explanation
def clean_data(titanic):
titanic = titanic.copy()
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
titanic["Fare"] = titanic["Fare"].fillna(titanic["Fare"].median())
titanic['Cabin'] = titanic['Cabin'].str.split().str.get(-1).str[0]
titanic['Ticket'] = titanic.Ticket.str.split()[0:].str[0]
titanic.loc[titanic["Sex"] == "male", "Sex"] = -10
titanic.loc[titanic["Sex"] == "female", "Sex"] = 10
titanic["Embarked"] = titanic["Embarked"].fillna("S")
titanic['Title'] = titanic['Name'].apply(lambda x: x.split(',')[1].split()[0])
# d = {'Mr.':'Mr', 'Mrs.':'Mrs', 'Miss.':'Miss', 'Master.':'Master', 'Don.':'Mr', 'Rev.':'Mr', 'Dr.':'Dr', 'Mme.':'Mrs',
# 'Ms.':'Miss', 'Major.':'Mr', 'Lady.':'Miss', 'Sir.':'Mr', 'Mlle.':'Miss', 'Col.':'Mr', 'Capt.':'Mr', 'the':'Mr',
# 'Jonkheer.':'Mr', 'Dona.':'Mrs'}
d = {'Mr.':28, 'Mrs.':80, 'Miss.':50, 'Master.':28, 'Don.':40, 'Rev.':60, 'Dr.':60, 'Mme.':80,
'Ms.':50, 'Major.':60, 'Lady.':70, 'Sir.':40, 'Mlle.':50, 'Col.':60, 'Capt.':60, 'the':28,
'Jonkheer.':28, 'Dona.':70}
titanic['Title'].replace(d, inplace =True)
colnames = ['Embarked','Cabin','Ticket']
for colname in colnames:
titanic[colname] = pd.Categorical(titanic[colname]).codes
# # Grab all the features that can be included in a Random Forest Regressor
# age_titanic = titanic[['Age','Fare','Ticket','Pclass','Cabin','Title']]
# # Split into sets with known and unknown Age values
# knownAge = age_titanic.loc[ (titanic.Age.notnull()) ]
# unknownAge = age_titanic.loc[ (titanic.Age.isnull()) ]
# # All age values are stored in a target array
# y = knownAge.pop('Age').values
# # All the other values are stored in the feature array
# X = knownAge.values
# # Create and fit a model
# rtr = RandomForestRegressor(20)
# rtr.fit(X, y)
# # Use the fitted model to predict the missing values
# predictedAges = rtr.predict(unknownAge.values[:, 1::])
# # Assign those predictions to the full data set
# titanic.loc[ (titanic.Age.isnull()), 'Age' ] = predictedAges
# StandardScaler will subtract the mean from each value then scale to the unit variance
# scaler = StandardScaler()
# titanic['Age_scaled'] = scaler.fit_transform(titanic['Age'])
# titanic['Fare_scaled'] = scaler.fit_transform(titanic['Fare'])
titanic.Age = titanic.Age/titanic.Age.max()
titanic.Fare = titanic.Fare/titanic.Fare.max()
titanic['AgeSex'] = titanic.Age * titanic.Sex
titanic['AgeSexFare'] = titanic.Age * titanic.Sex * titanic.Fare
# titanic['TitlePclass'] = titanic.Title * titanic.Pclass
# titanic['CabinPclass'] = titanic.Cabin * titanic.Pclass
# titanic['PclassSq'] = titanic.Pclass ** 2
# titanic['SexFare'] = titanic.Sex * titanic.Fare
# titanic["FamilySize"] = titanic['Parch'] + titanic['SibSp']
# titanic.loc[(titanic["Sex"] == "female") , "Age"] = \
# titanic.loc[(titanic["Sex"] == "female") , "Age"].fillna(28.34)
# titanic.loc[(titanic["Sex"] == "male") , "Age"] = \
# titanic.loc[(titanic["Sex"] == "male") , "Age"].fillna(30.62)
# (titanic[titanic['Survived']==0]["Age"].mean())
# titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 1
# titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 2
# titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 3
titanic.drop(titanic[['Name',
# 'Ticket',
# 'Cabin',
# 'Age',
# 'Sex',
# 'Fare',
'SibSp',
'Parch',
# 'Title',
# 'Pclass',
]], axis = 1, inplace=True)
return titanic
df = clean_data(train)
df_train = df.copy()
df_train.drop('PassengerId', axis=1, inplace=True)
df_test = clean_data(test)
df.describe().T
df_train.info()
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
train[train['Survived']==1]["Age"].hist(bins=20, label='survived')
plt.title('Survived')
plt.subplot(1,2,2)
train[train['Survived']==0]["Age"].hist(bins=20)
plt.title('Did not survive')
df.head()
Explanation: Data Cleaning
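Two of the cleaning steps inside clean_data, median-filling a numeric column and integer-encoding a categorical one, are shown below on a tiny toy frame (the rows are made up, not Titanic data):

```python
import pandas as pd

# Toy rows standing in for the Titanic columns.
toy = pd.DataFrame({"Age": [22.0, None, 38.0, None],
                    "Embarked": ["S", "C", None, "S"]})

# Fill missing ages with the column median, as clean_data does.
toy["Age"] = toy["Age"].fillna(toy["Age"].median())

# Fill missing embarkation with "S", then encode categories as integer codes.
toy["Embarked"] = pd.Categorical(toy["Embarked"].fillna("S")).codes
print(toy["Age"].tolist(), toy["Embarked"].tolist())
```

Note that pd.Categorical(...).codes assigns codes by sorted category order, so "C" becomes 0 and "S" becomes 1 here.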
End of explanation
y = df_train.pop('Survived').values
X = df_train.values
X_test = df_test.values
rf = RandomForestClassifier(40, n_jobs=-1)
rf.fit(X,y)
feat_rank = np.argsort(rf.feature_importances_)[::-1]
feat_rank
df_train.columns[feat_rank]
df_features = pd.DataFrame(rf.feature_importances_,df_train.columns, columns = ['feature_value'])
df_features.sort_values('feature_value', ascending=False)
scores = np.zeros((feat_rank.shape[0],2))
for i in range(1,feat_rank.shape[0]+1):
features = [df_train.columns[feat_rank][x] for x in range(i)]
scores[i-1, :] = (i, cross_val_score(rf, df[features], df['Survived'], cv=10).mean())
scores
plt.plot(scores[:,:1],scores[:,1:2])
cross_val_score(rf, df[features], df['Survived'], cv=10).mean()
importances = rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(figsize=(12,5))
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), df_train.columns[indices])
plt.xlim([-1, X.shape[1]])
plt.show()
features = [df_train.columns[feat_rank][x] for x in range(9)]
features
# features = [df_train.columns[indices][x] for x in range(9)]
# features
X = df_train[features].values
X
def create_submission(model, train, test, features, filename):
model.fit(train[features], train['Survived'])
predictions = model.predict(test[features])
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": predictions
})
submission.to_csv(filename, index=False)
from time import time
from operator import itemgetter
from scipy.stats import randint as sp_randint
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
# build a classifier
clf = RandomForestClassifier()
# Utility function to report best scores
def report(grid_scores, n_top=3):
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
# specify parameters and distributions to sample from
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 6),
"min_samples_split": sp_randint(1, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
'n_estimators': [10, 40, 50, 60],
"criterion": ["gini", "entropy"]}
# run randomized search
n_iter_search = 20
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search, n_jobs=-1)
start = time()
random_search.fit(X, y)
print("RandomizedSearchCV took %.2f seconds for %d candidates"
" parameter settings." % ((time() - start), n_iter_search))
report(random_search.grid_scores_)
# use a full grid over all parameters
param_grid = {'max_depth': [1, 2, 4, None],
'max_features': ['sqrt', 'log2', None],
'min_samples_split': [1, 2, 6, 8, 10],
'min_samples_leaf': [1, 2, 4, 6],
'bootstrap': [True, False],
'n_estimators': [30, 40, 50, 60, 100],
"criterion": ["gini", "entropy"]}
# run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid, n_jobs=-1)
start = time()
grid_search.fit(X, y)
print("GridSearchCV took %.2f seconds for %d candidate parameter settings."
% (time() - start, len(grid_search.grid_scores_)))
report(grid_search.grid_scores_)
grid_search.best_estimator_
create_submission(grid_search.best_estimator_,
df, df_test, features, "../submissions/rf_submission.csv")
Explanation: Random Forest
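The importance-ranking pattern used above (fit a forest, argsort feature_importances_ descending) is easy to check on synthetic data where only one feature carries signal. This is a small illustrative sketch, not the Titanic pipeline; the data and sizes are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only feature 0 determines the label.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
feat_rank = np.argsort(rf.feature_importances_)[::-1]  # most important first
print(feat_rank[0])
```

The informative feature should come out on top of the ranking, which is exactly how the notebook selects its features list.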
End of explanation
trees_accuracy = []
for i in xrange(1,X.shape[1]):
rf = RandomForestClassifier(50, max_features = i, min_samples_split=4, min_samples_leaf=2)
rf.fit(X, y)
trees_accuracy.append(rf.score(X,y))
plt.plot(range(1, X.shape[1]), trees_accuracy, '-o')
Explanation: Random Forest Results
```
0.79426
['AgeSex', 'AgeSexFare', 'Fare', 'Sex', 'Pclass', 'Age']
create_submission(RandomForestClassifier(
bootstrap= True,
min_samples_leaf= 3,
n_estimators= 20,
min_samples_split= 9,
criterion= 'entropy',
max_features= 4,
max_depth= None)
0.78469
['AgeSex', 'AgeSexFare', 'Fare', 'Age', 'Pclass', 'Sex']
create_submission(RandomForestClassifier(50, min_samples_split=4, min_samples_leaf=2), \
df, df_test, predictors, "submission.csv")
0.76555
['AgeSex', 'AgeSexFare', 'Fare', 'Age']
create_submission(RandomForestClassifier(50, min_samples_split=4, min_samples_leaf=2), \
df, df_test, features, "submission.csv")
```
End of explanation
pipeline = Pipeline([('scaler', StandardScaler()),
('svc', SVC(kernel='linear'))])
pipeline.fit(X, y)
parameters = {'kernel':['linear','rbf'],
'C':np.linspace(.001,10,5),'degree':np.linspace(0,10,5)}
gsCV = GridSearchCV(estimator=pipeline.steps[1][1],
param_grid=parameters,scoring='accuracy', cv=5)
X = pipeline.steps[0][1].fit_transform(X)
gsCV.fit(X,y)
gsCV.grid_scores_, gsCV.best_params_
Explanation: SVM
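The StandardScaler step in the pipeline above centers each column and scales it to unit variance before the SVC sees it. A minimal numpy version of that transform, on a made-up matrix:

```python
import numpy as np

# Toy design matrix; columns have very different scales.
X = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])

# Standardize: subtract the per-column mean, divide by the per-column std.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print(Xs.mean(axis=0).round(6).tolist(), Xs.std(axis=0).round(6).tolist())
```

After the transform each column has mean 0 and standard deviation 1, which keeps the SVM's distance computations from being dominated by the large-scale column.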
End of explanation
def svm_submission(model, train, test, features, filename):
model.fit(train[features], train['Survived'])
predictions = model.predict(test[features])
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": predictions
})
submission.to_csv(filename, index=False)
svm_features = [df_train.columns[feat_rank][x] for x in range(8)]
svm_features
create_submission(Pipeline([('scaler', StandardScaler()),
('svc', SVC(kernel='rbf', C=2.5, degree=2.5))]), \
df, df_test, svm_features, "../submissions/svm_submission.csv")
Explanation: mean: 0.78151, std: 0.03323, params: {'C': 25.00075, 'degree': 0.0}
End of explanation
X = df_train
X.head()
gdb = GradientBoostingClassifier(
n_estimators=3000,
learning_rate = 0.01,
max_depth = 4,
max_features = 0.1,
min_samples_leaf = 17)
gdb.fit(X,y)
feat_rank = np.argsort(gdb.feature_importances_)[::-1]
feat_rank
df_train.columns[feat_rank]
boost_features = [df_train.columns[feat_rank][x] for x in range(8)]
boost_features
df_train[boost_features].head()
X = df_train[boost_features]
X.head()
param_grid = {'learning_rate': [0.1, 0.05, 0.02, 0.01],
'max_depth': [4, 6],
'min_samples_leaf': [3, 5, 9, 17],
'max_features': [1.0, 0.3, 0.1]}
gdb_grid = GradientBoostingClassifier(n_estimators=6000)
gs_cv = GridSearchCV(gdb_grid, param_grid).fit(X,y)
gs_cv.best_params_
gs_cv.grid_scores_
Explanation: Gradient Boosting
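A small-scale sketch of the grid search run above, with synthetic data and a reduced grid so it finishes quickly. Note one assumption: the notebook imports GridSearchCV from the old sklearn.grid_search location, while modern scikit-learn keeps it in sklearn.model_selection, which is what this sketch uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic, roughly separable data (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

grid = {"learning_rate": [0.1, 0.01], "max_depth": [2, 3]}
gs = GridSearchCV(GradientBoostingClassifier(n_estimators=25, random_state=0),
                  grid, cv=3).fit(X, y)
print(sorted(gs.best_params_))
```

As in the notebook, gs.best_params_ holds the winning combination and the refit best_estimator_ is ready for prediction.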
End of explanation
create_submission(GradientBoostingClassifier(
n_estimators=3000,
learning_rate = 0.01,
max_depth = 4,
max_features = 0.1,
min_samples_leaf = 9),
df, df_test, boost_features, "../submissions/gdboost_submission.csv")
Explanation: BEST PARAMS
{'learning_rate': 0.01,
'max_depth': 4,
'max_features': 0.1,
'min_samples_leaf': 17}
End of explanation
X = df_train
X.head()
ada = AdaBoostClassifier(
n_estimators=3000,
learning_rate = 0.01)
ada.fit(X,y)
feat_rank = np.argsort(ada.feature_importances_)[::-1]
ada_features = [df_train.columns[feat_rank][x] for x in range(6)]
ada_features
X = df_train[ada_features]
X.head()
param_grid = {'learning_rate': [1, 0.1, 0.05, 0.02, 0.01]}
ada_grid = AdaBoostClassifier(n_estimators=6000)
ada_cv = GridSearchCV(ada_grid, param_grid).fit(X,y)
ada_cv.best_params_
create_submission(AdaBoostClassifier(
n_estimators=3000,
learning_rate = 0.01),
df, df_test, ada_features, "../submissions/adaboost_submission.csv")
Explanation: Adaptive Boosting
End of explanation |
7,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
Draw
The GSLIb equivalent parameter file is
```
Parameters for DRAW
***
START OF PARAMETERS
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: Testing Draw
Step3: Comparing results with gslib | Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
Explanation: PyGSLIB
Draw
The GSLIb equivalent parameter file is
```
Parameters for DRAW
***
START OF PARAMETERS:
data/cluster.dat \file with data
3 \ number of variables
1 2 3 \ columns for variables
0 \ column for probabilities (0=equal)
-1.0e21 1.0e21 \ trimming limits
69069 100 \random number seed, number to draw
draw.out \file for realizations
```
End of explanation
#get the data in gslib format into a pandas Dataframe
cluster = pygslib.gslib.read_gslib_file('../data/cluster.dat')
print ('\n\t\tCluster Data \n',cluster.tail())
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
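The Geo-EAS layout that read_gslib_file parses is simple: a title line, the number of variables, one variable name per line, then whitespace-separated data rows. A hand-rolled sketch on an in-memory example (the two data rows below are made up, not the real cluster.dat values):

```python
# Minimal Geo-EAS (GSLIB) parser sketch on an in-memory file.
text = """cluster data
3
Xlocation
Ylocation
Primary
39.5 18.5 0.06
5.5 1.5 0.06"""

lines = text.splitlines()
nvar = int(lines[1])                 # line 2: number of variables
names = lines[2:2 + nvar]            # next nvar lines: column names
rows = [[float(v) for v in ln.split()] for ln in lines[2 + nvar:]]
print(names, len(rows))
```

pygslib returns the same information as a pandas DataFrame with those names as columns.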
End of explanation
print (pygslib.gslib.__draw.draw.__doc__)
cluster['NO-Weight']=1.
parameters_draw = {
'vr' : cluster[['Xlocation','Ylocation','Primary']], # data
'wt' : cluster['NO-Weight'], # weight/prob (use wt[:]=1 for equal probability)
'rseed' : 69069, # random number seed (conditioning cat.)
'ndraw' : 100} # number to draw
vo,sumwts,error = pygslib.gslib.__draw.draw(**parameters_draw)
print ('error ? ', error != 0, error)
print ('is 1./sumwts == nd?', 1./sumwts, len(cluster))
#making the output (which is numpy array) a pandas dataframe for nice printing
dfvo=pd.DataFrame(vo,columns= ['Xlocation','Ylocation','Primary'])
Explanation: Testing Draw
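With equal weights (wt[:] = 1), DRAW amounts to sampling records uniformly with replacement. A rough numpy stand-in for that behavior is below; the seed matches the notebook's rseed, but numpy's generator is not GSLIB's RNG, so the actual draws differ, and the table size here is illustrative.

```python
import numpy as np

# Equal-probability draw of records with replacement.
rng = np.random.default_rng(69069)
data = np.arange(140.0).reshape(70, 2)        # 70 "records", 2 variables
idx = rng.integers(0, len(data), size=100)    # ndraw = 100
vo = data[idx]
print(vo.shape)
```

The sumwts check in the notebook (1/sumwts equals the number of records) is the equal-weight normalization: each record's probability is 1/nd.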
End of explanation
print (dfvo.head(6))
print ('******')
print (dfvo.tail(6))
Explanation: Comparing results with gslib
End of explanation |
7,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 4
Step2: Download the sequence data
Sequence data for this study are archived on the DDBJ Sequence Read Archive (DRA). Below we download each experiment directly from the DDBJ FTP site using its accession number.
Project DRA
Step3: Here we pass the experiment ID (the DRX033xxx accession number) to the wget_download_ddbj function so that the files are saved. In this case we do not have sample names, just the archive IDs.
Step4: Make a params file
Step5: Assemble in pyrad
Step6: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.36M.
Step7: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage.
Step8: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
Step9: Print final stats table
Step10: Infer ML phylogeny in raxml as an unrooted tree
Step11: Plot the tree in R using ape
The backbone of the ingroup taxa has very low support across the radiation. The same result was found in the original paper (see Fig. 4 and Supplemental Fig S1 of Takahashi et al). Below we plot the full tree and a zoomed in tree of just the ingroup taxa.
Step12: An ultrametric tree for plotting | Python Code:
### Notebook 4
### Data set 4 (Orestias)
### Authors: Takahashi & Moreno (2015)
### Data Location: DDBJ DRA DRA003595
Explanation: Notebook 4:
This is an IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set.
End of explanation
%%bash
## make a new directory for this analysis
mkdir -p empirical_4/fastq/
import os
def wget_download_ddbj(SRR, outdir):
Python function to get sra data from ncbi and write to
outdir with a new name using bash call wget
## create a call string
call = "wget -q -r -nH --cut-dirs=9 -P "+outdir+" "+\
"ftp://ftp.ddbj.nig.ac.jp/ddbj_database/dra/sra/ByExp/"+\
"sra/DRX/DRX033/DRX033{:03d}".format(SRR)
## run wget call
! $call
Explanation: Download the sequence data
Sequence data for this study are archived on the DDBJ Sequence Read Archive (DRA). Below we download each experiment directly from the DDBJ FTP site using its accession number.
Project DRA: DRA003595
Study: DRP002750
Experiments: DRX033006 -- DRX033069
Samples: DRS020928 -- DRS020991
SRA link: http://trace.ddbj.nig.ac.jp/DRASearch/submission?acc=DRA003595
Publication address: doi:10.1016/j.ympev.2015.08.012
End of explanation
for ID in range(6,70):
wget_download_ddbj(ID, "empirical_4/fastq/")
%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_4/fastq/ empirical_4/fastq/*.sra
## remove .sra files
rm empirical_4/fastq/*.sra
Explanation: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved. In this case we do not have the sample names, just their SRR IDs.
End of explanation
%%bash
pyrad --version
%%bash
## delete existing params file if it exits
rm params.txt
## create a new default params file
pyrad -n
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_4/ ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAGG ## 6. cutters ' params.txt
sed -i '/## 7. /c\30 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_4_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_4/fastq/*.gz ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt
cat params.txt
Explanation: Make a params file
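Each sed call above has the form '/pattern/c\replacement': find the line containing the pattern and replace the whole line. A pure-Python equivalent, applied to a couple of illustrative params lines rather than the full pyrad file:

```python
# Stand-in params lines (not the complete pyrad params.txt).
params = ["./ ## 1. working directory",
          "2 ## 7. N processors",
          ".88 ## 10. clust threshold"]

def set_param(lines, key, new_line):
    # Replace any line containing `key` with `new_line`, like sed's c\ command.
    return [new_line if key in ln else ln for ln in lines]

params = set_param(params, "## 7.", "30 ## 7. N processors")
params = set_param(params, "## 10.", ".85 ## 10. clust threshold")
print(params[1], "|", params[2])
```

Keying on the "## N." comment rather than the value makes the edit robust to whatever default pyrad wrote.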
End of explanation
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_4_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Assemble in pyrad
End of explanation
## import data frame
import pandas as pd
## read in the data
s4dat = pd.read_table("empirical_4/stats/s2.rawedit.txt", header=0, nrows=65)
## print summary stats
print s4dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s4dat["passed.total"].max()
print "\nmost raw data in sample:"
print s4dat['sample '][s4dat['passed.total']==maxraw]
Explanation: Results
We are interested in the relationships among the amount of input (raw) data for each pair of samples, the average coverage they recover when clustered together, and the phylogenetic distances separating them.
Raw data amounts
The average number of raw reads per sample is 1.36M.
End of explanation
## read in the s3 results
s4dat = pd.read_table("empirical_4/stats/s3.clusters.txt", header=0, nrows=65)
## print summary stats
print "summary of means\n=================="
print s4dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s4dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s4dat['d>5.tot']/s4dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s4dat["d>5.tot"]/s4dat["total"]).max()
print "\nhighest coverage in sample:"
print s4dat['taxa'][s4dat['d>5.tot']/s4dat["total"]==max_hiprop]
Explanation: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage.
End of explanation
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_4/clust.85/DRR036775.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
ylabel="N loci",
label="dataset4/sample=DRR036775")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_4_depthplot.svg")
Explanation: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of loci were discarded because their coverage was too low.
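The keep/discard rule behind the two colors is just a depth threshold: loci with depth greater than 5 are retained. A tiny version of that split on made-up depths (the real values come from the sample's .depths file):

```python
import numpy as np

# Synthetic depths standing in for one sample's per-locus coverage.
depths = np.array([1, 2, 3, 6, 7, 2, 10, 4, 5, 8])
keeps = depths[depths > 5]            # retained loci (orange)
prop_kept = len(keeps) / len(depths)  # fraction surviving the filter
print(len(keeps), prop_kept)
```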
End of explanation
cat empirical_4/stats/empirical_4_m4.stats
%%bash
head -n 20 empirical_4/stats/empirical_4_m2.stats
Explanation: Print final stats table
End of explanation
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_4/ \
-n empirical_4_m4 -s empirical_4/outfiles/empirical_4_m4.phy
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_4/ \
-n empirical_4_m2 -s empirical_4/outfiles/empirical_4_m2.phy
%%bash
head -n 40 empirical_4/RAxML_info.empirical_4
Explanation: Infer ML phylogeny in raxml as an unrooted tree
End of explanation
%load_ext rpy2.ipython
%%R -w 1000 -h 800
library(ape)
tre <- read.tree("empirical_4/RAxML_bipartitions.empirical_4")
ltre <- ladderize(tre)
outgroups = c("DRR036791", "DRR036765", "DRR036790", "DRR036767", "DRR036769",
"DRR036766", "DRR036777", "DRR036793", "DRR036778", "DRR036778",
"DRR036792", "DRR036768", "DRR036775")
rtre <- root(ltre, outgroups)
ingrouptre <- drop.tip(ltre, outgroups)
par(mfrow=c(1,2))
plot(ltre, edge.width=2)
nodelabels(ltre$node.label, cex=1)
plot(ingrouptre, edge.width=2)
nodelabels(ingrouptre$node.label, cex=1)
%%R
mean(cophenetic.phylo(ltre))
Explanation: Plot the tree in R using ape
The backbone of the ingroup taxa has very low support across the radiation. The same result was found in the original paper (see Fig. 4 and Supplemental Fig S1 of Takahashi et al). Below we plot the full tree and a zoomed in tree of just the ingroup taxa.
End of explanation
%%R -h 700
utre <- ladderize(chronopl(rtre, 0.5, resolve.root=TRUE))
plot(utre, edge.width=2)
Explanation: An ultrametric tree for plotting
End of explanation |
7,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
A cold atom experimental setup on an optical table is a useful metaphor for the philosophy underlying the cold-atom library. In such an experimental apparatus there are expensive, complicated gadgets like lasers, detectors, vacuum pumps, etc. Usually these gadgets are not chosen so as to maximize their performance for a specific experiment. Rather they are reasonably versatile and general so that they can be repurposed for different experiments. Of course this saves cost but more importantly it allows experimenters to adapt to new insights that are gained only once construction of the apparatus has begun or once the experiment is underway. This is a recognition of the reality that it is impossible to fully anticipate all details of an experiment and to have a perfect plan ahead of time. A certain amount of on the fly learning and adaptation to new information is necessary.
The expensive and sophisticated gadgets are connected together by a large number of highly standardized, very general, and typically low cost components
Step1: Collections of particles are represented by the Ensemble class, which essentially contains their positions and velocities. While it is possible to consider particles in reduced dimensions (e.g. 1D or 2D) the cold-atoms library is designed for three spatial dimensions. This is because cold-atoms is at its heart a mesh-free library where the cost of higher dimensions (e.g. 3D vs 2D) is rather low. Therefore there is little incentive to save cost by considering lower dimensional problems. Some functions and classes assume that we're in three spatial dimensions. If you're deviating from that you're on your own!
In addition to positions and velocities, ensembles of particles can have ensemble properties and per-particle properties. Ensemble properties uniformly apply to all particles in the ensemble. Common examples of ensemble properties are the particle mass or the dipole matrix element in an ensemble of identical particles. A typical example of per-particle state is the internal state of the atoms.
As an example, let's consider a bunch of particles distributed according to a Gaussian density distribution and a Maxwell-Boltzmann velocity distribution
Step2: In cold-atoms functions and algorithms operate on whole Ensembles rather than individual particles. This is by design. Dealing with whole ensembles allows us to construct high performance building blocks out of which simulations can then be assembled. If the library worked at the level of individual particles it would be impossible to amortize the cost of some of the glue code needed to tie the different components together. The result would be slow performance.
For the same reasons it is typically more efficient to use ensemble properties rather than per particle properties when possible.
If we let the particles evolve without any forces being applied to them we get ballistic expansion. The following figure shows the initial density distribution and the density distribution after free expansion at two later times.
Step3: Some basic performance estimates
To get a rough idea of the performance of our particle push algorithm we measure the time it takes to update the positions of a certain number of particles
Step4: To update the positions of n particles we need at least 2x3xn floating point operations
Step5: And we need to read at least 2x8xn bytes of data
Step6: The following figure shows the performance we end up with on my laptop. This figure shows the read bandwidth in GB/s and the arithmetic throughput in GFLOP/s. Note that the write bandwidth requirements of this kernel are only half as large as the read bandwidth. The write bandwidth is therefore not a limiting factor because write and read bandwidths of chipsets are typically comparable.
The particle push kernel has very low arithmetic intensity. For every byte that is brought from main memory into registers we only perform 1/8th of an arithmetic operation. Therefore this kernel is very much bandwidth limited.
The peak bandwidth that our implementation attains is less than 2 GB/s. Mild cache effects are apparent in the data. The peak bandwidth of our system as measured with the stream triad benchmark is just over 5 GB/s. For large numbers of particles (more than fit into last level cache) our implementation achieves only about 0.7 GB/s. These results indicate that there is quite a bit of room for improvement. | Python Code:
import coldatoms
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
Explanation: Introduction
A cold atom experimental setup on an optical table is a useful metaphor for the philosophy underlying the cold-atom library. In such an experimental apparatus there are expensive, complicated gadgets like lasers, detectors, vacuum pumps, etc. Usually these gadgets are not chosen so as to maximize their performance for a specific experiment. Rather they are reasonably versatile and general so that they can be repurposed for different experiments. Of course this saves cost but more importantly it allows experimenters to adapt to new insights that are gained only once construction of the apparatus has begun or once the experiment is underway. This is a recognition of the reality that it is impossible to fully anticipate all details of an experiment and to have a perfect plan ahead of time. A certain amount of on the fly learning and adaptation to new information is necessary.
The expensive and sophisticated gadgets are connected together by a large number of highly standardized, very general, and typically low cost components: optical fibers, various types of cables, screws, bolts, mounts, wire, duct tape etc. The use of these very general, low cost interfaces is what allows students in the field of cold atoms to get to the point where they can take data and carry out physics research in a matter of years (sometimes even months).
The cold-atom library aims to emulate this approach for computational research in cold atoms. The library itself provides powerful data structures, algorithms, and implementations for specialized tasks (e.g. particle tracking, laser cooling, etc). In our analogy these capabilities correspond to lasers and other equipment. The interface between these capabilities is based on widely used standard libraries such as numpy and matplotlib and the programming language python itself.
This architecture is in deliberate contrast to simulation software that is specialized for a rather specific and narrow application domain (e.g. fusion research). Such applications are designed and developed over many years and decades and simulation runs are often carried out on super computers using batch systems with turnaround times of hours or days. Granted, for many of the application areas where these traditional computational solutions are brought to bear this approach is the only practical one.
But in our opinion it is a poor fit for cold atom research where flexibility, agility, and interactivity are more important than application performance. Of course this is practical only because the computational needs in the flavor of atomic physics we're targeting are much less stringent than in traditional areas of computational science.
This notebook illustrates some of the most basic concepts of the cold atoms library. We show how to represent an ensemble of particles and how to simulate their ballistic expansion.
Ballistic expansion of an ensemble of cold atoms
First we need to include a few libraries. As mentioned in the introduction, the coldatoms library uses numpy to represent most of its data. matplotlib is our go to solution for visualization
End of explanation
N = 1000
ensemble = coldatoms.Ensemble(num_ptcls=N)
ensemble.x = np.random.normal(size=(N, 3))
ensemble.v = np.random.normal(size=(N, 3))
Explanation: Collections of particles are represented by the Ensemble class, which essentially contains their positions and velocities. While it is possible to consider particles in reduced dimensions (e.g. 1D or 2D) the cold-atoms library is designed for three spatial dimensions. This is because cold-atoms is at its heart a mesh-free library where the cost of higher dimensions (e.g. 3D vs 2D) is rather low. Therefore there is little incentive to save cost by considering lower dimensional problems. Some functions and classes assume that we're in three spatial dimensions. If you're deviating from that you're on your own!
In addition to positions and velocities, ensembles of particles can have ensemble properties and per-particle properties. Ensemble properties uniformly apply to all particles in the ensemble. Common examples of ensemble properties are the particle mass or the dipole matrix element in an ensemble of identical particles. A typical example of per-particle state is the internal state of the atoms.
As an example, let's consider a bunch of particles distributed according to a Gaussian density distribution and a Maxwell-Boltzmann velocity distribution:
End of explanation
def plot_positions(ax, x, y, x_range, y_range):
ax.plot(x, y,'.',markersize=1)
ax.set_xlim(-x_range, x_range)
ax.set_ylim(-y_range, y_range)
ax.set_aspect(1)
fig = plt.figure()
subplots = [plt.subplot(131), plt.subplot(132), plt.subplot(133)]
for ax in subplots:
plot_positions(ax, ensemble.x[:,0], ensemble.x[:, 1], 10, 10)
coldatoms.drift_kick(1.0, ensemble)
ax.set_xlabel(r'$x$')
subplots[0].set_ylabel(r'$y$')
subplots[1].get_yaxis().set_ticks([])
subplots[2].get_yaxis().set_ticks([])
fig.tight_layout()
Explanation: In cold-atoms functions and algorithms operate on whole Ensembles rather than individual particles. This is by design. Dealing with whole ensembles allows us to construct high performance building blocks out of which simulations can then be assembled. If the library worked at the level of individual particles it would be impossible to amortize the cost of some of the glue code needed to tie the different components together. The result would be slow performance.
For the same reasons it is typically more efficient to use ensemble properties rather than per particle properties when possible.
If we let the particles evolve without any forces being applied to them we get ballistic expansion. The following figure shows the initial density distribution and the density distribution after free expansion at two later times.
End of explanation
import time
def time_ballistic(n):
ensemble = coldatoms.Ensemble(num_ptcls=n)
ensemble.x = np.random.normal(size=(n, 3))
ensemble.v = np.random.normal(size=(n, 3))
t = time.time()
num_iter = 10
for i in range(num_iter):
coldatoms.drift_kick(0.1, ensemble)
elapsed = time.time() - t
return elapsed / num_iter
Explanation: Some basic performance estimates
To get a rough idea of the performance of our particle push algorithm we measure the time it takes to update the positions of a certain number of particles:
End of explanation
def flops(n):
return 2 * n * 3
Explanation: To update the positions of n particles we need at least 2x3xn floating point operations:
End of explanation
def bandwidth(n):
return 2 * np.dtype('float64').itemsize * n
Explanation: And we need to read at least 2x8xn bytes of data:
End of explanation
import math
nptcls = np.array([1000*1.5**e for e in range(0, 15)])
times = np.array([time_ballistic(int(math.floor(n))) for n in nptcls])
gflops = flops(nptcls) / times / (2.0**30)
gbytes = bandwidth(nptcls) / times / (2.0**30)
plt.figure()
gflops_plot = plt.loglog(nptcls, gflops)
bw_plot = plt.loglog(nptcls, gbytes)
plt.xlim([0.9 * nptcls[0], 1.1 * nptcls[-1]])
plt.ylim([0.1, 2.0])
plt.xlabel(r'$N$')
plt.ylabel('GFLOP/s, GB/s')
plt.legend(['Arithmetic throughput', 'Bandwidth']);
Explanation: The following figure shows the performance we end up with on my laptop. This figure shows the read bandwidth in GB/s and the arithmetic throughput in GFLOP/s. Note that the write bandwidth requirements of this kernel are only half as large as the read bandwidth. The write bandwidth is therefore not a limiting factor because write and read bandwidths of chipsets are typically comparable.
The particle push kernel has very low arithmetic intensity. For every byte that is brought from main memory into registers we only perform 1/8th of an arithmetic operation. Therefore this kernel is very much bandwidth limited.
The peak bandwidth that our implementation attains is less than 2 GB/s. Mild cache effects are apparent in the data. The peak bandwidth of our system as measured with the stream triad benchmark is just over 5 GB/s. For large numbers of particles (more than fit into last level cache) our implementation achieves only about 0.7 GB/s. These results indicate that there is quite a bit of room for improvement.
End of explanation |
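For reference, the stream triad benchmark mentioned above can be approximated in NumPy. This is a rough sketch with arbitrary sizes, not the benchmark run that produced the quoted 5 GB/s figure:

```python
# Hedged sketch of a STREAM-triad-style bandwidth measurement in NumPy.
# The array size and repeat count are arbitrary; NumPy also materializes a
# temporary for s * c, so this under-counts traffic relative to true STREAM.
import time
import numpy as np

def triad_bandwidth_gbs(n=1000000, repeats=5):
    """Time a[:] = b + s * c and return an estimated bandwidth in GB/s,
    counting the 3 * 8 * n bytes of the two reads and one write."""
    a = np.empty(n)
    b = np.random.rand(n)
    c = np.random.rand(n)
    s = 1.5
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a[:] = b + s * c
        best = min(best, time.perf_counter() - t0)
    return 3 * 8 * n / best / 2.0**30

print("%.2f GB/s" % triad_bandwidth_gbs())
```

Taking the best of several repeats reduces the influence of OS jitter, which is the usual practice for bandwidth microbenchmarks.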
7,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
线性支持向量机的朴素实现
虽然从形式上来说,线性支持向量机(LinearSVM)和感知机的差别只在于损失函数,但如果只是简单地将感知机的训练策略(亦即每次只选出使得损失函数最大的样本点来进行梯度下降)迁移过来的话、会引发一些问题。为方便,我们称感知机的训练策略为极大梯度下降法(注:这不是被广泛承认的称谓,只是本文的一个代称)
我们会先展示极大梯度下降法的有效性,然后会展示极大梯度下降法存在的问题,最后则会介绍一种解决方案、并将该解决方案拓展为 Mini-Batch 梯度下降法(MBGD)
极大梯度下降法训练 LinearSVM
Step1: 测试
Step2: 可视化
Step3: 可视化训练过程
实现思路如下:
在每一步迭代时生成一张如上所示的图像
在最后调用相应的第三方库(imageio)、将生成的所有图像合成一个 mp4
用ffmpeg将 mp4 转为方便分享的 gif
存在的问题
由上述可视化其实已经可以看出,用极大梯度下降法训练 LinearSVM 会非常不稳定
从直观上来说,由于 LinearSVM 的损失函数比感知机要更复杂,所以相应的函数形状也会更复杂。这意味着当数据集稍微差一点的时候,直接单纯地应用极大梯度下降法可能会导致一些问题——比如说模型会卡在某个很奇怪的地方无法自拔(什么鬼)
可以通过下面这个栗子来直观感受一下 LinearSVM 存在的这些问题:
Step4: 通过下面这张动图,我们能够直观地感受极大梯度下降法下 LinearSVM 的训练过程:
可以看到,LinearSVM 确实卡在了奇怪的地方
原理我不敢乱说,这里只提供一个牵强附会的直观解释:
每次只取使得损失函数极大的一个样本进行梯度下降$\rightarrow$模型在某个地方可能来来回回都只受那么几个样本的影响$\rightarrow$死循环(什么鬼!)
专业的理论就留待专业的观众老爷补充吧 ( σ'ω')σ
解决方案
极大梯度下降法的最大问题很有可能在于它每次都只根据使得损失函数最大的一个样本点来进行梯度下降,这会导致两个问题:
+ 模型的训练将会很不稳定(这点和随机梯度下降类似)
+ 模型对噪声或“不太好的点”极为敏感(因为它们往往会使损失函数最大)
按部就班、我们先解决第一个问题,为此我们只需要多选出几个样本点(比如选出使得损失函数最大的 top n 个样本)、然后取它们梯度的平均即可
Top n 梯度下降法
注:该名字同样只是我瞎编的一个名字(喂)
Step5: 测试
Step6: Mini-Batch 梯度下降法(MBGD)
上述解决方案已经不错,但我们还是有些太“激进”了——我们每次进行梯度下降时,选取的样本点都是使得损失函数最大的样本点,但一般而言使损失函数最大的样本点如果不是关键的样本点(支持向量)的话、通常而言会是噪声。当数据集比较差时,噪声所带来的副作用很有可能就会盖过支持向量带来的正效应
为此,我们应该引入一定的随机性。神经网络的训练中所用的 MBGD 就是很好的方法:每次都从数据集中抽样出一个小 Batch,然后用这个 Batch 来做梯度下降
Step7: 测试
Step8: 存在的问题
Top n LinearSVM 和 MBGD LinearSVM 各有优劣,很难直接说谁好谁坏;但它们都有一个共同的问题,那就是它们所运用的梯度下降法都只是朴素的Vanilla Update,这会导致当数据的 scale 很大时模型对参数极为敏感、从而导致持续的震荡(所谓的 scale 比较大,可以理解为“规模很大”,或者直白一点——以二维数据为例的话——就是横纵坐标的数值很大)
可以通过下面这个栗子来直观感受一下 scale 很大的数据所带来的问题:
Step9: 通过下面这张动图,我们能够直观地感受数据的 scale 很大时 LinearSVM 的训练过程:
可以看到,模型确实一直在持续震荡
解决方案
采用更好的梯度下降法,比如Adam之类的
进行数据预处理、把数据的 scale 弄回 1
关于Adam等梯度下降算法的实现和在 LinearSVM 上的应用可以参见这里和这里,下面我们就仅展示进行数据预处理后的结果 | Python Code:
import numpy as np
class LinearSVM:
def __init__(self):
self._w = self._b = None
def fit(self, x, y, c=1, lr=0.01, epoch=10000):
x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
self._w = np.zeros(x.shape[1])
self._b = 0.
for _ in range(epoch):
self._w *= 1 - lr
err = 1 - y * self.predict(x, True)
idx = np.argmax(err)
# Note: even if every x, y satisfies w·x + b >= 1,
# the loss still contains the squared norm of w,
# so training cannot terminate; we can only skip the current gradient step
if err[idx] <= 0:
continue
delta = lr * c * y[idx]
self._w += delta * x[idx]
self._b += delta
def predict(self, x, raw=False):
x = np.asarray(x, np.float32)
y_pred = x.dot(self._w) + self._b
if raw:
return y_pred
return np.sign(y_pred).astype(np.float32)
Explanation: A Naive Implementation of the Linear Support Vector Machine
Although, formally speaking, the linear support vector machine (LinearSVM) differs from the perceptron only in its loss function, simply transplanting the perceptron's training strategy (i.e., at each step selecting only the single sample that maximizes the loss function and performing gradient descent on it) causes some problems. For convenience, we will call the perceptron's training strategy the maximum-gradient descent method (note: this is not a widely accepted term, just a nickname used in this article)
We will first demonstrate the effectiveness of maximum-gradient descent, then show its problems, and finally introduce a solution and extend it into mini-batch gradient descent (MBGD)
Training a LinearSVM with maximum-gradient descent
End of explanation
from Util import gen_two_clusters
x, y = gen_two_clusters()
svm = LinearSVM()
svm.fit(x, y)
print("Accuracy: {:8.6} %".format((svm.predict(x) == y).mean() * 100))
Explanation: Testing
End of explanation
from Util import visualize2d
visualize2d(svm, x, y)
visualize2d(svm, x, y, True)
Explanation: Visualization
End of explanation
# Note that we merely moved the center parameter (i.e., the "center" of the
# positive/negative samples) from the origin (0, 0) (the default) to (5, 5)
# (breaking some symmetry) and slightly shrank the distance between the
# positive and negative samples (the dis parameter), yet the result is already a mess
x, y = gen_two_clusters(center=5, dis=1)
svm = LinearSVM()
svm.fit(x, y)
print("Accuracy: {:8.6} %".format((svm.predict(x) == y).mean() * 100))
visualize2d(svm, x, y)
visualize2d(svm, x, y, True)
Explanation: Visualizing the training process
The implementation idea is as follows:
Generate an image like the one shown above at each iteration
Finally, call the corresponding third-party library (imageio) to combine all generated images into an mp4
Use ffmpeg to convert the mp4 into an easily shareable gif
Remaining problems
The visualization above already shows that training a LinearSVM with maximum-gradient descent is very unstable
Intuitively, since the loss function of a LinearSVM is more complex than the perceptron's, the corresponding loss surface is also more complex. This means that when the dataset is slightly worse, naively applying maximum-gradient descent may cause problems, for example the model may get stuck in some strange place and be unable to escape
The following example gives an intuitive feel for these problems with LinearSVM:
End of explanation
# Inherit from the previous LinearSVM to reuse code
class LinearSVM2(LinearSVM):
    # The batch_size parameter plays the role of the n in "Top n"
def fit(self, x, y, c=1, lr=0.01, batch_size=128, epoch=10000):
x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
# If batch_size is set larger than the total number of samples, clamp it to that total
batch_size = min(batch_size, len(y))
self._w = np.zeros(x.shape[1])
self._b = 0.
for _ in range(epoch):
self._w *= 1 - lr
err = 1 - y * self.predict(x, True)
# Use argsort to extract the Top n directly
# Note that argsort sorts in ascending order, so reverse the result with [::-1]
batch = np.argsort(err)[-batch_size:][::-1]
err = err[batch]
if err[0] <= 0:
continue
# Note that only the misclassified samples can be used for gradient descent here,
# since this part of the gradient is 0 at correctly classified samples
mask = err > 0
batch = batch[mask]
# Average the gradients and take one gradient descent step
delta = lr * c * y[batch]
self._w += np.mean(delta[..., None] * x[batch], axis=0)
self._b += np.mean(delta)
Explanation: Through the animation below, we can get an intuitive feel for the LinearSVM training process under maximum-gradient descent:
As can be seen, the LinearSVM indeed gets stuck in a strange place
I won't venture a rigorous explanation of the mechanism; here is only a far-fetched intuitive one:
Each step performs gradient descent on only the single sample that maximizes the loss $\rightarrow$ in some region the model may be influenced back and forth by just those few samples $\rightarrow$ an endless loop
The rigorous theory is left for expert readers to fill in
Solutions
The biggest problem with maximum-gradient descent is most likely that each gradient step is based only on the single sample that maximizes the loss, which causes two issues:
+ training will be very unstable (similar to stochastic gradient descent in this respect)
+ the model is extremely sensitive to noise and "bad" points (because they tend to maximize the loss)
Taking things step by step, we first address the first issue: simply select a few more samples (for example, the top n samples with the largest loss) and average their gradients
Top-n gradient descent
Note: this name is likewise just one I made up
End of explanation
x, y = gen_two_clusters(center=5, dis=1)
svm = LinearSVM2()
svm.fit(x, y)
print("Accuracy: {:8.6} %".format((svm.predict(x) == y).mean() * 100))
visualize2d(svm, x, y)
visualize2d(svm, x, y, True)
Explanation: Testing
End of explanation
class LinearSVM3(LinearSVM):
def fit(self, x, y, c=1, lr=0.01, batch_size=128, epoch=10000):
x, y = np.asarray(x, np.float32), np.asarray(y, np.float32)
batch_size = min(batch_size, len(y))
self._w = np.zeros(x.shape[1])
self._b = 0.
for _ in range(epoch):
self._w *= 1 - lr
# Randomly select batch_size samples
batch = np.random.choice(len(x), batch_size)
x_batch, y_batch = x[batch], y[batch]
err = 1 - y_batch * self.predict(x_batch, True)
if np.max(err) <= 0:
continue
mask = err > 0
delta = lr * c * y_batch[mask]
self._w += np.mean(delta[..., None] * x_batch[mask], axis=0)
self._b += np.mean(delta)
Explanation: Mini-Batch Gradient Descent (MBGD)
The solution above is already decent, but it is still somewhat too "aggressive": every gradient step uses the samples that maximize the loss, yet the loss-maximizing samples, unless they are the key samples (support vectors), are usually noise. When the dataset is poor, the side effects of noise may well outweigh the positive contribution of the support vectors
For this reason we should introduce some randomness. MBGD, as used in neural network training, is a good approach: at each step, sample a small batch from the dataset and perform gradient descent on that batch
End of explanation
# Shrink the distance between the positive and negative samples further to observe performance
x, y = gen_two_clusters(center=5, dis=0.5)
top_n_svm = LinearSVM2()
top_n_svm.fit(x, y)
print("Top n LinearSVM accuracy: {:8.6} %".format((top_n_svm.predict(x) == y).mean() * 100))
mbgd_svm = LinearSVM3()
mbgd_svm.fit(x, y)
print("MBGD LinearSVM accuracy: {:8.6} %".format((mbgd_svm.predict(x) == y).mean() * 100))
visualize2d(top_n_svm, x, y)
visualize2d(mbgd_svm, x, y)
Explanation: Testing
End of explanation
# Increase the scale from 1 (the default) to 5
x, y = gen_two_clusters(center=5, scale=5)
top_n_svm = LinearSVM2()
top_n_svm.fit(x, y)
print("Top n LinearSVM accuracy: {:8.6} %".format((top_n_svm.predict(x) == y).mean() * 100))
mbgd_svm = LinearSVM3()
mbgd_svm.fit(x, y)
print("MBGD LinearSVM accuracy: {:8.6} %".format((mbgd_svm.predict(x) == y).mean() * 100))
visualize2d(top_n_svm, x, y)
visualize2d(mbgd_svm, x, y)
Explanation: Remaining problems
Top-n LinearSVM and MBGD LinearSVM each have pros and cons, and it is hard to say outright which is better; but they share a common problem: the gradient descent they use is only the plain vanilla update, which makes the model extremely sensitive to the parameters when the data scale is large, leading to persistent oscillation (a "large" scale can be understood as large magnitudes; put plainly, for 2-D data, large coordinate values)
The following example gives an intuitive feel for the problems caused by large-scale data:
End of explanation
x, y = gen_two_clusters(center=5, dis=1, scale=5)
# Normalize the data
x -= x.mean(axis=0)
x /= x.std(axis=0)
# Top-1 gradient descent is exactly maximum-gradient descent
top_1_svm = LinearSVM()
top_1_svm.fit(x, y)
print("Top 1 LinearSVM accuracy: {:8.6} %".format((top_1_svm.predict(x) == y).mean() * 100))
top_n_svm = LinearSVM2()
top_n_svm.fit(x, y)
print("Top n LinearSVM accuracy: {:8.6} %".format((top_n_svm.predict(x) == y).mean() * 100))
mbgd_svm = LinearSVM3()
mbgd_svm.fit(x, y)
print("MBGD LinearSVM accuracy: {:8.6} %".format((mbgd_svm.predict(x) == y).mean() * 100))
visualize2d(top_1_svm, x, y)
visualize2d(top_n_svm, x, y)
visualize2d(mbgd_svm, x, y)
Explanation: Through the animation below, we can get an intuitive feel for the LinearSVM training process when the data scale is large:
As can be seen, the model indeed keeps oscillating
Solutions
Use a better gradient descent method, such as Adam
Preprocess the data to bring its scale back to 1
For implementations of Adam and other gradient descent algorithms and their application to LinearSVM, see here and here; below we only show the results after data preprocessing
End of explanation |
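The first remedy listed above, a better optimizer such as Adam, can be sketched as a generic parameter update. The hyperparameters below are the common defaults; this is an assumption-level illustration, not the implementation from the linked posts:

```python
# Hedged sketch of an Adam update step (common default hyperparameters),
# shown here as a generic optimizer rather than the linked implementation.
import numpy as np

class Adam:
    def __init__(self, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = 0.0
        self.t = 0

    def step(self, w, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        m_hat = self.m / (1 - self.beta1 ** self.t)  # bias-corrected first moment
        v_hat = self.v / (1 - self.beta2 ** self.t)  # bias-corrected second moment
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# smoke test: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
opt, w = Adam(), np.array([0.0])
for _ in range(500):
    w = opt.step(w, 2 * (w - 3))
print(w)  # should land near [3.]
```

Dividing the first-moment estimate by the second-moment estimate is what makes the step size roughly scale-invariant, which is exactly the property the vanilla update lacks.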
7,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compare Solutions - Homogenous (Eurus)
Brendan Smithyman | October 2015
This notebook shows comparisons between the responses of the different solvers.
Step1: Error plots for Eurus vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case)
Step2: Relative error of the MiniZephyr solution (in %) | Python Code:
import sys
sys.path.append('../')
import numpy as np
from zephyr.backend import Eurus, SparseKaiserSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
dx = 1.
dz = 1.
nx = 100
nz = 200
velocity = 2000.
density = 1.
# Anisotropy parameters
theta = 0.
epsilon = 0.2
delta = 0.2
nPML = 10
freeSurf = [False, False, False, False]
systemConfig = {
'c': velocity, # m/s
'rho': density, # kg/m^3
'freq': 200., # Hz
'nx': nx,
'nz': nz,
'dx': dx,
'dz': dz,
'theta': theta,
'eps': epsilon,
'delta': delta,
'nPML': nPML,
'cPML': 1e3,
'freeSurf': freeSurf,
}
Ainv = Eurus(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SKS = SparseKaiserSource(systemConfig)
xs, zs = 50, 100
sloc = np.array([xs, zs]).reshape((1,2))
q = SKS(sloc)
uMZ = Ainv*q
uAH = AH(sloc)
clip = 0.1
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ[:nx*nz].reshape((nz,nx))), **plotopts)
plt.title('ER Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ[:nx*nz].reshape((nz, nx)).real, **plotopts)
plt.title('ER Real')
fig.tight_layout()
Explanation: Compare Solutions - Homogenous (Eurus)
Brendan Smithyman | October 2015
This notebook shows comparisons between the responses of the different solvers.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=100)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ[:nx*nz].real.reshape((nz, nx))[:,xs], label='Eurus')
plt.legend(loc=4)
plt.title('Real part of response through xs=%d'%xs)
Explanation: Error plots for Eurus vs. the AnalyticalHelmholtz response
Response of the field (showing where the numerical case does not match the analytical case):
Source region
PML regions
End of explanation
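When quantifying the mismatch it can help to exclude the PML border and a neighbourhood of the source, where the two solutions are expected to differ. A hedged sketch follows; the helper is an assumption for illustration, not part of the zephyr package:

```python
# Hedged sketch: mean relative error over the interior, masking out the PML
# border and a small disc around the source. Names mirror the arrays above,
# but this helper is illustrative, not a zephyr API.
import numpy as np

def interior_relative_error(u_num, u_ref, n_pml, src, radius=5):
    """Mean relative error away from the PML border and the source at src=(x, z)."""
    nz, nx = u_ref.shape
    zz, xx = np.mgrid[0:nz, 0:nx]
    mask = (xx >= n_pml) & (xx < nx - n_pml) & (zz >= n_pml) & (zz < nz - n_pml)
    mask &= (xx - src[0]) ** 2 + (zz - src[1]) ** 2 > radius ** 2
    err = np.abs(u_num - u_ref) / (np.abs(u_ref) + 1e-15)
    return err[mask].mean()

# identical fields give zero error, as a sanity check
u = np.ones((40, 20), dtype=complex)
print(interior_relative_error(u, u, n_pml=3, src=(10, 20)))  # 0.0
```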
uMZr = uMZ[:nx*nz].reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 20.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 5.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
Explanation: Relative error of the Eurus solution (in %)
End of explanation |
7,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diferencias Finitas
El método de diferencias finitas corresponde a una aproximación discreta del dominio del problema, generando un sistema de ecuaciones para tal efecto. Tanto Ecuaciones diferenciales ordinarias como parciales pueden ser resueltas numéricamente con este método.
Tomemos las fórmulas de diferencias finitas centradas para estimar la primera y segunda derivada de nuestra función a aproximar,
$$y'(t) = \frac{y(t+h)-y(t-h)}{2h} - \frac{h^2}{6}y'''(c)$$
$$y''(t) = \frac{y(t+h)-2y(t) + y(t-h)}{h^2} + \frac{h^2}{12}y''''(c)$$
y las reemplazaremos en nuestros problemas que involucran ecuaciones diferenciales. Si el problema original es lineal, el sistema de ecuaciones a resolver será lineal y podemos aproximarlo por eliminación gaussiana o por métodos iterativos. Problemas no lineales generaran sistemas de ecuaciones no lineales y habrá que resolverlos de otra forma.
1. Problemas de Valor de Frontera (BVP) lineales
Por ejemplo el siguiente BVP puede ser resuelto utilizando diferencias finitas.
\begin{align}
y'' = 4y \
y(0) = 1\
y(1) = 3
\end{align}
Al reemplazar las derivadas por sus aproximaciones obtenemos que
\begin{align}
\frac{w_{i+1} - 2w_i + w_{i-1}}{h^2} - 4w_i &= 0\
\Rightarrow w_{i-1} + (-4h^2-2)w_i + w_{i+1} &= 0
\end{align}
Si elegimos una aproximación de $n=3$ estimaciones el tamaño del intervalo es $h = \frac{1}{n+1} = \frac{1}{4}$ con tres ecuaciones. En general, como sabemos las condiciones de borde, nosotros buscamos una aproximación en $n$ puntos equiespaciados sin contar los extremos, luego para saber el tamaño del intervalo usamos la fórmula general $h = \frac{b-a}{n+1}$.
Step1: Los errores de éste método son dos principalmente
Step2: A continuación otro ejemplo de BVP, esta vez note que hay involucrada una función explícitamente dependiente del tiempo. Basta con evaluarla en la grilla de tiempo según sea necesario, la consecuencia directa es que el vector que solía contener sólo condiciones de borde ahora tendrá estos valores asociados a $f(t)$.
$$\begin{align}
\ddot{y}(t) &= 2\cos(t) - \dot{y}(t)\
y(0) &= -1\
y(\pi) &= 1
\end{align}$$
Step3: 2. Problemas de Valor de Frontera No Lineales
Si introducimos no linealidades a nuestras ecuaciones diferenciales, resolverlas numéricamente implicará replantear el sistema de ecuaciones para dejarla en función de $w$ y podemos en cambio resolver $F(w) = 0$. Por ejemplo podemos usar el Método de Newton multivariado, y para ello necesitamos la matriz Jacobiana de $F$...
No olvidar la fórmula del jacobiano de una función $F$, que no es más que la matriz de derivadas parciales de $F$
\begin{bmatrix}
\cfrac{\partial F_1}{\partial w_1} & \cdots & \cfrac{\partial F_1}{\partial w_n} \
\vdots & \ddots & \vdots \
\cfrac{\partial F_n}{\partial w_1} & \cdots & \cfrac{\partial F_n}{\partial w_n}
\end{bmatrix}
El método de Newton es $w^{k+1} = w^{k} - \partial F(w^k)^{-1} F(w^k)$. Pero, como siempre, encontrar la inversa del Jacobiano puede ser una operacion costosa y puede llevar a inestabilidades. Por lo que es mejor resolver para $\Delta w$ el sistema análogo $\partial F(w^k) \Delta w = -F(w^k)$
$$Ax=b$$
El siguiente problema puede ser resuelto con diferencias finitas y método de Newton
Step4: 3. Condiciones de Borde
Es posible establecer diversas condiciones de borde para nuestro problema. Las más comunes son Dirichlet, Neumann, Robin y condiciones periódicas.
Las condiciones de Dirichlet buscan fijar un valor en específico para la función incógnita en los bordes.
Las condiciones de Neumann buscan análogamente fijar un valor de la derivada de la función incógnita en los bordes.
Las condiciones de Robin son una mezcla de ambas, una combinación de condiciones de Dirichlet y Neumann para algún borde.
Cabe destacar también condiciones periódicas, en donde se fuerza que la función en un borde sea igual a la función en el borde contrario.
Resolvamos la siguiente ecuación diferencial con condiciones mixtas (una Neumann y una Dirichlet) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Time range
tt = np.linspace(0, 1, 100)
# Analytical solution
def y(t):
return (np.exp(-2*t)*(-3*np.exp(2)+np.exp(4)-np.exp(4*t)+ 3*np.exp(2+4*t)))/(-1+np.exp(4))
yy = y(tt)
# Finite difference matrix depending on n
def DiffMatrix(n, h):
m = np.zeros((n,n), dtype=float)
np.fill_diagonal(m, -4.0*h**2.0-2)
dix, diy = np.diag_indices(n)
dix = dix[:-1]
diy = diy[:-1] + 1
m[(dix, diy)] = 1
dix = dix + 1
diy = diy - 1
m[(dix, diy)] = 1
return m
plt.figure(figsize=(10,7))
# For different precisions
for n in [1, 2, 5]:
# Compute the interval size
h = 1.0/(n + 1.0)
# Assemble the coefficient vector
b = np.zeros((n))
b[0] = -1
b[-1] = -3
# Solve the system A*w = b
# A is the finite difference matrix
# w is the vector of finite approximations
A = DiffMatrix(n, h)
w = np.concatenate(([1], np.linalg.solve(A, b), [3]))
_t = np.linspace(0, 1, n+2)
# Plot the approximation
plt.plot(_t, w, 'o--',lw=2, label="$n ="+str(n)+"$")
# Plots
plt.plot(tt, yy, 'm', lw=2, label="$y(t)$")
plt.legend(loc='best', fontsize=16)
plt.xlabel("$t$", fontsize=16)
plt.show()
print(A)
Explanation: Diferencias Finitas
El método de diferencias finitas corresponde a una aproximación discreta del dominio del problema, generando un sistema de ecuaciones para tal efecto. Tanto Ecuaciones diferenciales ordinarias como parciales pueden ser resueltas numéricamente con este método.
Tomemos las fórmulas de diferencias finitas centradas para estimar la primera y segunda derivada de nuestra función a aproximar,
$$y'(t) = \frac{y(t+h)-y(t-h)}{2h} - \frac{h^2}{6}y'''(c)$$
$$y''(t) = \frac{y(t+h)-2y(t) + y(t-h)}{h^2} + \frac{h^2}{12}y''''(c)$$
y las reemplazaremos en nuestros problemas que involucran ecuaciones diferenciales. Si el problema original es lineal, el sistema de ecuaciones a resolver será lineal y podemos aproximarlo por eliminación gaussiana o por métodos iterativos. Problemas no lineales generaran sistemas de ecuaciones no lineales y habrá que resolverlos de otra forma.
1. Linear Boundary Value Problems (BVP)
For example, the following BVP can be solved using finite differences.
\begin{align}
y'' = 4y \
y(0) = 1\
y(1) = 3
\end{align}
Substituting the derivatives with their approximations we obtain
\begin{align}
\frac{w_{i+1} - 2w_i + w_{i-1}}{h^2} - 4w_i &= 0\
\Rightarrow w_{i-1} + (-4h^2-2)w_i + w_{i+1} &= 0
\end{align}
If we choose an approximation with $n=3$ estimates, the interval size is $h = \frac{1}{n+1} = \frac{1}{4}$, with three equations. In general, since we know the boundary conditions, we look for an approximation at $n$ equally spaced points not counting the endpoints, so to obtain the interval size we use the general formula $h = \frac{b-a}{n+1}$.
End of explanation
from scipy.interpolate import BarycentricInterpolator
y_data = y(0.75)
exp = np.linspace(1, 3, 10)
N = np.round(np.power(10, exp))
err = []
for n in N:
    n = int(n)
    h = 1.0/(n + 1.0)
    A = DiffMatrix(n, h)
    b = np.zeros((n))
    b[0] = -1
    b[-1] = -3
    w = np.concatenate(([1], np.linalg.solve(A, b), [3]))
    _t = np.linspace(0, 1, n+2)
    f = BarycentricInterpolator(_t, w)
    err.append(np.abs(f(0.75) - y(0.75)))
logerr = np.log10(err)
plt.figure(figsize=(10,7))
plt.plot(exp, logerr, 'bo')
slope = (logerr[1] - logerr[0]) / (exp[1] - exp[0])
print("slope:", slope)
plt.xlabel("$n$",fontsize=20)
plt.ylabel("$|w-y|_{t=0.75}$",fontsize=20)
exp_n = [1,2,3]
plt.xticks(exp_n, ["$10^{"+str(i)+"}$" for i in exp_n], fontsize=16)
exp_error = [-7, -6, -5, -4 , -3]
plt.yticks(exp_error, ["$10^{"+str(i)+"}$" for i in exp_error], fontsize=16)
plt.show()
Explanation: The errors of this method come mainly from two sources:
Truncation of the finite difference formulas (that is, when we neglect the higher-order terms of the Taylor series used to define the formulas).
The error of numerically solving the linear system of equations.
The centered finite difference formulas have an error proportional to $h^2$. Our method will reduce the error as we increase the number of subintervals $n+1$, i.e., as we decrease $h$, and therefore the error decreases as $\mathcal{O}(n^{-2})$.
Recall that to understand graphically how an error decreases (or grows), it is convenient to use loglog plots, where the slope of the plot indicates the exponent of the error's scaling law with respect to the independent variable $n$.
One can observe that the slope of this plot is indeed -2, which confirms the error law we derived.
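The slope-reading trick can be sketched in isolation: fitting a line in log-log space to data that follows a known power law (a synthetic example, unrelated to the notebook's data) recovers the exponent.

```python
import numpy as np

# Fit a line in log-log space to data that decays exactly as n^(-2);
# the fitted slope recovers the exponent of the power law.
n = np.array([10.0, 100.0, 1000.0])
err = 3.0 * n**-2
slope, intercept = np.polyfit(np.log10(n), np.log10(err), 1)
print(round(slope, 6))  # -2.0
```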
End of explanation
# Time range
tt = np.linspace(0, np.pi, 100)
# Analytical solution
def y(t):
    return np.sin(t) - np.cos(t)
y0 = y(0)
yM = y(np.pi)
yy = y(tt)
# Finite difference matrix, depends on n
def DiffMatrix(n, h):
    m = np.zeros((n,n), dtype=float)
    m += np.diag((-h-2)*np.ones(n), k=0)
    m += np.diag(np.ones(n-1), k=-1)
    m += np.diag((1+h)*np.ones(n-1), k=1)
    return m
plt.figure(figsize=(10,7))
# For different resolutions
for n in [20]:
    # Compute the interval size
    h = (np.pi - 0.0)/(n + 1.0)
    t = np.arange(0, np.pi+h, h)
    # Build the coefficient vector
    b = np.zeros((n))
    b[0] = 2.0*(h**2)*np.cos(t[1]) - y0
    b[-1] = 2.0*(h**2)*np.cos(t[-2]) - (1+h)*yM
    b[1:-1] = 2.0*(h**2)*np.cos(t[2:-2])
    # Solve the system A*w = b
    # A is the finite difference matrix
    # w is the vector of finite difference approximations
    A = DiffMatrix(n, h)
    w = np.concatenate(([y0], np.linalg.solve(A, b), [yM]))
    # Plot the approximation
    plt.plot(t, w, 'o--',lw=2, label="$n ="+str(n)+"$")
# Plots
plt.plot(tt, yy, 'm', lw=2, label="$y(t)$")
plt.legend(loc='best', fontsize=16)
plt.xlabel("$t$", fontsize=16)
plt.show()
Explanation: Here is another BVP example; this time, note that a function explicitly dependent on time is involved. It suffices to evaluate it on the time grid as needed; the direct consequence is that the vector that used to contain only boundary conditions will now also carry the values associated with $f(t)$.
$$\begin{align}
\ddot{y}(t) &= 2\cos(t) - \dot{y}(t)\
y(0) &= -1\
y(\pi) &= 1
\end{align}$$
End of explanation
ya = 1
yb = 4
n_iter = 20
# Jacobian matrix of the system (tridiagonal), depends on n
def jacobian(n, w, h):
    m = np.zeros((n,n), dtype=float)
    np.fill_diagonal(m, 2.0*h**2.0 * w -2-h**2)
    dix, diy = np.diag_indices(n)
    dix = dix[:-1]
    diy = diy[:-1] + 1
    m[(dix, diy)] = 1
    dix = dix + 1
    diy = diy - 1
    m[(dix, diy)] = 1
    return m
# Function holding the system of equations whose zero we will find
def f(n, w, h):
    y = np.zeros((n))
    y[0] = ya - (2+h**2)*w[0] + h**2*w[0]**2 + w[1]
    y[n-1] = w[n-2] - (2+h**2)*w[n-1] + h**2*w[n-1]**2 + yb
    for i in range(1, n-1):
        y[i] = w[i-1] - (2+h**2)*w[i] + h**2*w[i]**2 + w[i+1]
    return y
plt.figure(figsize=(10,7))
# Try different numbers of intervals
for n in [2, 4, 40]:
    h = 1.0 / (n + 1)
    t = np.linspace(0, 1, n+2)
    w = np.zeros((n))
    for i in range(n_iter):
        # w_{k+1} = w_k - delta w
        w = w - np.linalg.solve(jacobian(n, w, h), f(n, w, h))
    plt.plot(t, np.concatenate([[ya],w,[yb]]), 'o-', label="$n = "+str(n)+"$")
plt.legend(loc='best', fontsize=16)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
Explanation: 2. Nonlinear Boundary Value Problems
If we introduce nonlinearities into our differential equations, solving them numerically will require restating the system of equations in terms of $w$, so that we can instead solve $F(w) = 0$. For example, we can use the multivariate Newton's method, and for that we need the Jacobian matrix of $F$...
Do not forget the formula for the Jacobian of a function $F$, which is nothing more than the matrix of partial derivatives of $F$
\begin{bmatrix}
\cfrac{\partial F_1}{\partial w_1} & \cdots & \cfrac{\partial F_1}{\partial w_n} \
\vdots & \ddots & \vdots \
\cfrac{\partial F_n}{\partial w_1} & \cdots & \cfrac{\partial F_n}{\partial w_n}
\end{bmatrix}
Newton's method is $w^{k+1} = w^{k} - \partial F(w^k)^{-1} F(w^k)$. But, as always, computing the inverse of the Jacobian can be a costly operation and can lead to instabilities. It is therefore better to solve for $\Delta w$ the analogous system $\partial F(w^k) \Delta w = -F(w^k)$
$$Ax=b$$
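That update rule can be sketched on a small self-contained nonlinear system (a toy system chosen for illustration only; it is not the BVP treated below):

```python
import numpy as np

# Newton iteration w_{k+1} = w_k - solve(J(w_k), F(w_k)) on the toy
# system F(w) = [w0^2 + w1 - 2, w0 + w1^2 - 2], which has a root at (1, 1).
def F(w):
    return np.array([w[0]**2 + w[1] - 2, w[0] + w[1]**2 - 2])

def J(w):
    return np.array([[2 * w[0], 1.0], [1.0, 2 * w[1]]])

w = np.array([2.0, 0.5])
for _ in range(20):
    w = w - np.linalg.solve(J(w), F(w))  # solve J dw = F instead of inverting J
print(np.round(w, 6))  # [1. 1.]
```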
The following problem can be solved with finite differences and Newton's method:
\begin{align}
y'' = y - y^2\
y(0) = 1\
y(1) = 4
\end{align}
Substituting our finite difference formulas and moving everything to the left-hand side we obtain:
$$w_{i-1} - (2 + h^2)w_i + h^2w_i^2 + w_{i+1} = 0$$
This lets us express the system of equations as a function that depends on the estimates $w$. In addition, the equations for the boundary conditions turn out to be:
$$y_a - (2+h^2)w_1 + h^2w_1^2 + w_2 = 0$$
$$w_{n-1} - (2+h^2)w_n + h^2w_n^2 + y_b = 0$$
To find the numerical solution $w$ we must compute both $F(w)$ and its Jacobian $\partial F$. $F(w)$ is nothing more than a vector containing the equations mentioned above:
$$F\left[ \begin{array}{c}
w_{1} \
w_{2}\
\vdots \
w_{n-1} \
w_{n}\ \end{array} \right] = \left[ \begin{array}{c}
y_a - (2+h^2)w_1 + h^2w_1^2 + w_2 \
w_{1} - (2 + h^2)w_2 + h^2w_2^2 + w_{3} \
\vdots \
w_{n-2} - (2+h^2)w_{n-1} + h^2w_{n-1}^2 + w_{n} \
w_{n-1} - (2+h^2)w_n + h^2w_n^2 + y_b \ \end{array} \right]$$
As you can imagine, the Jacobian of $F$ is a thing of beauty. We must take the partial derivative of each equation in $F$ with respect to each variable $w_i$... We leave the details to the reader, but as an example we show the partials of $F_1$ with respect to $w_i$.
\begin{align}
\frac{\partial F_1}{\partial w_1} &= -(2+h^2) + 2h^2w_1 \
\frac{\partial F_1}{\partial w_2} &= 1 \
\frac{\partial F_1}{\partial w_i} &= 0,\;\; \forall i \neq {1, 2}
\end{align}
The Jacobian will ultimately have a tridiagonal form. Now we can numerically solve for $\Delta w$ and iteratively update $w$ as $w^{k+1} = w^{k} - \Delta w$
End of explanation
xx = np.linspace(0, 1, 200)
# Analytical solution
def y(x):
    dem = (1.+np.exp(2.))*(1+4*np.pi**2.)
    num = np.exp(-x)*(3*(1+4*np.pi**2)*np.exp(x)*x +3*(1+4*np.pi**2)*np.exp(x+2)*x
                      -2*(1+6*np.pi**2)*np.exp(2*x+1)-2*(1+4*np.pi**2)*np.exp(2*x)
                      -(1+np.exp(2))*np.cos(2*np.pi*x) + np.exp(2)*(2+8*np.pi**2)
                      -2*np.e*(1+6*np.pi**2))
    return num / dem
yy = y(xx)
# Differentiation matrix
def DiffMatrix(n, h):
    D = np.diag((-2-h**2)*np.ones(n),0) + np.diag(np.ones(n-1),-1) + np.diag(np.ones(n-1),1)
    D[0][0] = -1.-h**2.
    return D
n = 30
h = 1/(n+1)
x = np.linspace(0, 1, n+2)
b = (h**2)*(-3*x[1:-1]+np.cos(2*np.pi*x[1:-1]))
D = DiffMatrix(n, h)
b[0] += h
w = np.linalg.solve(D, b)
w = np.concatenate([[w[0]-h], w, [0]])
plt.figure(figsize=(10,7))
plt.plot(xx, yy, lw=2, label="$y(x)$")
plt.plot(x, w, 'ro-', lw=1.5)
plt.legend(loc='best', fontsize=16)
plt.xlabel("$x$", fontsize=16)
plt.show()
Explanation: 3. Boundary Conditions
It is possible to impose various boundary conditions on our problem. The most common ones are Dirichlet, Neumann, Robin, and periodic conditions.
Dirichlet conditions fix a specific value of the unknown function at the boundaries.
Neumann conditions analogously fix a value of the derivative of the unknown function at the boundaries.
Robin conditions are a mixture of both, a combination of Dirichlet and Neumann conditions at some boundary.
Also worth noting are periodic conditions, in which the function at one boundary is forced to equal the function at the opposite boundary.
Let us solve the following differential equation with mixed conditions (one Neumann and one Dirichlet):
\begin{align}
y''(x) &= y(x)-3x+\cos(2\pi x)\
y'(0) &= 1 \
y(1) &= 0
\end{align}
Discretizing, we obtain:
\begin{align}
\frac{w_{i+1}-2w_i+w_{i-1}}{h^2} &= w_i - 3x_i + \cos(2\pi x_i)\
w_{i+1} +(-2-h^2)w_i+w_{i-1} &= -3h^2x_i+ h^2\cos(2\pi x_i)
\end{align}
This equation gives the general form of our differentiation matrix. Now let us look at the boundary conditions. The Neumann condition corresponds to the first boundary, $x_0 = 0$; discretizing, we obtain:
\begin{align}
\frac{y_{1}-y_{0}}{h} &= 1\
\Rightarrow y_{0} &= y_{1} - h
\end{align}
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI
Step2: Note that the information is stored differently when read in with envi.open. We can find the wavelength information in img.bands.centers. Let's take a look at the first and last wavelength values
Step3: We'll set the Water Vapor Band windows to NaN
Step4: To get a quick look at the img data, use the params method
Step5: Metadata information is stored in img.metadata, a dictionary. Let's look at the metadata contents
Step6: To access any of these metadata items, use the syntax md['description'] or md['map info']
Step7: You can also use type and len to look at the type and length (or number) of some of the metadata contents
Step8: Let's look at the data using imshow, a wrapper around matplotlib's imshow for multi-band images
Step9: When dealing with NEON hyperspectral data, we first want to remove the water vapor & noisy bands, keeping only the valid bands. To speed up the classification algorithms for demonstration purposes, we'll look at a subset of the data using read_subimage, a built in method to subset by area and bands. Type help(img.read_subimage) to see how it works.
Step10: Plot the subsetted image for reference
Step11: Now that we have the image subsetted, let's run the k-means algorithm. Type help(kmeans) to show how the function works. To run the k-means algorithm on the image and create 5 clusters, using a maximum of 50 iterations, use the following syntax
Step12: Note that the algorithm terminated after 14 iterations, when the pixels stopped being reassigned.
Data Tip
Step13: c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes
Step14: Challenges
Step15: In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero. | Python Code:
from spectral import *
import spectral.io.envi as envi
import numpy as np
import matplotlib
# For clean output, suppress warnings; don't do this while developing the script
import warnings
warnings.filterwarnings('ignore')
Explanation: syncID: 75f8885948494c0dbe6084099c61dd1e
title: "Unsupervised Spectral Classification in Python: KMeans & PCA"
description: "Learn to classify spectral data using KMeans and Principal Components Analysis (PCA)."
dateCreated: 2018-07-10
authors: Bridget Hass
contributors:
estimatedTime:
packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP1.30006, NEON.DP3.30006, NEON.DP1.30008
code1: Python/remote-sensing/hyperspectral-data/classification_kmeans_pca_py.ipynb
tutorialSeries: intro-hsi-py-series
urlTitle: classification-kmeans-pca-python
In this tutorial, we will use the Spectral Python (SPy) package to run KMeans and Principal Component Analysis unsupervised classification algorithms.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Classify spectral remote sensing data.
### Install Python Packages
* **numpy**
* **gdal**
* **matplotlib**
* **matplotlib.pyplot**
### Download Data
<a href="https://neondata.sharefile.com/d-s1dc135daffd4e65b" class="btn btn-success">
Download the spectral classification teaching data subset</a>
</div>
In this tutorial, we will use the Spectral Python (SPy) package to run KMeans and Principal Component Analysis unsupervised classification algorithms.
To learn more about the Spectral Python packages read:
<a href="http://www.spectralpython.net/user_guide.html" target="_blank">Spectral Python User Guide</a>.
<a href="http://www.spectralpython.net/algorithms.html#unsupervised-classification" target="_blank">Spectral Python Unsupervised Classification</a>.
KMeans Clustering
KMeans is an iterative clustering algorithm used to classify unsupervised data (e.g. data without a training set) into a specified number of groups. The algorithm begins with an initial set of randomly determined cluster centers. Each pixel in the image is then assigned to the nearest cluster center (using distance in N-space as the distance metric) and each cluster center is then re-computed as the centroid of all pixels assigned to the cluster. This process repeats until a desired stopping criterion is reached (e.g. max number of iterations).
Read more on KMeans clustering from <a href="http://www.spectralpython.net/algorithms.html#k-means-clustering" target="_blank">Spectral Python</a>.
To visualize how the algorithm works, it's easier to look at a 2D data set. In the example below, watch how the cluster centers shift with progressive iterations:
<figure>
<a href="{{ site.baseurl }}/images/hyperspectral/KMeans2D.gif">
<img src="{{ site.baseurl }}/images/hyperspectral/KMeans2D.gif"></a>
<figcaption> KMeans clustering demonstration Source: <a href="https://sandipanweb.wordpress.com/2017/03/19/hard-soft-clustering-with-k-means-weighted-k-means-and-gmm-em/" target="_blank">Sandipan Deyn</a>
</figcaption>
</figure>
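The two alternating steps (assign to nearest center, then recompute centers) can be sketched in a few lines of plain NumPy on toy one-dimensional data (illustrative only — the variable names and data here are not from the SPy implementation):

```python
import numpy as np

# Two clusters of 1-D points around 0 and 5, and deliberately poor
# initial centers; alternating assign/recenter steps still converge.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.1, 50), rng.normal(5, 0.1, 50)])
centers = np.array([1.0, 2.0])
for _ in range(10):
    # Step 1: assign each point to its nearest center
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    # Step 2: recompute each center as the mean of its assigned points
    centers = np.array([x[labels == k].mean() for k in range(2)])
print(np.round(np.sort(centers), 1))  # close to [0., 5.]
```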
Principal Component Analysis (PCA) - Dimensionality Reduction
Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).
Read more about PCA with
<a href="http://www.spectralpython.net/algorithms.html#principal-components" target="_blank">Spectral Python</a>.
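The idea can be illustrated on synthetic correlated "bands" (a hypothetical toy example, not NEON data): when bands are strongly correlated, the covariance matrix has one dominant eigenvalue, so a single principal component captures nearly all of the variance.

```python
import numpy as np

# Four toy "bands" that all track the same underlying signal plus a
# little noise, so they are strongly correlated.
rng = np.random.default_rng(1)
base = rng.normal(size=500)
bands = np.column_stack([base + rng.normal(scale=0.05, size=500) for _ in range(4)])

# Eigenvalues of the band covariance matrix, in descending order.
eigvals = np.linalg.eigvalsh(np.cov(bands.T))[::-1]
frac = eigvals[0] / eigvals.sum()
print(frac > 0.95)  # True: the first component dominates
```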
Set up
To run this notebook, the following Python packages need to be installed. You can install the required packages from the command line with pip install spectral scikit-learn cvxopt.
or if already in a Jupyter Notebook, run the following code in a Notebook code cell.
Packages:
- pylab
- spectral
- scikit-learn (optional)
python
import sys
!{sys.executable} -m pip install spectral
!conda install --yes --prefix {sys.prefix} scikit-learn
!conda install --yes --prefix {sys.prefix} cvxopt
In order to make use of the interactive graphics capabilities of spectralpython, such as N-Dimensional Feature Display, you should work in a Python 3.6 environment (as of July 2018).
For more, read from <a href="http://www.spectralpython.net/graphics.html" target="_blank">Spectral Python</a>.
Optional:
matplotlib wx backend (for 3-D visualization of PCA, requires Python 3.6)
Find out more on
<a href="https://stackoverflow.com/questions/42007164/how-to-install-wxpython-phoenix-for-python-3-6" target="_blank"> StackOverflow</a>.
python
conda install -c newville wxpython-phoenix
Managing Conda Environments
- nb_conda_kernels package provides a separate jupyter kernel for each conda environment
- Find out more on
<a href="https://conda.io/docs/user-guide/tasks/manage-environments.html" target="_blank"> Conda docs</a>.
python
conda install -c conda-forge nb_conda_kernels
First, import the required packages and set display preferences:
End of explanation
img = envi.open('../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr',
'../data/Hyperspectral/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')
Explanation: For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI
End of explanation
print('First 3 Band Center Wavelengths:',img.bands.centers[:3])
print('Last 3 Band Center Wavelengths:',img.bands.centers[-3:])
Explanation: Note that the information is stored differently when read in with envi.open. We can find the wavelength information in img.bands.centers. Let's take a look at the first and last wavelength values:
End of explanation
# Use assignment here: == only compares against NaN (always False) and
# changes nothing. Overwrite the water vapor windows and the last 10
# noisy bands with NaN instead.
centers = np.array(img.bands.centers, dtype=float)
centers[191:211] = np.nan
centers[281:314] = np.nan
centers[-10:] = np.nan
img.bands.centers = list(centers)
Explanation: We'll set the Water Vapor Band windows to NaN:
End of explanation
img.params
Explanation: To get a quick look at the img data, use the params method:
End of explanation
md = img.metadata
print('Metadata Contents:')
for item in md:
print('\t',item)
Explanation: Metadata information is stored in img.metadata, a dictionary. Let's look at the metadata contents:
End of explanation
print('description:',md['description'])
print('map info:',md['map info'])
Explanation: To access any of these metadata items, use the syntax md['description'] or md['map info']:
End of explanation
print(type(md['wavelength']))
print('Number of Bands:',len(md['wavelength']))
Explanation: You can also use type and len to look at the type and length (or number) of some of the metadata contents:
End of explanation
view = imshow(img,bands=(58,34,19),stretch=0.05,title="RGB Image of 2017 SERC Tile")
print(view)
Explanation: Let's look at the data using imshow, a wrapper around matplotlib's imshow for multi-band images:
End of explanation
valid_band_range = [i for j in (range(0,191), range(212, 281), range(315,415)) for i in j] #remove water vapor bands
img_subset = img.read_subimage(range(400,600),range(400,600),bands=valid_band_range) #subset image by area and bands
Explanation: When dealing with NEON hyperspectral data, we first want to remove the water vapor & noisy bands, keeping only the valid bands. To speed up the classification algorithms for demonstration purposes, we'll look at a subset of the data using read_subimage, a built in method to subset by area and bands. Type help(img.read_subimage) to see how it works.
End of explanation
view = imshow(img_subset,bands=(58,34,19),stretch=0.01,title="RGB Image of 2017 SERC Tile Subset")
Explanation: Plot the subsetted image for reference:
End of explanation
(m,c) = kmeans(img_subset,5,50)
Explanation: Now that we have the image subsetted, let's run the k-means algorithm. Type help(kmeans) to show how the function works. To run the k-means algorithm on the image and create 5 clusters, using a maximum of 50 iterations, use the following syntax:
End of explanation
print(c.shape)
Explanation: Note that the algorithm terminated after 14 iterations, when the pixels stopped being reassigned.
Data Tip: You can interrupt the algorithm with a keyboard interrupt (CTRL-C) if you notice that the number of reassigned pixels drops off. Kmeans catches the KeyboardInterrupt exception and returns the clusters generated at the end of the previous iteration. If you are running the algorithm interactively, this feature allows you to set the max number of iterations to an arbitrarily high number and then stop the algorithm when the clusters have converged to an acceptable level. If you happen to set the max number of iterations too small (many pixels are still migrating at the end of the final iteration), you can simply call kmeans again to resume processing by passing the cluster centers generated by the previous call as the optional start_clusters argument to the function.
Let's take a look at the cluster centers c. In this case, these represent the spectra of the five clusters of reflectance that the data were grouped into.
End of explanation
%matplotlib inline
import pylab
pylab.figure()
# Repeated plot() calls overlay by default in modern matplotlib, so
# the removed pylab.hold(1) call is no longer needed.
for i in range(c.shape[0]):
    pylab.plot(c[i])
pylab.title('Spectral Classes from K-Means Clustering')
pylab.xlabel('Bands (with Water Vapor Windows Removed)')
pylab.ylabel('Reflectance')
pylab.show()
#%matplotlib notebook
view = imshow(img_subset, bands=(58,34,19),stretch=0.01, classes=m)
view.set_display_mode('overlay')
view.class_alpha = 0.5 #set transparency
view.show_data
Explanation: c contains 5 groups of spectral curves with 360 bands (the # of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:
End of explanation
pc = principal_components(img_subset)
pc_view = imshow(pc.cov)
xdata = pc.transform(img_subset)
Explanation: Challenges: K-Means
What do you think the spectral classes in the figure you just created represent?
Try using a different number of clusters in the kmeans algorithm (e.g., 3 or 10) to see what spectral classes and classifications result.
Principal Component Analysis (PCA)
Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).
End of explanation
pcdata = pc.reduce(num=10).transform(img_subset)
pc_0999 = pc.reduce(fraction=0.999)
# How many eigenvalues are left?
print(len(pc_0999.eigenvalues))
img_pc = pc_0999.transform(img_subset)
print(img_pc.shape)
v = imshow(img_pc[:,:,:5], stretch_all=True)
Explanation: In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Budget accounting with diffprivlib
Diffprivlib includes a budget accountant to allow you to keep track of privacy budget being spent. The budget accounting is handled by the BudgetAccountant class.
Basic functionality of the BudgetAccountant includes initialisation with an appropriate epsilon and delta (if desired), determining the total current spend with total() and evaluating the remaining budget to be spent with remaining(). The number of recorded budget spends can be found using len().
Step1: If a query were to exceed the privacy budget specified in the accountant, an error is raised and execution ceases.
Step2: Using BudgetAccountant to track privacy budget
There are three ways to use the BudgetAccountant class to track budget spend across many operations
Step3: Setting the slack
Privacy budgets typically compose (add up) linearly, unless you allow a slack in your delta. This is governed by the slack parameter in the initialisation.
The benefit of a non-zero slack is especially evident when many queries are being asked. | Python Code:
import matplotlib.pyplot as plt
from numpy.random import random
from diffprivlib import BudgetAccountant
from diffprivlib.tools import mean, var
X = random(100)
acc = BudgetAccountant(epsilon=5, delta=0)
dp_mean = mean(X, bounds=(0, 1), accountant=acc)
print("Total spent: %r" % (acc.total(),))
print("Remaining budget (for 1 query): %r" % (acc.remaining(),))
print("Remaining budget (for 4 queries): %r" % (acc.remaining(4),))
print("Number of queries recorded: %d" % len(acc))
Explanation: Budget accounting with diffprivlib
Diffprivlib includes a budget accountant to allow you to keep track of privacy budget being spent. The budget accounting is handled by the BudgetAccountant class.
Basic functionality of the BudgetAccountant includes initialisation with an appropriate epsilon and delta (if desired), determining the total current spend with total() and evaluating the remaining budget to be spent with remaining(). The number of recorded budget spends can be found using len().
End of explanation
acc = BudgetAccountant(1.5, 0)
dp_mean = mean(X, epsilon=1, bounds=(0, 1), accountant=acc)
try:
dp_std = var(X, epsilon=1, bounds=(0, 1), accountant=acc)
except Exception as e:
print("Error raised {}: {}".format(type(e), e))
Explanation: If a query were to exceed the privacy budget specified in the accountant, an error is raised and execution ceases.
End of explanation
acc_p = BudgetAccountant()
mean(X, epsilon=1.618, bounds=(0, 1), accountant=acc_p)
print("Total spend: %r" % (acc_p.total(),))
acc_d = BudgetAccountant()
acc_d.set_default()
mean(X, epsilon=2.718, bounds=(0, 1))
print("Total spend: %r" % (acc_d.total(),))
with BudgetAccountant() as acc_w:
mean(X, epsilon=1.5705, bounds=(0, 1))
var(X, epsilon=1.5705, bounds=(0, 1))
print("Total spend: %r" % (acc_w.total(),))
Explanation: Using BudgetAccountant to track privacy budget
There are three ways to use the BudgetAccountant class to track budget spend across many operations:
1. Parametrisation: Passed as a parameter (accountant=acc)
2. Default: Set as a default (set_default())
3. Context manager: Using "with" over a block of code
End of explanation
acc_n = BudgetAccountant()
acc_a = BudgetAccountant(slack=1e-3)
epsilon, queries = 2**-6, 30
budget_naive = [0] + [acc_n.spend(epsilon, 0).total()[0] for i in range(queries)]
budget_advanced = [0] + [acc_a.spend(epsilon, 0).total()[0] for i in range(queries)]
plt.plot(range(queries + 1), budget_naive, label="Naive composition (slack=%g)" % acc_n.slack)
plt.plot(range(queries + 1), budget_advanced, label="Advanced composition (slack=%g)" % acc_a.slack)
plt.xlabel("Queries")
plt.ylabel("Epsilon spent")
plt.xlim(0, queries)
plt.ylim(0, None)
plt.legend()
Explanation: Setting the slack
Privacy budgets typically compose (add up) linearly, unless you allow a slack in your delta. This is governed by the slack parameter in the initialisation.
The benefit of a non-zero slack is especially evident when many queries are being asked.
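As a rough back-of-the-envelope illustration (not diffprivlib's internal accounting, which may differ), the advanced composition theorem bounds the total spend of k queries by eps*sqrt(2k ln(1/delta')) + k*eps*(exp(eps)-1), which beats the naive bound k*eps for small epsilon and many queries:

```python
import numpy as np

# Naive composition: total epsilon grows linearly with the query count.
def naive(eps, k):
    return k * eps

# Advanced composition bound, at the cost of a delta slack delta'.
def advanced(eps, k, slack):
    return eps * np.sqrt(2 * k * np.log(1 / slack)) + k * eps * (np.exp(eps) - 1)

eps, k, slack = 2**-6, 30, 1e-3
print(advanced(eps, k, slack) < naive(eps, k))  # True: the slack pays off
```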
End of explanation
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
load a DistilGPT2 tokenizer to process the text subfield
| Python Code::
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Changepoint Detection in Python
This code computes the probability of changepoints in a time series. In this notebook I show how you can use it.
First let's generate some data
Step1: Let's have a look at what the data look like
Step2: Offline Changepoint Detection
Let's compute the probability of changepoints at each time step. We need two things for that. First, a prior for how probable it is to have two successive changepoints a distance t apart. Second, a model of the likelihood of the data in a sequence [s, t], given that there is no changepoint within that sequence.
For this example we assume a uniform prior over the length of sequences (const_prior) and a piecewise gaussian model (gaussian_obs_log_likelihood).
Step3: The offline_changepoint_detection() function returns three things
Step4: That works pretty well, but is somewhat slow. It's possible to speed that up by truncating a sum in the algorithm. However that sometimes leads to $\infty$ values. Set the truncate parameter to e.g. -10 to test that out.
To understand what is happening, have a look at the following papers
Step5: The online version computes slightly different things. For each time step it returns the probability distribution over the length of the last sequence. E.g. R[7, 3] is the probability at time step 7 that the last sequence is already 3 time steps long. It also returns the MAP estimate at each timestep for convenience.
To plot the distributions we use a grey-scale colormap, black is zero, white 1. We also plot the probability at each time step for a sequence length of 0, i.e. the probability of the current time step to be a changepoint.
Because it's very hard to correctly evaluate a change after a single sample of a new distribution, we can instead "wait" for Nw samples and evaluate the probability of a change happening Nw samples prior.
Step6: Well, not bad, considering how much faster it is (if you can afford waiting for that extra Nw samples). To understand the whole algorithm look at
[1] Ryan P. Adams, David J.C. MacKay, Bayesian Online Changepoint Detection,
arXiv 0710.3742 (2007)
There you also find a Matlab version, which this code is based on. | Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from bayesian_changepoint_detection.generate_data import generate_normal_time_series
%matplotlib inline
%load_ext autoreload
%autoreload 2
partition, data = generate_normal_time_series(7, 50, 200)
Explanation: Bayesian Changepoint Detection in Python
This code computes the probability of changepoints in a time series. In this notebook I show how you can use it.
First let's generate some data:
End of explanation
fig, ax = plt.subplots(figsize=[16, 12])
ax.plot(data)
Explanation: Let's have a look at what the data look like:
End of explanation
from bayesian_changepoint_detection.priors import const_prior
from functools import partial
prior_function = partial(const_prior, p=1/(len(data) + 1))
from bayesian_changepoint_detection.bayesian_models import offline_changepoint_detection
import bayesian_changepoint_detection.offline_likelihoods as offline_ll
Q, P, Pcp = offline_changepoint_detection(data, prior_function ,offline_ll.StudentT(),truncate=-40)
Explanation: Offline Changepoint Detection
Let's compute the probability of changepoints at each time step. We need two things for that. First, a prior for how probable it is to have two successive changepoints a distance t apart. Second, a model of the likelihood of the data in a sequence [s, t], given that there is no changepoint within that sequence.
For this example we assume a uniform prior over the length of sequences (const_prior) and a piecewise gaussian model (gaussian_obs_log_likelihood).
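To make the prior concrete, here is a hedged sketch (not necessarily the library's exact implementation of const_prior): a uniform prior simply assigns the same probability 1/(n+1) to every possible changepoint position in a series of length n, matching the p=1/(len(data) + 1) used above.

```python
import numpy as np

# Illustrative uniform prior over changepoint positions for a series
# of length n: every position gets probability 1 / (n + 1).
def uniform_gap_prior(n):
    return np.full(n, 1.0 / (n + 1))

prior = uniform_gap_prior(200)
print(float(prior[0]))  # 1/201 for every position
```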
End of explanation
fig, ax = plt.subplots(2, figsize=[18, 16], sharex=True)
ax[0].plot(data[:])
ax[1].plot(np.exp(Pcp).sum(0))
Explanation: The offline_changepoint_detection() function returns three things: Q[t], the log-likelihood of data [t, n]; P[t, s], the log-likelihood of a data sequence [t, s], given there is no changepoint between t and s; and Pcp[i, t], the log-likelihood that the i-th changepoint is at time step t. To actually get the probability of a changepoint at time step t, sum the probabilities.
How does that look like for our toy-data?
End of explanation
from bayesian_changepoint_detection.hazard_functions import constant_hazard
hazard_function = partial(constant_hazard, 250)
from bayesian_changepoint_detection.bayesian_models import online_changepoint_detection
import bayesian_changepoint_detection.online_likelihoods as online_ll
R, maxes = online_changepoint_detection(
data, hazard_function, online_ll.StudentT(alpha=0.1, beta=.01, kappa=1, mu=0)
)
Explanation: That works pretty well, but it is somewhat slow. It's possible to speed that up by truncating a sum in the algorithm. However, that sometimes leads to $\infty$ values. Set the truncate parameter to e.g. -10 to test that out.
To understand, what is happening have a look at the following papers:
[1] Paul Fearnhead, Exact and Efficient Bayesian Inference for Multiple
Changepoint problems, Statistics and computing 16.2 (2006), pp. 203--213
[2] Xuan Xiang, Kevin Murphy, Modeling Changing Dependency Structure in
Multivariate Time Series, ICML (2007), pp. 1055--1062
Online Changepoint Detection
Let's assume the data points come in one after another and not as these nice batches. During the process you want to know if the new point has the same hyperparameter or different ones. You need an online changepoint detection.
Happily there is one, although its interface is kind of suboptimal so far, in that it still expects batches of data and just assumes they drop in over time... I will change that at some point.
End of explanation
import matplotlib.cm as cm
epsilon = 1e-7
fig, ax = plt.subplots(3, figsize=[18, 16], sharex=True)
ax[0].plot(data)
sparsity = 5 # only plot every fifth data for faster display
density_matrix = -np.log(R[0:-1:sparsity, 0:-1:sparsity]+epsilon)
ax[1].pcolor(np.array(range(0, len(R[:,0]), sparsity)),
np.array(range(0, len(R[:,0]), sparsity)),
density_matrix,
cmap=cm.Greys, vmin=0, vmax=density_matrix.max(),
shading='auto')
Nw=10
ax[2].plot(R[Nw,Nw:-1])
Explanation: The online version computes slightly different things. For each time step it returns the probability distribution over the length of the last sequence. E.g. R[7, 3] is the probability at time step 7 that the last sequence is already 3 time steps long. It also returns the MAP estimate at each timestep for convenience.
To plot the distributions we use a grey-scale colormap: black is zero, white is one. We also plot the probability at each time step for a sequence length of 0, i.e. the probability that the current time step is a changepoint.
Because it's very hard to correctly evaluate a change after a single sample of a new distribution, we can instead "wait" for Nw samples and evaluate the probability of a change happening Nw samples prior.
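As a concrete (hypothetical) post-processing step — the function name and threshold are our own, not part of the package — the delayed probabilities can be thresholded to flag likely changepoints:

```python
import numpy as np

def flag_changepoints(R, Nw=10, threshold=0.3):
    # R[Nw, t] is the probability at time t that the current run is
    # exactly Nw steps old, i.e. that a change happened Nw samples ago;
    # returned indices are offsets into the sliced series R[Nw, Nw:-1]
    probs = R[Nw, Nw:-1]
    return np.flatnonzero(probs > threshold)
```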
End of explanation
partition, data = generate_normal_time_series(7, 50, 200)
%timeit Q, P, Pcp = offline_changepoint_detection(data, prior_function, offline_ll.StudentT(), truncate=-40)
%timeit R, maxes = online_changepoint_detection(data, hazard_function, online_ll.StudentT(10, .03, 1, 0))
Explanation: Well, not bad, considering how much faster it is (if you can afford waiting for those extra Nw samples). To understand the whole algorithm, look at
[1] Ryan P. Adams, David J.C. MacKay, Bayesian Online Changepoint Detection,
arXiv 0710.3742 (2007)
There you also find a Matlab version, which this code is based on.
End of explanation |
7,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Week 11
Step3: Before we go on, a note of caution is needed for class attributes. Do you remember the strange Fibonacci sequence function from our first class?
Step6: The same issue can happen with classes, only this is a much more common source of bugs.
If you are only using strings and numbers, the behaviour will likely be much as you expect. However, if using a list, dictionary, or other similar type, you may get a surprise.
Step9: Both of our objects point to the same instance of the list type so adding a new friend to either object shows up in both.
The solution to this is creating our friends attribute only at instantiation of the object. This can be done by creating it in the __init__ method.
Step10: Objects have their own namespace, although we have created variables called name, age, and friends they can only be accessed in the context of the object.
Step15: We are not limited to special methods when creating classes. Standard functions, or in this context methods, are an integral part of object oriented programming. Their definition is identical to special methods and functions outside of classes.
Step16: Private vs Public
Some programming languages support hiding methods and attributes in an object. This can be useful to simplify the public interface someone using the class will see while still breaking up components into manageable blocks 'under-the-hood'. We will discuss designing the public interface in detail in future classes.
Python does not support private variables beyond convention. Names prefixed with an underscore are assumed to be private. This means they may be changed without warning between different versions of the package. Changing public attributes or methods in that way is highly discouraged.
Glossary
Class
Step17: Each of the classes we create by inheriting from our general class can be thought of as having an 'is-a' relationship with the general class. For example, Equipment is an Item; Consumable is an Item.
Not yet implemented methods
There is one other situation we should consider. Occasionally we will want a class of a particular type to always implement a particular method even though we are unable to implement that method in our parent class. We need some way of raising an error when the parent class is inherited and the method is not implemented.
As a simple example consider a class representing length. We might create classes for meters, miles, feet, etc. Keeping the original units when performing operations (adding, subtracting, etc) would prevent rounding errors but each class would need custom logic.
Returning to our laboratory inventory system one way we can implement this is below
Step18: A disadvantage of this approach is that we only see the error message when we call the method. The error is in the way we implemented the class, so it would be more intuitive to get an error earlier, when we first create the object.
This can be achieved using the abstract method decorator.
Step19: Either of these approaches works well for adding new methods or completely changing the behaviour of a method. Often we only need to make a more subtle change. In this situation it can be useful to call a method from a parent class while only implementing our new functionality in the child class.
There are two approaches for this.
Step20: Using super() is usually the best approach; the reasons for this are covered in detail in this blog post
Multiple Inheritance
We are not limited to inheriting from a single class. It is possible to merge functionality from multiple different classes simply by inheriting from them.
When inheriting from multiple classes that contain a method or attribute with the same name there is a particular order in which the names are resolved.
Step21: A simple rule-of-thumb is that search is depth first. The details are a little more complicated.
isinstance
Often we need to check whether a particular variable is an instance of a particular class. For example, returning to our laboratory inventory system we would want to check that we only add instances of Item or its subclasses to our storage locations.
Step26: Duck typing
A popular alternative in python is duck typing, an approach named after the idea that,
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
What this means for programming is that instead of checking for a particular class, the methods and attributes that are actually needed are checked for.
Composition
Let's switch to another example from last week. We looked at a cookbook application and were often unsure whether things like ingredients should be attributes on the recipe class or classes in their own right.
Often the answer is both.
These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object, it has ingredients, it has instructions, etc.
Let's look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
Step27: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests
Step28: The API documentation for requests
The Response class
Some useful features
Step33: The API documentation for the DataFrame object.
The actual code.
Some useful features
Step38: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe | Python Code:
class Person(object):
A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age.
mammal = True
def __init__(self, name, age):
Return a Person object with name and age set to the values supplied
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old.'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
print(person1, person2)
Explanation: Week 11: Inheritance, abstraction, and crafting the public interface.
Last week we looked at several example projects and the classes we might use to implement them.
Before we revisit and expand on these we will cover the remaining material from last week.
End of explanation
def next_fibonacci(status=[]):
if len(status) < 2:
status.append(1)
return 1
status.append(status[-2] + status[-1])
return status[-1]
print(next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci())
Explanation: Before we go on, a note of caution is needed for class attributes. Do you remember the strange Fibonacci sequence function from our first class?
End of explanation
class Person(object):
A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age.
friends = []
def __init__(self, name, age):
Return a Person object with name and age set to the values supplied
self.name = name
self.age = age
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
Explanation: The same issue can happen with classes, only this is a much more common source of bugs.
If you are only using strings and numbers, the behaviour will likely be much as you expect. However, if using a list, dictionary, or other similar type, you may get a surprise.
End of explanation
class Person(object):
A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age.
def __init__(self, name, age):
Return a Person object with name and age set to the values supplied
self.name = name
self.age = age
self.friends = []
def __str__(self):
return '{0} who is {1} years old'.format(self.name, self.age)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.friends.append('Charlie')
person2.friends.append('Danielle')
print(person1.friends, person2.friends)
Explanation: Both of our objects point to the same instance of the list type so adding a new friend to either object shows up in both.
The solution to this is creating our friends attribute only at instantiation of the object. This can be done by creating it in the __init__ method.
End of explanation
print('This works:', person1.friends)
print('This does not work:', friends)
Explanation: Objects have their own namespace: although we have created variables called name, age, and friends, they can only be accessed in the context of the object.
End of explanation
class Person(object):
A class definition for a person. The following attributes are supported:
Attributes:
name: A string representing the person's name.
age: An integer representing the person's age.
def __init__(self, name, age):
Return a Person object with name and age set to the values supplied
self.name = name
self.age = age
self.friends = []
def __str__(self):
Return a string representation of the object
return '{0} who is {1} years old'.format(self.name, self.age)
def add_friend(self, friend):
Add a friend
self.friends.append(friend)
person1 = Person('Alice', 25)
person2 = Person('Bob', 30)
person1.add_friend('Charlie')
person2.add_friend('Danielle')
print(person1.friends, person2.friends)
Explanation: We are not limited to special methods when creating classes. Standard functions, or in this context methods, are an integral part of object oriented programming. Their definition is identical to special methods and functions outside of classes.
End of explanation
class Item(object):
def __init__(self, name, description, location):
self.name = name
self.description = description
self.location = location
def update_location(self, new_location):
pass
class Equipment(Item):
pass
class Consumable(Item):
def __init__(self, name, description, location, initial_quantity, current_quantity, storage_temp, flammability):
self.name = name
self.description = description
self.location = location
self.initial_quantity = initial_quantity
        self.current_quantity = current_quantity
        self.storage_temp = storage_temp
self.flammability = flammability
def update_quantity_remaining(self, amount):
pass
Explanation: Private vs Public
Some programming languages support hiding methods and attributes in an object. This can be useful to simplify the public interface someone using the class will see while still breaking up components into manageable blocks 'under-the-hood'. We will discuss designing the public interface in detail in future classes.
Python does not support private variables beyond convention. Names prefixed with an underscore are assumed to be private. This means they may be changed without warning between different versions of the package. Changing public attributes or methods in that way is highly discouraged.
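A small sketch of the convention (the class here is hypothetical): a single leading underscore signals "internal, may change", but Python does not stop anyone from reaching in.

```python
class Counter(object):
    def __init__(self):
        self._count = 0  # private by convention, not by enforcement

    def increment(self):
        self._count += 1
        return self._count

c = Counter()
c.increment()
print(c._count)  # still accessible; the underscore is only a warning
```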
Glossary
Class: Our definition, or template, for an object.
Object: An instance of a class.
Method: A function that belongs to an object
Attribute: A characteristic of an object, these can be data attributes and methods.
Now we will revisit the laboratory inventory system from last week.
Example 1: A Laboratory Inventory
I would like to keep track of all the items in the laboratory so I can easily find them the next time I need them. Both equipment and consumables would be tracked. We have multiple rooms, and items can be on shelves, in refrigerators, in freezers, etc. Items can also be in boxes containing other items in all these places.
The words in bold would all be good ideas to turn into classes. Now we know some of the classes we will need we can start to think about what each of these classes should do, what the methods will be. Let's consider the consumables class:
For consumables we will need to manage their use so there should be an initial quantity and a quantity remaining that is updated every time we use some. We want to make sure that temperature sensitive consumables are always stored at the correct temperature, and that flammables are stored in a flammables cabinet etc.
The consumable class will need a number of attributes:
Initial quantity
Current quantity
Storage temperature
Flammability
The consumable class will need methods to:
Update the quantity remaining
Check for improper storage?
The consumable class might interact with the shelf, refrigerator, freezer, and/or box classes.
Reading back through our description of consumables there is reference to a flammables cabinet that was not mentioned in our initial description of the problem. This is an iterative design process so we should go back and add a flammables cabinet class.
If we expand our list to all the classes we plan to use we get the following:
Items
Attributes
Name
Description
Location
Methods
Update location
Interactions
Every other class except items and consumables
Laboratory
Attributes
?
Methods
Search
Interactions
Every other class
Equipment
Attributes
Name
Description
Location
Methods
Update location
Interactions
Every other class except items and consumables
Consumables
Attributes
Name
Description
Location
Initial quantity
Current quantity
Storage temperature
Flammability
Methods
Update location
Update quantity remaining
Check for appropriate storage
Interactions
Every other class except equipment and items
Rooms
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class
Shelves
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class possible although refrigerator and freezer are unlikely
Refrigerators
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Temperature
Methods
Search
Interactions
Every other class possible although freezer and flammables cabinet unlikely
Freezers
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Temperature
Methods
Search
Interactions
Every other class possible although refrigerator and flammables cabinet unlikely
Boxes
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class
Flammables Cabinet
Attributes
Name
Description
Location
Storage locations within this location
Items stored here
Methods
Search
Interactions
Every other class possible although refrigerator and freezer unlikely
Although this is a long list careful examination reveals that there is a lot of repetition.
Items and equipment are identical and consumables is similar, adding several extra attributes and methods.
Rooms, shelves, refrigerators, freezers, boxes and flammables cabinet are all similar, only differing in the occasional attribute.
Our three main groups are:
* Laboratory
* Items (Items, equipment, and consumables)
* Locations (Rooms, shelves, refrigerators, freezers, boxes and flammables cabinet)
So much duplication is problematic: it is difficult to maintain and carries a greater risk of bugs.
Inheritance
There is a better way - we can create a generic class with the shared functionality and then inherit from it when we create the other classes.
For example an Item class would contain the basic attributes and methods. The Equipment class could then inherit from this class without modification. The Consumable class would also inherit from Item and only add the extra attributes and methods uniquely needed by the Consumable class.
End of explanation
class Item(object):
def safely_stored(self):
raise NotImplementedError('override in subclass')
class Consumable(Item):
def safely_stored(self):
return True
a = Item()
a.safely_stored()
b = Consumable()
b.safely_stored()
Explanation: Each of the classes we create by inheriting from our general class can be thought of as having an 'is-a' relationship with the general class. For example, Equipment is an Item; Consumable is an Item.
Not yet implemented methods
There is one other situation we should consider. Occasionally we will want a class of a particular type to always implement a particular method even though we are unable to implement that method in our parent class. We need some way of raising an error when the parent class is inherited and the method is not implemented.
As a simple example consider a class representing length. We might create classes for meters, miles, feet, etc. Keeping the original units when performing operations (adding, subtracting, etc) would prevent rounding errors but each class would need custom logic.
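A sketch of that length idea using the NotImplementedError approach (the class and method names here are our own, not from the notebook):

```python
class Length(object):
    def __init__(self, value):
        self.value = value

    def to_meters(self):
        # there is no sensible default conversion, so force subclasses
        # to supply their own
        raise NotImplementedError('override in subclass')

class Feet(Length):
    def to_meters(self):
        return self.value * 0.3048
```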
Returning to our laboratory inventory system one way we can implement this is below:
End of explanation
from abc import ABCMeta, abstractmethod
class Item(metaclass=ABCMeta):
@abstractmethod
def safely_stored(self):
pass
class Consumable(Item):
def safely_stored(self):
return True
a = Item()
b = Consumable()
b.safely_stored()
Explanation: A disadvantage of this approach is that we only see the error message when we call the method. The error is in the way we implemented the class, so it would be more intuitive to get an error earlier, when we first create the object.
This can be achieved using the abstract method decorator.
End of explanation
class A(object):
def a(self):
print('a in class A')
class B(A):
def a(self):
A.a(self)
print('b in class B')
a = A()
a.a()
b = B()
b.a()
class A(object):
def a(self):
print('a in class A')
class B(A):
def a(self):
super().a()
print('b in class B')
a = A()
a.a()
b = B()
b.a()
Explanation: Either of these approaches works well for adding new methods or completely changing the behaviour of a method. Often we only need to make a more subtle change. In this situation it can be useful to call a method from a parent class while only implementing our new functionality in the child class.
There are two approaches for this.
End of explanation
class A(object):
def a(self):
print('A-a')
class A2(object):
def a(self):
print('A2-a')
class B(A, A2):
pass
a = A()
a.a()
a2 = A2()
a2.a()
b = B()
b.a()
class A(object):
def a(self):
print('A-a')
class A2(object):
def a(self):
print('A2-a')
class B(A):
pass
class C(B, A2):
pass
a = A()
a.a()
a2 = A2()
a2.a()
c = C()
c.a()
Explanation: Using super() is usually the best approach, the reasons for this are covered in detail in this blog post
Multiple Inheritance
We are not limited to inheriting from a single class. It is possible to merge functionality from multiple different classes simply by inheriting from them.
When inheriting from multiple classes that contain a method or attribute with the same name there is a particular order in which the names are resolved.
End of explanation
class Item(object):
def safely_stored(self):
raise NotImplementedError('override in subclass')
class Consumable(Item):
def safely_stored(self):
return True
a = Item()
b = Consumable()
print('a instance of Item:', isinstance(a, Item))
print('b instance of Consumable:', isinstance(b, Consumable))
print('b instance of Item:', isinstance(b, Item))
print('a instance of Consumable:', isinstance(a, Consumable))
Explanation: A simple rule-of-thumb is that search is depth first. The details are a little more complicated.
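The exact order can always be inspected on the class itself. Using the C(B, A2) hierarchy above (redefined here in skeletal form so the snippet stands alone):

```python
class A(object): pass
class A2(object): pass
class B(A): pass
class C(B, A2): pass

# method resolution order: C, then B, then A, then A2, then object
print([cls.__name__ for cls in C.__mro__])  # ['C', 'B', 'A', 'A2', 'object']
```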
isinstance
Often we need to check whether a particular variable is an instance of a particular class. For example, returning to our laboratory inventory system we would want to check that we only add instances of Item or its subclasses to our storage locations.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Duck typing
A popular alternative in python is duck typing, an approach named after the idea that,
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
What this means for programming is that instead of checking for a particular class, the methods and attributes that are actually needed are checked for.
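A sketch of the duck-typed version of that storage check (the function here is our own, not from the inventory code): rather than isinstance(item, Item), test for the behaviour we actually rely on.

```python
def add_to_location(contents, item):
    # anything that can update_location counts as an "item" here
    if not hasattr(item, 'update_location'):
        raise TypeError('item must provide update_location()')
    contents.append(item)

class Duck(object):
    def update_location(self, new_location):
        self.location = new_location

shelf = []
add_to_location(shelf, Duck())  # accepted: it quacks like an Item
```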
Composition
Let's switch to another example from last week. We looked at a cookbook application and were often unsure whether things like ingredients should be attributes on the recipe class or classes in their own right.
Often the answer is both.
These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object, it has ingredients, it has instructions, etc.
Let's look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
End of explanation
import requests
r = requests.get('https://api.github.com/repos/streety/biof509/events')
print(r.status_code)
print(r.headers['content-type'])
print(r.text[:1000])
print(r.json()[0]['payload']['commits'][0]['message'])
type(r)
Explanation: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests: HTTP for Humans
Requests is a package used for making HTTP requests. There are options in the Python standard library for making HTTP requests, but they can seem difficult to use.
End of explanation
import pandas as pd
data = pd.DataFrame([[0,1,2,3], [4,5,6,7], [8,9,10,11]],
index=['a', 'b', 'c'],
columns=['col1', 'col2', 'col3', 'col4'])
data
print(data.shape)
print(data['col1'])
print(data.col1)
import matplotlib.pyplot as plt
%matplotlib inline
data.plot()
data.to_csv('Wk05-temp.csv')
data2 = pd.read_csv('Wk05-temp.csv', index_col=0)
data2
Explanation: The API documentation for requests
The Response class
Some useful features:
property
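property lets a computed value be read like a plain attribute, which is part of what makes r.text feel so natural. A stripped-down sketch (a hypothetical class, not requests' actual implementation):

```python
class FakeResponse(object):
    def __init__(self, raw_bytes):
        self._raw = raw_bytes

    @property
    def text(self):
        # computed on access, but read without parentheses
        return self._raw.decode('utf-8')

resp = FakeResponse(b'hello world')
print(resp.text)  # hello world
```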
Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and
data analysis tools for the Python programming language.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: The API documentation for the DataFrame object.
The actual code.
Some useful features:
* classmethod
* property for html styling and property for shape
* __getitem__
* Public and private attributes/methods
* __getattr__
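The data['col1'] / data.col1 duality above comes from pairing __getitem__ with __getattr__. A minimal sketch of the idea (a hypothetical class, far simpler than pandas' real logic):

```python
class TinyFrame(object):
    def __init__(self, columns):
        self._columns = columns  # dict mapping column name -> values

    def __getitem__(self, name):
        return self._columns[name]

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self._columns[name]
        except KeyError:
            raise AttributeError(name)

frame = TinyFrame({'col1': [0, 4, 8]})
print(frame['col1'], frame.col1)
```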
Cookbook
We can now return to our cookbook example.
Displaying the ingredients needs to be improved.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
End of explanation |
7,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maximum Likelihood Estimation
Step1: Best parameter for n4
Step2: after removing the constant part
Step3: Coin example
Setup problem
Step4: Looking at the experiment shown above, we can easily predict that the probability of TAIL is 8/10, which is higher than the probability of HEAD, 2/10.
It is easy to calculate the probabilities in this setup without getting involved in maximum likelihood. However, there are other problems which are not as obvious as this coin tossing. So, we are now going to calculate these probabilities in a more methodical way.
Probability
Knowing parameters -> Prediction of outcome
Likelihood
Observation of data -> Estimation of parameters
Binomial distribution (the number of successes in N Bernoulli trials)
$$p(X=r; N, p) = \frac{N!}{r! (N-r)!} p^r * (1-p)^{N-r} $$
Applying the formula to the coin tossing problem, we end up with the following expression | Python Code:
import math
import numpy as np
import random
from collections import Counter
%matplotlib inline
n = 1000
experiments = []
for i in range(n):
a = random.randint(1, 6)
# key = "{}-{}".format(a, b)
# experiments[key] = experiments.get(key, 0) + 1
experiments.append(a)
from matplotlib import pyplot as plt
%matplotlib inline
x = experiments
plt.hist(x)
plt.show()
A = [1, 2, 3, 4, 5, 6]
n4 = 10
experiments = [1] * 22 + [2] * 16 + [3] * 21 + [4] * n4 + [5] * 19 + [6] * 12
n = len(experiments) * 1.0
print(n)
lh = []
for i in range(1, 100):
P4 = 1.0/i
    lh.append(math.factorial(int(n)) / (math.factorial(n4) * math.factorial(int(n - n4))) * ((P4**n4) * ((1-P4) ** (n-n4))))
plt.hist(experiments)
plt.show()
plt.plot(lh)
Explanation: Maximum Likelihood Estimation
End of explanation
n4/n
Explanation: Best parameter for n4
End of explanation
lh = []
for i in range(1, 100):
P4 = 1.0/i
lh.append( (P4**n4) * ((1-P4) ** (n-n4)) )
plt.plot(lh)
Explanation: after removing the constant part
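Dropping the multiplicative constant cannot move the location of the maximum; a quick standalone check with the same counts as above:

```python
import numpy as np

n, n4 = 100, 10
p = np.linspace(0.01, 0.99, 99)
likelihood = p**n4 * (1 - p)**(n - n4)   # constant factor dropped
const = 12345.0                          # any positive constant

# The argmax is unchanged by a positive constant factor.
same_argmax = np.argmax(likelihood) == np.argmax(const * likelihood)
p_hat = p[np.argmax(likelihood)]
print(same_argmax)  # True
print(p_hat)        # close to n4/n = 0.1
```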
End of explanation
# denoting tails by 0 and heads by 1
TAIL = 0
HEAD = 1
# tossing coint N times
N = 10
# 8 of N times tail occurs
TAIL_COUNT = 8
experiments = [TAIL] * TAIL_COUNT + [HEAD] * (N - TAIL_COUNT)
print(experiments, N)
Explanation: Coin example
Setup problem
End of explanation
PROBABILITY_SCALE = 100
likelihoods = []
for i in range(1, PROBABILITY_SCALE + 1):
P_TAIL = float(i) / PROBABILITY_SCALE
constant_part = (
math.factorial(N) /
(math.factorial(TAIL_COUNT) * math.factorial(N-TAIL_COUNT)))
likelihood = (
constant_part *
np.power(P_TAIL, TAIL_COUNT) *
np.power(1 - P_TAIL, N - TAIL_COUNT))
likelihoods.append((P_TAIL, likelihood))
plt.grid(True)
plt.plot(np.array(likelihoods)[:,0], np.array(likelihoods)[:,1])
Explanation: Looking at the experiment shown above, we can easily predict that the probability of TAIL is 8/10, which is higher than the probability of HEAD, 2/10.
It is easy to calculate the probabilities in this setup without getting involved in maximum likelihood. However, there are other problems which are not as obvious as this coin tossing. So, we are now going to calculate these probabilities in a more methodical way.
Probability
Knowing parameters -> Prediction of outcome
Likelihood
Observation of data -> Estimation of parameters
Binomial distribution (the number of successes in N Bernoulli trials)
$$p(X=r; N, p) = \frac{N!}{r! (N-r)!} p^r (1-p)^{N-r} $$
Applying the formula to the coin tossing problem, we end up with the following expression:
$$
\frac{N!}{TAIL\_COUNT! \, (N-TAIL\_COUNT)!} \; P\_TAIL^{TAIL\_COUNT} \cdot P\_HEAD^{HEAD\_COUNT}
$$
\begin{eqnarray}
{\cal L}(\theta | x^{(1)}\cdots x^{(N)}) & = & \prod_{n=1}^{N} f(x^{(n)};\theta )
\
& = & \prod_{n=1}^{N} \theta^{x^{(n)}} (1-\theta)^{1-x^{(n)}}
\
\log {\cal L}(\theta) & = & \sum_{n=1}^N x^{(n)} \log (\theta) + \sum_{n=1}^N (1- x^{(n)}) \log (1 - \theta)
\
& = & \log (\theta) \sum_{n=1}^N x^{(n)} + \log (1 - \theta) \sum_{n=1}^N (1- x^{(n)})
\end{eqnarray}
The likelihood function is simply the joint probability of observing the data.
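Setting the derivative of the log-likelihood to zero gives the closed-form estimate $\hat{\theta} = \frac{1}{N}\sum_{n} x^{(n)}$, i.e. the sample fraction of tails. A standalone numerical check with the same counts as in the code above (8 tails out of 10 tosses):

```python
import numpy as np

N, tail_count = 10, 8
thetas = np.linspace(0.01, 0.99, 99)
# Bernoulli log-likelihood, constants dropped
loglik = tail_count * np.log(thetas) + (N - tail_count) * np.log(1 - thetas)
theta_hat = thetas[np.argmax(loglik)]
print(theta_hat)  # close to tail_count / N = 0.8
```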
End of explanation |
7,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a passband involves
Step1: I don't care about the details, just show/remind me how it's done
Makes sense, and we don't judge
Step2: Getting started
Let us start by importing phoebe, numpy and matplotlib
Step3: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
Step4: Let us plot this simulated passband transmission function to see what it looks like
Step5: Let us now save these data in a file that we will use to register a new passband.
Step6: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
Step7: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset
Step8: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue
Step9: Checking the content property again shows that the table has been successfully computed
Step10: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level Passband class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]'
Step11: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson
Step12: This makes perfect sense
Step13: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 database. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take ~10 minutes to complete. We can now check the passband's content attribute again
Step14: Let us now use the same low-level function as before to compare normal emergent passband intensities for our custom passband between the blackbody and ck2004 model atmospheres. One other complication is that, unlike the blackbody model, which depends only on temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundance as well, so we need to pass those arrays.
Step15: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few hours to complete.
Step16: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables
Step17: This completes the computation of Castelli & Kurucz auxiliary tables.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available here) and you need to grab the files atmcofplanck.dat and atmcof.dat from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue
Step18: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes
Step19: Still an appreciable difference. This hopefully illustrates why excruciating caution should be exercised at all times when modeling radiation.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a passband involves:
* providing a passband transmission function;
* defining and registering parameters of the passband;
* computing blackbody response for the passband;
* [optional] computing Castelli & Kurucz (2004) response for the passband;
* [optional] computing Castelli & Kurucz (2004) limb darkening coefficients;
* [optional] computing Castelli & Kurucz (2004) limb darkening integrals;
* [optional] if the passband is one of the passbands included in the Wilson-Devinney code, importing the WD response; and
* saving the generated passband file.
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u
# Register a passband:
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf', pbset='Custom', pbname='mypb',
effwl=330., wlunits=u.nm, calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)', version=1.0,
comments='This is my first custom passband')
# Blackbody response:
pb.compute_blackbody_response()
# Castelli & Kurucz (2004) response:
pb.compute_ck2004_response(path='ck2004i')
pb.compute_ck2004_intensities(path='ck2004i')
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
# Wilson-Devinney response:
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
# Save the passband:
pb.save('my_passband.pb')
Explanation: I don't care about the details, just show/remind me how it's done
Makes sense, and we don't judge: you want to get to science. Provided that you have the passband transmission file available and the ck2004 database already downloaded, the sequence that will generate/register a new passband is:
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger(clevel='WARNING')
Explanation: Getting started
Let us start by importing phoebe, numpy and matplotlib:
End of explanation
wl = np.linspace(300, 360, 61)
ptf = np.zeros(len(wl))
ptf[(wl>=320) & (wl<=340)] = 1.0
Explanation: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
End of explanation
plt.xlabel('Wavelength [nm]')
plt.ylabel('Passband transmission')
plt.plot(wl, ptf, 'b-')
plt.show()
Explanation: Let us plot this simulated passband transmission function to see what it looks like:
End of explanation
np.savetxt('my_passband.ptf', np.vstack((wl, ptf)).T)
Explanation: Let us now save these data in a file that we will use to register a new passband.
End of explanation
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330.,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband')
Explanation: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
End of explanation
print(pb.content)
Explanation: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset:pbname string, for example Johnson:V, Cousins:Rc, etc. Thus, our fake passband will be Custom:mypb.
The following two arguments, effwl and wlunits, also come as a pair. PHOEBE uses effective wavelength to apply zero-level passband corrections when better options (such as model atmospheres) are unavailable. Effective wavelength is a transmission-weighted average wavelength in the units given by wlunits.
The calibrated parameter instructs PHOEBE whether to take the transmission function as calibrated, i.e. the flux through the passband is absolutely calibrated. If set to True, PHOEBE will assume that absolute intensities computed using the passband transmission function do not need further calibration. If False, the intensities are considered as scaled rather than absolute, i.e. correct to a scaling constant. Most modern passbands provided in the recent literature are calibrated.
The reference parameter holds a reference string to the literature from which the transmission function was taken from. It is common that updated transmission functions become available, which is the point of the version parameter. If there are multiple versions of the transmission function, PHOEBE will by default take the largest value, or the value that is explicitly requested in the filter string, i.e. Johnson:V:1.0 or Johnson:V:2.0.
Finally, the comments parameter is a convenience parameter to store any additional pertinent information.
Computing blackbody response
To significantly speed up calculations, passband coefficients are stored in lookup tables instead of computing the intensities over and over again on the fly. Computed passband tables are tagged in the content property of the class:
End of explanation
pb.compute_blackbody_response()
Explanation: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue:
End of explanation
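The idea behind such a lookup table can be sketched independently of PHOEBE: integrate the Planck function against the transmission curve for each temperature of interest. This is a simplified standalone illustration (plain Riemann sum, no limb darkening), not PHOEBE's actual implementation:

```python
import numpy as np

# physical constants (SI)
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_lambda(wl_m, teff):
    """Planck spectral radiance B_lambda in W m^-3 sr^-1."""
    return (2 * h * c**2 / wl_m**5) / (np.exp(h * c / (wl_m * kB * teff)) - 1.0)

# the same box transmission as above: 1 between 320 and 340 nm
wl = np.linspace(300e-9, 360e-9, 601)
ptf = np.where((wl >= 320e-9) & (wl <= 340e-9), 1.0, 0.0)
dwl = wl[1] - wl[0]

def passband_intensity(teff):
    """Transmission-weighted integral of the Planck function."""
    return np.sum(planck_lambda(wl, teff) * ptf) * dwl

# in the near-UV, intensity rises steeply with temperature
print(passband_intensity(8000) > passband_intensity(5772))  # True
```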
print(pb.content)
Explanation: Checking the content property again shows that the table has been successfully computed:
End of explanation
print(pb.Inorm(Teff=5772, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]))
Explanation: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level Passband class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]':
End of explanation
jV = phoebe.get_passband('Johnson:V')
teffs = np.linspace(5000, 8000, 100)
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^2/A]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='mypb')
plt.plot(teffs, jV.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='jV')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson:V passband.
End of explanation
pb.compute_ck2004_response(path='ck2004i', verbose=False)
Explanation: This makes perfect sense: the Johnson V transmission function is wider than our boxed transmission function, so intensity in the V band is larger at lower temperatures. However, at hotter temperatures the contribution to the UV flux increases and our box passband with a perfect transmission of 1 takes over.
Computing Castelli & Kurucz (2004) response
For any real science you will want to generate model atmosphere tables. The default choice in PHOEBE are the models computed by Fiorella Castelli and Bob Kurucz (website, paper) that feature new opacity distribution functions. In principle, you can generate PHOEBE-compatible tables for any model atmospheres, but that would require a bit of book-keeping legwork in the PHOEBE backend. Contact Andrej Prša to discuss an extension to other model atmospheres.
To compute Castelli & Kurucz (2004) tables for the passband of your choice, you will need to download a precomputed database of absolute intensities. This database is huge, so beware. You will need approximately 140GB of free space. Once you are sure you have this kind of space available, proceed to download the database tarball (28GB):
[cd into a parent directory that will hold the database]
wget http://phoebe-project.org/static/ck2004i.tgz
tar xzf ck2004i.tgz
Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once.
Once the database is unpacked, you are ready to compute the tables. We start with the ck2004 response table:
End of explanation
print(pb.content)
Explanation: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 database. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take ~10 minutes to complete. We can now check the passband's content attribute again:
End of explanation
loggs = np.ones(len(teffs))*4.43
abuns = np.zeros(len(teffs))
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^2/A]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now use the same low-level function as before to compare normal emergent passband intensities for our custom passband between the blackbody and ck2004 model atmospheres. One other complication is that, unlike the blackbody model, which depends only on temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundance as well, so we need to pass those arrays.
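The table lookup itself amounts to multilinear interpolation over a (Teff, log g, abundance) grid; a standalone sketch with a toy table (the real values come from the ck2004 files, not from this formula):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# toy axes and a toy "intensity" table
teff_ax = np.array([5000.0, 6000.0, 7000.0])
logg_ax = np.array([4.0, 4.5, 5.0])
abun_ax = np.array([-0.5, 0.0, 0.5])
table = (teff_ax[:, None, None] / 1000.0
         + logg_ax[None, :, None]
         + abun_ax[None, None, :])

interp = RegularGridInterpolator((teff_ax, logg_ax, abun_ax), table)
print(interp([[5772.0, 4.43, 0.0]]))  # -> [10.202] for this linear toy table
```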
End of explanation
pb.compute_ck2004_intensities(path='ck2004i', verbose=False)
Explanation: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few hours to complete.
End of explanation
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
Explanation: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables: one for limb darkening coefficients and the other for the integrated limb darkening. That is done by two methods that do not take appreciable time to complete:
End of explanation
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
Explanation: This completes the computation of Castelli & Kurucz auxiliary tables.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available here) and you need to grab the files atmcofplanck.dat and atmcof.dat from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue:
End of explanation
print(pb.content)
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^2/A]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='extern_atmx', ld_func='linear', ld_coeffs=[0.0]), label='wd_atmx')
plt.legend(loc='lower right')
plt.show()
Explanation: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes:
End of explanation
pb.save('my_passband.pb')
Explanation: Still an appreciable difference. This hopefully illustrates why excruciating caution should be exercised at all times when modeling radiation.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom:mypb'.
End of explanation |
7,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook we will use the same Cython code as in the last notebook. However, this time we will use the Vode integrator from ODEPACK (available in SciPy in scipy.integrate.ode). The reason for this is that it will be a fairer comparison against our upcoming example using CVode.
Step1: Subclassing ODEsys and providing a new method using scipy.integrate.ode
Step2: Creating a new mixin class
Step3: Same procedure as in the last notebook
Step4: That is considerably slower than odeint. It is clear that the Python wrapper (in SciPy) is the bottleneck, especially since using Vode and choosing BDF for this stiff problem avoids the method swaps that LSODA attempts.
Step5: Just to see that everything looks alright | Python Code:
import json
import numpy as np
Explanation: In this notebook we will use the same Cython code as in the last notebook. However, this time we will use the Vode integrator from ODEPACK (available in SciPy in scipy.integrate.ode). The reason for this is that it will be a fairer comparison against our upcoming example using CVode.
End of explanation
# %load ../scipy2017codegen/odesys_vode.py
import numpy as np
from scipy.integrate import ode
from scipy2017codegen.odesys import ODEsys
class VODEsys(ODEsys):
default_integrator = 'vode'
def integrate_vode(self, tout, y0, params=(), method='bdf', rtol=1e-8, atol=1e-8, **kwargs):
def f(t, y, *args):
f.ncall +=1
return np.asarray(self.f_eval(y, t, *args))
f.ncall = 0
def j(t, y, *args):
j.ncall += 1
return np.asarray(self.j_eval(y, t, *args))
j.ncall = 0
r = ode(f, j)
r.set_integrator('vode', method=method, rtol=rtol, atol=atol, **kwargs)
if params:
r.set_f_params(params)
r.set_jac_params(params)
yout = np.zeros((len(tout), len(y0)))
yout[0, :] = y0
r.set_initial_value(yout[0, :], tout[0])
for idx in range(1, len(tout)):
r.integrate(tout[idx])
assert r.successful(), "Integration failed"
yout[idx, :] = r.y
return yout, {'num_rhs': f.ncall, 'num_dls_jac_evals': j.ncall}
Explanation: Subclassing ODEsys and providing a new method using scipy.integrate.ode:
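The wrapper pattern above can be exercised on a trivial standalone problem, exponential decay, using the same scipy.integrate.ode calls (this toy example is independent of the ODEsys classes):

```python
import numpy as np
from scipy.integrate import ode

def f(t, y):      # dy/dt = -y
    return -y

def jac(t, y):    # Jacobian; a 1x1 matrix here
    return [[-1.0]]

r = ode(f, jac)
r.set_integrator('vode', method='bdf', rtol=1e-10, atol=1e-12)
r.set_initial_value([1.0], 0.0)
r.integrate(1.0)
print(r.successful(), r.y[0])  # True, close to exp(-1)
```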
End of explanation
from scipy2017codegen.odesys_cython import CythonODEsys
class CythonVODEsys(VODEsys, CythonODEsys):
pass
Explanation: Creating a new mixin class:
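The mixin combination relies on Python's method resolution order; a toy standalone illustration of the same pattern (hypothetical classes, not the scipy2017codegen ones):

```python
class Base:
    def integrate(self):
        return 'base'

class VodeMixin(Base):
    def integrate(self):
        # cooperative call continues along the MRO
        return 'vode:' + super().integrate()

class CythonMixin(Base):
    def eval_speed(self):
        return 'fast'

class Combined(VodeMixin, CythonMixin):
    pass

c = Combined()
print(c.integrate(), c.eval_speed())  # vode:base fast
```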
End of explanation
from scipy2017codegen.chem import mk_rsys
watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))
watrad = mk_rsys(ODEsys, **watrad_data)
tout = np.logspace(-6, 3, 200) # close to one hour of operation
c0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}
y0 = [c0.get(symb.name, 0) for symb in watrad.y]
cython_sys = mk_rsys(CythonVODEsys, **watrad_data)
%timeit cython_sys.integrate(tout, y0)
Explanation: Same procedure as in the last notebook:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: That is considerably slower than odeint. It is clear that the Python wrapper (in SciPy) is the bottleneck, especially since using Vode and choosing BDF for this stiff problem avoids the method swaps that LSODA attempts.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
cython_sys.plot_result(tout, *cython_sys.integrate_vode(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
Explanation: Just to see that everything looks alright:
End of explanation |
7,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
See environment_setup.README (below) for instructions about the use of the DC3_plots_NALMA script. It is a version of the script used to process the DC3 dataset as in Barth et al. (2015, BAMS) and Bruning and Thomas (2015, JGR).
The flash sorting infrastructure is modular. This script uses the <a href="http
Step1: Case-specific time series data
Locations of all flashes contributing to the analyses
Lets you identify that only one cell was tracked
Step2: Time-height plot, separated by IC and CG.
There are accompanying ASCII data files with the raw data. Uses the technique in Bruning and Thomas (2015, JGR)
Step3: Time series of flash moments
There are accompanying ASCII data files with the raw data. Uses the technique in Bruning and Thomas (2015, JGR).
Step4: Plots of each minute in the gridded data
LMA source density, flash extent density, flash initiation density, and average flash area.
Step5: Each of the grid type folders contains a CSV file with statistics of the pixels making up the image.
Step6: Flash energy spectra
Plots of the flash energy spectra as defined in Bruning and MacGorman (2013, JAS). A 5/3 power law reference line is plotted. | Python Code:
%%bash
cat /data/GLM-wkshp/flashsort/environment_setup.README
# Links to representative PDFs.
from IPython.display import display, HTML, Image
class PDF(object):
def __init__(self, filename):
self.filename = filename
def _repr_pdf_(self):
return open(self.filename, 'rb').read()
base_path = '/data/GLM-wkshp/flashsort/figures-length/IOPsupercell18-AL-20090410-boundingbox-thresh-0.15_dist-3000.0_pts-10/'
Explanation: See environment_setup.README (below) for instructions about the use of the DC3_plots_NALMA script. It is a version of the script used to process the DC3 dataset as in Barth et al. (2015, BAMS) and Bruning and Thomas (2015, JGR).
The flash sorting infrastructure is modular. This script uses the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html#sklearn.cluster.DBSCAN">DBSCAN algorithm </a> as implemented in the <a href="http://scikit-learn.org">scikit-learn</a> machine-learning library. In order to manage the $N^2$ efficiency of the underlying DBSCAN implementation, data are clustered in pairs of thresh_duration chunks.
The script is configurable in a few places.
- base_sort_dir sets the path where
- center_ID chooses a network center. The centers are defined in the centers dictionary. The ID is used later when constructing output filenames, too.
- The params dictionary configures the flash sorting algorithm. Of particular importance are the following.
- stations: sets the (min, max) number of stations that must participate in each solution for it to count. Max should be larger than the number of stations. Min should be six or seven, depending on the number of stations.
- chi2: sets the (min, max) chi-squared value. The minimum should be zero, while a good maximum to start with is 1.0.
- distance: maximum distance between a source and its closest neighbor before a new flash is started
- thresh_critical_time: maximum temporal separation between a source and its closest neighbor before a new flash is started
- thresh_duration: All flashes should be last less than or equal to this number of seconds. All flashes of duration < thresh_duration are guaranteed to remain clustered. An occasional lucky flash of duration = 2 * thresh_duration is possible.
The script is broken into three sections.
- Run the flash sorting, which creates HDF5 data files with VHF source data, their flash IDs, and a matching flash data table.
- Grab the flash-sorted files and create CF-compliant NetCDF grids
- Grab the grids and create PDF images of each grid
The grid spacing, boundaries, and frame intervals are configured at the begining of the gridding section of the script. This script creates regularly-spaced lat/lon grids, with the center grid cell size calculated to match the specified dx_km and dy_km. It is also possible to grid directly in a map projection of choice by changing proj_name, as well as x_name and y_name in the call to make_plot. For instance, a geostationary projection can be obtained with proj='geos' as described in the documentation for the proj4 coordinate system library.
The PDF images are created as small-multiple plots, with the number of columns given by n_cols at the beginning of the plotting section.
An example of reading and working with the resulting data files is found in the "Reading the flash-sorted files.ipynb"
As described below, additional scripts perform follow-on analysis.
- Assigning NLDN strokes to the best-matching flash
- Using a storm cell or storm region polygon to subset some flashes from the data files.
- Creating time series plots of moments of the flash size distribution
- Creating ASCII files of flash size and rate statistics
The IOP bounding box file included here is a rectangular lat/lon box, but the underlying code works with arbitrary polygons. Adapting the existing code to polygons is mostly a matter of reading in polygon vertices and sending its vertices instead of those for a rectangle.
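The clustering idea described above can be sketched with scikit-learn's DBSCAN directly: scale time so that the temporal threshold maps onto the spatial one, then cluster in the combined space. The thresholds below echo the ones in the figure path (0.15 s, 3000 m), but the data, the scaling scheme, and min_samples=1 are simplifications for illustration, not lmatools' actual implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# hypothetical VHF sources: columns are x [km], y [km], t [s]
sources = np.array([[0.0, 0.0, 0.00],
                    [0.5, 0.2, 0.05],    # near the first source in space-time
                    [40.0, 35.0, 0.02],  # a separate flash
                    [40.3, 35.1, 0.10]])

# with a 3 km distance threshold and a 0.15 s time threshold,
# 1 s of separation counts as 3 / 0.15 = 20 km
scaled = sources * np.array([1.0, 1.0, 3.0 / 0.15])
labels = DBSCAN(eps=3.0, min_samples=1).fit_predict(scaled)
print(labels)  # two flashes: [0 0 1 1]
```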
End of explanation
Image(base_path+'flashes.png')
Explanation: Case-specific time series data
Locations of all flashes contributing to the analyses
Lets you identify that only one cell was tracked
End of explanation
PDF(base_path+'D-1.7_b-0.25_length-profiles_CG.pdf')
PDF(base_path+'D-1.7_b-0.25_length-profiles_IC.pdf')
Explanation: Time-height plot, separated by IC and CG.
There are accompanying ASCII data files with the raw data. Uses the technique in Bruning and Thomas (2015, JGR)
End of explanation
PDF(base_path+'moment-energy-timeseries.pdf')
import pandas as pd
pd.read_csv(base_path+'../IOPsupercell18-output.flash_stats.csv')
Explanation: Time series of flash moments
There are accompanying ASCII data files with the raw data. Uses the technique in Bruning and Thomas (2015, JGR).
End of explanation
Image(base_path+'/grids_lma_source/lma_source_20090410_185300.png')
Image(base_path+'/grids_flash_extent/flash_extent_20090410_185300.png')
Image(base_path+'/grids_flash_initiation/flash_initiation_20090410_185300.png')
Image(base_path+'/grids_flash_footprint/flash_footprint_20090410_185300.png')
Explanation: Plots of each minute in the gridded data
LMA source density, flash extent density, flash initiation density, and average flash area.
End of explanation
pd.read_csv(base_path+'/grids_flash_extent/flash_extent_20090410.csv')
Explanation: Each of the grid type folders contains a CSV file with statistics of the pixels making up the image.
End of explanation
PDF(base_path+'LYLOUT_090410_180000_3600-energy.pdf')
Explanation: Flash energy spectra
Plots of the flash energy spectra as defined in Bruning and MacGorman (2013, JAS). A 5/3 power law reference line is plotted.
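A 5/3 reference is just a straight line of slope -5/3 in log-log space; a standalone sketch of how such a slope can be recovered from (synthetic) spectral data:

```python
import numpy as np

# synthetic spectrum following E(l) ~ l^(-5/3); scales here are hypothetical
length_scale = np.logspace(0, 2, 50)
energy = 10.0 * length_scale ** (-5.0 / 3.0)

# slope of a linear fit in log-log space
slope, intercept = np.polyfit(np.log10(length_scale), np.log10(energy), 1)
print(slope)  # close to -5/3
```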
End of explanation |
7,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation with RBF
$$
f( x) =\sum ^{P}_{p=1} a_{p} .R_{p} +b
$$
$$
R_{p} = e^{-\frac{1}{2\sigma ^{2}} .\parallel ( X_{i}) -( X_{p}) \parallel ^{2}}
$$
$$
\sigma =\frac{P_{max} -P_{min}}{\sqrt{2P}}
$$
$$
\sigma =\frac{4-2}{\sqrt{2 \cdot 2}}
$$
$$
\sigma ^{2} =1
$$
$$
C_{1}=2
$$
$$
C_{2}=4
$$
$$\displaystyle \frac{1}{[ R]} =\left([ R]^{t} .[ R]\right)^{-1} .[ R]^{t}$$
$$
\displaystyle \begin{bmatrix}
a
\end{bmatrix} =\frac{1}{[ R]} \ \begin{bmatrix}
A
\end{bmatrix}
$$
Step1:
Step2:
Step3:
Step4: XOR input | Python Code:
import numpy as np
from numpy import sqrt, exp
from numpy.linalg import inv
from functools import reduce
import matplotlib.pyplot as plt

def rbf(inp, out, center):
    def euclidean_norm(x1, x2):
        return sqrt(np.sum((x1 - x2) ** 2))
    def gaussian(x, c):
        # R_p = exp(-||x - c||^2 / (2 sigma^2)) with sigma^2 = 1, as derived above
        return exp(-0.5 * pow(euclidean_norm(x, c), 2))
    # design matrix: one column per center plus a constant column for the bias b
    R = np.ones((len(inp), len(center) + 1))
    for i, iv in enumerate(inp):
        for j, jv in enumerate(center):
            R[i, j] = gaussian(iv, jv)
    # pseudoinverse solution: a = (R^t R)^-1 R^t out
    Rt = R.transpose()
    RtR = Rt.dot(R)
    iRtR = inv(RtR)
    oneR = iRtR.dot(Rt)
    a = oneR.dot(out)
    def rbf_interpolation(x):
        phi = np.ones(len(center) + 1)  # last entry stays 1 for the bias term
        for i, iv in enumerate(center):
            phi[i] = gaussian(x, iv)
        y = a * phi
        return reduce(lambda u, v: u + v, y)
    return rbf_interpolation
Explanation: Interpolation with RBF
$$
f(x) = \sum_{p=1}^{P} a_{p} \cdot R_{p} + b
$$
$$
R_{p} = e^{-\frac{1}{2\sigma^{2}} \parallel X_{i} - X_{p} \parallel^{2}}
$$
$$
\sigma = \frac{P_{max} - P_{min}}{\sqrt{2P}}
$$
$$
\sigma = \frac{4 - 2}{\sqrt{2 \cdot 2}}
$$
$$
\sigma^{2} = 1
$$
$$
C_{1} = 2
$$
$$
C_{2} = 4
$$
$$\displaystyle \frac{1}{[R]} = \left([R]^{t} \cdot [R]\right)^{-1} \cdot [R]^{t}$$
$$
\displaystyle \begin{bmatrix}
a
\end{bmatrix} = \frac{1}{[R]} \ \begin{bmatrix}
A
\end{bmatrix}
$$
End of explanation
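As a sanity check, the least-squares solve derived above can be written directly in NumPy for the 1D example (a sketch assuming sigma^2 = 1 as computed; `np.linalg.pinv` plays the role of the explicit (R^t R)^{-1} R^t):

```python
import numpy as np

inp = np.array([2.0, 3.0, 4.0])      # sample points X_i
out = np.array([3.0, 6.0, 5.0])      # target values A
centers = np.array([2.0, 4.0])       # C_1 = 2, C_2 = 4

# One Gaussian column per center (sigma^2 = 1), plus a constant column for the bias b
R = np.exp(-0.5 * (inp[:, None] - centers[None, :]) ** 2)
R = np.hstack([R, np.ones((len(inp), 1))])

# [a] = (R^t R)^{-1} R^t [A]; pinv solves the same least-squares problem
a = np.linalg.pinv(R) @ out
fitted = R @ a   # with 3 samples and 3 unknowns the fit reproduces the data exactly
```

Here R is 3 x 3 and invertible, so the "least-squares" weights interpolate the data exactly.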
inp = np.array([2, 3, 4])
out = np.array([3, 6, 5])
center = np.array([2, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(0, 10, 100)
output_test = list(map(rbf_instance, input_test))
plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("rbf1.svg")
plt.show()
Explanation:
End of explanation
inp = np.array([2, 3, 4, 5])
out = np.array([3, 1, 5, -2])
center = np.array([2, 3, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(-5,10,100)
output_test = list(map(rbf_instance, input_test))
# plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("interpolate1.svg")
plt.show()
Explanation:
End of explanation
inp = np.array([2, 3, 4, 5])
out = np.array([3, 1, 5, -2])
center = np.array([2, 3, 4])
rbf_instance = rbf(inp, out, center)
input_test = np.linspace(-5, 15, 100)
output_test = list(map(rbf_instance, input_test))
plt.plot(input_test, output_test)
plt.plot(inp, out, 'ro')
plt.ylabel('expected vs predicted')
plt.savefig("rbf3.svg")
plt.show()
Explanation:
End of explanation
inp = np.array([np.array([1,1]), np.array([0,1]), np.array([0,0]), np.array([1,0])])
out = np.array([ 0, 1, 0, 1])
center = np.array([ np.array([1,1]), np.array([0,0])])
rbf_instance = rbf(inp, out, center)
inp_test = np.array([np.array([1,1]),
np.array([0,1]),
np.array([0,0]),
np.array([1,0])])
output = list(map(rbf_instance, inp_test))  # list() so we can take len() and index it below
def colorize(output):
c = [None]* len(output)
for i, iv in enumerate(output):
if (output[i] > 0):
c[i] = 'blue'
else:
c[i] = 'red'
return c
inp_x = [1, 0, 0, 1]
inp_y = [1, 1, 0, 0]
c = colorize(output)
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the 3d projection
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Evaluate the fitted RBF on a grid to show the interpolated surface
xx, yy = np.meshgrid(np.arange(0, 1.02, 0.02), np.arange(0, 1.02, 0.02))
zz = np.array([[rbf_instance(np.array([xv, yv])) for xv, yv in zip(rx, ry)]
               for rx, ry in zip(xx, yy)])
ax.plot_surface(xx, yy, zz, alpha=0.3)
ax.scatter(inp_x, inp_y, output, c=c, depthshade=False)
plt.savefig("rbf_xor.svg")
plt.show()
Explanation: XOR input
End of explanation |
7,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This IPython notebook explains a basic workflow for matching two tables using py_entitymatching. Our goal is to come up with a workflow to match the DBLP and ACM datasets. Specifically, we want to achieve precision greater than 95% and recall greater than 90%. The datasets contain information about conference papers published in top database conferences.
First, we need to import py_entitymatching package and other libraries as follows
Step1: Matching two tables typically consists of the following three steps
Step2: Block Tables To Get Candidate Set
Before we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This would reduce the number of tuple pairs considered for matching.
py_entitymatching provides four different blockers
Step3: Match Tuple Pairs in Candidate Set
In this step, we want to match the tuple pairs in the candidate set. Specifically, we use a learning-based method for matching purposes.
This typically involves the following four steps
Step4: Next, we label the sampled candidate set. Specifically, we would enter 1 for a match and 0 for a non-match.
Step5: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
Step6: Splitting the labeled data into development and evaluation set
In this step, we split the labeled data into two sets
Step7: Selecting the best learning-based matcher
Selecting the best learning-based matcher typically involves the following steps
Step8: Creating Features
Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
Step9: Converting the Development Set to Feature Vectors
Step10: Selecting the Best Matcher Using Cross-validation
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five-fold cross-validation and the 'precision' and 'recall' metrics to select the best matcher.
Step11: We observe that the best matcher (RF) is getting us to the precision and recall that we expect (i.e P > 95% and R > 90%). So, we select this matcher and now we can proceed on to evaluating the best matcher on the unseen data (the evaluation set).
Evaluating the Matching Output
Evaluating the matching outputs for the evaluation set typically involves the following four steps
Step12: Training the Selected Matcher
Now, we train the matcher using all of the feature vectors from the development set. For the purposes of this guide we use random forest as the selected matcher.
Step13: Predicting the Matches
Next, we predict the matches for the evaluation set (using the feature vectors extracted from it).
Step14: Evaluating the Matching Output
Finally, we evaluate the accuracy of predicted outputs | Python Code:
import sys
sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')
import py_entitymatching as em
import pandas as pd
import os
# Display the versions
print('python version: ' + sys.version )
print('pandas version: ' + pd.__version__ )
print('magellan version: ' + em.__version__ )
Explanation: Introduction
This IPython notebook explains a basic workflow for matching two tables using py_entitymatching. Our goal is to come up with a workflow to match the DBLP and ACM datasets. Specifically, we want to achieve precision greater than 95% and recall greater than 90%. The datasets contain information about conference papers published in top database conferences.
First, we need to import the py_entitymatching package and other libraries as follows:
End of explanation
# Get the paths
path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'dblp_demo.csv'
path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'acm_demo.csv'
# Load csv files as dataframes and set the key attribute in the dataframe
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
print('Number of tuples in A: ' + str(len(A)))
print('Number of tuples in B: ' + str(len(B)))
print('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B)))
A.head(2)
B.head(2)
# Display the key attributes of table A and B.
em.get_key(A), em.get_key(B)
Explanation: Matching two tables typically consists of the following three steps:
1. Reading the input tables
2. Blocking the input tables to get a candidate set
3. Matching the tuple pairs in the candidate set
Read Input Tables
We begin by loading the input tables. For the purpose of this guide, we use the datasets that are included with the package.
End of explanation
# Blocking plan
# A, B -- AttrEquivalence blocker [year] --------------------|
# |---> candidate set
# A, B -- Overlap blocker [title]---------------------------|
# Create attribute equivalence blocker
ab = em.AttrEquivalenceBlocker()
# Block tables using 'year' attribute : same year include in candidate set
C1 = ab.block_tables(A, B, 'paper year', 'paper year',
l_output_attrs=['title', 'authors', 'paper year'],
r_output_attrs=['title', 'authors', 'paper year']
)
len(C1)
# Initialize overlap blocker
ob = em.OverlapBlocker()
# Block over title attribute
C2 = ob.block_tables(A, B, 'title', 'title', show_progress=False, overlap_size=2)
len(C2)
# Combine the outputs from attr. equivalence blocker and overlap blocker
C = em.combine_blocker_outputs_via_union([C1, C2])
len(C)
Explanation: Block Tables To Get Candidate Set
Before we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This would reduce the number of tuple pairs considered for matching.
py_entitymatching provides four different blockers: (1) attribute equivalence, (2) overlap, (3) rule-based, and (4) black-box. The user can mix and match these blockers to form a blocking sequence applied to input tables.
For the matching problem at hand, we know that two conference papers published in different years cannot match, or if there are errors in the year then there should be at least some overlap between the paper titles. So we decide to apply the following blocking plan:
End of explanation
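Conceptually, the union of the two blockers keeps a pair if the years match or the titles share enough tokens. A toy sketch of that rule, independent of py_entitymatching (the ids, titles, and years below are made up for illustration):

```python
def block(left, right, overlap_size=2):
    # Keep a pair when the years match OR the titles share >= overlap_size tokens
    cand = []
    for a in left:
        for b in right:
            shared = set(a['title'].lower().split()) & set(b['title'].lower().split())
            if a['year'] == b['year'] or len(shared) >= overlap_size:
                cand.append((a['id'], b['id']))
    return cand

# Hypothetical toy rows; the real tables come from read_csv_metadata above
A_toy = [{'id': 'a1', 'title': 'Efficient query processing', 'year': 1999}]
B_toy = [{'id': 'b1', 'title': 'Query processing engines', 'year': 2001},
         {'id': 'b2', 'title': 'Data mining', 'year': 1999}]
cand = block(A_toy, B_toy)
```

The first pair survives on title overlap, the second on year equality, mirroring the blocker union.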
# Sample candidate set
S = em.sample_table(C, 450)
Explanation: Match Tuple Pairs in Candidate Set
In this step, we want to match the tuple pairs in the candidate set. Specifically, we use a learning-based method for matching purposes.
This typically involves the following four steps:
Sampling and labeling the candidate set
Splitting the labeled data into development and evaluation set
Selecting the best learning based matcher using the development set
Evaluating the selected matcher using the evaluation set
Sampling and labeling the candidate set
First, we randomly sample 450 tuple pairs for labeling purposes.
End of explanation
# Label S
#G = em.label_table(S, 'label')
Explanation: Next, we label the sampled candidate set. Specifically, we would enter 1 for a match and 0 for a non-match.
End of explanation
# Load the pre-labeled data
path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'labeled_data_demo.csv'
G = em.read_csv_metadata(path_G,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(G)
Explanation: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
End of explanation
# Split S into development set (I) and evaluation set (J)
IJ = em.split_train_test(G, train_proportion=0.7, random_state=0)
I = IJ['train']
J = IJ['test']
Explanation: Splitting the labeled data into development and evaluation set
In this step, we split the labeled data into two sets: development (I) and evaluation (J). Specifically, the development set is used to come up with the best learning-based matcher, and the evaluation set is used to evaluate the selected matcher on unseen data.
End of explanation
# Create a set of ML-matchers
dt = em.DTMatcher(name='DecisionTree', random_state=0)
svm = em.SVMMatcher(name='SVM', random_state=0)
rf = em.RFMatcher(name='RF', random_state=0)
lg = em.LogRegMatcher(name='LogReg', random_state=0)
ln = em.LinRegMatcher(name='LinReg')
Explanation: Selecting the best learning-based matcher
Selecting the best learning-based matcher typically involves the following steps:
Creating a set of learning-based matchers
Creating features
Converting the development set into feature vectors
Selecting the best learning-based matcher using k-fold cross validation
Creating a Set of Learning-based Matchers
End of explanation
# Generate features
feature_table = em.get_features_for_matching(A, B, validate_inferred_attr_types=False)
# List the names of the features generated
feature_table['feature_name']
Explanation: Creating Features
Next, we need to create a set of features for the development set. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
End of explanation
# Convert the I into a set of feature vectors using F
H = em.extract_feature_vecs(I,
feature_table=feature_table,
attrs_after='label',
show_progress=False)
# Display first few rows
H.head(3)
Explanation: Converting the Development Set to Feature Vectors
End of explanation
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric_to_select_matcher='precision', random_state=0)
result['cv_stats']
# Select the best ML matcher using CV
result = em.select_matcher([dt, rf, svm, ln, lg], table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
k=5,
target_attr='label', metric_to_select_matcher='recall', random_state=0)
result['cv_stats']
Explanation: Selecting the Best Matcher Using Cross-validation
Now, we select the best matcher using k-fold cross-validation. For the purposes of this guide, we use five-fold cross-validation and the 'precision' and 'recall' metrics to select the best matcher.
End of explanation
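select_matcher performs the cross-validation internally; stripped of the library, the k-fold split is just a shuffled partition of the labeled rows into k held-out folds (a generic sketch, not py_entitymatching code):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    # Shuffle the row indices and cut them into k (nearly) equal folds;
    # each fold serves once as the held-out evaluation fold.
    rng = np.random.RandomState(seed)
    return np.array_split(rng.permutation(n), k)

folds = kfold_indices(450, 5)   # 450 labeled pairs, five folds
```

Each matcher is trained on four folds and scored on the fifth, and the reported metric is averaged over the five rotations.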
# Convert J into a set of feature vectors using feature table
L = em.extract_feature_vecs(J, feature_table=feature_table,
attrs_after='label', show_progress=False)
Explanation: We observe that the best matcher (RF) is getting us to the precision and recall that we expect (i.e P > 95% and R > 90%). So, we select this matcher and now we can proceed on to evaluating the best matcher on the unseen data (the evaluation set).
Evaluating the Matching Output
Evaluating the matching outputs for the evaluation set typically involves the following four steps:
1. Converting the evaluation set to feature vectors
2. Training matcher using the feature vectors extracted from the development set
3. Predicting the evaluation set using the trained matcher
4. Evaluating the predicted matches
Converting the Evaluation Set to Feature Vectors
As before, we convert the evaluation set to feature vectors (using the feature table).
End of explanation
# Train using feature vectors from I
dt.fit(table=H,
exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
target_attr='label')
Explanation: Training the Selected Matcher
Now, we train the matcher using all of the feature vectors from the development set. For the purposes of this guide we use random forest as the selected matcher.
End of explanation
# Predict on L
predictions = dt.predict(table=L, exclude_attrs=['_id', 'ltable_id', 'rtable_id', 'label'],
append=True, target_attr='predicted', inplace=False)
Explanation: Predicting the Matches
Next, we predict the matches for the evaluation set (using the feature vectors extracted from it).
End of explanation
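Before running the evaluation cell, it helps to recall what eval_matches reports: precision, recall, and F1 reduce to simple counts over the labeled pairs. A hand-rolled sketch for intuition only (not the library's implementation):

```python
def prf(labels, preds):
    # labels/preds are 0/1 sequences over the same candidate pairs
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy labels/predictions for illustration
p, r, f1 = prf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

The project goal (P > 95%, R > 90%) is stated in exactly these terms.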
# Evaluate the predictions
eval_result = em.eval_matches(predictions, 'label', 'predicted')
em.print_eval_summary(eval_result)
Explanation: Evaluating the Matching Output
Finally, we evaluate the accuracy of predicted outputs
End of explanation |
7,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating gallery images for annotation review
Overview
Step1: Connect girder client and set parameters
A csv file like the one in
histomicstk/annotations_and_masks/tests/test_files/sample_GTcodes_v2.csv is needed to define what group each pixel value corresponds to in the mask image, to define the overlay order of various annotation groups, and which groups are considered to be ROIs. Note that the term "group" here comes from the annotation model where each group represents a class like "tumor" or "necrosis" and is associated with a an annotation style.
Please refer to "Converting annotations to semantic segmentation mask images" for more details.
Step2: Let's take a look at some of the inputs
Step3: Retrieve ROIs from server and generate visualizations
The first step is to retrieve annotations and images of the ROIs from the server and store these locally in the
temporary directory we created. The ROI images and annotations will be combined to form new images that embed the annotations and generate visualizations. Later, another method will be used to combine these ROIs into a gallery image and to post it to DSA.
Explore the methods
Step4: This calls the method get_all_rois_from_slide_v2() to get the rois for each
individual slide. We don't need the masks here, only the contours and
visualization.
Step5: We will call get_all_rois_from_folder_v2() with the callback function
_plot_rapid_review_vis() to create a "combined" side-by-side visualization
of the annotations and the RGB image, along with a lower magnification RGB image that provides context for the ROI.
Step6: Generate and examine the visualizations
Execute this function using the girder client and folder id.
Step7: Assemble visualizations into gallery image and post to server
After the visualizations are created we need to assemble them into larger gallery images for review and post these back to the server.
This is the method we will be using
Step8: Now you can go to the girder folder where galleries will be visualized
on HistomicsUI.
Cleanup
The contents of the temporary directory are no longer needed after posting to the server. | Python Code:
import os
import tempfile
import shutil
from imageio import imread
from pandas import read_csv
import girder_client
from histomicstk.annotations_and_masks.review_gallery import \
get_all_rois_from_folder_v2, get_all_rois_from_slide_v2, \
_plot_rapid_review_vis, create_review_galleries
import matplotlib.pylab as plt
%matplotlib inline
Explanation: Creating gallery images for annotation review
Overview:
Annotation studies often focus on small regions of interest (ROIs) that are orders of magnitude smaller than whole slide images and sparsely distributed over many slides. Reviewing these annotations involves significant time spent navigating from one ROI to another within and across slides. To aid in review we developed tools to generate mosaic gallery images that condense these ROIs into dense multiresolution images that can be viewed in HistomicsUI. These gallery images speed up the review process by minimizing navigation and the need for toggling annotations.
In this minimal example, we show how 29 ROIs from two slides are parsed into three
gallery images for pathologist review (a typical project may contain 100s of ROIs). This video demonstrates the gallery image functionality, and
the code below shows the gallery image creation process.
Where to look:
|_histomicstk/
|_annotations_and_masks/
|_review_gallery.py
|_tests/
|_test_review_gallery.py
End of explanation
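create_review_galleries handles the tiling internally; the core gridding idea (pasting equally sized tiles onto a padded canvas, row by row) can be sketched independently of HistomicsTK:

```python
import numpy as np

def make_mosaic(tiles, per_row, pad=2):
    # Stack equally sized RGB tiles into a padded grid on a white canvas
    th, tw = tiles[0].shape[:2]
    rows = int(np.ceil(len(tiles) / per_row))
    canvas = np.full((rows * (th + pad) + pad, per_row * (tw + pad) + pad, 3),
                     255, dtype=np.uint8)
    for k, tile in enumerate(tiles):
        r, c = divmod(k, per_row)
        y, x = pad + r * (th + pad), pad + c * (tw + pad)
        canvas[y:y + th, x:x + tw] = tile
    return canvas

# Three dummy black tiles arranged two per row
demo = make_mosaic([np.zeros((4, 6, 3), np.uint8)] * 3, per_row=2)
```

The tiles_per_row / tiles_per_column and padding parameters used later map directly onto this idea.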
URL = 'http://candygram.neurology.emory.edu:8080/'
APIURL = URL + 'api/v1/'
# source folder containing slides with annotated ROIs
SAMPLE_FOLDER_ID = '5e2a2da8ddda5f83986d18a2'
# This is the girder folder where galleries will be visualized
POST_FOLDERID = "5e3ce440ddda5f839875b33e"
# Connect to an authenticated girder API. You
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(interactive=True) # need this to post!
# gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# GT codes dict for parsing into label mask
GTCODE_PATH = os.path.join(
'/home/mtageld/Desktop/HistomicsTK/histomicstk/annotations_and_masks/',
'tests/test_files', 'sample_GTcodes_v2.csv')
GTCodes_dict = read_csv(GTCODE_PATH)
GTCodes_dict.index = GTCodes_dict.loc[:, 'group']
GTCodes_dict = GTCodes_dict.to_dict(orient='index')
# just a temp directory to save masks
BASE_SAVEPATH = tempfile.mkdtemp()
SAVEPATHS = {
'contours': os.path.join(BASE_SAVEPATH, 'contours'),
'rgb': os.path.join(BASE_SAVEPATH, 'rgbs'),
'visualization': os.path.join(BASE_SAVEPATH, 'vis'),
}
for _, savepath in SAVEPATHS.items():
os.mkdir(savepath)
# where to save gallery
combinedvis_savepath = os.path.join(BASE_SAVEPATH, 'combinedvis')
os.mkdir(combinedvis_savepath)
Explanation: Connect girder client and set parameters
A csv file like the one in
histomicstk/annotations_and_masks/tests/test_files/sample_GTcodes_v2.csv is needed to define what group each pixel value corresponds to in the mask image, to define the overlay order of various annotation groups, and which groups are considered to be ROIs. Note that the term "group" here comes from the annotation model where each group represents a class like "tumor" or "necrosis" and is associated with a an annotation style.
Please refer to "Converting annotations to semantic segmentation mask images" for more details.
End of explanation
print(list(GTCodes_dict.keys()))
print(GTCodes_dict['tumor'])
print(SAVEPATHS)
print('combinedvis_savepath:', combinedvis_savepath)
Explanation: Let's take a look at some of the inputs
End of explanation
print(get_all_rois_from_folder_v2.__doc__)
Explanation: Retrieve ROIs from server and generate visualizations
The first step is to retrieve annotations and images of the ROIs from the server and store these locally in the
temporary directory we created. The ROI images and annotations will be combined to form new images that embed the annotations and generate visualizations. Later, another method will be used to combine these ROIs into a gallery image and to post it to DSA.
Explore the methods
End of explanation
print(get_all_rois_from_slide_v2.__doc__)
Explanation: This calls the method get_all_rois_from_slide_v2() to get the rois for each
individual slide. We don't need the masks here, only the contours and
visualization.
End of explanation
print(_plot_rapid_review_vis.__doc__)
# params for getting all rois for slide
get_all_rois_kwargs = {
'GTCodes_dict': GTCodes_dict,
'save_directories': SAVEPATHS,
'annotations_to_contours_kwargs': {
'MPP': 0.2,
'linewidth': 0.2,
'get_rgb': True,
'get_visualization': True,
},
'verbose': False,
'get_mask': False,
# we use this callback so that we have results compatible
# of being used as input for create_review_galleries()
'callback': _plot_rapid_review_vis,
'callback_kwargs': {
'combinedvis_savepath': combinedvis_savepath,
'zoomout': 4,
},
}
Explanation: We will call get_all_rois_from_folder_v2() with the callback function
_plot_rapid_review_vis() to create a "combined" side-by-side visualization
of the annotations and the RGB image, along with a lower magnification RGB image that provides context for the ROI.
End of explanation
# Get al rois to prep for gallery
get_all_rois_from_folder_v2(
gc=gc, folderid=SAMPLE_FOLDER_ID,
get_all_rois_kwargs=get_all_rois_kwargs, monitor='test')
all_fovs = os.listdir(combinedvis_savepath)
for i in range(3):
cvis = imread(os.path.join(combinedvis_savepath, all_fovs[i]))
plt.imshow(cvis)
plt.title(all_fovs[i], fontsize=6)
plt.show()
Explanation: Generate and examine the visualizations
Execute this function using the girder client and folder id.
End of explanation
print(create_review_galleries.__doc__)
create_review_galleries_kwargs = {
'tilepath_base': combinedvis_savepath,
'upload_results': True,
'gc': gc,
'url': URL,
'gallery_folderid': POST_FOLDERID,
'gallery_savepath': None,
'padding': 25,
'tiles_per_row': 2,
'tiles_per_column': 5,
}
# create (+/- post) review gallery
resps = create_review_galleries(**create_review_galleries_kwargs)
Explanation: Assemble visualizations into gallery image and post to server
After the visualizations are created we need to assemble them into larger gallery images for review and post these back to the server.
This is the method we will be using:
End of explanation
shutil.rmtree(BASE_SAVEPATH)
Explanation: Now you can go to the girder folder where galleries will be visualized
on HistomicsUI.
Cleanup
The contents of the temporary directory are no longer needed after posting to the server.
End of explanation |
7,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D Poisson Problem
We solve the following problem
Step1: Test set 2
Step2: Test set 3
Step3: Test set 4
Step4: Test set 5
Step5: Test set 6
Step6: 3D Poisson Problem
Weak Scaling Test
<font color='red'>Number of GPU
Step7: Strong Scaling Test
<font color='red'>Number of GPU | Python Code:
import numpy
from matplotlib import pyplot

omg = numpy.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1])
tPCG = numpy.array([5.72, 4.54, 3.78, 3.14, 2.71, 2.38, 2.06, 1.95, 2.49, 10.15])
tPCGF = numpy.array([2.48, 2.14, 2.03, 2.6, 10.7])
tPBICGSTAB = numpy.array([2.79, 2.58, 2.48, 3, 12.1])
pyplot.plot(omg, tPCG, label="PCG")
pyplot.plot(omg[5:], tPCGF, label="PCGF")
pyplot.plot(omg[5:], tPBICGSTAB, label="PBICGSTAB")
pyplot.xlabel("Relaxation factor")
pyplot.ylabel("Time for solve")
pyplot.legend(loc=0);
Explanation: 2D Poisson Problem
We solve the following problem:
\begin{equation}
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = -8\pi^2\cos{(2\pi x)}\cos{(2\pi y)}
\end{equation}
with boundary conditions:
\begin{equation}
\left.\frac{\partial u}{\partial x}\right|_{x=0}=\left.\frac{\partial u}{\partial x}\right|_{x=1}=\left.\frac{\partial u}{\partial y}\right|_{y=0}=\left.\frac{\partial u}{\partial y}\right|_{y=1}=0
\end{equation}
The exact solution is
\begin{equation}
u(x, y) = \cos{(2\pi x)}\cos{(2\pi y)}
\end{equation}
Test set 1:
Number of GPU: 1 (K40)
Machine: Theo
<font color='red'>Top solver: </font>
<font color='red'>Preconditioned CG (PCG)</font>
<font color='red'>Flexible Preconditioned CG (PCGF)</font>
<font color='red'>Preconditioned Stable BiCG (PBICGSTAB)</font>
Tolerance: absolute residual reach $10^{-12}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
Cycle: V
Pre-sweep: 1
Post-Sweep: 1
Coarsest sweep: 1
Maximum size of coarsest grid: 100
<font color='red'>Relaxation factor of the block Jacobi: from 0.1 to 1 </font>
Grid size: 3750 $\times$ 3750
End of explanation
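As a quick sanity check (not part of the benchmark), the manufactured solution can be verified numerically with second-order central differences:

```python
import numpy as np

# u(x, y) = cos(2 pi x) cos(2 pi y); check u_xx + u_yy = -8 pi^2 cos(2 pi x) cos(2 pi y)
u = lambda x, y: np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y)
h, x, y = 1e-4, 0.3, 0.7
uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
rhs = -8 * np.pi**2 * u(x, y)
residual = uxx + uyy - rhs   # ~0 up to O(h^2) truncation error
```

The Neumann conditions also hold, since u_x is proportional to sin(2 pi x), which vanishes at x = 0 and x = 1 (and likewise in y).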
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([5.152, 3.822, 4.314]),
"W": numpy.array([5.568, 5.89, 6.39]),
"F": numpy.array([5.886, 4.232, 4.53])}
errL24 = {"V": numpy.array([0.052, 0.152, 2.004]),
"W": numpy.array([0.008, 0.03, 0.01]),
"F": numpy.array([2.766, 0.002, 0.23])}
errU24 = {"V": numpy.array([0.018, 0.078, 1.986]),
"W": numpy.array([0.012, 0.04, 0.02]),
"F": numpy.array([3.174, 0.008, 0.89])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.248, 5.238, 4.15]),
"W": numpy.array([7.382, 5.53, 7.456]),
"F": numpy.array([5.672, 5.58, 4.24])}
errL12 = {"V": numpy.array([0.008, 0.368, 0]),
"W": numpy.array([0.002, 0, 1.656]),
"F": numpy.array([0.992, 1.22, 0])}
errU12 = {"V": numpy.array([0.002, 1.472, 0]),
"W": numpy.array([0.008, 0, 0.424]),
"F": numpy.array([0.658, 1.83, 0])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
Explanation: Test set 2:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-10}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
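The smoother whose relaxation factor is being swept in these tests is (block) Jacobi. In its simplest point-wise form the omega-damped update is x_{k+1} = x_k + omega * D^{-1} (b - A x_k); a minimal sketch on a tiny 1D Poisson matrix (chosen only for illustration):

```python
import numpy as np

def weighted_jacobi(A, b, omega=0.8, iters=500):
    # x_{k+1} = x_k + omega * D^{-1} (b - A x_k)
    D = np.diag(A)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / D
    return x

# 1D Poisson stencil [-1, 2, -1]; the exact solution of A x = b below is x = [1, 1, 1]
A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
b = np.array([1., 0., 1.])
x = weighted_jacobi(A, b, omega=0.8)
```

Sweeping omega changes how strongly each update is damped, which is exactly the trade-off the relaxation-factor plot above explores.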
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.102, 2.444, 2.166]),
"W": numpy.array([3.716, 4.376, 5.4]),
"F": numpy.array([2.872, 3.31, 3.78])}
errL24 = {"V": numpy.array([0.032, 0.044, 0.006]),
"W": numpy.array([0.066, 0.316, 0.99]),
"F": numpy.array([0.012, 0.49, 0.88])}
errU24 = {"V": numpy.array([0.058, 0.016, 0.004]),
"W": numpy.array([0.074, 1.214, 0.67]),
"F": numpy.array([0.008, 0.74, 0.23])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.238, 4.42, 3.7]),
"W": numpy.array([5.272, 5.174, 5.396]),
"F": numpy.array([4.23, 3.974, 4.58])}
errL12 = {"V": numpy.array([0.608, 0, 0]),
"W": numpy.array([0.402, 0.004, 0.156]),
"F": numpy.array([0.05, 0.004, 0.61])}
errU12 = {"V": numpy.array([2.422, 0, 0]),
"W": numpy.array([1.608, 0.006, 0.044]),
"F": numpy.array([0.08, 0.016, 0.92])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
Explanation: Test set 3:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach <font color='red'>$10^{-8}$</font>
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.06, 2.422, 2.39]),
"W": numpy.array([3.802, 4.376, 5.406]),
"F": numpy.array([2.878, 3.382, 5.568])}
errL24 = {"V": numpy.array([0.05, 0.022, 0.23]),
"W": numpy.array([0.002, 0.306, 1.006]),
"F": numpy.array([0.008, 0.552, 0.668])}
errU24 = {"V": numpy.array([0.02, 0.038, 0.91]),
"W": numpy.array([0.008, 1.214, 0.674]),
"F": numpy.array([0.012, 0.988, 0.452])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.23, 4.376, 3.702]),
"W": numpy.array([5.266, 5.174, 5.43]),
"F": numpy.array([4.208, 4.288, 4.572])}
errL12 = {"V": numpy.array([0.65, 0.126, 0.002]),
"W": numpy.array([0.406, 0.004, 0.01]),
"F": numpy.array([0.028, 0.318, 0.602])}
errU12 = {"V": numpy.array([2.42, 0.044, 0.008]),
"W": numpy.array([1.614, 0.006, 0.01]),
"F": numpy.array([0.112, 1.272, 0.908])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
Explanation: Test set 4:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: <font color='red'>ALL</font>
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
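The asymmetric error bars in these plots come from stacking the lower and upper deviations into a 2×N array, which is the shape `pyplot.errorbar` expects for its `yerr` argument (row 0 is the distance below each point, row 1 the distance above). A minimal sketch with made-up deviations:

```python
import numpy

# illustrative lower/upper deviations for three data points (not from the runs above)
errL = numpy.array([0.1, 0.2, 0.3])
errU = numpy.array([0.4, 0.5, 0.6])

# row 0 = distance below each point, row 1 = distance above
yerr = numpy.vstack((errL, errU))
```

This is exactly what the `err24`/`err12` dictionary comprehensions build for each cycle type.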
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([4.512, 4.508, 5.152]),
"W": numpy.array([5.626, 5.63, 5.622]),
"F": numpy.array([5.268, 5.286, 6.822])}
errL24 = {"V": numpy.array([0.332, 0.338, 0.962]),
"W": numpy.array([0.026, 0.02, 0.022]),
"F": numpy.array([1.088, 1.026, 0.012])}
errU24 = {"V": numpy.array([1.278, 1.292, 0.638]),
"W": numpy.array([0.034, 0.02, 0.028]),
"F": numpy.array([1.562, 1.534, 0.008])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([4.81, 5.212, 6.016]),
"W": numpy.array([7.402, 7.406, 7.4]),
"F": numpy.array([5.584, 5.53, 5.554])}
errL12 = {"V": numpy.array([0, 0.402, 1.206]),
"W": numpy.array([0.002, 0.006, 0]),
"F": numpy.array([0.084, 0.03, 0.054])}
errU12 = {"V": numpy.array([0, 1.608, 0.804]),
"W": numpy.array([0.008, 0.014, 0]),
"F": numpy.array([0.056, 0.1, 0.086])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
Explanation: Test set 5:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-10}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: <font color='red'>LU Decomposition</font>
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3, 4, 5])
time24 = {"V": numpy.array([2.75, 2.186, 1.958, 1.832, 1.782]),
"W": numpy.array([6.9, 8.3, 8.236, 9.762, 15.764]),
"F": numpy.array([4.204, 5.106, 6.574, 5.782, 6.68])}
errL24 = {"V": numpy.array([0.06, 0.066, 0.058, 0.002, 0.002]),
"W": numpy.array([1.61, 1.95, 0.016, 0.012, 0.064]),
"F": numpy.array([0.774, 1.066, 1.554, 0.272, 0.04])}
errU24 = {"V": numpy.array([0.04, 0.044, 0.042, 0.008, 0.008]),
"W": numpy.array([0.41, 1.27, 0.014, 0.038, 0.076]),
"F": numpy.array([1.046, 0.704, 0.426, 0.078, 0.02])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([2.396, 2.18, 8.164, 1.98, 2.174]),
"W": numpy.array([8.316, 13.444, 16.048, 23.508, 18.928]),
"F": numpy.array([9.094, 8.608, 6.818, 8.416, 9.832])}
errL12 = {"V": numpy.array([0.006, 0, 6.134, 0, 0.174]),
"W": numpy.array([0.006, 0.044, 0.658, 3.685, 0.048]),
"F": numpy.array([1.624, 0.018, 0.128, 1.126, 1.832])}
errU12 = {"V": numpy.array([0.004, 0, 24.486, 0, 0.696]),
"W": numpy.array([0.014, 0.056, 2.532, 2.462, 0.062]),
"F": numpy.array([0.416, 0.012, 0.042, 1.674, 1.838])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
Explanation: Test set 6:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: <font color='red'>Gauss-Seidel</font>
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3, 4, 5</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
N_1GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_1GPU = numpy.array([0.024, 0.017, 0.10, 0.2, 1.6])
err_1GPU = numpy.array([0., 0., 0., 0., 0.])
N_2GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_2GPU = numpy.array([0.04, 0.11, 0.09, 0.17, 1.19])
err_2GPU = numpy.array([0., 0., 0., 0., 0.])
N_4GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_4GPU = numpy.array([0.11, 0.10, 0.09, 0.57, 0.69])
err_4GPU = numpy.array([0., 0., 0., 0., 0.])
N_8GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_8GPU = numpy.array([0.09, 0.09, 0.53, 0.4, 0.44])
err_8GPU = numpy.array([0., 0., 0., 0., 0.])
nGPU = numpy.array([1, 2, 4, 8])
Time_N1K_GPU = numpy.array([0.024, 0.04, 0.11, 0.09])
Time_N8K_GPU = numpy.array([0.017, 0.11, 0.10, 0.09])
Time_N64K_GPU = numpy.array([0.10, 0.09, 0.09, 0.53])
Time_N512K_GPU = numpy.array([0.2, 0.17, 0.57, 0.4])
Time_N4M_GPU = numpy.array([1.6, 1.19, 0.69, 0.44])
N_1CPU = numpy.array([64000, 512000, 4096000])
Time_1CPU = numpy.array([0.23, 3.06, 45.37])
err_1CPU = numpy.array([0., 0., 0.])
N_2CPU = numpy.array([64000, 512000, 4096000])
Time_2CPU = numpy.array([0.17, 3.12, 39.05])
err_2CPU = numpy.array([0., 0., 0.])
N_4CPU = numpy.array([64000, 512000, 4096000])
Time_4CPU = numpy.array([0.09, 1.65, 21.88])
err_4CPU = numpy.array([0., 0., 0.])
N_8CPU = numpy.array([64000, 512000, 4096000])
Time_8CPU = numpy.array([0.05, 1.22, 18.3])
err_8CPU = numpy.array([0., 0., 0.])
nCPU = numpy.array([1, 2, 4, 8])
Time_N64K_CPU = numpy.array([0.23, 0.17, 0.09, 0.05])
Time_N512K_CPU = numpy.array([3.06, 3.12, 1.65, 1.22])
Time_N4M_CPU = numpy.array([45.37, 39.05, 21.88, 18.3])
#pyplot.figure(figsize=(16,8), dpi=400)
#pyplot.subplot(1, 2, 1)
#pyplot.title("Weak Scaling")
#ax = pyplot.gca()
#ax.set_xscale("log", nonposx='clip')
#ax.set_yscale("log", nonposy='clip')
#pyplot.errorbar(N_1GPU, Time_1GPU, yerr = err_1GPU, fmt='ks-', label="1 GPU")
#pyplot.errorbar(N_2GPU, Time_2GPU, yerr = err_2GPU, fmt='r^-', label="2 GPU")
#pyplot.errorbar(N_4GPU, Time_4GPU, yerr = err_4GPU, fmt='gx-', label="4 GPU")
#pyplot.errorbar(N_8GPU, Time_8GPU, yerr = err_8GPU, fmt='bo-', label="8 GPU")
#
#pyplot.errorbar(N_1CPU, Time_1CPU, yerr = err_1CPU, fmt='ks--', label="1 CPU")
#pyplot.errorbar(N_2CPU, Time_2CPU, yerr = err_2CPU, fmt='r^--', label="2 CPU")
#pyplot.errorbar(N_4CPU, Time_4CPU, yerr = err_4CPU, fmt='gx--', label="4 CPU")
#pyplot.errorbar(N_8CPU, Time_8CPU, yerr = err_8CPU, fmt='bo--', label="8 CPU")
#pyplot.xlabel("Number of total grid points")
#pyplot.ylabel("Wall time for solve (sec)")
#pyplot.legend(loc=0)
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Weak Scaling")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
#pyplot.plot(nGPU, Time_N1K_GPU, 'ks-', label="GPU, 10x10x10")
#pyplot.plot(nGPU, Time_N8K_GPU, 'r^-', label="GPU, 20x20x20")
pyplot.plot(nGPU, Time_N64K_GPU, 'rx-', label="GPU, 40x40x40")
pyplot.plot(nGPU, Time_N512K_GPU, 'go-', label="GPU, 80x80x80")
pyplot.plot(nGPU, Time_N4M_GPU, 'b>-', label="GPU, 160x160x160")
pyplot.plot(nCPU, Time_N64K_CPU, 'rx--', label="CPU, 40x40x40")
pyplot.plot(nCPU, Time_N512K_CPU, 'go--', label="CPU, 80x80x80")
pyplot.plot(nCPU, Time_N4M_CPU, 'b>--', label="CPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPUs")
pyplot.ylabel("Wall time for solve (sec)")
#pyplot.ylim(0, 4)
pyplot.legend(loc=0)
Explanation: 3D Poisson Problem
Weak Scaling Test
<font color='red'>Number of GPU: 1, 2, 4, 8 (K20m)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Gauss-Seidel
Cycle: V
Pre-sweep: 1
Post-sweep: 1
Coarsest sweep: 4
Maximum size of coarsest grid: 2
Relaxation factor of the block Jacobi: 0.8
<font color='red'>Grid size: 10x10x10, 20x20x20, 40x40x40, 80x80x80, 160x160x160</font>
End of explanation
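A common way to summarize timings like these is speedup and parallel efficiency relative to the single-device run. A rough sketch using the 160x160x160 GPU timings from the cell above (this is just post-processing of the measured numbers, not part of the solver):

```python
import numpy

# timings copied from the cell above: 160x160x160 grid on 1, 2, 4, 8 GPUs
nGPU = numpy.array([1, 2, 4, 8])
Time_N4M_GPU = numpy.array([1.6, 1.19, 0.69, 0.44])

speedup = Time_N4M_GPU[0] / Time_N4M_GPU   # relative to the 1-GPU run
efficiency = speedup / nGPU                # 1.0 would be ideal scaling
```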
N_4M_GPU = numpy.array([1, 2, 4, 8, 16, 32])
Time_4M_GPU_Raw = numpy.array([[1.04, 1.11, 0.86, 0.5, 3.7, 3.49],
[1.04, 1.11, 0.86, 0.49, 3.72, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.51],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.48]])
Time_4M_GPU = numpy.average(Time_4M_GPU_Raw, axis=0)
N_8M_GPU = numpy.array([4, 8, 16, 32])
Time_8M_GPU_Raw = numpy.array([[1.37, 0.81, 0.57, 2.1],
[1.44, 0.81, 0.58, 2.09],
[1.37, 0.81, 0.58, 2.09],
[1.37, 0.82, 0.58, 2.09],
[1.37, 0.81, 0.59, 2.09]])
Time_8M_GPU = numpy.average(Time_8M_GPU_Raw, axis=0)
N_4M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_4M_CPU = numpy.array([9.53, 4.72, 3.06, 2.19, 1.74, 1.53, 1.31, 1.13])
N_8M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_8M_CPU = numpy.array([20.65, 10.33, 6.65, 4.92, 3.9, 3.27, 2.82, 2.45])
N_4M_GPU_OPT = numpy.array([1, 2, 4, 8, 16])
Time_4M_GPU_OPT = numpy.array([0.81, 0.67, 0.42, 0.31, 0.26])
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Strong Scaling (GPU)")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
pyplot.plot(N_4M_GPU, Time_4M_GPU, 'ks-', label="GPU, 200x200x100")
pyplot.plot(N_8M_GPU, Time_8M_GPU, 'rx-', label="GPU, 200x200x200")
pyplot.plot(N_4M_CPU, Time_4M_CPU, 'ks--', label="CPU, 200x200x100")
pyplot.plot(N_8M_CPU, Time_8M_CPU, 'rx--', label="CPU, 200x200x200")
pyplot.plot(N_4M_GPU_OPT, Time_4M_GPU_OPT, 'ks-.', label="GPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPU-Nodes (12 CPUs per node)")
pyplot.ylabel("Wall time for solve (sec)")
pyplot.legend(loc=0)
Explanation: Strong Scaling Test
<font color='red'>Number of GPU: 1, 2, 4, 8, 16, 32 (K20m)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Gauss-Seidel
Cycle: V
Pre-sweep: 1
Post-sweep: 1
Coarsest sweep: 4
Maximum size of coarsest grid: 2
Relaxation factor of the block Jacobi: 0.8
<font color='red'>Grid size: 200x200x100, 200x200x200</font>
End of explanation |
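The same reduction applies to the strong-scaling runs; a sketch using the 200x200x100 CPU timings from the cell above (a speedup exceeding the node count would indicate superlinear scaling, often a cache effect):

```python
import numpy

# timings copied from the cell above: 200x200x100 grid on 1-8 CPU nodes
N_4M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_4M_CPU = numpy.array([9.53, 4.72, 3.06, 2.19, 1.74, 1.53, 1.31, 1.13])

speedup = Time_4M_CPU[0] / Time_4M_CPU
```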
Description:
Clean up data
Sometimes, unwanted data needs to be deleted.
Each screenshot was manually checked, and the incorrect ones were moved to the wrong/ directory.
This notebook iterates through the wrong/ directory and removes the accompanying rows inside the csv file.
Step1: Get wrong image list
Step2: Get index of each wrong image
Step3: Remove the rows, and save the modified csv file
Step4: Check to see that it was saved well
Step5: Move img files to respective output directory. (To indicate that we have looked at the images, and removed the "wrong" images)
Step6: Delete wrong images from wrong directory | Python Code:
%ls -lh ../data/csv
import pandas as pd
import os
parent_path = os.path.dirname(os.getcwd())
csv_file = '97802012'
csv_file_name = csv_file + '.csv'
csv_dir_path = os.path.join(parent_path, 'data', 'csv')
csv_file_path = os.path.join(csv_dir_path, csv_file_name)
img_dir_path = os.path.join(parent_path, 'data', 'img', 'raw')
img_output_dir_path = os.path.join(img_dir_path, csv_file)
img_wrong_dir_path = os.path.join(parent_path, 'data', 'img', 'wrong')
df = pd.read_csv(csv_file_path, header=0)
old_rows_count = df.shape[0]
print("%d rows" % df.shape[0])
df.head(3)
Explanation: Clean up data
Sometimes, unwanted data needs to be deleted.
Each screenshot was manually checked, and the incorrect ones were moved to the wrong/ directory.
This notebook iterates through the wrong/ directory and removes the accompanying rows inside the csv file.
End of explanation
wrong_list = os.listdir(img_wrong_dir_path)
wrong_list = [x for x in wrong_list if csv_file in x]
len(wrong_list)
Explanation: Get wrong image list
End of explanation
def get_index(i):
return df[df['img'] == i].index.tolist()[0]
wrong_list_index = [get_index(i) for i in wrong_list]
Explanation: Get index of each wrong image
End of explanation
df = df.drop(df.index[wrong_list_index])
df.shape[0]
assert(df.shape[0] + len(wrong_list) == old_rows_count)
df.to_csv(csv_file_path, index=False)
Explanation: Remove the rows, and save the modified csv file
End of explanation
df = pd.read_csv(csv_file_path, header=0)
print("%d rows" % df.shape[0])
df.head(3)
Explanation: Check to see that it was saved well
End of explanation
if not os.path.exists(img_output_dir_path):
os.makedirs(img_output_dir_path)
for f in df['img']:
old_path = os.path.join(img_dir_path, f)
new_path = os.path.join(img_output_dir_path, f)
os.rename(old_path, new_path)
Explanation: Move img files to respective output directory. (To indicate that we have looked at the images, and removed the "wrong" images)
End of explanation
for f in wrong_list:
remove_file_path = os.path.join(img_wrong_dir_path, f)
os.remove(remove_file_path)
Explanation: Delete wrong images from wrong directory
End of explanation |
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning problem. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
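The claim that multiplying a one-hot vector by the weight matrix just returns a row is easy to verify; in this tiny NumPy sketch the vocabulary size and embedding dimension are made up for illustration:

```python
import numpy as np

vocab_size, embed_dim = 5, 3
W = np.arange(vocab_size * embed_dim, dtype=float).reshape(vocab_size, embed_dim)

idx = 2
one_hot = np.zeros(vocab_size)
one_hot[idx] = 1.0

via_matmul = one_hot @ W   # the expensive way: full matrix multiplication
via_lookup = W[idx]        # the embedding-lookup shortcut: grab the row directly
```

Both give the same hidden-layer values, which is why frameworks implement the lookup instead of the multiplication.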
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
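The `utils.create_lookup_tables` helper isn't shown in this notebook; a minimal version consistent with the description above (most frequent word mapped to 0, next to 1, and so on) might look like this:

```python
from collections import Counter

def create_lookup_tables(words):
    # assign integers in descending frequency order
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables(["the", "cat", "the", "dog", "the", "cat"])
```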
## Your code here
from collections import Counter
word_counter = Counter(int_words)
total_word_count = len(int_words)
word_freq = {word:count/total_word_count for word, count in word_counter.items()}
threshold = 1e-5
discard_prob = {word:(1 - np.sqrt(threshold / freq)) for word, freq in word_freq.items()}
import random
print("before sub sampling", len(int_words))
# keep each word with probability 1 - P(discard)
train_words = [word for word in int_words if discard_prob[word] < random.random()]
print("after sub sampling", len(train_words))
temp = [int_to_vocab[idx] for idx in train_words[0:100]]
print(temp)
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning problem. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
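As a quick numeric check of the discard formula $P(w_i) = 1 - \sqrt{t/f(w_i)}$: a very frequent word should get a discard probability near 1, while a word rarer than the threshold gets a negative value, which can be clipped to zero (the frequencies below are illustrative, not from the dataset):

```python
import numpy as np

threshold = 1e-5

def discard_probability(freq, t=threshold):
    # P(discard) = 1 - sqrt(t / f); negative values mean "never discard"
    return max(0.0, 1 - np.sqrt(t / freq))

p_frequent = discard_probability(0.05)  # a very common word like "the"
p_rare = discard_probability(1e-6)      # a word rarer than the threshold
```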
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
r = random.randint(1, window_size)
start_idx = max(0, idx - r)
end_idx = min(len(words), idx + r + 1)
return words[start_idx:idx] + words[idx+1:end_idx]
print(train_words[3])
print(train_words[0:10])
print(get_target(train_words[0:10], 3))
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
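A standalone toy run of the same idea (duplicating simplified copies of the two functions so the snippet is self-contained) shows how each input id is repeated once per target, keeping the two lists aligned:

```python
import random

def get_target(words, idx, window_size=5):
    r = random.randint(1, window_size)
    start = max(0, idx - r)
    return words[start:idx] + words[idx + 1:idx + r + 1]

def get_batches(words, batch_size, window_size=5):
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]  # only full batches
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx + batch_size]
        for ii in range(len(batch)):
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch[ii]] * len(batch_y))
        yield x, y

random.seed(0)
x, y = next(get_batches(list(range(20)), batch_size=10, window_size=3))
assert len(x) == len(y)                   # one row per input-target pair
assert all(a != b for a, b in zip(x, y))  # a word id is never its own target
print(list(zip(x, y))[:5])
```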
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(dtype=tf.int32, shape=[None], name='inputs')
labels = tf.placeholder(dtype=tf.int32, shape=[None, None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200  # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))  # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs)  # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
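To see why the lookup is just cheap row selection, here is a small numpy sketch (illustrative only; the notebook itself relies on tf.nn.embedding_lookup):

```python
import numpy as np

np.random.seed(0)
n_vocab, n_embedding = 5, 3  # tiny sizes, purely for illustration
embedding = np.random.uniform(-1, 1, (n_vocab, n_embedding))

inputs = np.array([2, 0, 2])   # a batch of word ids
embed = embedding[inputs]      # what tf.nn.embedding_lookup returns

# Equivalent but wasteful formulation: one-hot encode, then matrix-multiply
one_hot = np.eye(n_vocab)[inputs]
assert np.allclose(embed, one_hot @ embedding)
print(embed.shape)  # one embedding row per input id: (3, 3)
```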
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.01))  # create softmax weight matrix here
softmax_b = tf.Variable(tf.ones(n_vocab))  # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples each from the ranges (0,100) and (1000,1100); lower ids imply more frequent words
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
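The cosine-similarity computation in the graph above can be sketched in plain numpy (using a hypothetical tiny embedding matrix, just to show the mechanics):

```python
import numpy as np

np.random.seed(0)
embedding = np.random.randn(6, 4)  # hypothetical tiny embedding matrix

# Normalize each row to unit length, as the TF graph does with keep_dims=True
norm = np.sqrt(np.sum(embedding ** 2, axis=1, keepdims=True))
normalized = embedding / norm

valid = normalized[[0, 3]]           # two "validation" word vectors
similarity = valid @ normalized.T    # cosine similarities, shape (2, 6)

# Each word has cosine similarity exactly 1 with itself, and at most 1 overall
assert np.allclose(similarity[0, 0], 1.0) and np.allclose(similarity[1, 3], 1.0)
assert np.all(similarity <= 1.0 + 1e-9)
print(similarity.round(2))
```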
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
7,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conformal Kernel Distribution Embedding
Conformal Anomaly Detector via RBF Kernel Embedding
Sample results obtained in this project are shown below. On the left are level sets of the nominal bivariate Gaussian mixture distribution used to illustrate the K-LPE algorithm. In the middle are results of K-LPE with K = 6 and the Euclidean distance metric for $m = 150$ test points drawn from an equal mixture of the 2D uniform and the (nominal) bivariate distributions. Scores for the test points are based on 200 nominal training samples. Scores falling below a threshold level of 0.05 are declared anomalies. The dotted contour corresponds to the exact bivariate Gaussian density level set at level alpha = 0.05. On the right is the empirical distribution of the test-point scores: for the bivariate Gaussian it appears uniform, while the scores for test points drawn from the 2D uniform distribution cluster around zero.
Step1: train test splitting
Step2: CKDE vs. ocSVM
Step3: Contour plot | Python Code:
import time
import os
import numpy as np
from sklearn.grid_search import ParameterGrid
from sklearn.base import clone
from sklearn.gaussian_process import GaussianProcess
from scipy.stats import norm
from joblib import Parallel, delayed
from utils.state import _save
from utils.functions_1d import f6, pressure2, heaviside
from utils.functions import gaussian
from utils.conformal import RRCM, CRR
from utils.KRR import KRR_AB
%matplotlib inline
import matplotlib.pyplot as plt
np.seterr(all="ignore")
random_state = np.random.RandomState(0x0B00B1E5)
N, D, P = 1000, 2, 1
X = np.concatenate([
random_state.normal(size=(N//4, D))*1 + np.array([[2, 2]]),
random_state.normal(size=(N//4, D))*.5 + np.array([[-2, 1]]),
random_state.normal(size=(N//4, D))*.75 + np.array([[2, -2]]),
random_state.normal(size=(N//4, D))*2 + np.array([[-1, -2]]),
], axis=0)
X = X.reshape((-1, D))
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111)
for i in range(4)[::-1]:
X_ = X[(N//4)*i:(N//4)*(i+1)]
ax.scatter(X_[:, 0], X_[:, 1], c="rbgk"[i], lw=0)
ax.set_ylabel("$x_2$")
ax.set_xlabel("$x_1$")
ax.set_title("Toy example")
ax.set_xlim(-5, 5) ; ax.set_ylim(-5, 5)
# ax.set_xlim(-10, 10) ; ax.set_ylim(-10, 10)
fig.savefig("../report/defense/toy_ckde.pdf", dpi=120)
plt.close()
Explanation: Conformal Kernel Distribution Embedding
Conformal Anomaly Detector via RBF Kernel Embedding
Sample results obtained in this project are shown below. On the left are level sets of the nominal bivariate Gaussian mixture distribution used to illustrate the K-LPE algorithm. In the middle are results of K-LPE with K = 6 and the Euclidean distance metric for $m = 150$ test points drawn from an equal mixture of the 2D uniform and the (nominal) bivariate distributions. Scores for the test points are based on 200 nominal training samples. Scores falling below a threshold level of 0.05 are declared anomalies. The dotted contour corresponds to the exact bivariate Gaussian density level set at level alpha = 0.05. On the right is the empirical distribution of the test-point scores: for the bivariate Gaussian it appears uniform, while the scores for test points drawn from the 2D uniform distribution cluster around zero.
End of explanation
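The conformal p-value used throughout this notebook can be demonstrated on a tiny standalone example (pure numpy with a hand-rolled RBF kernel; the data and constants are illustrative): a test point near the mode of the nominal sample gets a large p-value, while an outlier gets a tiny one.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Pairwise RBF kernel exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.RandomState(0)
X_train = rng.normal(size=(200, 2))          # nominal sample
X_test = np.array([[0.0, 0.0], [6.0, 6.0]])  # one typical point, one outlier

Kxx = rbf(X_train, X_train)
Kxz = rbf(X_train, X_test)

# Same nonconformity comparison as below: kernel mass of each training point
# versus each test point (the leading 1 accounts for the self-kernel term)
delta = 1 + Kxz.sum(axis=0, keepdims=True) - Kxx.sum(axis=1, keepdims=True) - Kxz
pvalue = (np.sum(delta >= 0, axis=0) + 1) / (Kxx.shape[0] + 1.0)

assert pvalue[0] > 0.2 and pvalue[1] < 0.05
print(pvalue)
```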
from sklearn.cross_validation import train_test_split
X_train, X_test = train_test_split(X, test_size=0.25, random_state=random_state)
theta_ = 1
from scipy.linalg import cholesky, solve_triangular
from scipy.linalg.lapack import dtrtri
from sklearn.metrics.pairwise import pairwise_kernels
## Create the kernel matrix
Kxx = pairwise_kernels(X_train, metric="rbf", gamma=theta_)
Kxz = pairwise_kernels(X_train, X_test, metric="rbf", gamma=theta_)
delta_ = 1 + Kxz.sum(axis=0, keepdims=True) - Kxx.sum(axis=1, keepdims=True) - Kxz
pvalue_ = (np.sum(delta_ >= 0, axis=0) + 1) / (Kxx.shape[0] + 1.0)
# delta_ = (Kxx.sum(axis=1, keepdims=True) + Kxz) - (1 + Kxz.sum(axis=0, keepdims=True))
# pvalue_ = np.mean(delta_ >= 0, axis=0)
np.mean(pvalue_ <= 0.25)
plt.scatter(X_train[:, 0], X_train[:, 1], c="black", alpha=0.5, lw=0,)
plt.scatter(X_test[:, 0], X_test[:, 1], s=50*(1-pvalue_), lw=0)
plt.scatter(X_test[pvalue_ < 0.05, 0], X_test[pvalue_ < 0.05, 1], s=100, c="m", lw=0)
plt.imshow(Kxx, interpolation="nearest")
Explanation: train test splitting
End of explanation
X_train, X_test = train_test_split(X, test_size=0.25, random_state=random_state)
theta_, kernel, alpha = 1.0, "rbf", 0.250
kwargs = dict(gamma=theta_)
Kxx = pairwise_kernels(X_train, metric=kernel, **kwargs)
Kxz = pairwise_kernels(X_train, X_test, metric=kernel, **kwargs)
delta_ = 1 + Kxz.sum(axis=0, keepdims=True) - Kxx.sum(axis=1, keepdims=True) - Kxz
pvalue_ = (np.sum(delta_ >= 0, axis=0) + 1) / (Kxx.shape[0] + 1.0)
from sklearn import svm
clf = svm.OneClassSVM(nu=alpha, kernel=kernel, gamma=theta_).fit(X_train)
ocsvm = clf.predict(X_test) # decision_function(X_test)[:,0]
np.mean(ocsvm>0), np.mean(pvalue_<alpha)
fig = plt.figure(figsize=(16, 8))
ax = fig.add_subplot(121)
ax.scatter(X_test[:,0], X_test[:,1], c=pvalue_>alpha, cmap=plt.cm.coolwarm_r, lw=0)
ax.set_title("$\\alpha=%g$ Conformal KDE (%s, $\\theta=%g$)"%(alpha, kernel, theta_,))
ax = fig.add_subplot(122)
ax.scatter(X_test[:,0], X_test[:,1], c=ocsvm, cmap=plt.cm.coolwarm_r, lw=0)
ax.set_title("ocSVM (%s, $\\theta=%g, \\nu=%g$)"%(kernel, theta_, alpha,))
Explanation: CKDE vs. ocSVM
End of explanation
mesh_ = np.meshgrid(*(2*[np.linspace(-10, 10, num=151)]))
X_test_ = np.stack(mesh_, axis=-1).reshape((-1, 2))
theta_, kernel, alpha = 2.0, "rbf", 0.5
kwargs = dict(gamma=theta_)
Kxx = pairwise_kernels(X_train, metric=kernel, **kwargs)
Kxz = pairwise_kernels(X_train, X_test_, metric=kernel, **kwargs)
delta_ = 1 + Kxz.sum(axis=0, keepdims=True) - Kxx.sum(axis=1, keepdims=True) - Kxz
pvalue_ = (np.sum(delta_ >= 0, axis=0) + 1) / (Kxx.shape[0] + 1.0)
from sklearn import svm
clf = svm.OneClassSVM(nu=alpha, kernel=kernel, gamma=theta_).fit(X_train)
ocsvm = np.exp(np.minimum(clf.decision_function(X_test_), 0)).reshape(mesh_[0].shape)
eta_ocsvm = np.zeros((X_train.shape[0], 1), dtype=float)
eta_ocsvm[clf.support_, 0] = clf.dual_coef_
svmcm = np.apply_along_axis(lambda z, Z: np.mean(Z>=z[np.newaxis], axis=0),
1, eta_ocsvm, eta_ocsvm)
plt.scatter(X_train[:, 0], X_train[:, 1], lw=0,
c=np.array(["b","r"])[(svmcm[:,0]<alpha).astype(int)])
plt.hist(np.exp(ocsvm.reshape(-1)))
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
ax.contourf(mesh_[0], mesh_[1], ocsvm.reshape(mesh_[0].shape),
levels=np.linspace(ocsvm.min(), ocsvm.max(), num=51), cmap=plt.cm.coolwarm_r)
# ax.scatter(X_test_[:, 0], X_test_[:, 1], c="m", s=20*(ocsvm < 0), lw=0)
ax.scatter(X_train[:, 0], X_train[:, 1], c="k", s=5, lw=0)
ax.set_title("ocSVM (%s, $\\theta=%g, \\nu=%g$)"%(kernel, theta_, alpha,))
ax.set_ylabel("$x_2$") ; ax.set_xlabel("$x_1$")
ax.set_xlim(-5, 5) ; ax.set_ylim(-5, 5)
fig.savefig("../report/defense/ocSVM.pdf", dpi=120)
plt.close()
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
ax.contourf(mesh_[0], mesh_[1], pvalue_.reshape(mesh_[0].shape),
levels=np.linspace(0, 1, num=51), cmap=plt.cm.coolwarm_r)
# ax.scatter(X_test_[:, 0], X_test_[:, 1], c="m", s=20*(pvalue_ < alpha), lw=0)
ax.scatter(X_train[:, 0], X_train[:, 1], c="k", s=5, lw=0)
ax.set_title("Conformal KDE (%s, $\\theta=%g$)"%(kernel, theta_,))
ax.set_ylabel("$x_2$") ; ax.set_xlabel("$x_1$")
ax.set_xlim(-5, 5) ; ax.set_ylim(-5, 5)
fig.savefig("../report/defense/ckde.pdf", dpi=120)
plt.close()
kernel = "laplacian"
kwargs = dict(gamma=theta_)
Kxx = pairwise_kernels(X_train, metric=kernel, **kwargs)
Kxz = pairwise_kernels(X_train, X_test_, metric=kernel, **kwargs)
delta_ = 1 + Kxz.sum(axis=0, keepdims=True) - Kxx.sum(axis=1, keepdims=True) - Kxz
pvalue_ = (np.sum(delta_ >= 0, axis=0) + 1) / (Kxx.shape[0] + 1.0)
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
ax.contourf(mesh_[0], mesh_[1], pvalue_.reshape(mesh_[0].shape),
levels=np.linspace(0, 1, num=51), cmap=plt.cm.coolwarm_r)
# ax.scatter(X_test_[:, 0], X_test_[:, 1], c="m", s=20*(pvalue_ < alpha), lw=0)
ax.scatter(X_train[:, 0], X_train[:, 1], c="k", s=5, lw=0)
ax.set_title("Conformal KDE (%s, $\\theta=%g$)"%(kernel, theta_,))
ax.set_ylabel("$x_2$") ; ax.set_xlabel("$x_1$")
ax.set_xlim(-5, 5) ; ax.set_ylim(-5, 5)
fig.savefig("../report/defense/ckde-lap.pdf", dpi=120)
plt.close()
Explanation: Contour plot
End of explanation |
7,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random variables
When the objective is to predict a category (qualitative, such as predicting political party affiliation), we term it predicting a qualitative random variable. On the other hand, if we are predicting a quantitative value (number of cars sold), we term it a quantitative random variable.
When the observations of a quantitative random variable can assume values in a continuous interval (such as predicting temperature), it is called a continuous random variable.
Properties of discrete random variable
Say, we are predicting the probability of getting heads in two coin tosses P(y). Then
probability of y ranges from 0 to 1
sum of probabilities of all values of y = 1
probabilities of outcomes of a discrete random variable are additive. Thus the probability of y = 1 or 2 is P(1) + P(2)
Binomial and Poisson discrete random variables
Binomial probability distribution
A binomial experiment is one in which the outcome is one of two possible outcomes. Coin tosses, accept / reject, pass / fail, infected / uninfected: these are the kinds of studies that involve a binomial experiment. Thus an experiment is binomial in nature if
- the experiment has n identical trials
- each trial results in 1 of 2 outcomes (success or failure)
- the probability of one of the outcomes, say success, remains the same for all trials
- trials are independent of each other
- the random variable y is the number of successes observed in n trials.
The probability of observing y success in n trials of a binomial experiment is
$$
P(y) = \frac{n!}{y!(n-y)!}\pi^y (1-\pi)^{n-y}
$$
where
- n = number of trials
- $\pi$ = probability of success in a single trial
- $1-\pi$ = probability of failure in a single trial
- y = number of successes in n trials
- $n!$ (n factorial) = $n(n-1)(n-2)..(n-(n-1))$
Mean and Standard Deviation of Binomial probability distribution
$$
\mu = n\pi
$$
$$
\sigma = \sqrt{n\pi(1-\pi)}
$$
where
$\mu$ is mean
$\sigma$ is standard deviation
We can build a simple Python function to calculate the binomial probability as shown below
Step1: Binomial probability of germination
Let us consider a problem where 100 seeds are drawn at random. The germination rate of each seed is 85%; in other words, the probability that a seed will germinate is 0.85, derived from the observation that 85 out of 100 seeds germinate in a nursery. Now we want to calculate the probability
- that exactly 80 seeds will germinate
- that exactly 50 seeds will germinate
- that exactly 10 seeds will germinate
- that exactly 95 seeds will germinate
Step2: We could calculate the probability for all possible values of the discrete random variable in a loop and plot the probabilities as shown below
Step3: As we can see in the graph above, the probability that x number of seeds will germinate peaks around 85, matching the germination rate of 0.85.
Step4: The probability falls steeply on either side of 85. Using the cumulative probability, we can answer "more than" questions. Find the probability that
- more than 20 seeds will germinate = P(21) + P(22) + ... + P(100)
Step5: We can repeat the experiment with a sample size of 20 and plot the results
Step6: Poisson probability distribution
The Poisson distribution is used to model the number of events occurring over a period of time or region of space. An example is the number of vehicles passing through a security checkpoint in a 5 min interval.
Conditions
The probability distribution of a discrete random variable y is Poisson, if
Step7: Lets plot the distribution of y for values 0 to 10 | Python Code:
import math
def bin_prob(n,y,pi):
a = math.factorial(n)/(math.factorial(y)*math.factorial(n-y))
b = math.pow(pi, y) * math.pow((1-pi), (n-y))
p_y = a*b
return p_y
Explanation: Random variables
When the objective is to predict a category (qualitative, such as predicting political party affiliation), we term it predicting a qualitative random variable. On the other hand, if we are predicting a quantitative value (number of cars sold), we term it a quantitative random variable.
When the observations of a quantitative random variable can assume values in a continuous interval (such as predicting temperature), it is called a continuous random variable.
Properties of discrete random variable
Say, we are predicting the probability of getting heads in two coin tosses P(y). Then
probability of y ranges from 0 to 1
sum of probabilities of all values of y = 1
probabilities of outcomes of a discrete random variable are additive. Thus the probability of y = 1 or 2 is P(1) + P(2)
Binomial and Poisson discrete random variables
Binomial probability distribution
A binomial experiment is one in which the outcome is one of two possible outcomes. Coin tosses, accept / reject, pass / fail, infected / uninfected: these are the kinds of studies that involve a binomial experiment. Thus an experiment is binomial in nature if
- the experiment has n identical trials
- each trial results in 1 of 2 outcomes (success or failure)
- the probability of one of the outcomes, say success, remains the same for all trials
- trials are independent of each other
- the random variable y is the number of successes observed in n trials.
The probability of observing y success in n trials of a binomial experiment is
$$
P(y) = \frac{n!}{y!(n-y)!}\pi^y (1-\pi)^{n-y}
$$
where
- n = number of trials
- $\pi$ = probability of success in a single trial
- $1-\pi$ = probability of failure in a single trial
- y = number of successes in n trials
- $n!$ (n factorial) = $n(n-1)(n-2)..(n-(n-1))$
Mean and Standard Deviation of Binomial probability distribution
$$
\mu = n\pi
$$
$$
\sigma = \sqrt{n\pi(1-\pi)}
$$
where
$\mu$ is mean
$\sigma$ is standard deviation
We can build a simple Python function to calculate the binomial probability as shown below:
End of explanation
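Two quick checks (a standalone sketch that duplicates bin_prob so it runs on its own): the function reproduces the known probabilities for two fair coin tosses, and the mean/standard-deviation formulas give $\mu = 85$ and $\sigma \approx 3.571$ for the seed example.

```python
import math

def bin_prob(n, y, pi):
    a = math.factorial(n) / (math.factorial(y) * math.factorial(n - y))
    return a * math.pow(pi, y) * math.pow(1 - pi, n - y)

# Sanity check on two fair coin tosses: P(0) = 0.25, P(1) = 0.5, P(2) = 0.25
assert abs(bin_prob(2, 1, 0.5) - 0.5) < 1e-12
assert abs(sum(bin_prob(2, y, 0.5) for y in range(3)) - 1.0) < 1e-12

# Mean and standard deviation for the seed example (n = 100, pi = 0.85)
n, pi = 100, 0.85
mu = n * pi                           # 85.0
sigma = math.sqrt(n * pi * (1 - pi))  # sqrt(12.75), about 3.571
print(mu, round(sigma, 3))
```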
exactly_80 = bin_prob(100, 80, 0.85)
print("exactly 80: " + str(exactly_80))
exactly_50 = bin_prob(100, 50, 0.85)
print("exactly 50: " + str(exactly_50))
exactly_10 = bin_prob(100, 10, 0.85)
print("exactly 10: " + str(exactly_10))
exactly_95 = bin_prob(100, 95, 0.85)
print("exactly 95: " + str(exactly_95))
Explanation: Binomial probability of germination
Let us consider a problem where 100 seeds are drawn at random. The germination rate of each seed is 85%; in other words, the probability that a seed will germinate is 0.85, derived from the observation that 85 out of 100 seeds germinate in a nursery. Now we want to calculate the probability
- that exactly 80 seeds will germinate
- that exactly 50 seeds will germinate
- that exactly 10 seeds will germinate
- that exactly 95 seeds will germinate
End of explanation
x =[]
y =[]
cum_prob = []
for i in range(1,101):
x.append(i)
p_y = bin_prob(100,i,0.85)
# print(str(i) + " " + str(p_y))
y.append(p_y)
if i==1:
cum_prob.append(p_y)
else:
cum_prob.append(cum_prob[i-2] + p_y)
import matplotlib.pyplot as plt
%matplotlib inline
fig,ax = plt.subplots(1,2, figsize=(13,5))
ax[0].plot(x,y)
ax[0].set_title('Probability of y successes')
ax[0].set_xlabel('num of successes in 100 trials')
ax[0].set_ylabel('probability of successes')
ax[1].plot(x,cum_prob)
ax[1].set_title('Cumulative Probability of y successes')
ax[1].set_xlabel('num of successes in 100 trials')
ax[1].set_ylabel('cumulative probability of successes')
Explanation: We could calculate the probability for all possible values of the discrete random variable in a loop and plot the probabilities as shown below:
End of explanation
#find x corresponding to the max probability value
y.index(max(y)) + 1
Explanation: As we can see in the graph above, the probability that x number of seeds will germinate peaks around 85, matching the germination rate of 0.85.
End of explanation
more_than_20 = cum_prob[99] - cum_prob[19]
print("more than 20 = " + str(more_than_20))
more_than_85 = cum_prob[99] - cum_prob[84]
print("more than 85 = " + str(more_than_85))
more_than_95 = cum_prob[99] - cum_prob[94]
print("more than 95 = " + str(more_than_95))
Explanation: The probability falls steeply on either side of 85. Using the cumulative probability, we can answer "more than" questions. Find the probability that
- more than 20 seeds will germinate = P(21) + P(22) + ... + P(100)
End of explanation
x =[]
y =[]
cum_prob = []
for i in range(1,21):
x.append(i)
p_y = bin_prob(20,i,0.85)
# print(str(i) + " " + str(p_y))
y.append(p_y)
if i==1:
cum_prob.append(p_y)
else:
cum_prob.append(cum_prob[i-2] + p_y)
#find x corresponding to the max probability value
y.index(max(y)) + 1
import matplotlib.pyplot as plt
%matplotlib inline
fig,ax = plt.subplots(1,2, figsize=(13,5))
ax[0].plot(x,y)
ax[0].set_title('Probability of y successes')
ax[0].set_xlabel('num of successes in 20 trials')
ax[0].set_ylabel('probability of successes')
ax[1].plot(x,cum_prob)
ax[1].set_title('Cumulative Probability of y successes')
ax[1].set_xlabel('num of successes in 20 trials')
ax[1].set_ylabel('cumulative probability of successes')
Explanation: We can repeat the experiment with a sample size of 20 and plot the results
End of explanation
import math
def poisson_prob(y,mu):
e = 2.71828
numerator = math.pow(mu, y) * math.pow(e, 0-mu)
denominator = math.factorial(y)
return numerator/denominator
#calculate p(4)
p_4 = poisson_prob(4, 2.3)
p_4
Explanation: Poisson probability distribution
The Poisson distribution is used to model the number of events occurring over a period of time or region of space. An example is the number of vehicles passing through a security checkpoint in a 5 min interval.
Conditions
The probability distribution of a discrete random variable y is Poisson, if:
- Events occur one at a time. Two or more events do not occur precisely at the same time or space
- Events are independent: the occurrence of an event is independent of the occurrence of any other event during a non-overlapping period of time or space
- The expected number of events during one period or region $\mu$ is the same as the expected number of events in any other period or region
Thus the probability of observing y events in a unit of time or space is given by
$$
P(y) = \frac{\mu^{y}e^{-\mu}}{y!}
$$
where
$\mu$ is the average value of y
e is the natural exponential constant, e ≈ 2.71828
Example
Let y denote the number of field mice captured in a trap in a 24-hour period. The average value of y is 2.3. What is the probability of capturing exactly 4 mice in a randomly selected trap?
Ans:
$$
\mu=2.3
$$
$$
P(y=4)=?
$$
End of explanation
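As a cross-check (a standalone sketch using math.exp rather than the truncated constant e = 2.71828), the same probability comes out to about 0.1169, and the probabilities sum to 1 over all y:

```python
import math

def poisson_prob(y, mu):
    return math.pow(mu, y) * math.exp(-mu) / math.factorial(y)

p_4 = poisson_prob(4, 2.3)
print(round(p_4, 4))

# The distribution sums to 1 (checked up to a generous cutoff; the tail
# beyond y = 50 is negligible for mu = 2.3)
total = sum(poisson_prob(y, 2.3) for y in range(50))
assert abs(total - 1.0) < 1e-9
assert abs(p_4 - 0.1169) < 1e-3
```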
y=list(range(0,11))
p_y = []
cum_y = []
mu = 2.3
for yi in y:
prob = poisson_prob(yi, mu)
p_y.append(prob)
if yi==0:
cum_y.append(prob)
else:
cum_y.append(cum_y[yi-1] + prob)
#plot this
import matplotlib.pyplot as plt
%matplotlib inline
fig,ax = plt.subplots(1,2, figsize=(13,5))
ax[0].plot(y, p_y)
ax[0].set_title('Probability of finding y mice in 24 hours')
ax[0].set_xlabel('Probability of finding exactly y mice in 24 hours')
ax[0].set_ylabel('Probability')
ax[1].plot(y,cum_y)
ax[1].set_title('Cumulative Probability of y successes')
ax[1].set_xlabel('Probability of finding atleast y mice in 24 hours')
ax[1].set_ylabel('Cumulative probability')
Explanation: Let's plot the distribution of y for values 0 to 10
End of explanation |
7,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = None
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
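For reference, one way the scalar sigmoid could be written with math.exp — a sketch only, under a hypothetical name (`basic_sigmoid_sketch`) so it does not touch the graded `basic_sigmoid` cell, which you should still complete on your own:

```python
import math

def basic_sigmoid_sketch(x):
    # sigmoid(x) = 1 / (1 + e^(-x)), for a scalar x only
    return 1 / (1 + math.exp(-x))

print(basic_sigmoid_sketch(3))  # close to 0.9525741268224334
```

Because math.exp only accepts scalars, this version fails on lists and arrays — which is exactly what the next cell demonstrates.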
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = None
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
End of explanation
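A possible vectorized version, relying on np.exp broadcasting element-wise over arrays (again a sketch with an illustrative `_sketch` name, kept separate from the graded cell):

```python
import numpy as np

def sigmoid_sketch(x):
    # np.exp broadcasts element-wise, so x may be a scalar, vector or matrix
    return 1 / (1 + np.exp(-x))

print(sigmoid_sketch(np.array([1, 2, 3])))
```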
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
### START CODE HERE ### (≈ 2 lines of code)
s = None
ds = None
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
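The two steps above can be sketched directly (illustrative `_sketch` name; the graded cell stays yours to fill in):

```python
import numpy as np

def sigmoid_derivative_sketch(x):
    s = 1 / (1 + np.exp(-x))  # step 1: compute the sigmoid of x
    return s * (1 - s)        # step 2: sigma'(x) = sigma(x) * (1 - sigma(x))

print(sigmoid_derivative_sketch(np.array([1, 2, 3])))
```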
# GRADED FUNCTION: image2vector
def image2vector(image):
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
### START CODE HERE ### (≈ 1 line of code)
v = None
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
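Applying the reshape pattern above (dimensions read from image.shape, never hardcoded), a sketch might look like this:

```python
import numpy as np

def image2vector_sketch(image):
    # flatten (length, height, depth) into a (length*height*depth, 1) column vector
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)

v = image2vector_sketch(np.arange(18.0).reshape(3, 3, 2))
print(v.shape)  # (18, 1)
```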
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = None
# Divide x by its norm.
x = None
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
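The two-line recipe from the comments in the graded cell could plausibly be filled in like this (sketch under an illustrative name):

```python
import numpy as np

def normalizeRows_sketch(x):
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)  # shape (n, 1): one norm per row
    return x / x_norm                                         # broadcasting: each row / its own norm

print(normalizeRows_sketch(np.array([[0.0, 3.0, 4.0],
                                     [1.0, 6.0, 4.0]])))
```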
# GRADED FUNCTION: softmax
def softmax(x):
Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = None
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = None
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = None
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
End of explanation
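Following the three commented steps in the graded cell, a row-wise softmax could be sketched as follows (illustrative name; note each row of the result sums to 1):

```python
import numpy as np

def softmax_sketch(x):
    x_exp = np.exp(x)                             # element-wise exponential
    x_sum = np.sum(x_exp, axis=1, keepdims=True)  # (n, 1) row sums
    return x_exp / x_sum                          # broadcasting divides each row by its sum

print(softmax_sketch(np.array([[9.0, 2.0, 5.0, 0.0, 0.0],
                               [7.0, 5.0, 0.0, 0.0, 0.0]])))
```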
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
# GRADED FUNCTION: L1
def L1(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
loss = None
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
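The L1 formula above vectorizes to a single line (sketch with an illustrative name, separate from the graded cell):

```python
import numpy as np

def L1_sketch(yhat, y):
    # vectorized L1 loss: sum of absolute prediction errors
    return np.sum(np.abs(y - yhat))

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print(L1_sketch(yhat, y))  # close to 1.1
```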
# GRADED FUNCTION: L2
def L2(yhat, y):
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
### START CODE HERE ### (≈ 1 line of code)
loss = None
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
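Using the np.dot identity just mentioned, the L2 loss could be sketched like this (illustrative name; the graded cell remains yours to complete):

```python
import numpy as np

def L2_sketch(yhat, y):
    diff = y - yhat
    return np.dot(diff, diff)   # np.dot(v, v) gives the sum of squared entries of v

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print(L2_sketch(yhat, y))  # close to 0.43
```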
End of explanation |
7,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following
Step1: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note
Step5: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step6: We convert both the training and validation sets into NumPy arrays.
Warning
Step7: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as
Step8: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
Step9: Quiz question
Step10: Quiz question
Step11: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step12: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step15: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
Step16: Quiz Question
Step17: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models. | Python Code:
from __future__ import division
import graphlab
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
products
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
...
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = ...
return predictions
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
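The link function above maps a dot-product score through a sigmoid; a possible sketch on a toy feature matrix (all names and numbers here are illustrative, not part of the assignment data):

```python
import numpy as np

def predict_probability_sketch(feature_matrix, coefficients):
    score = np.dot(feature_matrix, coefficients)   # w^T h(x_i) for every row i at once
    return 1. / (1. + np.exp(-score))              # sigmoid link, element-wise

X = np.array([[1., 2., 3.], [1., -1., -1.]])       # toy feature matrix (intercept column first)
w = np.array([0., 1., -1.])                        # toy coefficients
print(predict_probability_sketch(X, w))
```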
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = ...
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
...
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
$$\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
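Putting the penalized derivative into code, a sketch might look like this (toy error/feature values are illustrative only; the graded cell should still be completed independently):

```python
import numpy as np

def feature_derivative_with_L2_sketch(errors, feature, coefficient, l2_penalty, feature_is_constant):
    derivative = np.dot(errors, feature)             # sum_i h_j(x_i) * (indicator_i - P_i)
    if not feature_is_constant:
        derivative -= 2 * l2_penalty * coefficient   # L2 term; skipped for the intercept
    return derivative

errors = np.array([0.5, -0.25])   # toy values
feature = np.array([1.0, 2.0])
print(feature_derivative_with_L2_sketch(errors, feature, 0.1, 4.0, False))  # close to -0.8
```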
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = ...
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = ...
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
...
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
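For reference, one hedged way the blanked lines could be completed (a sketch, not the official solution; `predict_probability` is assumed to be the sigmoid-of-score function from the previous assignment):

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    # P(y=+1 | x, w) = 1 / (1 + exp(-w^T h(x)))
    scores = np.dot(feature_matrix, coefficients)
    return 1. / (1. + np.exp(-scores))

# The blanks inside the gradient-ascent loop would then read roughly:
#   predictions = predict_probability(feature_matrix, coefficients)
#   derivative  = np.dot(errors, feature_matrix[:, j])
#   if not is_intercept:
#       derivative -= 2 * l2_penalty * coefficients[j]
#   coefficients[j] += step_size * derivative
```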
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
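One way to select those words (sketched with made-up coefficients; in the assignment `important_words` and `coefficients_0_penalty` come from the data and the trained model, and the top 5 words are taken rather than the top 2 used here for brevity):

```python
import numpy as np

# Illustrative stand-ins for the assignment's variables.
important_words = ['good', 'great', 'bad', 'terrible', 'fine']
coefficients_0_penalty = np.array([0.05, 0.9, 1.4, -1.1, -1.7, 0.2])  # index 0 is the intercept

# Pair each word with its coefficient (skipping the intercept) and sort by value.
word_coefficient_tuples = sorted(zip(important_words, coefficients_0_penalty[1:]),
                                 key=lambda pair: pair[1])
negative_words = [word for word, _ in word_coefficient_tuples[:2]]   # most negative
positive_words = [word for word, _ in word_coefficient_tuples[-2:]]  # most positive
# negative_words -> ['terrible', 'bad'], positive_words -> ['good', 'great']
```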
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = float(num_correct) / len(feature_matrix)  # cast to avoid integer division
return accuracy
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}
$$
Recall from lecture that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation |
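From such a report, the penalty with the highest validation accuracy can also be read off programmatically; a small sketch with made-up accuracy numbers (in the assignment these come from `get_classification_accuracy`):

```python
# Hypothetical accuracies keyed by L2 penalty.
validation_accuracy = {0: 0.7814, 4: 0.7818, 10: 0.7817, 1e2: 0.7798, 1e3: 0.7713, 1e5: 0.6802}
best_l2 = max(validation_accuracy, key=validation_accuracy.get)
# best_l2 -> 4
```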
7,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A regular expression (regex, RE) is a sequence of characters that define a search pattern. Usually this pattern is used by string searching algorithms for "find" or "find and replace" operations on strings. For example, search engines use regular expressions to find matches to your query as do various text editors when you, e.g., enter a search and replace dialogue.
re module provides regular expression matching operations in Python. It lets you check if a particular string matches a given regular expression or if a given regular expression matches a particular string.
Step1: There are two types of characters in regular expressions, ordinary and special characters. Ordinary characters, like 'A', 'z', or '0', simply match themselves, while special characters, like '\' or '(', either stand for classes of ordinary characters, or affect how the regular expressions around them are interpreted. In other words, special characters help you to specify how regular expressions work and what will be returned to you if you find a match.
Let us learn some special characters
Step2: r in r'.*' indicates that we are using Python's raw string notation, which, in short, differs from ordinary Python strings by its interpretation of the backslash character.
To search for a pattern in a string we will use re.search() function
Step3: What if we want to find only 'Magna'?
Step4: What about 'magna'?
Step5: Nothing was returned because no match was found.
Let us change our string to something that contains numbers and assume that we need to find only those numbers.
Step6: \d
Matches any decimal digit; this is equivalent to the class [0-9].
'+'
Causes the resulting RE to match 1 or more repetitions of the preceding RE.
Step7: Why we found only 1743, but not 1743 and 39 or 1743 and 39.95?
Answer
Step8: But how to find both numbers? For that we need to use the pipeline character '|' and re.findall() function since we want to get more than one result in return.
'|'
A|B, where A and B can be arbitrary REs, creates a regular expression that will match either A or B.
re.findall(pattern, string, flags=0)
Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found.
Step9: Moving on to a more science related example. Let us assume that we have a list of chemical reaction equations and rate coefficients and we want to separate equations from rate coefficients.
Step10: When we apply re.search() function to a line in raw_lines, we will get a MatchObject in return. MatchObjects support various methods, .group() is among them.
group([group1, ...])
Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument.
For example,
Step11: So let us indicate that we want to return two subgroups, one for an equation and one for a rate coefficient. If we put them simply one after another in the regular expression, we do not get what we want
Step12: The equation part is separated from the rate coefficient part by the double colon '
Step13: Now we want to separate chemical reactants from products and store them in lists of strings without any arithmetic signs. To do that let us use re.findall() and a regular expression that matches letters and numbers that comprise our chemical species names
Step14: We finally got all pieces of information we wanted about each chemical reaction
Step15: This approach becomes pretty handy if you have thousands of reactions to work with (as I do), and there is still plenty of room for using re module.
References | Python Code:
import re
Explanation: A regular expression (regex, RE) is a sequence of characters that define a search pattern. Usually this pattern is used by string searching algorithms for "find" or "find and replace" operations on strings. For example, search engines use regular expressions to find matches to your query as do various text editors when you, e.g., enter a search and replace dialogue.
re module provides regular expression matching operations in Python. It lets you check if a particular string matches a given regular expression or if a given regular expression matches a particular string.
End of explanation
string = 'Sic Parvis Magna'
pattern = r'.*' # any character as many times as possible
Explanation: There are two types of characters in regular expressions, ordinary and special characters. Ordinary characters, like 'A', 'z', or '0', simply match themselves, while special characters, like '\' or '(', either stand for classes of ordinary characters, or affect how the regular expressions around them are interpreted. In other words, special characters help you to specify how regular expressions work and what will be returned to you if you find a match.
Let us learn some special characters:
'.'
(Dot.) In the default mode, this matches any character except a newline.
'*'
(Asterisk) Causes the resulting RE to match 0 or more repetitions of the preceding RE, as many repetitions as are possible.
To test how these special characters work we need to create two variables, one for a string and one for a regular expression that we will try to match with a specific pattern in a string.
End of explanation
re.search(r'.*', string)
Explanation: r in r'.*' indicates that we are using Python's raw string notation, which, in short, differs from ordinary Python strings by its interpretation of the backslash character.
To search for a pattern in a string we will use re.search() function:
re.search(pattern, string, flags=0)
Scan through string looking for the first location where the regular expression pattern produces a match, and return a corresponding MatchObject instance. Return None if no position in the string matches the pattern.
End of explanation
pattern = r'Magna'
re.search(pattern, string)
Explanation: What if we want to find only 'Magna'?
End of explanation
pattern = r'magna'
re.search(pattern, string)
Explanation: What about 'magna'?
End of explanation
string = 'Station : Boulder, CO \n Station Height : 1743 meters \n Latitude : 39.95'
Explanation: Nothing was returned because no match was found.
Let us change our string to something that contains numbers and assume that we need to find only those numbers.
End of explanation
pattern = r'\d+' # one or more digit
re.search(pattern, string)
Explanation: \d
Matches any decimal digit; this is equivalent to the class [0-9].
'+'
Causes the resulting RE to match 1 or more repetitions of the preceding RE.
End of explanation
re.search(r'\d+\.\d+', string) # float number
Explanation: Why we found only 1743, but not 1743 and 39 or 1743 and 39.95?
Answer: re.search() scans through string looking for the first location where the regular expression pattern produces a match [...].
Let us now try to find 39.95 for latitude.
There is no special character for a float number, but we can combine existing special characters to produce a regular expression that will match only float numbers. In other words, we need to include the dot '.' character into our new regular expression. However, dot has a special meaning in Python's raw string notation (see above). To construct the right regular expression we need to add the backslash character '\' before the dot character in order to avoid invoking its special meaning, i.e. quote or escape it.
End of explanation
re.findall(r'\d+\.\d+|\d+', string) # float or integer number
Explanation: But how to find both numbers? For that we need to use the pipeline character '|' and re.findall() function since we want to get more than one result in return.
'|'
A|B, where A and B can be arbitrary REs, creates a regular expression that will match either A or B.
re.findall(pattern, string, flags=0)
Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found.
End of explanation
raw_data = 'O1D = OH + OH : 2.14e-10*H2O;\nOH + O3 = HO2 : 1.70e-12*EXP(-940/TEMP);'
raw_lines = raw_data.split('\n')
raw_lines
Explanation: Moving on to a more science related example. Let us assume that we have a list of chemical reaction equations and rate coefficients and we want to separate equations from rate coefficients.
End of explanation
m = re.search(r'(.*) (\d)', 'The Witcher 3')
m.group(0) # entire match
m.group(1) # first parenthesized subgroup
m.group(2) # second parenthesized subgroup
m.group(1, 2) # multiple arguments give us a tuple
Explanation: When we apply re.search() function to a line in raw_lines, we will get a MatchObject in return. MatchObjects support various methods, .group() is among them.
group([group1, ...])
Returns one or more subgroups of the match. If there is a single argument, the result is a single string; if there are multiple arguments, the result is a tuple with one item per argument.
For example,
End of explanation
for l in raw_lines:
line = re.search(r'(.*)(.*)', l).group(1, 2)
print(line)
Explanation: So let us indicate that we want to return two subgroups, one for an equation and one for a rate coefficient. If we put them simply one after another in the regular expression, we do not get what we want:
End of explanation
for l in raw_lines:
line = re.search(r'(.*)\s:\s(.*);', l).group(1, 2)
print(line)
Explanation: The equation part is separated from the rate coefficient part by a single colon ':' with one whitespace on each side, therefore we need to put those characters between the subgroups, as well as the semicolon ';' at the end if we do not want to see it in the resulting string.
\s
Matches any whitespace character, this is equivalent to the set [ \t\n\r\f\v].
End of explanation
alphanum_pattern = r'\w+' # any number or character as many times as possible
for l in raw_lines:
line = re.search(r'(.*)\s:\s(.*);', l).group(1,2)
subline_reac, subline_prod = line[0].split('=') # split equation into reactants and products parts using '=' as a separator
print('Reactants: '+subline_reac, 'Products: '+subline_prod)
reac = re.findall(alphanum_pattern, subline_reac)
prod = re.findall(alphanum_pattern, subline_prod)
print(reac, prod)
Explanation: Now we want to separate chemical reactants from products and store them in lists of strings without any arithmetic signs. To do that let us use re.findall() and a regular expression that matches letters and numbers that comprise our chemical species names:
\w
Matches any alphanumeric character and the underscore; this is equivalent to the set [a-zA-Z0-9_].
'+'
Causes the resulting RE to match 1 or more repetitions of the preceding RE.
End of explanation
eqs = []
for l in raw_lines:
line = re.search(r'(.*)\s:\s(.*);', l).group(1,2)
subline_reac, subline_prod = line[0].split('=')
reac = re.findall(alphanum_pattern, subline_reac)
prod = re.findall(alphanum_pattern, subline_prod)
eqs.append(dict(reac=reac, prod=prod, coef=line[1]))
print(eqs)
Explanation: We finally got all pieces of information we wanted about each chemical reaction: what reactants and products are and what the corresponding rate coefficient is. The best way to store this information is to create a dictionary for each chemical reaction and append those dictionaries into a list.
End of explanation
HTML(html)
Explanation: This approach becomes pretty handy if you have thousands of reactions to work with (as I do), and there is still plenty of room for using re module.
References:
https://en.wikipedia.org/wiki/Regular_expression
https://docs.python.org/3.6/library/re.html
https://docs.python.org/2.0/ref/strings.html
http://stackoverflow.com/questions/12871066/what-exactly-is-a-raw-string-regex-and-how-can-you-use-it
Interactive website to play with strings and regular expressions:
http://pythex.org/
End of explanation |
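When scaling this up to thousands of reactions, pre-compiling the patterns with `re.compile` avoids re-parsing them on every line; a sketch wrapping the steps above into one function:

```python
import re

line_re = re.compile(r'(.*)\s:\s(.*);')  # compiled once, reused for every line
species_re = re.compile(r'\w+')

def parse_reaction(raw_line):
    equation, coef = line_re.search(raw_line).group(1, 2)
    reac_part, prod_part = equation.split('=')
    return dict(reac=species_re.findall(reac_part),
                prod=species_re.findall(prod_part),
                coef=coef)

parse_reaction('O1D = OH + OH : 2.14e-10*H2O;')
# -> {'reac': ['O1D'], 'prod': ['OH', 'OH'], 'coef': '2.14e-10*H2O'}
```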
7,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conda and
binstar are changing the packaging world of Python.
Conda made it easy to install re-locatable python binaries that where hard
to build, while binstar provides a "Linux repository-like system"
(or if you are younger than me an AppStore-like system) to host custom binaries.
Taking advantage of that IOOS created a binstar
channel with Met-ocean themed packages for Windows,
Linux and MacOS. Note that, if you are using Red Hat Enterprise Linux or Centos you
should use the rhel6 channel to avoid the
GLIBC problem.
All the conda-recipes are open and kept in a GitHub
repository. (And accepting PRs ;-)
In this post I will not show how to install and configure conda with this channel.
It has been done already here
and
here. In this post I will scrape
the binstar channel stats to evaluate how the channel is doing.
First some handy functions to parse the dates, the package names, and
to save all the data into a pandas DataFrame.
Step1: All the data we need is in the repodata.json file. There isn't an API
to access that via the command line (yet), that is why we need to scrape
it.
Step2: Now let's split the various platforms and compute total number of downloads
for each package.
Step3: And here is the result,
Step4: Right now it is hard to make sense of the data. That is because some
downloads might be a direct download or an indirect download via a package
dependency. Also, our own build system downloads the dependencies when
building new or when updating the packages in the channel. One conclusion
that we may take from this is that the Windows packages are as popular as the
Linux packages! | Python Code:
import re
import requests
import numpy as np
from datetime import date
from pandas import DataFrame
from bs4 import BeautifulSoup
from dateutil.relativedelta import relativedelta
def todatetime(ul_str):
upload = re.compile(r'((?P<year>\d+) years?)?( and )?((?P<month>\d+) months?)?( and )?((?P<day>\d+) days?)?( and )?((?P<hour>\d+) hours?)?( and )?((?P<min>\d+) minutes?)?(.*)ago')
yr = mo = dy = hr = mn = 0
mobj = upload.match(ul_str)
if mobj:
if mobj.group('year'):
yr = int(mobj.group('year'))
if mobj.group('month'):
mo = int(mobj.group('month'))
if mobj.group('day'):
dy = int(mobj.group('day'))
if mobj.group('hour'):
hr = int(mobj.group('hour'))
if mobj.group('min'):
mn = int(mobj.group('min'))
else:
raise ValueError("Unexpected period {!r}".format(ul_str))
delta = relativedelta(years=yr, months=mo, days=dy, hours=hr, minutes=mn)
return date.today() - delta
def parse_name(cell):
name = cell.text.strip().split('/')
if len(name) != 2:
name = cell.text.strip().split('\\')
arch = '{}'.format(name[0].split()[1])
name = '{}'.format(name[1].split('.tar.bz2')[0])
return arch, name
def get_page(package, page):
url = "https://anaconda.org/psi4/{}/files?page={}".format
r = requests.get(url(package, page))
r.raise_for_status()
soup = BeautifulSoup(r.text, 'html.parser')  # explicit parser avoids a bs4 warning
table = soup.find("table", class_="full-width")
downloads, uploaded, platforms, names = [], [], [], []
for row in table.findAll('tr'):
col = row.findAll('td')
#print('COL: ', col)
if len(col) == 8:
downloads.append(int(col[6].text.strip()))
uploaded.append(todatetime(col[4].text.strip()))
platform, name = parse_name(col[3])
platforms.append(platform)
names.append(name)
#print downloads[-1], uploaded[-1], platforms[-1], names[-1]
return downloads, uploaded, platforms, names
def get_df(package):
downloads, uploaded, platforms, names = [], [], [], []
for page in range(1, 15):
dn, up, pf, nm = get_page(package, page)
print(len(nm), end=' ')
downloads.extend(dn)
uploaded.extend(up)
platforms.extend(pf)
names.extend(nm)
if len(nm) != 50:
break
else:
print("Insufficient pages or packages in multiple of 50 which may lead to inflated download counts.")
df = DataFrame(data=np.c_[platforms, names, uploaded, downloads],
columns=['platform', 'name', 'uploaded', 'downloads'])
df['uploaded'] = pd.to_datetime(df['uploaded'])
df.set_index('uploaded', inplace=True, drop=True)
df['downloads'] = df['downloads'].astype(int)
return df
Explanation: Conda and
binstar are changing the packaging world of Python.
Conda made it easy to install re-locatable Python binaries that were hard
to build, while binstar provides a "Linux repository-like system"
(or if you are younger than me an AppStore-like system) to host custom binaries.
Taking advantage of that IOOS created a binstar
channel with Met-ocean themed packages for Windows,
Linux and MacOS. Note that, if you are using Red Hat Enterprise Linux or Centos you
should use the rhel6 channel to avoid the
GLIBC problem.
All the conda-recipes are open and kept in a GitHub
repository. (And accepting PRs ;-)
In this post I will not show how to install and configure conda with this channel.
It has been done already here
and
here. In this post I will scrape
the binstar channel stats to evaluate how the channel is doing.
First some handy functions to parse the dates, the package names, and
to save all the data into a pandas DataFrame.
End of explanation
from requests import HTTPError
from pandas import Panel, read_json
import pandas as pd
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 5000)
json = "https://conda.anaconda.org/psi4/linux-64/repodata.json"
df = read_json(json)
packages = sorted(set(['-'.join(pac.split('-')[:-2]) for pac in df.index]))
packages = [pkg for pkg in packages if pkg]
packages = [u'psi4', u'chemps2', u'dftd3', u'pcmsolver', u'v2rdm_casscf', u'libint', u'erd', u'simint', u'dkh', u'gdma', u'gcp', u'libefp', 'libxc']
dfs = dict()
for pac in packages:
try:
print('\n', pac, ': ', end='')
dfs.update({pac: get_df(pac)})
except HTTPError:
continue
#print(dfs)
Explanation: All the data we need is in the repodata.json file. There isn't an API
to access that via the command line (yet), that is why we need to scrape
it.
End of explanation
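The package names are recovered from the filename keys of that JSON by dropping the trailing `-version-build.tar.bz2` suffix; a toy sketch of the same split used in the code above (filenames here are illustrative):

```python
# Toy repodata filenames; the real ones come from the channel's repodata.json.
repodata_packages = [
    "psi4-1.1-py35_0.tar.bz2",
    "psi4-1.2-py36_1.tar.bz2",
    "dftd3-3.2.1-0.tar.bz2",
]

def base_name(filename):
    # Drop the last two dash-separated fields (version and build string).
    return '-'.join(filename.split('-')[:-2])

packages = sorted(set(base_name(f) for f in repodata_packages))
# packages -> ['dftd3', 'psi4']
```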
def get_plat_total(df):
package = dict()
for plat in ['linux-64', 'osx-64']: #, 'win-32', 'win-64']:
# all time
#sset = df.loc[:].query('platform == "{}"'.format(plat))
# before 1.0 # 5 Jul 2017 - no longer any good b/c I thinned out the pkgs
#sset = df.loc['2016-7-4':].query('platform == "{}"'.format(plat))
# after 1.0
#sset = df.loc[:'2016-7-4'].query('platform == "{}"'.format(plat))
# after 1.1
sset = df.loc[:'2017-5-16'].query('platform == "{}"'.format(plat))
print(sset) # nicely formatted output
total = sset.sum()
package.update({plat: total['downloads']})
return package
packages = dict()
for pac in dfs.keys():
df = dfs[pac]
packages.update({pac: get_plat_total(df)})
for pac in dfs.keys():
print('{:<15}: {:<10} {:<6} {:<10} {:<6} {:<10} {:<6}'.format(pac,
'linux-64', packages[pac]['linux-64'],
'osx-64', packages[pac]['osx-64'],
'total', packages[pac]['linux-64'] + packages[pac]['osx-64']))
df = DataFrame.from_dict(packages).T
df['sum'] = df.T.sum()
df.sort('sum', ascending=False, inplace=True)
df.drop('sum', axis=1, inplace=True)
Explanation: Now let's split the various platforms and compute total number of downloads
for each package.
End of explanation
%matplotlib inline
import seaborn
import matplotlib.pyplot as plt
stride = 19 # 19 x 5 = 95
# stride = len(packages)
kw = dict(kind='bar', stacked=True)
fig, ax = plt.subplots(figsize=(11, 3))
ax = df.ix[:stride].plot(ax=ax, **kw)
# fig, ax = plt.subplots(figsize=(11, 3))
# ax = df.ix[stride:stride*2].plot(ax=ax, **kw)
# fig, ax = plt.subplots(figsize=(11, 3))
# ax = df.ix[stride*2:stride*3].plot(ax=ax, **kw)
# fig, ax = plt.subplots(figsize=(11, 3))
# ax = df.ix[stride*3:stride*4].plot(ax=ax, **kw)
# fig, ax = plt.subplots(figsize=(11, 3))
# ax = df.ix[stride*4:stride*5].plot(ax=ax, **kw)
# df['win'] = df['win-32'] + df['win-64']
# total = df[['linux-64', 'osx-64', 'win']].sum()
total = df[['linux-64', 'osx-64']].sum()
fig, ax = plt.subplots(figsize=(7, 3))
ax = total.plot(ax=ax, kind='bar')
Explanation: And here is the result,
End of explanation
# import pandas as pd
# pd.set_option('display.max_rows', 1500)
# packagesY = dict()
# #dates = pd.date_range('1/1/2016', periods=12)
# #print 'keys', dfs.keys(), dates
# for pac in dfs.keys():
# print '<<< {} >>>'.format(pac)
# df = dfs[pac]
# df.sort_index(inplace=True)
# #print 'df\n', df
# #print 'cols', df.axes
# #df.plot(title=pac)
# df['cumulative_downloads']=df['downloads'].cumsum()
# print df
# df.plot(title=pac, figsize=(15, 8))
Explanation: Right now it is hard to make sense of the data. That is because some
downloads might be a direct download or an indirect download via a package
dependency. Also, our own build system downloads the dependencies when
building new or when updating the packages in the channel. One conclusion
that we may take from this is that the Windows packages are as popular the
Linux packages!
End of explanation |