| repo_name | path | license | content |
|---|---|---|---|
esumitra/minecraft-programming | notebooks/Adventure2.ipynb | mit | from mcpi.minecraft import *
import time
mc = Minecraft.create()
# Type Task 2 program here
"""
Explanation: Welcome Home
Usually when Steve comes home, there is no one there. Steve can get lonely at times, especially after a long hard battle with creepers and zombies.
In this programming adventure we'll make Minecraft display a warm and friendly welcome message when Steve comes home. We'll test your program by exploring the world and then come back home to a friendly welcome. Along the way we will learn about coordinate systems which help us locate objects in a game. We will also learn about variables and conditions.
Coordinate Systems
From your math classes, you will remember coordinate systems to locate points in a plane. The points (2,3), (-3,1) and (-1,-2.5) are shown in the grid below.
The Minecraft coordinate grid is shown below:
In Minecraft, when you move East, your X-coordinate increases and when you move South, your Z-coordinate increases. Let's confirm this through a few Minecraft exercises.
Task 1: Moving in Minecraft coordinate systems
In Minecraft look at Steve's coordinates. Now move Steve to any other position. See how his coordinates change as you move?
[ ] Change your direction so that only the X-coordinate changes when you move forward or back.
[ ] Change your direction so only the Z-coordinate moves when you move forward or back.
Task 2: Write a program to show Steve's position on the screen
Remember functions from the first Adventure? A function lets us do things in a computer program or in the Minecraft game. The function getTilePos() gets the player's position as (x,y,z) coordinates in Minecraft. Let's use this function to print Steve's position as he moves around. We need to store Steve's position when we call getTilePos() so that we can print the position later. We can use a program variable to store it. A variable has a name and can be used to store values. We'll call our variable pos for position, and it will contain Steve's position. When we want to print the position, we print the values of its x, y and z coordinates using another function, print(), which prints any strings you give it.
Start up Minecraft and type the following in a new cell.
python
from mcpi.minecraft import *
mc = Minecraft.create()
pos = mc.player.getTilePos()
print(pos.x)
print(pos.y)
print(pos.z)
When you run your program by pressing Ctrl+Enter in the program cell, you should now see Steve's position printed.
Great Job!
End of explanation
"""
## Type Task 3 program here
"""
Explanation: Task 3: Prettying up messages
The messages we printed are somewhat basic and can be confusing, since we don't know which number is x, y or z. Why not print a message that is more useful? Often messages are built by attaching strings and data. Try typing
python
"my name is " + "Steve"
in a code cell. What message gets printed? Now try
python
"my age is " + 10
Hmmm... That did not work :( Strings can only be attached, or concatenated, with other strings. In order to attach a number to a string, we need to convert the number into a printable string. We will use another function, str(), which returns a printable string of its argument. Since the x, y, z coordinates are numbers, we need to convert them to strings in order to print them with other strings. To see how the str() function works, type the following in a code cell and run it.
python
"my age is " + str(10)
What gets printed by the line below?
python
"x = " + str(10) + ",y = " + str(20) + ",z = " + str(30)
You now have all the information you need to print a pretty message.
[ ] Modify your program to print the pretty message shown below, correctly showing Steve's position
Steve's position is: x = 10,y = 20,z = 30
[ ] Modify your program to use a variable named message to store the pretty message and then print the message
Hint:
python
message = ...
print(message)
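As a worked example of the hint above, here is one possible shape for the Task 3 program — the numbers 10, 20, 30 are stand-ins for the pos.x, pos.y and pos.z values you get from getTilePos():

```python
# A sketch of the Task 3 program: build the message first, then print it.
# 10, 20, 30 are placeholder coordinates standing in for pos.x, pos.y, pos.z.
x, y, z = 10, 20, 30
message = "Steve's position is: x = " + str(x) + ",y = " + str(y) + ",z = " + str(z)
print(message)
```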
End of explanation
"""
while True:
    time.sleep(1)
## Type Task 4 program here
"""
Explanation: Task 4: Display Steve's coordinates in Minecraft
For this task, instead of printing Steve's coordinates, let's display them in Minecraft using the postToChat() function from Adventure 1.
You should see a message like the one below once you run your program.
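A minimal sketch of Task 4, assuming the same postToChat() function from Adventure 1; the Minecraft calls are commented out so the message-building part can run on its own (the position_message helper is illustrative, not part of the mcpi API):

```python
def position_message(x, y, z):
    # Build the same pretty message as in Task 3 from coordinate values
    return "x = " + str(x) + ",y = " + str(y) + ",z = " + str(z)

# Inside Minecraft you would loop, read Steve's position, and post it to chat:
# while True:
#     time.sleep(1)
#     pos = mc.player.getTilePos()
#     mc.postToChat(position_message(pos.x, pos.y, pos.z))
print(position_message(10, 20, 30))
```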
End of explanation
"""
## Change these values for your home
home_x = 0
home_z = 0
"""
Explanation: Home
In Minecraft, move to a location that you want to call home and place a Gold block there. Move Steve on top of the Gold block and write down his coordinates. Let's save these coordinates in the variables home_x and home_z. We will use these variables to detect when Steve returns home.
End of explanation
"""
## Type Task 5 program here
"""
Explanation: Is Steve home?
Now for the magic of figuring out if Steve is home. As Steve moves in Minecraft, his x and z coordinates change. We can detect that Steve is home when his coordinates are equal to the coordinates of his home! To put it in math terms, Steve is home when
$$
(pos_{x},pos_{z}) = (home_{x},home_{z})
$$
In the program we can write the math expression as
python
pos.x == home_x and pos.z == home_z
We can use an if block to check whether Steve's coordinates equal his home coordinates. An if block is written as shown below
python
if (condition):
    do something 1
    do something 2
Let's put this all together in the program below
python
while True:
    time.sleep(1)
    pos = mc.player.getTilePos()
    if (pos.x == home_x and pos.z == home_z):
        mc.postToChat("Welcome home Steve.")
    # the rest of your program from task 4
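The home-check condition can be tried on its own, outside Minecraft. This small sketch wraps it in a helper function (is_home is illustrative, not part of the mcpi API):

```python
def is_home(pos_x, pos_z, home_x, home_z):
    # Steve is home only when BOTH coordinates match
    return pos_x == home_x and pos_z == home_z

print(is_home(0, 0, 0, 0))  # both coordinates match
print(is_home(5, 0, 0, 0))  # x differs, so not home
```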
What happens when you run around in Minecraft and return to the gold block that is your home? That warm message makes Steve happy. He can now be well rested for battling creepers the next day.
End of explanation
"""
|
sjchoi86/Tensorflow-101 | notebooks/rnn_mnist_simple.ipynb | mit | import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print ("Packages imported")
mnist = input_data.read_data_sets("data/", one_hot=True)
trainimgs, trainlabels, testimgs, testlabels \
= mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
ntrain, ntest, dim, nclasses \
= trainimgs.shape[0], testimgs.shape[0], trainimgs.shape[1], trainlabels.shape[1]
print ("MNIST loaded")
"""
Explanation: Sequence classification with LSTM
End of explanation
"""
diminput = 28
dimhidden = 128
dimoutput = nclasses
nsteps = 28
weights = {
'hidden': tf.Variable(tf.random_normal([diminput, dimhidden])),
'out': tf.Variable(tf.random_normal([dimhidden, dimoutput]))
}
biases = {
'hidden': tf.Variable(tf.random_normal([dimhidden])),
'out': tf.Variable(tf.random_normal([dimoutput]))
}
def _RNN(_X, _istate, _W, _b, _nsteps, _name):
    # 1. Permute input from [batchsize, nsteps, diminput]
    #    => [nsteps, batchsize, diminput]
    _X = tf.transpose(_X, [1, 0, 2])
    # 2. Reshape input to [nsteps*batchsize, diminput]
    _X = tf.reshape(_X, [-1, diminput])
    # 3. Input layer => Hidden layer
    _H = tf.matmul(_X, _W['hidden']) + _b['hidden']
    # 4. Split data into 'nsteps' chunks. The i-th chunk holds the i-th step's batch data
    _Hsplit = tf.split(0, _nsteps, _H)
    # 5. Get LSTM's final output (_LSTM_O) and state (_LSTM_S)
    #    Both _LSTM_O and _LSTM_S consist of 'batchsize' elements
    #    Only _LSTM_O will be used to predict the output.
    with tf.variable_scope(_name):
        lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(dimhidden, forget_bias=1.0)
        _LSTM_O, _LSTM_S = tf.nn.rnn(lstm_cell, _Hsplit, initial_state=_istate)
    # 6. Output
    _O = tf.matmul(_LSTM_O[-1], _W['out']) + _b['out']
    # Return!
    return {
        'X': _X, 'H': _H, 'Hsplit': _Hsplit,
        'LSTM_O': _LSTM_O, 'LSTM_S': _LSTM_S, 'O': _O
    }
print ("Network ready")
"""
Explanation: We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$.
Our simple RNN consists of:
One input layer which converts a $28$-dimensional input to a $128$-dimensional hidden layer,
One intermediate recurrent neural network (LSTM),
One output layer which converts the $128$-dimensional output of the LSTM to a $10$-dimensional output indicating a class label.
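The three reshaping steps the _RNN function above performs can be previewed with plain NumPy (the batch size of 2 is an arbitrary choice for illustration):

```python
import numpy as np

batchsize, nsteps, diminput = 2, 28, 28
X = np.zeros((batchsize, nsteps, diminput))

# 1. Permute to [nsteps, batchsize, diminput]
X = np.transpose(X, (1, 0, 2))
# 2. Reshape to [nsteps*batchsize, diminput]
X = X.reshape(-1, diminput)
# 3. Split into nsteps chunks, each of shape [batchsize, diminput]
chunks = np.split(X, nsteps)
print(X.shape, len(chunks), chunks[0].shape)
```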
<img src="images/etc/rnn_input3.jpg" width="700" height="400" >
Construct a Recurrent Neural Network
End of explanation
"""
learning_rate = 0.001
x = tf.placeholder("float", [None, nsteps, diminput])
istate = tf.placeholder("float", [None, 2*dimhidden])
# state & cell => 2x n_hidden
y = tf.placeholder("float", [None, dimoutput])
myrnn = _RNN(x, istate, weights, biases, nsteps, 'basic')
pred = myrnn['O']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optm = tf.train.AdamOptimizer(learning_rate).minimize(cost) # Adam Optimizer
accr = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), tf.float32))
init = tf.initialize_all_variables()
print ("Network Ready!")
"""
Explanation: Our network looks like this:
<img src="images/etc/rnn_mnist_look.jpg" width="700" height="400" >
Define functions
End of explanation
"""
training_epochs = 5
batch_size = 128
display_step = 1
sess = tf.Session()
sess.run(init)
summary_writer = tf.train.SummaryWriter('/tmp/tensorflow_logs', graph=sess.graph)
print ("Start optimization")
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(mnist.train.num_examples/batch_size)
    # Loop over all batches
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape((batch_size, nsteps, diminput))
        # Fit training using batch data
        feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))}
        sess.run(optm, feed_dict=feeds)
        # Compute average loss
        avg_cost += sess.run(cost, feed_dict=feeds)/total_batch
    # Display logs per epoch step
    if epoch % display_step == 0:
        print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
        feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))}
        train_acc = sess.run(accr, feed_dict=feeds)
        print (" Training accuracy: %.3f" % (train_acc))
        testimgs = testimgs.reshape((ntest, nsteps, diminput))
        feeds = {x: testimgs, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))}
        test_acc = sess.run(accr, feed_dict=feeds)
        print (" Test accuracy: %.3f" % (test_acc))
print ("Optimization Finished.")
"""
Explanation: Run!
End of explanation
"""
# How many sequences will we use?
nsteps2 = 25
# Test with truncated inputs
testimgs = testimgs.reshape((ntest, nsteps, diminput))
testimgs_truncated = np.zeros(testimgs.shape)
testimgs_truncated[:, 28-nsteps2:] = testimgs[:, :nsteps2, :]
feeds = {x: testimgs_truncated, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))}
test_acc = sess.run(accr, feed_dict=feeds)
print (" If we use %d seqs, test accuracy becomes %.3f" % (nsteps2, test_acc))
"""
Explanation: What we have done so far is to feed 28 sequences of vectors $ \mathbf{x} \in \mathcal{R}^{28}$.
What will happen if we feed only the first 25 sequences of $\mathbf{x}$?
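The truncation in the cell above keeps the first nsteps2 rows of each image and shifts them into the last nsteps2 slots, leaving the leading rows zero. A reduced-size NumPy check of that indexing (only 3 images, to keep it quick):

```python
import numpy as np

nsteps, nsteps2, dim = 28, 25, 28
imgs = np.random.rand(3, nsteps, dim)

truncated = np.zeros(imgs.shape)
# First nsteps2 rows of each image, shifted into the last nsteps2 slots
truncated[:, nsteps - nsteps2:] = imgs[:, :nsteps2, :]

print(truncated[:, :nsteps - nsteps2].sum())  # leading rows stay all zero
```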
End of explanation
"""
batch_size = 5
xtest, _ = mnist.test.next_batch(batch_size)
print ("Shape of 'xtest' is %s" % (xtest.shape,))
"""
Explanation: What's going on inside the RNN?
Inputs to the RNN
End of explanation
"""
# Reshape (this will go into the network)
xtest1 = xtest.reshape((batch_size, nsteps, diminput))
print ("Shape of 'xtest1' is %s" % (xtest1.shape,))
"""
Explanation: Reshaped inputs
End of explanation
"""
feeds = {x: xtest1, istate: np.zeros((batch_size, 2*dimhidden))}
"""
Explanation: Feeds: inputs and initial states
End of explanation
"""
rnnout_X = sess.run(myrnn['X'], feed_dict=feeds)
print ("Shape of 'rnnout_X' is %s" % (rnnout_X.shape,))
"""
Explanation: Each individual input to the LSTM
End of explanation
"""
rnnout_H = sess.run(myrnn['H'], feed_dict=feeds)
print ("Shape of 'rnnout_H' is %s" % (rnnout_H.shape,))
"""
Explanation: Each individual intermediate state
End of explanation
"""
rnnout_Hsplit = sess.run(myrnn['Hsplit'], feed_dict=feeds)
print ("Type of 'rnnout_Hsplit' is %s" % (type(rnnout_Hsplit)))
print ("Length of 'rnnout_Hsplit' is %s and the shape of each item is %s"
% (len(rnnout_Hsplit), rnnout_Hsplit[0].shape))
"""
Explanation: Actual input to the LSTM (List)
End of explanation
"""
rnnout_LSTM_O = sess.run(myrnn['LSTM_O'], feed_dict=feeds)
print ("Type of 'rnnout_LSTM_O' is %s" % (type(rnnout_LSTM_O)))
print ("Length of 'rnnout_LSTM_O' is %s and the shape of each item is %s"
% (len(rnnout_LSTM_O), rnnout_LSTM_O[0].shape))
"""
Explanation: Output from the LSTM (List)
End of explanation
"""
rnnout_O = sess.run(myrnn['O'], feed_dict=feeds)
print ("Shape of 'rnnout_O' is %s" % (rnnout_O.shape,))
"""
Explanation: Final prediction
End of explanation
"""
|
rcurrie/tumornormal | pathways.ipynb | apache-2.0 | import os
import json
import numpy as np
import pandas as pd
import tensorflow as tf
import keras
import matplotlib.pyplot as plt
# fix random seed for reproducibility
np.random.seed(42)
# See https://github.com/h5py/h5py/issues/712
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
"""
Explanation: Pathway Classification
Use gene sets from MSigDB both to prune the number of genes/features and as a source of pathway information to incorporate into the layer design.
End of explanation
"""
X = pd.read_hdf("data/tcga_target_gtex.h5", "expression")
Y = pd.read_hdf("data/tcga_target_gtex.h5", "labels")
"""
Explanation: Load TCGA+TARGET+GTEX
End of explanation
"""
# Load gene sets from downloaded MSigDB gmt file
# KEGG only for now (as it is experimentally derived vs. computational)
with open("data/c2.cp.kegg.v6.1.symbols.gmt") as f:
    gene_sets = {line.strip().split("\t")[0]: line.strip().split("\t")[2:]
                 for line in f.readlines()}
print("Loaded {} gene sets".format(len(gene_sets)))
# Drop genes not in X - sort so order is the same as X_pruned.columns
gene_sets = {name: sorted([gene for gene in genes if gene in X.columns.values])
for name, genes in gene_sets.items()}
# Find the union of all genes in all gene sets in order to filter our input columns
all_gene_set_genes = sorted(list(set().union(*[gene_set for gene_set in gene_sets.values()])))
print("Subsetting to {} genes".format(len(all_gene_set_genes)))
# Prune X to only include genes in the gene sets
X_pruned = X.drop(labels=(set(X.columns) - set(all_gene_set_genes)), axis=1, errors="ignore")
assert X_pruned["TP53"]["TCGA-ZP-A9D4-01"] == X["TP53"]["TCGA-ZP-A9D4-01"]
print("X_pruned shape", X_pruned.shape)
# Make sure the genes are the same and in the same order
assert len(all_gene_set_genes) == len(X_pruned.columns.values)
assert list(X_pruned.columns.values) == all_gene_set_genes
"""
Explanation: Ingest Pathways
Load gene sets downloaded from MSigDB and filter the data to only include genes present in the union of all the pathways.
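The .gmt format parsed in the cell above is tab-separated: the gene-set name, then a description/URL, then the member genes. A hedged sketch of that parsing on a made-up line:

```python
# A made-up .gmt line; real files have one gene set per line
line = "KEGG_EXAMPLE_SET\thttp://example.org/geneset\tTP53\tBRCA1\tEGFR\n"

fields = line.strip().split("\t")
name, genes = fields[0], fields[2:]  # field [1] is the description, skipped
print(name, genes)
```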
End of explanation
"""
# Convert primary_site into numerical values for one-hot multi-class training
from sklearn.preprocessing import LabelEncoder
tumor_normal_encoder = LabelEncoder()
Y["tumor_normal_value"] = pd.Series(
tumor_normal_encoder.fit_transform(Y["tumor_normal"]), index=Y.index)
primary_site_encoder = LabelEncoder()
Y["primary_site_value"] = pd.Series(
primary_site_encoder.fit_transform(Y["primary_site"]), index=Y.index)
disease_encoder = LabelEncoder()
Y["disease_value"] = pd.Series(
disease_encoder.fit_transform(Y["disease"]), index=Y.index)
Y.describe(include="all", percentiles=[])
# Create a multi-class one hot output all three classifications
from keras.utils import np_utils
Y_onehot = np.append(
Y["tumor_normal_value"].values.reshape(Y.shape[0],-1),
np_utils.to_categorical(Y["primary_site_value"]), axis=1)
Y_onehot = np.append(Y_onehot,
np_utils.to_categorical(Y["disease_value"]), axis=1)
print(Y_onehot.shape)
"""
Explanation: Wrangle Labels
Convert tumor/normal, primary site and disease into one-hot outputs and combine them into a single multi-class, multi-label output vector to train against.
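The combined target is one binary column plus two one-hot blocks laid side by side. With NumPy alone (np.eye standing in for np_utils.to_categorical, and tiny made-up label arrays) the construction looks like:

```python
import numpy as np

tumor_normal = np.array([0, 1, 1])  # binary tumor/normal values
primary_site = np.array([2, 0, 1])  # 3 possible primary sites
disease = np.array([1, 1, 0])       # 2 possible diseases

Y_onehot = np.concatenate([
    tumor_normal.reshape(-1, 1),    # binary column
    np.eye(3)[primary_site],        # one-hot primary site
    np.eye(2)[disease],             # one-hot disease
], axis=1)
print(Y_onehot.shape)  # (3, 1 + 3 + 2)
```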
End of explanation
"""
# Split into stratified training and test sets based primary site
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(X_pruned.values, Y["primary_site_value"]):
    X_train = X_pruned.values[train_index]
    X_test = X_pruned.values[test_index]
    Y_train = Y.iloc[train_index]
    Y_test = Y.iloc[test_index]
    y_train = Y_onehot[train_index]
    y_test = Y_onehot[test_index]
    primary_site_train = Y["primary_site_value"].values[train_index]
    primary_site_test = Y["primary_site_value"].values[test_index]
    disease_train = Y["disease_value"].values[train_index]
    disease_test = Y["disease_value"].values[test_index]
print(X_train.shape, X_test.shape)
# Lets see how big each class is based on primary site
plt.hist(primary_site_train, alpha=0.5, label='Train')
plt.hist(primary_site_test, alpha=0.5, label='Test')
plt.legend(loc='upper right')
plt.title("Primary Site distribution between train and test")
plt.show()
# Lets see how big each class is based on primary site
plt.hist(disease_train, alpha=0.5, label='Train')
plt.hist(disease_test, alpha=0.5, label='Test')
plt.legend(loc='upper right')
plt.title("Disease distribution between train and test")
plt.show()
"""
Explanation: Stratify Split into Train/Test
Split the data into training and test sets, stratified by primary site. Check that we get a good distribution of both primary site and disease between each of these sets.
End of explanation
"""
# group genes by pathway
gene_groups = {name: np.searchsorted(X_pruned.columns.values, genes) for name, genes in gene_sets.items()}
print("Pathway KEGG_ABC_TRANSPORTERS Gene Indexes:", gene_groups["KEGG_ABC_TRANSPORTERS"])
# DEBUGGING: Prune to just 4 to speed up an debug everything from here on down
# gene_groups = {k: gene_groups[k] for k in list(gene_groups)[:4]}
%%time
"""
Build pathways using custom Keras Layer
"""
from keras.models import Model, Sequential
from keras.layers import Input, Lambda, Dense, BatchNormalization, Dropout
from keras.callbacks import EarlyStopping
from keras import regularizers
from keras.layers.merge import concatenate
from keras import backend as K
from keras.engine.topology import Layer
class GroupBy(Layer):
    """
    Subset input features into multiple groups that may overlap
    groups: dict mapping group name to a list of input feature indices
    """
    def __init__(self, groups, **kwargs):
        super(GroupBy, self).__init__(**kwargs)
        self.groups = groups

    def build(self, input_shape):
        super(GroupBy, self).build(input_shape)

    def call(self, x):
        return [K.concatenate([x[:, i:i+1] for i in indexes]) for _, indexes in self.groups.items()]

    def compute_output_shape(self, input_shape):
        return [(None, len(indexes)) for _, indexes in self.groups.items()]
main_input = Input(shape=(X_train.shape[1],), name="main_input")
x = main_input
x = BatchNormalization()(x)
"""
Build per pathway sub-networks
"""
x = GroupBy(gene_groups)(x)
# Add a dense layer per pathway with width proportional to the number of genes in the pathway
x = [Dense(max(2, len(i)//4), activation='relu')(p)
for p, i in zip(x, gene_groups.values())]
# Add a named binary output for each pathway
x = [Dense(1, activation='relu', name=name)(p)
for p, name in zip(x, gene_groups.keys())]
# Concatenate binary outputs of each of sub-networks back into single vector
x = keras.layers.concatenate(x, name="pathways")
"""
Add traditional stacked network for final multi-label classification
"""
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(128, activity_regularizer=regularizers.l1(
1e-5), activation='relu')(x)
x = Dropout(0.5)(x)
main_output = Dense(y_train.shape[1], activation='sigmoid')(x)
model = Model(inputs=[main_input], outputs=[main_output])
# print(model.summary()) # Too detailed when building full set of pathways
print("Trainable params::", np.sum(
[np.product(v.shape) for v in model.trainable_weights]).value)
model.compile(loss='binary_crossentropy',
optimizer='adam', metrics=['accuracy'])
# %%time
# """
# Build pathway's using Lambda
# """
# from keras.models import Model, Sequential
# from keras.layers import Input, Lambda, Dense, BatchNormalization, Dropout
# from keras.callbacks import EarlyStopping
# from keras import regularizers
# from keras.layers.merge import concatenate
# from keras import backend as K
# import itertools
# main_input = Input(shape=(X_train.shape[1],), name="main_input")
# x = main_input
# x = BatchNormalization()(x)
# """
# Build per pathway sub-networks
# """
# # Set to a small number (~4) for debugging, set to None to build all pathways
# # max_num_pathways = 4
# max_num_pathways = None
# # Extract features/gene's for each pathway from the aggregate x input vector
# x = [Lambda(lambda e: K.concatenate([e[:, i:i+1] for i in indexes]))(x)
# for name, indexes in itertools.islice(gene_set_indexes.items(), max_num_pathways)]
# # Add a dense layer per pathway with width proportional to the number of genes in the pathway
# x = [Dense(max(2, len(i)//4), activation='relu')(p)
# for p, i in zip(x, gene_set_indexes.values())]
# # Add a named binary output for each pathway
# x = [Dense(1, activation='relu', name=name)(p)
# for p, name in zip(x, gene_set_indexes.keys())]
# # Concatenate binary outputs of each of sub-networks back into single vector
# x = keras.layers.concatenate(x, name="pathways")
# """
# Add traditional stacked network for final multi-label classification
# """
# x = Dense(128, activation='relu')(x)
# x = Dropout(0.5)(x)
# x = Dense(128, activity_regularizer=regularizers.l1(1e-5), activation='relu')(x)
# x = Dropout(0.5)(x)
# main_output = Dense(y_train.shape[1], activation='sigmoid')(x)
# model = Model(inputs=[main_input], outputs=[main_output])
# # print(model.summary()) # Too detailed when building full set of pathways
# print("Trainable params::", np.sum(
# [np.product(v.shape) for v in model.trainable_weights]).value)
# model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
%%time
callbacks = [EarlyStopping(monitor='acc', min_delta=0.05, patience=2, verbose=2, mode="max")]
model.fit(X_train, y_train, epochs=10, batch_size=128, shuffle="batch", callbacks=callbacks)
print(model.metrics_names, model.evaluate(X_test, y_test))
"""
Explanation: Primary Site Classification w/Per-Pathway Sub-Network Input Layer
For each pathway, build a custom input layer that extracts the expression levels for the genes in the pathway from the full input vector and feeds this into a dense, single-output hidden neuron. These are then aggregated and fed into a standard set of stacked layers and trained to classify tumor/normal and primary site. The hidden per-pathway neurons are named after their pathway and 'may' indicate which pathway led to a given classification.
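The slicing that the custom GroupBy layer performs is ordinary column selection by index group; a framework-free NumPy sketch with made-up groups (overlap between groups is allowed, just as a gene can appear in several pathways):

```python
import numpy as np

X = np.arange(12).reshape(2, 6)  # 2 samples, 6 "genes"
groups = {"PATH_A": [0, 2, 5], "PATH_B": [1, 2]}  # made-up, overlapping

subsets = {name: X[:, idx] for name, idx in groups.items()}
print(subsets["PATH_A"].shape, subsets["PATH_B"].shape)
```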
End of explanation
"""
%%time
# Predict all test samples
predictions = model.predict(X_test)
# Build a model to access the activations of the pathway layer
pathway_model = Model(inputs=model.layers[0].input, outputs=model.get_layer("pathways").output)
pathway_predictions = pathway_model.predict(X_test)
# First lets look at the distribution of our binary tumor/normal predictions
plt.hist(predictions[:,0])
plt.show()
# Create a new dataframe of the test original and predicted labels
Y_test_predictions = Y_test.copy()
Y_test_predictions["predicted_tumor_normal"] = ["Normal ({:0.2f})".format(p) if np.round(p) <= 0.5
else "Tumor ({:0.2f})".format(p)
for p in predictions[:,0]]
labels = primary_site_encoder.classes_.tolist()
Y_test_predictions["predicted_primary_site"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[1:1+46][i]) for i in p[1:1+46].argsort()[-3:][::-1]])
for i, p in enumerate(predictions)]
labels = disease_encoder.classes_.tolist()
Y_test_predictions["predicted_disease"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[1+46:-1][i]) for i in p[1+46:-1].argsort()[-3:][::-1]])
for i, p in enumerate(predictions)]
labels = list(gene_sets.keys())
Y_test_predictions["predicted_pathways"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[i]) for i in p.argsort()[-3:][::-1]])
for i, p in enumerate(pathway_predictions)]
Y_test_predictions.to_csv("models/Y_test_predictions.tsv", sep="\t")
Y_test_predictions.head()
# Plot confusion matrix for primary site
import sklearn.metrics
import matplotlib.ticker as ticker
confusion_matrix = sklearn.metrics.confusion_matrix(
primary_site_test, np.array([np.argmax(p[1:1+46]) for p in predictions]))
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
cax = ax.matshow(confusion_matrix, cmap=plt.cm.gray)
ax.set_xticklabels(primary_site_encoder.classes_.tolist(), rotation=90)
ax.set_yticklabels(primary_site_encoder.classes_.tolist())
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel("Primary Site Confusion Matrix for Holdout Test Data")
plt.show()
# Show only where there are errors
row_sums = confusion_matrix.sum(axis=1, keepdims=True)
norm_conf_mx = confusion_matrix / row_sums
np.fill_diagonal(norm_conf_mx, 0)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
cax = ax.matshow(norm_conf_mx, cmap=plt.cm.gray)
ax.set_xticklabels(primary_site_encoder.classes_.tolist(), rotation=90)
ax.set_yticklabels(primary_site_encoder.classes_.tolist())
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel("Primary Site Prediction Errors")
plt.show()
# Plot confusion matrix for disease
import sklearn.metrics
import matplotlib.ticker as ticker
confusion_matrix = sklearn.metrics.confusion_matrix(
disease_test, np.array([np.argmax(p[1+46:-1]) for p in predictions]))
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
cax = ax.matshow(confusion_matrix, cmap=plt.cm.gray)
ax.set_xticklabels(disease_encoder.classes_.tolist(), rotation=90)
ax.set_yticklabels(disease_encoder.classes_.tolist())
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel("Disease Confusion Matrix for Holdout Test Data")
plt.show()
# Show only where there are errors
row_sums = confusion_matrix.sum(axis=1, keepdims=True)
norm_conf_mx = confusion_matrix / row_sums
np.fill_diagonal(norm_conf_mx, 0)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
cax = ax.matshow(norm_conf_mx, cmap=plt.cm.gray)
ax.set_xticklabels(disease_encoder.classes_.tolist(), rotation=90)
ax.set_yticklabels(disease_encoder.classes_.tolist())
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel("Disease Prediction Errors")
plt.show()
"""
Explanation: Evaluate Model
End of explanation
"""
# Load and prepare for predicting
X_treehouse = pd.read_hdf("data/treehouse.h5", "expression")
Y_treehouse = pd.read_hdf("data/treehouse.h5", "labels")
X_treehouse_pruned = X_treehouse.drop(labels=(set(X_treehouse.columns) - set(all_gene_set_genes)),
axis=1, errors="ignore")
# Load clinical Treehouse data
import glob
treehouse_clinical = pd.read_csv(
"/treehouse/archive/compendium/v4/clin.essential.final_v4_20170420.tsv",
sep="\t", index_col=0)
# Extract top 3 pathways from Treehouse GSEA Top 5 tertiary output
top_gsea_paths = {
id: sorted(
glob.glob(
"/treehouse/archive/downstream/{}/tertiary/treehouse-protocol*/compendium*/gsea_*_top5" \
.format(id)))
for id in Y_treehouse.index.values}
top_gsea_pathways = pd.DataFrame({
id: pd.read_csv(path[-1], sep="\t", header=6, nrows=10).sort_values(
by=["k/K"], axis=0, ascending=False)["Gene Set Name"][0:3].values
for id, path in top_gsea_paths.items() if path}).T
top_gsea_pathways.head()
%%time
# Predict all test samples
predictions = model.predict(X_treehouse_pruned)
# Build a model to access the activations of the pathway layer
pathway_model = Model(inputs=model.layers[0].input, outputs=model.get_layer("pathways").output)
pathway_predictions = pathway_model.predict(X_treehouse_pruned)
# Add predictions to our label dataframe
Y_treehouse["predicted_tumor_normal"] = ["Normal ({:0.2f})".format(p) if np.round(p) <= 0.5
else "Tumor ({:0.2f})".format(p)
for p in predictions[:,0]]
labels = primary_site_encoder.classes_.tolist()
Y_treehouse["predicted_primary_site"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[1:1+46][i]) for i in p[1:1+46].argsort()[-3:][::-1]])
for i, p in enumerate(predictions)]
labels = disease_encoder.classes_.tolist()
Y_treehouse["predicted_disease"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[1+46:-1][i]) for i in p[1+46:-1].argsort()[-3:][::-1]])
for i, p in enumerate(predictions)]
labels = list(gene_sets.keys())
Y_treehouse["predicted_pathways"] = [
", ".join(["{} ({:0.2f})".format(labels[i], p[i]) for i in p.argsort()[-3:][::-1]])
for i, p in enumerate(pathway_predictions)]
Y_treehouse.head()
# Merge treehouse clinical and top gsea paths and write out
Y_treehouse.join(
[treehouse_clinical.Anatomical_location, top_gsea_pathways]
).to_csv("models/Y_treehouse_predictions.tsv", sep="\t")
"""
Explanation: Predict Treehouse
Predict all Treehouse samples and look up the top pathways from tertiary analysis for comparison to the predictions of the interior pathway hidden layer.
End of explanation
"""
|
merryjman/astronomy | Elements.ipynb | gpl-3.0 | # Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Elements and the periodic table
This data came from Penn State CS professor Doug Hogan.
Thanks to UCF undergraduates Sam Borges, for finding the data set, and Lissa Galguera, for formatting it.
End of explanation
"""
# Read in data that will be used for the calculations.
# The data needs to be in the same directory(folder) as the program
# Using pandas read_csv method, we can create a data frame
data = pd.read_csv("./data/elements.csv")
# If you're not using a Binder link, you can get the data with this instead:
#data = pd.read_csv("http://php.scripts.psu.edu/djh300/cmpsc221/pt-data1.csv")
# displays the first several rows of the data set
data.head()
# the names of all the columns in the dataset
data.columns
"""
Explanation: Getting the data
End of explanation
"""
ax = data.plot('Atomic Number', 'Atomic Radius (pm)', title="Atomic Radius vs. Atomic Number", legend=False)
ax.set(xlabel="Atomic Number", ylabel="Atomic Radius (pm)")
data.plot('Atomic Number', 'Mass')
data[['Name', 'Year Discovered']].sort_values(by='Year Discovered')
"""
Explanation: Looking at some relationships
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/launching_into_ml/labs/supplemental/intro_logistic_regression.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
"""
Explanation: Introduction to Logistic Regression
Learning Objectives
Create Seaborn plots for Exploratory Data Analysis
Train a Logistic Regression Model using Scikit-Learn
Introduction
This lab is an introduction to logistic regression using Python and Scikit-Learn. It serves as a foundation for the more complex algorithms and machine learning models that you will encounter in the course. In this lab, we will use a synthetic advertising data set indicating whether or not a particular internet user clicked on an advertisement on a company website. We will try to create a model that predicts whether or not a user will click on an ad based on the features of that user.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Import Libraries
End of explanation
"""
# TODO 1: Read in the advertising.csv file and set it to a data frame called ad_data.
# TODO: Your code goes here
"""
Explanation: Load the Dataset
We will use a synthetic advertising dataset. This data set contains the following features:
'Daily Time Spent on Site': consumer time on site in minutes
'Age': customer age in years
'Area Income': Avg. Income of geographical area of consumer
'Daily Internet Usage': Avg. minutes a day consumer is on the internet
'Ad Topic Line': Headline of the advertisement
'City': City of consumer
'Male': Whether or not consumer was male
'Country': Country of consumer
'Timestamp': Time at which consumer clicked on Ad or closed window
'Clicked on Ad': 0 or 1 indicated clicking on Ad
End of explanation
"""
ad_data.head()
"""
Explanation: Check the head of ad_data
End of explanation
"""
ad_data.info()
ad_data.describe()
"""
Explanation: Use info and describe() on ad_data
End of explanation
"""
ad_data.isnull().sum()
"""
Explanation: Let's check for any null values.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: Exploratory Data Analysis (EDA)
Let's use seaborn to explore the data! Try recreating the plots shown below!
TODO 1: Create a histogram of the Age
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 1: Create a jointplot showing Area Income versus Age.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 2: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 1: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage'
End of explanation
"""
from sklearn.model_selection import train_test_split
"""
Explanation: Logistic Regression
Logistic regression is a supervised machine learning process. It is similar to linear regression, but rather than predicting a continuous value, we estimate probabilities by using a logistic function. Note that even though it has regression in the name, it is used for classification.
While linear regression is acceptable for estimating values, logistic regression is best for predicting the class of an observation.
Now it's time to do a train test split and train our model! You'll have the freedom here to choose the columns that you want to train on!
End of explanation
"""
X = ad_data[
[
"Daily Time Spent on Site",
"Age",
"Area Income",
"Daily Internet Usage",
"Male",
]
]
y = ad_data["Clicked on Ad"]
"""
Explanation: Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems.
End of explanation
"""
# TODO: Your code goes here
"""
Explanation: TODO 2: Split the data into training set and testing set using train_test_split
End of explanation
"""
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)
"""
Explanation: Train and fit a logistic regression model on the training set.
End of explanation
"""
predictions = logmodel.predict(X_test)
"""
Explanation: Predictions and Evaluations
Now predict values for the testing data.
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
"""
Explanation: Create a classification report for the model.
End of explanation
"""
|
lukemans/Hello-world | t81_558_class3_training.ipynb | apache-2.0 | from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df,name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name,x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df,name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df,name,mean=None,sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name]-mean)/sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df,target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
print(target_type)
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)
else:
# Regression
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart, we will see more of this chart in the next class.
def chart_regression(pred,y):
    t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y.flatten()})  # use the function argument, not a global
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
"""
Explanation: T81-558: Applications of Deep Neural Networks
Class 3: Training a Neural Network
* Instructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis
* For more information visit the class website.
Building the Feature Vector
Neural networks require their input to be a fixed number of columns. This is very similar to spreadsheet data. This input must be completely numeric.
It is important to represent the data in a way that the neural network can train from it. In class 6, we will see even more ways to preprocess data. For now, we will look at several of the most basic ways to transform data for a neural network.
Before we look at specific ways to preprocess data, it is important to consider four basic types of data, as defined by Stanley Smith Stevens. These are commonly referred to as the levels of measure:
Character Data (strings)
Nominal - Individual discrete items, no order. For example: color, zip code, shape.
Ordinal - Individual discrete items that can be ordered. For example: grade level, job title, Starbucks(tm) coffee size (tall, grande, venti)
Numeric Data
Interval - Numeric values, no defined start. For example, temperature. You would never say "yesterday was twice as hot as today".
Ratio - Numeric values, clearly defined start. For example, speed. You would say that "The first car is going twice as fast as the second."
The following code contains several useful functions to encode the feature vector for various types of data. Encoding data:
encode_text_dummy - Encode text fields, such as the iris species as a single field for each class. Three classes would become "0,0,1" "0,1,0" and "1,0,0". Encode non-target predictors this way. Good for nominal.
encode_text_index - Encode text fields, such as the iris species as a single numeric field as "0" "1" and "2". Encode the target field for a classification this way. Good for nominal.
encode_numeric_zscore - Encode numeric values as a z-score. Neural networks deal well with "centered" fields, zscore is usually a good starting point for interval/ratio.
Ordinal values can be encoded as dummy or index. Later we will see a more advanced means of encoding.
Dealing with missing data:
missing_median - Fill all missing values with the median value.
Creating the final feature vector:
to_xy - Once all fields are numeric, this function can provide the x and y matrixes that are used to fit the neural network.
Other utility functions:
hms_string - Print out an elapsed time string.
chart_regression - Display a chart to show how well a regression performs.
End of explanation
"""
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
path = "./data/"
filename = os.path.join(path,"iris.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Encode feature vector
encode_numeric_zscore(df,'petal_w')
encode_numeric_zscore(df,'petal_l')
encode_numeric_zscore(df,'sepal_w')
encode_numeric_zscore(df,'sepal_l')
species = encode_text_index(df,"species")
num_classes = len(species)
# Create x & y for training
# Create the x-side (feature vectors) of the training
x, y = to_xy(df,'species')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
# as much as I would like to use 42, it gives a perfect result, and a boring confusion matrix!
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[20, 10, 5], n_classes=num_classes,
steps=10000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50, n_classes=num_classes)
# Fit/train neural network
classifier.fit(x_train, y_train, monitor=early_stop)
"""
Explanation: Training with a Validation Set and Early Stopping
Overfitting occurs when a neural network is trained to the point that it begins to memorize rather than generalize.
It is important to segment the original dataset into several datasets:
Training Set
Validation Set
Holdout Set
There are several different ways that these sets can be constructed. The following programs demonstrate some of these.
The first method is a training and validation set. The training data are used to train the neural network until the validation set no longer improves. This attempts to stop at a near-optimal training point. This method will only give accurate "out of sample" predictions for the validation set; this is usually 20% or so of the data. The predictions for the training data will be overly optimistic, as these were the data that the neural network was trained on.
End of explanation
"""
from sklearn import metrics
# Evaluate success using accuracy
pred = classifier.predict(x_test)
score = metrics.accuracy_score(y_test, pred)
print("Accuracy score: {}".format(score))
"""
Explanation: Calculate Classification Accuracy
Accuracy is the number of rows where the neural network correctly predicted the target class. Accuracy is only used for classification, not regression.
$ \text{accuracy} = \frac{\#\ \text{correct}}{N} $
Where $N$ is the size of the evaluated set (training or validation). Higher accuracy numbers are desired.
End of explanation
"""
pred = classifier.predict_proba(x_test)
np.set_printoptions(precision=4)
print("Numpy array of predictions")
print(pred[0:5])
print("As percent probability")
(pred[0:5]*100).astype(int)
score = metrics.log_loss(y_test, pred)
print("Log loss score: {}".format(score))
"""
Explanation: Calculate Classification Log Loss
Accuracy is like a final exam with no partial credit. However, neural networks can predict a probability of each of the target classes. Neural networks will give high probabilities to predictions that are more likely. Log loss is an error metric that penalizes confidence in wrong answers. Lower log loss values are desired.
For any scikit-learn model there are two ways to get a prediction:
predict - In the case of classification, outputs the numeric id of the predicted class. For regression, this is simply the prediction.
predict_proba - In the case of classification, outputs the probability of each of the classes. Not used for regression.
The following code shows the output of predict_proba:
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange, sin, pi
t = arange(0.0, 5.0, 0.00001)
#t = arange(1.0, 5.0, 0.00001) # computer scientists
#t = arange(0.0, 1.0, 0.00001) # data scientists
fig = figure(1,figsize=(12, 10))
ax1 = fig.add_subplot(211)
ax1.plot(t, np.log(t))
ax1.grid(True)
ax1.set_ylim((-8, 1.5))
ax1.set_xlim((-0.1, 2))
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('log(x)')
show()
"""
Explanation: Log loss is calculated as follows:
$ \text{log loss} = -\frac{1}{N}\sum_{i=1}^N {( {y}_i\log(\hat{y}_i) + (1 - {y}_i)\log(1 - \hat{y}_i))} $
The log function is useful for penalizing wrong answers. The following code demonstrates the utility of the log function:
End of explanation
"""
import tensorflow.contrib.learn as skflow
from sklearn.cross_validation import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.20, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
"""
Explanation: Evaluating Regression Results
Regression results are evaluated differently than classification. Consider the following code that trains a neural network for the MPG dataset.
End of explanation
"""
pred = regressor.predict(x_test)
# Measure MSE error.
score = metrics.mean_squared_error(pred,y_test)
print("Final score (MSE): {}".format(score))
"""
Explanation: Mean Square Error
The mean square error is the sum of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired.
$ \text{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $
End of explanation
"""
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
"""
Explanation: Root Mean Square Error
The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.
$ \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $
End of explanation
"""
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.cross_validation import KFold
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
filename_write = os.path.join(path,"auto-mpg-out-of-sample.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Cross validate
kf = KFold(len(x), n_folds=5)
oos_y = []
oos_pred = []
fold = 1
for train, test in kf:
print("Fold #{}".format(fold))
fold+=1
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
# Create a deep neural network with 3 hidden layers of 10, 20, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Add the predictions to the oos prediction list
pred = regressor.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure accuracy
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Fold score (RMSE): {}".format(score))
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print("Final, out of sample score (RMSE): {}".format(score))
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
oosDF.to_csv(filename_write,index=False)
"""
Explanation: Training with Cross Validation
Cross validation uses a number of folds, and multiple models, to generate out of sample predictions on the entire dataset. It is important to note that there will be one model (neural network) for each fold. Each model contributes part of the final out-of-sample prediction.
For new data, meaning data not present in the training set, predictions from the fold models can be handled in several ways:
Choose the model that had the highest validation score as the final model.
Present new data to the 5 models and average the results (this is an ensemble).
Retrain a new model (using the same settings as the cross-validation) on the entire dataset, training for the same number of steps and with the same hidden layer structure.
The following code trains the MPG dataset using a 5-fold cross validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions.
End of explanation
"""
import tensorflow.contrib.learn as skflow
from sklearn.cross_validation import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.cross_validation import KFold
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
filename_write = os.path.join(path,"auto-mpg-holdout.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross validate
kf = KFold(len(x_main), n_folds=5)
oos_y = []
oos_pred = []
fold = 1
for train, test in kf:
print("Fold #{}".format(fold))
fold+=1
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
# Create a deep neural network with 3 hidden layers of 10, 20, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Add the predictions to the OOS prediction list
pred = regressor.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure accuracy
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Fold score (RMSE): {}".format(score))
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print("Cross-validated score (RMSE): {}".format(score))
# Write the cross-validated prediction
holdout_pred = regressor.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print("Holdout score (RMSE): {}".format(score))
"""
Explanation: Training with Cross Validation and a Holdout Set
If you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before you put your model to real-world use.
The following program makes use of a holdout set, and then still cross-validates.
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
import matplotlib.pyplot as plt
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
learning_rate=0.01,
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
# Plot the chart
chart_regression(pred,y_test)
"""
Explanation: How Kaggle Competitions are Scored
Kaggle is a platform for competitive data science. Competitions are posted onto Kaggle by companies seeking the best model for their data. Competing in a Kaggle competition is quite a bit of work, I've competed in one Kaggle competition.
Kaggle awards "tiers", such as:
Kaggle Grandmaster
Kaggle Master
Kaggle Expert
Your tier is based on your performance in past competitions.
To compete in Kaggle you simply provide predictions for a dataset that they post. You do not need to submit any code. Your prediction output will place you onto the leaderboard of a competition.
An original dataset is sent to Kaggle by the company. From this dataset, Kaggle posts public data that includes "train" and "test" sets. For the "train" data, the outcomes (y) are provided. For the "test" data, no outcomes are provided. Your submission file contains your predictions for the "test" data. When you submit your results, Kaggle will calculate a score on part of your prediction data. They do not publish which part of the submission data is used for the public and private leaderboard scores (this is kept secret to prevent overfitting). While the competition is still running, Kaggle publishes the public leaderboard ranks. Once the competition ends, the private leaderboard is revealed to designate the true winners. Due to overfitting, there is sometimes an upset in positions when the final private leaderboard is revealed.
Managing Hyperparameters
There are many different settings that you can use for a neural network. These can affect performance. The following code changes some of these, beyond their default values:
End of explanation
"""
import multiprocessing
print("Your system has {} cores.".format(multiprocessing.cpu_count()))
"""
Explanation: Grid Search
Finding the right set of hyperparameters can be a large task. Often computational power is thrown at this job. The scikit-learn grid search makes use of your computer's CPU cores to try every one of a defined number of hyperparameters to see which gets the best score.
The following code shows how many CPU cores are available to Python:
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
from sklearn.grid_search import GridSearchCV
import multiprocessing
import time
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
def main():
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
start_time = time.time()
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# The hyperparameters specified here will be searched. Every combination.
param_grid = {
'learning_rate': [0.1, 0.01, 0.001],
'batch_size': [8, 16, 32]
}
# Create a deep neural network. The hyperparameters specified here remain fixed.
model = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Startup grid search
threads = 1 #multiprocessing.cpu_count()
print("Using {} cores.".format(threads))
regressor = GridSearchCV(model, verbose=True, n_jobs=threads,
param_grid=param_grid,fit_params={'monitor':early_stop})
# Fit/train neural network
regressor.fit(x_train, y_train)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
print("Final options: {}".format(regressor.best_params_))
# Plot the chart
chart_regression(pred,y_test)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
# Allow windows to multi-thread (unneeded on advanced OS's)
# See: https://docs.python.org/2/library/multiprocessing.html
if __name__ == '__main__':
main()
"""
Explanation: The following code performs a grid search. Your system is queried for the number of available cores, which are used to scan through the combinations of hyperparameters that you specify.
End of explanation
"""
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
from scipy.stats import randint as sp_randint
from sklearn.grid_search import RandomizedSearchCV
import multiprocessing
import time
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
def main():
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
start_time = time.time()
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# The hyperparameters specified here will be searched. A random sample will be searched.
param_dist = {
'learning_rate': [0.1, 0.01, 0.001],
'batch_size': sp_randint(4, 32),
}
model = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Random search
threads = 1 #multiprocessing.cpu_count()
print("Using {} cores.".format(threads))
regressor = RandomizedSearchCV(model, verbose=True, n_iter = 10,
n_jobs=threads, param_distributions=param_dist,
fit_params={'monitor':early_stop})
# Fit/train neural network
regressor.fit(x_train, y_train)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
print("Final options: {}".format(regressor.best_params_))
# Plot the chart
chart_regression(pred,y_test)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
# Allow windows to multi-thread (unneeded on advanced OS's)
# See: https://docs.python.org/2/library/multiprocessing.html
if __name__ == '__main__':
main()
"""
Explanation: The best combination of hyperparameters is displayed.
Random Search
It is also possible to conduct a random search. A random search is similar to a grid search, except that the entire search space is not covered. Rather, random points in the search space are tried. For a random search you must specify the number of hyperparameter iterations (n_iter) to try.
End of explanation
"""
|
feststelltaste/software-analytics | notebooks/Java Type Dependency Analysis.ipynb | gpl-3.0 | import py2neo
query="""
MATCH
(:Project)-[:CONTAINS]->(artifact:Artifact)-[:CONTAINS]->(type:Type)
WHERE
// we don't want to analyze test artifacts
NOT artifact.type = "test-jar"
WITH DISTINCT type, artifact
MATCH
(type)-[:DEPENDS_ON*0..1]->(directDependency:Type)<-[:CONTAINS]-(artifact)
RETURN type.fqn as name, COLLECT(DISTINCT directDependency.fqn) as imports
"""
json_data = py2neo.Graph().run(query).data()
json_data[:3]
"""
Explanation: Introduction
Recently I came over a great visualization of the relationships between classes made by Mike Bostock with his Hierarchical Edge Bundling in D3:
<a href="https://bl.ocks.org/mbostock/7607999"><img src="resources/java_type_dependency_analysis_example.png" /></a>
I wondered how hard it would be to reimplement this visualization with jQAssistant and Neo4j and show actual dependencies between Java types. So let's have a look!
Idea
If you've scanned your project with jQAssistant, you get some graph data like the following:
The graph contains the information about the dependencies of all software entities like Java classes or interfaces. This information is exactly what we need to create a Dependency Analysis between Java types. We just have to create the right Cypher query and the result that fits into the D3 visualization above.
Getting the right Data
What you need to do is scan your software with jQAssistant to get the information about the dependencies between classes. Just download the demo project at https://github.com/buschmais/spring-petclinic, build it with Java & Maven (mvn clean install) and start the embedded Neo4j graph database with mvn jqassistant:server.
With a running Neo4j database, we connect to it via py2neo and get the relationship information between nodes. In this example, we want to retrieve the direct dependency between any Java type that belongs to our application:
We identify our application's type's by using only artifacts that are contained in our project by using the corresponding node labels Project and Artifact.
We also filter out any test classes by skipping the non-relevant artifacts of the type test-jar.
With the remaining types, we search for all direct dependencies via the DEPENDS_ON relationship to the other types of our application. For completeness, we also include types that don't depend on any other type of our application; that's why we specify the *0..1 parameter in the relationship.
Finally, we COLLECT all the dependencies of each type into a list (more precisely, their fqns, i.e. fully qualified names), because that's what the D3 visualization needs as input.
The result is a dictionary with all the information needed for the D3 visualization.
End of explanation
"""
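The reshaping from query rows into that dictionary can be sketched as follows — a minimal illustration with made-up type names, where the row shape is an assumption about what the Cypher query above returns:

```python
# Hypothetical rows, shaped like the output of the Cypher query above:
# each row pairs a type's fully qualified name with the fqns it depends on.
rows = [
    {"name": "org.petclinic.Owner", "imports": ["org.petclinic.Pet"]},
    {"name": "org.petclinic.Pet", "imports": []},
]

# The D3 hierarchical-edge-bundling example consumes a list of
# {"name": ..., "size": ..., "imports": [...]} objects; "size" is only
# a placeholder value here.
json_data = [
    {"name": row["name"], "size": 1, "imports": row["imports"]}
    for row in rows
]
```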
import json
with open("vis/flare-imports.json", mode='w') as json_file:
json_file.write(json.dumps(json_data, indent=3))
"""
Explanation: We just write that data to a file, which we then read into our D3 HTML template.
End of explanation
"""
|
vzg100/Post-Translational-Modification-Prediction | old/Phosphorylation Sequence Tests -Bagging -dbptm+ELM-VectorAvr..ipynb | mit | from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
"""
Explanation: Template for test
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos.csv", "S")
del x
"""
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
N Phosphorylation is included as well; however, no benchmarks are available for it yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Note: SMOTEENN seems to perform best
End of explanation
"""
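As a rough illustration of what the random_under_sample strategy in the list above does, here is a sketch in plain NumPy — not the project's own implementation:

```python
import numpy as np

def random_under_sample(X, y, seed=0):
    """Downsample every class to the size of the smallest class."""
    rng = np.random.RandomState(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)   # 8 negatives, 2 positives
X_res, y_res = random_under_sample(X, y)
```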
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos.csv", "Y")
del x
"""
Explanation: Y Phosphorylation
End of explanation
"""
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos.csv", "T")
del x
"""
Explanation: T Phosphorylation
End of explanation
"""
|
DillmannFrench/Intro-PYTHON | Cours13-DILLMANN-ISEP2016.ipynb | gpl-3.0 | import pandas
from pandas_datareader.data import DataReader  # pandas.io.data was removed; requires the pandas-datareader package
import matplotlib.pyplot as plt
%matplotlib inline
# Initialize a dict to hold one DataFrame per ticker
d = {}
"""
Explanation: Functional Programming
We are going to exploit the power of the functions offered by the various Python libraries. The objective is not to redo what has already been done by others.
1) Stock-market visualization
End of explanation
"""
symbols_list = ['IBM','YELP', 'GOOG']
for ticker in symbols_list:
d[ticker] = DataReader(ticker, "yahoo", '2016-01-01')
# Running this cell requires Internet access
pan = pandas.Panel(d)
df1 = pan.minor_xs('Adj Close')
px=df1.asfreq('B',method='pad')
rets = px.pct_change()
((1+ rets).cumprod() -1).plot()
"""
Explanation: For example, if we are interested in how the share prices of different companies have evolved this year, we will see that most of the necessary tools already exist and are available.
Stocks of interest:
IBM
YELP
GOOGLE
BRUKER
End of explanation
"""
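The return arithmetic used above (pct_change followed by a cumulative product) can be checked on a toy price series, independent of any downloaded data:

```python
import pandas as pd

prices = pd.Series([100.0, 110.0, 99.0])   # toy adjusted-close prices
rets = prices.pct_change()                 # daily returns: NaN, +10%, -10%
cum = (1 + rets).cumprod() - 1             # cumulative return since the start
# After the two moves the cumulative return is 99/100 - 1 = -1%,
# not 0%: percentage moves do not simply cancel out.
```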
from mpl_toolkits.mplot3d import *
import matplotlib.pyplot as plt
import numpy as np
from random import random, seed
from matplotlib import cm
#%%%%%%%%% Rendering of a sphere %%%%%%%%#
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x = 10 * np.outer(np.cos(u), np.sin(v))
y = 10 * np.outer(np.sin(u), np.sin(v))
z = 10 * np.outer(np.ones(np.size(u)), np.cos(v))
ax.plot_surface(x, y, z, rstride=2, cstride=2, linewidth=0, alpha=1, color='y', antialiased=True, edgecolor=(0,0,0,0))
plt.show()
"""
Explanation: 2) Geometric visualization
End of explanation
"""
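The outer-product construction above can be sanity-checked numerically: every generated point should sit at distance 10 from the origin.

```python
import numpy as np

u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
x = 10 * np.outer(np.cos(u), np.sin(v))
y = 10 * np.outer(np.sin(u), np.sin(v))
z = 10 * np.outer(np.ones(np.size(u)), np.cos(v))
r = np.sqrt(x**2 + y**2 + z**2)   # should be 10 everywhere
```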
import numpy as np
# Constants
my0=4*np.pi*1e-7; # vacuum permeability
I0=-1; # current amplitude
# the current flows from left to right
# Dimensions
d=25 # loop diameter (mm)
segments=100 # discretization of the loop
alpha = 2*np.pi/(segments-1) # discretization of the angle
# initialize the loop geometry
x=[i*0 for i in range(segments)]
y=[d/2*np.sin(i*alpha) for i in range(segments)]
z=[-d/2*np.cos(i*alpha) for i in range(segments)]
# characteristic distance between filaments
distance_char=np.sqrt((z[2]-z[1])**2+(y[2]-y[1])**2);
# Define the positive direction of the current: left -> right
# for the computation, lengths are expressed in m
x_spire=np.array([x])*1e-3;
y_spire=np.array([y])*1e-3;
z_spire=np.array([z])*1e-3;
#%%%%%%%%%%%%%%% Display of the loop %%%%%%%%%%%%%%%%%%%%%%%%%%%#
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca(projection='3d')
plt.plot(([0,0]),([0,0]),([-0.05,0.05]), 'b-', label='ligne', linewidth=2)
plt.plot(y_spire, z_spire, 'g.', label='spire 1', linewidth=2)
plt.show()
#%%%%%%%%%% Compute the magnetic field using Biot-Savart
ndp=50 # number of points
# x limits
xmin=-0.05
xmax= 0.05
# y limits
ymin=-0.05
ymax=+0.05
# z limits
zmin=-0.05
zmax=+0.05
dx=(xmax-xmin)/(ndp-1) # x increment
dy=(ymax-ymin)/(ndp-1) # y increment
dz=(zmax-zmin)/(ndp-1) # z increment
#%%%%%%%%%%%%%%% Magnetostatic computation %%%%%%%%%%%%%%%%%%%%%%%%%%%#
bxf=np.zeros(ndp) # initialize the Bx component of the field
byf=np.zeros(ndp) # initialize the By component of the field
bzf=np.zeros(ndp) # initialize the Bz component of the field
I0f1=my0*I0/(4*np.pi) # magnetostatics (the current is multiplied
# by $$ \mu_0/(4\pi) $$)
# Integrate the field induced at each point of the blue line
# by the current flowing along each segment of the green loop
bfx=0
bfy=0
bfz=0
nseg=np.size(z_spire)-1
for i in range(ndp):
# initialize the positions
xM=(xmin+i*dx)
yM=0
zM=0
# initialize the local field accumulators
bfx=0
bfy=0
bfz=0
R=np.array([xM,yM,zM])
# position vector of the point at which
# the field is evaluated, by integrating
# the contribution of all the currents
# along the green loop
for wseg in range(nseg):
xs=x_spire[0][wseg]
ys=y_spire[0][wseg]
zs=z_spire[0][wseg]
Rs=np.array([xs, ys, zs])
drsx=(x_spire[0][wseg+1]-x_spire[0][wseg])
drsy=(y_spire[0][wseg+1]-y_spire[0][wseg])
drsz=(z_spire[0][wseg+1]-z_spire[0][wseg])
drs=np.array([drsx, drsy, drsz])
# direction of the current element
Delta_R= Rs - R
# vector from the loop element to
# the point where the field is computed
Delta_Rdrs=sum(Delta_R * drs)
Delta_Rdist=np.sqrt(Delta_R[0]**2+Delta_R[1]**2+Delta_R[2]**2)
#Delta_Rdis2=Delta_Rdist**2
Delta_Rdis3=Delta_Rdist**3
b2=1.0/Delta_Rdis3
b12=I0f1*b2*(-1)
# Cross product
Delta_Rxdrs_x=Delta_R[1]*drsz-Delta_R[2]*drsy
Delta_Rxdrs_y=Delta_R[2]*drsx-Delta_R[0]*drsz
Delta_Rxdrs_z=Delta_R[0]*drsy-Delta_R[1]*drsx
# Integration
bfx=bfx+b12*Delta_Rxdrs_x
bfy=bfy+b12*Delta_Rxdrs_y
bfz=bfz+b12*Delta_Rxdrs_z
# The field must be stored as 3 lists:
# one entry for each abscissa
bxf[i]+=bfx
byf[i]+=bfy
bzf[i]+=bfz
#%%%%%%%%%%% Analytical model %%%%%%%%%%%%%%%%%%#
r=d/2; # loop radius in mm
r=r*1e-3; # loop radius in m
bx_analytique=[abs(my0*I0)*(r)**2/(2*((r)**2+(x)**2)**(3/2)) for x in np.linspace(xmin, xmax, ndp, endpoint = True)]
"""
Explanation: Visualizing a physical phenomenon
The Biot-Savart equation
End of explanation
"""
#%%%%%%%%%%% Visualization %%%%%%%%%%%%%%%%%%%%%%%#
plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bxf,'bo')
plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bx_analytique,'r-')
"""
Explanation: $$B(x)=\frac{\mu_o}{4\pi}.I_o.\frac{r^2}{2 (r^2 +x^2)^{3/2} }$$
End of explanation
"""
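At the centre of the loop (x = 0) the formula reduces to B(0) = μ0|I0|/(2r), which gives a quick consistency check on the expression used for bx_analytique (same constants as above):

```python
import numpy as np

my0 = 4 * np.pi * 1e-7    # vacuum permeability
I0 = -1                   # loop current (A)
r = 12.5e-3               # loop radius in m (half of the 25 mm diameter)

x = 0.0
B_axis = abs(my0 * I0) * r**2 / (2 * (r**2 + x**2)**(3 / 2))
B_centre = abs(my0 * I0) / (2 * r)   # closed form at the loop centre
```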
def my_funct(f,arg):
return f(arg)
my_funct(lambda x : 2*x*x,5)
"""
Explanation: Another way to define functions: lambdas
End of explanation
"""
a=(lambda x: x*x)(8)
print(a)
def polynome(x):
return x**2 + 5*x + 4
racine=-4
print("The root of a polynomial is a value at which it evaluates to {0}" \
.format(polynome(racine)))
print("With a lambda this is more concise: ", end="")
print((lambda x:x**2 + 5*x + 4)(-4))
X=np.linspace(-10,10,50,endpoint = True)
plt.plot(X,(lambda x:x**2 + 5*x + 4)(X))
plt.show()
"""
Explanation: lambda is a shorthand for creating anonymous functions
They are not necessarily easier to write
End of explanation
"""
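Where lambdas do pay off is as short, throwaway arguments to higher-order functions such as sorted, map and filter:

```python
words = ["banana", "fig", "apple"]

by_length = sorted(words, key=lambda w: len(w))        # shortest word first
squares = list(map(lambda x: x * x, range(5)))         # [0, 1, 4, 9, 16]
evens = list(filter(lambda x: x % 2 == 0, range(10)))  # [0, 2, 4, 6, 8]
```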
|
theandygross/MethylTools | .ipynb_checkpoints/Probe_Annotations-checkpoint.ipynb | apache-2.0 | import pandas as pd
DATA_STORE = '/data_ssd/methylation_annotation_2.h5'
store = pd.HDFStore(DATA_STORE)
islands = pd.read_hdf(DATA_STORE, 'islands')
locations = pd.read_hdf(DATA_STORE, 'locations')
other = pd.read_hdf(DATA_STORE, 'other')
snps = pd.read_hdf(DATA_STORE, 'snps')
probe_annotations = pd.read_hdf(DATA_STORE, 'probe_annotations')
probe_to_island = store['probe_to_island']
island_to_gene = store['island_to_gene']
"""
Explanation: Read in Probe Annotations
These are parsed out in the Compile_Probe_Annoations notebook.
End of explanation
"""
def map_to_islands(s):
'''
s is a Series of measurements on the probe level.
'''
# .order() was removed from modern pandas; sort_values() is its replacement
on_island = s.groupby(island_to_gene.Islands_Name).mean().sort_values()
v = pd.concat([island_to_gene, on_island], axis=1).set_index(0)[1]
islands_mapped_to_genes = v.groupby(level=0).mean().sort_values()
return on_island, islands_mapped_to_genes
"""
Explanation: Auxiliary function to map a data vector from probes onto CpG islands
End of explanation
"""
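The groupby/mean pattern at the heart of map_to_islands can be illustrated on toy data (probe and island names here are made up):

```python
import pandas as pd

# Hypothetical beta values for four probes
s = pd.Series([0.1, 0.3, 0.8, 0.6],
              index=["cg01", "cg02", "cg03", "cg04"])

# Hypothetical probe -> island assignment, aligned on the same index
island_of_probe = pd.Series(["island_A", "island_A", "island_B", "island_B"],
                            index=s.index)

# Average the probe-level values within each island
on_island = s.groupby(island_of_probe).mean()
```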
def island_plot_maker(df, split, islands, ann, colors=None):
'''
df: a DataFrame of probe beta values
islands: a DataFrame mapping probes to CpG islands and
annotations
ann: a DataFrame mapping probes to gene annotations
and genomic coordinates of probe
'''
if colors is None:
colors = colors_st
groups = split.dropna().unique()
assert len(groups) == 2
def f(region):
p = ti(islands.Islands_Name == region)
p3 = ann.ix[p].join(islands.ix[p]).sort_values('Genomic_Coordinate')
p = p3.index
in_island = ti(p3.Relation_to_Island == 'Island')
fig, ax = subplots(figsize=(10,4))
for i,g in enumerate(groups):
ax.scatter(p3.Genomic_Coordinate, df[ti(split == g)].ix[p].mean(1),
color=colors[i], label=g)
ax.axvspan(p3.Genomic_Coordinate.ix[in_island[0]] - 30,
p3.Genomic_Coordinate.ix[in_island[-1]] + 30,
alpha=.2, color=colors[2], label='Island')
ax.set_xlabel('Genomic Coordinate')
ax.set_ylabel('Beta Value')
ax.legend(loc='lower right', fancybox=True)
prettify_ax(ax)
return f
"""
Explanation: Helper for making CpG island plots
End of explanation
"""
cpg_island = probe_to_island.Relation_to_Island == 'Island'
dhs_site = other.DHS == 'TRUE'
enhancer = other.Enhancer == 'TRUE'
gene_body = other.UCSC_RefGene_Group.str.contains('Body')
gene_tss = other.UCSC_RefGene_Group.str.contains('TSS')
promoter = other.Regulatory_Feature_Group.str.contains('Promoter_Associated')
"""
Explanation: Create annotation probe sets
End of explanation
"""
p = '/cellar/users/agross/TCGA_Code/MethylTools/Data/PRC2_Binding/'
prc2_probes = pd.read_csv(p + 'mapped_to_methylation_probes.csv',
index_col=0)
prc2_probes = prc2_probes.sum(1)>2
probe_sets = {'PRC2': prc2_probes, 'CpG Island': cpg_island,
'DHS Site': dhs_site, 'Enhancer': enhancer,
'Gene Body': gene_body, 'TSS': gene_tss,
'Promoter': promoter}
"""
Explanation: PRC2 probe annotations are initialized in the PRC2 Probes notebook.
End of explanation
"""
|
mne-tools/mne-tools.github.io | stable/_downloads/c7633c38a703b9d0a626a5a4fa161026/psf_ctf_label_leakage.ipynb | bsd-3-clause | # Authors: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import (read_inverse_operator,
make_inverse_resolution_matrix,
get_point_spread)
from mne.viz import circular_layout
from mne_connectivity.viz import plot_connectivity_circle
print(__doc__)
"""
Explanation: Visualize source leakage among labels using a circular graph
This example computes all-to-all pairwise leakage among 68 regions in
source space based on MNE inverse solutions and a FreeSurfer cortical
parcellation. Label-to-label leakage is estimated as the correlation among the
labels' point-spread functions (PSFs). It is visualized using a circular graph
which is ordered based on the locations of the regions in the axial plane.
End of explanation
"""
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fname_fwd = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-fixed-inv.fif'
forward = mne.read_forward_solution(fname_fwd)
# Convert forward solution to fixed source orientations
mne.convert_forward_solution(
forward, surf_ori=True, force_fixed=True, copy=False)
inverse_operator = read_inverse_operator(fname_inv)
# Compute resolution matrices for MNE
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
method='MNE', lambda2=1. / 3.**2)
src = inverse_operator['src']
del forward, inverse_operator # save memory
"""
Explanation: Load forward solution and inverse operator
We need a matching forward solution and inverse operator to compute
resolution matrices for different methods.
End of explanation
"""
labels = mne.read_labels_from_annot('sample', parc='aparc',
subjects_dir=subjects_dir)
n_labels = len(labels)
label_colors = [label.color for label in labels]
# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_labels = [name for name in label_names if name.endswith('lh')]
# Get the y-location of the label
label_ypos = list()
for name in lh_labels:
idx = label_names.index(name)
ypos = np.mean(labels[idx].pos[:, 1])
label_ypos.append(ypos)
# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]
# For the right hemi
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
"""
Explanation: Read and organise labels for cortical parcellation
Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
End of explanation
"""
# Compute first PCA component across PSFs within labels.
# Note the differences in explained variance, probably due to different
# spatial extents of labels.
n_comp = 5
stcs_psf_mne, pca_vars_mne = get_point_spread(
rm_mne, src, labels, mode='pca', n_comp=n_comp, norm=None,
return_pca_vars=True)
n_verts = rm_mne.shape[0]
del rm_mne
"""
Explanation: Compute point-spread function summaries (PCA) for all labels
We summarise the PSFs per label by their first five principal components, and
use the first component to evaluate label-to-label leakage below.
End of explanation
"""
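The per-label explained-variance ratios returned above can be mimicked with a plain SVD on toy data (a sketch; real PSFs would replace the random matrix):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 20)              # rows: hypothetical PSFs within one label
Xc = X - X.mean(axis=0)            # centre before computing components

_, sv, _ = np.linalg.svd(Xc, full_matrices=False)
var_ratio = sv**2 / np.sum(sv**2)  # fraction of variance per component
top5 = 100 * var_ratio[:5]         # percentages, analogous to pca_vars_mne
```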
with np.printoptions(precision=1):
for [name, var] in zip(label_names, pca_vars_mne):
print(f'{name}: {var.sum():.1f}% {var}')
"""
Explanation: We can show the explained variances of principal components per label. Note
how they differ across labels, most likely due to their varying spatial
extent.
End of explanation
"""
# get PSFs from Source Estimate objects into matrix
psfs_mat = np.zeros([n_labels, n_verts])
# Leakage matrix for MNE, get first principal component per label
for [i, s] in enumerate(stcs_psf_mne):
psfs_mat[i, :] = s.data[:, 0]
# Compute label-to-label leakage as Pearson correlation of PSFs
# Sign of correlation is arbitrary, so take absolute values
leakage_mne = np.abs(np.corrcoef(psfs_mat))
# Save the plot order and create a circular layout
node_order = lh_labels[::-1] + rh_labels # mirror label order across hemis
node_angles = circular_layout(label_names, node_order, start_pos=90,
group_boundaries=[0, len(label_names) / 2])
# Plot the graph using node colors from the FreeSurfer parcellation. We only
# show the 200 strongest connections.
fig, ax = plt.subplots(
figsize=(8, 8), facecolor='black', subplot_kw=dict(projection='polar'))
plot_connectivity_circle(leakage_mne, label_names, n_lines=200,
node_angles=node_angles, node_colors=label_colors,
title='MNE Leakage', ax=ax)
"""
Explanation: The output shows the summed variance explained by the first five principal
components as well as the explained variances of the individual components.
Evaluate leakage based on label-to-label PSF correlations
Note that correlations ignore the overall amplitude of PSFs, i.e. they do
not show which region will potentially be the bigger "leaker".
End of explanation
"""
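The caveat that correlations ignore overall amplitude can be verified directly: rescaling one PSF leaves its correlation with another unchanged.

```python
import numpy as np

rng = np.random.RandomState(0)
psf_a = rng.randn(100)   # stand-in point-spread functions
psf_b = rng.randn(100)

c1 = np.abs(np.corrcoef(psf_a, psf_b))[0, 1]
c2 = np.abs(np.corrcoef(10.0 * psf_a, psf_b))[0, 1]   # one PSF amplified tenfold
```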
# left and right lateral occipital
idx = [22, 23]
stc_lh = stcs_psf_mne[idx[0]]
stc_rh = stcs_psf_mne[idx[1]]
# Maximum for scaling across plots
max_val = np.max([stc_lh.data, stc_rh.data])
"""
Explanation: Most leakage occurs for neighbouring regions, but also for deeper regions
across hemispheres.
Save the figure (optional)
Matplotlib controls figure facecolor separately for interactive display
versus for saved figures. Thus when saving you must specify facecolor,
else your labels, title, etc will not be visible::
>>> fname_fig = meg_path / 'plot_label_leakage.png'
>>> fig.savefig(fname_fig, facecolor='black')
Plot PSFs for individual labels
Let us confirm for left and right lateral occipital lobes that there is
indeed no leakage between them, as indicated by the correlation graph.
We can plot the summary PSFs for both labels to examine the spatial extent of
their leakage.
End of explanation
"""
brain_lh = stc_lh.plot(subjects_dir=subjects_dir, subject='sample',
hemi='both', views='caudal',
clim=dict(kind='value',
pos_lims=(0, max_val / 2., max_val)))
brain_lh.add_text(0.1, 0.9, label_names[idx[0]], 'title', font_size=16)
"""
Explanation: Point-spread function for the lateral occipital label in the left hemisphere
End of explanation
"""
brain_rh = stc_rh.plot(subjects_dir=subjects_dir, subject='sample',
hemi='both', views='caudal',
clim=dict(kind='value',
pos_lims=(0, max_val / 2., max_val)))
brain_rh.add_text(0.1, 0.9, label_names[idx[1]], 'title', font_size=16)
"""
Explanation: and in the right hemisphere.
End of explanation
"""
|
huajianmao/learning | coursera/deep-learning/4.convolutional-neural-networks/week2/pa.1.Keras - Tutorial - Happy House v1.ipynb | mit | import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
"""
Explanation: Keras tutorial - the Happy House
Welcome to the first assignment of week 2. In this assignment, you will:
1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
2. See how you can in a couple of hours build a deep learning algorithm.
Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.
In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
End of explanation
"""
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
"""
Explanation: Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).
1 - The Happy House
For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.
<img src="images/happy-house.jpg" style="width:350px;height:270px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : the Happy House</center></caption>
As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front-door camera to check if the person is happy or not. The door should open only if the person is happy.
You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labelled.
<img src="images/house-members.png" style="width:550px;height:250px;">
Run the following code to normalize the dataset and learn about its shapes.
End of explanation
"""
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
### END CODE HERE ###
return model
"""
Explanation: Details of the "Happy" dataset:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures
It is now time to solve the "Happy" Challenge.
2 - Building a model in Keras
Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.
Here is an example of a model in Keras:
```python
def model(input_shape):
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
return model
```
Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().
Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
End of explanation
"""
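Before filling in HappyModel, it helps to trace the spatial dimensions of the suggested architecture by hand; the standard size formulas can be checked in a few lines (this is only a shape sanity check, not part of the graded solution):

```python
def conv_out(n, f, stride=1, pad=0):
    """Output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * pad - f) // stride + 1

n = 64                      # input height/width
n = n + 2 * 3               # ZeroPadding2D((3, 3)) -> 70
n = conv_out(n, f=7)        # Conv2D 7x7, stride 1  -> 64
n = n // 2                  # MaxPooling2D((2, 2))  -> 32
flattened = n * n * 32      # 32 channels, so Flatten yields 32768 units
```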
### START CODE HERE ### (1 line)
### END CODE HERE ###
"""
Explanation: You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above
2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
4. Test the model on test data by calling model.evaluate(x = ..., y = ...)
If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
Exercise: Implement step 1, i.e. create the model.
End of explanation
"""
### START CODE HERE ### (1 line)
### END CODE HERE ###
"""
Explanation: Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.
End of explanation
"""
### START CODE HERE ### (1 line)
### END CODE HERE ###
"""
Explanation: Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
End of explanation
"""
### START CODE HERE ### (1 line)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
"""
Explanation: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
Exercise: Implement step 4, i.e. test/evaluate the model.
End of explanation
"""
### START CODE HERE ###
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
"""
Explanation: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To pass this assignment, you have to get at least 75% accuracy.
To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
If you have not yet achieved 75% accuracy, here're some things you can play around with to try to achieve it:
Try using blocks of CONV->BATCHNORM->RELU such as:
python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
Change your optimizer. We find Adam works well.
If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
Run on more epochs, until you see the train accuracy plateauing.
Even if you have achieved 75% accuracy, please feel free to keep playing with your model to try to get even better results.
Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
3 - Conclusion
Congratulations, you have solved the Happy House challenge!
Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.
<font color='blue'>
What we would like you to remember from this assignment:
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
- Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
4 - Test with your own image (Optional)
Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!
The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
End of explanation
"""
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
"""
Explanation: 5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are:
- model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs
- plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.
Run the following code.
End of explanation
"""
|
lee212/simpleazure | ipynb/Tutorial - Account Setup for Azure Resource Manager (ARM).ipynb | gpl-3.0 | !yes|azure login
"""
Explanation: Azure CLI Account Setup for Azure Resource Manager (ARM)
Azure CLI provides an easy way to set up an account for Azure Resource Manager (ARM) and, furthermore, to create a new service principal for Simple Azure access. In this tutorial, we use the IPython shell helper (!) to run Azure CLI.
Interactive Login Azure Portal
The azure CLI tool asks you to open a web browser and sign in to the azure portal to authenticate. The following command, azure login, will guide you to the page https://aka.ms/devicelogin with a unique one-time login verification code.
You will be asked to type the code in your browser to complete the login.
NOTE: Run all cells step-by-step, following the instructions, to complete the Azure account setup.
End of explanation
"""
!azure account show
"""
Explanation: Credentials for Azure Python SDK
Azure Python SDK which Simple Azure is based on requires the credential information below for ARM and ASM (Azure Service Management).
subscription ID
tenant ID
client ID
client secret
The following sections demonstrate Azure CLI commands to obtain these information step-by-step.
Subscription ID and Tenant ID
account show displays subscription id and tenant id as ID and Tenant ID.
End of explanation
"""
sid_tid = !azure account show|awk -F ':' '/ID/{ print $3}'
sid = sid_tid[0]
tid = sid_tid[1]
"""
Explanation: IPython filters the subscription ID and tenant ID using awk command and stores into sid and tid variables.
End of explanation
"""
out=!azure ad sp create --name simpleazure
cid = out[6].split(":")[1].lstrip()
newout="\n".join(out)
print(newout)
"""
Explanation: Service Principal for Simple Azure
Once you have loaded your azure credentials, a service principal is required to get access to resource groups, so that Azure services can be used in Simple Azure via Azure Resource Manager and templates. Azure CLI provides a few commands to complete this step.
The "azure ad sp create" command creates a new service principal in Active Directory with a name (--name option).
End of explanation
"""
password=""
!azure ad sp set -p $password $cid
"""
Explanation: The Id listed after Service Principal Names is our client id for Simple Azure. The cid variable stored this ID in the previous cell.
Set a Password for Service Principal
A password for the Service Principal will be used as the client_secret later in Simple Azure.
Please provide your desired password below.
End of explanation
"""
!azure role assignment create --objectId $cid -o Owner -c /subscriptions/$sid
"""
Explanation: Note that '$cid' is a client id obtained from the previous command.
Assign Role to Service Principal
Assigning a role permits certain actions to your service principal under your subscription id. "Owner" gives you full rights to use resources without restrictions. See more roles: here
End of explanation
"""
from simpleazure import SimpleAzure as saz
"""
Explanation: Did you complete all steps without any issues?
Congratulations! You just completed the login setup for your azure account.
Sample Run of Simple Azure after Login Setup
Let's try to deploy a sample template using Simple Azure and the credentials that we just obtained.
Import Simple Azure
End of explanation
"""
import os
os.environ['AZURE_SUBSCRIPTION_ID'] = sid
os.environ['AZURE_CLIENT_SECRET'] = password
os.environ['AZURE_TENANT_ID'] = tid
os.environ['AZURE_CLIENT_ID'] = cid
saz_obj = saz()
"""
Explanation: Credentials via Environment Variables
End of explanation
"""
url = "https://raw.githubusercontent.com/Azure-Samples/resource-manager-python-template-deployment/master/templates/template.json"
"""
Explanation: Template from Azure-Samples
End of explanation
"""
saz_obj.arm.deploy(template = url, param = {"sshKeyData": "ssh-rsa AAAAB3...<skipped>... hroe.lee@simpleazure", 'dnsLabelPrefix':"simpleazure", 'vmName':'simpleazure-first-vm'})
"""
Explanation: Deploy with Template and Parameters
The sample template requires three parameters:
sshKeyData which is ssh public key string
dnsLabelPrefix which is unique DNS Name for the Storage Account
vmName which is virtual machine name to deploy
End of explanation
"""
saz_obj.arm.remove_resource_group()
"""
Explanation: Termination (deleting resource group)
Deleting a resource group where deployments are made stops all services and deletes resources in the group.
Simple Azure uses the group-name prefix 'saz', and the following function will delete the group.
End of explanation
"""
|
chicagopython/CodingWorkshops | problems/data_science/trackcoder.ipynb | gpl-3.0 |
import pandas
"""
Explanation: Data Analysis and visualization for tracking developer productivity
Chipy's mentorship program is an extraordinary journey toward becoming a better developer. As a mentee, you are expected to do a lot - you read new articles/books, write code, debug and troubleshoot, and pair program with other mentees in a coding workshop or with your mentor. This involves managing time efficiently and doing the most effective things. But as the old adage goes, "you can't manage what you can't measure".
This project is the third of a three part series of building time-tracking tools for mentees. The end goal of such a tool is to aggregate anonymous data and analyze how much time a typical mentee spends on blogging (b), coding (c), debugging (d), and pair programming (p) with a mentor or other mentees.
In this project we will be using pandas to analyze the data gathered with the command line tool we built in the first part of the series. We will also be using altair, a visualization library, to do some exploratory analysis of the data.
Short url for this page: http://bit.ly/data_trackcoder
Is this project for you
Before you progress further, let's check if we are ready to solve this. You should
Have a personal computer with working wifi and power cord
Have Python 3 installed on your computer. Yes, Python 3 only 😎
Have some idea about lists, dictionaries and functions
Have created a virtual environment and installing packages with pip
In addition, you should be familiar with Part 1, and Part 2 of this three part exercise.
Getting the project setup in your computer
If you are familiar with git, run
git clone https://github.com/chicagopython/CodingWorkshops.git
If not, go to https://github.com/chicagopython/CodingWorkshops
Click on the Download Zip and unzip the file that gets downloaded.
From your command line (terminal in mac osx, or linux and command prompt in windows), change directory to the path where you have downloaded it.
On linux or OS X
> cd path/to/CodingWorkshops/problems/data_science/
On Windows
> cd path\to\CodingWorkshops\problems\data_science
Check if you have the latest notebook
If you downloaded the notebook before the Project Night event, it contains only the materials to review for project night, without the actual problems. The actual problems will be released 2 hours before the event. Please update your notebook in order to get the challenge questions.
In that directory, run the following command
git pull
Installation of Required packages
The following packages are needed for this project.
numpy==1.14.2
pandas==0.22.0
python-dateutil==2.7.2
pytz==2018.4
scikit-learn==0.19.1
scipy==1.0.1
six==1.11.0
sklearn==0.20.0
altair==2.2.2
These packages are listed in the file requirements.txt in this directory.
From a terminal (in mac ox or linux) or command prompt (windows), install them using the following command.
pip install -r requirements.txt
Once the installation completed, start Jupyter notebook by issuing the command.
> jupyter notebook
Running the following command here will open up a browser (http://localhost:8888) and display all the notebooks under this directory.
Double click to open the trackcoder notebook.
Next execute the cell below by hitting Shift + Enter.
End of explanation
"""
import pandas as pd
import numpy as np
"""
Explanation: If the above line executes without any error, then congratulations 🎉 - you have successfully installed everything and are ready to get started.
Getting Started with Pandas
Loading pandas
We will start off with a gentle introduction to pandas that is mostly taken from the wonderful 10 minutes guide. Let's start by importing the necessary packages.
End of explanation
"""
description = pd.Series(data=['blogging','coding','debugging','mentor','pair_programming', 'research'],
index=['b', 'c', 'd','m','p','r'])
print(f"data: {description.values}\nindex: {description.index}")
"""
Explanation: Pandas Series and Dataframe
Series
A pandas Series is a one-dimensional labeled array capable of holding any data type. The axis labels are collectively referred to as the index. Let's create a Series from the different task types that we defined in Part 1.
End of explanation
"""
mins = pd.Series([100,100,200,50,50,300], ['b', 'c', 'd', 'm', 'p', 'r'])
"""
Explanation: DataFrame
A pandas DataFrame is a 2-dimensional labeled data structure where the columns can be of different data types.
Let's create another series with the number of minutes and the same indexes as description.
End of explanation
"""
d = {'description': description, 'mins': mins}
frame = pd.DataFrame(d)
frame
"""
Explanation: Now let's create a dataframe using description and mins.
End of explanation
"""
db="../py101/trackcoder/to_do_list.db"
import sqlite3
conn = sqlite3.connect(db)
df = pd.read_sql_query("select * from todo", conn)
"""
Explanation: Loading the data
Next we will load the data present in the sqlite database in the folder CodingWorkshops/problems/py101/trackcoder/. If you choose to use a different dataset, all you need to do is change the value of db to the path of your file.
End of explanation
"""
df.head() #first 5 rows
df.tail() # last 5 rows
"""
Explanation: Viewing the data
End of explanation
"""
df['timestamp'] = df['timestamp'].astype('datetime64[ns]')
df['done'] = df['done'].astype('bool')
df.head()
"""
Explanation: Fix data type
Pandas has the following data types.
object
int64
float64
bool
datetime64
timedelta[ns]
category
Notice that when we imported the data from our sqlite database, all the columns got imported as objects. Let's fix this by changing the columns' data types in place, i.e. modifying the dataframe so that the change persists beyond this point.
End of explanation
"""
df.index
"""
Explanation: Index, columns and summary
End of explanation
"""
df.columns
"""
Explanation: Columns
End of explanation
"""
df.describe()
"""
Explanation: Quickly summarize the descriptive statistics
End of explanation
"""
df.iloc[[0,1,2],[2,3]]
"""
Explanation: Selecting data
Let's say we need to find the first three tasks and get the values of the timestamp and description of each task. Pandas provides a few ways to access the data in a dataframe - by label-based indexes, integer indexes or a hybrid approach.
Try them out yourself by running the code below.
```python
df[0:2] # gives you the first two rows, all columns
df[0:2][['timestamp', 'description']] # returns a copy with only 'timestamp' and 'description'
df.iloc[0:2] # purely integer-based indexing, similar to slicing in python. first two rows, all columns
df.iloc[0:2][['timestamp', 'description']]
df.iloc[[0,1,2],[2,3]]
```
End of explanation
"""
_df = df.copy()
"""
Explanation: To better understand how indexes work and show how the last two are different, let's make a copy of our dataframe.
End of explanation
"""
_df.set_index('task', inplace=True)
"""
Explanation: Let's set the index to be task instead of the integer index pandas automatically provided us with.
End of explanation
"""
df.loc[0:2] # purely label-based selection; this works because the index is an integer, and label slices are inclusive (three rows)
_df.loc[0:2] # Does not work
"""
Explanation: Take a look at how the _df is different from df.
Now execute the following cells to find how different indexes can be used for selecting data.
python
_df.loc['b'] # all rows matching task type b
_df.loc[_df['mins']==30, ['description','timestamp']] # returns only a dataframe where the mins equals 30
_df.loc[_df['mins']==30, 'description'] # returns a series where the mins equals 30
Finally, check how having different indexes changes the way you access the data.
End of explanation
"""
df.groupby(['task']).count()
"""
Explanation: To understand more about how indexes work, read through Zax's tutorial on Pandas MultiIndex
Aggregation
Now that we have some idea about the basics, let's get into the actual analysis. Let's start by getting the total count of each type of task that we have in our dataset.
End of explanation
"""
import altair as alt
alt.renderers.enable('notebook')
"""
Explanation: What are the frequencies of each task type?
Note the above result is a DataFrame of counts. One approach is to reset the index and sort the counts in descending order to get the list.
What are the frequencies of each task type per day?
Aggregation can be performed on multiple columns as well. Hint: pd.DatetimeIndex(df['timestamp']).date will extract the date from a timestamp.
How much time is spent per task type?
You can use the sum function to add up the minutes and sort them in descending order.
Hashtag analysis
Hashtags are a simple and easy way to attach contextual information, and in our data we find them in task descriptions. A task description might have no hashtag at all, a single hashtag or multiple hashtags. To start, we need to parse the hashtags out of the description using a regular expression. The following shows how multiple hashtags are parsed out of a single description.
python
description = pd.Series(['#altair #pandas at project night'])
description.str.findall(r'#.*?(?=\s|$)').tolist()
Note the result returned by running the above snippet is a list of lists. You probably want to flatten the list.
Make a series of the unique hashtags
What is the frequency of each hashtag?
Hint: In the pandas documentation, take a look at examples under the apply function
Which hashtag consumes the most amount of time?
The solution to this one is similar to the one above. Keep in mind that you need to handle the case where there are no hashtags in a description.
Plotting with Altair
Visualization is a powerful technique for finding patterns in a dataset. It is also useful for communicating the findings of an analysis. In the next section we will answer some simple questions about our dataset using visualization. While matplotlib is one of the most successful packages for the purpose, we will be using Altair, which provides a simple yet powerful declarative way of building charts.
Think of it as SQL, but for charts.
End of explanation
"""
alt.Chart(df).mark_bar().encode(
y='task',
x='count()',
color='task'
)
"""
Explanation: We need to enable the renderer based on which environment we are using - notebook for jupyter notebooks, jupyterlab for jupyterlab, etc.
What are the frequencies of each task type?
Let's try to answer the same question we solved above, but this time using altair.
Below is a bar diagram of our data. Let's break down what is going on in this chart specification. From the official documentation:
The key to creating meaningful visualizations is to map properties of the data to visual properties in order to effectively communicate information. In Altair, this mapping of visual properties to data columns is referred to as
an encoding, and is most often expressed through the Chart.encode() method.
Here are the 3 steps for building charts in altair
pass your data to alt.Chart
python
alt.Chart(df)
select the type of chart you want to plot
python
alt.Chart(df).mark_bar()
encode map the property of the data to visual properties
python
alt.Chart(df).mark_bar().encode(
y='task',    # map the y axis to df['task']
x='count()', # map the x axis to the count aggregate function defined in altair
color='task') # map the color to df['task']
End of explanation
"""
|
erdewit/ib_insync | notebooks/contract_details.ipynb | bsd-2-clause |
from ib_insync import *
util.startLoop()
import logging
# util.logToConsole(logging.DEBUG)
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=11)
"""
Explanation: Contract details
End of explanation
"""
amd = Stock('AMD')
cds = ib.reqContractDetails(amd)
len(cds)
"""
Explanation: Suppose we want to find the contract details for AMD stock.
Let's create a stock object and request the details for it:
End of explanation
"""
cds[0]
"""
Explanation: We get a long list of contract details. Let's print the first one:
End of explanation
"""
contracts = [cd.contract for cd in cds]
contracts[0]
"""
Explanation: The contract itself is in the 'contract' property of the contract details. Let's make a list of contracts and look at the first:
End of explanation
"""
util.df(contracts)
"""
Explanation: To better spot the difference between all the contracts it's handy to convert to a DataFrame. There is a utility function to do that:
End of explanation
"""
amd = Stock('AMD', 'SMART', 'USD')
assert len(ib.reqContractDetails(amd)) == 1
"""
Explanation: We can see from this that AMD trades in different currencies on different exchanges.
Suppose we want the one in USD on the SMART exchange. The AMD contract is adjusted to
reflect that and becomes unique:
End of explanation
"""
intc = Stock('INTC', 'SMART', 'USD')
assert len(ib.reqContractDetails(intc)) == 1
"""
Explanation: Let's try the same for Intel:
End of explanation
"""
xxx = Stock('XXX', 'SMART', 'USD')
assert len(ib.reqContractDetails(xxx)) == 0
"""
Explanation: Let's try a non-existing contract:
End of explanation
"""
eurusd = Forex('EURUSD')
assert len(ib.reqContractDetails(eurusd)) == 1
"""
Explanation: or a Forex contract
End of explanation
"""
amd
ib.qualifyContracts(amd)
amd
"""
Explanation: With the qualifyContracts method, the extra information that is sent back
from the contract details request is used to fill in the original contracts.
Let's do that with amd and compare before and afterwards:
End of explanation
"""
contract_4391 = Contract(conId=4391)
ib.qualifyContracts(contract_4391)
assert contract_4391 == amd
"""
Explanation: TIP: When printing a contract, the output can be copy-pasted and it will be valid Python code.
The conId that is returned can by itself be used to uniquely specify a contract:
End of explanation
"""
qualContracts = ib.qualifyContracts(amd, intc, xxx, eurusd)
assert intc in qualContracts
assert xxx not in qualContracts
"""
Explanation: A whole bunch of contracts can be qualified at the same time. A list of all the successful ones is returned:
End of explanation
"""
matches = ib.reqMatchingSymbols('intc')
matchContracts = [m.contract for m in matches]
matches
assert intc in matchContracts
ib.disconnect()
"""
Explanation: There is also an API function to request stocks (only stocks) that match a pattern:
End of explanation
"""
|
t-vi/pytorch-tvmisc | misc/2D-Wavelet-Transform.ipynb | mit |
import pywt
from matplotlib import pyplot
%matplotlib inline
import numpy
from PIL import Image
import urllib.request
import io
import torch
URL = 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/bc/Zuse-Z4-Totale_deutsches-museum.jpg/315px-Zuse-Z4-Totale_deutsches-museum.jpg'
"""
Explanation: 2D Wavelet Transformation in PyTorch
Thomas Viehmann tv@lernapparat.de
The other day I got a question how to do wavelet transformation in PyTorch in a way that allows to compute gradients (that is gradients of outputs w.r.t. the inputs, probably not the coefficients). I like Pytorch and I happen to have a certain fancy for wavelets as well, so here we go.
We take an image of the Zuse Z4.
<center>(Image Credit: Clemens Pfeiffer, CC-BY 2.5 at Wikimedia)</center>
We will make use of PyTorch (of course) and the excellent PyWavelets aka pywt module. The latter includes a large library of wavelet coefficients as well as functions to perform wavelet transforms; we only use the former (the coefficients).
So let's import stuff.
End of explanation
"""
print(pywt.families())
"""
Explanation: Let us see what wavelets are available:
End of explanation
"""
w=pywt.Wavelet('bior2.2')
pyplot.plot(w.dec_hi[::-1], label="dec hi")
pyplot.plot(w.dec_lo[::-1], label="dec lo")
pyplot.plot(w.rec_hi, label="rec hi")
pyplot.plot(w.rec_lo, label="rec lo")
pyplot.title("Bior 2.2 Wavelets")
pyplot.legend()
dec_hi = torch.tensor(w.dec_hi[::-1])
dec_lo = torch.tensor(w.dec_lo[::-1])
rec_hi = torch.tensor(w.rec_hi)
rec_lo = torch.tensor(w.rec_lo)
"""
Explanation: For this demo we will use the Biorthogonal 2.2 Wavelets. As we will not properly deal with boundaries, this is a compromise between not using the (almost trivial) Haar wavelet and using more elaborate but larger wavelets.
When adapting this code to other wavelets, you will need to adjust the padding. To use multiple channels (color images) you would want to .view the channels into the batch dimension.
The basic idea is that for each coordinate direction you apply a high pass and a low pass filter with a stride of 2, and one can take PyTorch convolutions for this purpose. With orthogonal wavelets, you would feed the same wavelet into the transposed convolution for decoding, with biorthogonal wavelets, you have a separate set of coefficients for decoding.
I adjust the filters so that they are ready for use with PyTorch convolutions, i.e. with correlations of the form $Y_{k,l} = \sum \psi_ij X_{i+k,j+l}$.
We take a look at the wavelets.
End of explanation
"""
imgraw = Image.open(io.BytesIO(urllib.request.urlopen(URL).read())).resize((256,256))
img = numpy.array(imgraw).mean(2)/255
img = torch.from_numpy(img).float()
pyplot.figure()
pyplot.imshow(img, cmap=pyplot.cm.gray)
"""
Explanation: Let us have a black and white picture:
End of explanation
"""
filters = torch.stack([dec_lo.unsqueeze(0)*dec_lo.unsqueeze(1),
dec_lo.unsqueeze(0)*dec_hi.unsqueeze(1),
dec_hi.unsqueeze(0)*dec_lo.unsqueeze(1),
dec_hi.unsqueeze(0)*dec_hi.unsqueeze(1)], dim=0)
inv_filters = torch.stack([rec_lo.unsqueeze(0)*rec_lo.unsqueeze(1),
rec_lo.unsqueeze(0)*rec_hi.unsqueeze(1),
rec_hi.unsqueeze(0)*rec_lo.unsqueeze(1),
rec_hi.unsqueeze(0)*rec_hi.unsqueeze(1)], dim=0)
"""
Explanation: We define the tensor product filter banks, i.e. we multiply filters for the two coordinates.
End of explanation
"""
def wt(vimg, levels=1):
h = vimg.size(2)
w = vimg.size(3)
padded = torch.nn.functional.pad(vimg,(2,2,2,2))
res = torch.nn.functional.conv2d(padded, filters[:,None],stride=2)
if levels>1:
res[:,:1] = wt(res[:,:1],levels-1)
res = res.view(-1,2,h//2,w//2).transpose(1,2).contiguous().view(-1,1,h,w)
return res
"""
Explanation: We can now define the wavelet transform and its inverse using pytorch conv2d and conv_transpose2d.
For the recursion, we only continue to process the top left component with two low passes.
This is different from taking tensor-product of the full 1d wavelet basis in that we do not further refine one low-pass coordinate when the other coordinate has taken the high-pass.
This seems to be the convention used e.g. JPEG compression. I seem to remember that there are also stability reasons to keep the aspect ratio bounded for multi-resolution analysis, but I do not have a good reference to point to.
On the component with two low passes, one then applies another transform up to a desired level. We rearrange the four filter output into an image of the same size. To allow for full reconstruction, we would need to deal with the boundaries - either by adapting the wavelets or by padding more - but we do not do this.
As PyTorch's nn module does this by default, we consider batch x channels x height x width.
End of explanation
"""
def iwt(vres, levels=1):
h = vres.size(2)
w = vres.size(3)
res = vres.view(-1,h//2,2,w//2).transpose(1,2).contiguous().view(-1,4,h//2,w//2).clone()
if levels>1:
res[:,:1] = iwt(res[:,:1], levels=levels-1)
res = torch.nn.functional.conv_transpose2d(res, inv_filters[:,None],stride=2)
res = res[:,:,2:-2,2:-2]
return res
"""
Explanation: Similarly, we do the reconstruction (inverse wavelet transform) using the conv_transpose2d function. We drop the excess boundary coefficients.
End of explanation
"""
vimg = img[None,None]
res = wt(vimg,4)
pyplot.figure()
pyplot.imshow(res[0,0].data.numpy(),cmap=pyplot.cm.gray)
"""
Explanation: We can do this on our image. First the decomposition:
End of explanation
"""
rec = iwt(res, levels=4)
pyplot.imshow(rec[0,0].data.numpy(),cmap=pyplot.cm.gray)
"""
Explanation: And then the reconstruction.
End of explanation
"""
pyplot.imshow((rec-vimg).data[0,0].numpy(), cmap=pyplot.cm.gray)
pyplot.colorbar()
"""
Explanation: We can see where the reconstruction errors are:
End of explanation
"""
|
msschwartz21/craniumPy | experiments/glial_bridge/landmarks.ipynb | gpl-3.0 |
import deltascope as ds
import deltascope.alignment as ut
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from scipy.optimize import minimize
import os
import tqdm
import json
import datetime
"""
Explanation: Introduction: Landmarks
End of explanation
"""
# --------------------------------
# -------- User input ------------
# --------------------------------
data = {
# Specify sample type key
'30hpf': {
# Specify path to data directory
'path': '.\\Data\\30hpf\\Output-02-14-2019',
# Specify which channels are in the directory and are of interest
'channels': ['AT','ZRF']
},
'28hpf': {
'path': '.\Data\\28hpf\\Output-02-14-2019-yot-ilastik',
'channels': ['AT','ZRF']
},
'26hpf': {
'path': '.\Data\\26hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
},
'24hpf': {
'path': '.\Data\\24hpf\\Output-02-15-2019',
'channels': ['AT','ZRF']
},
'22hpf': {
'path': '.\Data\\22hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
}
}
"""
Explanation: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
End of explanation
"""
data_pairs = []
for s in data.keys():
for c in data[s]['channels']:
data_pairs.append((s,c))
"""
Explanation: We'll generate a list of pairs of stypes and channels for ease of use.
End of explanation
"""
D = {}
for s in data.keys():
D[s] = {}
for c in data[s]['channels']:
D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)
"""
Explanation: We can now read in all datafiles specified by the data dictionary above.
End of explanation
"""
# --------------------------------
# -------- User input ------------
# --------------------------------
# Pick an integer value for bin number
anum = 30
# Specify the percentiles which will be used to calculate landmarks
percbins = [50]
theta_step = np.pi/4
"""
Explanation: Calculate landmark bins
End of explanation
"""
lm = ds.landmarks(percbins=percbins, rnull=np.nan)
lm.calc_bins(D['28hpf']['AT'], anum, theta_step)
print('Alpha bins')
print(lm.acbins)
print('Theta bins')
print(lm.tbins)
"""
Explanation: Calculate landmark bins based on user input parameters and the previously specified control sample.
End of explanation
"""
lmdf = pd.DataFrame()
# Loop through each pair of stype and channels
for s,c in tqdm.tqdm(data_pairs):
print(s,c)
# Calculate landmarks for each sample with this data pair
for k,df in tqdm.tqdm(D[s][c].items()):
lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)
# Set timestamp for saving data
tstamp = datetime.datetime.now().strftime('%Y-%m-%d')
# Save completed landmarks to a csv file
lmdf.to_csv(os.path.join('.\Data',tstamp+'_landmarks.csv'))
# Save landmark bins to json file
bins = {
'acbins':list(lm.acbins),
'tbins':list(lm.tbins)
}
with open(os.path.join('.\Data', tstamp+'_landmarks_bins.json'), 'w') as outfile:
json.dump(bins, outfile)
"""
Explanation: Calculate landmarks
End of explanation
"""
|
jtwhite79/pyemu | verification/Freyberg/verify_unc_results.ipynb | bsd-3-clause |
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
"""
Explanation: verify pyEMU results with the Freyberg problem
End of explanation
"""
la = pyemu.Schur("freyberg.jcb",verbose=False,forecasts=[])
la.drop_prior_information()
jco_ord = la.jco.get(la.pst.obs_names,la.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
la.pst.write(ord_base+".pst")
"""
Explanation: Instantiate the pyemu object and drop the prior information. Then reorder the jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the jacobian.
End of explanation
"""
pv_names = []
predictions = ["sw_gw_0","sw_gw_1","or28c05_0","or28c05_1"]
for pred in predictions:
pv = jco_ord.extract(pred).T
pv_name = pred + ".vec"
pv.to_ascii(pv_name)
pv_names.append(pv_name)
"""
Explanation: extract and save the forecast sensitivity vectors
End of explanation
"""
prior_uncfile = "pest.unc"
la.parcov.to_uncfile(prior_uncfile,covmat_file=None)
"""
Explanation: save the prior parameter covariance matrix as an uncertainty file
End of explanation
"""
post_mat = "post.cov"
post_unc = "post.unc"
args = [ord_base + ".pst","1.0",prior_uncfile,
post_mat,post_unc,"1"]
pd7_in = "predunc7.in"
f = open(pd7_in,'w')
f.write('\n'.join(args)+'\n')
f.close()
out = "pd7.out"
pd7 = os.path.join("i64predunc7.exe")
os.system(pd7 + " <" + pd7_in + " >"+out)
for line in open(out).readlines():
print(line)
"""
Explanation: PREDUNC7
write a response file to feed stdin to predunc7
End of explanation
"""
post_pd7 = pyemu.Cov.from_ascii(post_mat)
la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions)
post_pyemu = la_ord.posterior_parameter
#post_pyemu = post_pyemu.get(post_pd7.row_names)
"""
Explanation: load the posterior matrix written by predunc7
End of explanation
"""
post_pd7.x
post_pyemu.x
delta = (post_pd7 - post_pyemu).x
(post_pd7 - post_pyemu).to_ascii("delta.cov")
print(delta.sum())
print(delta.max(),delta.min())
delta = np.ma.masked_where(np.abs(delta) < 0.0001,delta)
plt.imshow(delta)
df = (post_pd7 - post_pyemu).to_dataframe().apply(np.abs)
df /= la_ord.pst.parameter_data.parval1
df *= 100.0
print(df.max())
delta
"""
Explanation: The cumulative difference between the two posterior matrices:
End of explanation
"""
print((delta.sum()/post_pyemu.x.sum()) * 100.0)
print(np.abs(delta).sum())
"""
Explanation: A few more metrics ...
End of explanation
"""
args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"]
pd1_in = "predunc1.in"
pd1 = os.path.join("i64predunc1.exe")
pd1_results = {}
for pv_name in pv_names:
args[3] = pv_name
f = open(pd1_in, 'w')
f.write('\n'.join(args) + '\n')
f.close()
out = "predunc1" + pv_name + ".out"
os.system(pd1 + " <" + pd1_in + ">" + out)
f = open(out,'r')
for line in f:
if "pre-cal " in line.lower():
pre_cal = float(line.strip().split()[-2])
elif "post-cal " in line.lower():
post_cal = float(line.strip().split()[-2])
f.close()
pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal]
"""
Explanation: PREDUNC1
write a response file to feed stdin. Then run predunc1 for each forecast
End of explanation
"""
# save the results for verification testing
pd.DataFrame(pd1_results).to_csv("predunc1_results.dat")
pyemu_results = {}
for pname in la_ord.prior_prediction.keys():
pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]),
np.sqrt(la_ord.posterior_prediction[pname])]
"""
Explanation: organize the pyemu results into a structure for comparison
End of explanation
"""
f = open("predunc1_textable.dat",'w')
for pname in pd1_results.keys():
print(pname)
f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\
.format(pd1_results[pname][0],pyemu_results[pname][0],
pd1_results[pname][1],pyemu_results[pname][1]))
print("prior",pname,pd1_results[pname][0],pyemu_results[pname][0])
print("post",pname,pd1_results[pname][1],pyemu_results[pname][1])
f.close()
"""
Explanation: compare the results:
End of explanation
"""
f = open("pred_list.dat",'w')
out_files = []
for pv in pv_names:
out_name = pv+".predvar1b.out"
out_files.append(out_name)
f.write(pv+" "+out_name+"\n")
f.close()
args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"]
for i in range(36):
args.append(str(i))
args.append('')
args.append("n")
args.append("n")
args.append("y")
args.append("n")
args.append("n")
f = open("predvar1b.in", 'w')
f.write('\n'.join(args) + '\n')
f.close()
os.system("predvar1b.exe <predvar1b.in")
pv1b_results = {}
for out_file in out_files:
    pred_name = out_file.split('.')[0]
    f = open(out_file, 'r')
    for _ in range(3):
        f.readline()
    arr = np.loadtxt(f)
    pv1b_results[pred_name] = arr
"""
Explanation: PREDVAR1b
Write the necessary files to run PREDVAR1B
End of explanation
"""
omitted_parameters = [pname for pname in la.pst.parameter_data.parnme if pname.startswith("wf")]
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
omitted_parameters=omitted_parameters,
verbose=False)
df = la_ord_errvar.get_errvar_dataframe(np.arange(36))
df
"""
Explanation: now for pyemu
End of explanation
"""
fig = plt.figure(figsize=(6,8))
max_idx = 13
idx = np.arange(max_idx)
for ipred, pred in enumerate(predictions):
    arr = pv1b_results[pred][:max_idx, :]
    first = df[("first", pred)][:max_idx]
    second = df[("second", pred)][:max_idx]
    third = df[("third", pred)][:max_idx]
    ax = plt.subplot(len(predictions), 1, ipred + 1)
    #ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5)
    #ax.plot(first,color='b')
    #ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5)
    #ax.plot(second,color='g')
    #ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5)
    #ax.plot(third,color='r')
    ax.scatter(idx, arr[:, 1], marker='x', s=40, color='g',
               label="PREDVAR1B - first term")
    ax.scatter(idx, arr[:, 2], marker='x', s=40, color='b',
               label="PREDVAR1B - second term")
    ax.scatter(idx, arr[:, 3], marker='x', s=40, color='r',
               label="PREDVAR1B - third term")
    ax.scatter(idx, first, marker='o', facecolor='none',
               s=50, color='g', label='pyEMU - first term')
    ax.scatter(idx, second, marker='o', facecolor='none',
               s=50, color='b', label="pyEMU - second term")
    ax.scatter(idx, third, marker='o', facecolor='none',
               s=50, color='r', label="pyEMU - third term")
    ax.set_ylabel("forecast variance")
    ax.set_title("forecast: " + pred)
    if ipred == len(predictions) - 1:
        ax.legend(loc="lower center", bbox_to_anchor=(0.5, -0.75),
                  scatterpoints=1, ncol=2)
        ax.set_xlabel("singular values")
    else:
        ax.set_xticklabels([])
#break
plt.savefig("predvar1b_ver.eps")
"""
Explanation: generate some plots to verify
End of explanation
"""
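Alongside the visual check, a quick numerical comparison can flag any disagreement; this self-contained sketch uses made-up arrays standing in for one forecast's PREDVAR1B and pyEMU terms:

```python
import numpy as np

# Stand-ins for one forecast's variance terms from the two tools (invented values)
predvar1b_term = np.array([1.00, 0.75, 0.50, 0.25])
pyemu_term = predvar1b_term + 1e-9  # tiny numerical noise

max_abs_diff = np.abs(predvar1b_term - pyemu_term).max()
agree = np.allclose(predvar1b_term, pyemu_term, atol=1e-6)
print(max_abs_diff, agree)
```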
cmd_args = [os.path.join("i64identpar.exe"),ord_base,"5",
"null","null","ident.out","/s"]
cmd_line = ' '.join(cmd_args)+'\n'
print(cmd_line)
print(os.getcwd())
os.system(cmd_line)
identpar_df = pd.read_csv("ident.out",delim_whitespace=True)
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
verbose=False)
df = la_ord_errvar.get_identifiability_dataframe(5)
df
"""
Explanation: Identifiability
End of explanation
"""
diff = identpar_df["identifiability"].values - df["ident"].values
diff.max()
fig = plt.figure()
ax = plt.subplot(111)
axt = plt.twinx()
ax.plot(identpar_df["identifiability"])
ax.plot(df.ident.values)
ax.set_xlim(-10,600)
diff = identpar_df["identifiability"].values - df["ident"].values
#print(diff)
axt.plot(diff)
axt.set_ylim(-1,1)
ax.set_xlabel("parameter")
ax.set_ylabel("identifiability")
axt.set_ylabel("difference")
"""
Explanation: cheap plot to verify
End of explanation
"""
EricKightley/sparsekmeans | plots/heuristic/heuristic.ipynb | mit
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import hadamard
from scipy.fftpack import dct
%matplotlib inline
n = 6 #number of data points (columns in plot)
K = 3 #number of centroids
m = 4 #subsampling dimension
p = 10 #latent (data) dimension (rows in plot)
np.random.seed(0)
DPI = 300 #figure DPI for saving
"""
Explanation: Sparsified K-means Heuristic Plots
This file generates a bunch of figures showing heuristically how sparsified k-means works.
End of explanation
"""
def this_is_dumb(x):
    """ Return a shuffled copy of x (np.random.permutation(x) would do the same job). """
    y = np.copy(x)
    np.random.shuffle(y)
    return y
## Preconditioning plot
# Unconditioned
vals_unconditioned = [-20,20,1,-1,2,-2,3,-3,0,0]
X_unconditioned = np.array([this_is_dumb(vals_unconditioned) for i in range(n)]).T
# Conditioned
D = np.diag(np.random.choice([-1,1],p))
X_conditioned = dct(np.dot(D,X_unconditioned), norm = 'ortho', axis = 0)
## Subsampling plots
# Define the entries to set X
vals = [1 for i in range(m)]
vals.extend([0 for i in range(p-m)])
# Define X by permuting the values.
X = np.array([this_is_dumb(vals) for i in range(n)]).T
# means matrix
U = np.zeros((p,K))
# This is used to plot the full data in X (before subsampling)
Z = np.zeros_like(X)
# Generate two copies of X, one to plot just the column in question (YC) and one to plot the others (YO)
def get_col_X(col):
    YO = np.copy(X)
    YO[:, col-1] = -1
    YC = -np.ones_like(X)
    YC[:, col-1] = X[:, col-1]
    return [YO, YC]
# Generate a copy of U modified to plot the rows selected by the column we chose of X
def get_rows_U(col):
    US = np.copy(U)
    US[np.where(X[:, col-1] == 1)[0], :] = 1
    return US
"""
Explanation: Data Matrices
X is the data matrix, U is the centroid matrix. We define these, and then set up a few tricks to mask certain regions of the arrays for plotting selected columns. The idea is to generate two copies of X, one which is used to plot only the column in which we are interested, the other of which is used to plot the remainder of the data. We make two different plots so that we can set different alpha values (transparency) for the column in question vs. the rest of the data.
End of explanation
"""
def read_colors(path_in):
    """ Crappy little function to read in the text file defining the colors. """
    mycolors = []
    with open(path_in) as f_in:
        lines = f_in.readlines()
    for line in lines:
        line = line.lstrip()
        if line[0:5] == 'shade':
            mycolors.append(line.split("=")[1].strip())
    return mycolors
CM = read_colors('CM.txt')
CA = read_colors('CA.txt')
CD = ['#404040','#585858','#989898']
# Set the axes colors
mpl.rc('axes', edgecolor = CD[0], linewidth = 1.3)
# Set up the colormaps and bounds
cmapM = mpl.colors.ListedColormap(['none', CM[1], CM[3]])
cmapA = mpl.colors.ListedColormap(['none', CA[1], CA[4]])
bounds = [-1,0,1,2]
normM = mpl.colors.BoundaryNorm(bounds, cmapM.N)
normA = mpl.colors.BoundaryNorm(bounds, cmapA.N)
bounds_unconditioned = [i for i in range(-5,6)]
cmap_unconditioned = mpl.colors.ListedColormap(CA[::-1] + CM)
norm_unconditioned = mpl.colors.BoundaryNorm(bounds_unconditioned, cmap_unconditioned.N)
"""
Explanation: Color Functions
We import the colors from files called 'CM.txt' (for main colors) and 'CA.txt' (for alternate colors). These text files are generated from www.paletton.com, by exporting the colors as text files. The text parsing is hacky but works fine for now. This makes it easy to try out different color schemes by directly exporting from patellon.
We use the colors to set up a colormap that we'll apply to the data matrices. We manually set the boundaries on the colormap to agree with how we defined the various matrices above. This way we can get different colored blocks, etc.
End of explanation
"""
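As a self-contained illustration of that mapping (with placeholder colors rather than the paletton shades used above), BoundaryNorm turns a value into the index of the bin it falls in, and ListedColormap turns that index into a color:

```python
import matplotlib as mpl

# With bounds [-1, 0, 1, 2]: values in [-1, 0) map to bin 0 ('none'),
# [0, 1) to bin 1 ('blue'), and [1, 2) to bin 2 ('red').
cmap = mpl.colors.ListedColormap(['none', 'blue', 'red'])
norm = mpl.colors.BoundaryNorm([-1, 0, 1, 2], cmap.N)
print(int(norm(-0.5)), int(norm(0.5)), int(norm(1.5)))
```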
def drawbrackets(ax):
    """ Way hacky. Draws the brackets around X. """
    ax.annotate(r'$n$ data points', xy=(0.502, 1.03), xytext=(0.502, 1.08), xycoords='axes fraction',
                fontsize=14, ha='center', va='bottom',
                arrowprops=dict(arrowstyle='-[, widthB=4.6, lengthB=0.35', lw=1.2))
    ax.annotate(r'$p$ dimensions', xy=(-.060, 0.495), xytext=(-.22, 0.495), xycoords='axes fraction',
                fontsize=16, ha='center', va='center', rotation=90,
                arrowprops=dict(arrowstyle='-[, widthB=6.7, lengthB=0.36', lw=1.2, color='k'))

def drawbracketsU(ax):
    ax.annotate(r'$K$ centroids', xy=(0.505, 1.03), xytext=(0.505, 1.08), xycoords='axes fraction',
                fontsize=14, ha='center', va='bottom',
                arrowprops=dict(arrowstyle='-[, widthB=2.25, lengthB=0.35', lw=1.2))

def formatax(ax):
    """ Probably want to come up with a different way to do this. Sets a bunch of formatting options we want. """
    ax.tick_params(
        axis='both',        # changes apply to both axes
        which='both',       # both major and minor ticks are affected
        bottom='off',       # ticks along the bottom edge are off
        top='off',          # ticks along the top edge are off
        left='off',
        right='off',
        labelbottom='off',
        labelleft='off')    # labels along the bottom edge are off
    ax.set_xticks(np.arange(0.5, n - .5, 1))
    ax.set_yticks(np.arange(0.5, p - .5, 1))
    ax.grid(which='major', color=CD[0], axis='x', linestyle='-', linewidth=1.3)
    ax.grid(which='major', color=CD[0], axis='y', linestyle='--', linewidth=.5)

def drawbox(ax, col):
    """ Draw the gray box around the column. """
    s = col - 2
    box_X = ax.get_xticks()[0:2]
    box_Y = [ax.get_yticks()[0] - 1, ax.get_yticks()[-1] + 1]
    box_X = [box_X[0] + s, box_X[1] + s, box_X[1] + s, box_X[0] + s, box_X[0] + s]
    box_Y = [box_Y[0], box_Y[0], box_Y[1], box_Y[1], box_Y[0]]
    ax.plot(box_X, box_Y, color=CD[0], linewidth=3, clip_on=False)

def plot_column_X(ax, col):
    """ Draw data matrix with a single column highlighted. """
    formatax(ax)
    drawbrackets(ax)
    drawbox(ax, col)
    YO, YC = get_col_X(col)
    ax.imshow(YO,
              interpolation='none',
              cmap=cmapM,
              alpha=0.8,
              norm=normM)
    ax.imshow(YC,
              interpolation='none',
              cmap=cmapM,
              norm=normM)

def plot_column_U(ax, col):
    """ Draw means matrix with rows corresponding to col highlighted. """
    formatax(ax)
    drawbracketsU(ax)
    US = get_rows_U(col)
    ax.imshow(US,
              interpolation='none',
              cmap=cmapA,
              norm=normA)

def plot_column_selection(col, fn, save=False):
    """ This one actually generates the plots. Wraps plot_column_X and plot_column_U,
    saves the fig if we want to. """
    fig = plt.figure()
    gs = mpl.gridspec.GridSpec(1, 2, height_ratios=[1])
    ax0 = plt.subplot(gs[0])
    ax1 = plt.subplot(gs[1])
    plot_column_X(ax0, col)
    plot_column_U(ax1, col)
    if save == True:
        fig.savefig(fn, dpi=DPI)
    else:
        plt.show()
"""
Explanation: Plotting Functions
End of explanation
"""
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X_unconditioned,
interpolation = 'none')
#cmap=cmap_unconditioned,
#norm=norm_unconditioned)
ax1 = plt.subplot(gs[1])
formatax(ax1)
ax1.imshow(X_conditioned,
interpolation = 'none')
#cmap=cmap_unconditioned,
#norm=norm_unconditioned)
#ax1.imshow(X_unconditioned,
# interpolation = 'none',
# cmap=cmap_unconditioned,
# norm=norm_unconditioned)
plt.show()
# Make a plot showing the system before we subsample.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(Z,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat0.png',dpi=DPI)
# Plot the subsampled system.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat1.png',dpi=DPI)
# Pick out the first column.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
drawbox(ax0,1)
plot_column_X(ax0,1)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat2.png',dpi=DPI)
# make all 6 "final plots", one per column of X.
for i in range(1, n+1):
    fn = 'col' + str(i) + '.png'
    plot_column_selection(i, fn, save=True)
"""
Explanation: Generate the Plots
End of explanation
"""
lwahedi/CurrentPresentation | talks/MDI1/Slides.ipynb | mit
my_string = 'Hello World'
print(my_string)
"""
Explanation: Manipulating Data in Python
Laila A. Wahedi
Massive Data Institute Postdoctoral Fellow <br>McCourt School of Public Policy
Follow along: Wahedi.us, Current Presentation
Installing packages:
On a Mac:
Open terminal
On Windows:
Type cmd into the start menu
What is this?
Like an explorer window. The text at the start of the line tells you where you are.
To see what's there, type:
On Mac: ls (That's an L, not an i)
On Windows: dir
Installing Packages:
Install Packages
Install some packages so they are done by the time we need them.
Type:
pip install pandas
When that's done, type:
pip install matplotlib
pip install pickle
pip install statsmodels
Note: if you installed the Continuum Python distribution, you may already have some of these packages installed.
What just happened?
Pip is a download manager, it automatically looks online to find the package and installs it on your computer
Similar to the CRAN Mirror
Bonus: if you followed the link I sent to install Anaconda, you have the conda download manager too.
conda install works similarly to pip, and also works on many R packages when used with Jupyter notebooks.
Opening Jupyter
Open up another terminal/command prompt while your packages download
Navigate to the directory you want your script to be in
Type ls to see what folders are in the current folder
Type cd .. to go one folder up
Type cd folder_name to go to a sub-folder
Type mkdir folder_name to make a folder
Type:
jupyter notebook
What happened?
A browser window popped up
Address something along the lines of localhost:8888/tree
Address is listed in the terminal/command prompt
Go to this address from any browser to open your tree
Your Jupyter Tree
You should be in the folder you navigated to in terminal/cmd
You can make folders here too
Top right, select new notebook, python3
Open this notebook:
Move the file into the directory for your Jupyter Tree
Select the file from the Jupyter Tree
A new tab should open in your browser.
Add new cells to take notes as you go along.
Change the type of note cells to Markdown
Some Python Basics
What is code?
Set of clear instructions to the computer
Start with basic set of building blocks
Put those building blocks together
Goal of programming:
Break your task into building blocks
Put building blocks together into an <b> algorithm </b>
Our tasks:
Move and manipulate data using these building blocks.
Building Blocks
<img src="building_blocks.png">
* Ball and urn example
Our First Algorithm
<img src="algorithm1.png">
Basic Data Structures
Strings
Text variables
Declare with either '', or ""
End of explanation
"""
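As a small extra illustration (an aside, not in the original slides), both quote styles work the same way and strings come with handy built-in operations:

```python
greeting = "Hello"     # double quotes
name = 'World'         # single quotes work the same way
message = greeting + ', ' + name + '!'   # + glues strings together
print(message)
print(message.upper())  # strings have useful methods
print(len(message))     # len() counts the characters
```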
my_int = 2
my_float = 2.2
new_float = my_int+my_float
print(new_float)
type(new_float)
"""
Explanation: Numbers:
ints: integers
floats: numbers with decimals
Combine them to get a new float
End of explanation
"""
my_list = [0,1,2,3,4]
"""
Explanation: Lists
Declare with: []
Ordered list of objects
End of explanation
"""
print(my_list[1])
"""
Explanation: Zero indexed
End of explanation
"""
my_list[2]='hello'
print(my_list)
"""
Explanation: Lists are Mutable:
End of explanation
"""
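A related trick worth knowing (an aside, not in the original slides): slicing pulls out parts of a list, and append grows it in place:

```python
my_list = [0, 1, 2, 3, 4]
first_three = my_list[0:3]   # a slice: the start index is included, the stop index is not
last_two = my_list[-2:]      # negative indices count from the end
my_list.append(5)            # lists can grow in place
print(first_three, last_two, my_list)
```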
my_dictionary = {'apple':4,
'pear':'yum'}
print(my_dictionary['apple'])
"""
Explanation: Dictionaries
key value pairs indexed by key
Declared by {key:value}
End of explanation
"""
my_dictionary['numbers'] = my_list
print(my_dictionary['numbers'])
"""
Explanation: Add key value pairs later using indexing with brackets []
You can store any data type you want
End of explanation
"""
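An aside on working with dictionaries (not in the original slides): you can loop over key/value pairs and look up keys safely:

```python
my_dictionary = {'apple': 4, 'pear': 'yum'}
for key, value in my_dictionary.items():        # loop over key/value pairs
    print(key, value)
print('apple' in my_dictionary)                 # membership tests check the keys
print(my_dictionary.get('banana', 'not here'))  # .get avoids an error for missing keys
```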
my_set = {'thing1','thing2','cat in hat','thing1', 4,4}
print(my_set)
"""
Explanation: Sets
Like a list, but:
Contains unique values
Unordered
Declare with {}
End of explanation
"""
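A short aside (not in the original slides): because sets hold unique values, they support handy mathematical set operations:

```python
evens = {0, 2, 4, 6}
small = {0, 1, 2, 3}
print(evens & small)   # intersection: in both sets
print(evens | small)   # union: in either set
print(small - evens)   # difference: in small but not in evens
```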
my_tuple = (1,3,2)
print(my_tuple)
"""
Explanation: Tuples
Like an ordered list, but can't be changed
Declare with ()
End of explanation
"""
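A quick aside demonstrating both halves of that description (not in the original slides): tuples unpack like lists but refuse to change:

```python
point = (1, 3, 2)
x, y, z = point                  # a tuple can be unpacked into variables
try:
    point[0] = 9                 # but its elements cannot be reassigned
except TypeError as error:
    print('tuples are immutable:', error)
print(x, y, z)
```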
# Declare Data
my_data = 'hello '
my_other_data = 'world'
#Manipulate it
manipulated_data = my_data+my_other_data
#Output it:
print(manipulated_data)
"""
Explanation: Our First Algorithm: Strings
<img src="algorithm1.png">
End of explanation
"""
# Declare Data
my_data = 1
my_other_data = 5
#Manipulate it
manipulated_data = my_data/my_other_data
#Output it:
print(manipulated_data)
"""
Explanation: Our First Algorithm: Numbers
<img src="algorithm1.png">
End of explanation
"""
my_variable = 5
print(my_variable)
print(my_variable == 5)
"""
Explanation: Adding Choices
<img src="if_algorithm.png">
Writing Instructions Python Can Understand:
Whitespace matters
tabs indicate a block of code within an if statement or loop
: marks the start of the block
\ for a linebreak mid-line
Line breaks allowed inside (), [], {}
If statements
= to assign
Like <- in R
== to evaluate
End of explanation
"""
print(my_variable > 6)
print(my_variable in [1,4,7])
"""
Explanation: Compare values with:
<, <=, >, >=, ==, !=, in, not in,
End of explanation
"""
True + True
"""
Explanation: Booleans and Indicators
booleans are indicators of True or False
In Python3, True = 1, and False = 0
End of explanation
"""
my_bool = 'ice cream'
if my_bool == 'ice cream':
    print('yay')
elif my_bool == 'cake':
    print('woo!')
else:
    print('Woe and great tragedy!')
"""
Explanation: None is a null value
None evaluates to false in an if statement.
If statements:
Used to change behavior depending on conditionals
Declare the statement
Declare the conditional action within a code block, or indentation:
Declare alternative actions in else
Stack conditionals with elif
End of explanation
"""
check = True
# check = False
# check = None
# check = 'monkey'
# check = 0
# check = 10
print('Check is:', check)
if check == 'monkey':
    print('banana')
elif check:
    print('yes')
else:
    print('no')
if 1 not in [1,2,3]:
    print('not not in')
if 1 in [1,2,3]:
    print('in')
"""
Explanation: Play around yourself:
End of explanation
"""
n = 0
while n < 5:
    print(n)
    n = n + 1
"""
Explanation: Repetition
<img src="loops.png">
While loops
Do something in block of code until condition met
Make sure you change the condition in every loop, or it will go forever
End of explanation
"""
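A related escape hatch (an aside, not in the original slides): break lets you exit a loop from inside its body, which is the usual way out of a while True loop:

```python
n = 0
while True:          # this would run forever...
    n = n + 1
    if n >= 5:
        break        # ...but break jumps out of the loop immediately
print(n)
```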
print('use a range:')
for i in range(3):
    print(i)
print('use a range with a start and a stop:')
for i in range(3,6):
    print(i)
print('iterate through a list:')
for i in my_list:
    print(i)
"""
Explanation: For loops
repeat block of code for:
a certain number of iterations
For every element in a list
range() gives you an iterator
End of explanation
"""
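One more looping tool worth knowing (an aside, not in the original slides): enumerate gives you the position and the element together:

```python
animals = ['cat', 'dog', 'frog']
for position, animal in enumerate(animals):   # position and value at the same time
    print(position, animal)
```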
my_list = [0,1,'cat',None,'frog',3]
animals = []
nums = []
for i in my_list:
    if type(i)==str:
        animals.append(i)
    elif type(i)==int:
        nums.append(i)
    else:
        pass
print(animals)
print(nums)
"""
Explanation: Put them together:
Play around on your own
End of explanation
"""
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# from ggplot import *
import pickle
import statsmodels.api as sm
"""
Explanation: Learn More:
These are the basic building blocks for scripting. To learn more about how to put them together to greater effect:
Take an introductory computer science course online
Check out GU Women Coders.
Can't make it in person? Their website has lots of great resources.
http://guwecode.georgetown.domains/
Import packages
Like library(package) in R
Pandas is a dataframe data structure like dataframes in R
matplotlib is a plotting package similar to plotting in Matlab
you can view plots inline in pandas notebooks with the inline command
ggplot is a plotting package built on matplotlib like ggplot2 in R
Pickle lets you save your workspace
Statsmodels contains many of the statistical models you know and love
End of explanation
"""
baad_covars = pd.read_csv('BAAD_1_Lethality_Data.tab',sep='\t')
"""
Explanation: Pandas Data Frames
Using Documentation
Pandas website
Stack Overflow
Copy errors into google
Look up syntax differences with R
Load Data from a comma separated file
Start by googling it: http://lmgtfy.com/?q=pandas+load+csv
We will use the Big Allied and Dangerous Data from START
https://dataverse.harvard.edu/file.xhtml?fileId=2298519&version=RELEASED&version=.0
End of explanation
"""
baad_covars.head()
"""
Explanation: Look at the data
End of explanation
"""
baad_covars.rename(columns = {'cowmastercountry':'country',
'masterccode':'ccode',
'mastertccode3606':'group_code',
'fatalities19982005':'fatalities'},
inplace = True)
baad_covars.replace({'country':{'United States of America':'US'}},
inplace = True)
print('Dimensions: ',baad_covars.shape)
baad_covars.head()
"""
Explanation: Rename things and adjust values
End of explanation
"""
#Set the index
baad_covars.set_index(['group_code'],inplace = True)
baad_covars.head()
"""
Explanation: Set a useful index
End of explanation
"""
baad_covars.loc[:, 'fatalities'].head()
"""
Explanation: Slicing
Get specific values from the dataframe.
Pandas has several slice operators.
iloc can be used to index the row by ordered integer. i.e. first row is 0, second row is 1, etc. Use this option sparingly. Better practice to use the index you have created.
loc uses the named index and columns.
Index using [row, columns]
For multiple columns, put your column names in a list
Use : for all values
Notice that the output keeps the index names.
End of explanation
"""
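A tiny self-contained contrast of loc and iloc (using a made-up frame, not the BAAD data):

```python
import pandas as pd

# A small stand-in frame with a named index
df = pd.DataFrame({'fatalities': [0, 5, 2]},
                  index=['group_a', 'group_b', 'group_c'])
by_label = df.loc['group_b', 'fatalities']   # loc uses the named index
by_position = df.iloc[1, 0]                  # iloc uses the integer position
print(by_label, by_position)
```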
baad_covars.loc[:,['OrgAge']].plot.density()
print(baad_covars.loc[:,['OrgAge']].mean())
baad_covars.loc[:,['fatalities']].plot.hist(bins=20)
"""
Explanation: Look at the data
End of explanation
"""
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['group','country']].head()
"""
Explanation: Slicing Using Conditionals
Put conditionals in parentheses
Stack multiple conditionals using:
& when both conditions must always apply
| when at least one condition must apply
End of explanation
"""
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['terrStrong']] = None
baad_covars.loc[(baad_covars.fatalities>1) | (baad_covars.degree>=1),
['terrStrong']].head()
"""
Explanation: Handling Missing Values
First let's make some:
End of explanation
"""
baad_covars.loc[baad_covars.terrStrong.isnull(),'terrStrong'].head()
"""
Explanation: Handling Missing Values
We could index by them
End of explanation
"""
baad_covars['terrStrong'] = baad_covars.terrStrong.fillna(-77)
baad_covars.terrStrong.head()
"""
Explanation: Handling Missing Values
We could fill them:
End of explanation
"""
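Besides filling, you can drop the rows containing missing values; a self-contained sketch on a made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 3.0]})
filled = df['A'].fillna(-77)   # replace missing values with a sentinel
dropped = df.dropna()          # or drop the rows that contain them
print(filled.tolist(), len(dropped))
```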
baad_covars['big'] = 0
baad_covars.loc[(baad_covars.fatalities>1) |
(baad_covars.degree>=1),
'big']=1
baad_covars.big.head()
"""
Explanation: Making New Columns
Assign values to a new column based on other columns:
End of explanation
"""
baad_covars.reset_index(inplace=True)
baad_covars.head()
"""
Explanation: Reindexing: Pop the index out without losing it
End of explanation
"""
baad_covars.set_index(['group','country'],inplace = True)
baad_covars.head()
"""
Explanation: Set a multi-index
End of explanation
"""
indonesia_grps = baad_covars.xs('Indonesia',level = 'country',drop_level=False)
indonesia_grps = indonesia_grps.loc[indonesia_grps.fatalities>=1,['degree','ContainRelig',
'ContainEthno','terrStrong',
'ordsize','OrgAge']]
indonesia_grps.head()
"""
Explanation: Using the new index, make a new dataframe
Note the new slicing operator for multi-index
End of explanation
"""
little_df = pd.DataFrame([1,2,3,4,5],columns = ['A'])
little_df['B']=[0,1,0,1,1]
copied_df = little_df
print('before:')
print(copied_df)
little_df.loc[little_df.A == 3,'B'] = 7
print('after')
copied_df
"""
Explanation: Warning: Making copies
If you set a variable as equal to an object, Python creates a reference rather than copying the whole object. More efficient, unless you really want to make a copy
End of explanation
"""
import copy
little_df = pd.DataFrame([1,2,3,4,5],columns = ['A'])
little_df['B']=[0,1,0,1,1]
copied_df = little_df.copy()
print('before:')
print(copied_df)
little_df.loc[little_df.A == 3,'B'] = 7
print('after')
copied_df
"""
Explanation: What happened?
copied_df changed when little_df changed.
Let's fix that by making a real copy. Pandas objects have a .copy() method; for general Python objects the copy module (copy.copy / copy.deepcopy) does the same job.
End of explanation
"""
indonesia_grps.to_csv('indonesia.csv')
pickle.dump(indonesia_grps, open('indonesia.p','wb'))
indonesia_grps = pickle.load(open('indonesia.p','rb'))
"""
Explanation: Saving
Unlike R or Stata, can't just save your workspace
Save as a csv now that we have the data we want
Pickle a variable to recreate it without having to reset indexes, etc.
End of explanation
"""
C = pd.DataFrame(['apple','orange','grape','pear','banana'],
columns = ['C'],
index = [2,4,3,0,1])
little_df['C'] = C
little_df
"""
Explanation: Next time:
Scrape Data from the internet
Clean the data
Merge that data into our data
Run a basic stats and ML model
Until then, here's some reference code on merging your data sets
Merging and Concatenating
Merges automatically if shared index
End of explanation
"""
C = pd.DataFrame(['apple','orange','grape','apple'],
columns = ['C'],
index = [2,4,3,'a'])
C['cuts']=['slices','wedges','whole','spirals']
print('C:')
print(C)
print('Inner: Intersection')
print(little_df.merge(right=C,
how='inner',
on=None,
left_index = True,
right_index =True))
print('Outer: Keep all rows')
print(little_df.merge(right=C,
how='outer',
on=None,
left_index = True,
right_index =True))
print('Left: Keep little_df')
print(little_df.merge(right=C,
how='left',
on=None,
left_index = True,
right_index =True))
print('Right: Keep C')
print(little_df.merge(right=C,
how='right',
on=None,
left_index = True,
right_index =True))
print('Outer, merging on column instead of index')
print(little_df.merge(right=C,
how='outer',
on='C',
left_index = True,
right_index =True))
"""
Explanation: Joins
Same as SQL, inner and outer
End of explanation
"""
add_df = pd.DataFrame({'A':[6],'B':[7],'C':'peach'},index= ['p'])
little_df = pd.concat([little_df,add_df])
little_df
"""
Explanation: Concatenate
Stack dataframes on top of one another
Stack dataframes beside one another
End of explanation
"""
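The "beside one another" case uses axis=1; a small self-contained sketch (made-up frames):

```python
import pandas as pd

left = pd.DataFrame({'A': [1, 2]})
right = pd.DataFrame({'B': [3, 4]})
side_by_side = pd.concat([left, right], axis=1)   # axis=1 stacks beside one another
print(side_by_side)
```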
IESD/cegads-domestic-model | cegads/examples/Basic usage.ipynb | gpl-2.0
%pylab inline
import pandas as pd
"""
Explanation: cegads-domestic-model
This ipython notebook describes the basic usage of the cegads-domestic-model python library. The library implements a simple domestic appliance model based on data from chapter three of the DECC ECUK publication (https://www.gov.uk/government/collections/energy-consumption-in-the-uk) and provides a convenient interface for generating household simulations at the appliance level.
Installation
pip install [--upgrade] cegads-domestic-model
or visit the github repo and download the code.
The implementation is based on pandas so you will need to install that before it will work.
iPython setup
I'm now going to setup ipython with the %pylab inline magic to prepare matplotlib to create inline plots. I also import pandas which will be used later.
End of explanation
"""
from cegads import ScenarioFactory
factory = ScenarioFactory()
"""
Explanation: cegads.Scenario
Scenario instances encapsulate domestic consumption statistics for a given year and enable the creation of Household and Appliance instances with characteristics drawn from those data. Most basic usage of this library will start with the creation of a Scenario.
To create a Scenario, it is recommended to use a ScenarioFactory.
Here I import the ScenarioFactory class and instantiate a ScenarioFactory instance.
End of explanation
"""
wet_appliance_keys = ['Washing Machine', 'Dishwasher', 'Tumble Dryer', 'Washer-dryer']
df = factory._data.stack().unstack(level=0)
f, [ax1,ax2] = plt.subplots(1, 2, figsize=(12, 4))
for key in wet_appliance_keys:
    ax1.plot(df.unstack(level=0).index, df[key]['consumption_per_appliance'], label=key)
    ax2.plot(df.unstack(level=0).index, df[key]['appliances_per_household'], label=key)
plt.suptitle("Wet Appliances")
ax2.set_title("appliance numbers per household")
ax2.set_ylabel("appliances per household")
ax2.legend(loc=6, fontsize=8)
ax1.set_title("consumption per appliance")
ax1.set_ylabel("consumption per appliance (Wh/year)")
ax1.legend(loc=1, fontsize=8)
plt.show()
"""
Explanation: The default ScenarioFactory inherits from the ECUK class which loads the full ECUK dataset. The ScenarioFactory loads data from the ECUK tables 3.08 (the number of households in UK by year), 3.10 (the total consumption of each appliance category by year) and 3.12 (appliance ownership by year). It calculates the number of appliances per household and the consumption per appliance for all available years.
We can inspect the data, though this is not part of the public API and so may change.
End of explanation
"""
year = 2013
scenario = factory(year)
"""
Explanation: The ScenarioFactory is callable directly. Calling the factory with an integer year value will return a Scenario instance loaded with data from the requested year.
I can now pass a year into the factory to generate my Scenario. Here I load data from 2013.
End of explanation
"""
f, [ax2, ax1] = plt.subplots(1, 2, figsize=(12, 4), sharex=True)
ind = np.arange(len(scenario.index))
width = 0.65
ax1.bar(ind, scenario.appliances_per_household, width, color="red")
ax2.bar(ind, scenario.consumption_per_appliance, width, color="blue")
for ax in [ax1, ax2]:
    ax.set_xticks(ind + width/2.)
    ax.set_xticklabels(scenario.index, rotation=90)
    ax.set_xlim(0, len(scenario._data.index))
ax1.axhline(y=1, ls="--", color="black", lw=1)
ax1.set_ylabel("appliances per household")
ax2.set_ylabel("consumption per\nappliance (Wh)")
plt.tight_layout()
"""
Explanation: We can inspect the underlying data for the given year. Here I extract the data and create a plot showing appliances per household. For most appliances the number per household is less than 1.0.
End of explanation
"""
test_appliances = [scenario.appliance(app, 60) for app in wet_appliance_keys]
test_appliances
"""
Explanation: Generating Appliance instances
Scenario instances are a convenient source of Appliance instances. The Scenario.appliance() method returns an appliance of the requested type with the appropriate annual consumption value allocated from the scenario data. In order to generate an appliance it is necessary to also provide a value for the appliances duty_cycle. Since all appliances are modelled as square waves, this value determines the wavelength of the square wave.
Here I create appliance instances for the wet appliances.
End of explanation
"""
f, [ax1, ax2] = plt.subplots(1, 2, figsize=(12, 4))
for app in test_appliances:
    ax1.plot(app.profile.index, app.profile * 100, label=app.name)
    ax2.plot(app.profile.index, app.profile.diff()*60*app.daily_total, label=app.name)  # * 60 for Wh -> W conversion
ax1.legend(loc=2, fontsize=8)
ax2.legend(loc=2, fontsize=8)
ax1.set_title("cumulative distribution")
ax2.set_title("actual consumption")
ax1.set_ylabel('cumulative frequency (%)')
ax2.set_ylabel('load (W)')
for ax in [ax1, ax2]:
    ax.set_xlabel('time')
    ax.xaxis.set_major_locator(mpl.dates.HourLocator(interval=3))
    ax.xaxis.set_major_formatter(mpl.dates.DateFormatter("%H:%M"))
ax1.yaxis.set_major_formatter(mpl.ticker.FormatStrFormatter("%.0f%%"))
plt.show()
"""
Explanation: The appliances are represented above by three attributes: the appliance name; the duty_cycle; and the daily consumption. The first two were provided as arguments to the Scenario.appliance method, the last one was allocated by the scenario by dividing the annual consumption per appliance figure by 365.
Appliance instances contain a reference to an ApplianceModel instance which does the heavy lifting. ApplianceModel instances have access to data from table 3.11 in the ECUK data and (for appliances that are mapped) can access a daily profile shape from here. The profile can be used with the daily total consumption to generate a consumption profile adjusted for the scenario year.
In the case of wet appliances there is only one profile provided in the ECUK data. As a consequence, though they have different magnitudes, all the wet appliances have the same shape.
Here I plot the cumulative distribution used by the model and also construct a consumption profile for each of the wet appliances.
End of explanation
"""
import datetime  # needed for the start date below

freq = "1Min"
start = datetime.datetime(year, 1, 1)
days = 7
test_simulations = [app.simulation(days, freq, start=start) for app in test_appliances]
"""
Explanation: Appliance simulation
To generate a simulated dataset, call the Appliance.simulation() method. The method requires the number of days and the required frequency (in pandas format) as arguments. It can also take optional keyword arguments (in this case I have passed in a start date - the default would be datetime.datetime.today()).
I have collected four simulations into a list with a list comprehension. Each result is a pandas.Series.
End of explanation
"""
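A self-contained aside on those pandas frequency strings (using "1min", the minute alias, with an invented date range):

```python
import datetime
import pandas as pd

# The "1min" frequency string gives one sample per minute
idx = pd.date_range(start=datetime.datetime(2013, 1, 1), periods=5, freq="1min")
print(idx[0], idx[-1])
```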
f, ax = plt.subplots(1, 1, figsize=(18, 3))
for app, sim in zip(test_appliances, test_simulations):
    ax.plot(sim.index, sim*60, label=app.name)
ax.legend(fontsize=10, loc="best")
ax.xaxis.set_major_formatter(mpl.dates.DateFormatter("%d-%b"))
ax.set_ylabel("Consumption (W)")
ax.set_ylim(top=ax.get_ylim()[1]*1.2)
ax.grid()
plt.show()
"""
Explanation: Plotting the results shows the square wave form of the simulation. There is one duty cycle each day. The width of the cycle is determined by the user, the height is calculated from the cycle width and the daily consumption figure. The timing of the cycle is determined by drawing randomly from the overall consumption distribution.
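As a back-of-the-envelope check of that height calculation (the numbers below are assumptions for illustration, not values taken from the model):

```python
# Both numbers below are assumptions for illustration, not values from the model.
daily_wh = 254.0       # assumed daily consumption for one appliance, in Wh
cycle_minutes = 80.0   # assumed duty-cycle width, in minutes
# Spreading the daily energy evenly over the cycle gives the on-power in watts.
on_power_w = daily_wh / (cycle_minutes / 60.0)
print(on_power_w)  # 190.5 W for these assumed numbers
```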
End of explanation
"""
appliances_to_consider = [
('Washing Machine', 80),
('Dishwasher', 100),
('Tumble Dryer', 120),
('Washer-dryer', 180)
]
"""
Explanation: Generating Household instances
Household objects are a simple collection of Appliance instances with convenient wrapper functions to run simulations and return merged pandas.DataFrame objects containing the simulation results for each appliance.
As we saw above, the Scenario instance has information about how many appliances of each type are owned per household. The Scenario.household() method uses this information to generate Household instances with the appropriate number of appliances. It returns a randomly generated Household instance with a collection of Appliance instances appropriate to the scenario year.
The method takes a single argument. The argument is a list of 2-tuples as follows.
End of explanation
"""
n = 150
households = [scenario.household(appliances_to_consider) for i in range(n)]
for h in households[:3]:
print(h)
"""
Explanation: Each 2-tuple represents an appliance type and a duty_cycle. That is, the width of the square wave to be generated by the appliance during the simulation. Passing in these data as arguments to the Scenario.household() method will define the list of appliances to consider.
note: it is possible (and common) for a household to have no appliances
Here I create 150 households with a list comprehension, passing each the appliances_to_consider variable defined above. Looking at the first three items on the list we can see that each household is loaded with appliances.
End of explanation
"""
names = ["household {:03}".format(i + 1) for i, h in enumerate(households) if len(h)]
result = pd.concat([h.simulation(days, freq, start=start) for h in households if len(h)], keys=names, axis=1)
result.columns.names = ['household', 'appliance']
"""
Explanation: Multiple-household simulation
Here I will use pandas.concat to combine the simulation results from all 150 households. I will also apply a unique name to each household to group the resulting dataset. Note that I am also ignoring empty households with a filter in the list comprehension.
this is the step that generates the data - it may take a few seconds
End of explanation
"""
loc = mpl.dates.DayLocator(interval=2)
fmt = mpl.dates.DateFormatter("%d-%b")
xax, yax = 4, 4
f, axes = plt.subplots(xax, yax, sharex=True, sharey=True, figsize=(12, 6))
for row, ax_row in enumerate(axes):
for col, ax in enumerate(ax_row):
name = names[row*yax + col]
for key in result[name]:
ax.plot(result.index, result[name][key])
ax.set_title(name)
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_major_formatter(fmt)
plt.tight_layout()
plt.show()
"""
Explanation: Now, I can plot the data from some of these households.
End of explanation
"""
f, ax = plt.subplots(figsize=(12, 2))
ax.plot(result.index, result*60, alpha=0.5, lw=0.25)
plt.show()
"""
Explanation: Combining the results from all the households produces a bit of a mess. We can see that the overall usage pattern is structured.
End of explanation
"""
df = result.copy()
df.columns = df.columns.droplevel()
appliance_mean_profile = df.groupby(df.columns, axis=1).mean()
f, ax = plt.subplots(figsize=(12, 4))
for key in appliance_mean_profile:
ax.plot(appliance_mean_profile.index, appliance_mean_profile[key]*60, alpha=0.75, label="{}{}".format(key[:-2], 's'))
ax.plot(df.index, df.mean(axis=1)*60, color="black", lw=1.5) #average across all households
ax.set_ylabel("appliance load (W)")
plt.legend(fontsize=10)
plt.show()
"""
Explanation: The mean consumption of each appliance type shows that there are differences between appliances with washer-dryers consuming the most and washing machines the least. This is a reflection of the data in ECUK table 3.10.
End of explanation
"""
from cegads import ECUK
ecuk = ECUK()
for device in wet_appliance_keys:
print("{:20} {}".format(device, ecuk(2013, device).consumption_per_appliance / 365))
"""
Explanation: Discussion
So the library has generated minutely profiles of consumption for each appliance in 150 households. If the model works correctly we should see that the model profile presented above matches the simulated output. That is, we should see that the combination of all the square waves should match roughly with the smooth model profile for each appliance type. This is the ultimate purpose of the model.
Total consumption
Total consumption for each appliance type is determined by the ECUK data in table 3.10. We can dig into the library to find these raw figures.
End of explanation
"""
totals = df.sum() / 7 # total consumption divided by 7 for each appliance
totals.groupby(totals.index).mean() # average across households for each appliance type
"""
Explanation: We can now look at the average daily consumption of each appliance type in our simulation to see if they match.
End of explanation
"""
sims = [app.simulation(365, "30Min") for app in test_appliances]
shapes = [sim.groupby(sim.index.time).mean() for sim in sims]
f, axes = plt.subplots(1, len(sims), figsize=(16, 2.5), sharey=True)
for ax, app, shape in zip(axes, test_appliances, shapes):
i = [datetime.datetime.combine(app.profile.index[0], t) for t in shape.index]
ax.plot_date(i, shape * 2, color='red', label="simulation", ls="-", marker=None) # convert Wh per half-hour to W (*2)
ax.plot(app.profile.index, app.profile.diff() * app.daily_total * 60, color="black", lw=1.5, label="model")
ax.set_title(app.name)
ax.xaxis.set_major_locator(mpl.dates.HourLocator(interval=6))
ax.xaxis.set_major_formatter(mpl.dates.DateFormatter('%H:%M'))
ax.set_ylabel("consumption (W)")
ax.legend(loc="best", fontsize=8)
"""
Explanation: We might expect these figures to match precisely but in fact they don't. This is due to an artifact in the modelling process. Sometimes a duty cycle begins near the end of the simulation period (or ends near the beginning) and so consumption for that cycle actually passes over the edge of the dataset. We can expect that simulated consumption should never exceed these figures. It is only a problem at the very edges of the simulation period (though at the boundary between days it is possible for an appliance to be running two duty cycles at the same time). This can be improved.
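The clipping effect can be sketched with toy numbers (these are assumptions for illustration, not the library's internals): a cycle drawn to start shortly before the end of the simulation only books part of its consumption inside the window.

```python
# Toy illustration of the clipping artifact at the simulation boundary
# (all numbers are assumptions for illustration).
cycle_minutes = 60
energy_per_min = 2.0         # Wh booked per simulated minute while "on"
sim_end = 24 * 60            # a 1-day simulation, in minutes
cycle_start = sim_end - 30   # cycle drawn to start 30 min before the end
minutes_inside = min(cycle_start + cycle_minutes, sim_end) - cycle_start
booked = minutes_inside * energy_per_min
full = cycle_minutes * energy_per_min
print(booked, full)  # 60.0 vs 120.0: half the cycle's energy falls outside
```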
Overall consumption profile
The profile of consumption is also determined by the ECUK data (table 3.11). We can access the data by digging into appliance instances. Due to the limitations of the raw data all wet appliances share the same profile, but have different consumption levels.
Running 365-day simulations on the example appliances allows us to generate comparable simulated average profiles.
End of explanation
"""
import pandas as pd
import numpy as np
import datetime
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
df = pd.read_csv('data/pm25.csv')
print(df.shape)
df.head()
df.isnull().sum()*100/df.shape[0]
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(
lambda row: datetime.datetime(year=row['year'],
month=row['month'], day=row['day'],hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
df.head()
df['year'].value_counts()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['pm2.5'], color='g')
g.set_title('pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
"""
Explanation: Time Series Forecast with Basic RNN
Dataset is downloaded from https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
"""
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
df.head()
plt.figure(figsize=(5.5, 5.5))
g = sns.lineplot(data=df['scaled_pm2.5'], color='purple')
g.set_title('Scaled pm2.5 between 2010 and 2014')
g.set_xlabel('Index')
g.set_ylabel('scaled_pm2.5 readings')
# 2014 data as validation data, before 2014 as training data
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
df_val.reset_index(drop=True, inplace=True)
df_val.head()
# The way this works is to take nb_timesteps consecutive observations as X and the next observation as the target,
## collecting the data with a stride-1 rolling window.
def makeXy(ts, nb_timesteps):
"""
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
"""
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
print(X_train[0], y_train[0])
print(X_train[1], y_train[1])
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
print(X_val[0], y_val[0])
print(X_val[1], y_val[1])
"""
Explanation: Note
Scaling the variables makes the optimization work better, so here we scale the variable into the [0,1] range
End of explanation
"""
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
from keras.models import Sequential
from keras.layers import SimpleRNN
from keras.layers import Dense, Dropout, Input
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import mean_absolute_error
model = Sequential()
model.add(SimpleRNN(32, input_shape=(X_train.shape[1:])))
model.add(Dropout(0.2))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])
model.summary()
save_weights_at = 'basic_rnn_model'
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
history = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
# load the best model
best_model = load_model('basic_rnn_model')
# Compare the prediction with y_true
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
# Measure MAE of y_pred and y_true
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
mae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)
print('MAE for the scaled validation set:', round(mae, 4))
# Check the metrics and loss of each apoch
mae = history.history['mae']
val_mae = history.history['val_mae']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(mae))
plt.plot(epochs, mae, 'bo', label='Training MAE')
plt.plot(epochs, val_mae, 'b', label='Validation MAE')
plt.title('Training and Validation MAE')
plt.legend()
plt.figure()
# Here I was using MAE as the loss too, that's why they looked almost the same...
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
"""
Explanation: Note
In the 2D arrays above, the shape of X_train and X_val is (number of samples, number of time steps)
However, RNN input has to be a 3D array: (number of samples, number of time steps, number of features per timestep)
There is only 1 feature here, which is scaled_pm2.5
So the code below converts the 2D arrays to 3D arrays
End of explanation
"""
import scipy.io
# numpy and matplotlib are used throughout this notebook
import numpy as np
import matplotlib.pyplot as plt
image_h, image_w = 32, 32
data = scipy.io.loadmat('faces_data.mat')
X_train = data['train_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w))
y_train = (data['train_labels'] - 1).ravel()
X_test = data['test_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w))
y_test = (data['test_labels'] - 1).ravel()
n_features = X_train.shape[1]
n_train = len(y_train)
n_test = len(y_test)
n_classes = len(np.unique(y_train))
print('Dataset loaded.')
print(' Image size : {}x{}'.format(image_h, image_w))
print(' Train images : {}'.format(n_train))
print(' Test images : {}'.format(n_test))
print(' Number of classes : {}'.format(n_classes))
"""
Explanation: Face recognition
The goal of this seminar is to build two simple (and very similar) face recognition pipelines using the scikit-learn package. Overall, we'd like to explore different representations and see which one works better.
Prepare dataset
End of explanation
"""
def plot_gallery(images, titles, h, w, n_row=3, n_col=6):
"""Helper function to plot a gallery of portraits"""
plt.figure(figsize=(1.5 * n_col, 1.7 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray, interpolation='nearest')
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
titles = [str(y) for y in y_train]
plot_gallery(X_train, titles, image_h, image_w)
"""
Explanation: Now we are going to plot some samples from the dataset using the provided helper function.
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
nc = KNeighborsClassifier(n_neighbors = 3)
nc.fit(X_train, y_train)
test_score = nc.score(X_test, y_test)
print('Test score: {}'.format(test_score))
"""
Explanation: Nearest Neighbour baseline
The simplest way to do face recognition is to treat raw pixels as features and perform Nearest Neighbor Search in the Euclidean space. Let's use KNeighborsClassifier class.
End of explanation
"""
# Populate variable 'X_train_processed' with samples each of which has zero mean and unit variance.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_processed = scaler.fit_transform(X_train.astype(float))
X_test_processed = scaler.transform(X_test.astype(float))
"""
Explanation: Not very impressive, is it?
Eigenfaces
All the dirty work will be done by the scikit-learn package. First we need to learn a dictionary of codewords. For that we preprocess the training set by making each face normalized (zero mean and unit variance).
End of explanation
"""
# note: RandomizedPCA was removed in scikit-learn 0.20; use PCA(svd_solver='randomized') there
from sklearn.decomposition import RandomizedPCA
def apply_pca(train, test, nc):
pca = RandomizedPCA(n_components = nc)
pca.fit(train)
train_pca = pca.transform(train)
test_pca = pca.transform(test)
return train_pca, test_pca, pca
X_train_pca, X_test_pca, pca = apply_pca(X_train_processed, X_test_processed, 64)
"""
Explanation: Now we are going to apply PCA to obtain a dictionary of codewords.
The RandomizedPCA class is what we need.
End of explanation
"""
# Visualize principal components.
plt.figure(figsize=(20,20))
for i in range(64):
plt.subplot(8, 8, i + 1)
plt.imshow(pca.components_[i].reshape(32, 32), cmap=plt.cm.gray, interpolation='nearest')
plt.xticks(())
plt.yticks(())
"""
Explanation: We plot a bunch of principal components.
End of explanation
"""
from sklearn.svm import SVC
def calc_svc_score(train_x, train_y, test_x, test_y):
svc = SVC(kernel = 'linear')
svc.fit(train_x, train_y)
return svc.score(test_x, test_y)
print('Test score: {}'.format(calc_svc_score(X_train_pca, y_train, X_test_pca, y_test)))
"""
Explanation: Transform training data, train an SVM and apply it to the encoded test data.
End of explanation
"""
n_components = [1, 2, 4, 8, 16, 32, 64]
accuracy = []
for nc in n_components:
X_train_pca, X_test_pca, pca = apply_pca(X_train_processed, X_test_processed, nc)
accuracy.append(calc_svc_score(X_train_pca, y_train, X_test_pca, y_test))
plt.figure(figsize=(10, 6))
plt.plot(n_components, accuracy)
print('Max accuracy: {}'.format(max(accuracy)))
"""
Explanation: How many components are sufficient to reach the same accuracy level?
End of explanation
"""
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import *
from numpy import *
"""
Explanation: Introduction to Numerical Problem Solving TX00BY09-3007
Assignment: 03 Graphical analysis
Description: Solve the problem 8 from previous exercises 02. Solve the problems 1c, 2a, and 4a from exercises 03.
Author: Joonas Forsberg
This document has been prepared for the third assignment for the course "Introduction to Numerical Problem Solving". It contains code examples and explanations for the written code. In addition, I've included reasoning behind the behavior where acceptable.
End of explanation
"""
def problem08(start, max_value, addition):
true_val = pi**4/90
total = 0
i = start
    while i != max_value:
# Always add float32 (single prec. value to total)
total += np.float32(1/(i**4))
i += addition
# Numerical true error
enum = np.float32(true_val) - np.float32(total)
# True percent relative error
eper = enum / true_val
print("End result is {}, true percent relative error {} %".format(np.float32(total), eper * 100))
problem08(1, 10000, 1)
problem08(10000, 0, -1)
"""
Explanation: Problem 08
The infinite series
$$f(n) = \sum_{k=1}^{n}\frac{1}{k^4}$$
converges on a value of $f(n) = \pi^4/90$ as <i>n</i> approaches infinity. Write a program in single precision to calculate $f(n)$ for $n = 10,000$ by computing the sum from $k = 1$ to 10,000. Then repeat the calculation but reverse the order – that is, from k = 10, 000 to 1 using increments of −1. In each case, compute the true percent relative error. Explain the results.
End of explanation
"""
def problem01(selection):
if selection == "a":
print("Matti")
elif selection == "b":
print("Teppo")
elif selection == "c":
# Create graph based on
x = np.arange(-2, 2, 0.001)
y = x**5 - x**2 + 2
# Create the graph
plt.plot(x, y, marker=".", linestyle=":")
        # Mark the known root (x = -1, where y = 0)
        plt.plot(-1, 0, 'ro:')
        # Mark the local maximum (0, 2.0) and the local minimum (0.737, 1.674)
        plt.plot(0, 2.0, 'go:')
        plt.plot(0.737007, 1.67427, 'yo:')
plt.grid()
plt.show()
else:
print("Invalid selection")
#problem01("a")
#problem01("b")
problem01("c")
"""
Explanation: <b>Results without specifying single precision (float32):<br /></b>
End result is 1.082323233710861, true percent relative error 2.558289554517808e-11 %<br />
End result is 1.0823232337108049, true percent relative error 3.077333064776834e-11 %<br />
In order to present the results in single precision, we need to cast the values to numpy.float32. By default the calculations are performed in double precision (numpy.float64), which makes the results more accurate. With single precision the results are less accurate, and the true error is small enough that the float32 difference can round to zero, which is why no error shows up when using single precision.
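As a small side experiment (separate from problem08 above), accumulating the same 10,000 terms in float64 and float32 makes the precision gap visible:

```python
import numpy as np

true_val = np.pi**4 / 90
k = np.arange(1, 10001, dtype=np.float64)
terms = 1.0 / k**4

sum64 = terms.sum()              # double-precision accumulation
sum32 = np.float32(0.0)
for t in terms:                  # single-precision accumulation
    sum32 += np.float32(t)

err64 = abs(sum64 - true_val)
err32 = abs(float(sum32) - true_val)
# float64 carries ~16 significant digits, float32 only ~7, so the
# single-precision error is several orders of magnitude larger
print(err64, err32)
```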
Problem 01
Obtain graphs of the following functions. Locate the roots, local minimums, and local maximums graphically within two significant numbers and mark them into the graphs.
(a) $y = 3x^3 - x^2 2x + 1$, when $-2 \leq x \leq 2$<br />
(b) $y = x^4 + \frac{x^3}{3} - \frac{5x^2}{2} + x - 1$, when $ -3 \leq x \leq 2$<br />
(c) $y = x^5 - x^2 + 2$, when $-2 \leq x \leq 2$
End of explanation
"""
def problem02(selection):
if selection == "a":
x = np.arange(-1, 3, 0.0001)
y1 = x ** 3
y2 = 4 - 2 * x
plt.plot(x, y1)
plt.plot(x, y2)
        idx = np.argwhere(np.diff(np.sign(y1 - y2)) != 0).reshape(-1)
plt.plot(x[idx], y1[idx], 'ro')
print(x[idx])
print(y1[idx])
plt.grid()
plt.show()
elif selection == "b":
x = np.arange(-1, 3, 0.001)
y = x ** 3 + 2 * x - 4
plt.plot(x, y)
plt.grid()
plt.show()
else:
print("Invalid selection")
problem02("a")
#problem02("b")
"""
Explanation: Analysing the graph, the equation has only one real root, at x = -1 (where y = 0); the y-intercept (0, 2) is not a root, since a root requires y = 0.
Local maximum:<br />
y = 2.0, x = 0
Local minimum:<br />
y = 1.67427, x = 0.737007
Problem 02
(a) Draw $y = x^3$ and $y = 4 − 2x$ using the same axes. Note the x coordinate of the point of intersection within three significant figures accuracy.<br />
(b) Draw $y = x^3 + 2x − 4$. Note the coordinate of the point where the curve cuts the x axis. Compare your answer with that from (a). Explain your findings.
End of explanation
"""
def problem03(selection):
if selection == "a":
x = np.arange(-4, 4, 0.01)
y = (2 * x + 1)/(x - 3)
# Infinites in graph, limit y
ylim([-50,50])
xlim([-4, 4])
plt.plot(x, y)
        # Draw asymptotes
plt.plot((3, 3), (-100, 100), 'r-')
plt.plot((-100, 100), (2, 2), 'r-')
plt.grid()
plt.show()
elif selection == "b":
print("No calculations")
else:
print("Invalid selection")
problem03("a")
#problem03("b")
"""
Explanation: Actual intersection:<br />
x = 1.17951<br />
y = 1.64098<br />
Calculated values:<br />
x = 1.1795<br />
y = 1.64094428<br />
The drawn point is slightly off the intersection but falls within the three significant figures of accuracy.
Ref: https://stackoverflow.com/questions/28766692/intersection-of-two-graphs-in-python-find-the-x-value
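As a numerical cross-check of the graphical estimate (an addition, not part of the original assignment), numpy's polynomial root finder gives the same intersection:

```python
import numpy as np

# cross-check: x^3 = 4 - 2x  <=>  x^3 + 2x - 4 = 0
roots = np.roots([1.0, 0.0, 2.0, -4.0])
# keep the single real root (the other two form a complex-conjugate pair)
real_root = roots[np.isclose(roots.imag, 0.0)].real[0]
print(real_root)  # agrees with the graphical reading of about 1.1795
```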
Problem 04
Draw the following rational functions. State any asymptotes and draw or mark them in the graphs.<br />
(a) $f(x) = \frac{(2x + 1)}{(x - 3)} -4 \leq x \leq 4$<br />
(b) $g(s) = \frac{s}{(s + 1)} -3 \leq x \leq 3$
End of explanation
"""
#Fire up the GraphLab engine
import graphlab as gl
import graphlab.aggregate as agg
"""
Explanation: Any Given Sunday: Football and a Machine Learning Rookie
I love football more than engineers love coffee, all my Turi friends know that. Throughout the course of an NFL season I have fantasy teams, point-spread pools, survivor pools and non-stop pontification on the latest terrible play call, awful refs and amazing plays (GO HAWKS!). Despite hearing about all the neato, life-changing, world-saving, cutting-edge, inspirational things machine learning could be used for, my first thought was "Huh. I should learn that and use it for football."
Well, let's take a shot.
I'm no data scientist and this post won't be terribly math-y. I'm a software engineer that's fond of BBQ and learning by doing. Thankfully, I work with a slew of smart folks kind enough to assist with these educational shenanigans and provide guidance when I run off the rails. We'll apply the simplest of machine learning concepts, linear regression, to football in the simplest way and see what happens. You get to learn from all of the glorious mistakes I made so that you can go on to make bigger, better mistakes. We'll learn, together, inch by inch.
Setting up the data
My dataset is from Armchair Analysis, which is a pretty rad source of well-curated, nicely documented, NFL stats. You can grab yourself a copy for pretty cheap.
End of explanation
"""
team = gl.SFrame.read_csv('C:\\Users\\Susan\\Documents\\NFLData_2000-2014\\csv\\TEAM.csv', header=True)
games = gl.SFrame.read_csv('C:\\Users\\Susan\\Documents\\NFLData_2000-2014\\csv\\GAME.csv', header=True)
# TID=team id
# GID=Game id
# TNAME=team name
# PTS points,
# RY rushing yards,
# PY passing yards,
# RA rushing attempts
# PA pass attempts,
# PC pass completions
# SK sacks against,
# FUM fumbles lost,
# INT int for defense,
# TOP time of possession,
# TD tds,
# TDR td rushing,
# TDP td passing,
# PLO total plays offense,
# PLD total plays defense,
# DP points by defense
team = team.select_columns(
['TNAME', 'TID', 'GID', 'PTS', 'RY', 'RA', 'PY', 'PA', 'PC', 'PU',
'SK', 'FUM', 'INT', 'TOP', 'TD', 'TDR', 'TDP', 'DP'])
team['TOP'] = team['TOP'].astype(float)
team = team.join(games.select_columns(['GID', 'SEAS', 'WEEK']), 'GID')
team['SEAS'] = team['SEAS'].astype(str)
# restrict to regular season
team = team[team['WEEK'] < 18]
"""
Explanation: The team SFrame has all of one team's stats per game. The games SFrame has info about each game itself, which teams played in it, where, etc. So first, let me narrow it down to what I think are some relevant columns and join some info about each game to the team table.
End of explanation
"""
team.head(5)
"""
Explanation: Here's what we've got so far
End of explanation
"""
# add a column to indicate if the team won this game or not
winners = team.groupby(key_columns='GID', operations={'WIN_TID': agg.ARGMAX('PTS', 'TID')})
team = team.join(winners, 'GID')
team['WIN'] = team['TID'] == team['WIN_TID']
#create season summary
of_interest= ['PTS', 'RY', 'RA', 'PY', 'PA', 'SK', 'FUM', 'INT', 'TOP', 'TD', 'TDR', 'TDP', 'DP', 'WIN']
season = team.groupby(['SEAS', 'TNAME'], {'%s_SUM' % x : agg.SUM(x) for x in of_interest})
season_sums = filter(lambda x: '_SUM' in x, season.column_names())
season_sums.remove('WIN_SUM')
"""
Explanation: Predict the total season wins for a team
Ok. Great. I have data... how do I data science it? I know from intro reading that linear regression is probably the simplest thing for me to try. But ... what type of question can I even answer with linear regression? Well, the way I understand it, we can predict a continuous variable Y that changes based on the values of a bunch of other variables, X1, X2... etc. So as all the X's change, our predicted Y is going to change. Here is the better, math-happy, grown-up explanation to achieve next level enlightenment. The data science "hello world"-foobar example everyone on the planet uses to explain linear regression is predicting the price of a house. How much will a house cost (Y) based on a bunch of info we have (Xs) like square footage, number of rooms, etc? Well, I have a bunch of info about team stats. What do I want to predict?
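To make the house-price version concrete, here is a minimal least-squares fit in plain numpy; the square footages and prices are made-up numbers, not real data:

```python
import numpy as np

# hypothetical houses: square footage (the X) and sale price in $1000s (the Y)
sqft = np.array([1000.0, 1500.0, 2000.0, 2500.0])
price = np.array([200.0, 290.0, 405.0, 495.0])

# ordinary least squares fit of price = slope * sqft + intercept
A = np.column_stack([sqft, np.ones_like(sqft)])
slope, intercept = np.linalg.lstsq(A, price, rcond=None)[0]
print(slope, intercept)  # slope of about 0.2, in $1000s per square foot
```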
Let's try to predict the total season wins for a team given that team's stats for the year. Right now I don't have a season summary for each team but I can make one. I'll also need to generate a season win column for each team. I'm also going to split my data into a train set and a testing set. I can feed my model the training set to learn on, then reserve the test set to see how well the model performs against data it hasn't seen yet. What I don't need to do is code up the algorithm, GraphLab Create has a toolkit for that.
End of explanation
"""
season.head()
"""
Explanation: I now have season summaries that look like this
End of explanation
"""
# predict number of wins
#split the data into train and test
season_train, season_test = season.random_split(0.8, seed=0)
lin_model = gl.linear_regression.create(season_train, target='WIN_SUM', features=season_sums)
lin_pred = lin_model.predict(season_test)
lin_result = lin_model.evaluate(season_test)
"""
Explanation: Awesome. Let's get to it.
End of explanation
"""
print lin_result
"""
Explanation: How'd we do?
End of explanation
"""
print season_test.head(5)['WIN_SUM']
print lin_pred[0:5]
"""
Explanation: Printing out lin_result shows us our max_error was around 5, and our rmse (root mean squared error, a favorite measure of how wrong you are) is around 1.8. That's ... not bad? Just browsing the first few, the model is generally close but not great.
End of explanation
"""
gl.canvas.set_target('ipynb')
lin_model.show()
"""
Explanation: The first five actual win totals of my test data are (4,11,5,10,4) and the corresponding predictions out of lin_pred start with (5.9, 11.8, 4.2, 8.7). I can print the lin_model to get a little more information about what's going on here, or you can use lin_model.show() to see it all pretty in your browser.
End of explanation
"""
# Why does TD_SUM end up in a negative coefficient?
# Points are represented by multiple features, they're correlated and it's confusing the model
season_sums.remove('PTS_SUM')
season_sums.remove('TDP_SUM')
season_sums.remove('TDR_SUM')
lin_model = gl.linear_regression.create(season_train, target='WIN_SUM', features=season_sums)
lin_pred = lin_model.predict(season_test)
lin_result = lin_model.evaluate(season_test)
lin_model.show()
"""
Explanation: Not surprisingly, Xs that are positive coefficients are things like the sum of passing touchdowns, total points, and time of possession. Xs that drag the Y value down are things like interceptions, sacks, and fumbles. For the life of me, I can't figure out why the sum of touchdowns would bring the win sum prediction down. I had to ask my data science sherpa Chris why this could happen.
Chris: I suspect that several of your features are highly correlated. This makes it tricky for the linear regression model to have parameter estimates that aren't noisy; it doesn't know whether or not to make two highly correlated features be large or small, since there are ways for the predictions to be the same in either case. (Check out wikipedia's page on "multicollinearity" to read more.)
You have two options:
1. hand select a subset of the features, and see if the coefficients from the model make more sense. Then decide if the accuracy of the previous model is worth the decrease in interpretability.
2. experiment with increasing l2_penalty and l1_penalty arguments: this encourages the optimization to find estimates where some features are either close to 0 or dropped entirely.
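Chris's point about correlated features can be reproduced in a tiny numpy sketch (plain least squares, not GraphLab): give the model two identical copies of one feature and there is no single right answer; the minimum-norm solution just splits the weight between them.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(100)
X = np.column_stack([x, x])   # two perfectly correlated "features"
y = 2.0 * x                   # the true relationship uses a total weight of 2

# least squares cannot decide how to divide the weight between the twins;
# the SVD-based minimum-norm solution splits it evenly into [1.0, 1.0]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print(coef)
```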
Yeah, actually. I do have multiple columns that are kinda correlated. All of the points columns are related, like the sums of rushing and passing TDs to the sum of total points. Alright, throwing every single column of data I had at this didn't give me what I wanted. Given Chris' explanation, the simplest thing to try is removing the columns that are duplicating the effect of points from the season_sums list of features and training the model again. After removing features like TDR_SUM (rushing touchdown sum) and TDP_SUM (passing tds) and total points, I end up with coefficients that look like this.
End of explanation
"""
print lin_result
print season_test.head(5)['WIN_SUM']
print lin_pred[0:5]
"""
Explanation: That's more what I expected. Touchdowns and time of possession lead the list of positive coefficients while interceptions lead the list of negative. It doesn’t look like my predictions changed all that much as a result, but the coefficients make more sense to me now.
End of explanation
"""
# OK, so after talking to chris refactor data into one row per game
# With values being Home - Away
# add info about who was home and away
more = team.join(games.select_columns(['GID', 'V', 'H', 'PTSV', 'PTSH']), 'GID')
home = more[more['TNAME'] == more['H']].remove_columns(['PTSH', 'PTSV'])
visit = more[more['TNAME'] == more['V']].remove_columns(['PTSH', 'PTSV'])
difftable = home.join(visit, 'GID')
#the visitor's info is now always under blank.1 column names
for thing in ['PTS', 'RY', 'RA', 'PY', 'PA', 'SK', 'FUM', 'INT', 'TOP', 'TD', 'TDR', 'TDP', 'DP']:
difftable['%s_DIFF' % thing] = difftable[thing] - difftable['%s.1' % thing]
diff_train, diff_test = difftable.random_split(0.8, seed=20)
diff_lgc_model = gl.logistic_classifier.create(diff_train, target='WIN',
features=[ i for i in difftable.column_names() if "_DIFF" in i])
diff_result = diff_lgc_model.evaluate(diff_test)
print diff_result
diff_lgc_model.show()
"""
Explanation: Predict who won a game
Now, what if I want to predict who won a specific game? Rather than trying to predict a sum, I want to predict whether one column, specifically the WIN column, contains a 1 (they won) or a 0 (they lost) based on a team's total stats for just that game. This sounds fun. But first, Chris advised that I do a little data reformatting. It'll be easier to try and predict a win by looking at the difference between the numbers of the two teams playing each other. I'm going to express this as Home-Away in columns named <stat>_DIFF.
One side note here. Whereas earlier I was trying to predict a continuous number, what I'm really trying to do now is predict whether a game is a WIN or a LOSS. In other words, the WIN column contains two categories: zero or one. It's not going to be a 0.7 or 1.3. Classifying a categorical value Y with a formula similar to linear regression is commonly called logistic regression. Rather than code up this math by hand, I'm going to use GLC's toolkit, getting right to the fun stuff.
End of explanation
"""
# CLEARLY I CAN PREDICT THE FUTURE
# oh. remove points
better_features= [ i for i in difftable.column_names() if "_DIFF" in i]
better_features.remove('PTS_DIFF')
better_features.remove('TD_DIFF')
better_features.remove('TDR_DIFF')
better_features.remove('TDP_DIFF')
better_features.remove('DP_DIFF')
better_model = gl.logistic_classifier.create(diff_train, target='WIN', features=better_features)
better_model.show()
print better_model.evaluate(diff_test)
"""
Explanation: Me: HOLY GUACAMOLE!! LOOK AT MY ACCURACY! IT'S ALMOST PERFECT!
Chris: That can't be right.
Me: I CAN PREDICT THE FUTURE!
Chris: I don't really think that's how it w-
Me: I'M QUITTING MY JOB AND GOING TO VEGAS! I AM AMAZING!!! SEE YOU CLOWNS LATER AAAAAAHAHAHAHAHA!!!!!!!
Chris: Did you leave the difference in scores as a feature for the model to use?
Me: AAAHAHAHA-... what? Yes? ... Yes, I did.
Chris: So the model learned, with 100% accuracy, that the final score predicts who won.
Me: Ahem. Oh.
Me: ... sorry about the clowns thing.
Chris: ...
Me: ... I'm also un-quitting.
Chris: ...
Me: ... I'll go back to my desk now.
Yeah. Once again putting all the info I had into this black box wasn't quite the best idea. I want to predict who won based on every other stat but I don't want to use the score. The model picks up pretty quickly that some of my features like, oh, say, the point differential between teams, is a pretty darn good indicator of what is in the WIN column. Clever. Let’s remove that stuff.
End of explanation
"""
#Exploring the trees
btc_classifier = gl.boosted_trees_classifier.create(diff_train, target='WIN', features=better_features)
btc_classifier.show(view="Tree", tree_id=1)
"""
Explanation: That's not bad. The confusion matrix here is an awesome little guide to how the model did guessing the outcome of games.
## (N)ewb (F)eature exp(L)oration
While I’m in the business of classifying things, I browsed the list of classifiers in the API searching for other high-powered tools to naively explore. Chris pointed me to the boosted trees classifier as an interesting way to explore the effects of each of my features on the classification. It’s not entirely clear to me how a tree, regardless of whether I boosted it or purchased it legally, is going to help me out but it looks easy enough to try.
End of explanation
"""
btc_class2 = gl.boosted_trees_classifier.create(diff_train, target='WIN', features=better_features, max_iterations=1,
max_depth=3)
btc_class2.show(view="Tree", tree_id=0)
"""
Explanation: Um, what? I have no idea what I just did and this visualization isn’t helpful at all. I’ve created what looks like an insane decision shrub of a tree but I don’t think that every single tiny decision and combination of features is really this important. Let me try that again, this time using just one decision tree and limiting its depth.
End of explanation
"""
btc_class2.get_feature_importance()
"""
Explanation: Ok, limiting the depth did wonders for readability here. This is interesting: the tree isn't making decisions around the things I thought were important. The big first branch happens on rushing attempts. I can view what the model thinks is important with a call to model.get_feature_importance(). This gives me a count of the nodes in the tree that are branching on each feature.
End of explanation
"""
btc_bigger = gl.boosted_trees_classifier.create(diff_train, target='WIN',
features=better_features, max_iterations=50, max_depth=5)
btc_bigger.show(view="Tree", tree_id=0)
"""
Explanation: Huh. Not what I thought. The number of rushing attempts is more important than interceptions and more important than rushing yards? While I could maybe see that, it doesn’t line up with the features that other folks on the internet have concluded are important. I’d expect to see something about offense, maybe passing, or total rush yards. Maybe if I make the tree deeper it’d be a little closer to what I’ve read is important in a football game?
End of explanation
"""
# OK so this isn't really what I expected.
# The first branch is Rushing Attempts, then Interceptions difference.
# should probably add more features
# PEN = penalty yardage against
# SRP = Successful rush plays
# SPP = Successful pass plays
# SFPY = The Total Starting Field Position Yardage: Dividing by the # of Drives on Offense (DRV)
# produces the Average Starting Field Position.
# PU = Punts
# DRV = Drives on Offense
# Create a pass yds / attempt feature, and an average starting field position (SFPY/DRV)
# Total rush / rush attempts (AVG RUSH)
extra = gl.SFrame.read_csv('C:\\Users\\Susan\\Documents\\NFLData_2000-2014\\csv\\TEAM.csv', header=True)
# games = gl.SFrame.read_csv('C:\\Users\\Susan\\Documents\\NFLData_2000-2014\\csv\GAME.csv', header=True)
extra = extra.select_columns(
[ 'TID', 'GID','SRP', 'SPP', 'PLO', 'PLD', 'PEN', 'SFPY', 'DRV'])
# team_more_info['TOP'] = team_more_info['TOP'].astype(float)
# team_more_info = team_more_info.join(games.select_columns(['GID', 'SEAS', 'WEEK']), 'GID')
# team_more_info = team_more_info[team['WEEK'] < 18]
team = team.join(extra, on=['GID', 'TID'])
#GID 364 has no data (CAR/MIA 2001) for DRV ... without the if x['blah'] !=0 you get a divide by zero
# but ONLY at the point you try and materialize the sframe because the operations are lazy! gah.
team['YD_PER_P_ATT'] = team.apply(lambda x: float(x['PY']) / float(x['PA']) if x['PA'] != 0 else 0)
team['YD_PER_R_ATT'] = team.apply(lambda x: float(x['RY']) / float(x['RA']) if x['RA'] != 0 else 0)
team['AVG_STFP'] = team.apply(lambda x: float(x['SFPY']) / float(x['DRV']) if x['DRV'] != 0 else 0)
#On second thought, I want to drop that game altogether since I don't trust that. The score and other
#references show they did have an off drive. wth?
team = team.filter_by( [364], 'GID', exclude=True)
features = ['YD_PER_P_ATT', 'YD_PER_R_ATT', 'AVG_STFP', 'SRP', 'PY', 'RY', 'DRV', 'SPP', 'FUM', 'SK', 'INT', 'PLO',
'PLD', 'TOP', 'PEN', 'PU']
# doing that joining stuff again
more = team.join(games.select_columns(['GID', 'V', 'H', 'PTSV', 'PTSH']), 'GID')
home = more[more['TNAME'] == more['H']].remove_columns(['PTSH', 'PTSV'])
visit = more[more['TNAME'] == more['V']].remove_columns(['PTSH', 'PTSV'])
difftable = home.join(visit, 'GID')
#the visitor's info is now always under blank.1 column names
#so let's express everything again as a difference
for thing in features:
difftable['%s_DIFF' % thing] = difftable[thing] - difftable['%s.1' % thing]
diff_train, diff_test = difftable.random_split(0.8, seed=0)
stuff = [ i for i in difftable.column_names() if "_DIFF" in i]
btc_classifier = gl.boosted_trees_classifier.create(diff_train, target='WIN',
features=stuff)
btc_classifier.show(view="Tree", tree_id=1)
"""
Explanation: Nope.
I’m going to try going back and adding some more features… maybe compute average yards per pass attempt, yards per rush attempt, average starting field position. You know, just some good to know stuff.
End of explanation
"""
btc_classifier = gl.boosted_trees_classifier.create(diff_train, target='WIN',
features=stuff, max_depth=3)
all_f_results = btc_classifier.evaluate(diff_test)
btc_classifier.show(view="Tree")
print all_f_results
"""
Explanation: Gah, look at that mess again.
End of explanation
"""
print btc_classifier.get_feature_importance()
important_f = btc_classifier.get_feature_importance()['feature']
"""
Explanation: Hey! Not bad!
End of explanation
"""
models_i = []
accuracy_i = []
f_imp = []
for i in range(1, len(important_f)):
print
btc = gl.boosted_trees_classifier.create(diff_train, target='WIN',
features=important_f[:i],
max_depth=3)
models_i.append(btc)
accuracy_i.append(btc.evaluate(diff_test))
f_imp.append(btc.get_feature_importance())
accurate = [ x['accuracy'] for x in accuracy_i]
test_sf = gl.SFrame( {'feature': range(1, len(important_f)), 'accuracy': accurate} )
test_sf.show(view="Line Chart", x='feature', y='accuracy')
"""
Explanation: Oh man, that looks … actually pretty good. My most important features relate to the passing game, field position and ball control.
Out of all the features I gave it, the model identified the most important ones. Just for giggles, I can start making life hard for my model. Yes, yards per pass attempt appears to be important. What if I ONLY gave the model that feature? How accurate am I then? Thanks to Chris’ suggestion, I can loop through the list of features here, add them one by one and see how accurate I am with just that subset of features.
End of explanation
"""
xpharry/Udacity-DLFoudation | tutorials/sentiment-rnn/Sentiment RNN Solution.ipynb | mit

import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
"""
Explanation: Okay, a couple of issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)  # one row per non-empty review
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output, and we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
MaxRobinson/CS449 | project1/mrobi100.ipynb | apache-2.0

from __future__ import division # so that 1/2 = 0.5 and not 0
from IPython.core.display import *
import csv, math, copy, random
"""
Explanation: Module 12 - Programming Assignment
Directions
There are general instructions on Blackboard and in the Syllabus for Programming Assignments. This Notebook also has instructions specific to this assignment. Read all the instructions carefully and make sure you understand them. Please ask questions on the discussion boards or email me at EN605.445@gmail.com if you do not understand something.
<div style="background: mistyrose; color: firebrick; border: 2px solid darkred; padding: 5px; margin: 10px;">
You must follow the directions *exactly* or you will get a 0 on the assignment.
</div>
You must submit a zip file of your assignment and associated files (if there are any) to Blackboard. The zip file will be named after you JHED ID: <jhed_id>.zip. It will not include any other information. Inside this zip file should be the following directory structure:
<jhed_id>
|
+--module-01-programming.ipynb
+--module-01-programming.html
+--(any other files)
For example, do not name your directory programming_assignment_01 and do not name your directory smith122_pr1 or any else. It must be only your JHED ID. Make sure you submit both an .ipynb and .html version of your completed notebook. You can generate the HTML version using:
ipython nbconvert [notebookname].ipynb
or use the File menu.
Naive Bayes Classifier
In this assignment you will be using the mushroom data from the Decision Tree module:
http://archive.ics.uci.edu/ml/datasets/Mushroom
The assignment is to write a program that will learn and apply a Naive Bayes Classifier for this problem. You'll first need to calculate all of the necessary probabilities (don't forget to use +1 smoothing) using a learn function. You'll then need to have a classify function that takes your probabilities, a List of instances (possibly a list of 1) and returns a List of Tuples. Each Tuple is a class and the normalized probability of that class. The List should be sorted so that the probabilities are in descending order. For example,
[("e", 0.98), ("p", 0.02)]
When calculating the error rate of your classifier, you should pick the class with the highest probability (the first one in the list).
As a reminder, the Naive Bayes Classifier generates the un-normalized probabilities from the numerator of Bayes Rule:
$$P(C|A) \propto P(A|C)P(C)$$
where C is the class and A are the attributes (data). Since the normalizer of Bayes Rule is the sum of all possible numerators and you have to calculate them all, the normalizer is just the sum of the probabilities.
You'll also need an evaluate function as before. You should use the error_rate again.
Use the same testing procedure as last time, on two randomized subsets of the data:
learn the probabilities for set 1
classify set 2
evaluate the predictions
learn the probabilities for set 2
classify set 1
evaluate the predictions
average the classification error.
Imports
End of explanation
"""
def attributes_domains():
return {
'label': ['e', 'p', '?'],
'cap-shape': ['b', 'c', 'x', 'f', 'k', 's', '?'],
'cap-surface': ['f', 'g', 'y', 's', '?'],
'cap-color': ['n', 'b', 'c', 'g', 'r', 'p', 'u', 'e', 'w', 'y', '?'],
'bruises?': ['t', 'f', '?'],
'odor': ['a', 'l', 'c', 'y', 'f', 'm', 'n', 'p', 's', '?'],
'gill-attachment': ['a', 'd', 'f', 'n', '?'],
'gill-spacing': ['c', 'w', 'd', '?'],
'gill-size': ['b', 'n', '?'],
'gill-color': ['k', 'n', 'b', 'h', 'g', 'r', 'o', 'p', 'u', 'e', 'w', 'y', '?'],
'stalk-shape': ['e', 't', '?'],
'salk-root': ['b', 'c', 'u', 'e', 'z', 'r', '?'],
'stalk-surface-above-ring': ['f', 'y', 'k', 's', '?'],
'stalk-surface-below-ring': ['f', 'y', 'k', 's', '?'],
'stalk-color-above-ring': ['n', 'b', 'c', 'g', 'o', 'p', 'e', 'w', 'y', '?'],
'stalk-color-below-ring': ['n', 'b', 'c', 'g', 'o', 'p', 'e', 'w', 'y', '?'],
'veil-type': ['p', 'u', '?'],
'veil-color': ['n', 'o', 'w', 'y', '?'],
'ring-number': ['n', 'o', 't', '?'],
'ring-type': ['c', 'e', 'f', 'l', 'n', 'p', 's', 'z', '?'],
'spore-print-color': ['k', 'n', 'b', 'h', 'r', 'o', 'u', 'w', 'y', '?'],
'population': ['a', 'c', 'n', 's', 'v', 'y', '?'],
'habitat': ['g', 'l', 'm', 'p', 'u', 'w', 'd', '?'],
}
"""
Explanation: attributes_domain
A helper function to return a dictionary of attributes, and the domains possible for that attribute.
This is used to start the Naive Bayes algorithm with the appropriate possible attributes and their domains.
A '?' attribute is added to every domain in case a record is missing a value for a given domain. In the Record the value for that domain is expected to have a '?' indicating that for that record the attribute value is unknown.
input:
None
return:
+ attributes: a dictionary of attribute names as keys and the attributes domain as a list of strings.
End of explanation
"""
def get_positive_label():
return 'e'
"""
Explanation: get_positive_label
A helper function to return the positive label for this implimentation of a Naive Bayes Classifier. Used incase the positive label were to change. "positive" in this context is simply derived from the data set, and that it is a POSITIVE thing to be able to eat a mushroom, thus the label for the dataset e is "Positive". This is the ONLY reason it's called positive.
The label is used in calculating the information gain, as well as determining the majority label of an attribute.
input:
None
return:
+ the label, a string.
End of explanation
"""
def get_negative_label():
return 'p'
"""
Explanation: get_negative_label
A helper function to return the negative label for this implimentation of a Naive Bayes Classifier. Used incase the negative label were to change. "Negative" in this context is simply derived from the data set, and that it is a NEGATIVE thing to eat a Poisonous mushroom, thus the label for the dataset p is "Negative". This is the ONLY reason it's called negative.
The label is used when counting occurrences for each attribute value and when computing the class probabilities.
input:
None
return:
+ the label, a string.
End of explanation
"""
def create_record(csv_record):
return {
'label': csv_record[0],
'cap-shape': csv_record[1],
'cap-surface': csv_record[2],
'cap-color': csv_record[3],
'bruises?': csv_record[4],
'odor': csv_record[5],
'gill-attachment': csv_record[6],
'gill-spacing': csv_record[7],
'gill-size': csv_record[8],
'gill-color': csv_record[9],
'stalk-shape': csv_record[10],
'salk-root': csv_record[11],
'stalk-surface-above-ring': csv_record[12],
'stalk-surface-below-ring': csv_record[13],
'stalk-color-above-ring': csv_record[14],
'stalk-color-below-ring': csv_record[15],
'veil-type': csv_record[16],
'veil-color': csv_record[17],
'ring-number': csv_record[18],
'ring-type': csv_record[19],
'spore-print-color': csv_record[20],
'population': csv_record[21],
'habitat': csv_record[22],
}
"""
Explanation: create_record
A helper function to create a record to be used in the Naive Bayes Classifier, given a record from the csv file.
Creates a dictionary that maps the attribute_name to the value of that attribute for a given record.
This is used to transform all of the data read in from the csv file into an easily usable dictionary for Naive Bayes Classifier.
input:
+ csv_record: a list of strings
return:
+ a dictionary that maps attribute_names to the value for that attribute.
End of explanation
"""
def create_distribution_dict():
attributes_with_domains = attributes_domains()
distribution = {}
for attribute, domains in attributes_with_domains.items():
if attribute == 'label':
continue
for domain in domains:
distribution[(attribute, domain, 'label', get_positive_label())] = 1
distribution[(attribute, domain, 'label', get_negative_label())] = 1
return distribution
"""
Explanation: create_distribution_dict
A helper function to create a dictionary that holds the Naive Bayes Classifier distributions for all of the $P(a_i|c_i)$ probabilities, for each $A$ where $A$ is all attributes and $a_i$ is a domain value for a specific attribute.
The dictionary has the following structure:
python
{
(attribute, attribute_domain_value, 'label', label_value) : value
}
The key specifies which attribute and domain value the distribution entry is for, while the 'label' and label_value parts provide the "given $c_i$" part of the distribution.
This dictionary is first used to hold the overall count of each (attribute, domain, label) combination, and is later used to hold the actual probability distribution for the Naive Bayes Classifier.
Note that each count is initialized to 1. This accounts for the "+1" (add-one) smoothing that is needed later when calculating the probabilities $P(f_i | c_i)$.
This is an important function for the algorithm because it specifies how the distribution is stored.
input:
None
return:
+ a dictionary with the structure specified in the above description.
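As a concrete illustration, here is a minimal, self-contained sketch of that structure using a made-up two-attribute domain (the real attributes_domains() covers all of the mushroom attributes):

```python
# Toy sketch of the distribution structure described above; the attribute
# names and domain values here are made up for illustration.
def make_toy_distribution():
    toy_domains = {'odor': ['a', 'n'], 'habitat': ['g', 'd']}
    distribution = {}
    for attribute, domains in toy_domains.items():
        for domain in domains:
            # counts start at 1 to provide the "+1" (add-one) smoothing
            distribution[(attribute, domain, 'label', 'e')] = 1
            distribution[(attribute, domain, 'label', 'p')] = 1
    return distribution

toy = make_toy_distribution()
print(len(toy))  # 2 attributes x 2 domain values x 2 labels = 8 keys
```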
End of explanation
"""
import csv

def read_file(path=None):
if path is None:
path = 'agaricus-lepiota.data'
with open(path, 'r') as f:
reader = csv.reader(f)
csv_list = list(reader)
records = []
for value in csv_list:
records.append(create_record(value))
return records
"""
Explanation: read_file
A helper function to read in the data from a CSV file, and transform it into a list of records, as described in the create_record description.
NOTE: If not given a path to a file, it assumes that the file is in your local directory, from which you are running this notebook. It also assumes that the file it is reading is "agaricus-lepiota.data".
The file can be found at https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data
Please also note that this file's format is the expected input format for this entire Naive Bayes Classifier implementation.
Please do not try to run this with other data that is not in this format or does not have the same attribute domains as this data set.
input:
+ path (optional): the path to the csv file you wish to read in.
return:
+ records: A list of records. Records have the shape described by the create_record description.
End of explanation
"""
def create_distribution_key(attribute, domain_value, label_value):
return (attribute, domain_value, 'label', label_value)
"""
Explanation: create_distribution_key
A helper function that builds the key needed to access a given probability in the Naive Bayes distribution dictionary, described in create_distribution_dict.
input:
+ attribute: a String that specifies the attribute for the probability to access
+ domain_value: a string that specifies the domain value for the probability to access
+ label_value: a string that specifies which classification label to use when accessing the probability.
return:
+ a tuple with the structure: (attribute_name, domain_value, 'label', label_value)
End of explanation
"""
def put_value_in_distribution(distribution, attribute, domain_value, label_value):
key = create_distribution_key(attribute, domain_value, label_value)
distribution[key] += 1
"""
Explanation: put_value_in_distribution
A helper function to increment the count of a given key in the distribution dictionary by 1.
Used when counting the number of occurrences of a particular $A=a_i, C=c_i$ pair when building out the distribution from the training set.
input:
+ distribution: a dictionary with the structure specified by create_distribution_dict
+ attribute: a String that specifies the attribute for the probability to access
+ domain_value: a string that specifies the domain value for the probability to access
+ label_value: a string that specifies which classification label to use when accessing the probability.
return:
None
End of explanation
"""
def get_label_count(records, label):
count = 0
for record in records:
if record['label'] == label:
count += 1
return count
"""
Explanation: get_label_count
A helper function that returns the number of records that have a given label.
This is used to get the total number of records with a given label.
This value is then used when calculating the normalized probabilities of the distribution, $$P(f_i | c_i) = \frac{Num((f_i,c_i)) + 1}{Num(c_i) + 1}$$
Specifically the $Num(c_i)$ part.
input:
+ records: a list of records.
return:
+ count: the number of records with the specified label
End of explanation
"""
def create_percentages(pos_count, neg_count, distribution):
pos_count_plus_1 = pos_count + 1
neg_count_plus_1 = neg_count + 1
pos_label = get_positive_label()
neg_label = get_negative_label()
for key in distribution:
if key[3] == pos_label:
distribution[key] = distribution[key] / pos_count_plus_1
elif key[3] == neg_label:
distribution[key] = distribution[key] / neg_count_plus_1
return distribution
"""
Explanation: create_percentages
A helper function that, given a distribution of counts for $(f_i, c_i)$, calculates the probability according to:
$$P(f_i | c_i) = \frac{Num((f_i,c_i)) + 1}{Num(c_i) + 1}$$
The distribution already contains the "count" part of the probability, the $Num((f_i,c_i)) + 1$ term. To calculate the probability, we just divide by the appropriate denominator, which is passed in in the form of the counts for the positive and negative labels.
For each key in the distribution, we determine which $c_i$ it uses, and divide by the corresponding denominator.
These percentages or distributions are then used during the classification step.
input:
+ pos_count: an int, the number of records with the "positive" label in the training set.
+ neg_count: an int, the number of records with the "negative" label in the training set.
+ distribution: a dictionary, with the structure specified in create_distribution_dict
return:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, now with values that are probabilities rather than raw counts. Each probability is calculated according to the formula above.
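A quick worked example of the smoothing arithmetic, with made-up counts rather than real mushroom data:

```python
# Toy illustration of the +1-smoothed probability described above.
# Suppose training has 3 'e' records and the pair (odor='n', label='e')
# occurred twice. The stored count started at 1, so it is now 2 + 1 = 3,
# and the smoothed probability is:
stored_count = 2 + 1          # Num((f_i, c_i)) + 1, thanks to the init-to-1
pos_count_plus_1 = 3 + 1      # Num(c_i) + 1
prob = stored_count / pos_count_plus_1
print(prob)  # 0.75
```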
End of explanation
"""
def learn(records):
distribution = create_distribution_dict()
pos_count = get_label_count(records, get_positive_label())
neg_count = get_label_count(records, get_negative_label())
for record in records:
for attribute, domain_value in record.items():
if attribute == 'label':
continue
put_value_in_distribution(distribution, attribute, domain_value, record['label'])
distribution = create_percentages(pos_count, neg_count, distribution)
distribution[('label', get_positive_label())] = pos_count / (pos_count + neg_count)
distribution[('label', get_negative_label())] = neg_count / (pos_count + neg_count)
return distribution
"""
Explanation: learn
The main function that learns the distribution for the Naive Bayes Classifier.
The function works as follows:
+ Create initial distribution counts
+ get positive label counts
+ get negative label counts
+ for each record in the training set:
+ For each attribute, and domain_value for the attribute:
+ put the value into the distribution (i.e. increment the count for that attribute, domain, and label tuple)
+ the corresponding key in the distribution is (attribute, domain_value, 'label', actual label for the record)
+ change the distribution from counts to probabilities
+ add special entries in the distribution for the Probability of each possible label.
+ the probability of a given label is as follows: $P(c_i) = \frac{Num(c_i)}{\text{size of training set}}$
We then return the learned distribution, as our Naive Bayes Classifier.
input:
+ records: a list of records, as described by the create_record function.
return:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities $P(A=a_i | C=c_i)$ for each $A$ and $C$.
End of explanation
"""
def calculate_probability_of(distribution, instance, label):
un_normalized_prob = distribution[('label', label)]
for attribute, domain_value in instance.items():
if attribute == 'label':
continue
key = create_distribution_key(attribute, domain_value, label)
un_normalized_prob *= distribution[key]
return un_normalized_prob
"""
Explanation: calculate_probability_of
A helper function that calculates the un_normalized probability of a given instance (record), for a given label.
The un_normalized probability is calculated as follows:
$$P(c_i) \prod_i P(f_i | c_i)$$
where $f_i$ is a given attribute and its value, and $c_i$ is a given label.
To calculate this, we iterate through the instance's (record's) attributes and their values, and create the key into the distribution from the attribute, the attribute's value, and the label we wish to calculate the probability for.
This is then multiplied to the running product of the other probabilities.
The running product is initialized to the $P(c_i)$ to take care of the initial multiplicative term.
The un_normalized probability is then returned.
This is used when classifying a record, to get the probability that the record should have a certain label.
This is important because this probability is then normalized after the probabilities are computed for all labels, and is then used to determine how likely it is that the record belongs to a given class label.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instance: a record, as described by create_record
+ label: a string that describes a given label value.
return:
+ un_normalized_prob: a float that represents the un_normalized probability that a record belongs to the given class label.
End of explanation
"""
def normalize(probability_list):
sum_of_probabilities = 0
normalized_list = []
for prob_tuple in probability_list:
sum_of_probabilities += prob_tuple[1]
for prob_tuple in probability_list:
normalized_prob = prob_tuple[1] / sum_of_probabilities
normalized_list.append((prob_tuple[0], normalized_prob))
normalized_list.sort(key=lambda x: x[1], reverse=True)
return normalized_list
"""
Explanation: normalize
A helper function that normalizes a list of probabilities. The list of probabilities is for a single record, and should have the following structure:
python
[(label, probability), (label, probability)]
These probabilities should be un_normalized probabilities for each label.
This function normalizes the probabilities by summing the probabilities for each label together, then calculating the normalized probability for each label by dividing the probability for that label by the sum of all the probabilities.
This normalized probability is then placed into a new list with the same structure and same corresponding label.
The list of normalized probabilities is then SORTED in descending order, i.e. the label with the highest probability is at index position 0 of the list.
This new normalized list of probabilities is then returned.
This function is important because it calculates the probabilities that are then used to choose which label should describe a record. This is done during validation.
input:
+ probability_list: a list of tuples, as described by: [(label, probability), (label, probability)]
return:
+ normalized_list: a list of tuples, as described by: [(label, probability), (label, probability)] with the probabilities being normalized as described above.
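Assuming the behavior described above, a self-contained rerun of the normalization logic on toy un-normalized scores looks like this:

```python
# Self-contained sketch mirroring the normalize function described above,
# applied to made-up un-normalized scores.
def normalize_sketch(probability_list):
    # divide each probability by the total, then sort descending
    total = sum(p for _, p in probability_list)
    normalized = [(label, p / total) for label, p in probability_list]
    normalized.sort(key=lambda pair: pair[1], reverse=True)
    return normalized

result = normalize_sketch([('p', 1.0), ('e', 3.0)])
print(result)  # [('e', 0.75), ('p', 0.25)]
```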
End of explanation
"""
def classify_instance(distribution, instance):
labels = [get_positive_label(), get_negative_label()]
probability_results = []
for label in labels:
probability = calculate_probability_of(distribution, instance, label)
probability_results.append((label, probability))
probability_results = normalize(probability_results)
return probability_results
"""
Explanation: classify_instance
A helper that does most of the work to classify a given instance of a record.
It works as follows:
+ create a list of possible labels
+ initialize results list.
+ for each label
+ calculate the un_normalized probability of the instance using calculate_probability_of
+ add the probability to the results list as a tuple of (label, un_normalized probability)
+ normalize the probabilities, using normalize
+ note that now the list of results (a list of tuples) is now sorted in descending order by the value of the probability
return the normalized probabilities for that instance of a record.
This is important because this list describes the probabilities that this record should have a given label.
The first tuple in the list is the one whose label has the highest probability for this record.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instance: a record, as described by create_record
return:
+ probability_results: a list of tuples with the structure [(label, normalized probability), (label, normalized probability)], sorted in descending order by probability.
NOTE: these are the probabilities for a SINGLE record
End of explanation
"""
def classify(distribution, instances):
results = []
for instance in instances:
results.append(classify_instance(distribution, instance))
return results
"""
Explanation: classify
A function to classify a list of instances(records).
Given a list of instances (records), classify each instance using classify_instance and put the result into a result list. Return the result list after each instance has been classified.
The Structure of the return list will be a List of lists where each inner list is a list of tuples, as described by the classify_instance function. An example will look as follows:
python
[ [('e', .999),('p', .001)], [('p', .78), ('e', .22)] ]
The first list [('e', .999),('p', .001)] corresponds to the probabilities for the first instance in the instances list and the second list to the second instance of the instances list. So on and so forth for each entry in the instances list.
input:
+ distribution: a dictionary, with the structure specified in create_distribution_dict, with values that are the probabilities.
+ instances: a list of records, as described by create_record
return:
+ results: a list of lists of tuples as described above.
End of explanation
"""
def evaluate(test_data, classifications):
number_of_errors = 0
for record, classification in zip(test_data, classifications):
if record['label'] != classification[0][0]:
number_of_errors += 1
return number_of_errors/len(test_data)
"""
Explanation: evaluate
The main evaluation method. Uses a simple $\frac{\text{num errors}}{\text{total data points}}$ ratio to calculate the error rate of the Naive Bayes Classifier.
Given a list of records (test_data) and a list of predicted classifications for that data set, run through both lists, and compare the label for each record to the predicted classification. If they do not match, increase the number of errors seen.
The label for the predicted classification is at position 0 of the predicted probabilities list, and at position 0 of the tuple that holds the label and the probability of that label. i.e. for a classifications list that is as follows:
python
[ [('e', .999),('p', .001)], [('p', .78), ('e', .22)] ]
The predicted label for record 1 is 'e' since the corresponding predicted probabilities are [('e', .999),('p', .001)], the most likely label is at position 0 in the list, since they are sorted from most probable to least probable. Position 0 of the list gives us ('e', .999). The label for this selected probability is then at position 0 of the tuple, which gives us 'e'.
This label is then compared to the actual label for the record for correctness.
Return the number of errors seen divided by the total number of data points. This is the error rate.
input:
+ test_data: a list of records
+ classifications: a list of lists of tuples, as described by the classify function.
return:
+ error_rate: a float that represents the number of errors / total number of data points.
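A self-contained toy rerun of that error-rate logic (the records and classifications below are invented for illustration):

```python
# Toy rerun of the error-rate calculation described above.
test_data = [{'label': 'e'}, {'label': 'p'}, {'label': 'e'}, {'label': 'e'}]
classifications = [
    [('e', 0.9), ('p', 0.1)],   # correct
    [('e', 0.6), ('p', 0.4)],   # wrong: actual label is 'p'
    [('e', 0.8), ('p', 0.2)],   # correct
    [('e', 0.7), ('p', 0.3)],   # correct
]
# predicted label lives at classification[0][0], as explained above
errors = sum(1 for record, cls in zip(test_data, classifications)
             if record['label'] != cls[0][0])
error_rate = errors / len(test_data)
print(error_rate)  # 0.25
```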
End of explanation
"""
import math
import random

test_records = read_file()
random.shuffle(test_records)
half_way = int(math.floor(len(test_records)/2))
set_1 = test_records[:half_way]
set_2 = test_records[half_way:]
"""
Explanation: Put your main function calls here.
Set up Training Sets
Shuffle training set to ensure no bias from data order
End of explanation
"""
distro_1 = learn(set_1)
"""
Explanation: Train Naive Bayes 1 on Set 1
End of explanation
"""
b1_c2 = classify(distro_1, set_2)
"""
Explanation: Get Predicted Classifications for Set 2 From Naive Bayes 1
End of explanation
"""
evaluation_b1_c2 = evaluate(set_2, b1_c2)
print("Error Rate for Naive Bayes 1 with Set 2 = {}".format(evaluation_b1_c2))
"""
Explanation: Evaluate Predicted Set 2 against Actual Set 2
End of explanation
"""
distro_2 = learn(set_2)
"""
Explanation: Train Naive Bayes 2 on Set 2
End of explanation
"""
b2_c1 = classify(distro_2, set_1)
"""
Explanation: Get Predicted Classifications for Set 1 From Naive Bayes 2
End of explanation
"""
evaluation_b2_c1 = evaluate(set_1, b2_c1)
print("Error Rate for Naive Bayes 2 with Set 1 = {}".format(evaluation_b2_c1))
"""
Explanation: Evaluate Predicted Set 1 against Actual Set 1
End of explanation
"""
average_error = (evaluation_b1_c2 + evaluation_b2_c1)/2
print("Average Error Rate: {}".format(average_error))
"""
Explanation: Calculate Average Error for Both Naive Bayes Distributions
End of explanation
"""
|
unpingco/Python-for-Probability-Statistics-and-Machine-Learning | chapters/probability/notebooks/ProbabilityInequalities.ipynb | mit |
from pprint import pprint
import textwrap
import sys, re
"""
Explanation: Python for Probability, Statistics, and Machine Learning
End of explanation
"""
import sympy
import sympy.stats as ss
t=sympy.symbols('t',real=True)
x=ss.ChiSquared('x',1)
"""
Explanation: Useful Inequalities
In practice, few quantities can be analytically calculated. Some knowledge
of bounding inequalities helps find the ballpark for potential solutions. This
sections discusses three key inequalities that are important for
probability, statistics, and machine learning.
Markov's Inequality
Let $X$ be a non-negative random variable
and suppose that $\mathbb{E}(X) < \infty$. Then,
for any $t>0$,
$$
\mathbb{P}(X>t)\leq \frac{\mathbb{E}(X)}{t}
$$
This is a foundational inequality that is
used as a stepping stone to other inequalities. It is easy
to prove. Because $X\ge 0$, we have the following,
$$
\begin{align}
\mathbb{E}(X)&=\int_0^\infty x f_x(x)dx =\underbrace{\int_0^t x f_x(x)dx}_{\text{omit this}}+\int_t^\infty x f_x(x)dx \\
&\ge\int_t^\infty x f_x(x)dx \ge t\int_t^\infty f_x(x)dx = t \mathbb{P}(X>t)
\end{align}
$$
The step that establishes the inequality is the part where the
$\int_0^t x f_x(x)dx$ is omitted. For a particular $f_x(x)$ that may be
concentrated around the $[0,t]$ interval, this could be a lot to throw out.
For that reason, the Markov Inequality is considered a loose inequality,
meaning that there is a substantial gap between both sides of the inequality.
For example, as shown in Figure, the
$\chi^2$ distribution has a lot of its mass on the left, which would be omitted
in the Markov Inequality. Figure shows
the two curves established by the Markov Inequality. The gray shaded region is
the gap between the two terms and indicates the looseness of the bound
(fatter shaded region) for this case.
<!-- dom:FIGURE: [fig-probability/ProbabilityInequalities_001.png, width=500 frac=0.75] The $\chi_1^2$ density has much of its weight on the left, which is excluded in the establishment of the Markov Inequality. <div id="fig:ProbabilityInequalities_001"></div> -->
<!-- begin figure -->
<div id="fig:ProbabilityInequalities_001"></div>
<p>The $\chi_1^2$ density has much of its weight on the left, which is excluded in the establishment of the Markov Inequality.</p>
<img src="fig-probability/ProbabilityInequalities_001.png" width=500>
<!-- end figure -->
<!-- dom:FIGURE: [fig-probability/ProbabilityInequalities_002.png, width=500 frac=0.75] The shaded area shows the region between the curves on either side of the Markov Inequality. <div id="fig:ProbabilityInequalities_002"></div> -->
<!-- begin figure -->
<div id="fig:ProbabilityInequalities_002"></div>
<p>The shaded area shows the region between the curves on either side of the Markov Inequality.</p>
<img src="fig-probability/ProbabilityInequalities_002.png" width=500>
<!-- end figure -->
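The bound can also be checked numerically without any plotting. Taking $X \sim \text{Exp}(1)$ as a different, analytically convenient example (so $\mathbb{E}(X)=1$ and $\mathbb{P}(X>t)=e^{-t}$):

```python
import math

# Markov bound check for X ~ Exp(1): P(X > t) = exp(-t) vs E[X]/t = 1/t
checks = []
for t in [1.0, 2.0, 5.0, 10.0]:
    exact = math.exp(-t)   # P(X > t)
    bound = 1.0 / t        # E[X] / t
    checks.append(exact <= bound)
    print(f"t={t:5.1f}  P(X>t)={exact:.5f}  Markov bound={bound:.5f}")
```

Note how the gap between the exact tail probability and the bound grows with $t$, illustrating the looseness discussed above.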
Chebyshev's Inequality
Chebyshev's Inequality drops out directly from the Markov Inequality. Let
$\mu=\mathbb{E}(X)$ and $\sigma^2=\mathbb{V}(X)$. Then, we have
$$
\mathbb{P}(\vert X-\mu\vert \ge t) \le \frac{\sigma^2}{t^2}
$$
Note that if we normalize so that $Z=(X-\mu)/\sigma$, we
have $\mathbb{P}(\vert Z\vert \ge k) \le 1/k^2$. In particular,
$\mathbb{P}(\vert Z\vert \ge 2) \le 1/4$. We can illustrate this
inequality using Sympy statistics module,
End of explanation
"""
r = ss.P((x-1) > t,x>1)+ss.P(-(x-1) > t,x<1)
"""
Explanation: To get the left side of the Chebyshev inequality, we
have to write this out as the following conditional probability,
End of explanation
"""
w=(1-ss.cdf(x)(t+1))+ss.cdf(x)(1-t)
"""
Explanation: This is because of certain limitations in the statistics module at
this point in its development regarding the absolute value function. We could
take the above expression, which is a function of $t$ and attempt to compute
the integral, but that would take a very long time (the expression is very long
and complicated, which is why we did not print it out above). This is because
Sympy is a pure-python module that does not utilize any C-level optimizations
under the hood. In this situation, it's better to use the built-in cumulative
distribution function as in the following (after some rearrangement of the terms),
End of explanation
"""
fw=sympy.lambdify(t,w)
"""
Explanation: To plot this, we can evaluate it at a variety of t values by using
the .subs substitution method, but it is more convenient to use the
lambdify method to convert the expression to a function.
End of explanation
"""
list(map(fw, [0, 1, 2, 3, 4]))
"""
Explanation: Then, we can evaluate this function using something like
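As a final cross-check of the Chebyshev bound for $\chi_1^2$ (which has mean $1$ and variance $2$), here is a small Monte Carlo sketch using only the standard library; the sample size and seed are arbitrary choices:

```python
import random

# Monte Carlo cross-check of Chebyshev for X ~ chi-squared(1):
# X = Z**2 with Z standard normal, so E[X] = 1 and Var(X) = 2.
random.seed(42)
n = 100_000
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]
t = 2.0
empirical = sum(1 for x in samples if abs(x - 1.0) >= t) / n
chebyshev_bound = 2.0 / t ** 2   # sigma^2 / t^2
print(f"empirical P(|X-1|>=2) = {empirical:.4f}  <=  bound {chebyshev_bound}")
```

The empirical tail probability comes out well under the bound of $0.5$, again showing how loose these inequalities can be in practice.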
End of explanation
"""
|
LucaCanali/Miscellaneous | Spark_Physics/HEP_benchmark/ADL_HEP_Query_Benchmark_Q1_Q5.ipynb | apache-2.0 |
# Download the data if not yet available locally
# Download the reduced data set (2 GB)
! wget -r -np -R "index.html*" -e robots=off https://sparkdltrigger.web.cern.ch/sparkdltrigger/Run2012B_SingleMu_sample.orc/
# This downloads the full dataset (16 GB)
# ! wget -r -np -R "index.html*" -e robots=off https://sparkdltrigger.web.cern.ch/sparkdltrigger/Run2012B_SingleMu.orc/
# Start the Spark Session
# This uses local mode for simplicity
# The use of findspark is optional
import findspark
findspark.init("/home/luca/Spark/spark-3.2.1-bin-hadoop3.2")
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("HEP benchmark")
.master("local[4]")
.config("spark.driver.memory", "4g")
.config("spark.sql.orc.enableNestedColumnVectorizedReader", "true")
.getOrCreate()
)
# Read data for the benchmark tasks
# download data as detailed at https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
path = "sparkdltrigger.web.cern.ch/sparkdltrigger/"
input_data = "Run2012B_SingleMu_sample.orc"
# use this if you downloaded the full dataset
# input_data = "Run2012B_SingleMu.orc"
df_events = spark.read.orc(path + input_data)
df_events.printSchema()
print(f"Number of events: {df_events.count()}")
"""
Explanation: HEP Benchmark Queries Q1 to Q5
This follows the IRIS-HEP benchmark
and the article Evaluating Query Languages and Systems for High-Energy Physics Data
and provides implementations of the benchmark tasks using Apache Spark.
The workload and data:
- Benchmark jobs are implemented following the IRIS-HEP benchmark
- The input data is a series of events from CMS opendata
- The job output is typically a histogram
- See also https://github.com/LucaCanali/Miscellaneous/tree/master/Spark_Physics
Author and contact: Luca.Canali@cern.ch
February, 2022
End of explanation
"""
# Compute the histogram for MET_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bin values and counts for each bin (N_events)
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
df_events
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q1
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_T$ (missing transverse energy) of all events.
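The width_bucket call used in the query above is a Spark SQL function, but its bucket arithmetic can be modeled in plain Python (a simplified sketch that ignores NULL handling):

```python
import math

def width_bucket(x, min_val, max_val, num_bins):
    """Simplified model of SQL width_bucket for ascending ranges:
    returns 0 below the range and num_bins + 1 at or above max_val."""
    if x < min_val:
        return 0
    if x >= max_val:
        return num_bins + 1
    step = (max_val - min_val) / num_bins
    return int(math.floor((x - min_val) / step)) + 1

print(width_bucket(0.0, 0, 100, 100))   # 1   (first bin)
print(width_bucket(49.5, 0, 100, 100))  # 50
print(width_bucket(-5, 0, 100, 100))    # 0   (underflow bin)
print(width_bucket(100, 0, 100, 100))   # 101 (overflow bin)
```

The underflow and overflow buckets are why the plotting cells cut the first and last bins before drawing.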
End of explanation
"""
# Jet_pt contains arrays of jet measurements
df_events.select("Jet_pt").show(5,False)
# Use the explode function to extract array data into DataFrame rows
df_events_jet_pt = df_events.selectExpr("explode(Jet_pt) as Jet_pt")
df_events_jet_pt.printSchema()
df_events_jet_pt.show(10, False)
# Compute the histogram for Jet_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bin values and counts for each bin (N_events)
min_val = 15
max_val = 60
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
df_events_jet_pt
.selectExpr(f"width_bucket(Jet_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q2
Plot the $𝑝_𝑇$ (transverse momentum) of all jets in all events
End of explanation
"""
# Take Jet arrays for pt and eta and transform them to rows with explode()
df1 = df_events.selectExpr("explode(arrays_zip(Jet_pt, Jet_eta)) as Jet")
df1.printSchema()
df1.show(10, False)
# Apply a filter on Jet_eta
q3 = df1.select("Jet.Jet_pt").filter("abs(Jet.Jet_eta) < 1")
q3.show(10,False)
# Compute the histogram for Jet_pt
# The Spark function "width_bucket" is used to generate the histogram bucket number
# a groupBy operation with count is used to fill the histogram
# The result is a histogram with bin values and counts for each bin (N_events)
min_val = 15
max_val = 60
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
q3
.selectExpr(f"width_bucket(Jet_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$p_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $p_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q3
Plot the $𝑝_𝑇$ of jets with |𝜂| < 1 (𝜂 is the jet pseudorapidity).
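In plain Python terms, the arrays_zip/explode/filter combination above amounts to the following per-event logic (toy jet values, not Spark code):

```python
# Toy model of Q3's arrays_zip + explode + eta filter on one event's jet arrays
jet_pt  = [25.0, 42.0, 18.0]
jet_eta = [0.5, -1.8, 0.9]

# zip pairs pt with eta per jet; the filter keeps only central jets (|eta| < 1)
selected_pt = [pt for pt, eta in zip(jet_pt, jet_eta) if abs(eta) < 1]
print(selected_pt)  # [25.0, 18.0]
```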
End of explanation
"""
# This will use MET_pt and Jet_pt
df_events.select("MET_pt","Jet_pt").show(10,False)
# The filter is pushed inside the Jet_pt arrays
# This uses Spark's higher-order functions for array processing
q4 = df_events.select("MET_pt").where("cardinality(filter(Jet_pt, x -> x > 40)) > 1")
q4.show(5,False)
# compute the histogram for MET_pt
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
q4
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlim(min_val, max_val)
ax.set_xlabel('$𝐸^{𝑚𝑖𝑠𝑠}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $𝐸^{𝑚𝑖𝑠𝑠}_T$ ")
plt.show()
"""
Explanation: Benchmark task: Q4
Plot the $𝐸^{𝑚𝑖𝑠𝑠}_𝑇$ of the events that have at least two jets with
$𝑝_𝑇$ > 40 GeV (gigaelectronvolt).
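The higher-order-function filter in Q4 corresponds to this per-event selection in plain Python (toy events for illustration):

```python
# Toy model of Q4's event selection: keep an event's MET only if it has
# at least two jets with pt > 40 GeV
events = [
    {"MET_pt": 12.0, "Jet_pt": [45.0, 41.0, 10.0]},  # passes: two jets > 40
    {"MET_pt": 30.0, "Jet_pt": [45.0, 20.0]},        # fails: only one jet > 40
]
selected_met = [e["MET_pt"] for e in events
                if len([pt for pt in e["Jet_pt"] if pt > 40]) > 1]
print(selected_met)  # [12.0]
```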
End of explanation
"""
# filter the events
# select only events with 2 muons
# the 2 muons must have opposite charge
df_muons = df_events.filter("nMuon == 2").filter("Muon_charge[0] != Muon_charge[1]")
# Formula for dimuon mass in pt, eta, phi, m coordinates
# see also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf
# and https://en.wikipedia.org/wiki/Invariant_mass
df_with_dimuonmass = df_muons.selectExpr("MET_pt","""
sqrt(2 * Muon_pt[0] * Muon_pt[1] *
( cosh(Muon_eta[0] - Muon_eta[1]) - cos(Muon_phi[0] - Muon_phi[1]) )
) as Dimuon_mass
""")
# apply a filter on the dimuon mass
Q5 = df_with_dimuonmass.filter("Dimuon_mass between 60 and 120")
# compute the histogram for MET_pt
min_val = 0
max_val = 100
num_bins = 100
step = (max_val - min_val) / num_bins
histogram_data = (
Q5
.selectExpr(f"width_bucket(MET_pt, {min_val}, {max_val}, {num_bins}) as bucket")
.groupBy("bucket")
.count()
.orderBy("bucket")
)
# convert bucket number to the corresponding MET_pt value
histogram_data = histogram_data.selectExpr(f"round({min_val} + (bucket - 1/2) * {step},2) as value", "count as N_events")
# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas Dataframe.
%time histogram_data_pandas=histogram_data.toPandas()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
# cut the first and last bin
x = histogram_data_pandas.iloc[1:-1]["value"]
y = histogram_data_pandas.iloc[1:-1]["N_events"]
# line plot
f, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_xlabel('$E^{miss}_T$ (GeV)')
ax.set_ylabel('Number of Events')
ax.set_title("Distribution of $E^{miss}_T$")
plt.show()
spark.stop()
"""
Explanation: Benchmark task: Q5
Plot the $E^{miss}_T$ of events that have an opposite-charge muon
pair with an invariant mass between 60 GeV and 120 GeV.
End of explanation
"""
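The dimuon-mass formula used above can be checked on a toy pair with plain NumPy (a sketch; the muon kinematics here are made up):

```python
import numpy as np

def dimuon_mass(pt1, pt2, eta1, eta2, phi1, phi2):
    # invariant mass of a muon pair in pt/eta/phi coordinates (muon mass neglected)
    return np.sqrt(2 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

# a back-to-back pair: same eta, opposite phi
m = dimuon_mass(45.0, 45.0, 0.0, 0.0, 0.0, np.pi)
print(round(float(m), 2))  # 90.0, i.e. in the Z-boson mass window
```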
# Source: kingsgeocomp/code-camp, notebook-10-recap.ipynb (MIT license)
cities = ["Bristol", "London", "Manchester", "Edinburgh", "Belfast", "York"]
"""
Explanation: Recap 2
Second Checkpoint
Since the first recap, you've learned about lists, dictionaries and loops. Let's revise those concepts and how to use them in this notebook before continuing on to some new material. Answer the questions as best you can, working through any error messages you receive and remembering to refer back to previous notebooks.
Lists
First, here's a reminder of some useful methods (i.e. functions) that apply to lists:
| Method | Action |
|------------------------|------------------------------------------------------------------|
| list.count(x) | Return the number of times x appears in the list |
| list.insert(i, x) | Insert value x at a given position i |
| list.pop([i]) | Remove and return the value at position i (i is optional) |
| list.remove(x) | Remove the first element from the list whose value is x |
| list.reverse() | Reverse the elements of the list in place |
| list.sort() | Sort the items of the list in place |
| list.index(x) | Find the first occurence of x in the list |
| list[x:y] | Subset the list from index x to y-1 |
Interacting with Lists
Replace ??? in the following code blocks to make the code work as instructed in the comments. All of the methods that you need are listed above, so this is about testing yourself on your understanding both of how to read the help and how to index elements in a list.
a) The next line creates a list of city names (each element is a string) - run the code and check you understand what it is doing.
End of explanation
"""
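The methods in the table can be exercised on a scratch list (a quick demo, separate from the exercises):

```python
nums = [3, 1, 2, 1]
print(nums.count(1))   # 2
nums.insert(0, 9)      # [9, 3, 1, 2, 1]
nums.remove(1)         # removes the first 1 -> [9, 3, 2, 1]
nums.sort()            # [1, 2, 3, 9]
nums.reverse()         # [9, 3, 2, 1]
print(nums.pop())      # 1; list is now [9, 3, 2]
print(nums.index(3))   # 1
print(nums[0:2])       # [9, 3]
```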
print("The position of Manchester in the list is: " + str(cities.???('Manchester')))
print("The position of Manchester in the list is: " + str(cities.index('Manchester')))
"""
Explanation: b) Replace the ??? so that it prints the position of Manchester in the list
End of explanation
"""
print(cities[2 + ???])
print(cities[2 + 2])
"""
Explanation: c) Replace the ??? so that it prints Belfast
End of explanation
"""
print(cities[???])
print(cities[-2])
"""
Explanation: d) Use a negative index to print Belfast
End of explanation
"""
print(cities[???])
print(cities[6]) # any index of 6 or more would do it (only indices 0-5 exist)
"""
Explanation: e) Force Python to generate a list index out of range error. NB: This error happens you provide an index for which a list element does not exist
End of explanation
"""
temperatures = [15.6, 16.5, 13.4, 14.0, 15.2, 14.8]
"""
Explanation: f) Think about what the next line creates, then run the code.
End of explanation
"""
print(temperatures[???])
print(temperatures[1:4])
"""
Explanation: g) What would you change ??? to, to return [16.5, 13.4, 14.0]?
End of explanation
"""
print(temperatures[???])
print(temperatures[???])
print(temperatures[4:6])
print(temperatures[-3:-1])
"""
Explanation: h) What are two different ways of getting [15.2, 14.8] from the temperatures list?
End of explanation
"""
city="Manchester" # Use this to start the solution...
#your code here
city="Manchester" # Use this to get the solution...
index = cities.index(city)
print("The average temperature in " + cities[index] + " is " + str(temperatures[index]) + " degrees.")
"""
Explanation: i) Notice that the list of temperatures is the same length as the list of cities, that's because these are (roughly) average temperatures for each city! Given this, how do you print: "The average temperature in Manchester is 13.4 degrees." without doing any of the following:
1. Using a list index directly (i.e. cities[2] and temperatures[2]) or
2. Hard-coding the name of the city?
To put it another way, neither of these solutions is the answer:
python
print("The average temperature in Manchester is " + str(temperatures[2]) + " degrees.")
or
python
city=2
print("The average temperature in " + cities[city] + " is " + str(temperatures[city]) + " degrees.")
Hint: you need to combine some of the ideas we've used above!
End of explanation
"""
???
city="Belfast"
index = cities.index(city)
print("The average temperature in " + cities[index] + " is " + str(temperatures[index]) + " degrees.")
"""
Explanation: Now copy+paste your code and change only one thing in order to print out: "The average temperature in Belfast is 15.2 degrees"
End of explanation
"""
list1 = [1, 2, 3]
list2 = [4, 5, 6]
"""
Explanation: 1.2 Manipulating Multiple Lists
We'll create two lists for the next set of questions
End of explanation
"""
print( ??? )
print ( list1 + list2 )
"""
Explanation: j) How do you get Python to print: [1, 2, 3, 4, 5, 6]
End of explanation
"""
print( ??? )
print( list1 + [list2] )
"""
Explanation: k) How to you get Python to print: [1, 2, 3, [4, 5, 6]]
End of explanation
"""
list1 = [1, 2, 3]
list2 = [4, 5, 6]
"""
Explanation: Let's re-set the lists (run the next code block)
End of explanation
"""
list3 = ???
list3.???
print(list3)
list3 = list1+list2
list3.reverse()
print(list3)
"""
Explanation: l) How would you print out: [6, 5, 4, 3, 2, 1] ?
End of explanation
"""
list1.???
list2.???
print( list1+list2 )
list1.reverse()
list2.reverse()
print( list1+list2 )
"""
Explanation: m) How would you print out: [3, 2, 1, 6, 5, 4] ?
End of explanation
"""
list1.???
list2.???
print( list1+list2 )
list1.remove(1)
list2.remove(4)
print( list1+list2 )
"""
Explanation: n) How would you print out [3, 2, 6, 5] with a permanent change to the list (not slicing)? NB: this follows on from the previous question, so note that the order is still 'reversed'.
End of explanation
"""
cities = {
'San Francisco': [37.77, -122.43, 'SFO'],
'London': [51.51, -0.08, 'LDN'],
'Paris': [48.86,2.29, 'PAR'],
'Beijing': [39.92,116.40 ,'BEI'],
}
"""
Explanation: Dictionaries
Remember that dictionaries (a.k.a. dicts) are like lists in that they are data structures containing multiple elements. A key difference between dictionaries and lists is that while elements in lists are ordered, dicts are unordered. This means that whereas for lists we use integers as indexes to access elements, in dictonaries we use 'keys' (which can multiple different types; strings, integers, etc.). Consequently, an important concept for dicts is that of key-value pairs.
Creating an Atlas
Replace ??? in the following code block to make the code work as instructed in the comments. If you need some hints and reminders, revisit Code Camp Lesson 7.
Run the code and check you understand what the data structure that is being created (the data for each city are latitude, longitude and airport code)
End of explanation
"""
cities = ???
cities = {
'San Francisco': [37.77, -122.43, 'SFO'],
'London': [51.51, -0.08, 'LDN'],
'Paris': [48.86,2.29, 'PAR'],
'Beijing': [39.92,116.40 ,'BEI'],
'Chennai': [13.08, 80.27,'MAA']
}
"""
Explanation: a) Add a record to the dictionary for Chennai (data here)
End of explanation
"""
print(???)
print("The airport code for Chennai is " + cities["Chennai"][2])
"""
Explanation: b) In one line of code, print out the airport code for Chennai
End of explanation
"""
print(cities['Berlin'])
print(cities.get('Berlin'))
#Berlin is not in the dict.
#The first code block above returns an error because Berlin is missing
#The second does not because it uses the .get method which handles the error for us (returning a None value)
#This second method is 'safer' because of how it handles this error
#compare to use using the two different methods for London
print(cities['London'])
print(cities.get('London'))
"""
Explanation: c) Check you understand the difference between the following two blocks of code by running them, checking the output and editing them (e.g. try the code again, but replacing Berlin with London)
End of explanation
"""
for k, v in cities.items():
print(k)
for k, v in cities.items():
print("The city of " + str(k) + " has an airport code of " + str(v[2]) )
"""
Explanation: d) Adapting the code below, print out the city name and airport code for every city in our Atlas.
End of explanation
"""
for ??? in cities.???:
print(??? + " is at latitude " + str(???))
for city, latitude in cities.items():
print(city + " is at latitude " + str(latitude[0]))
"""
Explanation: Loops
Recall from the previous notebook that loops are a way to iterate (or repeat) chunks of code. The two most common ways to iterate a set of commands are the while loop and the for loop.
Working with Loops
The questions below use for loops. Replace ??? in the following code block to make the code work as instructed in the comments. If you need some hints and reminders, revisit the previous notebook.
a) Print out the name and latitude of every city in the cities dictionary using a for loop
End of explanation
"""
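As a reminder, the same kind of iteration can also be written with a while loop (the exercises below use for loops; `places` is a stand-in list):

```python
places = ["Bristol", "London", "York"]  # a small stand-in list
i = 0
while i < len(places):       # keep going until the index runs off the end
    print(places[i])
    i += 1                   # don't forget this, or the loop never ends
```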
for c in ???:
print(???)
for c in cities.items():
print(c)
"""
Explanation: b) Print out every city on a separate line using a for loop:
End of explanation
"""
citiesB = [
{'name': 'San Francisco',
'position': [37.77, -122.43],
'airport': 'SFO'},
{'name': 'London',
'position': [51.51, -0.08],
'airport': 'LDN'},
{'name': 'Paris',
'position': [48.86, 2.29],
'airport': 'PAR'},
{'name': 'Beijing',
'position': [39.92, 116.40],
'airport': 'BEI'}
]
for ??? in citiesB.???:
print(??? + " is at latitude " + str(???))
citiesB = [
{'name': 'San Francisco',
'position': [37.77, -122.43],
'airport': 'SFO'},
{'name': 'London',
'position': [51.51, -0.08],
'airport': 'LDN'},
{'name': 'Paris',
'position': [48.86, 2.29],
'airport': 'PAR'},
{'name': 'Beijing',
'position': [39.92, 116.40],
'airport': 'BEI'}
]
for city in citiesB:
print(city['name'] + " is at latitude " + str(city['position'][0]))
"""
Explanation: c) Now print using a loop this new data structure:
End of explanation
"""
# Source: n-witt/MachineLearningWithText_SS2017, exercises/solutions/1 Numpy.ipynb (GPL-3.0 license)
import numpy as np
try:
np
except NameError:
print('Numpy not correctly imported')
"""
Explanation: Import the numpy package under the name np
End of explanation
"""
Z = np.zeros(10)
print(Z)
assert type(Z).__module__ == np.__name__
assert len(Z) == 10
assert sum(Z) == 0
"""
Explanation: 2. Create a null vector Z of size 10. Don't use [0, 0, ...] notation.
End of explanation
"""
Z = np.zeros(10)
Z[4] = 1
print(Z)
assert type(Z).__module__ == np.__name__
assert len(Z) == 10
assert sum(Z) == 1
"""
Explanation: 3. Create a null vector of size 10 but the fifth value which is 1
End of explanation
"""
Z = np.arange(10,50)
print(Z)
assert type(Z).__module__ == np.__name__
assert len(Z) == 40
assert sum(Z) == 1180
"""
Explanation: 4. Create a Numpy vector with values ranging from 10 to 49
End of explanation
"""
Z = Z[::-1]
print(Z)
assert type(Z).__module__ == np.__name__
assert len(Z) == 40
assert sum(Z) == 1180
assert Z[0] == 49
assert Z[-1] == 10
"""
Explanation: 5. Reverse the vector from the previous task (first element becomes last)
End of explanation
"""
Z = np.arange(9).reshape(3,3)
print(Z)
assert Z.shape == (3, 3)
assert np.all(sum(Z) == np.array([9, 12, 15]))
"""
Explanation: 6. Create a 3x3 matrix with values ranging from 0 to 8
End of explanation
"""
nz = np.array([1,2,0,0,4,0])
nz = (nz[nz != 0])
assert np.all(nz == np.array([1, 2, 4]))
"""
Explanation: 7. Find the indices of non-zero elements from [1,2,0,0,4,0] and store the result in nz
End of explanation
"""
Z = np.random.random((3,3,3))
print(Z)
assert Z.shape == (3, 3, 3)
"""
Explanation: 8. Create a 3x3x3 (i.e. three dimensions with three values each) array with random values in variable Z
End of explanation
"""
Z = np.random.random((10,10))
Zmin, Zmax = Z.min(), Z.max()
print(Zmin, Zmax)
z = Z.ravel()
idx = z.argsort()
assert z[idx[0]] == Zmin
assert z[idx[-1]] == Zmax
"""
Explanation: 13. Create a 10x10 array with random values and find the minimum and maximum values and store them in Zmin and Zmax.
End of explanation
"""
Z = np.random.random(30)
mean = Z.mean()
print(mean)
accumulative = 0
for z in Z:
accumulative += z
assert mean - (accumulative/len(Z)) < 0.0001
"""
Explanation: 14. Create a random vector of size 30 and find the mean value using Numpy. Store the result into mean
End of explanation
"""
Z = np.zeros((8,8), dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 1
print(Z)
"""
Explanation: 15. Create a 8x8 matrix and fill it with a chessboard pattern (say, 1 == 'black' and 0 == 'white'). Use fancy indexing.
End of explanation
"""
Z = np.dot(np.ones((5,3)), np.ones((3,2)))
print(Z)
"""
Explanation: 16. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product)
End of explanation
"""
Z = np.arange(11)
Z[(3 < Z) & (Z <= 8)] *= -1
print(Z)
"""
Explanation: 17. Given a array np.arange(11), negate all elements which are between 3 and 8, in place.
End of explanation
"""
# Source: mayuanucas/notes, python/python下划线命名规则.ipynb (Apache-2.0 license)
8 * 9
_ + 8
"""
Explanation: In Python, underscore naming conventions can be quite confusing: single underscore, double underscore, leading versus trailing double underscores... What do they actually do, and when is each used?
1. Single underscore (_)
Typically, a single underscore (_) is used in the following 3 scenarios:
1.1 In the interpreter:
In this case, "_" holds the result of the last statement executed in an interactive interpreter session. This usage was first adopted by the standard CPython interpreter, and other interpreters followed suit.
End of explanation
"""
for _ in range(1, 11):
print(_, end='、 ')
"""
Explanation: 1.2 As a throwaway name:
This is somewhat related to the point above; here "_" serves as a temporary name. That way, anyone reading your code will know that a particular name was assigned but will not be used again later. For example, in the loop below you may not care about the actual counter value, in which case you can use "_".
End of explanation
"""
from django.utils.translation import ugettext as _
from django.http import HttpResponse
def my_view(request):
output = _("Welcome to my site.")
return HttpResponse(output)
"""
Explanation: 1.3 Internationalization:
You may also have seen "_" used as a function. In that case, it is conventionally the name of the function that performs internationalization and localization string translation lookups, a convention that appears to originate from and follow the corresponding C practice. For example, in the "Translation" chapter of the Django documentation, you will see code like the following:
End of explanation
"""
class A(object):
def _internal_use(self):
pass
def __method_name(self):
pass
print(dir(A()))
"""
Explanation: Notice that scenarios two and three can conflict with each other, so we should avoid using "_" as a temporary name in any code block that also uses "_" as the internationalization lookup function.
2. Single leading underscore (e.g. _shahriar)
Programmers use a single leading underscore to mark a name as "private". This is something of a convention, so that others (or you yourself) using the code will know that names starting with "_" are for internal use only. As the Python documentation puts it:
Names prefixed with an underscore "_" (e.g. _spam) should be treated as non-public parts of the API (whether functions, methods, or data members). They should be regarded as implementation details, subject to change without notice.
As said above, this really is a convention, but it does carry some meaning for the interpreter: if you write "from <module/package> import *", names starting with "_" are not imported unless the module's or package's "__all__" list explicitly includes them. Note, however, that if you import a module with "import a_module", you can still access such objects as a_module._some_var.
Another, rarely used case of the single leading underscore: C extension libraries are sometimes named with a leading underscore and then wrapped by a Python module without it. For example, the struct module is actually a Python wrapper around the C module _struct.
3. Double leading underscore (e.g. __shahriar)
A double leading underscore (__) on a name (specifically a method name) is not just a convention; it has a specific meaning to the interpreter. Python uses it to avoid name clashes with names defined in subclasses. The Python documentation points out that any identifier of the form "__spam" (at least two leading underscores, at most one trailing underscore) is textually replaced with "_classname__spam", where "classname" is the current class name with leading underscores stripped. For example:
End of explanation
"""
import time
print(time.__name__)
"""
Explanation: As expected, "_internal_use" is unchanged, while "__method_name" has been rewritten as "_ClassName__method_name".
Private variables are converted to this long form (effectively made public) before code generation. The conversion mechanism inserts the class name, prefixed with a single underscore, in front of the variable name. This is known as private name mangling. Consequently, if you create a subclass B of A, you cannot easily override A's "__method_name".
Whether prefixed with a single or a double underscore, such members signal that external developers should not use these variables and functions directly. The double underscore merely prevents incorrect use more directly at the syntax level, although the member can still be reached via the mangled form _ClassName__member. A single underscore can be more convenient during dynamic debugging; as long as everyone on the project agrees not to use underscore-prefixed members directly, a single underscore may be the better choice.
4. Leading and trailing double underscores (e.g. __init__)
This form denotes Python's special method names. It is really just a convention, one that ensures no conflicts with user-defined names as far as the Python runtime is concerned. Typically, you override these methods and implement the functionality you need in them, so that Python can call them; for example, you will often override the "__init__" method when defining a class.
Names with double underscores at both ends are Python's "magic" objects, such as the class members __init__, __del__, __add__, __getitem__, and the globals __file__ and __name__. Python officially recommends never applying this naming style to your own variables or functions; use them only as documented. Although you could invent your own special method names, don't.
5. Aside: if __name__ == "__main__":
All Python modules are objects and have several useful attributes that you can use to conveniently test the modules you write.
Modules are objects, and every module has a built-in attribute __name__. The value of __name__ depends on how the module is used. If you import the module, __name__ is usually the module's filename without path or extension. But you can also run the module directly like a standard program, in which case __name__ takes a special default value: __main__.
End of explanation
"""
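A minimal sketch of the __name__ == "__main__" pattern discussed above (the `add` function here is a hypothetical stand-in):

```python
# a tiny module with a self-test guard
def add(a, b):
    return a + b

if __name__ == "__main__":
    # runs only when this file is executed directly, not when it is imported
    assert add(2, 3) == 5
    print("self-test passed")
```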
__all__ = [
"foo",
"bar",
"egg",
]
"""
Explanation: Once you know this, you can design a test suite for your module inside the module itself by adding this if statement. When you run the module directly, __name__ is __main__, so the test suite executes. When you import the module, __name__ is something else, so the test suite is skipped. This makes it much easier to develop and debug a new module before integrating it into a larger program.
6. Exposing an interface with __all__
Python can expose an interface at the module level:
Doing so is often quite beneficial, as it provides a convention for declaring which names form the public interface.
Unlike Ruby or Java, Python has no language-native visibility control; instead it relies on a set of conventions that everyone is expected to follow voluntarily. For instance, names starting with an underscore should be treated as invisible to outside code. Likewise, __all__ is a convention for a module's public interface; compared with underscores, __all__ provides a "whitelist" of exposed names. Some names that do not start with an underscore (for example, members imported into the current module from elsewhere) can likewise be excluded.
6.1 Controlling the behavior of from xxx import *
Using from xxx import * is of course discouraged in real code (because it becomes hard to tell where a particular function or attribute came from, making debugging and refactoring harder), but it is still common for convenience when debugging in the console. If a module spam does not define __all__, executing from spam import * imports all of spam's members that do not start with an underscore into the current namespace, which may well pollute it. If __all__ is declared explicitly, import * imports only the members listed in __all__. And if __all__ is defined incorrectly and lists a member that does not exist, an exception is raised explicitly instead of being silently ignored.
6.3 Things to note when defining __all__
As stated above, __all__ should be of type list.
__all__ should not be generated dynamically, for example with a list comprehension. The whole point of __all__ is to define the public interface; if it is not written out explicitly as a literal, it loses its meaning.
Even with __all__ defined, you should not use the from xxx import * syntax in non-throwaway code, nor use metaprogramming tricks to simulate Ruby's automatic import. Python, unlike Ruby, has no Module member type; the module itself is the enforcer of namespace isolation. If you break through that layer and introduce many dynamic factors, the code running in production becomes full of uncertainty, and debugging becomes very difficult.
Following the style recommended by PEP 8, __all__ should be written below all import statements and above definitions of functions, constants, and other module members.
If the interface a module needs to expose changes frequently, __all__ can be defined like this:
End of explanation
"""
# Source: AndreySheka/dl_ekb, hw3/HW3_Modules.ipynb (MIT license)
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
"""
Basically, you can think of a module as of a something (black box)
which can process `input` data and produce `ouput` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
More, it should be able to differentiate it if is a part of chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
"""
def forward(self, input):
"""
Takes an input object, and computes the corresponding output of the module.
"""
return self.updateOutput(input)
def backward(self,input, gradOutput):
"""
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
"""
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
"""
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
"""
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
"""
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
"""
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
"""
pass
def zeroGradParameters(self):
"""
Zeroes `gradParams` variable if the module has params.
"""
pass
def getParameters(self):
"""
Returns a list with its parameters.
If the module does not have parameters return empty list.
"""
return []
def getGradParameters(self):
"""
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
"""
return []
    def train(self):
        """
        Sets training mode for the module.
        Training and testing behaviour differs for Dropout, BatchNorm.
        Named `train` rather than `training` so it does not clash with the
        boolean `self.training` attribute set in __init__.
        """
        self.training = True
def evaluate(self):
"""
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
"""
self.training = False
def __repr__(self):
"""
Pretty printing. Should be overrided in every module if you want
to have readable description.
"""
return "Module"
"""
Explanation: Module is an abstract class which defines fundamental methods necessary for a training a neural network. You do not need to change anything here, just read the comments.
End of explanation
"""
class Sequential(Module):
"""
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
"""
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
"""
Adds a module to the container.
"""
self.modules.append(module)
def updateOutput(self, input):
"""
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
"""
# Your code goes here. ################################################
return self.output
def backward(self, input, gradOutput):
"""
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To ech module you need to provide the input, module saw while forward pass,
it is used while computing gradients.
Make sure that the input for `i-th` layer the output of `module[i]` (just the same input as in forward pass)
and NOT `input` to this Sequential module.
!!!
"""
# Your code goes here. ################################################
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
"""
Should gather all parameters in a list.
"""
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
"""
Should gather all gradients w.r.t parameters in a list.
"""
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
"""
Explanation: Sequential container
Define a forward and backward pass procedures.
End of explanation
"""
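The chaining idea behind `forward` can be illustrated with a stand-alone sketch (hypothetical `Double`/`Square` stand-ins, not part of the framework above):

```python
# two toy "modules" that each expose a forward() method
class Double:
    def forward(self, x):
        return 2 * x

class Square:
    def forward(self, x):
        return x * x

modules = [Double(), Square()]
y = 3
for m in modules:          # feed each output into the next module
    y = m.forward(y)
print(y)  # (2 * 3) ** 2 = 36
```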
class Linear(Module):
"""
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
"""
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
pass
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
"""
Explanation: Layers
input: batch_size x n_feats1
output: batch_size x n_feats2
End of explanation
"""
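Before implementing the module, the affine map itself can be sketched with plain NumPy (a sketch only; `W` follows the same `n_out x n_in` layout as the module above):

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(4, 3)          # batch_size x n_in
W = rng.randn(2, 3)          # n_out x n_in, matching the module's layout
b = rng.randn(2)
out = X @ W.T + b            # forward pass of a fully-connected layer
print(out.shape)             # (4, 2): batch_size x n_out
```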
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "SoftMax"
"""
Explanation: This one is probably the hardest but as others only takes 5 lines of code in total.
- input: batch_size x n_feats
- output: batch_size x n_feats
End of explanation
"""
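A plain-NumPy sketch of the stabilized softmax described above (an illustration, not the module solution):

```python
import numpy as np

def softmax(x):
    # subtract the row max for numerical stability, then normalize each row
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

p = softmax(np.array([[1.0, 2.0, 3.0]]))
print(p.sum())  # each row sums to 1
```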
class BatchMeanSubtraction(Module):
def __init__(self, alpha = 0.):
super(BatchMeanSubtraction, self).__init__()
self.alpha = alpha
self.old_mean = None
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "BatchMeanNormalization"
"""
Explanation: One of the most significant recent ideas that impacted NNs a lot is Batch normalization. The idea is simple, yet effective: the features should be whitened ($mean = 0$, $std = 1$) all the way through NN. This improves the convergence for deep models letting it train them for days but not weeks. You are to implement a part of the layer: mean subtraction. That is, the module should calculate mean value for every feature (every column) and subtract it.
Note, that you need to estimate the mean over the dataset to be able to predict on test examples. The right way is to create a variable which will hold smoothed mean over batches (exponential smoothing works good) and use it when forwarding test examples.
When training, calculate the mean as follows:
mean_to_subtract = self.old_mean * alpha + batch_mean * (1 - alpha)
when evaluating (self.training == False) set $alpha = 1$.
input: batch_size x n_feats
output: batch_size x n_feats
End of explanation
"""
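The exponential smoothing rule above can be checked on a toy batch (a sketch, not the module solution):

```python
import numpy as np

alpha = 0.9
old_mean = np.array([0.0, 0.0])
batch = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
batch_mean = batch.mean(axis=0)   # per-feature (column) mean: [2., 3.]
mean_to_subtract = old_mean * alpha + batch_mean * (1 - alpha)
print(mean_to_subtract)           # [0.2 0.3]
```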
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "Dropout"
"""
Explanation: Implement dropout. The idea and implementation are really simple: just multiply the input by a $Bernoulli(p)$ mask.
This is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout.
While training (self.training == True) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. self.output = input.
input: batch_size x n_feats
output: batch_size x n_feats
End of explanation
"""
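Sampling a Bernoulli mask can be sketched as follows (here `p` is assumed to be the drop probability; conventions vary):

```python
import numpy as np

rng = np.random.RandomState(0)
p = 0.5
x = np.ones((2, 4))
# mask entries are 1 with probability (1 - p), 0 otherwise
mask = (rng.uniform(size=x.shape) > p).astype(x.dtype)
print(x * mask)  # dropped positions become 0
```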
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
"""
Explanation: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU):
End of explanation
"""
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "LeakyReLU"
"""
Explanation: Implement Leaky Rectified Linear Unit. Expriment with slope.
End of explanation
"""
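The LeakyReLU mapping itself can be sketched with NumPy (an illustration, not the module solution):

```python
import numpy as np

slope = 0.03
x = np.array([-2.0, 0.0, 3.0])
# leaky ReLU: x where x > 0, slope * x otherwise
y = np.where(x > 0, x, slope * x)
print(y)  # [-0.06  0.    3.  ]
```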
class ELU(Module):
def __init__(self, alpha = 1.0):
super(ELU, self).__init__()
self.alpha = alpha
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "ELU"
"""
Explanation: Implement Exponential Linear Units activations.
End of explanation
"""
class SoftPlus(Module):
def __init__(self):
super(SoftPlus, self).__init__()
def updateOutput(self, input):
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "SoftPlus"
"""
Explanation: Implement SoftPlus activations. Look, how they look a lot like ReLU.
End of explanation
"""
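One numerically careful way to evaluate softplus (a sketch; the naive `log(1 + exp(x))` overflows for large x):

```python
import numpy as np

def softplus(x):
    # algebraically equal to log(1 + exp(x)), but exp is only ever
    # applied to non-positive arguments, so it cannot overflow
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

print(round(float(softplus(np.array(0.0))), 4))  # log(2) ~= 0.6931
```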
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
"""
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateOutput`.
"""
return self.updateOutput(input, target)
def backward(self, input, target):
"""
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateGradInput`.
"""
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
"""
Function to override.
"""
return self.output
def updateGradInput(self, input, target):
"""
Function to override.
"""
return self.gradInput
def __repr__(self):
"""
Pretty printing. Should be overrided in every module if you want
to have readable description.
"""
return "Criterion"
"""
Explanation: Criterions
Criterions are used to score the models answers.
End of explanation
"""
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
"""
Explanation: The MSECriterion, which is basic L2 norm usually used for regression, is implemented here for you.
End of explanation
"""
class ClassNLLCriterion(Criterion):
def __init__(self):
        super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
# Your code goes here. ################################################
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
# Your code goes here. ################################################
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
"""
Explanation: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (the target) in that formula,
remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note that criterions are the only places where you divide by the batch size.
End of explanation
"""
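The loss itself can be sketched in NumPy on a toy batch (an illustration with made-up probabilities, not the module solution):

```python
import numpy as np

def nll(probs, target_onehot):
    eps = 1e-15
    p = np.clip(probs, eps, 1 - eps)
    # mean over the batch of -log p(correct class); one-hot targets
    # select the correct-class probability in each row
    return -np.sum(target_onehot * np.log(p)) / probs.shape[0]

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
target = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(round(nll(probs, target), 4))  # -(ln 0.7 + ln 0.8) / 2 ~= 0.2899
```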
# Source: GoogleCloudPlatform/training-data-analyst, quests/serverlessml/03_tfdata/solution/input_pipeline.ipynb (Apache-2.0 license)
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
!pip install tensorflow==2.1.0 --user
"""
Explanation: Input pipeline into Keras
In this notebook, we will look at how to read large datasets, datasets that may not fit into memory, using TensorFlow. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.
Learning Objectives
Use tf.data to read CSV files
Load the training data into memory
Prune the data by removing columns
Use tf.data to map features and labels
Adjust the batch size of our dataset
Shuffle the dataset to optimize for deep learning
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
End of explanation
"""
import os, json, math
import numpy as np
import shutil
import logging
# SET TF ERROR LOG VERBOSITY
logging.getLogger("tensorflow").setLevel(logging.ERROR)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
# If you're not using TF 2.0+, let's enable eager execution
if tf.version.VERSION < '2.0':
print('Enabling v2 behavior and eager execution; if necessary restart kernel, and rerun notebook')
tf.enable_v2_behavior()
"""
Explanation: Let's make sure we install the necessary version of TensorFlow. After running the pip install above, click Restart the kernel in the notebook so that the Python environment picks up the new packages.
End of explanation
"""
!ls -l ../../data/*.csv
"""
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../data
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
# load the training data
def load_dataset(pattern):
return tf.data.experimental.make_csv_dataset(pattern, 1, CSV_COLUMNS, DEFAULTS)
tempds = load_dataset('../../data/taxi-train*')
print(tempds)
"""
Explanation: Use tf.data to read the CSV files
See the documentation for make_csv_dataset.
If you have TFRecords (which is recommended), use make_batched_features_dataset instead.
End of explanation
"""
# print a few of the rows
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
print(n, row_data)
if n > 2:
break
"""
Explanation: Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary:
End of explanation
"""
# get features, label
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# print a few rows to make sure it works
for n, data in enumerate(tempds):
row_data = {k: v.numpy() for k,v in data.items()}
features, label = features_and_labels(row_data)
print(n, label, features)
if n > 2:
break
"""
Explanation: What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary: (1) remove the unwanted columns "pickup_datetime" and "key", and (2) keep the label separate from the features.
End of explanation
"""
def load_dataset(pattern, batch_size):
return (
tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
# try changing the batch size and watch what happens.
tempds = load_dataset('../../data/taxi-train*', batch_size=2)
print(list(tempds.take(3))) # truncate and print as a list
"""
Explanation: Batching
Let's do both steps (loading, and splitting into features and label)
in our load_dataset function, and also add batching.
End of explanation
"""
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
.cache())
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
tempds = load_dataset('../../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)
print(list(tempds.take(1)))
tempds = load_dataset('../../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)
print(list(tempds.take(1)))
"""
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
End of explanation
"""
|
YaniLozanov/Software-University | Python/Jupyter notebook/04.Complex Conditional Statements/Jupyter notebook/04.Complex Conditional Statements.ipynb | mit | age = float(input())
sex = input()
if sex == "m":
if age >= 16:
print("Mr.")
else:
print("Master")
else:
if age >= 16:
print("Ms.")
else:
print("Miss")
"""
Explanation: <h1 align="center">Complex Conditional Statements<h1>
<h2>01.Personal Titles</h2>
The first task of this topic is to write a console program that **reads an age** (decimal number) and a **gender** ("m" or "f") and **prints a personal title** from among the following:
-> "Mr." - male (gender "m") aged 16 or over
-> "Master" - boy (gender "m") under 16 years of age
-> "Ms." - woman (gender "f") aged 16 or over
-> "Miss" - girl (gender "f") under 16 years of age
End of explanation
"""
product = input()
city = input()
quantity = float(input())
price = 0
if city == "Sofia":
if product == "coffee":
price = quantity * 0.50
elif product == "water":
price = quantity * 0.80
elif product == "beer":
price = quantity * 1.20
elif product == "sweets":
price = quantity * 1.45
elif product == "peanuts":
price = quantity * 1.60
elif city == "Plovdiv":
if product == "coffee":
price = quantity * 0.40
elif product == "water":
price = quantity * 0.70
elif product == "beer":
price = quantity * 1.15
elif product == "sweets":
price = quantity * 1.30
elif product == "peanuts":
price = quantity * 1.50
elif city == "Varna":
if product == "coffee":
price = quantity * 0.45
elif product == "water":
price = quantity * 0.70
elif product == "beer":
price = quantity * 1.10
elif product == "sweets":
price = quantity * 1.35
elif product == "peanuts":
price = quantity * 1.55
print(float("{0:.2f}".format(price)))
"""
Explanation: <h2>02.Small Shop</h2>
Problem:
The next task is to work with nested if statements.
Here is the condition: an enterprising Bulgarian opens neighborhood shops in several cities and sells at different prices:
| city / product | coffee | water | beer | sweets | peanuts |
|---|---|---|---|---|---|
| Sofia | 0.50 | 0.80 | 1.20 | 1.45 | 1.60 |
| Plovdiv | 0.40 | 0.70 | 1.15 | 1.30 | 1.50 |
| Varna | 0.45 | 0.70 | 1.10 | 1.35 | 1.55 |
Write a program that reads from the console a city (string), a product (string) and a quantity (decimal number), and calculates and prints how much the corresponding quantity of the selected product costs in that city.
The answer should be accurate to the second decimal place.
End of explanation
"""
x1 = float(input())
y1 = float(input())
x2 = float(input())
y2 = float(input())
x = float(input())
y = float(input())
if x1 <= x <= x2 and y1 <= y <= y2:
print("Inside")
else:
print("Outside")
"""
Explanation: <h2>03.Point in Rectangle</h2>
Problem:
Write a program that checks whether the point {x, y} is located in a rectangle {x1, y1} - {x2, y2}.
The input data is read from the console and consists of 6 rows: the decimal numbers x1, y1, x2, y2, x and y (taking ensure that x1 <x2 and y1 <y2). One point is internal to a rectangle if it is located somewhere in
its interior or on one of its sides.
Print "Inside" or "Outside".
End of explanation
"""
product = input()
if product == "banana":
print("fruit")
elif product == "apple":
print("fruit")
elif product == "kiwi":
print("fruit")
elif product == "cherry":
print("fruit")
elif product == "lemon":
print("fruit")
elif product == "grapes":
print("fruit")
elif product == "tomato":
print("vegetable")
elif product == "cucumber":
print("vegetable")
elif product == "pepper":
print("vegetable")
elif product == "carrot":
print("vegetable")
else:
print("unknown")
"""
Explanation: <h2>04.Fruit or Vegetable</h2>
Problem:
Write a program that reads a product name and checks whether it is a fruit or a vegetable.
-> Fruit are: banana, apple, kiwi, cherry, lemon and grapes
-> Vegetables are: tomato, cucumber, pepper and carrot
-> All others are unknown ;
Display "fruit", "vegetable" or "unknown" according to the introduced product.
End of explanation
"""
num = int(input())
if not (num == 0 or 100 <= num <= 200):
    print("invalid")
"""
Explanation: <h2>05.Invalid Number</h2>
Problem:
A given number is valid if it is in the range [100 ... 200] or is 0.
Write a program that reads a whole number and prints "invalid" if the entered number is not valid.
End of explanation
"""
x1 = float(input())
y1 = float(input())
x2 = float(input())
y2 = float(input())
x = float(input())
y = float(input())
on_horizontal_side = (y == y1 or y == y2) and (x1 <= x <= x2)
on_vertical_side = (x == x1 or x == x2) and (y1 <= y <= y2)
if on_horizontal_side or on_vertical_side:
print("Border")
else:
print("Inside / Outside")
"""
Explanation: <h2>06.Point on Rectangle Border</h2>
Problem:
Write a program that checks whether the point {x, y} is located on any side of a rectangle {x1, y1} - {x2, y2}.
The input data is read from the console and consists of 6 rows: the decimal numbers x1, y1, x2, y2, x and y (ensuring that x1 <x2 and y1 <y2).
Print "Border" (the point lies on one side)or "Inside / Outside" (otherwise).
End of explanation
"""
fruit = input()
day = input()
quantity = float(input())

weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
price = 0

if day in weekdays:
    if fruit == "banana":
        price = 2.50
    elif fruit == "apple":
        price = 1.20
    elif fruit == "orange":
        price = 0.85
    elif fruit == "grapefruit":
        price = 1.45
    elif fruit == "kiwi":
        price = 2.70
    elif fruit == "pineapple":
        price = 5.50
    elif fruit == "grapes":
        price = 3.85
elif day == "Saturday" or day == "Sunday":
    if fruit == "banana":
        price = 2.70
    elif fruit == "apple":
        price = 1.25
    elif fruit == "orange":
        price = 0.90
    elif fruit == "grapefruit":
        price = 1.60
    elif fruit == "kiwi":
        price = 3.00
    elif fruit == "pineapple":
        price = 5.60
    elif fruit == "grapes":
        price = 4.20

if price > 0:
    print(round(price * quantity, 2))
else:
    print("error")
"""
Explanation: <h2>07.Fruit Shop</h2>
Problem:
During working days, the fruit shop sells at the following prices:
| fruit | banana | apple | orange | grapefruit | kiwi | pineapple | grapes |
|---|---|---|---|---|---|---|---|
| price | 2.50 | 1.20 | 0.85 | 1.45 | 2.70 | 5.50 | 3.85 |
On Saturday and Sunday the shop works at higher prices:
| fruit | banana | apple | orange | grapefruit | kiwi | pineapple | grapes |
|---|---|---|---|---|---|---|---|
| price | 2.70 | 1.25 | 0.90 | 1.60 | 3.00 | 5.60 | 4.20 |
Write a program that reads from the console a fruit (banana / apple / orange / grapefruit / kiwi / pineapple / grapes), a day of the week (Monday / Tuesday / Wednesday / Thursday / Friday / Saturday / Sunday) and a quantity (decimal number), and calculates the price according to the prices in the tables above.
Print the result rounded to 2 digits after the decimal point.
For an invalid day of the week or an invalid fruit name, print "error".
End of explanation
"""
city = input()
sales = float(input())

commission_percent = -1  # stays -1 for invalid input

if sales >= 0:
    if city == "Sofia":
        if sales <= 500:
            commission_percent = 0.05
        elif sales <= 1000:
            commission_percent = 0.07
        elif sales <= 10000:
            commission_percent = 0.08
        else:
            commission_percent = 0.12
    elif city == "Varna":
        if sales <= 500:
            commission_percent = 0.045
        elif sales <= 1000:
            commission_percent = 0.075
        elif sales <= 10000:
            commission_percent = 0.10
        else:
            commission_percent = 0.13
    elif city == "Plovdiv":
        if sales <= 500:
            commission_percent = 0.055
        elif sales <= 1000:
            commission_percent = 0.08
        elif sales <= 10000:
            commission_percent = 0.12
        else:
            commission_percent = 0.145

if commission_percent < 0:
    print("error")
else:
    print("{0:.2f}".format(sales * commission_percent))
"""
Explanation: <h2>08.Trade Commissions</h2>
Problem:
The company gives the following commissions to its salesmen, according to the town in which they work and the sales volume s:
| City | 0 ≤ s ≤ 500 | 500 < s ≤ 1,000 | 1,000 < s ≤ 10,000 | s > 10 000 |
|---|---|---|---|---|
| Sofia | 5% | 7% | 8% | 12% |
| Varna | 4.5% | 7.5% | 10% | 13% |
| Plovdiv | 5.5% | 8% | 12% | 14.5% |
Write a console program that reads a city name and a sales volume (decimal number) and calculates and prints the merchant's commission according to the table above.
The result should be rounded to 2 digits after the decimal point.
For an invalid city or sales volume (negative number), print "error".
End of explanation
"""
num = int(input())
if num == 1:
print("Monday")
elif num == 2:
print("Tuesday")
elif num == 3:
print("Wednesday")
elif num == 4:
print("Thursday")
elif num == 5:
print("Friday")
elif num == 6:
print("Saturday")
elif num == 7:
print("Sunday")
else:
print("error")
"""
Explanation: <h2>09.Day of Week</h2>
Problem:
Print the name of the day of the week for a given day number in [1 ... 7], or
print "error" for an invalid number.
End of explanation
"""
animal = input()
if animal == "dog":
print("mammal")
elif animal == "crocodile" or animal == "snake" or animal == "tortoise":
print("reptile")
else:
print("unknown")
"""
Explanation: <h2>10.Animal Type</h2>
Problem:
Write a program that prints the animal's class according to a name entered by the user:
-> dog -> mammal
-> crocodile, tortoise, snake -> reptile
-> others -> unknown
End of explanation
"""
projection_type = input()
rows = int(input())
columns = int(input())

ticket_price = 0
if projection_type == "Premiere":
    ticket_price = 12
elif projection_type == "Normal":
    ticket_price = 7.5
elif projection_type == "Discount":
    ticket_price = 5

total_price = ticket_price * rows * columns
print("{0:.2f} leva".format(total_price))
"""
Explanation: <h2>11.Cinema</h2>
Problem:
In one cinema hall, the chairs are arranged in a rectangular shape: r rows and c columns.
There are three types of projections, with tickets at different prices:
-> Premiere - Premiere, at a price of 12.00 BGN.
-> Normal - standard screening, at a price of 7.50 leva.
-> Discount - screening for children, students and students at a reduced price of 5.00 BGN.
Write a program that reads a projection type, a number of rows and a number of columns (integer numbers) and calculates the total ticket revenue for a full hall.
Print the result in the format shown in the examples below, with 2 decimal places.
End of explanation
"""
import math

year_type = input()
holidays = int(input())
trips = int(input())  # times Vladi travels to his hometown

# Saturdays in Sofia: 3/4 of the weekends he does not travel,
# plus 2/3 of the holidays, plus one Sunday game per trip home
games = (48 - trips) * 3 / 4 + holidays * 2 / 3 + trips
if year_type == "leap":
    games *= 1.15

print(math.floor(games))
"""
Explanation: <h2>12.Volleyball</h2>
Problem:
Vladi is a student who lives in Sofia and travels from time to time to his hometown.
He is very keen on volleyball, but he is busy on business days and plays volleyball only on weekends and holidays.
Vladi plays in Sofia every Saturday when he is not at work and does not travel to his hometown, as well as in 2/3 of festive days.
He travels to his hometown h times a year, where he plays volleyball with his old friends on Sunday.
Vladi is not at work 3/4 of the weekends he's in Sofia.
Separately, during leap years Vladi plays 15% more volleyball than normal.
We assume that the year has exactly 48 weekends suitable for volleyball.
Write a program that calculates how many times Vladi plays volleyball during the year (rounded down to a whole number).
End of explanation
"""
h = int(input())
x = int(input())
y = int(input())
left_square = (0 < x <= h) and (0 < y < h) # Is the point in the left square.
mid = (h < x < 2 * h) and (0 < y < 4 * h) # Is the point in the middle column.
right_square = (2 * h <= x < 3 * h) and (0 < y < h) # Is the point in the right square.
left_squares_borders = (x == 0 and 0 <= y <= h)or\
(y == h and 0 <= x <= h) # Is the point on the left squares border.
down_border = (y == 0 and 0 <= x <= h * 3)
right_squares_borders = (x == h * 3 and 0 <= y <= h)or\
(y == h and 2 * h <= x <= 3 * h) # Is the point on the right squares border.
mid_borders = (x == 2 * h and h <= y <= 4 * h)or\
(x == h and h <= y <= 4 * h)
top_border = (y == 4 * h and h <= x <= 2 * h)
inside = left_square or mid or right_square
border = left_squares_borders or right_squares_borders or\
down_border or mid_borders or top_border
if inside:
print("inside")
elif border:
print("border")
else:
print("Outside")
"""
Explanation: <h2>13.Point in the Figure</h2>
Problem:
The figure consists of 6 blocks of size h * h, arranged as in the figure on the right.
The lower left corner of the figure is at position {0, 0}.
The upper right corner of the figure is at position {2 * h, 4 * h}.
In the figure, the coordinates are given for h = 2.
Write a program that reads an integer h and the coordinates of
a point {x, y} (integers) and prints whether the point is inside
the figure, outside it, or on any of the sides of its border.
End of explanation
"""
|
stijnvanhoey/course_gis_scripting | _solved/01-python-introduction.ipynb | bsd-3-clause | print("Hello INBO_course!") # python 3(!)
"""
Explanation: <p><font size="6"><b>Python the essentials: A minimal introduction</b></font></p>
Introduction to GIS scripting
May, 2017
© 2017, Stijn Van Hoey (stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
First steps
the obligatory...
End of explanation
"""
4*5
3**2
(3 + 4)/2, 3 + 4/2,
21//5, 21%5 # floor division, modulo
"""
Explanation: Python is a calculator
End of explanation
"""
3 > 4, 3 != 4, 3 == 4
"""
Explanation: also logical operators:
End of explanation
"""
my_variable_name = 'DS_course'
my_variable_name
name, age = 'John', 30
print('The age of {} is {:d}'.format(name, age))
"""
Explanation: Variable assignment
End of explanation
"""
import os
"""
Explanation: More information on print format: https://pyformat.info/
<div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
<li>Use relevant variable names, e.g. `name` instead of `n`
<li>Keep variable names lowercase, with underscore for clarity, e.g. `darwin_core` instead of `DarwinCore`
</div>
Loading functionalities
End of explanation
"""
os.listdir()
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>You would <b>load</b> a <b>library</b> (`library("ggplot2")`) instead of <b>importing</b> a package</p>
</div>
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Loading with defined short name (community agreement)
End of explanation
"""
%%file rehears1.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
import rehears1
rehears1.print_it()
%%file rehears2.py
#this writes a file in your directory, check it(!)
"A demo module."
def print_it():
"""Dummy function to print the string it"""
print('it')
def print_custom(my_input):
"""Dummy function to print the string that"""
print(my_input)
from rehears2 import print_it, print_custom
print_custom('DS_course')
"""
Explanation: Loading functions from any file/module/package:
End of explanation
"""
a_float = 5.
type(a_float)
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
Importing **packages** is always the first thing you do in Python, since they offer the functionalities to work with!
</div>
Different options are available:
<span style="color:green">import <i>package-name</i></span> <br>importing all functionalities as such
<span style="color:green">from <i>package-name</i> import <i>specific function</i></span><br>importing a specific function or subset of the package
<span style="color:green">import <i>package-name</i> as <i>short-package-name</i></span><br>Very good way to keep a good insight in where you use what package
<div class="alert alert-danger">
<b>DON'T</b>: `from os import *`. Just don't!
</div>
Datatypes
Numerical
floats
End of explanation
"""
an_integer = 4
type(an_integer)
"""
Explanation: integers
End of explanation
"""
a_boolean = True
a_boolean
type(a_boolean)
3 > 4 # results in boolean
"""
Explanation: booleans
End of explanation
"""
print(False) # test yourself with FALSE
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>Booleans are written as <b>False</b> or <b>True</b>, NOT as <b>FALSE/TRUE</b></p>
</div>
End of explanation
"""
a_string = "abcde"
a_string
"""
Explanation: Containers
Strings
End of explanation
"""
a_string.capitalize(), a_string.upper(), a_string.endswith('f') # Check the other available methods for a_string yourself!
a_string.upper().replace('B', 'A')
a_string + a_string
a_string * 5
"""
Explanation: A string is a collection of characters...
End of explanation
"""
a_list = [1, 'a', 3, 4]
a_list
another_list = [1, 'a', 8.2, 4, ['z', 'y']]
another_list
a_list.append(8.2)
a_list
a_list.reverse()
a_list
"""
Explanation: Lists
A list can contain mixed data types (character, float, int, other lists,...)
End of explanation
"""
a_list + ['b', 5]
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
The list is updated <b>in-place</b>; a_list.reverse() does not return anything, it updates the list
</div>
End of explanation
"""
[el*2 for el in a_list] # list comprehensions...a short for-loop
"""
Explanation: ADVANCED users area: list comprehensions
End of explanation
"""
new_list = []
for element in a_list:
new_list.append(element*2)
print(new_list)
"""
Explanation: list comprehensions are basically a short-handed version of a for-loop inside a list. Hence, the previous action is similar to:
End of explanation
"""
[el for el in dir(list) if not el[0] == '_']
"""
Explanation: Another example checks the methods available for the list data type:
End of explanation
"""
[el for el in dir(list) if not el.startswith('_')]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Rewrite the previous list comprehension by using a builtin string method to test if the element starts with an underscore</li>
</ul>
</div>
End of explanation
"""
sentence = "the quick brown fox jumps over the lazy dog"
#split in words and get word lengths
[len(word) for word in sentence.split()]
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Given the sentence `the quick brown fox jumps over the lazy dog`, split the sentence in words and put all the word-lengths in a list.</li>
</ul>
</div>
End of explanation
"""
a_dict = {'a': 1, 'b': 2}
a_dict['c'] = 3
a_dict['a'] = 5
a_dict
a_dict.keys(), a_dict.values(), a_dict.items()
an_empty_dic = dict() # or just {}
an_empty_dic
example_dict = {"timeseries": [2, 5, 3],
"parameter": 21.3,
"scenario": "a"}
example_dict
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>R also has lists as data type, e.g. `list(c(2, 5, 3), 21.3, "a")`</p>
</div>
Dictionary
A dictionary is basically an efficient table that maps keys to values. Before Python 3.7 it was an unordered container; since 3.7, insertion order is preserved.
It can be used to conveniently store and retrieve values associated with a name.
End of explanation
"""
a_tuple = (1, 2, 4)
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>R also has a dictionary-like data type, e.g.</p>
</div>
```R
example_dict <- list(c(2,5,3),21.3,"a")
names(example_dict) <- c("timeseries", "parameter", "scenario")
example_dict
$timeseries
[1] 2 5 3
$parameter
[1] 21.3
$scenario
[1] "a"
```
Tuple
End of explanation
"""
collect = a_list, a_dict
type(collect)
serie_of_numbers = 3, 4, 5
# Using tuples on the left-hand side of assignment allows you to extract fields
a, b, c = serie_of_numbers
print(c, b, a)
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
The type of brackets - (), [], {} - to use depends from the data type you want to create!
<li> [] -> list
<li> () -> tuple
<li> {} -> dictionary
</div>
End of explanation
"""
grades = [88, 72, 93, 94]
from IPython.display import SVG, display
display(SVG("../img/slicing-indexing.svg"))
grades[2]
"""
Explanation: Accessing container values
End of explanation
"""
from IPython.display import SVG, display
display(SVG("../img/slicing-slicing.svg"))
grades[1:3]
a_list = [1, 'a', 8.2, 4]
a_list[0], a_list[2]
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> Python starts counting from <b>0</b> !
</ul>
</div>
End of explanation
"""
a_string = "abcde"
a_string
a_string[2:4]
"""
Explanation: Select from...till
End of explanation
"""
a_list[-2]
"""
Explanation: Select, counting backward:
End of explanation
"""
a_list = [0, 1, 2, 3]
"""
Explanation: <div class="alert alert-warning">
<b>R comparison:</b><br>
<p>The `-` symbol in R indexing has a completely different meaning: exclusion (drop those elements), not counting backward</p>
</div>
```R
test <- c(1, 2, 3, 4, 5, 6)
test[-2]
[1] 1 3 4 5 6
```
End of explanation
"""
a_list[:3]
a_list[::2]
"""
Explanation: From the first element until a given index:
End of explanation
"""
a_dict = {'a': 1, 'b': 2}
a_dict['a']
"""
Explanation: Dictionaries
End of explanation
"""
a_tuple = (1, 2, 4)
a_tuple[1]
"""
Explanation: Tuples
End of explanation
"""
a_list
a_list[2] = 10 # element 2 changed -- mutable
a_list
a_tuple[1] = 10 # cfr. a_string -- immutable
a_string[3] = 'q'
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li> [] for accessing elements
</ul>
</div>
Note that L[start:stop] contains the elements with indices i such that start <= i < stop
(i ranging from start to stop-1). Therefore, L[start:stop] has (stop-start) elements.
Slicing syntax: L[start:stop:stride]
all slicing parameters are optional
Assigning new values to items -> mutable vs immutable
End of explanation
"""
for i in [1, 2, 3, 4]:
print(i)
"""
Explanation: Control flows (optional)
for-loop
End of explanation
"""
for i in a_list: # anything that is a collection/container can be looped
print(i)
"""
Explanation: <div class="alert alert-danger">
**Indentation** is VERY IMPORTANT in Python. Note that the second line in the example above is indented.
</div>
End of explanation
"""
for char in 'Hello DS':
print(char)
for i in a_dict: # items, keys, values
print(i)
for j, key in enumerate(a_dict.keys()):
print(j, key)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Loop through the characters of the string `Hello DS` and print each character separately within the loop</li>
</ul>
</div>
End of explanation
"""
b = 7
while b < 10:
b+=1
print(b)
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
When you need a counter while iterating, just use `enumerate`; you mostly do not need the `i = 0 ... i = i + 1` pattern.
<br>
Check [itertools](http://pymotw.com/2/itertools/) as well...
</div>
while
End of explanation
"""
if 'a' in a_dict:
print('a is in!')
if 3 > 4:
print('This is valid')
testvalue = False # 0, 1, None, False, 4 > 3
if testvalue:
print('valid')
else:
raise Exception("Not valid!")
myvalue = 3
if isinstance(myvalue, str):
print('this is a string')
elif isinstance(myvalue, float):
print('this is a float')
elif isinstance(myvalue, list):
print('this is a list')
else:
print('no idea actually')
"""
Explanation: if statement
End of explanation
"""
len(a_list)
"""
Explanation: Functions
We've been using functions the whole time...
End of explanation
"""
a_list.reverse()
a_list
"""
Explanation: <div class="alert alert-danger">
It is all about calling a **method/function** on an **object**!
</div>
End of explanation
"""
def custom_sum(a, b, verbose=False):
"""custom summation function
Parameters
----------
a : number
first number to sum
b : number
second number to sum
verbose: boolean
require additional information (True) or not (False)
Returns
-------
my_sum : number
sum of the provided two input elements
"""
if verbose:
print('print a lot of information to the user')
my_sum = a + b
return my_sum
"""
Explanation: <div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
Getting an overview of the available methods on the variable (i.e. object):
<img src="../img/tabbutton.jpg"></img>
</div>
Defining a function:
End of explanation
"""
custom_sum(2, 3, verbose=False) # [3], '4'
"""
Explanation: Setup of a function:
definition starts with def
function body is indented
return keyword precedes returned value
<div class="alert alert-danger">
**Indentation** is VERY IMPORTANT in Python. Note that the second line in the example above is indented.
</div>
End of explanation
"""
def f1():
print('this is function 1 speaking...')
def f2():
print('this is function 2 speaking...')
def function_of_functions(inputfunction):
return inputfunction()
function_of_functions(f1)
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Try **SHIFT-TAB** combination to read your own documentation!</li>
</ul>
</div>
<div class="alert alert-info">
<b>REMEMBER</b>:<br><br>
() for calling functions!
</div>
ADVANCED users area:
Functions are objects as well... (!)
End of explanation
"""
add_two = (lambda x: x + 2)
add_two(10)
"""
Explanation: Anonymous functions (lambda)
End of explanation
"""
|
xray/xray | doc/examples/ROMS_ocean_model.ipynb | apache-2.0 | import numpy as np
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
%matplotlib inline
import xarray as xr
"""
Explanation: ROMS Ocean Model Example
The Regional Ocean Modeling System (ROMS) is an open source hydrodynamic model that is used for simulating currents and water properties in coastal and estuarine regions. ROMS is one of a few standard ocean models, and it has an active user community.
ROMS uses a regular C-Grid in the horizontal, similar to other structured grid ocean and atmospheric models, and a stretched vertical coordinate (see the ROMS documentation for more details). Both of these require special treatment when using xarray to analyze ROMS ocean model output. This example notebook shows how to create a lazily evaluated vertical coordinate, and make some basic plots. The xgcm package is required to do analysis that is aware of the horizontal C-Grid.
End of explanation
"""
# load in the file
ds = xr.tutorial.open_dataset('ROMS_example.nc', chunks={'ocean_time': 1})
# This is a way to turn on chunking and lazy evaluation. Opening with mfdataset, or
# setting the chunking in the open_dataset would also achive this.
ds
"""
Explanation: Load a sample ROMS file. This is a subset of a full model available at
http://barataria.tamu.edu/thredds/catalog.html?dataset=txla_hindcast_agg
The subsetting was done using the following command on one of the output files:
#open dataset
ds = xr.open_dataset('/d2/shared/TXLA_ROMS/output_20yr_obc/2001/ocean_his_0015.nc')
# Turn on chunking to activate dask and parallelize read/write.
ds = ds.chunk({'ocean_time': 1})
# Pick out some of the variables that will be included as coordinates
ds = ds.set_coords(['Cs_r', 'Cs_w', 'hc', 'h', 'Vtransform'])
# Select a subset of variables. Salt will be visualized; zeta is used to
# calculate the vertical coordinate.
variables = ['salt', 'zeta']
ds[variables].isel(ocean_time=slice(47, None, 7*24),
xi_rho=slice(300, None)).to_netcdf('ROMS_example.nc', mode='w')
So, the ROMS_example.nc file contains a subset of the grid, one 3D variable, and two time steps.
Load in ROMS dataset as an xarray object
End of explanation
"""
if ds.Vtransform == 1:
Zo_rho = ds.hc * (ds.s_rho - ds.Cs_r) + ds.Cs_r * ds.h
z_rho = Zo_rho + ds.zeta * (1 + Zo_rho/ds.h)
elif ds.Vtransform == 2:
Zo_rho = (ds.hc * ds.s_rho + ds.Cs_r * ds.h) / (ds.hc + ds.h)
z_rho = ds.zeta + (ds.zeta + ds.h) * Zo_rho
ds.coords['z_rho'] = z_rho.transpose() # needing transpose seems to be an xarray bug
ds.salt
"""
Explanation: Add lazily calculated vertical coordinates
Write equations to calculate the vertical coordinate. These will only be evaluated when data is requested. Information about the ROMS vertical coordinate can be found [here](https://www.myroms.org/wiki/Vertical_S-coordinate).
In short, for Vtransform==2 as used in this example,
$Z_0 = (h_c \, S + h \,C) / (h_c + h)$
$z = Z_0 (\zeta + h) + \zeta$
where the variables are defined as in the link above.
End of explanation
"""
ds.salt.isel(xi_rho=50, ocean_time=0).plot()
"""
Explanation: A naive vertical slice
A slice that uses the raw s-coordinate as the vertical dimension is typically not very informative.
End of explanation
"""
section = ds.salt.isel(xi_rho=50, eta_rho=slice(0, 167), ocean_time=0)
section.plot(x='lon_rho', y='z_rho', figsize=(15, 6), clim=(25, 35))
plt.ylim([-100, 1]);
"""
Explanation: We can feed coordinate information to the plot method to give a more informative cross-section that uses the depths. Note that we did not need to slice the depth or longitude information separately; this was done automatically when the variable was sliced.
End of explanation
"""
ds.salt.isel(s_rho=-1, ocean_time=0).plot(x='lon_rho', y='lat_rho')
"""
Explanation: A plan view
Now make a naive plan view, without any projection information, just using lon/lat as x/y. This looks OK, but will appear compressed because lon and lat do not have an aspect constrained by the projection.
End of explanation
"""
proj = ccrs.LambertConformal(central_longitude=-92, central_latitude=29)
fig = plt.figure(figsize=(15, 5))
ax = plt.axes(projection=proj)
ds.salt.isel(s_rho=-1, ocean_time=0).plot(x='lon_rho', y='lat_rho',
transform=ccrs.PlateCarree())
coast_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m',
edgecolor='k', facecolor='0.8')
ax.add_feature(coast_10m)
"""
Explanation: And let's use a projection to make it nicer, and add a coast.
End of explanation
"""
# ES-DOC/esdoc-jupyterhub: notebooks/miroc/cmip6/models/sandbox-3/ocean.ipynb (gpl-3.0)
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
<!--NAVIGATION-->
< In Depth: Naive Bayes Classification | Contents | In-Depth: Support Vector Machines >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.06-Linear-Regression.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
In Depth: Linear Regression
Just as naive Bayes (discussed earlier in In Depth: Naive Bayes Classification) is a good starting point for classification tasks, linear regression models are a good starting point for regression tasks.
Such models are popular because they can be fit very quickly, and are very interpretable.
You are probably familiar with the simplest form of a linear regression model (i.e., fitting a straight line to data) but such models can be extended to model more complicated data behavior.
In this section we will start with a quick intuitive walk-through of the mathematics behind this well-known problem, before moving on to see how linear models can be generalized to account for more complicated patterns in data.
We begin with the standard imports:
End of explanation
"""
rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = 2 * x - 5 + rng.randn(50)
plt.scatter(x, y);
"""
Explanation: Simple Linear Regression
We will start with the most familiar linear regression, a straight-line fit to data.
A straight-line fit is a model of the form
$$
y = ax + b
$$
where $a$ is commonly known as the slope, and $b$ is commonly known as the intercept.
Consider the following data, which is scattered about a line with a slope of 2 and an intercept of -5:
End of explanation
"""
from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model.fit(x[:, np.newaxis], y)
xfit = np.linspace(0, 10, 1000)
yfit = model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit);
"""
Explanation: We can use Scikit-Learn's LinearRegression estimator to fit this data and construct the best-fit line:
End of explanation
"""
print("Model slope: ", model.coef_[0])
print("Model intercept:", model.intercept_)
"""
Explanation: The slope and intercept of the data are contained in the model's fit parameters, which in Scikit-Learn are always marked by a trailing underscore.
Here the relevant parameters are coef_ and intercept_:
End of explanation
"""
rng = np.random.RandomState(1)
X = 10 * rng.rand(100, 3)
y = 0.5 + np.dot(X, [1.5, -2., 1.])
model.fit(X, y)
print(model.intercept_)
print(model.coef_)
"""
Explanation: We see that the results are very close to the inputs, as we might hope.
The LinearRegression estimator is much more capable than this, however—in addition to simple straight-line fits, it can also handle multidimensional linear models of the form
$$
y = a_0 + a_1 x_1 + a_2 x_2 + \cdots
$$
where there are multiple $x$ values.
Geometrically, this is akin to fitting a plane to points in three dimensions, or fitting a hyper-plane to points in higher dimensions.
The multidimensional nature of such regressions makes them more difficult to visualize, but we can see one of these fits in action by building some example data, using NumPy's matrix multiplication operator:
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
x = np.array([2, 3, 4])
poly = PolynomialFeatures(3, include_bias=False)
poly.fit_transform(x[:, None])
"""
Explanation: Here the $y$ data is constructed from three random $x$ values, and the linear regression recovers the coefficients used to construct the data.
In this way, we can use the single LinearRegression estimator to fit lines, planes, or hyperplanes to our data.
It still appears that this approach would be limited to strictly linear relationships between variables, but it turns out we can relax this as well.
Basis Function Regression
One trick you can use to adapt linear regression to nonlinear relationships between variables is to transform the data according to basis functions.
We have seen one version of this before, in the PolynomialRegression pipeline used in Hyperparameters and Model Validation and Feature Engineering.
The idea is to take our multidimensional linear model:
$$
y = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots
$$
and build the $x_1, x_2, x_3,$ and so on, from our single-dimensional input $x$.
That is, we let $x_n = f_n(x)$, where $f_n()$ is some function that transforms our data.
For example, if $f_n(x) = x^n$, our model becomes a polynomial regression:
$$
y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots
$$
Notice that this is still a linear model—the linearity refers to the fact that the coefficients $a_n$ never multiply or divide each other.
What we have effectively done is taken our one-dimensional $x$ values and projected them into a higher dimension, so that a linear fit can fit more complicated relationships between $x$ and $y$.
Polynomial basis functions
This polynomial projection is useful enough that it is built into Scikit-Learn, using the PolynomialFeatures transformer:
End of explanation
"""
from sklearn.pipeline import make_pipeline
poly_model = make_pipeline(PolynomialFeatures(7),
LinearRegression())
"""
Explanation: We see here that the transformer has converted our one-dimensional array into a three-dimensional array by taking the exponent of each value.
This new, higher-dimensional data representation can then be plugged into a linear regression.
As we saw in Feature Engineering, the cleanest way to accomplish this is to use a pipeline.
Let's make a 7th-degree polynomial model in this way:
End of explanation
"""
rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = np.sin(x) + 0.1 * rng.randn(50)
poly_model.fit(x[:, np.newaxis], y)
yfit = poly_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit);
"""
Explanation: With this transform in place, we can use the linear model to fit much more complicated relationships between $x$ and $y$.
For example, here is a sine wave with noise:
End of explanation
"""
from sklearn.base import BaseEstimator, TransformerMixin
class GaussianFeatures(BaseEstimator, TransformerMixin):
"""Uniformly spaced Gaussian features for one-dimensional input"""
def __init__(self, N, width_factor=2.0):
self.N = N
self.width_factor = width_factor
@staticmethod
def _gauss_basis(x, y, width, axis=None):
arg = (x - y) / width
return np.exp(-0.5 * np.sum(arg ** 2, axis))
def fit(self, X, y=None):
# create N centers spread along the data range
self.centers_ = np.linspace(X.min(), X.max(), self.N)
self.width_ = self.width_factor * (self.centers_[1] - self.centers_[0])
return self
def transform(self, X):
return self._gauss_basis(X[:, :, np.newaxis], self.centers_,
self.width_, axis=1)
gauss_model = make_pipeline(GaussianFeatures(20),
LinearRegression())
gauss_model.fit(x[:, np.newaxis], y)
yfit = gauss_model.predict(xfit[:, np.newaxis])
plt.scatter(x, y)
plt.plot(xfit, yfit)
plt.xlim(0, 10);
"""
Explanation: Our linear model, through the use of 7th-order polynomial basis functions, can provide an excellent fit to this non-linear data!
Gaussian basis functions
Of course, other basis functions are possible.
For example, one useful pattern is to fit a model that is not a sum of polynomial bases, but a sum of Gaussian bases.
The result might look something like the following figure:
figure source in Appendix
The shaded regions in the plot are the scaled basis functions, and when added together they reproduce the smooth curve through the data.
These Gaussian basis functions are not built into Scikit-Learn, but we can write a custom transformer that will create them, as shown here and illustrated in the following figure (Scikit-Learn transformers are implemented as Python classes; reading Scikit-Learn's source is a good way to see how they can be created):
End of explanation
"""
model = make_pipeline(GaussianFeatures(30),
LinearRegression())
model.fit(x[:, np.newaxis], y)
plt.scatter(x, y)
plt.plot(xfit, model.predict(xfit[:, np.newaxis]))
plt.xlim(0, 10)
plt.ylim(-1.5, 1.5);
"""
Explanation: We put this example here just to make clear that there is nothing magic about polynomial basis functions: if you have some sort of intuition into the generating process of your data that makes you think one basis or another might be appropriate, you can use them as well.
Regularization
The introduction of basis functions into our linear regression makes the model much more flexible, but it also can very quickly lead to over-fitting (refer back to Hyperparameters and Model Validation for a discussion of this).
For example, if we choose too many Gaussian basis functions, we end up with results that don't look so good:
End of explanation
"""
def basis_plot(model, title=None):
fig, ax = plt.subplots(2, sharex=True)
model.fit(x[:, np.newaxis], y)
ax[0].scatter(x, y)
ax[0].plot(xfit, model.predict(xfit[:, np.newaxis]))
ax[0].set(xlabel='x', ylabel='y', ylim=(-1.5, 1.5))
if title:
ax[0].set_title(title)
ax[1].plot(model.steps[0][1].centers_,
model.steps[1][1].coef_)
ax[1].set(xlabel='basis location',
ylabel='coefficient',
xlim=(0, 10))
model = make_pipeline(GaussianFeatures(30), LinearRegression())
basis_plot(model)
"""
Explanation: With the data projected to the 30-dimensional basis, the model has far too much flexibility and goes to extreme values between locations where it is constrained by data.
We can see the reason for this if we plot the coefficients of the Gaussian bases with respect to their locations:
End of explanation
"""
from sklearn.linear_model import Ridge
model = make_pipeline(GaussianFeatures(30), Ridge(alpha=0.1))
basis_plot(model, title='Ridge Regression')
"""
Explanation: The lower panel of this figure shows the amplitude of the basis function at each location.
This is typical over-fitting behavior when basis functions overlap: the coefficients of adjacent basis functions blow up and cancel each other out.
We know that such behavior is problematic, and it would be nice if we could limit such spikes explicitly in the model by penalizing large values of the model parameters.
Such a penalty is known as regularization, and comes in several forms.
Ridge regression ($L_2$ Regularization)
Perhaps the most common form of regularization is known as ridge regression or $L_2$ regularization, sometimes also called Tikhonov regularization.
This proceeds by penalizing the sum of squares (2-norms) of the model coefficients; in this case, the penalty on the model fit would be
$$
P = \alpha\sum_{n=1}^N \theta_n^2
$$
where $\alpha$ is a free parameter that controls the strength of the penalty.
This type of penalized model is built into Scikit-Learn with the Ridge estimator:
End of explanation
"""
from sklearn.linear_model import Lasso
model = make_pipeline(GaussianFeatures(30), Lasso(alpha=0.001))
basis_plot(model, title='Lasso Regression')
"""
Explanation: The $\alpha$ parameter is essentially a knob controlling the complexity of the resulting model.
In the limit $\alpha \to 0$, we recover the standard linear regression result; in the limit $\alpha \to \infty$, all model responses will be suppressed.
One advantage of ridge regression in particular is that it can be computed very efficiently—at hardly more computational cost than the original linear regression model.
Lasso regression ($L_1$ regularization)
Another very common type of regularization is known as lasso, and involves penalizing the sum of absolute values (1-norms) of regression coefficients:
$$
P = \alpha\sum_{n=1}^N |\theta_n|
$$
Though this is conceptually very similar to ridge regression, the results can differ surprisingly: for example, for geometric reasons lasso regression tends to favor sparse models where possible; that is, it preferentially sets model coefficients to exactly zero.
We can see this behavior in duplicating the ridge regression figure, but using L1-normalized coefficients:
End of explanation
"""
!sudo apt-get update
!apt-get -y install curl
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
# !wget -o FremontBridge.csv "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
import pandas as pd
counts = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)
weather = pd.read_csv('data/BicycleWeather.csv', index_col='DATE', parse_dates=True)
"""
Explanation: With the lasso regression penalty, the majority of the coefficients are exactly zero, with the functional behavior being modeled by a small subset of the available basis functions.
As with ridge regularization, the $\alpha$ parameter tunes the strength of the penalty, and should be determined via, for example, cross-validation (refer back to Hyperparameters and Model Validation for a discussion of this).
Example: Predicting Bicycle Traffic
As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.
We have seen this data already in Working With Time Series.
In this section, we will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor.
Fortunately, the NOAA makes available their daily weather station data (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.
We will perform a simple linear regression to relate weather and other information to bicycle counts, in order to estimate how a change in any one of these parameters affects the number of riders on a given day.
In particular, this is an example of how the tools of Scikit-Learn can be used in a statistical modeling framework, in which the parameters of the model are assumed to have interpretable meaning.
As discussed previously, this is not a standard approach within machine learning, but such interpretation is possible for some models.
Let's start by loading the two datasets, indexing by date:
End of explanation
"""
daily = counts.resample('d').sum()
daily['Total'] = daily.sum(axis=1)
daily = daily[['Total']] # remove other columns
"""
Explanation: Next we will compute the total daily bicycle traffic, and put this in its own dataframe:
End of explanation
"""
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i in range(7):
daily[days[i]] = (daily.index.dayofweek == i).astype(float)
"""
Explanation: We saw previously that the patterns of use generally vary from day to day; let's account for this in our data by adding binary columns that indicate the day of the week:
End of explanation
"""
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016')
daily = daily.join(pd.Series(1, index=holidays, name='holiday'))
daily['holiday'].fillna(0, inplace=True)
"""
Explanation: Similarly, we might expect riders to behave differently on holidays; let's add an indicator of this as well:
End of explanation
"""
from datetime import datetime
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
days = (date - datetime(2000, 12, 21)).days
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
daily['daylight_hrs'] = list(map(hours_of_daylight, daily.index))
daily[['daylight_hrs']].plot()
plt.ylim(8, 17)
"""
Explanation: We also might suspect that the hours of daylight would affect how many people ride; let's use the standard astronomical calculation to add this information:
End of explanation
"""
# temperatures are in 1/10 deg C; convert to C
weather['TMIN'] /= 10
weather['TMAX'] /= 10
weather['Temp (C)'] = 0.5 * (weather['TMIN'] + weather['TMAX'])
# precip is in 1/10 mm; convert to inches
weather['PRCP'] /= 254
weather['dry day'] = (weather['PRCP'] == 0).astype(int)
daily = daily.join(weather[['PRCP', 'Temp (C)', 'dry day']],rsuffix='0')
"""
Explanation: We can also add the average temperature and total precipitation to the data.
In addition to the inches of precipitation, let's add a flag that indicates whether a day is dry (has zero precipitation):
End of explanation
"""
daily['annual'] = (daily.index - daily.index[0]).days / 365.
"""
Explanation: Finally, let's add a counter that increases from day 1, and measures how many years have passed.
This will let us measure any observed annual increase or decrease in daily crossings:
End of explanation
"""
daily.head()
"""
Explanation: Now our data is in order, and we can take a look at it:
End of explanation
"""
# Drop any rows with null values
daily.dropna(axis=0, how='any', inplace=True)
column_names = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun', 'holiday',
'daylight_hrs', 'PRCP', 'dry day', 'Temp (C)', 'annual']
X = daily[column_names]
y = daily['Total']
model = LinearRegression(fit_intercept=False)
model.fit(X, y)
daily['predicted'] = model.predict(X)
"""
Explanation: With this in place, we can choose the columns to use, and fit a linear regression model to our data.
We will set fit_intercept = False, because the daily flags essentially operate as their own day-specific intercepts:
End of explanation
"""
daily[['Total', 'predicted']].plot(alpha=0.5);
"""
Explanation: Finally, we can compare the total and predicted bicycle traffic visually:
End of explanation
"""
params = pd.Series(model.coef_, index=X.columns)
params
"""
Explanation: It is evident that we have missed some key features, especially during the summer time.
Either our features are not complete (i.e., people decide whether to ride to work based on more than just these) or there are some nonlinear relationships that we have failed to take into account (e.g., perhaps people ride less at both high and low temperatures).
Nevertheless, our rough approximation is enough to give us some insights, and we can take a look at the coefficients of the linear model to estimate how much each feature contributes to the daily bicycle count:
End of explanation
"""
from sklearn.utils import resample
np.random.seed(1)
err = np.std([model.fit(*resample(X, y)).coef_
for i in range(1000)], 0)
"""
Explanation: These numbers are difficult to interpret without some measure of their uncertainty.
We can compute these uncertainties quickly using bootstrap resamplings of the data:
End of explanation
"""
print(pd.DataFrame({'effect': params.round(0),
'error': err.round(0)}))
"""
Explanation: With these errors estimated, let's again look at the results:
End of explanation
"""
|
shernshiou/CarND | Term1/02-CarND-Traffic-Sign-Classifier-Project/Traffic_Sign_Classifier1.ipynb | mit | # Load pickled data
import pickle
import csv
import cv2
import numpy as np
import math
import matplotlib.pyplot as plt
signnames = []
with open("signnames.csv", 'r') as f:
next(f)
reader = csv.reader(f)
signnames = list(reader)
n_classes = len(signnames)
training_file = "./train.p"
testing_file = "./test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
"""
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in the stages required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
"""
from sklearn import cross_validation  # deprecated/removed in modern scikit-learn; use sklearn.model_selection
X_train, X_test = [], []
y_train, y_test = [], test['labels']
for i, img in enumerate(train['features']):
img = cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC)
X_train.append(img)
y_train.append(train['labels'][i])
# Adaptive Histogram (CLAHE)
imgLab = cv2.cvtColor(img, cv2.COLOR_RGB2Lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
l, a, b = cv2.split(imgLab)
l = clahe.apply(l)
imgLab = cv2.merge((l, a, b))
imgLab = cv2.cvtColor(imgLab, cv2.COLOR_Lab2RGB)
X_train.append(imgLab)
y_train.append(train['labels'][i])
# Rotate -15
M = cv2.getRotationMatrix2D((24, 24), -15.0, 1)
imgL = cv2.warpAffine(img, M, (48, 48))
X_train.append(imgL)
y_train.append(train['labels'][i])
# Rotate 15
M = cv2.getRotationMatrix2D((24, 24), 15.0, 1)
imgR = cv2.warpAffine(img, M, (48, 48))
X_train.append(imgR)
y_train.append(train['labels'][i])
for img in test['features']:
X_test.append(cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC))
X_train, X_validation, y_train, y_validation = cross_validation.train_test_split(X_train, y_train, test_size=0.2, random_state=7)
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Preprocess Data
End of explanation
"""
n_train = len(X_train)
n_test = len(X_test)
image_shape = X_train[0].shape
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
print("Number of X_train = ", len(X_train))
print("Number of X_validation = ", len(X_validation))
print("Number of y_train = ", len(y_train))
print("Number of y_validation = ", len(y_validation))
"""
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below.
End of explanation
"""
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index], signnames[y_train[index]][1])
"""
Explanation: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
End of explanation
"""
import tensorflow as tf
from tensorflow.contrib.layers import flatten
EPOCHS = 10
BATCH_SIZE = 128
def ConvNet(x):
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 48x48x3. Output = 42x42x100.
c1_W = tf.Variable(tf.truncated_normal([7, 7, 3, 100], mean=mu, stddev=sigma))
c1_b = tf.Variable(tf.zeros(100))
c1 = tf.nn.conv2d(x, c1_W, strides=[1, 1, 1, 1], padding='VALID')
c1 = tf.nn.bias_add(c1, c1_b)
c1 = tf.nn.relu(c1)
# Layer 2: Max Pooling. Input = 42x42x100. Output = 21x21x100.
s2 = tf.nn.max_pool(c1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Layer 3: Convolutional. Input = 21x21x100. Output = 18x18x150.
c3_W = tf.Variable(tf.truncated_normal([4, 4, 100, 150], mean=mu, stddev=sigma))
c3_b = tf.Variable(tf.zeros(150))
c3 = tf.nn.conv2d(s2, c3_W, strides=[1, 1, 1, 1], padding='VALID')
c3 = tf.nn.bias_add(c3, c3_b)
c3 = tf.nn.relu(c3)
# Layer 4: Max Pooling. Input = 18x18x150. Output = 9x9x150
s4 = tf.nn.max_pool(c3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Layer 5: Convolutional. Input = 9x9x150. Output = 6x6x250.
c5_W = tf.Variable(tf.truncated_normal([4, 4, 150, 250], mean=mu, stddev=sigma))
c5_b = tf.Variable(tf.zeros(250))
c5 = tf.nn.conv2d(s4, c5_W, strides=[1, 1, 1, 1], padding='VALID')
c5 = tf.nn.bias_add(c5, c5_b)
c5 = tf.nn.relu(c5)
# Layer 6: Max Pooling. Input = 6x6x250. Output = 3x3x250.
s6 = tf.nn.max_pool(c5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Flatten. Input = 3x3x250. Output = 2250
s6 = flatten(s6)
# Layer 7: Fully Connected. Input = 2250. Output = 300.
fc7_W = tf.Variable(tf.truncated_normal([2250, 300], mean=mu, stddev=sigma))
fc7_b = tf.Variable(tf.zeros(300))
fc7 = tf.add(tf.matmul(s6, fc7_W), fc7_b)
fc7 = tf.nn.relu(fc7)
# Layer 8: Fully Connected. Input = 300. Output = 43.
fc8_W = tf.Variable(tf.truncated_normal([300, 43], mean=mu, stddev=sigma))
fc8_b = tf.Variable(tf.zeros(43))
fc8 = tf.add(tf.matmul(fc7, fc8_W), fc8_b)
return fc8
"""
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem:
Neural network architecture
Play around preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 48, 48, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
"""
Explanation: Features and Labels
End of explanation
"""
rate = 0.001
logits = ConvNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)  # positional args work only in old TF; newer versions require labels=..., logits=...
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
try:
saver
except NameError:
saver = tf.train.Saver()
saver.save(sess, 'convnet')
print("Model saved")
"""
Explanation: Model Training
End of explanation
"""
with tf.Session() as sess:
loader = tf.train.import_meta_graph("convnet.meta")
loader.restore(sess, tf.train.latest_checkpoint('./'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Model Evaluation
End of explanation
"""
from PIL import Image
# Visualizations will be shown in the notebook.
%matplotlib inline
new_images = []
new_labels = np.array([4, 17, 26, 28, 14])
fig = plt.figure()
for i in range(1, 6):
subplot = fig.add_subplot(2,3,i)
img = cv2.imread("./dataset/{}.png".format(i))
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
img = cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC)
subplot.set_title(signnames[new_labels[i-1]][1],fontsize=8)
subplot.imshow(img)
new_images.append(img)
"""
Explanation: Question 1
Describe how you preprocessed the data. Why did you choose that technique?
Answer:
I did not preprocess the data.
Question 2
Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?
Answer:
I generated additional data using 3 methods:
1. Adaptive histogram equalization (CLAHE)
2. Rotation by -15 degrees
3. Rotation by 15 degrees
Additionally, I resized the images from 32x32 to 48x48.
I took these steps following the papers Multi-Column Deep Neural Network for Traffic Sign Classification (for adaptive histogram equalization) and Traffic Sign Recognition with Multi-Scale Convolutional Networks (for rotation).
Finally, I split the training data into 2 parts, 80:20, for cross-validation.
Question 3
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow
from the classroom.
Answer:
I used the architecture from Multi-Column Deep Neural Network for Traffic Sign Classification by Ciresan et al. The following lines briefly describe the layers:
Input: 48x48x3 images
Layer 1: Convolutional layer with 7x7 kernel which outputs 100 maps of 42x42 neurons
Layer 2: Max-pooling layer with 2x2 kernel and stride 2 which outputs 100 maps of 21x21 neurons
Layer 3: Convolutional layer with 4x4 kernel which outputs 150 maps of 18x18 neurons
Layer 4: Max-pooling layer with 2x2 kernel and stride 2 which outputs 150 maps of 9x9 neurons
Layer 5: Convolutional layer with 4x4 kernel which outputs 250 maps of 6x6 neurons
Layer 6: Max-pooling layer with 2x2 kernel and stride 2 which outputs 250 maps of 3x3 neurons
Layer 7: Fully-connected layer outputting 300 neurons
Layer 8: Fully-connected layer outputting 43 neurons/logits
Question 4
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
Optimizer: Adam-optimizer
Batch size: 128
Epochs: 10
Hyperparameters: mu = 0, sigma = 0.1
Learning rate: 0.001
Question 5
What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.
Answer:
I tried LeNet-5 prior to the current architecture. The evaluation result was about 0.8. After changing the architecture, the result improved to above 0.9. I decided to add additional training images after viewing the originals: some of them are darkened, and some are off-centered. I therefore took cues from the two papers above to generate additional data using adaptive histogram equalization and rotation, and the results improved.
Step 3: Test a Model on New Images
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
"""
with tf.Session() as sess:
loader = tf.train.import_meta_graph("convnet.meta")
loader.restore(sess, tf.train.latest_checkpoint('./'))
new_pics_classes = sess.run(logits, feed_dict={x: new_images})
test_accuracy = evaluate(new_images, new_labels)
print("Test Accuracy = {:.3f}".format(test_accuracy))
top3 = sess.run(tf.nn.top_k(tf.nn.softmax(new_pics_classes), k=3, sorted=True))  # softmax turns logits into probabilities
for i in range(len(top3[0])):
labels = list(map(lambda x: signnames[x][1], top3[1][i]))
print("Image {} predicted labels: {} with probabilities: {}".format(i+1, labels, top3[0][i]))
"""
Explanation: Question 6
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer:
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/tfx_pipelines/pipeline/labs/tfx_pipeline_vertex.ipynb | apache-2.0 | from google.cloud import aiplatform as vertex_ai
"""
Explanation: Continuous training with TFX and Vertex
Learning Objectives
Containerize your TFX code into a pipeline package using Cloud Build.
Use the TFX CLI to compile a TFX pipeline.
Deploy a TFX pipeline version to run on Vertex Pipelines using the Vertex Python SDK.
Setup
End of explanation
"""
!python -c "import tensorflow as tf; print(f'TF version: {tf.__version__}')"
!python -c "import tfx; print(f'TFX version: {tfx.__version__}')"
!python -c "import kfp; print(f'KFP version: {kfp.__version__}')"
print(f"aiplatform: {vertex_ai.__version__}")
"""
Explanation: Validate lab package version installation
End of explanation
"""
%cd pipeline_vertex
!ls -la
"""
Explanation: Note: this lab was built and tested with the following package versions:
TF version: 2.6.2
TFX version: 1.4.0
KFP version: 1.8.1
aiplatform: 1.7.1
Review: example TFX pipeline design pattern for Vertex
The pipeline source code can be found in the pipeline_vertex folder.
End of explanation
"""
# TODO: Set your environment resource settings here for REGION and ARTIFACT_STORE.
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}"
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env REGION={REGION}
%env ARTIFACT_STORE={ARTIFACT_STORE}
%env PROJECT_ID={PROJECT_ID}
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
"""
Explanation: The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
The default values can be overwritten at compile time by providing the updated values in a set of environment variables. You will set custom environment variables later in this lab.
The pipeline.py module contains the TFX DSL defining the workflow implemented by the pipeline.
The preprocessing.py module implements the data preprocessing logic the Transform component.
The model.py module implements the TensorFlow model code and training logic for the Trainer component.
The runner.py module configures and executes KubeflowV2DagRunner. At compile time, the KubeflowV2DagRunner.run() method converts the TFX DSL into a pipeline package in JSON format for execution on Vertex.
The features.py module contains feature definitions common across preprocessing.py and model.py.
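The override mechanism described above is a common pattern: a config module reads an environment variable and falls back to a default. A minimal sketch of the idea (the names here are illustrative, not the lab's actual config.py contents):

```python
import os

def get_setting(name, default):
    """Read a pipeline setting from the environment, falling back to a default."""
    return os.environ.get(name, default)

# Defaults are used when nothing is exported...
region = get_setting("REGION", "us-central1")

# ...and environment variables, as set with %env above, take precedence.
os.environ["REGION"] = "europe-west1"
assert get_setting("REGION", "us-central1") == "europe-west1"
```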
Exercise: build your pipeline with the TFX CLI
You will use TFX CLI to compile and deploy the pipeline. As explained in the previous section, the environment specific settings can be provided through a set of environment variables and embedded into the pipeline package at compile time.
Configure your environment resource settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training, Vizier, and Prediction.
ARTIFACT_STORE - An existing GCS bucket. You can use any bucket, but we will use here the bucket with the same name as the project.
End of explanation
"""
PIPELINE_NAME = "tfxcovertype"
DATA_ROOT_URI = f"gs://{PROJECT_ID}/data/tfxcovertype"
TFX_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{PIPELINE_NAME}"
PIPELINE_JSON = f"{PIPELINE_NAME}.json"
TRAIN_STEPS = 10
EVAL_STEPS = 5
%env PIPELINE_NAME={PIPELINE_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env TFX_IMAGE_URI={TFX_IMAGE_URI}
%env PIPELINE_JSON={PIPELINE_JSON}
%env TRAIN_STEPS={TRAIN_STEPS}
%env EVAL_STEPS={EVAL_STEPS}
"""
Explanation: Set the compile time settings to first create a pipeline version without hyperparameter tuning
Default pipeline runtime environment values are configured in the pipeline folder config.py. You will set their values directly below:
PIPELINE_NAME - the pipeline's globally unique name.
DATA_ROOT_URI - the URI for the raw lab dataset gs://{PROJECT_ID}/data/tfxcovertype.
TFX_IMAGE_URI - the image name of your pipeline container that will be used to execute each of your TFX components
End of explanation
"""
!gsutil cp ../../../data/* $DATA_ROOT_URI/dataset.csv
!gsutil ls $DATA_ROOT_URI/*
"""
Explanation: Let us populate the data bucket at DATA_ROOT_URI:
End of explanation
"""
!gcloud builds submit --timeout 15m --tag $TFX_IMAGE_URI .
"""
Explanation: Let us build and push the TFX container image described in the Dockerfile:
End of explanation
"""
# TODO: Your code here to compile the TFX Pipeline using the TFX CLI
"""
Explanation: Compile your pipeline code
The following command will execute the KubeflowV2DagRunner that compiles the pipeline described in pipeline.py into a JSON representation consumable by Vertex:
End of explanation
"""
# TODO: Your code here to use the vertex_ai sdk to deploy your
# pipeline image to Vertex Pipelines.
"""
Explanation: Note: you should see a {PIPELINE_NAME}.json file appear in your current pipeline directory.
Exercise: deploy your pipeline on Vertex using the Vertex SDK
Once you have the {PIPELINE_NAME}.json available, you can run the tfx pipeline on Vertex by launching a pipeline job using the aiplatform handle:
End of explanation
"""
|
gatakaba/kalmanfilter | examples/kalmanfilter/free_fall.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir('..')
from kalmanfilter.kalmanfilter import KalmanFilter
dt = 10 ** -3
"""
Explanation: State-space equations for free fall
The equation of motion for a mass M with damping coefficient C and spring constant K is
$$ M \frac{d^{2}x}{dt^{2}} + C \frac{dx}{dt} + K x = f(t) $$
Ignoring air resistance and rearranging the equation for free fall gives
$$ M \frac{d^{2}x}{dt^{2}} = Mg $$
Taking the current position $x$ and velocity $v$ as the state, the parameters of the state-space equations
$$ \mathbf{z}_{t+1} = A \mathbf{z}_{t} + B \mathbf{u}_{t} $$
$$ \mathbf{x}_{t} = C \mathbf{z}_{t} $$
are as follows.
$ \mathbf{z}_{t}= \left[ x_{t} , v_{t} \right]^{T} $
,
$
A = \left[
\begin{array}{rr}
1 & dt \\
0 & 1
\end{array}
\right]
$
,
$
B = \left[
\begin{array}{rr}
0 & 0 \\
0 & \frac{dt}{M}
\end{array}
\right]
$
,
$ \mathbf{u}_{t}= \left[0 , Mg \right]^{T} $
,
$ C= \left[1 , 0 \right] $
,
$
Q = \left[
\begin{array}{rr}
\frac{dt^{3}}{3} & \frac{dt^{2}}{2} \\
\frac{dt^{2}}{2} & dt
\end{array}
\right]q
$,
$ R = r $
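The matrices $A$ and $B$ are just the forward-Euler discretization of $\dot{x} = v$, $\dot{v} = u/M$ over one time step $dt$; a one-step hand check in plain Python (a standalone sanity check, separate from the KalmanFilter class used below) confirms the update:

```python
dt, M, g = 1e-3, 10.0, 9.8   # same values as the simulation code

def step(z, u):
    # z_{t+1} = A z_t + B u_t with A = [[1, dt], [0, 1]] and B = [[0, 0], [0, dt/M]]
    x, v = z
    return [x + dt * v, v + (dt / M) * u[1]]

x1, v1 = step([10.0, 10.0], [0.0, -M * g])   # one step under gravity
assert abs(x1 - 10.01) < 1e-9                # position advances by v*dt
assert abs(v1 - 9.9902) < 1e-9               # velocity loses g*dt
```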
Using these model parameters, let's observe the object for 0.5 s at a sampling rate of 1000 Hz and then predict its motion for the following 1.5 s.
The program looks like this.
End of explanation
"""
M=10
A = np.array([[1, dt],[0, 1]])
B = np.array([[0, 0], [0, dt / M]])
C = np.atleast_2d([1, 0])
q = 1
r = 1
Q = np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]]) * q
R = np.eye(1) * r
s = np.array([0, 0])
"""
Explanation: Define the model parameters
End of explanation
"""
kf = KalmanFilter(A, C, Q, R, s, initial_covariance=None, drive_matrix=B)
N = 2000
t = np.arange(0, N * dt, dt)
true_x = np.empty(N)
observed_x = np.empty(500)
estimated_x = np.empty(N)
estimated_variance = np.empty(N)
"""
Explanation: Create the Kalman filter instance
End of explanation
"""
y = -t ** 2 * 9.8 / 2 + 10 * t + 10
"""
Explanation: Generate the true trajectory (not observable)
End of explanation
"""
for i in range(500):
x = y[i] + np.random.normal(0, 0.5)
observed_x[i] = x
u = np.array([0, -M * 9.8])
kf.update(x, u)
estimated_x[i] = kf.current_state[0][0]
estimated_variance[i] = kf.current_state[1][0, 0]
"""
Explanation: Estimate position and velocity from observations corrupted by additive Gaussian noise
End of explanation
"""
m, p = kf.predict_state(N - 500, u)
"""
Explanation: Predict 1.5 s ahead
End of explanation
"""
for i in range(N - 500):
estimated_x[i + 500] = m[i][0]
estimated_variance[i + 500] = p[i][0, 0]
"""
Explanation: Store the predicted trajectory data
End of explanation
"""
%matplotlib inline
plt.plot(t[:500], observed_x, "k-", label="observed trajectory", alpha=0.25)
plt.plot(t, y, "g-", label="true trajectory")
plt.fill_between(t, estimated_x - estimated_variance ** 0.5, estimated_x + estimated_variance ** 0.5, alpha=0.25)
plt.plot(t, estimated_x, "ro-", label="filtered trajectory")
plt.legend()
"""
Explanation: Plot the results
End of explanation
"""
|
yfur/basic-mechanics-python | 1_modeling/1_modeling.ipynb | apache-2.0 | import numpy as np
from scipy.integrate import odeint
from math import sin
''' constants '''
m = 1 # mass of the pendulum [kg]
l = 1 # length of the pendulum [m]
g = 10 # Gravitational acceleration [m/s^2]
c = 0.3 # Damping constant [kg.m/(rad.s)]
''' time setting '''
t_end = 10 # simulation time [s]
t_fps = 50 # frame per second. This value means smoothness of produced graph and animation
t_step = 1/t_fps
t = np.arange(0, t_end, t_step)
''' initial value '''
theta_init = 0 # initial value of theta [rad]
dtheta_init = 1 # initial value of dot theta [rad/s]
s_init = np.array([theta_init, dtheta_init])
def odefunc(s, t):
theta = s[0]
dtheta = s[1]
ddtheta = -g/l*sin(theta) - c/(m*l)*dtheta  # <- Equation of motion. *** THIS CODE CHANGED ***
return np.r_[dtheta, ddtheta]
s = odeint(odefunc, s_init, t)
print('ODE calculation finished.')
"""
Explanation: Modeling
Next >> 0_quickstart
Prev >> editing
The very first step in any simulation is modeling. Naturally, the result of a simulation depends heavily on how the system is modeled.
For example, in the simple-pendulum simulation of 0_quickstart we did not consider the damping of the motion due to friction; let us now include it in the model.
Assume viscous friction at the joint between the pendulum and the ceiling, so that a force $-c\dot{\theta}$ proportional to the angular velocity acts on the pendulum. The equation of motion then becomes
\begin{align}
ml\ddot{\theta} = -mg\sin\theta - c\dot{\theta}
\end{align}
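This damped equation can be sanity-checked before running the full odeint simulation: a few lines of plain Python integrating $\ddot{\theta} = -(g/l)\sin\theta - c\dot{\theta}/(ml)$ with semi-implicit Euler (a standalone sketch, not part of the notebook's code) should show the amplitude decaying:

```python
from math import sin

m, l, g, c = 1.0, 1.0, 10.0, 0.3   # same constants as the simulation code
dt = 1e-3
theta, dtheta = 0.0, 1.0            # same initial state

peaks = []
for _ in range(10000):              # 10 s of simulated time
    ddtheta = -(g / l) * sin(theta) - (c / (m * l)) * dtheta
    dtheta += ddtheta * dt          # semi-implicit (symplectic) Euler
    theta += dtheta * dt
    peaks.append(abs(theta))

early, late = max(peaks[:2000]), max(peaks[-2000:])
assert late < 0.5 * early           # damping visibly shrinks the swing
```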
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.plot(t, s[:, 0])
plt.xlabel('t [s]')
plt.ylabel('theta [rad]')
plt.show()
"""
Explanation: Plotting the time evolution of the pendulum angle gives the following.
End of explanation
"""
import sympy as sym
import sympy.physics.mechanics as me
"""
Explanation: We can see that including the element of friction changed the character of the motion.
In simulation, the results you obtain change greatly depending on how you model the physical system of interest. Carrying out the simulation you want requires sufficient knowledge of mechanics.
Lagrange's equations of motion
In 0_quickstart we obtained the equation of motion of the simple pendulum from Newton's equation of motion; this time we obtain it from Lagrange's equation of motion.
The kinetic energy of the bob is
\begin{align}
T = \frac{1}{2}m(l\dot{\theta})^2
\end{align}
and the potential energy is
\begin{align}
U = - m(-g)(l-l\cos\theta) = mgl(1 - \cos\theta)
\end{align}
Therefore the Lagrangian of the system is
\begin{align}
L = T - U = \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta)
\end{align}
and Lagrange's equation of motion is
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = 0
\end{align}
Computing each term carefully,
\begin{align}
\frac{\partial L}{\partial \dot{\theta}} = \frac{\partial }{\partial \dot{\theta}} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = ml^2\dot{\theta}
\end{align}
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) = \frac{d}{dt} (ml^2\dot{\theta}) = ml^2\ddot{\theta}
\end{align}
\begin{align}
\frac{\partial L}{\partial \theta} = \frac{\partial }{\partial \theta} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = -mgl \sin\theta
\end{align}
so
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = ml^2\ddot{\theta} - (-mgl \sin\theta) = 0
\end{align}
Hence
\begin{align}
ml^2\ddot{\theta} + mgl \sin\theta = 0
\end{align}
Rearranging,
\begin{align}
\ddot{\theta} = -\frac{g}{l} \sin\theta
\end{align}
which is the same result as the one derived from Newton's equation of motion.
Computing Lagrange's equations of motion with SymPy
Lagrange's equations describe a minimal set of equations of motion in the degrees of freedom of the motion. However, computing the Lagrangian and its partial derivatives tends to be tedious. For the simple pendulum there is only one degree of freedom and the situation is very simple, so hand calculation is fine; but once we move to multi-link systems, or extend the motion from two to three dimensions, deriving the equations by hand is no longer appealing.
So let us derive Lagrange's equations of motion using Python, with SymPy's LagrangesMethod class.
End of explanation
"""
''' Define constants and generalized coordinates '''
t = sym.symbols('t')
l, m, g = sym.symbols('l m g')
theta = me.dynamicsymbols('theta')
dtheta = me.dynamicsymbols('theta', 1)
"""
Explanation: Constants are defined in the form m = sym.symbols('m').
Note that the time variable $t$ must always be defined, as t = sym.symbols('t').
Quantities that vary with time (generalized coordinates) are defined in the form theta = me.dynamicsymbols('theta'); their time derivatives (generalized velocities) are defined as dtheta = me.dynamicsymbols('theta', 1).
End of explanation
"""
''' Kinetic energy '''
T = m*(l*dtheta)**2/2
''' Potential energy '''
U = -m*(-g)*(l - l*sym.cos(theta))
''' Lagurangian '''
L = T - U
"""
Explanation: After defining all the constants and variables that the physical model needs, write down each of the mechanical energies and compute the Lagrangian.
End of explanation
"""
''' Calculating the eom '''
LM = me.LagrangesMethod(L, [theta])
print(LM.form_lagranges_equations())
"""
Explanation: LM = me.LagrangesMethod(lagrangian, [list of generalized coordinates]) defines Lagrange's equations of motion.
LM.form_lagranges_equations() then outputs the equations of motion.
End of explanation
"""
|
scotthuang1989/Python-3-Module-of-the-Week | internet/urllib.parse — Split URLs into Components.ipynb | apache-2.0 | from urllib.parse import urlparse
url = 'http://netloc/path;param?query=arg#frag'
parsed = urlparse(url)
print(parsed)
"""
Explanation: Parsing
End of explanation
"""
from urllib.parse import urlparse
url = 'http://user:pwd@NetLoc:80/path;param?query=arg#frag'
parsed = urlparse(url)
print('scheme :', parsed.scheme)
print('netloc :', parsed.netloc)
print('path :', parsed.path)
print('params :', parsed.params)
print('query :', parsed.query)
print('fragment:', parsed.fragment)
print('username:', parsed.username)
print('password:', parsed.password)
print('hostname:', parsed.hostname)
print('port :', parsed.port)
"""
Explanation: Although the return value acts like a tuple, it is really based on a namedtuple, a subclass of tuple that supports accessing the parts of the URL via named attributes as well as indexes. In addition to being easier to use for the programmer, the attribute API also offers access to several values not available in the tuple API.
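Because the result is a namedtuple, index access and attribute access are interchangeable, and the namedtuple _replace() method can derive a modified URL. A quick illustration:

```python
from urllib.parse import urlparse

parsed = urlparse('http://netloc/path;param?query=arg#frag')

# Tuple-style and attribute-style access return the same values.
assert parsed[0] == parsed.scheme == 'http'
assert parsed[1] == parsed.netloc == 'netloc'

# _replace() builds a new result with one field swapped out.
https_url = parsed._replace(scheme='https').geturl()
assert https_url == 'https://netloc/path;param?query=arg#frag'
```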
End of explanation
"""
from urllib.parse import urlsplit
url = 'http://user:pwd@NetLoc:80/p1;para/p2;para?query=arg#frag'
parsed = urlsplit(url)
print(parsed)
print('scheme :', parsed.scheme)
print('netloc :', parsed.netloc)
print('path :', parsed.path)
print('query :', parsed.query)
print('fragment:', parsed.fragment)
print('username:', parsed.username)
print('password:', parsed.password)
print('hostname:', parsed.hostname)
print('port :', parsed.port)
"""
Explanation: The urlsplit() function is an alternative to urlparse(). It behaves a little differently, because it does not split the parameters from the URL. This is useful for URLs following RFC 2396, which supports parameters for each segment of the path.
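The difference is easiest to see side by side: urlparse() splits the parameters off the last path segment, while urlsplit() leaves them in the path and exposes no params field at all:

```python
from urllib.parse import urlparse, urlsplit

url = 'http://netloc/p1;para/p2;para?query=arg'

parsed = urlparse(url)
assert parsed.path == '/p1;para/p2'      # params stripped from the last segment only
assert parsed.params == 'para'

split = urlsplit(url)
assert split.path == '/p1;para/p2;para'  # params stay inside the path
assert not hasattr(split, 'params')      # SplitResult has no params field
```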
End of explanation
"""
from urllib.parse import urlparse
original = 'http://netloc/path;param?query=arg#frag'
print('ORIG :', original)
parsed = urlparse(original)
print('PARSED:', parsed.geturl())
"""
Explanation: Unparsing
There are several ways to assemble the parts of a split URL back together into a single string. The parsed URL object has a geturl() method.
End of explanation
"""
from urllib.parse import urlparse, urlunparse
original = 'http://netloc/path;param?query=arg#frag'
print('ORIG :', original)
parsed = urlparse(original)
print('PARSED:', type(parsed), parsed)
t = parsed[:]
print('TUPLE :', type(t), t)
print('NEW :', urlunparse(t))
"""
Explanation: A regular tuple containing strings can be combined into a URL with urlunparse().
End of explanation
"""
from urllib.parse import urljoin
print(urljoin('http://www.example.com/path/file.html',
'anotherfile.html'))
print(urljoin('http://www.example.com/path/file.html',
'../anotherfile.html'))
"""
Explanation: Joining
In addition to parsing URLs, urlparse includes urljoin() for constructing absolute URLs from relative fragments.
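Two more cases are worth noting: a reference starting with / replaces the whole path, and an absolute reference replaces the URL entirely:

```python
from urllib.parse import urljoin

base = 'http://www.example.com/path/file.html'

# A root-relative reference replaces the entire path.
assert urljoin(base, '/other/page.html') == 'http://www.example.com/other/page.html'

# An absolute reference wins outright.
assert urljoin(base, 'http://elsewhere.com/x.html') == 'http://elsewhere.com/x.html'
```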
End of explanation
"""
from urllib.parse import urlencode
query_args = {
'q': 'query string',
'foo': 'bar',
}
encoded_args = urlencode(query_args)
print('Encoded:', encoded_args)
"""
Explanation: Encoding Query Arguments
Before arguments can be added to a URL, they need to be encoded.
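The underlying primitives are quote() and quote_plus(): both percent-encode reserved characters, but quote_plus() turns spaces into '+' (the query-string convention) while quote() uses %20 and leaves '/' alone by default:

```python
from urllib.parse import quote, quote_plus

assert quote_plus('query string') == 'query+string'
assert quote('query string') == 'query%20string'
assert quote('/path needs encoding') == '/path%20needs%20encoding'  # '/' is safe by default
```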
End of explanation
"""
from urllib.parse import urlencode
query_args = {
'foo': ['foo1', 'foo2'],
}
print('Single :', urlencode(query_args))
print('Sequence:', urlencode(query_args, doseq=True))
"""
Explanation: To pass a sequence of values using separate occurrences of the variable in the query string, set doseq to True when calling urlencode().
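parse_qs() reassembles such repeated variables into a list, so the doseq form round-trips cleanly:

```python
from urllib.parse import urlencode, parse_qs

encoded = urlencode({'foo': ['foo1', 'foo2']}, doseq=True)
assert encoded == 'foo=foo1&foo=foo2'
assert parse_qs(encoded) == {'foo': ['foo1', 'foo2']}
```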
End of explanation
"""
|
pchmieli/h2o-3 | h2o-py/demos/H2O_tutorial_medium.ipynb | apache-2.0 | import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
from h2o.estimators.random_forest import H2ORandomForestEstimator
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame(python_obj=X.to_dict("list"))
# select 10% for validation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
"""
Explanation: H2O Tutorial
Author: Spencer Aiello
Contact: spencer@h2oai.com
This tutorial steps through a quick introduction to H2O's Python API. The goal is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those accustomed to Scikit-Learn and Pandas, the demo includes specific call-outs for differences between H2O and those packages; this is intended to help anyone who needs to do machine learning on really big data make the transition. It is not meant to be a tutorial on machine learning or algorithms.
Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.
Setting up your system for this demo
The following code creates two CSV files using data from the Boston Housing dataset, which is built into scikit-learn, and writes them to the local directory
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Enable inline plotting in the Jupyter Notebook
End of explanation
"""
fr = h2o.import_file("Boston_housing_train.csv")
"""
Explanation: Intro to H2O Data Munging
Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.
End of explanation
"""
fr.head()
"""
Explanation: View the top of the H2O frame.
End of explanation
"""
fr.tail()
"""
Explanation: View the bottom of the H2O Frame
End of explanation
"""
fr["CRIM"].head() # Tab completes
"""
Explanation: Select a column
fr["VAR_NAME"]
End of explanation
"""
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
"""
Explanation: Select a few columns
End of explanation
"""
fr[2:7,:] # explicitly select all columns with :
"""
Explanation: Select a subset of rows
Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.
End of explanation
"""
# The columns attribute is exactly like Pandas
print "Columns:", fr.columns, "\n"
print "Columns:", fr.names, "\n"
print "Columns:", fr.col_names, "\n"
# There are a number of attributes to get at the shape
print "length:", str( len(fr) ), "\n"
print "shape:", fr.shape, "\n"
print "dim:", fr.dim, "\n"
print "nrow:", fr.nrow, "\n"
print "ncol:", fr.ncol, "\n"
# Use the "types" attribute to list the column types
print "types:", fr.types, "\n"
"""
Explanation: Key attributes:
* columns, names, col_names
* len, shape, dim, nrow, ncol
* types
Note:
Since the data is not in local Python memory,
there is no "values" attribute. If you want to
pull all of the data into local Python memory,
do so explicitly with h2o.export_file and
read the data back into Python memory from disk.
End of explanation
"""
fr.shape
"""
Explanation: Select rows based on value
End of explanation
"""
mask = fr["CRIM"]>1
fr[mask,:].shape
"""
Explanation: Boolean masks can be used to subselect rows based on a criterion.
End of explanation
"""
fr.describe()
"""
Explanation: Get summary statistics of the data and additional data distribution information.
End of explanation
"""
x = fr.names
y="Median_value"
x.remove(y)
"""
Explanation: Set up the predictor and response column names
Using H2O algorithms, it's easier to reference predictor and response columns
by name in a single frame (i.e., don't split up X and y)
End of explanation
"""
# Define and fit first 400 points
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=fr[:400,:])
model.predict(fr[400:fr.nrow,:]) # Predict the rest
"""
Explanation: Machine Learning With H2O
H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.
Unlike Scikit-learn, H2O allows for categorical and missing data.
The basic workflow is as follows:
* Fit the training data with a machine learning algorithm
* Predict on the testing data
Simple model
End of explanation
"""
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
"""
Explanation: The performance of the model can be checked using the holdout dataset
End of explanation
"""
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
perf = model.model_performance(test)
perf.r2()
"""
Explanation: Train-Test Split
Instead of taking the first 400 observations for training, we can use H2O to create a random train-test split of the data.
End of explanation
"""
model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
"""
Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Cross validation
H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either:
* AUTO: Perform random assignment
* Random: Each row has an equal (1/nfolds) chance of being in any fold.
* Modulo: Observations are assigned to folds by taking the row index modulo nfolds
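The Modulo strategy is deterministic and easy to reproduce outside H2O; a plain-Python sketch of the assignment (illustrative only, not H2O's internal implementation):

```python
def modulo_folds(n_rows, nfolds):
    """Assign row i to fold i % nfolds, as in the Modulo strategy."""
    return [i % nfolds for i in range(n_rows)]

folds = modulo_folds(10, 3)
assert folds == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
# Every fold receives n_rows/nfolds rows, up to rounding.
assert all(folds.count(f) in (3, 4) for f in range(3))
```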
End of explanation
"""
from sklearn.cross_validation import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
"""
Explanation: However, you can still make use of the cross_val_score from Scikit-Learn
Cross validation: H2O and Scikit-Learn
End of explanation
"""
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
"""
Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object, but has its own train method.
End of explanation
"""
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
"""
Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Since the progress bar printout gets annoying, let's disable it
End of explanation
"""
from sklearn import __version__
sklearn_version = __version__
print sklearn_version
"""
Explanation: Grid Search
Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)
Randomized grid search: H2O and Scikit-Learn
End of explanation
"""
%%time
from sklearn.grid_search import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,50),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=30,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
"""
Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
The steps to perform a randomized grid search:
1. Import model and RandomizedSearchCV
2. Define model
3. Specify parameters to test
4. Define grid search object
5. Fit data to grid search object
6. Collect scores
All the steps will be repeated from above.
Because 0.16.1 is installed, we use scipy to define specific distributions
ADVANCED TIP:
Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).
We'll turn it back on again in the aftermath of a Parallel job.
If you don't want to run jobs in parallel, don't turn off the reference counting.
Pattern is:
>>> h2o.turn_off_ref_cnts()
>>> .... parallel job ....
>>> h2o.turn_on_ref_cnts()
End of explanation
"""
def report_grid_score_detail(random_search, charts=True):
"""Input fit grid search estimator. Returns df of scores with details"""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
"""Return list of numeric dtypes variables"""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
"""
Explanation: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.
End of explanation
"""
%%time
params = {"ntrees": randint(30,40),
"max_depth": randint(4,10),
"mtries": randint(4,10),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=10, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
report_grid_score_detail(random_search)
"""
Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:
End of explanation
"""
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
"""
Explanation: Transformations
Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.
At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.
Basic steps:
Remove the response variable from transformations.
Import transformer
Define transformer
Fit train data to transformer
Transform test and train data
Re-attach the response variable.
First let's normalize the data using the means and standard deviations of the training data.
Then let's perform a principal component analysis on the training data and select the top 5 components.
Using these components, let's use them to reduce the train and test design matrices.
End of explanation
"""
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print X_test_norm.shape
X_test_norm
"""
Explanation: Normalize Data: Use the means and standard deviations from the training data.
End of explanation
"""
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print X_test_norm_pca.shape
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
"""
Explanation: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
End of explanation
"""
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(k=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
"""
Explanation: Keeping track of all of these transformations manually gets to be somewhat of a burden when you want to chain together multiple transformers. A pipeline is MUCH simpler.
Pipelines
"Tranformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps:
Import Pipeline, transformers, and model
Define pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).
Fit the training data to pipeline
Either transform or predict the testing data
End of explanation
"""
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__k": randint(2, 6),
"rf__ntrees": randint(50,80),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# k above
from sklearn.grid_search import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=30,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
"""
Explanation: This is so much easier!!!
But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.
Combining randomized grid search and pipelines
"Yo dawg, I heard you like models, so I put models in your models to model models."
Steps:
Import Pipeline, grid search, transformers, and estimators <Not shown below>
Define pipeline
Define parameters to test in the form "(step name)__(argument name)". A double underscore separates the step name from the argument name.
Define grid search
Fit to grid search
End of explanation
"""
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print save_path
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(fr)
"""
Explanation: Currently Under Development (drop-in scikit-learn pieces):
* Richer set of transforms (only PCA and Scale are implemented)
* Richer set of estimators (only RandomForest is available)
* Full H2O Grid Search
Other Tips: Model Save/Load
It is useful to save constructed models to disk and reload them between H2O sessions. Here's how:
End of explanation
"""
|
jokedurnez/RequiredEffectSize | Figure2_CorrSimulation/Correlation_simulation.ipynb | mit | import numpy
import nibabel
import os
import nilearn.plotting
import matplotlib.pyplot as plt
from statsmodels.regression.linear_model import OLS
import nipype.interfaces.fsl as fsl
import scipy.stats
if not 'FSLDIR' in os.environ.keys():
raise Exception('This notebook requires that FSL is installed and the FSLDIR environment variable is set')
%matplotlib inline
"""
Explanation: This notebook generates random synthetic fMRI data and a random behavioral regressor, and performs a standard univariate analysis to find correlations between the two. It is meant to demonstrate how easy it is to find seemingly impressive correlations with fMRI data when multiple tests are not properly controlled for.
In order to run this code, you must first install the standard Scientific Python stack (e.g. using anaconda) along with the following additional dependencies:
* nibabel
* nilearn
* statsmodels
* nipype
In addition, this notebook assumes that FSL is installed and that the FSLDIR environment variable is defined.
End of explanation
"""
pthresh=0.001 # cluster forming threshold
cthresh=10 # cluster extent threshold
nsubs=28 # number of subjects
"""
Explanation: Set up default parameters. We use 28 subjects, which is the median sample size of the set of fMRI studies published in 2015, as estimated from Neurosynth in the paper. We use a heuristic correction for multiple comparisons of p<0.001 and a 10-voxel extent, which Eklund et al. (2016, PNAS) showed to result in Type I error rates of 0.6-0.9.
End of explanation
"""
recreate_paper_figure=False
if recreate_paper_figure:
seed=6636
else:
seed=numpy.ceil(numpy.random.rand()*100000).astype('int')
print(seed)
numpy.random.seed(seed)
"""
Explanation: In order to recreate the figure from the paper exactly, we need to fix the random seed so that it will generate exactly the same random data. If you wish to generate new data, then set the recreate_paper_figure variable to False and rerun the notebook.
End of explanation
"""
maskimg=os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain_mask.nii.gz')
mask=nibabel.load(maskimg)
maskdata=mask.get_data()
maskvox=numpy.where(maskdata>0)
print('Mask includes %d voxels'%len(maskvox[0]))
"""
Explanation: Use the standard MNI152 2mm brain mask as the mask for the generated data
End of explanation
"""
imgmean=1000 # mean activation within mask
imgstd=100 # standard deviation of noise within mask
behavmean=100 # mean of behavioral regressor
behavstd=1 # standard deviation of behavioral regressor
data=numpy.zeros((maskdata.shape + (nsubs,)))
for i in range(nsubs):
tmp=numpy.zeros(maskdata.shape)
tmp[maskvox]=numpy.random.randn(len(maskvox[0]))*imgstd+imgmean
data[:,:,:,i]=tmp
newimg=nibabel.Nifti1Image(data,mask.get_affine(),mask.get_header())
newimg.to_filename('fakedata.nii.gz')
regressor=numpy.random.randn(nsubs,1)*behavstd+behavmean
numpy.savetxt('regressor.txt',regressor)
"""
Explanation: Generate a dataset for each subject. fMRI data within the mask are generated using a Gaussian distribution (mean=1000, standard deviation=100). Behavioral data are generated using a Gaussian distribution (mean=100, standard deviation=1).
End of explanation
"""
smoothing_fwhm=6 # FWHM in millimeters
smooth=fsl.IsotropicSmooth(fwhm=smoothing_fwhm,
in_file='fakedata.nii.gz',
out_file='fakedata_smooth.nii.gz')
smooth.run()
"""
Explanation: Spatially smooth data using a 6 mm FWHM Gaussian kernel
End of explanation
"""
glm = fsl.GLM(in_file='fakedata_smooth.nii.gz',
design='regressor.txt',
out_t_name='regressor_tstat.nii.gz',
demean=True)
glm.run()
"""
Explanation: Use FSL's GLM tool to run a regression at each voxel
End of explanation
"""
tcut=scipy.stats.t.ppf(1-pthresh,nsubs-1)
cl = fsl.Cluster()
cl.inputs.threshold = tcut
cl.inputs.in_file = 'regressor_tstat.nii.gz'
cl.inputs.out_index_file='tstat_cluster_index.nii.gz'
results=cl.run()
"""
Explanation: Use FSL's cluster tool to identify clusters of activation that exceed the specified cluster-forming threshold
End of explanation
"""
clusterimg=nibabel.load(cl.inputs.out_index_file)
clusterdata=clusterimg.get_data()
indices=numpy.unique(clusterdata)
clustersize=numpy.zeros(len(indices))
clustermean=numpy.zeros((len(indices),nsubs))
indvox={}
for c in range(1,len(indices)):
indvox[c]=numpy.where(clusterdata==c)
clustersize[c]=len(indvox[c][0])
for i in range(nsubs):
tmp=data[:,:,:,i]
clustermean[c,i]=numpy.mean(tmp[indvox[c]])
corr=numpy.corrcoef(regressor.T,clustermean[-1])
print('Found %d clusters exceeding p<%0.3f and %d voxel extent threshold'%(c,pthresh,cthresh))
print('Largest cluster: correlation=%0.3f, extent = %d voxels'%(corr[0,1],len(indvox[c][0])))
# set cluster to show - 0 is the largest, 1 the second largest, and so on
cluster_to_show=0
# translate this variable into the index of indvox
cluster_to_show_idx=len(indices)-cluster_to_show-1
# plot the (circular) relation between fMRI signal and
# behavioral regressor in the chosen cluster
plt.scatter(regressor.T,clustermean[cluster_to_show_idx])
plt.title('Correlation = %0.3f'%corr[0,1],fontsize=14)
plt.xlabel('Fake behavioral regressor',fontsize=18)
plt.ylabel('Fake fMRI data',fontsize=18)
m, b = numpy.polyfit(regressor[:,0], clustermean[cluster_to_show_idx], 1)
axes = plt.gca()
X_plot = numpy.linspace(axes.get_xlim()[0],axes.get_xlim()[1],100)
plt.plot(X_plot, m*X_plot + b, '-')
plt.savefig('scatter.png',dpi=600)
"""
Explanation: Generate a plot showing the brain-behavior relation from the top cluster
End of explanation
"""
tstat=nibabel.load('regressor_tstat.nii.gz').get_data()
thresh_t=clusterdata.copy()
cutoff=numpy.min(numpy.where(clustersize>cthresh))
thresh_t[thresh_t<cutoff]=0
thresh_t=thresh_t*tstat
thresh_t_img=nibabel.Nifti1Image(thresh_t,mask.get_affine(),mask.get_header())
"""
Explanation: Generate a thresholded statistics image for display
End of explanation
"""
mid=len(indvox[cluster_to_show_idx][0])//2  # integer division, since mid is used as an index
coords=numpy.array([indvox[cluster_to_show_idx][0][mid],
indvox[cluster_to_show_idx][1][mid],
indvox[cluster_to_show_idx][2][mid],1]).T
mni=mask.get_qform().dot(coords)
nilearn.plotting.plot_stat_map(thresh_t_img,
os.path.join(os.getenv('FSLDIR'),'data/standard/MNI152_T1_2mm_brain.nii.gz'),
threshold=cl.inputs.threshold,
cut_coords=mni[:3])
plt.savefig('slices.png',dpi=600)
"""
Explanation: Generate a figure showing the location of the selected activation focus.
End of explanation
"""
|
KushajveerSingh/Data-Science-Libraries | .ipynb_checkpoints/Advancd Python-checkpoint.ipynb | mit | if __name__ == "__main__":
# Do anything you want
pass
"""
Explanation: Python is a programming language. It is often referred to as a scripting language, since scripting languages are typically interpreted rather than compiled.<br>
End of explanation
"""
# If you want to pass the arguments to a function from a list, unpack it with *
def f(x,y):
print(x, y)
myList= [1,2]
f(*myList)
"""
Explanation: You need this code segment because it lets you execute the code inside it only when the file is run directly. When you create a module, you may want some demo or test code that must not be executed when the module is imported; in that case this if statement serves the need.
End of explanation
"""
# The '@' syntax applies a decorator to the function defined directly
# below it (this assumes a decorator named myDecorator already exists):
# @myDecorator
# def aFunction():
#     print('Inside aFunction')
class my_decorator(object):
def __init__(self, f):
print("inside my_decorator.__init__()")
f()
def __call__(self):
print("inside my_decorator.__call__()")
@my_decorator
def aFunction():
print("inside aFunction()")
# my_decorator.__init__ runs at definition time, when the decorator is
# applied, so its output appears before execution reaches this statement
print("Finished decorating aFunction()")
aFunction()
# The decorator replaced the original function object, so aFunction() now runs __call__
"""
Explanation: Decorators
Decorators are a simple alternative to metaclasses. The '@' symbol indicates the application of a decorator.
End of explanation
"""
def foo():
pass
foo = staticmethod(foo)
# You can replace the above code (normally written inside a class body) with a decorator:
@staticmethod
def foo():
pass
"""
Explanation: <p>How do decorators work?<br>
When @my_decorator is applied, Python calls my_decorator(aFunction), which runs __init__. Because the function definition is already complete before __init__ executes, we can even call the original function inside __init__. After __init__ returns, the name aFunction is bound to the my_decorator instance, so calling aFunction() executes __call__ rather than the original function body.</p>
End of explanation
"""
|
mdiaz236/DeepLearningFoundations | gan_mnist/Intro_to_GANs_Solution.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
"""
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
                                            labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
End of explanation
"""
!mkdir checkpoints
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
mwegrzyn/mindReading2017 | content/_006_neurosynthDecoding_pt1.ipynb | gpl-3.0 | import os
imgList = ['../training/%s'%x for x in os.listdir('../training/')]; imgList.sort()
"""
Explanation: So far we have always used only our participant's own data to make predictions. This makes sense, since a person's data reflect the idiosyncrasies of their way of thinking well. For example, in someone with atypical right-lateralized language, it would probably not be so easy to decode language activity from the data of other people, of whom perhaps 90% are left-lateralized. Another reason we might do well when training on our own data is that our tasks are not necessarily typical of a particular cognitive process. For instance, we saw that our task of thinking about faces might reflect autobiographical memory functions rather than the typical visual perception of faces.
Nevertheless, there are also advantages to an approach in which we use other people's data to decode our participant's data. The advantages arise precisely from the difficulties associated with this task:
Regarding the individual idiosyncrasies of the person, we do not know how representative the data from the day of testing are for other days. Fatigue, nervousness, drug intake (caffeine), lying uncomfortably, scanner parameters such as resolution, TR, etc. will certainly all introduce "idiosyncrasies" into the data that would be different again a week later.
Regarding idiosyncrasies of the task, we will look next week at the role that a large database such as Neurosynth could play here.
We make a list of all of our brain images
End of explanation
"""
keywords = ['face','navigation','sensorimotor','dmn','language']
from nilearn import image, datasets, input_data, plotting
import seaborn as sns
import matplotlib.pylab as plt
%matplotlib inline
for keyword in keywords:
plotting.plot_stat_map('../ns/%s_specificity_z_FDR_0.01.nii.gz' % keyword,title=keyword)
plt.show()
nsList = ['../ns/%s_specificity_z_FDR_0.01.nii.gz' % keyword for keyword in keywords ]
masker = input_data.NiftiMasker(mask_img='../masks/MNI152_T1_2mm_brain_mask.nii.gz').fit()
import pandas as pd
roiDf = pd.DataFrame(masker.transform(nsList),index=keywords)
roiDf
for roi in roiDf.index:
thisData = roiDf.ix[roi]
thisImg = masker.inverse_transform(thisData)
plotting.plot_stat_map(thisImg,title=roi)
plt.show()
"""
Explanation: We use the Neurosynth maps as activation templates
End of explanation
"""
import pandas as pd
import numpy as np
def makeBigDf(imgList,masker):
bigDf = pd.DataFrame()
for img in imgList:
thisName = img.split('/')[-1].split('.')[0]
cond,num,content = thisName.split('_')
cont = '%s_%s' % (num,content)
thisDf = pd.DataFrame(masker.transform(img))
thisDf.index = [[cond],[cont]]
bigDf = pd.concat([bigDf,thisDf])
bigDf.sort_index(inplace=True)
return bigDf
blockDf = makeBigDf(imgList,masker)
blockDf
blockDf.shape
"""
Explanation: Our data
End of explanation
"""
def makeMetric(roiDf,blockDf):
    # np.corrcoef stacks both frames row-wise: rows 0-4 are the 5 neurosynth
    # templates, so we keep each block's correlations (rows 5 onwards) with them
    return pd.DataFrame( np.corrcoef(roiDf,blockDf)[5:,:5], index=blockDf.index, columns=roiDf.index )
myCorrDf = makeMetric(roiDf,blockDf)
myCorrDf
plt.figure(figsize=(12,20))
sns.heatmap(myCorrDf,annot=True)
plt.show()
"""
Explanation: Without cross-validation
Neurosynth is independent of our data anyway
End of explanation
"""
def makeCorrPred(myCorrDf):
d = {}
    # because the names of the neurosynth maps and of our conditions
    # do not match, we have to declare here which neurosynth map
    # counts as the correct answer for each condition
roiNameDict = {'face':'gesichter','navigation':'homewalk','sensorimotor':'motorik','dmn':'ruhe','language':'sprache'}
# wir gehen durch jede Zeile
for cond,num in myCorrDf.index:
# wir wählen diese Zeile aus
thisDf = myCorrDf.ix[cond].ix[num]
# wir wählen die Spalte mit dem höhsten Wert aus
winner = thisDf.idxmax()
        # we store an entry with the following info:
        # real: the actual condition (from the row)
        # winner: the column with the highest correlation
        # hit: whether real and winner are identical (can be True or False)
winnerTranslated = roiNameDict[winner]
d[num] = {'real':cond, 'winner':winnerTranslated,'hit':cond==winnerTranslated}
    # we put everything into a nicely formatted table
predDf = pd.DataFrame(d).T
predDf.index = [predDf['real'],predDf.index]
predDf.sort_index(inplace=True)
    # we compute the percentage of cases in which we were correct
percentCorrect = np.mean( [int(x) for x in predDf['hit']] )*100
return predDf,percentCorrect
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
corrPredDf
print "%i%% richtige Vorhersagen!" % corrPcCorrect
corrPredDf['correct'] = [int(x) for x in corrPredDf['hit']]
condPredDf = (corrPredDf.groupby(level=0).mean()*100).T
plt.figure(figsize=(8,4))
sns.barplot(data=condPredDf); plt.ylim(0,100); plt.ylabel('% correct');
plt.axhline(20,color='k',linewidth=1,linestyle='dashed')
plt.show()
"""
Explanation: Decision rule (winner takes all)
End of explanation
"""
|
kmclaugh/fastai_courses | kevin_files/lesson1.ipynb | apache-2.0 | %matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using Jupyter Notebook:
End of explanation
"""
# path = "data/dogscats/"
path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
import utils; reload(utils)  # Python 2 built-in; in Python 3, use importlib.reload
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, in which an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
End of explanation
"""
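One-hot encoding itself is easy to sketch with plain numpy (a toy example, not part of the Vgg16 pipeline; 0 = cat, 1 = dog):

```python
import numpy as np

def one_hot(labels, n_categories):
    # each row gets a single 1 in the position of its category index
    out = np.zeros((len(labels), n_categories))
    out[np.arange(len(labels)), labels] = 1
    return out

labels = [0, 1, 1, 0]  # cat, dog, dog, cat
print(one_hot(labels, 2))
```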
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1] # reverse channel axis rgb->bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
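We can check the preprocessing on a tiny fake "image" whose channels are constant — after vgg_preprocess the per-channel means are subtracted and the channel order is flipped from R,G,B to B,G,R (the 2×2 spatial size is just for illustration):

```python
import numpy as np

vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))

def vgg_preprocess(x):
    x = x - vgg_mean   # subtract per-channel mean
    return x[:, ::-1]  # reverse channel axis rgb->bgr

# batch of one channels-first image with constant R=200, G=150, B=100
img = np.zeros((1, 3, 2, 2))
img[0, 0], img[0, 1], img[0, 2] = 200.0, 150.0, 100.0

out = vgg_preprocess(img)
# channel order is now B, G, R, each with its mean subtracted
print(out[0, :, 0, 0])
```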
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation
"""
|
thehackerwithin/berkeley | code_examples/intropy_sp17/thw-python.ipynb | bsd-3-clause | 2 + 3 # Press <Ctrl-Enter to evaluate a cell>
2 + int(3.5 * 4) * float("8")
9 // 2 # Press <Ctrl-Enter to evaluate>
"""
Explanation: Introduction to Python
1. Installing Python
2. The Language
Expressions
List, Tuple and Dictionary
Strings
Functions
3. Example: Word Frequency Analysis with Python
Reading text files
Getting and using Python packages: wordcloud
Histograms
Exporting data as text files
1. Installing Python:
Easy way : with a Python distribution, anaconda
https://www.continuum.io/downloads
Hard way : compile it yourself from source. It is open-source after all.
[Not covered here; was the main way in early days, before 2011 or even 2014]
Three Python user interfaces
Python Shell python
[yfeng1@waterfall ~]$ python
Python 2.7.12 (default, Sep 29 2016, 13:30:34)
[GCC 6.2.1 20160916 (Red Hat 6.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Jupyter Notebook (in a browser, like this)
IDEs: PyCharm, Spyder, etc.
We use Jupyter Notebook here.
Jupyter Notebook is included in the Anaconda distribution.
2. Python the Language
2.1 Expressions
An expression looks like a math formula
End of explanation
"""
x = 2 + 3
x
"""
Explanation: To use the result of an expression in the future,
we assign an expression to a variable.
Type of a variable in python is usually implied.
(duck-typing -- read more on https://en.wikipedia.org/wiki/Duck_typing)
End of explanation
"""
print(x)
"""
Explanation: The weirdest expression in Python:
End of explanation
"""
MyListOfNumbers = [1,2,3,4,5,6,7]
"""
Explanation: Q: What happens under the hood?
2.2 List, Tuple, Set and Dictionary
A list is a list of expressions.
End of explanation
"""
len(MyListOfNumbers)
"""
Explanation: A list has a length
End of explanation
"""
for num in MyListOfNumbers:
print(num, end=', ')
"""
Explanation: We can loop over items in a list.
End of explanation
"""
MyTupleOfNumbers = (1, 2, 3, 4, 5, 6)
MyTupleOfNumbers = 1, 2, 3, 4, 5, 6
for num in MyTupleOfNumbers:
print(num, end=', ')
"""
Explanation: A tuple is almost a list, defined with () instead of [].
() can sometimes be omitted.
End of explanation
"""
MyListOfNumbers[4] = 99
print(MyListOfNumbers)
MyTupleOfNumbers[4] = 99  # raises TypeError: tuples are immutable
"""
Explanation: But Tuples have a twist.
Items in a tuple are immutable;
items in a list can change.
Let's try it out
End of explanation
"""
MyDictionary = {}
MyDictionary[9] = 81
MyDictionary[3] = 9
print(MyDictionary)
"""
Explanation: Oops.
Tuple object does not support item assignment.
Tuples are immutable.
Dictionary
A dictionary records a mapping from keys to values.
Mathematically a dictionary defines a function on a finite, discrete domain.
End of explanation
"""
for k, v in MyDictionary.items():
print('Key', k, ":", 'Value', v, end=' | ')
"""
Explanation: We may write
MyDictionary : {9, 3} => R.
We can loop over items in a dictionary, as well
End of explanation
"""
"the hacker within", 'the hacker within', r'the hacker within', u'the hacker within', b'the hacker within'
"""
Explanation: 2.? String
We have seen strings a few times.
String literals can be defined with quotation marks, single or double.
End of explanation
"""
name = "the hacker within"
"""
Explanation: Q: Mind the tuple
If we assign a string literal to a variable,
we get a string variable
End of explanation
"""
print(name.upper())
print(name.split())
print(name.upper().split())
"""
Explanation: Python gives us many ways to manipulate a string.
End of explanation
"""
name.find("hack")
name[name.find("hack"):]
"""
Explanation: We can look for substring from a string
End of explanation
"""
foo = "there are %03d numbers" % 3
print(foo)
"""
Explanation: Formatting strings with the traditional printf formats
End of explanation
"""
bname = name.encode()
print(bname)
print(bname.decode())
"""
Explanation: Conversion between bytes and strings
encode : from string to bytes
decode : from bytes to string
The conversion is called 'encoding'. The default encoding on Unix is UTF-8.
Q: What is the default encoding on Windows and OS X?
End of explanation
"""
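The round trip matters most for text beyond ASCII — a quick sketch (assuming Python 3; on Linux and macOS the default encoding is UTF-8, while Windows historically defaults to a legacy code page such as cp1252):

```python
word = "café"
data = word.encode("utf-8")    # str -> bytes
print(data)                    # b'caf\xc3\xa9': the é takes two bytes
print(data.decode("utf-8"))    # bytes -> str, back to 'café'

# decoding with the wrong encoding silently mangles the text
print(data.decode("latin-1"))  # 'cafÃ©'
```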
def square_num(num):
return num*num
print(square_num(9))
print(square_num(3))
"""
Explanation: Encodings are important if you work with text beyond English.
2.? Functions
A function is a more compact representation of mathematical functions.
(still remember dictionaries)
End of explanation
"""
print(MyDictionary[9])
print(MyDictionary[3])
"""
Explanation: Compare this with our dictionary
End of explanation
"""
print(square_num(10))
print(MyDictionary[10])
"""
Explanation: The domain of a function is much bigger than a dictionary.
A dictionary only remembers what we told it;
a function reevaluates its body every time it is called.
End of explanation
"""
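A function can be given a dictionary-style memory with memoization — functools.lru_cache stores results in exactly the key-to-value fashion of MyDictionary, so the body is evaluated only once per input (a small sketch):

```python
from functools import lru_cache

evaluations = []

@lru_cache(maxsize=None)
def square_num(num):
    evaluations.append(num)  # record when the body actually runs
    return num * num

square_num(9)
square_num(9)  # answered from the cache; the body does not run again
square_num(3)
print(evaluations)  # [9, 3]
```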
%%bash
curl -so titles.tsv https://raw.githubusercontent.com/thehackerwithin/berkeley/master/code_examples/spring17_survey/session_titles.tsv
head -5 titles.tsv
"""
Explanation: Oops. We never told MyDictionary about 10.
3. A Word Count Example
In this section we will analyze some textual data with Python.
We first obtain the data, with a bash cell.
End of explanation
"""
text = open('titles.tsv').read()
"""
Explanation: Reading in a text file is very easy in Python.
End of explanation
"""
with open('titles.tsv') as ff:
text = ff.read()
"""
Explanation: Q : There is a subtle problem.
We usually use a different syntax for reading files.
End of explanation
"""
words = text.split()
lines = text.split("\n")
print(words[::10]) # 1 word every 10
print(lines[::10]) # 1 line every 10
"""
Explanation: Let's chop the text off into semantic elements.
End of explanation
"""
import pip
pip.main(['install', "wordcloud"])  # note: pip.main was removed in pip 10+; prefer running "pip install wordcloud" in a shell
"""
Explanation: Looks like we read in the file correctly.
Let's visualize this data.
We use some external help from a package, wordcloud.
So we will first install the package with pip, the Python Package Manager.
End of explanation
"""
from wordcloud import WordCloud
wordcloud = WordCloud(width=800, height=300, prefer_horizontal=1, stopwords=None).generate(text)
wordcloud.to_image()
"""
Explanation: Oops I have already installed wordcloud. You may see a different message.
End of explanation
"""
freq_dict = {}
for word in words:
freq_dict[word] = freq_dict.get(word, 0) + 1
print(freq_dict)
print(freq_dict['Python'])
print(freq_dict['CUDA'])
"""
Explanation: The biggest keyword is Python. Let's get quantitative:
Frequency statistics: How many times does each word occur in the file?
For each word, we need to remember a number (number of occurrences)
Use dictionary.
We will examine all words in the file (split into words).
Use loop.
End of explanation
"""
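For reference, the standard library already implements this counting pattern: collections.Counter builds the same word-to-count dictionary in one call.

```python
from collections import Counter

sample_words = "the quick brown fox jumps over the lazy dog the end".split()
counts = Counter(sample_words)
print(counts["the"])          # 3
print(counts.most_common(1))  # [('the', 3)]
```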
def freq(items):
freq_dict = {}
for word in items:
freq_dict[word] = freq_dict.get(word, 0) + 1
return freq_dict
"""
Explanation: Seems to be working. Let's make a function.
End of explanation
"""
freq_dict = freq(words)
freq_freq = freq(freq_dict.values())
"""
Explanation: The function freq is a mapping between a list and a dictionary,
where each key of the dictionary (output) is associated with the number of occurrences
of the key in the list (input).
End of explanation
"""
print(freq_freq)
"""
Explanation: Q : what is in freq_freq?
End of explanation
"""
top_word = ""
top_word_freq = 0
for word, freq in freq_dict.items():
if freq > top_word_freq:
top_word = word
top_word_freq = freq
print('word', top_word, 'freq', top_word_freq)
"""
Explanation: Q: Which is the most frequent word?
Answer
End of explanation
"""
most = (0, None)
for word, freq in freq_dict.items():
most = max([most, (freq, word)])
print(most)
"""
Explanation: Using the max function avoids writing an if
End of explanation
"""
next(reversed(sorted((freq, word) for word, freq in freq_dict.items())))
"""
Explanation: final challenge: the 1 liner.
End of explanation
"""
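An equivalent one-liner, arguably easier to read, uses max with a key function (shown here on a small hypothetical freq_dict):

```python
freq_dict = {'Python': 9, 'CUDA': 2, 'R': 4}  # hypothetical counts
top_word, top_freq = max(freq_dict.items(), key=lambda kv: kv[1])
print(top_word, top_freq)  # Python 9
```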
def save(filename, freq_dict):
ff = open(filename, 'w')
for word, freq in sorted(freq_dict.items()):
ff.write("%s %s\n" % (word, freq))
ff.close()
def save(filename, freq_dict):
with open(filename, 'w') as ff:
for word, freq in sorted(freq_dict.items()):
ff.write("%s %s\n" % (word, freq))
save("freq_dict_thw.txt", freq_dict)
!cat freq_dict_thw.txt
save("freq_freq_thw.txt", freq_freq)
!cat freq_freq_thw.txt
"""
Explanation: Exporting data
The world of Python has 4 corners.
We need to reach out to other applications.
Export the data from Python.
End of explanation
"""
import pandas as pd
dataframe = pd.read_table("freq_freq_thw.txt", sep=' ', header=None, index_col=0)
dataframe
%matplotlib inline
dataframe.plot(kind='bar')
import pandas as pd
dataframe = pd.read_table("freq_dict_thw.txt", sep=' ', header=None, index_col=0)
dataframe.plot(kind='bar')
"""
Explanation: Reading file in with Pandas
End of explanation
"""
cavaunpeu/willwolf.io-source | content/downloads/notebooks/intercausal_reasoning.ipynb | mit
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
from scipy.optimize import fmin_powell
from scipy.stats import beta as beta_distribution
import seaborn as sns
from sklearn.linear_model import LogisticRegression
%matplotlib inline
plt.style.use('seaborn')
"""
Explanation: Intercausal Reasoning in Bayesian Networks
End of explanation
"""
PRESIDENT_PROBABILITY = .05
ACCIDENT_PROBABILITY = .12
TRAFFIC_PROBABILITY = {(0, 0): .15, (0, 1): .5, (1, 0): .6, (1, 1): .9}
TRIALS = 200
JointInput = namedtuple('JointInput', ['p', 'a'])
"""
Explanation: Abstract
The goal of this work is to perform simple intercausal reasoning on a 3-node Bayesian network.
In this network, both the "president being in town" and a "car accident on the highway" exert influence over whether a traffic jam occurs. With this relationship in mind, we will try to answer two simple questions:
- "What is the probability of an accident having occurred given that a traffic jam occurred?"
- "What is the probability of an accident having occurred given that a traffic jam occurred and the president is in town?"
We are given point estimates for all component probabilities which we can use to answer these questions. However, in the real world, we're just given data. As such, we use the given component probabilities (which we'd never actually know) to simulate this data, then use it to try to answer the questions at hand. This way, we'll build uncertainty into our answers as well.
Problem setup
Define component probabilities
End of explanation
"""
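Since the component probabilities are given, the two questions can also be answered exactly by summing over the joint distribution — a useful sanity check for the simulation-based estimates. A sketch (re-stating the constants defined above):

```python
P_PRES, P_ACC = 0.05, 0.12
P_TRAFFIC = {(0, 0): 0.15, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.9}

def joint(p, a, t):
    # P(President=p, Accident=a, Traffic=t) for the network President -> Traffic <- Accident
    pp = P_PRES if p else 1 - P_PRES
    pa = P_ACC if a else 1 - P_ACC
    pt = P_TRAFFIC[(p, a)] if t else 1 - P_TRAFFIC[(p, a)]
    return pp * pa * pt

# P(Accident = 1 | Traffic = 1)
p_acc_given_t = (sum(joint(p, 1, 1) for p in (0, 1))
                 / sum(joint(p, a, 1) for p in (0, 1) for a in (0, 1)))

# P(Accident = 1 | Traffic = 1, President = 1): the accident is "explained away"
p_acc_given_t_pres = joint(1, 1, 1) / (joint(1, 1, 1) + joint(1, 0, 1))

print(round(p_acc_given_t, 3))       # 0.291
print(round(p_acc_given_t_pres, 3))  # 0.17
```

Conditioning on the president being in town lowers the probability of an accident given traffic — the intercausal "explaining away" effect this notebook studies.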
N_CHAINS = 4
N_SAMPLES = 2000
"""
Explanation: Define PyMC sampling parameters
We'll need these later on.
End of explanation
"""
president = np.random.binomial(n=1, p=PRESIDENT_PROBABILITY, size=TRIALS)
accident = np.random.binomial(n=1, p=ACCIDENT_PROBABILITY, size=TRIALS)
traffic_probabilities = [TRAFFIC_PROBABILITY[JointInput(p=p, a=a)] for p, a in zip(president, accident)]
traffic = np.random.binomial(n=1, p=traffic_probabilities)
print( f'President Mean: {president.mean()}' )
print( f'Accident Mean: {accident.mean()}' )
print( f'Traffic Mean: {traffic.mean()}' )
observed_data = pd.DataFrame({'president': president, 'accident': accident, 'traffic': traffic})
"""
Explanation: Simulate data
End of explanation
"""
times_president_observed = sum(president)
times_president_not_observed = len(president) - times_president_observed
president_probability_samples = np.random.beta(
a=1 + times_president_observed,
b=1 + times_president_not_observed,
size=N_CHAINS*N_SAMPLES
)
times_accident_observed = sum(accident)
times_accident_not_observed = len(accident) - times_accident_observed
accident_probability_samples = np.random.beta(
a=1 + times_accident_observed,
b=1 + times_accident_not_observed,
size=N_CHAINS*N_SAMPLES
)
"""
Explanation: Compute $P(\text{president})$ and $P(\text{accident})$ posteriors
One way to estimate the probability of the president being in town given observational data is to compute the observed proportion, i.e. if the president was seen in 4 of 200 trials, then the estimated probability is .02. Of course, this discards all uncertainty in our estimate (of which we have much). Uncertainty declines as the size of our data tends towards infinity, and 200 trials is not infinite.
Instead, we can express our belief - uncertainty included - in the true probability of observing president, and that of observing accident, as a Beta-distributed posterior. This follows trivially from the Beta-Binomial conjugacy*.
*Accessing this video costs $9 CAD. This said, the author Cam Davidson-Pilon's work on Bayesian statistics is excellent, and well-deserving of the fee. Alternatively, the Wikipedia page on conjugate priors contains some cursory information about Beta-Binomial conjugacy under the "Discrete distributions" table.
End of explanation
"""
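The conjugacy step can be sketched directly: with a uniform Beta(1, 1) prior and k successes out of n Bernoulli trials, the posterior is Beta(1 + k, 1 + n - k). For example, if the president were seen 4 times in 200 trials (hypothetical counts):

```python
from scipy.stats import beta

k, n = 4, 200
posterior = beta(1 + k, 1 + n - k)  # Beta(5, 197)

print(posterior.mean())          # posterior mean (1 + k) / (2 + n) = 5/202
print(posterior.interval(0.94))  # central 94% credible interval
```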
glm = LogisticRegression()
_ = glm.fit(X=observed_data[['president', 'accident']], y=observed_data['traffic'])
precision = 5
print( f'Estimated intercept: {glm.intercept_[0]:.{precision}}' )
print( f'Estimated president coefficient: {glm.coef_[0][0]:.{precision}}' )
print( f'Estimated accident coefficient: {glm.coef_[0][1]:.{precision}}' )
"""
Explanation: Compute $P(\text{Traffic}\ |\ \text{President}, \text{Accident})$ posterior
This might look a little funky for those used to Bayesian parameter estimation for univariate systems - estimating $P(\text{heads})$ given the results of 15 coinflips, for example.
So, how do we do this? Do we filter the data for all unique combinations of (president, accident), i.e. $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$ then estimate in the same fashion as above? That way, we could estimate $P(\text{traffic} = 1\ |\ \text{president} = 0, \text{accident} = 0)$, $P(\text{traffic} = 1\ |\ \text{president} = 1, \text{accident} = 0)$, etc. In fact, this would work. But what if we added more variables? And what if the values were continuous? This would get messy quickly.
In fact, modeling $P(\text{Traffic}\ |\ \text{President}, \text{Accident})$ is none other than logistic regression. Think about it!
Herein, we'll formulate our model as a vanilla Binomial (logistic) regression, taking the form:
$$
\text{traffic} \sim \text{Binomial}(1, p)\
\log\bigg(\frac{p}{1 - p}\bigg) = \alpha + \beta_P P + \beta_A A\
\alpha \sim \text{Normal}(0, 10)\
\beta_P \sim \text{Normal}(0, 10)\
\beta_A \sim \text{Normal}(0, 10)\
$$
$P$ and $A$ represent president and accident respectively. The priors on their respective coefficients are meant to be uniformative - a mere guard against our sampler running off to check values really big or really small.
Finally, as this is one of my first times with PyMC3, I fit a baseline logistic regression model from scikit-learn on the same data just to check that our parameter estimates make sense. Note that scikit-learn's implementation specifies an L2 penalty - whose strength is controlled by an additional parameter $C$ - on the cost by default, which is analagous to placing Gaussian priors on model coefficients in the Bayesian case. I did not take care to ensure that the hyper-parameters ($\mu_P$ and $\sigma_P$, for example) of our Normal priors are comparable with the analagous parameter $C$ in the scikit-learn model.
Baseline logistic regression
End of explanation
"""
with pm.Model() as model:
priors = {
'Intercept': pm.Normal.dist(mu=0, sd=10),
'president': pm.Normal.dist(mu=0, sd=10),
'accident': pm.Normal.dist(mu=0, sd=10)
}
pm.glm.glm('traffic ~ president + accident', observed_data, family=pm.glm.families.Binomial(), priors=priors)
start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False)
step = pm.NUTS(scaling=start_MAP)
trace = pm.sample(N_SAMPLES, step=step, njobs=N_CHAINS, progressbar=True)
warmup = 500
variables = ['Intercept', 'president', 'accident']
pm.traceplot(trace[warmup:], varnames=variables)
"""
Explanation: Bayesian logistic regression
This way, we get uncertainty in our estimates! Uncertainty is important, as we'll see in following plots.
Sometimes, the chains immediately flatline to a single value. I'm not sure if this is a Bayesian modeling thing, or a PyMC thing. I did not take the time to investigate. Should this happen when running this notebook, kindly try again. If stuck, restart the kernel and start again from there. Advice on this matter is appreciated.
End of explanation
"""
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def compute_counterfactual_predictions(president_value, accident_value, trace=trace):
log_odds_p = trace['Intercept'] + trace['president']*president_value + trace['accident']*accident_value
return sigmoid(log_odds_p)
def compute_prediction_interval(predictions, percentile=94):
lower_percentile_bound = (100 - percentile) / 2
upper_percentile_bound = 100 - lower_percentile_bound
return np.percentile(predictions, lower_percentile_bound), np.percentile(predictions, upper_percentile_bound)
input_combinations = [(0, 0), (0, 1), (1, 0), (1, 1)]
counterfactual_predictions = {}
observed_proportions = {}
for p, a in input_combinations:
counterfactual_predictions[JointInput(p=p, a=a)] = compute_counterfactual_predictions(p, a)
observed_proportions[JointInput(p=p, a=a)] = \
observed_data[(observed_data['president'] == p) & (observed_data['accident'] == a)]['traffic'].mean()
"""
Explanation: Compute counterfactual predictions
Our chains look stationary, well-mixing and similar to one another. This signals that our sampler probably did its job. Furthermore, the point estimates computed by the baseline model are contained in our posteriors. As the focus of this work is the overall analysis of the graphical model and not on these specific parameter estimates themselves, I don't dig further.
The next task is to compute counterfactual predictions for $P(\text{Traffic}\ |\ \text{President}, \text{Accident})$, i.e. "given some inputs for $P$ and $A$ in our model, what's the probability of observing traffic?" Remember, our model looks as follows:
$$
\text{traffic} \sim \text{Binomial}(1, p)\\
\log\bigg(\frac{p}{1 - p}\bigg) = \alpha + \beta_P P + \beta_A A
$$
What values do we use for $\alpha, \beta_P, \beta_A$, you ask? Well, we've got a whole bunch of choices in the cell above.
As such, we'll take the trace for each variable - the values returned by the sampler, i.e. the squiggly plot on the right, from which we build the empirical distribution on the left - and plug in some new values for $P$ and $A$. The sampler used 4 chains of 2000 samples each, so we now have 8000 tuples of $(\alpha, \beta_P, \beta_A)$. Our new values for $P$ and $A$ can be whatever we want. (In fact, they don't even have to have been observed in our data to begin with!)
First, we'll take the tuple $(P = 0, A = 0)$. We'll then plug this into our regression equation for each of the 8000 tuples of $(\alpha, \beta_P, \beta_A)$. Solving for $p$, this will give us 8000 values for $P(\text{Traffic} = 1\ |\ \text{President} = 0, \text{Accident} = 0)$. We then repeat for $(P = 0, A = 1), (P = 1, A = 0)$ and $(P = 1, A = 1)$.
End of explanation
"""
plt.figure(figsize=(18, 12))
for subplot_idx, (p, a) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
predictions = counterfactual_predictions[JointInput(p=p, a=a)]
observed_proportion = observed_proportions[JointInput(p=p, a=a)]
expected_proportion = TRAFFIC_PROBABILITY[JointInput(p=p, a=a)]
subplot = plt.subplot(221 + subplot_idx)
plt.setp(subplot.get_yticklabels(), visible=False)
plt.hist(predictions, edgecolor='white', linewidth=1, bins=30, alpha=.7)
plt.axvline(observed_proportion, color='green', linestyle='--', label='Observed Proportion')
plt.axvline(expected_proportion, color='#A60628', linestyle='--', label='Expected Proportion')
title = f'Posterior Distribution of \nP(Traffic = 1 | President = {p}, Accident = {a})'
plt.title(title)
plt.legend()
"""
Explanation: Plot $P(\text{Traffic}\ |\ \text{President}, \text{Accident})$ posteriors
Onto each posterior we also plot:
- The observed traffic proportion, i.e. observed_data['traffic'].mean() for the respective set of inputs $(P, A)$.
- The expected proportion, i.e. the original probability of observing traffic given the varying combinations of $(P, A)$. This is what we are trying to recover from our data. In the real world we'd never have this number, but when working with simulated data we can use it as a way to confirm that our model is reasonable.
Finally, we note the uncertainty we're able to express in our estimates. If we instead used the point (i.e. single value) parameter estimates from the scikit-learn model above, we'd be looking at a single value in each plot instead of a distribution.
End of explanation
"""
x = [1, 2, 3, 4]
y_observed = list(observed_proportions.values())
labels = [(key.p, key.a) for key in observed_proportions.keys()]
y_predicted_mean = [counterfactual_predictions[joint_input].mean() for joint_input in counterfactual_predictions]
y_predicted_PI = [compute_prediction_interval(counterfactual_predictions[joint_input]) for joint_input in counterfactual_predictions]
y_predicted_PI_lower_bound, y_predicted_PI_upper_bound = zip(*y_predicted_PI)
plt.figure(figsize=(12, 8))
plt.plot(x, y_observed, linewidth=2, label='Observed Proportion')
plt.xticks(x, labels)
plt.plot(x, y_predicted_mean, linewidth=2, label='Predicted Proportion')
plt.fill_between(x, y_predicted_PI_lower_bound, y_predicted_PI_upper_bound, alpha=.1, color='green')
plt.title('Counterfactual Predictions of P(Traffic = 1)')
plt.legend()
"""
Explanation: Plot mean and prediction interval for predicted $P(\text{traffic})$
For each unique combination of $(P, A)$ we have 8000 guesses at what $P(\text{Traffic}\ |\ \text{President}, \text{Accident})$ is implied. For each set, we compute a mean and a 94% interval then plot against the observed proportion to see how we did.
Given that our data were generated randomly, it is possible that we don't observe any positive occurrences of traffic given varying values of president and accident - especially at $(T = 1\ |\ P = 1, A = 1)$. As such, there would be no data point in the following plot for the corresponding value of $x$.
We can see that our model given the data is most sure about the probability of observing traffic given neither the president nor an accident. It is least sure about what happens with the former and without the latter. Finally, if the blue and green curves appear identical, do inspect the y_observed and y_predicted_mean objects to see what the true values are. (In most cases, they will not be identical even if they visually appear to be.)
End of explanation
"""
def compute_P_accident_1_given_traffic_1(president_probability, accident_probability, traffic_probability):
P_traffic_1_accident_1 = \
(1 - president_probability)*(accident_probability)*traffic_probability[JointInput(p=0, a=1)] +\
(president_probability)*(accident_probability)*traffic_probability[JointInput(p=1, a=1)]
P_traffic_1 = \
(1 - accident_probability)*(1 - president_probability)*traffic_probability[JointInput(p=0, a=0)] +\
(1 - accident_probability)*(president_probability)*traffic_probability[JointInput(p=1, a=0)] +\
(accident_probability)*(1 - president_probability)*traffic_probability[JointInput(p=0, a=1)] +\
(accident_probability)*(president_probability)*traffic_probability[JointInput(p=1, a=1)]
P_accident_1_given_traffic_1 = P_traffic_1_accident_1 / P_traffic_1
return P_accident_1_given_traffic_1
P_accident_1_given_traffic_1_point_estimate = \
compute_P_accident_1_given_traffic_1(PRESIDENT_PROBABILITY, ACCIDENT_PROBABILITY, TRAFFIC_PROBABILITY)
P_accident_1_given_traffic_1_with_uncertainty = \
compute_P_accident_1_given_traffic_1(president_probability_samples, accident_probability_samples, counterfactual_predictions)
color = sns.color_palette()[-1]
plt.figure(figsize=(12, 8))
plt.hist(P_accident_1_given_traffic_1_with_uncertainty, linewidth=1, bins=25, color=color, edgecolor='white', alpha=.7)
plt.title('Posterior Distribution of P(Accident = 1 | Traffic = 1)', fontsize=15)
plt.axvline(P_accident_1_given_traffic_1_point_estimate, color='green', linestyle='--', label='Point Estimate')
plt.xlabel('Probability', fontsize=14)
plt.ylabel('Count', fontsize=14)
plt.legend()
"""
Explanation: Returning to our original questions
To answer our original questions - $P(\text{Accident = 1}\ |\ \text{Traffic = 1})$ and $P(\text{Accident = 1}\ |\ \text{Traffic = 1}, \text{President = 1})$ - we invoke basic axioms of Bayes' Theorem, conditional probability and the factorization of probabilistic graphical models. Here are the key pieces that we'll need:
$$
P(P, A, T) = P(P)P(A)P(T\ |\ P, A)
$$
$$
P(X, Y) = P(X\ |\ Y)P(Y)
$$
$$
P(X\ |\ Y) = \frac{P(Y\ |\ X)P(X)}{P(Y)}
$$
Starting from the beginning, our goal is to reduce each question down to an algebraic expression of the distributions we've estimated from our simulated data. From there, we simply plug and play.
In the distributions that follow, we're able to express the uncertainty in our estimates of the true answer to each question. This allows us to:
- Contextualize our estimate. Therein, we can make statements like: "There are a ton of plausible values of this probability. If you're going to use it to make a decision or communicate it to others, please do so with caution and don't bet the farm." Conversely, if the following posterior were to be very narrow, we could say things like: "There is a very narrow range of values that satisfy the model given the data. You can be very sure about this probability estimate - perhaps even share it with the CEO!"
- Make intelligent choices. If we're going to make bets on traffic, i.e. decide how much money to invest in new urban infrastructure given an estimate of $P(\text{Traffic}\ |\ \text{President})$, different values of $P(\text{Traffic}\ |\ \text{President})$ will suggest different decisions, and each decision will carry a different cost. For example, concurring that the President's appearance causes traffic 80% of the time might lead us to building a new highway - expensive - while concurring 5% might lead us to simply open the POV lane for the duration of his stay. So, given our estimate (a distribution) of $P(\text{Traffic}\ |\ \text{President})$, how do we know how much money we should budget for the project? This is a motivating example for Bayesian decision theory which, using the entire posterior, allows us to responsibly answer this question. In fact, it's a lot easier than it sounds. My favorite resource on the topic is Rasmus Bååth's "Probable Points and Credible Intervals, Part 2: Decision Theory".
Finally, we plot the point answer to each question computed from the original probabilities defined at the beginning of this notebook. This gives us an idea of how well we were able to recover the initial generative process given the simulated data.
$P(\text{Accident = 1}\ |\ \text{Traffic = 1})$
$\begin{align}
P(A = 1\ |\ T = 1)
&= \frac{P(T = 1\ |\ A = 1)P(A = 1)}{P(T = 1)}\\
&= \frac{P(T = 1, A = 1)P(A = 1)}{P(A = 1)P(T = 1)}\\
&= \frac{P(T = 1, A = 1)}{P(T = 1)}
\end{align}$
<br>
$\begin{align}
P(T = 1, A = 1)
&= \sum_{P}P(P, A = 1, T = 1)\\
&= P(P = 0, A = 1, T = 1) + P(P = 1, A = 1, T = 1)\\
&= P(P = 0)P(A = 1)P(T = 1 \ |\ P = 0, A = 1) + P(P = 1)P(A = 1)P(T = 1 \ |\ P = 1, A = 1)
\end{align}$
<br>
$\begin{align}
P(T = 1)
&= \sum_{A, P}P(A, P, T = 1)\\
&= \sum_{A, P}P(A)P(P)P(T = 1\ |\ A, P)\\
&=
P(A = 0)P(P = 0)P(T = 1\ |\ A = 0, P = 0) + P(A = 0)P(P = 1)P(T = 1\ |\ A = 0, P = 1) + P(A = 1)P(P = 0)P(T = 1\ |\ A = 1, P = 0) + P(A = 1)P(P = 1)P(T = 1\ |\ A = 1, P = 1)
\end{align}$
End of explanation
"""
def compute_P_accident_1_given_traffic_1_president_1(president_probability, accident_probability, traffic_probability):
P_accident_1_traffic_1_president_1 = \
traffic_probability[JointInput(p=1, a=1)]*accident_probability*president_probability
P_traffic_1_president_1 = \
traffic_probability[JointInput(p=1, a=0)]*president_probability*(1 - accident_probability) +\
traffic_probability[JointInput(p=1, a=1)]*president_probability*accident_probability
P_accident_1_given_traffic_1_president_1 = P_accident_1_traffic_1_president_1 / P_traffic_1_president_1
return P_accident_1_given_traffic_1_president_1
P_accident_1_given_traffic_1_president_1_point_estimate = \
compute_P_accident_1_given_traffic_1_president_1(PRESIDENT_PROBABILITY, ACCIDENT_PROBABILITY, TRAFFIC_PROBABILITY)
P_accident_1_given_traffic_1_president_1_with_uncertainty = \
compute_P_accident_1_given_traffic_1_president_1(president_probability_samples, accident_probability_samples, counterfactual_predictions)
plt.figure(figsize=(12, 8))
plt.hist(P_accident_1_given_traffic_1_president_1_with_uncertainty,
bins=25, color=color, linewidth=1, edgecolor='white', alpha=.7)
plt.title('Posterior Distribution of P(Accident = 1 | Traffic = 1, President = 1)', fontsize=15)
plt.axvline(P_accident_1_given_traffic_1_president_1_point_estimate, color='green', linestyle='--', label='Point Estimate')
plt.xlabel('Probability', fontsize=14)
plt.ylabel('Count', fontsize=14)
plt.legend()
"""
Explanation: $P(\text{Accident = 1}\ |\ \text{Traffic = 1}, \text{President = 1})$
$\begin{align}
P(A = 1\ |\ T = 1, P = 1)
&= \frac{P(A = 1, T = 1, P = 1)}{P(T = 1, P = 1)}
\end{align}$
<br>
$\begin{align}
P(A = 1, T = 1, P = 1)
&= P(T = 1\ |\ A = 1, P = 1)P(A = 1)P(P = 1)
\end{align}$
<br>
$\begin{align}
P(T = 1, P = 1)
&= \sum_{A}P(A, P = 1, T = 1)\\
&= P(A = 0, P = 1, T = 1) + P(A = 1, P = 1, T = 1)\\
&= P(T = 1\ |\ A = 0, P = 1)P(P = 1)P(A = 0) + P(T = 1\ |\ A = 1, P = 1)P(P = 1)P(A = 1)
\end{align}$
End of explanation
"""
N = 99
x = np.linspace(0, 1, 1002)[1:-1]
accident_trials = np.random.binomial(n=N, p=ACCIDENT_PROBABILITY, size=TRIALS)
plt.figure(figsize=(12, 8))
plt.xlim(0, .4)
def compute_beta_densities(yes_count, x=x):
no_count = N - yes_count
alpha = 1 + yes_count
beta = 1 + no_count
distribution = beta_distribution(alpha, beta)
return distribution.pdf(x)
for yes_count in accident_trials:
y = compute_beta_densities(yes_count)
plt.plot(x, y, color='black', linewidth=.5, alpha=.02)
median_yes_count = np.median(accident_trials)
y = compute_beta_densities(median_yes_count)
plt.plot(x, y, color='black', linewidth=1.5, alpha=1)
plt.title('Plausible Beta Posteriors of P(Accident = 1)', fontsize=15)
plt.xlabel('Probability', fontsize=14)
plt.ylabel('Density', fontsize=14)
"""
Explanation: But didn't this all depend on how the dice were rolled?
To conclude, I'd like to give some basic context on the stochasticity of the generative process itself. We said above that the probability of observing an accident is ACCIDENT_PROBABILITY = .12. We then simulated 200 trials and observed a discrete count. While the expectation of this count is $200 * .12 = 24$ accidents, we could in reality have observed 23, or 31, or even (implausibly) 0. Next, we used these data to infer a posterior distribution for the true value of ACCIDENT_PROBABILITY, and later plugged it into the expressions corresponding to the original questions at hand. How much did the stochasticity of these trials impact the final result?
The following data cover 199 more trials. In each, we "flip the coin" 99 times, instead of the 1 time used above, and plot the inferred posterior distribution for the true accident probability. The distribution given the median observed count is overlaid in bold.
As is clear, the expected values of these posteriors can vary wildly! In fact, the left tails of some barely touch the right tails of others. By all accounts, this will absolutely affect our final estimates. Is there anything we can do about it?
As yet, I see two options:
- Give an informative prior. The canonical Beta-Binomial formulation we've used implies a flat prior - $\text{Beta}(1, 1)$. A more informative prior would come from domain-level knowledge about how often traffic accidents occur.
- Build a better model. Perhaps there are other variables that directly contribute to accidents, like weather, time of day, etc. If such data were available, we should use them to try to explain the variance in our outcome, accident. This said, it is a point for a future analysis: those variables do not appear in our original graph.
Finally, here's a fitting quote that makes me smile:
“And with real data, these models will never accurately recover the data-generating mechanism. At best, they describe it in a scientifically useful way.” - Richard McElreath
End of explanation
"""
|
amuniversity/am-mooc | module 2/Ex 2b - Housing ,Linear Regression - Open.ipynb | gpl-2.0 | # import libraries
import matplotlib
import IPython
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import pylab
import seaborn as sns
import sklearn as sk
%matplotlib inline
"""
Explanation: Linear Regression , Housing
Prerequisites: Hopefully you have a good understanding of how linear regression works. It's a good idea to review the slides for this module before you start this exercise. Also take a look at the notebook corresponding to exercise 2a.
We'll now learn to do linear regression. In IPython, we use the Sci-Kit Learn library, which implements many machine learning algorithms. This includes linear regression as well as more advanced techniques like SVMs, neural networks, etc.
In this exercise, we will first make a simple model using just one predictor and then you will build and refine linear models using the techniques discussed in the slides. If you're stuck, several of the included links will help!
End of explanation
"""
## Read the housing data! This time its not comma separated but space separated. Read up on how you can use Pandas
## to read in space separated files into a data frame
housing = """Read a .txt file, pay attention to how the data is separated"""
# See if the import worked, print the first 5 lines using some in-built function. Where's your head at?
housing.head()
# Check if these of the variables are correlated using the visualization techniques built up in module 1!
# LSTAT and MEDV should be related: the higher the proportion of lower-status residents, the lower the price.
# Lets confirm our intuition
# Scatterplot between LSTAT and MEDV
sns.jointplot(housing.LSTAT,housing.MEDV,kind="reg")
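# As a hint for the space-separated read above: pandas can split on arbitrary
# whitespace via sep=r'\s+'. The data below is a made-up stand-in for housing.txt.

```python
import io

import pandas as pd

# A stand-in for housing.txt: whitespace-separated columns with made-up values
raw = """CRIM LSTAT MEDV
0.006 4.98 24.0
0.027 9.14 21.6
0.027 4.03 34.7
"""

# sep=r'\s+' tells pandas to split on any run of whitespace
housing = pd.read_csv(io.StringIO(raw), sep=r'\s+')
print(housing.shape, list(housing.columns))
```

Swap the `StringIO` object for the real file path and the same call reads housing.txt directly.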
"""
Explanation: Sci-Kit Learn, the machine learning library
If you noticed in the includes codes at the start of the file, this time we included a new library - sklearn. sklearn is short for Sci-Kit Learn, a machine learning library for Python.
The structure of scikit-learn:
Some of the following text is taken from the scikit-learn API paper: http://arxiv.org/pdf/1309.0238v1.pdf
All objects within scikit-learn share a uniform common basic API consisting of
three complementary interfaces: an estimator interface for building and fitting
models, a predictor interface for making predictions and a transformer interface
for converting data.
The estimator interface is at the core of the library. It defines instantiation
mechanisms of objects and exposes a fit method for learning a model from
training data. All supervised and unsupervised learning algorithms (e.g., for
classification, regression or clustering) are offered as objects implementing this
interface. Machine learning tasks like feature extraction, feature selection or
dimensionality reduction are also provided as estimators.
An example along these lines:
linear_model = LinearRegression()
linear_model.fit(LSTAT, MEDV)
If one changes methods, say, to a Logistic regression, one would simply replace LinearRegression() in the snippet above by LogisticRegression().
The predictor interface extends the notion of an estimator by adding a predict
method that takes an array X_test and produces predictions for X_test, based on
the learned parameters of the estimator. In the case of
supervised learning estimators, this method typically returns the predicted labels or values computed by the model. Some unsupervised learning estimators may also implement the predict interface, such as k-means, where the predicted values are the cluster labels.
clf.predict(X_test)
See sklearn in action
Go back to notebook Ex2a: Supervised Learning and scroll down to where we built the models. Remember how we told you to ignore the code? It's time to understand it now. Understand how we built a simple linear model and how we used it to make predictions. Now it's time to apply that knowledge!
Prompt: Build a predictor for the median house value in towns around Boston! Load up the data given in housing.names and housing.txt and build a linear model trying to predict MEDV using all the other features of the data!
Help: You can see the code for the linear model here and an example of linear regression here.
End of explanation
"""
# Define predictor and response
X = housing[['LSTAT']]
Y = housing.MEDV
# Load up the linear model and fit it.
from sklearn.linear_model import LinearRegression
lin_mod = LinearRegression()
lin_mod.fit(X,Y)
y_p = lin_mod.predict(X)
# Plot the results.
plt.scatter(X,Y,c='r')
plt.plot(X,y_p,c='y')
## Now start making your own regression!
# Remember the potential pitfalls we discussed.
# Correlation - check the correlation of each variable with the other. Heres a correlation-map to get you
# started on what predictors should be used and which ones are highly correlated and may pose a problem!
corr = housing.corr()
sns.heatmap(corr)
plt.savefig("correl.png")
# Start building your model here!
# First you'll need to separate out the predictors and response
X = """ predictors 1 through 13 """
Y = """response = MEDV"""
# You can reuse the lin_mod object to continue fitting to different data!
"""
Explanation: It seems that the predictor LSTAT is correlated with our response and will be a good base model. Lets try to build a simple linear model using just one predictor and response, (sklearn works the same way for more predictors, you just have to put them in one dataframe).
End of explanation
"""
|
dlab-berkeley/python-taq | tests/Basic HDF5 Operations.ipynb | bsd-2-clause | # But what symbol is that?
max_sym = None
max_rows = 0
for sym, rows in rec_counts.items():
if rows > max_rows:
max_rows = rows
max_sym = sym
max_sym, max_rows
"""
Explanation: Anyway, under a gigabyte. So, nothing to worry about even if we have 24 cores.
End of explanation
"""
# Most symbols also have way less rows - note this is log xvals
plt.hist(list(rec_counts.values()), bins=50, log=True)
plt.show()
"""
Explanation: Interesting... the S&P 500 ETF
End of explanation
"""
spy = taq_tb.get_node(max_sym)
# PyTables is record oriented...
%timeit np.mean(list(x['Bid_Price'] for x in spy.iterrows()))
# But this is faster...
%timeit np.mean(spy[:]['Bid_Price'])
np.mean(spy[:]['Bid_Price'])
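# The record-versus-column contrast above can be reproduced without an HDF5
# file at all, using a plain numpy structured array as a stand-in for a
# PyTables table.

```python
import numpy as np

# Structured array standing in for a PyTables table of quotes
quotes = np.zeros(1000, dtype=[('Bid_Price', 'f8'), ('Ask_Price', 'f8')])
quotes['Bid_Price'] = np.linspace(100.0, 101.0, 1000)

# Record-oriented: touch every row, one Python object at a time
row_mean = np.mean([row['Bid_Price'] for row in quotes])

# Column-oriented: one contiguous read, then a vectorized reduction
col_mean = quotes['Bid_Price'].mean()

print(round(row_mean, 6), round(col_mean, 6))
```

Both paths compute the same mean; the column slice is faster because it avoids creating a Python object per row, which is the same effect the `%timeit` comparison above shows for PyTables.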
"""
Explanation: Doing some compute
We'll use a "big" table to get some sense of timings
End of explanation
"""
spy_bp = spy.cols.Bid_Price
# this works...
np.mean(spy_bp)
# But it can't use numexpr
expr = tb.Expr('sum(spy_bp)')
# You can use numexpr to get the values of the column... but that's silly
# (sum doesn't work right, and the axis argument is non-functional)
%timeit result = expr.eval().mean()
tb.Expr('spy_bp').eval().mean()
"""
Explanation: Using numexpr?
numexpr is currently not set up to do reductions via HDF5. I've opened an issue here:
https://github.com/PyTables/PyTables/issues/548
End of explanation
"""
taq_tb.close()
%%time
spy_h5py = h5py.File(fname)[max_sym]
np.mean(spy_h5py['Bid_Price'])
"""
Explanation: h5py
End of explanation
"""
%%timeit
np.mean(spy_h5py['Bid_Price'])
"""
Explanation: h5py may be a touch faster than pytables for this kind of usage. But why does pandas use pytables?
End of explanation
"""
taq_tb.close()
"""
Explanation: Dask
It seems that there should be no need to, e.g., use h5py, but dask's read_hdf doesn't seem to be working nicely...
End of explanation
"""
store = pd.HDFStore(fname)
store = pd.HDFStore('../test-data/')
# this is a fine way to iterate over our datasets (in addition to what's available in PyTables and h5py)
it = store.items()
key, tab = next(it)
tab
# The columns argument doesn't seem to work...
store.select(max_sym, columns=['Bid_Price']).head()
# columns also doesn't work here...
pd.read_hdf(fname, max_sym, columns=['Bid_Price']).head()
# So we use h5py (actually, pytables appears faster...)
spy_dask = dd.from_array(spy_h5py)
mean_job = spy_dask['Bid_Price'].mean()
mean_job.compute()
# This is appreciably slower than directly computing the mean w/ numpy
%timeit mean_job.compute()
"""
Explanation: spy_h5py = h5py.File(fname)[max_sym]
End of explanation
"""
class DDFs:
# A (key, table) list
datasets = []
dbag = None
def __init__(self, h5fname):
h5in = h5py.File(h5fname)
h5in.visititems(self.collect_dataset)
def collect_dataset(self, key, table):
if isinstance(table, h5py.Dataset):
self.datasets.append(dd.from_array(table)['Bid_Price'].mean())
def compute_mean(self):
# This is still very slow!
self.results = {key: result for key, result in dd.compute(*self.datasets)}
%%time
ddfs = DDFs(fname)
ddfs.datasets[:5]
len(ddfs.datasets)
dd.compute?
%%time
results = dd.compute(*ddfs.datasets[:20])
import dask.multiprocessing
%%time
# This crashes out throwing lots of KeyErrors
results = dd.compute(*ddfs.datasets[:20], get=dask.multiprocessing.get)
results[0]
"""
Explanation: Dask for an actual distributed task (but only on one file for now)
End of explanation
"""
from dask import delayed
@delayed
def mean_column(key, data, column='Bid_Price'):
return key, blaze.data(data)[column].mean()
class DDFs:
# A (key, table) list
datasets = []
def __init__(self, h5fname):
h5in = h5py.File(h5fname)
h5in.visititems(self.collect_dataset)
def collect_dataset(self, key, table):
if isinstance(table, h5py.Dataset):
self.datasets.append(mean_column(key, table))
def compute_mean(self, limit=None):
# Note that a limit of None includes all values
self.results = {key: result for key, result in dd.compute(*self.datasets[:limit])}
%%time
ddfs = DDFs(fname)
%%time
ddfs.compute_mean()
next(iter(ddfs.results.items()))
# You can also compute individual results as needed
ddfs.datasets[0].compute()
"""
Explanation: This ends up being a little faster than just using blaze (see below), but about half the time is spent setting things up in Dask.
End of explanation
"""
spy_blaze = blaze.data(spy_h5py)
%time
spy_blaze['Ask_Price'].mean()
taq_tb = tb.open_file(fname)
spy_tb = taq_tb.get_node(max_sym)
spy_blaze = blaze.data(spy_tb)
%time spy_blaze['Bid_Price'].mean()
taq_tb.close()
"""
Explanation: Blaze?
Holy crap!
End of explanation
"""
%%time
blaze_h5_file = blaze.data(fname)
# This is rather nice
blaze_h5_file.SPY.no_suffix.Bid_Price.mean()
blaze_h5_file.ZFKOJB.no_suffix.Bid_Price.mean()
"""
Explanation: Read directly with Blaze
Somehow this is not as impressive
End of explanation
"""
taq_h5py = h5py.File(fname)
class SymStats:
means = {}
def compute_stats(self, key, table):
if isinstance(table, h5py.Dataset):
self.means[key] = blaze.data(table)['Bid_Price'].mean()
ss = SymStats()
%time taq_h5py.visititems(ss.compute_stats)
means = iter(ss.means.items())
next(means)
ss.means['SPY/no_suffix']
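# The visititems pattern used above — a callback invoked for every node in the
# hierarchy — can be sketched over a plain nested dict (a toy stand-in for the
# HDF5 file).

```python
def visititems(group, callback, prefix=''):
    """Walk a nested dict depth-first, calling callback(key, value) per leaf."""
    for name, node in group.items():
        key = f'{prefix}/{name}' if prefix else name
        if isinstance(node, dict):
            visititems(node, callback, key)
        else:
            callback(key, node)

# Toy hierarchy standing in for the HDF5 file's symbol groups
h5like = {'SPY': {'no_suffix': [100.0, 101.0]}, 'AAPL': {'no_suffix': [50.0]}}

means = {}
visititems(h5like, lambda key, rows: means.__setitem__(key, sum(rows) / len(rows)))
print(means)
```

The accumulator-on-an-object trick used by `SymStats` above is just a stateful version of this callback: each visit writes one entry into a shared dict keyed by the node's path.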
"""
Explanation: Do some actual compute with Blaze
End of explanation
"""
taq_tb = tb.open_file(fname)
taq_tb.close()
pd.read_hdf?
pd.read_hdf(fname, max_sym, start=0, stop=1, chunksize=1)
max_sym
fname
%%timeit
node = taq_tb.get_node(max_sym)
pd.DataFrame.from_records(node[0:1])
%%timeit
# I've also tried this with `.get_node()`, same speed
pd.DataFrame.from_records(taq_tb.root.IXQAJE.no_suffix)
%%timeit
pd.read_hdf(fname, max_sym)
# Pandas has optimizations it likes to do with
%timeit spy_df = pd.read_hdf(fname, max_sym)
# Actually do it
spy_df = pd.read_hdf(fname, max_sym)
# This is fast, but loading is slow...
%timeit spy_df.Bid_Price.mean()
"""
Explanation: Pandas?
To load with Pandas, you need to close the pytables session
End of explanation
"""
|
viper-framework/har2tree | tutorial/tutorial.ipynb | bsd-3-clause | from pathlib import Path
Path.home()
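# Paths compose with the / operator, so a relative location can be built piece
# by piece. A small sketch (the segments below just mirror the repository
# layout used later in this tutorial):

```python
from pathlib import Path

# The '/' operator joins path segments in an OS-independent way
har_path = Path('tests') / 'capture_samples' / 'http_redirect' / '0.har'

print(har_path.name, har_path.suffix, har_path.parent.name)
```

This is why the HAR path further down can be written as a chain of segments instead of a hard-coded string with OS-specific separators.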
"""
Explanation: Har2Tree Tutorial
Crawling a web page can sound like a bit of an abstract concept at first. How exactly can we extract data from a web page? What data is really interesting to look at? Where can it be found?
→ Every web browser generates a HAR file (short for HTTP Archive) when loading a web page. This file mostly contains information about the resources loaded by the browser, as it was originally designed to identify performance issues. However, since the whole file is in a standard JSON format, we can reverse engineer the process to extract useful information and build a whole tree out of all the resources found in that HAR file. This step is particularly important, as it is really complicated to understand what is going on by simply looking at the HAR file. Example here!
This notebook will guide you through the core features that Har2Tree offers.
It is also important to note that Har2Tree is an API based on the TreeNode class of the ETE3 Toolkit, and a lot of help can be found in that documentation in case you want to know a bit more about how the program works.
Before we do anything: Setup
1. Prerequisites
For the following tutorial, we assume you have the following environment at your disposal.
Ubuntu 20.04 or more recent. You can also work with WSL 2
Python 3.8 or 3.9
2. Installing har2tree
If you are here it means that you already cloned the har2tree repository: you should be all set up already!
In case you got here another way, simply clone the repository in your desired folder:
bash
git clone https://github.com/Lookyloo/har2tree.git
You may also want to initialize a submodule containing a few sample captures:
bash
git submodule update --init --recursive
(Optional) 3. Retrieving useful files
At this point, you could use a pre-existing capture made for the tests of har2tree. They are located in tests / capture_samples.
However, you might want to take a look at how the files are downloaded to have a better understanding of the program and eventually use it on some pages of your choice.
<br/>
<br/>
Important note: Because Har2Tree was made for Lookyloo, it may require some additional files, located in the same folder as the HAR file, to be fully operational. To ensure that the program works completely, we will simulate a capture using the public Lookyloo instance rather than download the HAR file the conventional way (in Chrome: Ctrl + Shift + J > Network > F5 (reload the page) > downward-facing arrow).
By simply adding /export at the end of the url when browsing on a capture, we can download all the files generated by Lookyloo. This includes the complete html capture of the page along with various other files that we will get into later on.
Capture link: → https://lookyloo.circl.lu/tree/b6b29698-4c97-4a21-adaa-f934e5bfb042
Download link: → https://lookyloo.circl.lu/tree/b6b29698-4c97-4a21-adaa-f934e5bfb042/export
You can then unzip the archive into the folder of your choice, and your HAR folder is ready!
Tip: unzip the folder in the same directory as this notebook, it will be easier for later.
Getting started
The place where the magic of the API begins is the CrawledTree object: it takes a list of HAR file paths and a uuid as parameters. <br/> To keep things simple for now, we will only be using one HAR file per tree.
To build OS paths in Python, we are going to use the Path class from pathlib.
Note that the keyword __file__ doesn't work in Jupyter.
Let's see how we can tell the program to display our home directory:
End of explanation
"""
import uuid
uuid.uuid4()
"""
Explanation: Great. Now let's try to create our first tree. As mentioned before, you will also need to pass a uuid as a parameter, but don't worry, python has everything you need:
End of explanation
"""
from har2tree import CrawledTree
har_path = Path() / '..' / 'tests' / 'capture_samples' / 'http_redirect' / '0.har'
my_first_crawled_tree = CrawledTree([har_path], str(uuid.uuid4()))
"""
Explanation: A few notes though:
- CrawledTree takes a string as a parameter, not a UUID object, so we just have to turn it into a string
- it takes a list of HAR paths, even if there is only one path, as mentioned before
You might want to change the HAR path to what you downloaded in part 3 of the setup.
Enough talking:
Creating the tree
End of explanation
"""
my_first_crawled_tree.root_url
"""
Explanation: Part 1: Extracting simple data
If you didn't get any error, everything worked! Let's now see what we can do with that CrawledTree.
You can find all the properties in the parser.py file.
First, let's see what website you got the capture from:
End of explanation
"""
print(my_first_crawled_tree.start_time)
print(my_first_crawled_tree.user_agent)
"""
Explanation: Why not also check at what time the capture was made, as well as the user agent that made it:
End of explanation
"""
print(my_first_crawled_tree.redirects)
"""
Explanation: Finally, what really interests us: let's see if there are any redirects on the page.
End of explanation
"""
print(my_first_crawled_tree.root_hartree.start_time)
print(my_first_crawled_tree.root_hartree.start_time == my_first_crawled_tree.start_time)
"""
Explanation: And that's it for the first part. With very few lines of code, we are able to extract very useful information in negligible execution time. This makes it so much easier than having to go through the HAR file to find what you're looking for.
In the second part, we'll dig into the more complex features.
Part 2: the second level
In this part we will look into the root_hartree property of CrawledTree, which is nothing other than a Har2Tree object inside CrawledTree. You can see that it is initialized here.
Har2Tree's goal is to build a tree out of the different contents of a given HAR file. In doing so, a lot of subsequent methods get invoked, particularly in the nodes.py file. We will cover them later on.
The few properties we saw before are there to simplify access to that sub-level tree. They are called this way: CrawledTree.root_hartree.<property>
Har2Tree properties
Let's start with something simple and display the start time to check if we get the same result as before:
End of explanation
"""
my_first_crawled_tree.root_hartree.stats
"""
Explanation: The stats property calls multiple other useful properties and displays them nicely in a JSON format. You can find what it calls here and trace it back to the other properties in case you want to know something specific that is not covered here.
End of explanation
"""
print(my_first_crawled_tree.root_hartree.total_load_time)
print(my_first_crawled_tree.root_hartree.total_size_responses)
"""
Explanation: You can get a pretty good idea of the time taken to build a tree by calling the total_load_time property. It's not 100% precise, as some loads are made in parallel, but it gives a good approximation. Along with it, you can call total_size_responses, which gives you the size in bytes of the response bodies:
End of explanation
"""
har_path = Path() / '..' / 'the_path_of_your_capture'  # replace with the path of your capture
complex_crawled_tree = CrawledTree([har_path], str(uuid.uuid4()))
print(complex_crawled_tree.root_hartree.total_load_time)
"""
Explanation: You can see that with this simple capture, it doesn't take a lot of time. What about a more complex one?
Task: export the HAR data of a website of your choice and make a tree out of it. Then check the value of total_load_time.
End of explanation
"""
my_first_crawled_tree.root_hartree.root_after_redirect
"""
Explanation: A very interesting property to look at is root_after_redirect: it returns a URL in case there is at least one redirect on the capture.
The returned URL is the URL that you'll end up with after following all redirects of the page.
End of explanation
"""
har_properties = my_first_crawled_tree.root_hartree.har # For readability
print('uuid: ' + har_properties.capture_uuid)
print('Path: ' + str(har_properties.path) + '\n') # What we defined before
print('Initial redirects: ' + str(har_properties.has_initial_redirects)) # Our example from before
print('Final redirect: ' + har_properties.final_redirect + '\n') # Same as root_after_redirect
print('Unique representation: ' + repr(har_properties)) # path of the capture and the uuid at the same time
"""
Explanation: If you really want to dig deeper and investigate the whole construction of the tree, I recommend you take a look here. This will give you more insight; the whole construction, with every single step, cannot be covered here.
Same thing for make_hostname_tree: a short explanation is that it's basically a construction and aggregation of the URLTree depending on the hostnames.
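The actual make_hostname_tree works on ete3 nodes, but the underlying idea can be sketched with plain Python: group the flat list of URL nodes by hostname. A minimal, standalone illustration (the URLs are made up; this is not har2tree's API):

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical flat list of URLs, as they might appear in a capture.
urls = [
    "https://example.com/",
    "https://example.com/style.css",
    "https://cdn.example.net/lib.js",
]

# Aggregate the URLs by hostname, which is roughly what the hostname
# tree does on top of the URL tree.
by_host = defaultdict(list)
for url in urls:
    by_host[urlparse(url).hostname].append(url)

for host, host_urls in sorted(by_host.items()):
    print(host, len(host_urls))
```

The real implementation keeps the parent/child relationships between hosts as well, which is where the tree structure comes from.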
HarFile class
If you take a look at the code, you'll notice that we actually call the function from har, which is nothing other than an instance of HarFile kept as an attribute to make it easier to access inside the Har2Tree class. You can find its definition in the same file; its main use is to pre-process a lot of data useful to the Har2Tree class and to give a more Pythonic interface that lets us access the contents of the HAR file with ease.
Let's see some of its interesting features:
End of explanation
"""
print(har_properties.entries[0])
"""
Explanation: Only execute this one if you want to see all the information for one given URL.
You could also remove the [0] to print out all entries but it takes <span style="color:red">a lot of time</span> and it is enough to print just the first one to get a good idea of what an entry looks like:
End of explanation
"""
for entry in har_properties.entries:
print(entry['request']['url'])
print(har_properties.number_entries)
"""
Explanation: As you can see, the URL is located in request > url. Let's see what we can do with that.
This next example is quite interesting. It shows the number of entries in the capture; then it prints out all the URLs loaded by the page; you can even retrace the redirects in the first 4 URLs.
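Retracing redirects by hand follows directly from the HAR structure: an entry's response carries a redirectURL field when it redirects. A hand-rolled sketch on made-up entries (not har2tree's API; real entries have many more fields):

```python
# Minimal HAR-style entries, made up for illustration.
entries = [
    {"request": {"url": "http://example.com/"},
     "response": {"status": 301, "redirectURL": "https://example.com/"}},
    {"request": {"url": "https://example.com/"},
     "response": {"status": 200, "redirectURL": ""}},
]

# Walk the entries until one no longer redirects.
chain = []
for entry in entries:
    chain.append(entry["request"]["url"])
    if not entry["response"]["redirectURL"]:
        break

print(" -> ".join(chain))
```

har2tree does this bookkeeping for you and exposes the result via the redirects property seen earlier.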
End of explanation
"""
print(my_first_crawled_tree.root_hartree.all_url_requests)
print(len(my_first_crawled_tree.root_hartree.all_url_requests))
"""
Explanation: <span style="color:blue">Note:</span> and of course this is also implemented in the all_url_requests attribute, but in the Har2Tree class.
End of explanation
"""
print(my_first_crawled_tree.root_hartree.rendered_node)
print("\n")
print(my_first_crawled_tree.root_hartree.rendered_node.describe())
"""
Explanation: You can, however, see that we only have 6 here, as the duplicates were removed, compared to the 7 before.
And that weird Tree Node thing... we're getting there in the next part.
nodes.py
As you can see in this file, the nodes we use come from the Ete3 toolkit API.
Let's try to see what one of those nodes looks like; we'll begin with URLNode as it's pretty self-explanatory.
We are going to use the rendered_node property as it returns the node which will ultimately be displayed on the screen.
End of explanation
"""
my_first_crawled_tree.root_hartree.rendered_node.name
"""
Explanation: You might find it weird that <span style="color:#8b8b8b">the rendered URL itself is not displayed</span>. It's because the name of the node itself is not returned by the __str__ method of the TreeNode class, as the show_internal parameter is set to False by default.
You could simply print out the name of the node like this:
End of explanation
"""
print(my_first_crawled_tree.root_hartree.rendered_node.get_ascii())
"""
Explanation: Or you could invoke the get_ascii method, because its default show_internal parameter is set to True, but you will have to zoom out to get something readable:
End of explanation
"""
#TODO: I didn't manage to fix this one...
from getpass import getpass
!echo {getpass()} | sudo -S ./interactive_tree.sh
#Easiest way is to do ./interactive_tree.sh in your shell
"""
Explanation: Finally, you could run this little script. It invokes the .show() method of a node, which opens a window with an interactive interface and really helps to visualize what the node actually contains. However, you may face a lot of problems while running it, so here is a screenshot just in case.
<span style="color:blue">Note:</span> you might want to take a look here and there, it really helped me to fix bugs.
End of explanation
"""
my_first_crawled_tree.root_hartree.rendered_node.urls_in_rendered_page
"""
Explanation: <br/>
As you can see, it's a lot of trouble to retrieve very little data that is not formatted nicely. That's why Lookyloo is also complex: it takes care of that in a more effective manner than ete3.
- Here is another interesting property that lists the URLs that elements with an href attribute lead to (it's way easier than it sounds).
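Under the hood, this kind of property boils down to collecting href values from the rendered HTML. A stdlib-only sketch of the idea (this is not how har2tree implements it, just an illustration on a made-up snippet):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect the value of every href attribute encountered."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "href" and value:
                self.urls.append(value)

collector = HrefCollector()
collector.feed('<a href="https://example.com/a">x</a><link href="/style.css">')
print(collector.urls)
```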
End of explanation
"""
root_node = my_first_crawled_tree.root_hartree.url_tree.search_nodes(name=my_first_crawled_tree.root_url)[0]
print(root_node.name)
"""
Explanation: Time for a slightly more complicated example: let's try to find the node containing the root URL, using a method we saw previously:
End of explanation
"""
my_first_crawled_tree.root_hartree.hostname_tree.to_json()
"""
Explanation: To see all the information that a node contains, you can simply dump all the features using the to_json method:
End of explanation
"""
my_first_crawled_tree.root_hartree.hostname_tree.features
"""
Explanation: But this is difficult to read. Instead, you could check the features property, which is updated in the add_url method for HostNode or load_har_entry for URLNode, and gives a much clearer view of what's inside the node:
End of explanation
"""
print("request cookies: " + str(my_first_crawled_tree.root_hartree.hostname_tree.request_cookie))
print("response cookies: " + str(my_first_crawled_tree.root_hartree.hostname_tree.response_cookie))
print("3rd party cookies: " + str(my_first_crawled_tree.root_hartree.hostname_tree.third_party_cookies_received))
print("mixed content: " + str(my_first_crawled_tree.root_hartree.hostname_tree.mixed_content))
"""
Explanation: A few more HostNode interesting features:
request_cookie: returns the number of unique cookies sent in the requests of all URL nodes
response_cookie: returns the number of unique cookies received in the responses of all URL nodes
third_party_cookies_received: returns the number of unique 3rd party cookies received in the responses of all URL nodes
mixed_content: returns true if there are both HTTP and HTTPS URL nodes, false otherwise
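To make the counting concrete, here is a stdlib-only sketch of counting unique request and response cookies from HAR-style entries (the entries are made up; har2tree does this internally across the URL nodes):

```python
# HAR entries carry cookies under the request/response "cookies" lists.
entries = [
    {"request": {"cookies": [{"name": "session", "value": "abc"}]},
     "response": {"cookies": [{"name": "tracker", "value": "t1"}]}},
    {"request": {"cookies": [{"name": "session", "value": "abc"}]},
     "response": {"cookies": []}},
]

# Uniqueness is taken over (name, value) pairs across all entries.
request_cookies = {(c["name"], c["value"])
                   for e in entries for c in e["request"]["cookies"]}
response_cookies = {(c["name"], c["value"])
                    for e in entries for c in e["response"]["cookies"]}

print("request cookies:", len(request_cookies))
print("response cookies:", len(response_cookies))
```

Note the duplicated session cookie only counts once, which matches the "unique cookies" wording above.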
End of explanation
"""
har_path = Path() / '..' / 'tests' / 'capture_samples' / 'cookie' / '0.har'
cookie_crawled_tree = CrawledTree([har_path], str(uuid.uuid4()))
print("request cookies: " + str(cookie_crawled_tree.root_hartree.hostname_tree.request_cookie))
print("response cookies: " + str(cookie_crawled_tree.root_hartree.hostname_tree.response_cookie))
print("3rd party cookies: " + str(cookie_crawled_tree.root_hartree.hostname_tree.third_party_cookies_received))
print("mixed content: " + str(cookie_crawled_tree.root_hartree.hostname_tree.mixed_content))
"""
Explanation: Let's see what happens with the cookie capture of capture_samples where we passed a cookie in the request:
End of explanation
"""
|
rogerallen/kaggle | dogscats/roger.ipynb | apache-2.0 | #Verify we are in the lesson1 directory
%pwd
%matplotlib inline
import os, sys
sys.path.insert(1, os.path.join(sys.path[0], '../utils'))
from utils import *
from vgg16 import Vgg16
from PIL import Image
from keras.preprocessing import image
from sklearn.metrics import confusion_matrix
"""
Explanation: Creating my own version of the dogs_cats_redux notebook in order to make my own entry into the Kaggle competition.
My dir structure is similar, but not exactly the same:
utils
dogscats (not lesson1)
data
(no extra redux subdir)
train
test
End of explanation
"""
current_dir = os.getcwd()
LESSON_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
"""
Explanation: Note: had to comment out vgg16bn in utils.py (whatever that is)
End of explanation
"""
from shutil import copyfile
#Create directories
%cd $DATA_HOME_DIR
# did this once
#%mkdir valid
#%mkdir results
#%mkdir -p sample/train
#%mkdir -p sample/test
#%mkdir -p sample/valid
#%mkdir -p sample/results
#%mkdir -p test/unknown
%cd $DATA_HOME_DIR/train
# create validation set by renaming 2000
g = glob('*.jpg')
shuf = np.random.permutation(g)
NUM_IMAGES=len(g)
NUM_VALID = 2000 # 0.1*NUM_IMAGES
NUM_TRAIN = NUM_IMAGES-NUM_VALID
print("total=%d train=%d valid=%d"%(NUM_IMAGES,NUM_TRAIN,NUM_VALID))
for i in range(NUM_VALID):
os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])
# copy a small sample
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i])
%cd $DATA_HOME_DIR/valid
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(50): copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i])
!ls {DATA_HOME_DIR}/train/ |wc -l
!ls {DATA_HOME_DIR}/valid/ |wc -l
!ls {DATA_HOME_DIR}/sample/train/ |wc -l
!ls {DATA_HOME_DIR}/sample/valid/ |wc -l
"""
Explanation: Create validation set and sample
ONLY DO THIS ONCE.
End of explanation
"""
#Divide cat/dog images into separate directories
%cd $DATA_HOME_DIR/sample/train
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/sample/valid
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/valid
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/train
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
# Create single 'unknown' class for test set
%cd $DATA_HOME_DIR/test
%mv *.jpg unknown/
!ls {DATA_HOME_DIR}/test
"""
Explanation: Rearrange image files into their respective directories
ONLY DO THIS ONCE.
End of explanation
"""
%cd $DATA_HOME_DIR
#Set path to sample/ path if desired
path = DATA_HOME_DIR + '/' #'/sample/'
test_path = DATA_HOME_DIR + '/test/' #We use all the test data
results_path=DATA_HOME_DIR + '/results/'
train_path=path + '/train/'
valid_path=path + '/valid/'
vgg = Vgg16()
#Set constants. You can experiment with no_of_epochs to improve the model
batch_size=64
no_of_epochs=2
#Finetune the model
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)
vgg.finetune(batches)
#Not sure if we set this for all fits
vgg.model.optimizer.lr = 0.01
#Notice we are passing in the validation dataset to the fit() method
#For each epoch we test our model against the validation set
latest_weights_filename = None
#vgg.model.load_weights('/home/rallen/Documents/PracticalDL4C/courses/deeplearning1/nbs/data/dogscats/models/first.h5')
#vgg.model.load_weights(results_path+'ft1.h5')
latest_weights_filename='ft24.h5'
vgg.model.load_weights(results_path+latest_weights_filename)
"""
Explanation: Finetuning and Training
OKAY, ITERATE HERE
End of explanation
"""
# if you have run some epochs already...
epoch_offset=12 # trying again from ft1
for epoch in range(no_of_epochs):
    print("Running epoch: %d" % (epoch + epoch_offset))
    vgg.fit(batches, val_batches, nb_epoch=1)
    latest_weights_filename = 'ft%d.h5' % (epoch + epoch_offset)
    vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)
"""
Explanation: If you are training, stay here. If you are loading weights and creating a submission, skip down from here.
End of explanation
"""
# only if you have to
latest_weights_filename='ft1.h5'
vgg.model.load_weights(results_path+latest_weights_filename)
"""
Explanation: ```
Results of ft1.h5
0 val_loss: 0.2122 val_acc: 0.9830
1 val_loss: 0.1841 val_acc: 0.9855
[[987 7]
[ 20 986]]
--
2 val_loss: 0.2659 val_acc: 0.9830
3 val_loss: 0.2254 val_acc: 0.9850
4 val_loss: 0.2072 val_acc: 0.9845
[[975 19]
[ 11 995]]
Results of first0.h5
0 val_loss: 0.2425 val_acc: 0.9830
[[987 7]
[ 27 979]]
```
End of explanation
"""
val_batches, probs = vgg.test(valid_path, batch_size = batch_size)
filenames = val_batches.filenames
expected_labels = val_batches.classes #0 or 1
#Round our predictions to 0/1 to generate labels
our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)
cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
#Helper function to plot images by index in the validation set
#Plots is a helper function in utils.py
def plots_idx(idx, titles=None):
plots([image.load_img(valid_path + filenames[i]) for i in idx], titles=titles)
#Number of images to view for each visualization task
n_view = 4
#1. A few correct labels at random
correct = np.where(our_labels==expected_labels)[0]
print("Found %d correct labels" % len(correct))
idx = permutation(correct)[:n_view]
plots_idx(idx, our_predictions[idx])
#2. A few incorrect labels at random
incorrect = np.where(our_labels!=expected_labels)[0]
print("Found %d incorrect labels" % len(incorrect))
idx = permutation(incorrect)[:n_view]
plots_idx(idx, our_predictions[idx])
#3a. The images we most confident were cats, and are actually cats
correct_cats = np.where((our_labels==0) & (our_labels==expected_labels))[0]
print("Found %d confident correct cats labels" % len(correct_cats))
most_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats])
#3b. The images we most confident were dogs, and are actually dogs
correct_dogs = np.where((our_labels==1) & (our_labels==expected_labels))[0]
print("Found %d confident correct dogs labels" % len(correct_dogs))
most_correct_dogs = np.argsort(our_predictions[correct_dogs])[:n_view]
plots_idx(correct_dogs[most_correct_dogs], our_predictions[correct_dogs][most_correct_dogs])
#4a. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0]
print("Found %d incorrect cats" % len(incorrect_cats))
if len(incorrect_cats):
most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view]
plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats])
#4b. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0]
print("Found %d incorrect dogs" % len(incorrect_dogs))
if len(incorrect_dogs):
most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view]
plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs])
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(our_predictions-0.5))
plots_idx(most_uncertain[:n_view], our_predictions[most_uncertain])
"""
Explanation: Validate Predictions
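The 2x2 matrix sklearn computes here is just a tally of (actual, predicted) pairs, with rows as actual classes and columns as predicted classes. A pure-Python sketch of the same computation on toy labels:

```python
def confusion_matrix_2x2(expected, predicted):
    # rows = actual class, columns = predicted class (0 = cats, 1 = dogs here)
    cm = [[0, 0], [0, 0]]
    for actual, pred in zip(expected, predicted):
        cm[int(actual)][int(pred)] += 1
    return cm

print(confusion_matrix_2x2([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))
```

The diagonal holds the correct predictions, so a good classifier shows large numbers there and small ones off-diagonal.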
End of explanation
"""
batches, preds = vgg.test(test_path, batch_size = batch_size*2)
# Error allocating 3347316736 bytes of device memory (out of memory).
# got this error when batch-size = 128
# I see this pop up to 6GB memory with batch_size = 64 & this takes some time...
#For every image, vgg.test() generates two probabilities
#based on how we've ordered the cats/dogs directories.
#It looks like column one is cats and column two is dogs
print(preds[:5])
filenames = batches.filenames
print(filenames[:5])
#You can verify the column ordering by viewing some images
Image.open(test_path + filenames[1])
#Save our test results arrays so we can use them again later
save_array(results_path + 'test_preds.dat', preds)
save_array(results_path + 'filenames.dat', filenames)
"""
Explanation: Generate Predictions
End of explanation
"""
#Load our test predictions from file
preds = load_array(results_path + 'test_preds.dat')
filenames = load_array(results_path + 'filenames.dat')
#Grab the dog prediction column
isdog = preds[:,1]
print("Raw Predictions: " + str(isdog[:5]))
print("Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)]))
print("Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)]))
#play it safe, round down our edge predictions
#isdog = isdog.clip(min=0.05, max=0.95)
#isdog = isdog.clip(min=0.02, max=0.98)
isdog = isdog.clip(min=0.01, max=0.99)
#Extract imageIds from the filenames in our test/unknown directory
filenames = batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])
subm = np.stack([ids,isdog], axis=1)
subm[:5]
%cd $DATA_HOME_DIR
submission_file_name = 'submission4.csv'
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')
from IPython.display import FileLink
%cd $LESSON_HOME_DIR
FileLink('data/'+submission_file_name)
"""
Explanation: Submit Predictions to Kaggle!
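The submission above clips predictions away from 0 and 1 because this competition is scored with log loss, which punishes confident wrong answers without bound — a wrong prediction at exactly 0 or 1 costs infinity. A quick stdlib illustration of the per-sample penalty:

```python
import math

def log_loss_term(y_true, p):
    # Per-sample binary log loss; clamp p away from 0/1 to keep log finite.
    p = min(max(p, 1e-15), 1 - 1e-15)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A wrong answer at 0.99 costs far more than one pulled back to 0.95.
print(log_loss_term(0, 0.99))
print(log_loss_term(0, 0.95))
```

That is why clipping to something like [0.01, 0.99] can improve the leaderboard score even though it makes the predictions "less confident".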
End of explanation
"""
|
ericmjl/hiv-resistance-prediction | old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb | mit | ! echo $PATH
! echo $CUDA_ROOT
import pandas as pd
import numpy as np
from Bio import SeqIO
from Bio import AlignIO
from Bio.Align import MultipleSeqAlignment
from collections import Counter
from sklearn.preprocessing import LabelBinarizer
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.metrics import mutual_info_score as mi
from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet
import theano
"""
Explanation: Research Problem
Last year, I read a paper titled, "Feature Selection Methods for Identifying Genetic Determinants of Host Species in RNA Viruses". This year, I read another paper titled, "Predicting host tropism of influenza A virus proteins using random forest". The essence of these papers was to predict influenza virus host tropism from sequence features. The feature engineering steps were somewhat distinct: the former encoded amino acid sequences as binary 1/0s, while the latter used physicochemical characteristics of the amino acid sequences instead. However, the core problem was essentially identical - predict a host classification from influenza protein sequence features. My question here was to see if I could get comparable performance using a simple neural network.
Data
I downloaded influenza HA sequences from the Influenza Research Database. Sequences dated from 1980 to 2015. Lab strains were excluded, duplicates allowed (captures host tropism of certain sequences). All viral subtypes were included.
Below, let's take a deep dive into what it takes to construct an artificial neural network!
The imports necessary for running this notebook.
End of explanation
"""
sequences = SeqIO.to_dict(SeqIO.parse('20150902_nnet_ha.fasta', 'fasta'))
# sequences
"""
Explanation: Read in the viral sequences.
End of explanation
"""
lengths = Counter()
for accession, seqrecord in sequences.items():
lengths[len(seqrecord.seq)] += 1
lengths.most_common(1)[0][0]
"""
Explanation: The sequences are going to be of variable length. To avoid the problem of doing multiple sequence alignments, filter to just the most common length (i.e. 566 amino acids).
End of explanation
"""
# For convenience, we will only work with amino acid sequences of length 566.
final_sequences = dict()
for accession, seqrecord in sequences.items():
host = seqrecord.id.split('|')[1]
if len(seqrecord.seq) == lengths.most_common(1)[0][0]:
final_sequences[accession] = seqrecord
"""
Explanation: There are sequences that are ambiguously labeled, for example the "Environment" and "Avian" samples. We would like to give a more detailed prediction as to which host each likely came from. Therefore, take out the "Environment" and "Avian" samples.
End of explanation
"""
alignment = MultipleSeqAlignment(final_sequences.values())
alignment_array = np.array([list(rec) for rec in alignment])
"""
Explanation: Create a numpy array to store the alignment.
End of explanation
"""
# Create an empty dataframe.
# df = pd.DataFrame()
# # Create a dictionary of position + label binarizer objects.
# pos_lb = dict()
# for pos in range(lengths.most_common(1)[0][0]):
# # Convert position 0 by binarization.
# lb = LabelBinarizer()
# # Fit to the alignment at that position.
# lb.fit(alignment_array[:,pos])
# # Add the label binarizer to the dictionary.
# pos_lb[pos] = lb
# # Create a dataframe.
# pos = pd.DataFrame(lb.transform(alignment_array[:,pos]))
# # Append the columns to the dataframe.
# for col in pos.columns:
# maxcol = len(df.columns)
# df[maxcol + 1] = pos[col]
from isoelectric_point import isoelectric_points
df = pd.DataFrame(alignment_array).replace(isoelectric_points)
# Add in host data
df['host'] = [s.id.split('|')[1] for s in final_sequences.values()]
df = df.replace({'X':np.nan, 'J':np.nan, 'B':np.nan, 'Z':np.nan})
df.dropna(inplace=True)
df.to_csv('isoelectric_point_data.csv')
# Standardize features to zero mean and unit variance.
from sklearn.preprocessing import StandardScaler
df_std = pd.DataFrame(StandardScaler().fit_transform(df.ix[:,:-1]))
df_std['host'] = df['host']
ambiguous_hosts = ['Environment', 'Avian', 'Unknown', 'NA', 'Bird', 'Sea_Mammal', 'Aquatic_Bird']
unknowns = df_std[df_std['host'].isin(ambiguous_hosts)]
train_test_df = df_std[df_std['host'].isin(ambiguous_hosts) == False]
train_test_df.dropna(inplace=True)
"""
Explanation: The first piece of meat in the code begins here. We convert the sequence matrix into numerical features - the original binary 1/0 encoding is left commented out, and each amino acid is instead mapped to its isoelectric point. This is important - AFAIK, almost all machine learning algorithms require numerical inputs.
End of explanation
"""
set([i for i in train_test_df['host'].values])
# Grab out the labels.
output_lb = LabelBinarizer()
output_lb.fit(train_test_df['host'])
Y = output_lb.fit_transform(train_test_df['host'])
Y = Y.astype(np.float32) # Necessary for passing the data into nolearn.
Y.shape
X = train_test_df.ix[:,:-1].values
X = X.astype(np.float32) # Necessary for passing the data into nolearn.
X.shape
"""
Explanation: With the cell above, we now have a numerical sequence feature matrix. (With the commented-out binarization, the 566 amino acid positions would instead have been expanded to 6750 columns of binary sequence features.)
The next step is to grab out the host species labels, and encode them as 1s and 0s as well.
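What LabelBinarizer does can be sketched in plain Python: give each class a column and set a single 1 per row. (Toy labels here; the real call handles all the host classes in the data, and sklearn likewise orders columns by sorted class name.)

```python
def one_hot(labels):
    # Column order follows the sorted set of classes, one column per class.
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    rows = [[1 if index[label] == j else 0 for j in range(len(classes))]
            for label in labels]
    return classes, rows

classes, encoded = one_hot(["Human", "Swine", "Human", "Avian"])
print(classes)
print(encoded)
```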
End of explanation
"""
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)
"""
Explanation: Next up, we do the train/test split.
End of explanation
"""
rf = RandomForestClassifier()
rf.fit(X_train, Y_train)
predictions = rf.predict(X_test)
predicted_labels = output_lb.inverse_transform(predictions)
# Compute the mutual information between the predicted labels and the actual labels.
mi(predicted_labels, output_lb.inverse_transform(Y_test))
"""
Explanation: For comparison, let's train a random forest classifier, and see what the concordance is between the predicted labels and the actual labels.
End of explanation
"""
# et = ExtraTreesClassifier()
# et.fit(X_train, Y_train)
# predictions = et.predict(X_test)
# predicted_labels = output_lb.inverse_transform(predictions)
# mi(predicted_labels, output_lb.inverse_transform(Y_test))
"""
Explanation: By the majority-consensus rule, and using mutual information as the metric for scoring, things look not so bad! As mentioned above, the RandomForestClassifier is a pretty powerful method for finding non-linear patterns between features and class labels.
Uncomment the cell below if you want to try scikit-learn's ExtraTreesClassifier.
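For reference, the mutual information score used above is the information-theoretic quantity computed from the empirical joint distribution of the two labelings (sklearn reports it in nats, i.e. natural log). A stdlib sketch of the same computation:

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    # I(A;B) in nats, from empirical joint and marginal label frequencies.
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    total = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        total += p_ab * math.log(p_ab * n * n / (count_a[a] * count_b[b]))
    return total

# Identical labelings score log(2) here; independent ones score 0.
print(mutual_information(["Human", "Swine"] * 2, ["Human", "Swine"] * 2))
print(mutual_information(["Human", "Human", "Swine", "Swine"],
                         ["A", "B", "A", "B"]))
```

Perfect agreement recovers the entropy of the labeling, while unrelated labelings score 0, which is why it works as a concordance metric here.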
End of explanation
"""
# unknown_hosts = unknowns.ix[:,:-1].values
# preds = rf.predict(unknown_hosts)
# output_lb.inverse_transform(preds)
"""
Explanation: As a demonstration of how this model can be used, let's look at the ambiguously labeled sequences, i.e. those from "Environment" and "Avian", to see whether we can make a prediction as to what host it likely came from.
End of explanation
"""
from lasagne import nonlinearities as nl
net1 = NeuralNet(layers=[
('input', layers.InputLayer),
('hidden1', layers.DenseLayer),
#('dropout', layers.DropoutLayer),
#('hidden2', layers.DenseLayer),
#('dropout2', layers.DropoutLayer),
('output', layers.DenseLayer),
],
# Layer parameters:
input_shape=(None, X.shape[1]),
hidden1_num_units=300,
#dropout_p=0.3,
#hidden2_num_units=500,
#dropout2_p=0.3,
output_nonlinearity=nl.softmax,
output_num_units=Y.shape[1],
#allow_input_downcast=True,
# Optimization Method:
update=nesterov_momentum,
update_learning_rate=0.01,
update_momentum=0.9,
regression=True,
max_epochs=100,
verbose=1
)
"""
Explanation: Alrighty - we're now ready to try out a neural network! For this try, we will use lasagne and nolearn, two packages which have made things pretty easy for building neural networks. In this segment, I'm not going to show experiments with multiple architectures, activations, and the like. The goal is to illustrate how easy the specification of a neural network is.
The network architecture that we'll try is as follows:
1 input layer, of shape 6750 (i.e. taking in the columns as data).
1 hidden layer, with 300 units.
1 output layer, of shape 140 (i.e. each of the class labels).
End of explanation
"""
net1.fit(X_train, Y_train)
"""
Explanation: Training a simple neural network on my MacBook Air takes quite a bit of time :). But the function call for fitting it is a simple nnet.fit(X, Y).
End of explanation
"""
preds = net1.predict(X_test)
preds.shape
"""
Explanation: Let's grab out the predictions!
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.bar(np.arange(len(preds[0])), preds[0])
"""
Explanation: We're going to see how good the classifier did by examining the class labels. The way to visualize this is to have, say, the class labels on the X-axis, and the probability of prediction on the Y-axis. We can do this sample by sample. Here's a simple example with no frills in the matplotlib interface.
End of explanation
"""
### NOTE: Change the value of i to anything above!
i = 111
plt.figure(figsize=(20,5))
plt.bar(np.arange(len(output_lb.classes_)), preds[i])
plt.xticks(np.arange(len(output_lb.classes_)) + 0.5, output_lb.classes_, rotation='vertical')
plt.title('Original Label: ' + output_lb.inverse_transform(Y_test)[i])
plt.show()
# print(output_lb.inverse_transform(Y_test)[i])
"""
Explanation: Alrighty, let's add some frills - the class labels, the probability of each class label, and the original class label.
End of explanation
"""
preds_labels = []
for i in range(preds.shape[0]):
maxval = max(preds[i])
pos = list(preds[i]).index(maxval)
preds_labels.append(output_lb.classes_[pos])
mi(preds_labels, output_lb.inverse_transform(Y_test))
"""
Explanation: Let's do a majority-consensus rule applied to the labels, and then compute the mutual information score again.
End of explanation
"""
output_lb.classes_
"""
Explanation: With a score of 0.73, that's not bad either! It certainly didn't outperform the RandomForestClassifier, but the default parameters on the RFC were probably pretty good to begin with. Notice how little tweaking on the neural network we had to do as well.
For good measure, these were the class labels. Notice how successful influenza has been in replicating across the many different species!
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.7/notebooks/auto_examples/optimizer-with-different-base-estimator.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
np.random.seed(1234)
import matplotlib.pyplot as plt
"""
Explanation: Use different base estimators for optimization
Sigurd Carlen, September 2019.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
To use different base_estimator or create a regressor with different parameters,
we can create a regressor object and set it as kernel.
End of explanation
"""
noise_level = 0.1
# Our 1D toy problem, this is the function we are trying to
# minimize
def objective(x, noise_level=noise_level):
return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\
+ np.random.randn() * noise_level
from skopt import Optimizer
opt_gp = Optimizer([(-2.0, 2.0)], base_estimator="GP", n_initial_points=5,
acq_optimizer="sampling", random_state=42)
x = np.linspace(-2, 2, 400).reshape(-1, 1)
fx = np.array([objective(x_i, noise_level=0.0) for x_i in x])
from skopt.acquisition import gaussian_ei
def plot_optimizer(res, next_x, x, fx, n_iter, max_iters=5):
x_gp = res.space.transform(x.tolist())
gp = res.models[-1]
curr_x_iters = res.x_iters
curr_func_vals = res.func_vals
# Plot true function.
ax = plt.subplot(max_iters, 2, 2 * n_iter + 1)
plt.plot(x, fx, "r--", label="True (unknown)")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([fx - 1.9600 * noise_level,
fx[::-1] + 1.9600 * noise_level]),
alpha=.2, fc="r", ec="None")
if n_iter < max_iters - 1:
ax.get_xaxis().set_ticklabels([])
# Plot GP(x) + contours
y_pred, sigma = gp.predict(x_gp, return_std=True)
plt.plot(x, y_pred, "g--", label=r"$\mu_{GP}(x)$")
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.2, fc="g", ec="None")
# Plot sampled points
plt.plot(curr_x_iters, curr_func_vals,
"r.", markersize=8, label="Observations")
plt.title(r"x* = %.4f, f(x*) = %.4f" % (res.x[0], res.fun))
# Adjust plot layout
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 6}, numpoints=1)
if n_iter != 4:
plt.tick_params(axis='x', which='both', bottom='off',
top='off', labelbottom='off')
# Plot EI(x)
ax = plt.subplot(max_iters, 2, 2 * n_iter + 2)
acq = gaussian_ei(x_gp, gp, y_opt=np.min(curr_func_vals))
plt.plot(x, acq, "b", label="EI(x)")
plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue')
if n_iter < max_iters - 1:
ax.get_xaxis().set_ticklabels([])
next_acq = gaussian_ei(res.space.transform([next_x]), gp,
y_opt=np.min(curr_func_vals))
plt.plot(next_x, next_acq, "bo", markersize=6, label="Next query point")
# Adjust plot layout
plt.ylim(0, 0.07)
plt.grid()
if n_iter == 0:
plt.legend(loc="best", prop={'size': 6}, numpoints=1)
if n_iter != 4:
plt.tick_params(axis='x', which='both', bottom='off',
top='off', labelbottom='off')
"""
Explanation: Toy example
Let assume the following noisy function $f$:
End of explanation
"""
fig = plt.figure()
fig.suptitle("Standard GP kernel")
for i in range(10):
next_x = opt_gp.ask()
f_val = objective(next_x)
res = opt_gp.tell(next_x, f_val)
if i >= 5:
plot_optimizer(res, opt_gp._next_x, x, fx, n_iter=i-5, max_iters=5)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.plot()
"""
Explanation: GP kernel
End of explanation
"""
from skopt.learning import GaussianProcessRegressor
from skopt.learning.gaussian_process.kernels import ConstantKernel, Matern
# Gaussian process with Matérn kernel as surrogate model
from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic,
ExpSineSquared, DotProduct,
ConstantKernel)
kernels = [1.0 * RBF(length_scale=1.0, length_scale_bounds=(1e-1, 10.0)),
1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1),
1.0 * ExpSineSquared(length_scale=1.0, periodicity=3.0,
length_scale_bounds=(0.1, 10.0),
periodicity_bounds=(1.0, 10.0)),
ConstantKernel(0.1, (0.01, 10.0))
* (DotProduct(sigma_0=1.0, sigma_0_bounds=(0.1, 10.0)) ** 2),
1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0),
nu=2.5)]
for kernel in kernels:
gpr = GaussianProcessRegressor(kernel=kernel, alpha=noise_level ** 2,
normalize_y=True, noise="gaussian",
n_restarts_optimizer=2
)
opt = Optimizer([(-2.0, 2.0)], base_estimator=gpr, n_initial_points=5,
acq_optimizer="sampling", random_state=42)
fig = plt.figure()
fig.suptitle(repr(kernel))
for i in range(10):
next_x = opt.ask()
f_val = objective(next_x)
res = opt.tell(next_x, f_val)
if i >= 5:
plot_optimizer(res, opt._next_x, x, fx, n_iter=i - 5, max_iters=5)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.show()
"""
Explanation: Test different kernels
End of explanation
"""
|
lmoresi/UoM-VIEPS-Intro-to-Python | Notebooks/SphericalMeshing/CartesianTriangulations/Ex6-Scattered-Data.ipynb | mit | import numpy as np
HFdata = np.loadtxt("../Data/HeatFlowSEAustralia.csv", delimiter=',', usecols=(3,4,5), skiprows=1)
eastings = HFdata[:,0]
northings = HFdata[:,1]
heat_flow = HFdata[:,2]
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
# local coordinate reference system
proj = ccrs.epsg(28354)
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines(resolution='10m')
ax.set_extent([135, 148, -39, -30])
ax.scatter(eastings, northings,
marker="o", cmap=plt.cm.RdBu, c=heat_flow, transform=proj)
ax.gridlines(draw_labels=True)
plt.show()
"""
Explanation: Example 6 - Scattered Data and 'Heat Maps'
There are different ways to map point data to a smooth field. One way is to triangulate the data, smooth it and interpolate to a regular mesh (see previous notebooks). It is also possible to construct weighted averages from scattered points to a regular mesh. In this notebook we work through how to find where points lie in the mesh and map their values to nearby vertices.
Notebook contents
Scattered data
Computational mesh
Data count by triangle
Data count by nearest vertex
Distance weighting to vertices
The next example is Ex7-Refinement-of-Triangulations
Point data with uneven spatial distribution
We load scattered heat-flow point data with an uneven spatial distribution. At local scales it is convenient to use projected coordinate reference systems (CRS) to work in metres instead of degrees. We use the heat flow database for Southeastern Australia from Mather et al. 2017
End of explanation
"""
import stripy as stripy
xmin = eastings.min()
xmax = eastings.max()
ymin = northings.min()
ymax = northings.max()
extent = [xmin, xmax, ymin, ymax]
# define a mesh with 10 km x 10 km resolution
spacingX = 10000.0
spacingY = 10000.0
mesh = stripy.cartesian_meshes.square_mesh(extent, spacingX, spacingY, refinement_levels=0, tree=True)
print("number of points = {}".format(mesh.npoints))
"""
Explanation: Define a regular computational mesh
Use a regular square mesh spanning the extent of the data points.
End of explanation
"""
triangles = mesh.containing_triangle(eastings, northings)
tris, counts = np.unique(triangles, return_counts=True)
print("number of unique triangles = {}".format(tris.shape[0]))
## map to nodes so we can plot this
hit_count = np.zeros(mesh.npoints)
for i in range(0, tris.shape[0]):
hit_count[mesh.simplices[tris[i]]] += counts[i]
hit_count /= 3.0
print("mean number of hits = {}".format(hit_count.mean()))
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines(resolution='10m')
ax.set_extent([135, 148, -39, -30])
ax.scatter(mesh.x, mesh.y,
marker="o", cmap=plt.cm.Reds, s=100, c=hit_count, alpha=0.33, transform=proj)
ax.gridlines(draw_labels=True)
plt.show()
"""
Explanation: Count heat flow points per triangle
This is a numpy wrapper around the TRIPACK routine which operates by retriangulation and is therefore not particularly fast.
End of explanation
"""
distances, vertices = mesh.nearest_vertices(eastings, northings, k=1)
nodes, ncounts = np.unique(vertices, return_counts=True)
hit_countn = np.zeros(mesh.npoints)
hit_countn[nodes] = ncounts
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines(resolution='10m')
ax.set_extent([135, 148, -39, -30])
ax.scatter(mesh.x, mesh.y,
marker="o", cmap=plt.cm.Reds, s=100, c=hit_countn, alpha=0.33, transform=proj)
ax.gridlines(draw_labels=True)
plt.show()
"""
Explanation: Count heat flow points per vertex
The Triangulation.nearest_vertices method uses a k-d tree to find the nearest vertices to a set of x,y points. It returns the nearest vertices and the Euclidean distances to them. This requires the k-d tree to have been built when the mesh was initialised (tree=True).
End of explanation
"""
distances, vertices = mesh.nearest_vertices(eastings, northings, k=100)
norm = distances.sum(axis=1)
# distances, vertices are arrays of shape (data_size, 100)
hit_countid = np.zeros(mesh.npoints)
## numpy shouldn't try to vectorise this reduction operation
for i in range(0,distances.shape[0]):
hit_countid[vertices[i,:]] += distances[i,:] / norm[i]
hit_countidr = np.zeros(mesh.npoints)
## numpy shouldn't try to vectorise this reduction operation
for i in range(0,distances.shape[0]):
hit_countidr[vertices[i,:]] += np.exp( -distances[i,:] / 0.02 )
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.coastlines(resolution='10m')
ax.set_extent([135, 148, -39, -30])
ax.scatter(mesh.x, mesh.y,
marker="o", cmap=plt.cm.Reds, s=100, c=hit_countid, alpha=0.33, transform=proj)
ax.gridlines(draw_labels=True)
plt.show()
"""
Explanation: Inverse distance weighted number of heat flow points
The k-d tree method provides a specified number of neighbours and the distances to those neighbours. This can be used in a number of ways to smooth or amalgamate data. Here, for example, each heat flow observation is distributed as a weighted average over nearby nodes.
We compute the distances to $N$ nearby vertices and distribute information to those vertices with normalised distance weights
$$ w_i = \frac{d_i}{\sum_{i=1}^N d_i} $$
Alternatively, we might map information to the vertices by applying a radially symmetric kernel to the point data without normalising.
End of explanation
"""
|
joekasp/ionic_liquids | ionic_liquids/Interface.ipynb | mit | from ipywidgets import interact, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from visualization import core, plots
import utils
"""
Explanation: ILest: Ionic Liquids Estimation and Statistical Tools
How to use this GUI
Select Cell and choose Run All. This will execute the notebook cells and initialize the interactive widgets.
End of explanation
"""
data_file = widgets.Text(description='Dataset',value='datasets/compounddata.xlsx',disabled=False)
model_select = widgets.Dropdown(options=core.model_types(),value='LASSO',description='Model:',disabled=False)
test_withheld = widgets.FloatText(value=20,description='% Test',disabled=False,color='black',width='130px')
save_file = widgets.Text(description='Save Folder:',placeholder='model_folder')
run_button = widgets.Button(description='Train Model')
save_button = widgets.Button(description='Save Model')
obj_ = [None]
mod_data_ = [None,None]
X_mean_ = [None]
X_stdev_ = [None]
def run_event(b):
clear_output()
obj, X, y, X_mean, X_stdev = utils.train_model(model_select.value,data_file.value,float(test_withheld.value))
obj_[0] = obj #save variable outside scope
mod_data_[0] = X
mod_data_[1] = y
X_mean_[0] = X_mean
X_stdev_[0] = X_stdev
my_plot = plots.parity_plot(y,obj.predict(X))
plt.show(my_plot)
gui_1.children = [HBox(items1),HBox(items2)]
def save_event(b):
utils.save_model(obj_[0],X_mean_,X_stdev_,X=mod_data_[0],y=mod_data_[1],dirname=save_file.value)
run_button.on_click(run_event)
save_button.on_click(save_event)
items1 = [data_file,test_withheld,model_select,run_button]
items2 = [save_file,save_button]
gui_1 = VBox(layout=widgets.Layout(width='95%',display='inline-flex'))
gui_1.children = [HBox(items1)]
display(gui_1)
"""
Explanation: Training a Model
The following set of widgets are useful to train a model from a specificied dataset.
You can save the model and its contents into a directory by specifying the name and using the Save Model button.
IMPORTANT! This will produce the side effect of creating files on your local drive. If the directory does not already exist it will be created.
End of explanation
"""
A_list,A_smiles,B_list,B_smiles = core.read_SMILES()
model_dir = widgets.Text(description='Model:',placeholder='model_folder')
load_button = widgets.Button(description='Load Model')
a_select = widgets.Dropdown(options=A_list,description='A: ',width='550px')
b_select = widgets.Dropdown(options=B_list,description='B: ',width='400px')
temp_slider = widgets.IntSlider(value=298,min=250,max=400,step=1,orientation='horizontal',description='Temp (K)')
p_slider = widgets.IntSlider(value=103,min=80,max=150,step=1,orientation='horizontal',description='Pressure (kPa)')
m_slider = widgets.FloatSlider(value=0.50,min=0.0,max=1.0,step=0.01,orientation='horizontal',description='Mol Frac A')
plot_p_button = widgets.Button(description='Plot vs. Pressure',width='300px')
plot_t_button = widgets.Button(description='Plot vs. Temperature',width='300px')
plot_m_button = widgets.Button(description='Plot vs. Mole Fraction',width='300px')
mod_obj_ = [None]
X_ = [None]
y_ = [None]
X_mean_ = [None]
X_stdev_ = [None]
def load_event(b):
clear_output()
gui2.children = [HBox([model_dir,load_button],layout=widgets.Layout(width='95%',display='inline-flex')),
HBox([plot_t_button,plot_p_button,plot_m_button],
layout=widgets.Layout(width='95%',display='inline-flex')),
HBox([temp_slider,p_slider,m_slider]),HBox([a_select,b_select])]
obj,X_mean,X_stdev,X,y = utils.read_model(model_dir.value)
X_mean_[0] = X_mean
X_stdev_[0] = X_stdev
mod_obj_[0] = obj
X_[0] = X
y_[0] = y
replot('m')
def p_button(b):
replot(x_var='p')
def t_button(b):
replot(x_var='t')
def m_button(b):
replot(x_var='m')
def replot(x_var='m'):
clear_output()
a_idx = A_list.index(a_select.value)
b_idx = B_list.index(b_select.value)
x_vals,y_vals = utils.predict_model(A_smiles[a_idx],B_smiles[b_idx],mod_obj_[0],
temp_slider.value,p_slider.value,m_slider.value,
X_mean_[0],X_stdev_[0],flag=x_var)
my_plot = plots.scatter_plot(x_vals,y_vals,x_var)
plt.show(my_plot)
gui2 = VBox(layout=widgets.Layout(width='95%',display='inline-flex'))
gui2.children = [HBox([model_dir,load_button])]
display(gui2)
load_button.on_click(load_event)
plot_t_button.on_click(t_button)
plot_p_button.on_click(p_button)
plot_m_button.on_click(m_button)
"""
Explanation: Visualizing Results from a Saved Model
To load a model for easy visualization simply put in the name of the directory where the contents were saved. Use the sliders to specify parameter values and chose the independent variable you wish to explore. The components A and B can be easily changed using the dropdown menus.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_evoked_whitening.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
"""
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on [1]_.
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals, vol.
108, 328-342, NeuroImage.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, exclude='bads')
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
"""
Explanation: Set parameters
End of explanation
"""
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
"""
Explanation: Compute covariance using automated regularization
End of explanation
"""
evoked = epochs.average()
evoked.plot(time_unit='s') # plot evoked response
"""
Explanation: Show the evoked data:
End of explanation
"""
evoked.plot_white(noise_covs, time_unit='s')
"""
Explanation: We can then show whitening for our various noise covariance estimates.
Here we should check whether the baseline signals match the assumption of Gaussian white noise: we expect values centered at 0, falling within 2 standard deviations for 95% of the time points.
For the global field power we expect a value of 1.
End of explanation
"""
|
nwfpug/python-primer | notebooks/06-lists.ipynb | gpl-3.0 | # import the random numbers module. More on modules in a future notebook
import random
"""
Explanation: Lists
<img src="../images/python-logo.png">
Lists are sequences that hold heterogeneous data types, separated by commas and enclosed in square brackets. Lists use zero-based indexing, which means that the first element in the list has an index of '0', the second element has an index of '1', and so on. The last element of the list has an index of 'N-1', where N is the length of the list.
End of explanation
"""
# empty list
a = list()
# or
a = []
# define a list
a = [1,2,3,4,2,2]
print a
# list of numbers from 0 to 9
a = range(10)
a
"""
Explanation: Define a list
End of explanation
"""
# Python uses zero-based indexing
a[0]
# Get the last element
a[-1]
# Get the next to the last element
a[-2]
a[:]
# Slice the list
a[0:6] # elements with indices 0 through 5 (the stop index is excluded)
a = [1,2,2,3,4,4,4,6,7,2,2,2]
# Get the number of occurences of the element 2
a.count(2)
# the original list
a
# remove the element at with index 2 and return that value
a.pop(2)
# a is now modified
a
# delete without return
del a[1] # delete element at index 1
# print a
a
2 not in a
5 in a
# list can contain any type of Python objects, including lists
f = [1, '2', 'a string', [1, ('3', 2)], {'a':1, 'b':2}]
# get element @ index 2
f[2]
# change it
f[2] = 3
f
# length of the list
len(f)
import random
# list comprehension
a = [int(100*random.random()) for i in xrange(150)]
print a
# the same as
a = []
for i in range(150):
a.append(int(100*random.random()))
# get the max and min of a numeric list
max(a), min(a)
# make a tuple into a list
x = (1,2,3,4,5)
list(x)
# add object to the end of the list
x = [1,2,3]
x.append(4)
print x
x.append([6,7,8])
print x
# Appends the contents of seq to list
x.extend([9,10,11,12,[13,14,15]])
print x
x.extend([1,2,3])
x
a = [1,2,3]
b = [4,5,6]
c = a+b
c
# Returns count of how many times obj occurs in list
x.count(3)
# Returns the lowest index in list that obj appears
x.index(10)
# Inserts object obj into list at offset index
print x[3]
x.insert(3, ['a','b','c'])
print x
# Removes and returns last object or obj from list
x.pop()
print x
print x[3]
x.pop(3)
print x
# Removes the first occurrence of obj from list
x = [1,2,2,3,4,5,2,3,4,6,3,4,5,6,2]
x.remove(2)
print x
# Reverses objects of list in place
x.reverse()
print x
# Sort x in place
x.sort()
print x
# duplicate a list
a = [1,2,3]
b = a*5
b
import random
[random.random() for _ in range(0, 10)]
x = [random.randint(0,1000) for _ in range(10)]
print x
random.choice(x)
print range(10)
print range(0,10)
print range(5,16)
print range(-6, 7, 2)
M=[[1,2,3],
[4,5,6],
[7,8,9]]
print M
# put the 2nd column of M in a list
column = []
for row in M:
column.append(row[1])
print column
# list comprehension - another way of extracting the 2nd column of M
column = [row[1] for row in M]
print column
# compute the transpose of the matrix M
[[row[i] for row in M] for i in range(3)]
# get the diagonal elements of M
diag = [M[i][i] for i in [0, 1, 2]]
print diag
# build a list with another list as elements
[[x ** 2, x ** 3] for x in range(4)]
# build a list with an if statement
[[x, x/2, x*2] for x in range(-6, 7, 2) if x > 0]
# does the same thing as above but more
big_list = []
for x in range(-6,7,2):
if x > 0:
big_list.append([x, x/2, x*2])
print big_list
# does the same as above but lots of code
big_list = []
for x in range(-6,7,2):
lil_list = []
if x > 0:
lil_list.append(x)
lil_list.append(x/2)
lil_list.append(x*2)
big_list.append(lil_list)
print big_list
L = ["Good", # clint
"Bad", #
"Ugly"]
print L
"""
Explanation: Accesing elements of a list
End of explanation
"""
|
Mynti207/cs207project | docs/stock_example_returns.ipynb | mit | # load data
with open('data/returns_include.json') as f:
stock_data_include = json.load(f)
with open('data/returns_exclude.json') as f:
stock_data_exclude = json.load(f)
# keep track of which stocks are included/excluded from the database
stocks_include = list(stock_data_include.keys())
stocks_exclude = list(stock_data_exclude.keys())
# check the number of market days in the year
num_days = len(stock_data_include[stocks_include[0]])
num_days
"""
Explanation: Stock Market Similarity Searches: Daily Returns
We have provided a year of daily returns for 379 S&P 500 stocks. We have explicitly excluded stocks with incomplete or missing data. We have pre-loaded 350 stocks in the database, and have excluded 29 stocks for later use in similarity searches.
Data source: <a href='http://www.stockwiz.com'>www.stockwiz.com</a>
End of explanation
"""
# 1. load the database server
# when running from the terminal
# python go_server_persistent.py --ts_length 244 --db_name 'stock_prices'
# here we load the server as a subprocess for demonstration purposes
server = subprocess.Popen(['python', '../go_server_persistent.py',
'--ts_length', str(num_days), '--data_dir', '../db_files', '--db_name', 'stock_returns'])
time.sleep(5) # make sure it loads completely
# 2. load the database webserver
# when running from the terminal
# python go_webserver.py
# here we load the server as a subprocess for demonstration purposes
webserver = subprocess.Popen(['python', '../go_webserver.py'])
time.sleep(5) # make sure it loads completely
# 3. import the web interface and initialize it
from webserver import *
web_interface = WebInterface()
"""
Explanation: Database Initialization
Let's start by initializing all the database components.
End of explanation
"""
# # insert into database
# for stock in stocks_include:
# web_interface.insert_ts(pk=stock, ts=TimeSeries(range(num_days), stock_data_include[stock]))
"""
Explanation: Stock Data Initialization
The database is now up and running. We have pre-loaded the data for you, but you can always unquote the code below to re-load the data if you accidentally delete it.
End of explanation
"""
len(web_interface.select())
"""
Explanation: Let's check how many stocks are currently in the database (should be 350).
End of explanation
"""
# let's look at the first 10 stocks
web_interface.select(fields=['ts'], additional={'sort_by': '+pk', 'limit': 10})
"""
Explanation: Let's look at the first 10 stocks, to check that the data has been loaded correctly.
End of explanation
"""
# # randomly pick vantage points
# # note: this can be time-intensive for a large number of vantage points
# num_vps = 10
# random_vps = np.random.choice(len(stocks_include), size=num_vps, replace=False)
# vpkeys = [stocks_include[s] for s in random_vps]
# # mark in database
# for vp in vpkeys:
# web_interface.insert_vp(vp)
"""
Explanation: Vantage Point Search
We need to initialize vantage points in order to carry out a vantage point search. Again, this has already been done for you, but you can re-create the results by running the following code.
End of explanation
"""
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
"""
Explanation: Let's pick one of our excluded stocks and carry out a vantage point similarity search.
End of explanation
"""
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the isax tree similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
# could not find a match
if result == 'ERROR: NO_MATCH':
print('Could not find a similar stock.')
# found a match
else:
# closest time series
stock_match = list(result)[0]
stock_ts = web_interface.select(fields=['ts'], md={'pk': stock_match})[stock_match]['ts']
print('Most similar stock:', stock_match)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(stock_ts.values(), label='Result:' + stock_match)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
"""
Explanation: iSAX Tree Search
Let's pick another one of our excluded stocks and carry out an iSAX tree similarity search. Note that this is an approximate search technique, so it will not always be able to find a similar stock.
End of explanation
"""
# pick the stock
stock = np.random.choice(stocks_exclude)
print('Stock:', stock)
# run the vantage point similarity search
result = web_interface.vp_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]), 1)
match_vp = list(result)[0]
ts_vp = web_interface.select(fields=['ts'], md={'pk': match_vp})[match_vp]['ts']
print('VP search result:', match_vp)
# run the isax similarity search
result = web_interface.isax_similarity_search(TimeSeries(range(num_days), stock_data_exclude[stock]))
# could not find an isax match
if result == 'ERROR: NO_MATCH':
print('iSAX search result: Could not find a similar stock.')
# found a match
else:
# closest time series
match_isax = list(result)[0]
ts_isax = web_interface.select(fields=['ts'], md={'pk': match_isax})[match_isax]['ts']
print('iSAX search result:', match_isax)
# visualize similarity
plt.plot(stock_data_exclude[stock], label='Query:' + stock)
plt.plot(ts_vp.values(), label='Result:' + match_vp)
plt.plot(ts_isax.values(), label='Result:' + match_isax)
plt.xticks([])
plt.legend(loc='best')
plt.title('Daily Stock Return Similarity')
plt.show()
"""
Explanation: Comparing Similarity Searches
Now, let's pick one more random stock, carry out both types of similarity searches, and compare the results.
End of explanation
"""
print(web_interface.isax_tree())
"""
Explanation: iSAX Tree Representation
Finally, let's visualize the iSAX tree. The clusters represent groups of "similar" stocks.
End of explanation
"""
# terminate processes before exiting
os.kill(server.pid, signal.SIGINT)
time.sleep(5) # give it time to terminate
web_interface = None
webserver.terminate()
"""
Explanation: Termination
Always remember to terminate any outstanding processes!
End of explanation
"""
|
mdeff/ntds_2017 | projects/reports/movie_success/Treat_Metacritic_ROI.ipynb | mit | %matplotlib inline
import configparser
import os
import requests
from tqdm import tqdm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
from scipy import sparse, stats, spatial
import scipy.sparse.linalg
from sklearn import preprocessing, decomposition
import librosa
import IPython.display as ipd
import json
#added by me:
import requests
from pygsp import graphs, filters, plotting
plt.rcParams['figure.figsize'] = (17, 5)
plotting.BACKEND = 'matplotlib'
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6);
"""
Explanation: Metacritic and ROI Analysis
End of explanation
"""
df = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv')
#df = df[df['Metacritic'] != 0]
df.head()
"""
Explanation: Load Dataset
End of explanation
"""
unique, counts = np.unique(df['Metacritic'], return_counts=True)
plt.bar(unique,counts,align='center',width=.6);
ratings_nz = np.array(df[df['Metacritic'] != 0]['Metacritic'])
mu = np.mean(ratings_nz)
std = np.std(ratings_nz)
plt.xlabel('Ratings')
plt.ylabel('Counts')
plt.title("Metacritic Ratings ($ \mu=%.2f,$ $\sigma=%.2f $)" %(mu,std));
plt.savefig('images/Metacritic_distribution.png')
"""
Explanation: Metacritic Ratings Representation
End of explanation
"""
plt.hist(df['ROI'],bins='auto');
data = np.array(df['ROI'])
# This is the colormap I'd like to use.
cm = plt.cm.get_cmap('RdYlGn');
# Plot histogram.
n, bins, patches = plt.hist(data, 25, normed=1, color='yellow');
bin_centers = 0.5 * (bins[:-1] + bins[1:]);
# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
plt.setp(p, 'facecolor', cm(c));
plt.xlabel('ROI');
plt.savefig('images/ROI_regression.png');
plt.show();
np.percentile(df['ROI'], 75)
"""
Explanation: ROI Representation
End of explanation
"""
df.to_csv('Saved_Datasets/NewFeaturesDataset.csv', encoding='utf-8', index=False)
"""
Explanation: Save Dataset
End of explanation
"""
print("%.2f" % (len(df[df['ROI']>1])/len(df)*100))
print("%.2f" % (len(df[df['Metacritic']>50])/len(df)*100))
"""
Explanation: Metacritic VS. ROI
End of explanation
"""
df_sorted = df.sort_values(by=['Metacritic'])
plt.plot(df_sorted['Metacritic'],df_sorted['ROI'])
plt.xlabel('Metacritic Ratings')
plt.ylabel('ROI')
plt.title('Evolution of ROI according to Metacritic ratings');
plt.savefig('images/roi_vs_metacritic.png')
"""
Explanation: We can see that the ROI and the ratings are not correlated, as the ROI doesn't necessarily increase for good movies:
End of explanation
"""
df_roi_sorted = df.sort_values(by=['ROI'],ascending=False)
df_met_sorted = df.sort_values(by=['Metacritic'],ascending=False)
mean_roi, mean_met = [], []
for r in np.arange(0.01, 1.0, 0.01):
limit_roi = df_roi_sorted.iloc[int(len(df)*r)]['ROI']
limit_met = df_met_sorted.iloc[int(len(df)*r)]['Metacritic']
success_roi = df[df['ROI'] > limit_roi]
success_met = df[df['Metacritic'] > limit_met]
mean_roi.append([r,np.mean(success_roi['Metacritic'])])
mean_met.append([r,np.mean(success_met['ROI'])])
mean_roi = np.array(mean_roi)
mean_met = np.array(mean_met)
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(mean_roi[:,0],mean_roi[:,1]);
axarr[0].set_ylabel('Metacritic Mean')
axarr[1].plot(mean_met[:,0],mean_met[:,1]);
axarr[1].set_xlabel('Success/Failure Ratio')
axarr[1].set_ylabel('ROI')
f.subplots_adjust(hspace=0);
plt.setp([a.get_xticklabels() for a in f.axes[:-1]], visible=False);
ratio = 0.2
df_sorted = df.sort_values(by=['ROI'],ascending=False)
limit_roi = df_sorted.iloc[int(len(df)*ratio)]['ROI']
success = df[df['ROI'] > limit_roi]
failure = df[df['ROI'] <= limit_roi]
print("The ROI needed to be a successful movie is: "+str(limit_roi)[:4])
print("There are "+str(int(len(df)*ratio))+" successful movies in the dataset.")
"""
Explanation: How to determine the success of a movie?
First try: consider the movies with the highest ROI (say, the top 30%) to be the successful ones.
To determine an optimal ratio, look for a ratio that is high enough while still leading to a maximal Metacritic mean:
End of explanation
"""
df = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv')
df = df.drop(df[df.Metacritic == 0].index)
crit_norm = np.array(df['Metacritic'])
w = np.zeros((len(df),len(df)))
for i in range(0,len(df)):
for j in range(i,len(df)):
if (i == j):
w[i,j] = 0
continue
if (crit_norm[i] == 0 or crit_norm[j] == 0):
w[i,j] = w[j,i] = 0
else:
w[i,j] = w[j,i] = 1.0 - (abs(crit_norm[i]-crit_norm[j])/100)
plt.hist(w.reshape(-1), bins=50);
plt.title('Metacritic weights matrix histogram')
plt.savefig('images/metacritic_weights_hist.png')
print('The mean value is: {}'.format(w.mean()))
print('The max value is: {}'.format(w.max()))
print('The min value is: {}'.format(w.min()))
plt.spy(w)
"""
Explanation: Create Normalized Metacritic Weight Matrix
$$ W(i,j) = \begin{cases}
0 & \text{if } Metacritic(i) = 0 \text{ or } Metacritic(j) = 0 \\
1-\frac{\left|Metacritic(i) - Metacritic(j)\right|}{100} & \text{otherwise} \end{cases}$$
End of explanation
"""
W = pd.DataFrame(w)
W.head()
W.to_csv('Saved_Datasets/NormalizedMetacriticW.csv', encoding='utf-8', index=False)
"""
Explanation: Save as csv
End of explanation
"""
degrees = np.zeros(len(w))
#reminder: the degrees of a node for a weighted graph are the sum of its weights
for i in range(0, len(w)):
degrees[i] = sum(w[i])
plt.hist(degrees, bins=50);
#reminder: L = D - W for weighted graphs
laplacian = np.diag(degrees) - w
#computation of the normalized Laplacian
laplacian_norm = scipy.sparse.csgraph.laplacian(w, normed = True)
plt.spy(laplacian_norm);
plt.spy(np.diag(degrees))
NEIGHBORS = 300
#sort the order of the weights
sort_order = np.argsort(w, axis = 1)
#declaration of a sorted weight matrix
sorted_weights = np.zeros((len(w), len(w)))
for i in range (0, len(w)):
for j in range(0, len(w)):
if (j >= len(w) - NEIGHBORS):
#copy the k strongest edges for each node
sorted_weights[i, sort_order[i,j]] = w[i,sort_order[i,j]]
else:
#set the other edges to zero
sorted_weights[i, sort_order[i,j]] = 0
#ensure the matrix is symmetric
bigger = sorted_weights.transpose() > sorted_weights
sorted_weights = sorted_weights - sorted_weights*bigger + sorted_weights.transpose()*bigger
np.fill_diagonal(sorted_weights, 0)
plt.spy(sorted_weights)
#reminder: L = D - W for weighted graphs
laplacian = np.diag(degrees) - sorted_weights
#computation of the normalized Laplacian
laplacian_norm = scipy.sparse.csgraph.laplacian(sorted_weights, normed = True)
np.fill_diagonal(laplacian_norm, 1)
plt.spy(laplacian_norm);
laplacian_norm = sparse.csr_matrix(laplacian_norm)
eigenvalues, eigenvectors = sparse.linalg.eigsh(laplacian_norm, k = 10, which = 'SM')
plt.plot(eigenvalues, '.-', markersize=15);
plt.xlabel('')
plt.ylabel('Eigenvalues')
plt.show()
success = preprocessing.LabelEncoder().fit_transform(df['success'])
print(success)
x = eigenvectors[:, 1]
y = eigenvectors[:, 2]
plt.scatter(x, y, c=success, cmap='RdBu', alpha=0.5);
G = graphs.Graph(sorted_weights)
G.compute_laplacian('normalized')
G.compute_fourier_basis(recompute=True)
plt.plot(G.e[0:10]);
G.set_coordinates(G.U[:,1:3])
G.plot()
G.plot_signal(success, vertex_size=20)
"""
Explanation: Embedding
End of explanation
"""
# repo: rflamary/POT, file: notebooks/plot_barycenter_1D.ipynb (MIT license)
# Author: Remi Flamary <remi.flamary@unice.fr>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import ot
# necessary for 3d plot even if not used
from mpl_toolkits.mplot3d import Axes3D # noqa
from matplotlib.collections import PolyCollection
"""
Explanation: 1D Wasserstein barycenter demo
This example illustrates the computation of the regularized Wasserstein barycenter
as proposed in [3].
[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015).
Iterative Bregman projections for regularized transportation problems.
SIAM Journal on Scientific Computing, 37(2), A1111-A1138.
End of explanation
"""
#%% parameters
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a1 = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
a2 = ot.datasets.make_1D_gauss(n, m=60, s=8)
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
"""
Explanation: Generate data
End of explanation
"""
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
"""
Explanation: Plot data
End of explanation
"""
#%% barycenter computation
alpha = 0.2 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
"""
Explanation: Barycenter computation
End of explanation
"""
#%% barycenter interpolation
n_alpha = 11
alpha_list = np.linspace(0, 1, n_alpha)
B_l2 = np.zeros((n, n_alpha))
B_wass = np.copy(B_l2)
for i in range(0, n_alpha):
alpha = alpha_list[i]
weights = np.array([1 - alpha, alpha])
B_l2[:, i] = A.dot(weights)
B_wass[:, i] = ot.bregman.barycenter(A, M, reg, weights)
#%% plot interpolation
pl.figure(3)
cmap = pl.cm.get_cmap('viridis')
verts = []
zs = alpha_list
for i, z in enumerate(zs):
ys = B_l2[:, i]
verts.append(list(zip(x, ys)))
ax = pl.gcf().gca(projection='3d')
poly = PolyCollection(verts, facecolors=[cmap(a) for a in alpha_list])
poly.set_alpha(0.7)
ax.add_collection3d(poly, zs=zs, zdir='y')
ax.set_xlabel('x')
ax.set_xlim3d(0, n)
ax.set_ylabel('$\\alpha$')
ax.set_ylim3d(0, 1)
ax.set_zlabel('')
ax.set_zlim3d(0, B_l2.max() * 1.01)
pl.title('Barycenter interpolation with l2')
pl.tight_layout()
pl.figure(4)
cmap = pl.cm.get_cmap('viridis')
verts = []
zs = alpha_list
for i, z in enumerate(zs):
ys = B_wass[:, i]
verts.append(list(zip(x, ys)))
ax = pl.gcf().gca(projection='3d')
poly = PolyCollection(verts, facecolors=[cmap(a) for a in alpha_list])
poly.set_alpha(0.7)
ax.add_collection3d(poly, zs=zs, zdir='y')
ax.set_xlabel('x')
ax.set_xlim3d(0, n)
ax.set_ylabel('$\\alpha$')
ax.set_ylim3d(0, 1)
ax.set_zlabel('')
ax.set_zlim3d(0, B_l2.max() * 1.01)
pl.title('Barycenter interpolation with Wasserstein')
pl.tight_layout()
pl.show()
"""
Explanation: Barycentric interpolation
End of explanation
"""
# repo: YoungKwonJo/mlxtend, file: docs/examples/classifier_nn_mlp.ipynb (BSD-3-Clause license)
from mlxtend.data import iris_data
X, y = iris_data()
X = X[:, 2:]
"""
Explanation: mlxtend - Multilayer Perceptron Examples
Sections
Classify Iris
Classify handwritten digits from MNIST
<br>
<br>
Classify Iris
Load 2 features from Iris (petal length and petal width) for visualization purposes.
End of explanation
"""
from mlxtend.classifier import NeuralNetMLP
import numpy as np
nn1 = NeuralNetMLP(n_output=3,
n_features=X.shape[1],
n_hidden=30,
l2=0.0,
l1=0.0,
epochs=5000,
eta=0.001,
alpha=0.00,
minibatches=1,
shuffle=True,
random_state=0)
nn1.fit(X, y)
y_pred = nn1.predict(X)
acc = np.sum(y == y_pred, axis=0) / X.shape[0]
print('Accuracy: %.2f%%' % (acc * 100))
"""
Explanation: Train neural network for 3 output flower classes ('Setosa', 'Versicolor', 'Virginica'), regular gradient descent (minibatches=1), 30 hidden units, and no regularization.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(range(len(nn1.cost_)), nn1.cost_)
plt.ylim([0, 300])
plt.ylabel('Cost')
plt.xlabel('Epochs')
plt.grid()
plt.show()
"""
Explanation: Now, check if the gradient descent converged after 5000 epochs, and choose a smaller learning rate (eta) otherwise.
End of explanation
"""
X_std = np.copy(X)
for i in range(2):
X_std[:,i] = (X[:,i] - X[:,i].mean()) / X[:,i].std()
nn2 = NeuralNetMLP(n_output=3,
n_features=X_std.shape[1],
n_hidden=30,
l2=0.0,
l1=0.0,
epochs=1000,
eta=0.05,
alpha=0.1,
minibatches=1,
shuffle=True,
random_state=1)
nn2.fit(X_std, y)
y_pred = nn2.predict(X_std)
acc = np.sum(y == y_pred, axis=0) / X_std.shape[0]
print('Accuracy: %.2f%%' % (acc * 100))
plt.plot(range(len(nn2.cost_)), nn2.cost_)
plt.ylim([0, 300])
plt.ylabel('Cost')
plt.xlabel('Epochs')
plt.show()
"""
Explanation: Standardize features for smoother and faster convergence.
End of explanation
"""
from mlxtend.evaluate import plot_decision_regions
plot_decision_regions(X, y, clf=nn1)
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.show()
"""
Explanation: Visualize the decision regions.
End of explanation
"""
from mlxtend.data import mnist_data
X, y = mnist_data()
"""
Explanation: <br>
<br>
Classify handwritten digits from MNIST
Load a 5000-sample subset of the MNIST dataset.
End of explanation
"""
def plot_digit(X, y, idx):
img = X[idx].reshape(28,28)
plt.imshow(img, cmap='Greys', interpolation='nearest')
plt.title('true label: %d' % y[idx])
plt.show()
plot_digit(X, y, 4)
"""
Explanation: Visualize a sample from the MNIST dataset.
End of explanation
"""
nn = NeuralNetMLP(n_output=10, n_features=X.shape[1],
n_hidden=100,
l2=0.0,
l1=0.0,
epochs=300,
eta=0.0005,
alpha=0.0,
minibatches=50,
random_state=1)
"""
Explanation: Initialize the neural network to recognize the 10 different digits (0-9) using 300 epochs and minibatch learning.
End of explanation
"""
nn.fit(X, y, print_progress=True)
y_pred = nn.predict(X)
acc = np.sum(y == y_pred, axis=0) / X.shape[0]
print('Accuracy: %.2f%%' % (acc * 100))
"""
Explanation: Learn the features while printing the progress to get an idea about how long it may take.
End of explanation
"""
plt.plot(range(len(nn.cost_)), nn.cost_)
plt.ylim([0, 500])
plt.ylabel('Cost')
plt.xlabel('Mini-batches * Epochs')
plt.show()
plt.plot(range(len(nn.cost_)//50), nn.cost_[::50], color='red')
plt.ylim([0, 500])
plt.ylabel('Cost')
plt.xlabel('Epochs')
plt.show()
"""
Explanation: Check for convergence.
End of explanation
"""
# repo: NeuroDataDesign/fngs, file: docs/ebridge2/fngs_merge/week_0613/friston.ipynb (Apache-2.0 license)
import numpy as np
import matplotlib.pyplot as plt  # needed for the simulation plots below
def friston_model(mc_params):
(t, m) = mc_params.shape
friston = np.zeros((t, 4*m))
# the motion parameters themselves
friston[:, 0:m] = mc_params
# square the motion parameters
friston[:, m:2*m] = np.square(mc_params)
# use the motion estimated at the preceding timepoint
# as a regressor
friston[1:, 2*m:3*m] = mc_params[:-1, :]
# use the motion estimated at the preceding timepoint
# squared as a regressor
friston[:, 3*m:4*m] = np.square(friston[:, 2*m:3*m])
return friston
"""
Explanation: Friston 24 Parameter Model
Even after perfect realignment, Friston et al. have shown in their motion-correction paper that movement-related artifacts are still present in the BOLD signal. These motion artifacts can be divided into two types:
motion artifacts due to the per-volume motion: these motion artifacts are due to the motion of the subject's head at the current timepoint.
motion artifacts due to preceding volume motion: these artifacts are due to motion of the subject's head at preceding timepoints.
As shown by Friston et al., this motion, while sometimes substantial and making up a significant fraction of the fMRI signal, can be effectively removed by incorporating the subject motion parameters estimated in the volume realignment step into a nuisance GLM. Under this model, the time series can be written as follows:
\begin{align}
Y &= WR + T
\end{align}
where $Y \in \mathbb{R}^{t \times n}$ for $t$ timesteps, $n$ voxels, $W \in \mathbb{R}^{t \times r}$ is our design matrix with $r$ regressors, $R \in \mathbb{R}^{r \times n}$ are our regressor coefficients. Then $WR$ is our contribution to the BOLD signal that is modellable by our identified regressors $W$, and $T \in \mathbb{R}^{t \times n}$ is the true timeseries we seek.
We can solve this system by least squares, under the assumption that the squared norm of the residual $T$ (the part of the signal not modellable by our regressors) should be minimal:
\begin{align}
R &= (W^TW)^{-1}W^TY
\end{align}
We incorporate our nuisance regressors into the design matrix:
\begin{align}
W = \begin{bmatrix} f_1 & f^2_1 & s_1 & s^2_1 \\ \vdots & \vdots & \vdots & \vdots \\ f_t & f^2_t & s_t & s^2_t \end{bmatrix}
\end{align}
where $f$ are our first order regressors estimated by mcflirt, and $s(t) = f(t-1)$ with $s(0) = 0$ are our regressors dependent on the previous timepoint. Then $f^2(t)$ and $s^2(t)$ are the squared regressors.
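The least-squares step can be sketched in a few lines of NumPy. The variable names and sizes here are made up for illustration:

```python
import numpy as np

# Hypothetical sizes: 50 timepoints, 10 voxels, 4 regressors.
rng = np.random.RandomState(0)
t_pts, n_vox, n_reg = 50, 10, 4
W = rng.randn(t_pts, n_reg)                 # design matrix of nuisance regressors
R_true = rng.randn(n_reg, n_vox)
Y = W @ R_true                              # noiseless signal, for the demo only
R_hat, *_ = np.linalg.lstsq(W, Y, rcond=None)
T = Y - W @ R_hat                           # the motion-cleaned time series
```

In this noiseless demo the residual is (numerically) zero; on real data $T$ is the signal of interest.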
Pseudocode
Friston Model
Inputs:
+ mc_params $\in \mathbb{R}^{t \times 6}$: a t x 6 array of the x/y/z translational, and x/y/z rotational motion parameters estimated by FLIRT.
Outputs:
+ friston $\in \mathbb{R}^{t \times 24}$: a t x 24 array of the friston-24 motion regressors.
friston(mc_params):
+ friston = zeros(t, 24) # initialize the friston regressors
+ friston[:, 0:6] = mc_params # the 1st order motion parameters
+ friston[:, 6:12] = mc_params.$^2$ # the 2nd order motion parameters
+ friston[1:end, 12:18] = mc_params[0:end - 1, :] # each timepoint has a regressor of previous timepoint
+ friston[:, 18:24] = friston[:, 12:18].$^2$ # second order regressor of previous timepoint
+ return(friston)
Implementation
End of explanation
"""
t = 20
m = 2
motion = .2*np.random.rand(t, m) + np.column_stack((.05*np.array(range(0, t)), .07*np.array(range(0, t))))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, t), motion, label='first order regs')
ax.set_ylabel('Simulated Motion')
ax.set_xlabel('Timepoint')
ax.set_title('Simulated motion parameters estimated by FLIRT')
ax.legend()
fig.tight_layout()
fig.show()
"""
Explanation: Simulations
Basic Simulation
For our first simulation, we will just look at the parameters estimated by the Friston model for a simple set of motion regressors. We will use $m=2$ regressors with $t=20$ timesteps. Our signal will be two linear trends with some uniform random noise.
End of explanation
"""
friston = friston_model(motion)
friston.shape == (t, 4*m) # make sure that we have 4 sets of 2 regressors
"""
Explanation: As we can see, we have 2 regressors for motion, which keeps our visualizations as simple as possible (the logic is the same with 6 instead, except we would have 6 lines for each category instead of 2). Below, we visualize the first order motion regressors.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, t), motion, label='first order regs')
ax.plot(range(0, t), friston[:, m:2*m], label='second order regs')
ax.set_ylabel('Simulated Motion')
ax.set_xlabel('Timepoint')
ax.set_title('Comparison of first and second order regressors')
ax.legend()
fig.tight_layout()
fig.show()
"""
Explanation: We will show plots of the first order regressors with the second order regressors, and the shifted regressors, separately to make visualization easier. We know from our friston code the ordering of the regressors so this should be easy:
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, t), motion, label='first order regs')
ax.plot(range(0, t), friston[:, 2*m:3*m], label='shifted first order')
ax.set_ylabel('Simulated Motion')
ax.set_xlabel('Timepoint')
ax.set_title('Comparison of first order and first order shifted')
ax.legend()
fig.tight_layout()
fig.show()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, t), friston[:, m:2*m], label='second order regs')
ax.plot(range(0, t), friston[:, 3*m:4*m], label='shifted second order')
ax.set_ylabel('Simulated Motion')
ax.set_xlabel('Timepoint')
ax.set_title('Comparison of second order and second order shifted')
ax.legend()
fig.tight_layout()
fig.show()
"""
Explanation: Visually, these appear to be roughly correct; we can see that the final red timepoint, for instance, is the square of the final orange timepoint. Let's take a look at the other regressors following a similar approach:
End of explanation
"""
# repo: mdda/fossasia-2016_deep-learning, file: notebooks/2-CNN/6-StyleTransfer/4-Art-Style-Transfer-inception_tf.ipynb (MIT license)
import tensorflow as tf
import numpy as np
import scipy
import scipy.misc # for imresize
import matplotlib.pyplot as plt
%matplotlib inline
import time
from urllib.request import urlopen # Python 3+ version (instead of urllib2)
import os # for directory listings
import pickle
AS_PATH='./images/art-style'
"""
Explanation: Art Style Transfer
This notebook is a re-implementation of the algorithm described in "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576) by Gatys, Ecker and Bethge. Additional details of their method are available at http://arxiv.org/abs/1505.07376 and http://bethgelab.org/deepneuralart/.
An image is generated which combines the content of a photograph with the "style" of a painting. This is accomplished by jointly minimizing the squared difference between feature activation maps of the photo and generated image, and the squared difference of feature correlation between painting and generated image. A total variation penalty is also applied to reduce high frequency noise.
This notebook was originally sourced from Lasagne Recipes, but has been modified to use a GoogLeNet network (pre-trained and pre-loaded), in TensorFlow and given some features to make it easier to experiment with.
Other implementations :
* https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/15_Style_Transfer.ipynb (with video)
* https://github.com/cysmith/neural-style-tf
* https://github.com/anishathalye/neural-style
End of explanation
"""
import os, sys
tf_zoo_models_dir = './models/tensorflow_zoo'
if not os.path.exists(tf_zoo_models_dir):
print("Creating %s directory" % (tf_zoo_models_dir,))
os.makedirs(tf_zoo_models_dir)
if not os.path.isfile( os.path.join(tf_zoo_models_dir, 'models', 'README.md') ):
print("Cloning tensorflow model zoo under %s" % (tf_zoo_models_dir, ))
!cd {tf_zoo_models_dir}; git clone https://github.com/tensorflow/models.git
sys.path.append(tf_zoo_models_dir + "/models/research/slim")
print("Model Zoo model code installed")
"""
Explanation: Add TensorFlow Slim Model Zoo to path
End of explanation
"""
from datasets import dataset_utils
targz = "inception_v1_2016_08_28.tar.gz"
url = "http://download.tensorflow.org/models/"+targz
checkpoints_dir = './data/tensorflow_zoo/checkpoints'
if not os.path.exists(checkpoints_dir):
os.makedirs(checkpoints_dir)
if not os.path.isfile( os.path.join(checkpoints_dir, 'inception_v1.ckpt') ):
tarfilepath = os.path.join(checkpoints_dir, targz)
if os.path.isfile(tarfilepath):
import tarfile
tarfile.open(tarfilepath, 'r:gz').extractall(checkpoints_dir)
else:
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
# Get rid of tarfile source (the checkpoint itself will remain)
os.unlink(tarfilepath)
print("Checkpoint available locally")
slim = tf.contrib.slim
from nets import inception
from preprocessing import inception_preprocessing
image_size = inception.inception_v1.default_image_size
IMAGE_W=224
image_size
def prep_image(im):
if len(im.shape) == 2:
im = im[:, :, np.newaxis]
im = np.repeat(im, 3, axis=2)
# Resize so smallest dim = 224, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = scipy.misc.imresize(im, (224, int(w*224/h)))
else:
im = scipy.misc.imresize(im, (int(h*224/w), 224))
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Now rescale it to [-1,+1].float32 from [0..255].unit8
im = ( im.astype('float32')/255.0 - 0.5 ) * 2.0
return rawim, im
"""
Explanation: The Inception v1 (GoogLeNet) Architecture|
Download the Inception V1 checkpoint¶
Functions for building the GoogLeNet model with TensorFlow / slim and preprocessing the images are defined in model.inception_v1_tf - which was downloaded from the TensorFlow / slim Model Zoo.
The actual code for the slim model will be <a href="model/tensorflow_zoo/models/slim/nets/inception_v1.py" target=_blank>here</a>.
End of explanation
"""
photos = [ '%s/photos/%s' % (AS_PATH, f) for f in os.listdir('%s/photos/' % AS_PATH) if not f.startswith('.')]
photo_i=-1 # will be incremented in next cell (i.e. to start at [0])
"""
Explanation: Choose the Photo to be Enhanced
End of explanation
"""
photo_i += 1
photo = plt.imread(photos[photo_i % len(photos)])
photo_rawim, photo = prep_image(photo)
plt.imshow(photo_rawim)
"""
Explanation: Executing the cell below will iterate through the images in the ./images/art-style/photos directory, so you can choose the one you want
End of explanation
"""
styles = [ '%s/styles/%s' % (AS_PATH, f) for f in os.listdir('%s/styles/' % AS_PATH) if not f.startswith('.')]
style_i=-1 # will be incremented in next cell (i.e. to start at [0])
"""
Explanation: Choose the photo with the required 'Style'
End of explanation
"""
style_i += 1
style = plt.imread(styles[style_i % len(styles)])
style_rawim, style = prep_image(style)
plt.imshow(style_rawim)
def plot_layout(artwork):
def no_axes():
plt.gca().xaxis.set_visible(False)
plt.gca().yaxis.set_visible(False)
plt.figure(figsize=(9,6))
plt.subplot2grid( (2,3), (0,0) )
no_axes()
plt.imshow(photo_rawim)
plt.subplot2grid( (2,3), (1,0) )
no_axes()
plt.imshow(style_rawim)
plt.subplot2grid( (2,3), (0,1), colspan=2, rowspan=2 )
no_axes()
plt.imshow(artwork, interpolation='nearest')
plt.tight_layout()
"""
Explanation: Executing the cell below will iterate through the images in the ./images/art-style/styles directory, so you can choose the one you want
End of explanation
"""
tf.reset_default_graph()
# This creates an image 'placeholder' - image inputs should be (224,224,3).float32 each [-1.0,1.0]
input_image_float = tf.placeholder(tf.float32, shape=[None, None, 3], name='input_image_float')
#input_image_var = tf.Variable(tf.zeros([image_size,image_size,3], dtype=tf.uint8), name='input_image_var' )
# Define the pre-processing chain within the graph - based on the input 'image' above
#processed_image = inception_preprocessing.preprocess_image(input_image, image_size, image_size, is_training=False)
processed_image = input_image_float
processed_images = tf.expand_dims(processed_image, 0)
print("Model builder starting")
# Here is the actual model zoo model being instantiated :
with slim.arg_scope(inception.inception_v1_arg_scope()):
_, end_points = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
# Create an operation that loads the pre-trained model from the checkpoint
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
slim.get_model_variables('InceptionV1')
)
print("Model defined")
#dir(slim.get_model_variables('InceptionV1')[10])
#[ v.name for v in slim.get_model_variables('InceptionV1') ]
sorted(end_points.keys())
#dir(end_points['Mixed_4b'])
#end_points['Mixed_4b'].name
"""
Explanation: Precompute layer activations for photo and artwork
This takes ~ 20 seconds
End of explanation
"""
photo_layers = [
# used for 'content' in photo - a mid-tier convolutional layer
'Mixed_4b', #Theano : 'inception_4b/output',
# 'pool4/3x3_s2',
]
style_layers = [
# used for 'style' - conv layers throughout model (not same as content one)
'Conv2d_1a_7x7', #Theano : 'conv1/7x7_s2',
'Conv2d_2c_3x3', #Theano : 'conv2/3x3',
'Mixed_3b', #Theano : 'inception_3b/output',
'Mixed_4d', #Theano : 'inception_4d/output',
# 'conv1/7x7_s2', 'conv2/3x3', 'pool3/3x3_s2', 'inception_5b/output',
]
all_layers = photo_layers+style_layers
# Actually, we'll capture more data than necessary, so we can compare the how they look (below)
photo_layers_capture = all_layers # more minimally = photo_layers
style_layers_capture = all_layers # more minimally = style_layers
"""
Explanation: So that gives us a palette of GoogLeNet layers from which we can choose to pay attention to :
End of explanation
"""
# Now let's run the pre-trained model on the photo and the style
style_features={}
photo_features={}
with tf.Session() as sess:
# This is the loader 'op' we defined above
init_fn(sess)
# This run grabs all the layer constants for the original photo image input
photo_layers_np = sess.run([ end_points[k] for k in photo_layers_capture ], feed_dict={input_image_float: photo})
for i,l in enumerate(photo_layers_np):
photo_features[ photo_layers_capture[i] ] = l
# This run grabs all the layer constants for the style image input
style_layers_np = sess.run([ end_points[k] for k in style_layers_capture ], feed_dict={input_image_float: style})
for i,l in enumerate(style_layers_np):
style_features[ style_layers_capture[i] ] = l
# Helpful display of the captured layer names and shapes
for i,name in enumerate(all_layers):
desc = []
if name in style_layers:
desc.append('style')
l=style_features[name]
if name in photo_layers:
desc.append('photo')
l=photo_features[name]
print(" Layer[%d].shape=%18s, %s.name = '%s'" % (i, str(l.shape), '+'.join(desc), name,))
"""
Explanation: Let's grab (constant) values for all the layers required for the original photo, and the style image :
End of explanation
"""
for name in all_layers:
print("Layer Name : '%s'" % (name,))
plt.figure(figsize=(12,6))
for i in range(4):
if name in photo_features:
plt.subplot(2, 4, i+1)
plt.imshow(photo_features[ name ][0, :, :, i], interpolation='nearest') # , cmap='gray'
plt.axis('off')
if name in style_features:
plt.subplot(2, 4, 4+i+1)
plt.imshow(style_features[ name ][0, :, :, i], interpolation='nearest') #, cmap='gray'
plt.axis('off')
plt.show()
"""
Explanation: Here are what the layers each see (photo on the top, style on the bottom for each set) :
End of explanation
"""
art_features = {}
for name in all_layers:
art_features[name] = end_points[name]
"""
Explanation: Define the overall loss / badness function
Let's now create model losses, which involve the end_points evaluated from the generated image, coupled with the appropriate constant layer losses from above :
End of explanation
"""
def gram_matrix(tensor):
shape = tensor.get_shape()
# Get the number of feature channels for the input tensor,
# which is assumed to be from a convolutional layer with 4-dim.
num_channels = int(shape[3])
# Reshape the tensor so it is a 2-dim matrix. This essentially
# flattens the contents of each feature-channel.
matrix = tf.reshape(tensor, shape=[-1, num_channels])
# Calculate the Gram-matrix as the matrix-product of
# the 2-dim matrix with itself. This calculates the
# dot-products of all combinations of the feature-channels.
gram = tf.matmul(tf.transpose(matrix), matrix)
return gram
def content_loss(P, X, layer):
p = tf.constant( P[layer] )
x = X[layer]
loss = 1./2. * tf.reduce_mean(tf.square(x - p))
return loss
def style_loss(S, X, layer):
s = tf.constant( S[layer] )
x = X[layer]
S_gram = gram_matrix(s)
X_gram = gram_matrix(x)
layer_shape = s.get_shape()
N = layer_shape[1]
M = layer_shape[2] * layer_shape[3]
loss = tf.reduce_mean(tf.square(X_gram - S_gram)) / (4. * tf.cast( tf.square(N) * tf.square(M), tf.float32))
return loss
def total_variation_loss_l1(x):
loss = tf.add(
tf.reduce_sum(tf.abs(x[1:,:,:] - x[:-1,:,:])),
tf.reduce_sum(tf.abs(x[:,1:,:] - x[:,:-1,:]))
)
return loss
def total_variation_loss_lX(x):
loss = tf.reduce_sum(
tf.pow(
tf.square( x[1:,:-1,:] - x[:-1,:-1,:]) + tf.square( x[:-1,1:,:] - x[:-1,:-1,:]),
1.25)
)
return loss
# And here are some more TF nodes, to compute the losses using the layer values 'saved off' earlier
losses = []
# content loss
cl = 10.
losses.append(cl *1. * content_loss(photo_features, art_features, 'Mixed_4b'))
# style loss
sl = 2. *1000. *1000.
losses.append(sl *1. * style_loss(style_features, art_features, 'Conv2d_1a_7x7'))
losses.append(sl *1. * style_loss(style_features, art_features, 'Conv2d_2c_3x3'))
losses.append(sl *10. * style_loss(style_features, art_features, 'Mixed_3b'))
losses.append(sl *10. * style_loss(style_features, art_features, 'Mixed_4d'))
# total variation penalty
vp = 10. /1000. /1000.
losses.append(vp *1. * total_variation_loss_lX(input_image_float))
#losses.append(vp *1. * total_variation_loss_l1(input_image_float))
# ['193.694946', '5.038591', '1.713539', '8.238111', '0.034608', '9.986152']
# ['0.473700', '0.034096', '0.010799', '0.021023', '0.164272', '0.539243']
# ['2.659750', '0.238304', '0.073061', '0.190739', '0.806217', '3.915816']
# ['1.098473', '0.169444', '0.245660', '0.109285', '0.938582', '0.028973']
# ['0.603620', '1.707279', '0.498789', '0.181227', '0.060200', '0.002774']
# ['0.788231', '0.920096', '0.358549', '0.806517', '0.256121', '0.002777']
total_loss = tf.reduce_sum(losses)
# And define the overall symbolic gradient operation
total_grad = tf.gradients(total_loss, [input_image_float])[0]
"""
Explanation: This defines various measures of difference that we'll use to compare the current output image with the original sources.
End of explanation
"""
art_image = photo
#art_image = np.random.uniform(-1.0, +1.0, (image_size, image_size, 3))
x0 = art_image.flatten().astype('float64')
iteration=0
"""
Explanation: Get Ready for Optimisation by SciPy
This uses the L-BFGS-B routine:
* R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound Constrained Optimization, (1995), SIAM Journal on Scientific and Statistical Computing, 16, 5, pp. 1190-1208.
Initialize with the original photo, since going from noise (the code that's commented out) takes many more iterations :
End of explanation
"""
t0 = time.time()
with tf.Session() as sess:
init_fn(sess)
# This helper function (to interface with scipy.optimize) must close over sess
def eval_loss_and_grad(x): # x0 is a 3*image_size*image_size float64 vector
x_image = x.reshape(image_size,image_size,3).astype('float32')
x_loss, x_grad = sess.run( [total_loss, total_grad], feed_dict={input_image_float: x_image} )
print("\nEval Loss @ ", [ "%.6f" % l for l in x[100:106]], " = ", x_loss)
#print("Eval Grad = ", [ "%.6f" % l for l in x_grad.flatten()[100:106]] )
losses_ = sess.run( losses, feed_dict={input_image_float: x_image} )
print("Eval loss components = ", [ "%.6f" % l for l in losses_])
return x_loss.astype('float64'), x_grad.flatten().astype('float64')
x0, x0_loss, state = scipy.optimize.fmin_l_bfgs_b( eval_loss_and_grad, x0, maxfun=50)
iteration += 1
print("Iteration %d, in %.1fsec, Current loss : %.4f" % (iteration, float(time.time() - t0), x0_loss))
art_raw = np.clip( ((x0*0.5 + 0.5) * 255.0), a_min=0.0, a_max=255.0 )
plot_layout( art_raw.reshape(image_size,image_size,3).astype('uint8') )
"""
Explanation: Optimize all those losses, and show the image
To refine the result, just keep hitting 'run' on this cell (each iteration is about 60 seconds) :
End of explanation
"""
# repo: mdeff/ntds_2016, file: toolkit/04_ex_visualization.ipynb (MIT license)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Random time series.
n = 1000
rs = np.random.RandomState(42)
data = rs.randn(n, 4).cumsum(axis=0)
# plt.figure(figsize=(15,5))
# plt.plot(data[:, 0])
# df = pd.DataFrame(...)
# df.plot(...)
"""
Explanation: A Python Tour of Data Science: Data Visualization
Michaël Defferrard, PhD student, EPFL LTS2
Exercise
Data visualization is a key aspect of exploratory data analysis.
During this exercise we'll gradually build more and more complex visualizations. We'll do this by replicating plots. Try to reproduce not only the lines, but also the axis labels, legends and titles.
Goal of data visualization: clearly and efficiently communicate information through visual representations. While tables are generally used to look up a specific measurement, charts are used to show patterns or relationships.
Means: mainly statistical graphics for exploratory analysis, e.g. scatter plots, histograms, probability plots, box plots, residual plots, but also infographics for communication.
Data visualization is both an art and a science. It should combine both aesthetic form and functionality.
1 Time series
To start slowly, let's make a static line plot from some time series. Reproduce the plots below using:
1. The procedural API of matplotlib, the main data visualization library for Python. Its procedural API is similar to MATLAB's and convenient for interactive work.
2. Pandas, which wraps matplotlib around his DataFrame format and makes many standard plots easy to code. It offers many helpers for data visualization.
Hint: to plot with pandas, you first need to create a DataFrame, pandas' tabular data format.
End of explanation
"""
data = [10, 40, 25, 15, 10]
categories = list('ABCDE')
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Right plot.
# axes[1].
# axes[1].
# Left plot.
# axes[0].
# axes[0].
"""
Explanation: 2 Categories
Categorical data is best represented by bar or pie charts. Reproduce the plots below using the object-oriented API of matplotlib, which is recommended for programming.
Question: What are the pros / cons of each plot ?
Tip: the matplotlib gallery is a convenient starting point.
End of explanation
"""
import seaborn as sns
import os
df = sns.load_dataset('iris', data_home=os.path.join('..', 'data'))
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
# Your code for Seaborn: distplot() and boxplot().
import ggplot
# Your code for ggplot.
import altair
# altair.Chart(df).mark_bar(opacity=.75).encode(
# x=...,
# y=...,
# color=...
# )
"""
Explanation: 3 Frequency
A frequency plot is a graph that shows the pattern in a set of data by plotting how often particular values of a measure occur. They often take the form of an histogram or a box plot.
Reproduce the plots with the following three libraries, which provide high-level declarative syntax for statistical visualization as well as a convenient interface to pandas:
* Seaborn is a statistical visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Its advantage is that you can modify the produced plots with matplotlib, so you loose nothing.
* ggplot is a (partial) port of the popular ggplot2 for R. It has his roots in the influencial book the grammar of graphics. Convenient if you know ggplot2 already.
* Vega is a declarative format for statistical visualization based on D3.js, a low-level JavaScript library for interactive visualization. Vincent (discontinued) and Altair are Python interfaces to Vega. Altair is quite new and does not provide all the needed functionality yet, but it is promising!
Hints:
* Seaborn, look at distplot() and boxplot().
* ggplot, we are interested by the geom_histogram geometry.
End of explanation
"""
# One line with Seaborn.
"""
Explanation: 4 Correlation
Scatter plots are very much used to assess the correlation between 2 variables. Pair plots are then a useful way of displaying the pairwise relations between variables in a dataset.
Use the seaborn pairplot() function to analyze how separable is the iris dataset.
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# df['pca1'] =
# df['pca2'] =
# df['tsne1'] =
# df['tsne2'] =
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
sns.swarmplot(x='pca1', y='pca2', data=df, hue='species', ax=axes[0])
sns.swarmplot(x='tsne1', y='tsne2', data=df, hue='species', ax=axes[1]);
"""
Explanation: 5 Dimensionality reduction
Humans can only comprehend up to 3 dimensions (in space, then there is e.g. color or size), so dimensionality reduction is often needed to explore high dimensional datasets. Analyze how separable is the iris dataset by visualizing it in a 2D scatter plot after reduction from 4 to 2 dimensions with two popular methods:
1. The classical principal componant analysis (PCA).
2. t-distributed stochastic neighbor embedding (t-SNE).
Hints:
* t-SNE is a stochastic method, so you may want to run it multiple times.
* The easiest way to create the scatter plot is to add columns to the pandas DataFrame, then use the Seaborn swarmplot().
End of explanation
"""
keylime1/courses_12-752 | assignments/2/12-752_Assignment_2_Starter.ipynb | mit
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt

temperatureDateConverter = lambda d : dt.datetime.strptime(d,'%Y-%m-%d %H:%M:%S')
temperature = np.genfromtxt('../../data/temperature.csv',delimiter=",",dtype=[('timestamp', type(dt.datetime.now)),('tempF', 'f8')],converters={0: temperatureDateConverter}, skiprows=1)
"""
Explanation: Section 1.1 - Importing the Data
Let's begin in the same way we did for Assignment #2 of 2014, but this time let's start with importing the temperature data:
End of explanation
"""
print "The variable 'temperature' is a " + str(type(temperature)) + " and it has the following shape: " + str(temperature.shape)
"""
Explanation: Notice that, because we are asking for the data to be interpreted as having different types for each column, and the numpy.ndarray can only handle homogeneous types (i.e., all the elements of the array must be of the same type), the resulting array is a one-dimensional ndarray of tuples. Each tuple corresponds to a row in the file and contains the columns of that row.
Formally, this is called a Structured Array and is something you should read up on if you want to fully understand what it means and how to handle these types of data structures:
https://docs.scipy.org/doc/numpy/user/basics.rec.html
End of explanation
"""
temperature.dtype.fields
"""
Explanation: Fortunately, these structured arrays allow us to access the content inside the tuples directly by calling the field names. Let's figure out what those field names are:
End of explanation
"""
plt.plot(temperature['timestamp'])
"""
Explanation: Now let's see what the timestamps look like, for this dataset:
End of explanation
"""
print "The minimum difference between any two consecutive timestamps is: " + str(np.min(np.diff(temperature['timestamp'])))
print "The maximum difference between any two consecutive timestamps is: " + str(np.max(np.diff(temperature['timestamp'])))
"""
Explanation: Seems as if there are no gaps, but let's make sure about that. First, let's compute the minimum and maximum difference between any two consecutive timestamps:
End of explanation
"""
temperature = temperature[0:-1:3]
"""
Explanation: Given that they both are 5 minutes, it means that there really is no gap in the dataset, and all temperature measurements were taken 5 minutes apart.
Since we need temperature readings every 15 minutes we can downsample this dataset. There are many ways to do the downsampling, and it is important to understand the effects each of them may have on the final result we are seeking. However, this is beyond the scope of the class, so I will pick a very naïve approach and simply select every third sample:
End of explanation
"""
print "First timestamp is on \t{}. \nLast timestamp is on \t{}.".format(temperature['timestamp'][0], temperature['timestamp'][-1])
"""
Explanation: Finally, let's make a note of when the first and last timestamp are:
End of explanation
"""
dateConverter = lambda d : dt.datetime.strptime(d,'%Y/%m/%d %H:%M:%S')
power = np.genfromtxt('../../data/campusDemand.csv',delimiter=",",names=True,dtype=['S255',dt.datetime,'f8'],converters={1: dateConverter})
"""
Explanation: Loading the Power Data
Just as we did before, we start with the genfromtxt function:
End of explanation
"""
name, indices, counts = np.unique(power['Point_name'], return_index=True,return_counts=True)
"""
Explanation: Let's figure out how many meters there are, and where they are in the ndarray, as well as how many datapoints they have.
End of explanation
"""
for i in range(len(name)):
    print(str(name[i]) + "\n\t from " + str(power[indices[i]]['Time']) + " to " + str(power[indices[i]+counts[i]-1]['Time']) + "\n\t or " + str(power[indices[i]+counts[i]-1]['Time'] - power[indices[i]]['Time']))
"""
Explanation: Now let's print that information in a more readable fashion:
End of explanation
"""
power=power[power['Point_name']==name[3]]
"""
Explanation: Since only one meter needs to be used, pick the one you like and discard the rest:
End of explanation
"""
power = np.sort(power,order='Time')
fig1= plt.figure(figsize=(15,5))
plt.plot(power['Time'],power['Value'])
plt.title(name[3])
plt.xlabel('Time')
plt.ylabel('Power [Watts]')
"""
Explanation: Let's make sure the data is sorted by time and then let's plot it
End of explanation
"""
power = np.sort(power,order='Time')
print "The minimum difference between any two consecutive timestamps is: " + str(np.min(np.diff(power['Time'])))
print "The maximum difference between any two consecutive timestamps is: " + str(np.max(np.diff(power['Time'])))
"""
Explanation: Are there gaps in this dataset?
End of explanation
"""
print "First timestamp is on \t{}. \nLast timestamp is on \t{}.".format(power['Time'][0], power['Time'][-1])
"""
Explanation: And when is the first and last timestamp for this dataset? (We would like them to overlap as much as possible):
End of explanation
"""
print "Power data from {0} to {1}.\nTemperature data from {2} to {3}".format(power['Time'][0], power['Time'][-1], temperature['timestamp'][0], temperature['timestamp'][-1])
"""
Explanation: So let's summarize the differences in terms of the timestamps:
There is at least one significant gap (1 day and a few hours), and there's also a strange situation that causes two consecutive samples to have the same timestamp (i.e., the minimum difference is zero).
The temperature dataset starts a little later, and ends almost a full day later than the power dataset.
Yes, this is inconvenient, I know. It is painful to know that not only are the two datasets sampled at different rates, but they are also of different lengths of time and one of them has gaps.
This is what real data looks like, in case you were wondering.
At this point, one of the simplest ways to move forward without having to re-invent the wheel would be to rely on the help of more powerful libraries such as Pandas.
However, just to make things more fun and instructional, I am going to go through the trouble of implementing an interpolation function myself and will use it to obtain power values at exactly the same timestamps as the temperature data is providing.
In other words, let's assume that the timestamps for the temperature data are $t^T_i$ $\forall i \in [1, 2, \ldots n_T]$, and that the timestamps for the power data are $t^P_i$ $\forall i \in [1, 2, \ldots n_P]$, where $n_T$ and $n_P$ are the number of records in the temperature and power datasets, respectively. What I am interested in doing is finding the values of power $P$ at exactly all of the $n_T$ temperature timestamps, i.e. find $P(t^T_i)$ $\forall i$.
We will do all of these things in the next section.
Harmonizing the time series
First let's remember what times the two time series (power and temperature) start and end:
End of explanation
"""
temperature = temperature[0:-24]
"""
Explanation: Clearly, we don't need the portion of the temperature data that is collected beyond the dates that we have power data. Let's remove this (note that the magic number 24 corresponds to 360 minutes or 6 hours):
End of explanation
"""
def power_interp(tP, P, tT):
# This function assumes that the input is an numpy.ndarray of datetime objects
# Most useful interpolation tools don't work well with datetime objects
# so we convert all datetime objects into the number of seconds elapsed
# since 1/1/1970 at midnight (also called the UNIX Epoch, or POSIX time):
toposix = lambda d: (d - dt.datetime(1970,1,1,0,0,0)).total_seconds()
    tP = list(map(toposix, tP))
    tT = list(map(toposix, tT))
# Now we interpolate
from scipy.interpolate import interp1d
f = interp1d(tP, P,'linear')
return f(tT)
"""
Explanation: Now let's create the interpolation function:
End of explanation
"""
newPowerValues = power_interp(power['Time'], power['Value'], temperature['timestamp'])
"""
Explanation: And let's use that function to get a copy of the interpolated power values, extracted at exactly the same timestamps as the temperature dataset:
End of explanation
"""
toposix = lambda d: (d - dt.datetime(1970,1,1,0,0,0)).total_seconds()
timestamp_in_seconds = list(map(toposix, temperature['timestamp']))
timestamps = temperature['timestamp']
temp_values = temperature['tempF']
power_values = newPowerValues
"""
Explanation: Finally, to keep things simple, let's restate the variables that matter:
End of explanation
"""
plt.figure(figsize=(15,15))
plt.plot(timestamps,power_values,'ro')
plt.figure(figsize=(15,15))
plt.plot(timestamps, temp_values, '--b')
"""
Explanation: And let's plot it to see what it looks like.
End of explanation
"""
weekday = list(map(lambda t: t.weekday(), timestamps))
weekends = np.where( ) ## Note that depending on how you do this, the result could be a tuple of ndarrays.
weekdays = np.where( )
"""
Explanation: Task #1
Now let's put all of this data into a single structured array.
Task #2
Since we have the timestamps in 'datetime' format we can easily do the extraction of the indices:
End of explanation
"""
len(weekday) == len(weekends[0]) + len(weekdays[0]) ## This is assuming you have a tuple of ndarrays
"""
Explanation: Did we do this correctly?
End of explanation
"""
hour = list(map(lambda t: t.hour, timestamps))
occupied = np.where( )
unoccupied = np.where( )
"""
Explanation: Seems like we did.
Task #3
Similar as in the previous task...
End of explanation
"""
def Tc(temperature, T_bound):
# The return value will be a matrix with as many rows as the temperature
# array, and as many columns as len(T_bound) [assuming that 0 is the first boundary]
Tc_matrix = np.zeros((len(temperature), len(T_bound)))
return Tc_matrix
"""
Explanation: Task #4
Let's calculate the temperature components, by creating a function that does just that:
End of explanation
"""
jegibbs/phys202-2015-work | assignments/assignment12/FittingModelsEx01.ipynb | mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Fitting Models Exercise 1
Imports
End of explanation
"""
a_true = 0.5
b_true = 2.0
c_true = -4.0
N = 30
dy = 2.0
"""
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
"""
x = np.linspace(-5,5,N)
y = a_true*x**2 + b_true*x + c_true + np.random.normal(0.0, dy, size=N)
plt.errorbar(x, y, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
assert True # leave this cell for grading the raw data generation and plot
"""
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
"""
def model(x, a, b, c):
return a*x**2+b*x+c
theta_best, theta_cov = opt.curve_fit(model, x, y, sigma=dy)
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
xfit = np.linspace(-5,5,30)
yfit = theta_best[0]*xfit**2 + theta_best[1]*xfit + theta_best[2]
plt.plot(xfit, yfit)
plt.errorbar(x, y, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y');
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
"""
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation
"""
ehthiede/PyEDGAR | examples/Delay_Embedding/Delay_Embedding.ipynb | mit
import matplotlib.pyplot as plt
import numpy as np
import pyedgar
from pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist, delay_embed, lift_function
%matplotlib inline
"""
Explanation: Delay Embedding and the MFPT
Here, we give an example script, showing the effect of Delay Embedding on a Brownian motion on the Muller-Brown potential, projected onto its y-axis. This script may take a long time to run, as considerable data is required to accurately reconstruct the hidden degrees of freedom.
End of explanation
"""
ntraj = 700
trajectory_length = 40
lag_values = np.arange(1, 37, 2)
embedding_values = lag_values[1:] - 1
"""
Explanation: Load Data and set Hyperparameters
We first load in the pre-sampled data. The data consists of 400 short trajectories, each with 30 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smallar dataset to ensure the diffusion map basis construction runs in a reasonably short time.
Set Hyperparameters
Here we specify a few hyperparameters. These can be varied to study the behavior of the scheme in various limits by the user.
End of explanation
"""
trajs_2d = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length] # Raw trajectory
trajs = trajs_2d[:, :, 1] # Only keep y coordinate
stateA = (trajs > 1.15).astype('float')
stateB = (trajs < 0.15).astype('float')
# Convert to list of trajectories format
trajs = [traj_i.reshape(-1, 1) for traj_i in trajs]
stateA = [A_i for A_i in stateA]
stateB = [B_i for B_i in stateB]
# Load the true results
true_mfpt = np.load('data/htAB_1_0_0_1.npy')
"""
Explanation: Load and format the data
End of explanation
"""
flattened_trajs, traj_edges = tlist_to_flat(trajs)
flattened_stateA = np.hstack(stateA)
flattened_stateB = np.hstack(stateB)
print("Flattened Shapes are: ", flattened_trajs.shape, flattened_stateA.shape, flattened_stateB.shape,)
"""
Explanation: We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.
End of explanation
"""
# Build the basis set
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')
diff_atlas.fit(flattened_trajs)
flat_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flattened_stateA))
basis = flat_to_tlist(flat_basis, traj_edges)
flat_basis_no_boundaries = diff_atlas.make_dirichlet_basis(200)
basis_no_boundaries = flat_to_tlist(flat_basis_no_boundaries, traj_edges)
# Perform DGA calculation
mfpt_BA_lags = []
for lag in lag_values:
mfpt = pyedgar.galerkin.compute_mfpt(basis, stateA, lag=lag)
pi = pyedgar.galerkin.compute_change_of_measure(basis_no_boundaries, lag=lag)
flat_pi = np.array(pi).ravel()
flat_mfpt = np.array(mfpt).ravel()
mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(stateB).ravel()) / np.mean(flat_pi * np.array(stateB).ravel())
mfpt_BA_lags.append(mfpt_BA)
"""
Explanation: Construct DGA MFPT by increasing lag times
We first construct the MFPT with increasing lag times.
End of explanation
"""
mfpt_BA_embeddings = []
for lag in embedding_values:
# Perform delay embedding
debbed_traj = delay_embed(trajs, n_embed=lag)
lifted_A = lift_function(stateA, n_embed=lag)
lifted_B = lift_function(stateB, n_embed=lag)
flat_debbed_traj, embed_edges = tlist_to_flat(debbed_traj)
flat_lifted_A = np.hstack(lifted_A)
# Build the basis
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d',
epsilon='bgh_generous', neighbor_params={'algorithm':'brute'})
diff_atlas.fit(flat_debbed_traj)
flat_deb_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flat_lifted_A))
deb_basis = flat_to_tlist(flat_deb_basis, embed_edges)
flat_pi_basis = diff_atlas.make_dirichlet_basis(200)
    pi_basis = flat_to_tlist(flat_pi_basis, embed_edges)
# Construct the Estimate
deb_mfpt = pyedgar.galerkin.compute_mfpt(deb_basis, lifted_A, lag=1)
pi = pyedgar.galerkin.compute_change_of_measure(pi_basis)
flat_pi = np.array(pi).ravel()
flat_mfpt = np.array(deb_mfpt).ravel()
deb_mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(lifted_B).ravel()) / np.mean(flat_pi * np.array(lifted_B).ravel())
mfpt_BA_embeddings.append(deb_mfpt_BA)
"""
Explanation: Construct DGA MFPT with increasing Delay Embedding
We now construct the MFPT using delay embedding.
End of explanation
"""
plt.plot(embedding_values, mfpt_BA_embeddings, label="Delay Embedding")
plt.plot(lag_values, mfpt_BA_lags, label="Lags")
plt.axhline(true_mfpt[0] * 10, color='k', label='True')
plt.axhline((true_mfpt[0] + true_mfpt[1]) * 10., color='k', linestyle=':')
plt.axhline((true_mfpt[0] - true_mfpt[1]) * 10., color='k', linestyle=':')
plt.legend()
plt.ylim(0, 100)
plt.xlabel("Lag / Delay Length")
plt.ylabel("Estimated MFPT")
"""
Explanation: Plot the Results
We plot the results of our calculation against the true value (black line, with the standard deviation in stateB given by the dotted lines). We see that increasing the lag time causes the mean-first-passage time to grow unboundedly. In contrast, with delay embedding the mean-first-passage time converges. We do, however, see one bad fluctuation at a delay length of 16, and as the delay length gets sufficiently long, the calculation blows up.
End of explanation
"""
Flaviolib/dx | 08_dx_fourier_pricing.ipynb | agpl-3.0
import dx
import datetime as dt
"""
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Fourier-based Option Pricing
For several reasons, it is beneficial to have available alternative valuation and pricing approaches to the Monte Carlo simulation approach. One application area is to benchmark Monte Carlo-based valuation results against other (potentially more accurate) results. Another area is model calibration to liquidly traded vanilla instruments where generally faster numerical methods can be applied.
This part introduces Fourier-based valuation functions and benchmarks valuation results from the "standard", simulation-based DX Analytics modeling approach to output of those functions.
End of explanation
"""
# constant short rate
r = dx.constant_short_rate('r', 0.01)
# geometric Brownian motion
me = dx.market_environment('me', dt.datetime(2015, 1, 1))
me.add_constant('initial_value', 100.)
me.add_constant('volatility', 0.2)
me.add_constant('final_date', dt.datetime(2015, 12, 31))
me.add_constant('currency', 'EUR')
# jump component
me.add_constant('lambda', 0.4)
me.add_constant('mu', -0.6)
me.add_constant('delta', 0.2)
# stochastic volatiltiy component
me.add_constant('rho', -.5)
me.add_constant('kappa', 5.0)
me.add_constant('theta', 0.02)
me.add_constant('vol_vol', 0.3)
# valuation environment
val_env = dx.market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('paths', 55000)
# 55,000 paths
val_env.add_constant('frequency', 'D')
# daily frequency
val_env.add_curve('discount_curve', r)
val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
# add valuation environment to market environment
me.add_environment(val_env)
"""
Explanation: Risk Factors
The examples and benchmarks to follow rely on four different models:
geometric Brownian motion (Black-Scholes-Merton 1973)
jump diffusion (Merton 1976)
stochastic volatility (Heston 1993)
stochastic volatility jump diffusion (Bates 1996)
For details on these models and the Fourier-based option pricing approach refer to Hilpisch (2015) (cf. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1119037999.html).
We first define the single market and valuation environments.
End of explanation
"""
gbm = dx.geometric_brownian_motion('gbm', me)
jd = dx.jump_diffusion('jd', me)
sv = dx.stochastic_volatility('sv', me)
svjd = dx.stoch_vol_jump_diffusion('svjd', me)
"""
Explanation: Equipped with the single market environments and the valuation environment, we can instantiate the simulation model objects.
End of explanation
"""
# market environment for the options
me_option = dx.market_environment('option', dt.datetime(2015, 1, 1))
me_option.add_constant('maturity', dt.datetime(2015, 12, 31))
me_option.add_constant('strike', 100.)
me_option.add_constant('currency', 'EUR')
me_option.add_environment(me)
me_option.add_environment(val_env)
euro_put_gbm = dx.valuation_mcs_european_single('euro_put', gbm, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_gbm = dx.valuation_mcs_european_single('euro_call', gbm, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_jd = dx.valuation_mcs_european_single('euro_put', jd, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_jd = dx.valuation_mcs_european_single('euro_call', jd, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_sv = dx.valuation_mcs_european_single('euro_put', sv, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_sv = dx.valuation_mcs_european_single('euro_call', sv, me_option,
'np.maximum(maturity_value - strike, 0)')
euro_put_svjd = dx.valuation_mcs_european_single('euro_put', svjd, me_option,
'np.maximum(strike - maturity_value, 0)')
euro_call_svjd = dx.valuation_mcs_european_single('euro_call', svjd, me_option,
'np.maximum(maturity_value - strike, 0)')
"""
Explanation: Plain Vanilla Put and Call Options
Based on the just defined risk factors, we define 8 diffent options---a European put and call option per risk factor, respectively.
End of explanation
"""
import numpy as np
import pandas as pd
"""
Explanation: Valuation Benchmarking
In this sub-section, we benchmark the Monte Carlo value estimates against the Fourier-based pricing results.
End of explanation
"""
freq = '2m' # used for maturity definitions
periods = 3 # number of intervals for maturity grid
strikes = 5 # number of strikes per maturity
initial_value = 100 # initial value for all risk factors
start = 0.8 # lowest strike in percent of spot
end = 1.2 # highest strike in percent of spot
start_date = '2015/3/1' # start date for simulation/pricing
"""
Explanation: We first define some parameters used throughout.
End of explanation
"""
euro_put_gbm.present_value()
# method call needed for initialization
"""
Explanation: Geometric Brownian Motion
We need to initialize the valuation object first.
End of explanation
"""
bsm_option = dx.BSM_european_option('bsm_opt', me_option)
"""
Explanation: There is a valuation class for European put and call options in the Black-Scholes-Merton model available called BSM_european_option. It is based on the analytical pricing formula for that model and is instantiated as follows:
End of explanation
"""
%%time
# European put
print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
bsm_option.maturity = maturity
euro_put_gbm.update(maturity=maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
euro_put_gbm.update(strike=strike)
mcs = euro_put_gbm.present_value()
bsm_option.strike = strike
ana = bsm_option.put_value()
        print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.2f '
              % (T, strike, mcs, ana, mcs - ana, (mcs - ana) / ana * 100))
"""
Explanation: The following routine benchmarks the Monte Carlo value estimates for the European put option against the output from the valuation object based on the analytical pricing formula. The results are quite good since this model is quite easy to discretize exactly and therefore generally shows good convergence of the Monte Carlo estimates.
End of explanation
"""
euro_call_gbm.present_value()
# method call needed for initialization
%%time
# European calls
print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
euro_call_gbm.update(maturity=maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
euro_call_gbm.update(strike=strike)
mcs = euro_call_gbm.present_value()
bsm_option.strike = strike
bsm_option.maturity = maturity
ana = bsm_option.call_value()
        print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.2f '
              % (T, strike, mcs, ana, mcs - ana, (mcs - ana) / ana * 100))
"""
Explanation: The same now for the European call option.
End of explanation
"""
def valuation_benchmarking(valuation_object, fourier_function):
    print('%4s | %7s | %7s | %7s | %7s | %7s' % ('T', 'strike', 'mcs', 'fou', 'dif', 'rel'))
for maturity in pd.date_range(start=start_date, freq=freq, periods=periods):
valuation_object.update(maturity=maturity)
me_option.add_constant('maturity', maturity)
for strike in np.linspace(start, end, strikes) * initial_value:
T = (maturity - me_option.pricing_date).days / 365.
valuation_object.update(strike=strike)
mcs = valuation_object.present_value()
me_option.add_constant('strike', strike)
fou = fourier_function(me_option)
            print('%4.3f | %7.3f | %7.4f | %7.4f | %7.4f | %7.3f '
                  % (T, strike, mcs, fou, mcs - fou, (mcs - fou) / fou * 100))
"""
Explanation: Benchmarking Function
All other valuation benchmarks are generated with Fourier-based pricing functions for which the handling is identical. We therefore use the following function for the benchmarks from now on:
End of explanation
"""
euro_put_jd.present_value()
# method call needed for initialization
"""
Explanation: Jump Diffusion
The next model is the jump diffusion as proposed by Merton (1976).
End of explanation
"""
%time valuation_benchmarking(euro_put_jd, dx.M76_put_value)
"""
Explanation: There is a Fourier-based pricing function available which is called M76_put_value and which is used for the benchmarking for the European put options that follows.
End of explanation
"""
euro_call_jd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_jd, dx.M76_call_value)
"""
Explanation: Accordingly, the benchmarking for the European call options based on the Fourier-based M76_call_value function.
End of explanation
"""
euro_put_sv.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_put_sv, dx.H93_put_value)
"""
Explanation: Stochastic Volatility
Stochastic volatility models like the one of Heston (1993) are popular to reproduce implied volatility smiles observed in markets. First, the benchmarking for the European put options using the Fourier-based H93_put_value function.
End of explanation
"""
euro_call_sv.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_sv, dx.H93_call_value)
"""
Explanation: Second, the benchmarking for the European call options based on the Fourier-based H93_call_value function.
End of explanation
"""
euro_put_svjd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_put_svjd, dx.B96_put_value)
"""
Explanation: Stochastic Volatility Jump-Diffusion
Finally, we consider the combination of the stochastic volatility and jump diffusion models from before as proposed by Bates (1996). The Fourier-based pricing function for European put options is called B96_put_value.
End of explanation
"""
euro_call_svjd.present_value()
# method call needed for initialization
%time valuation_benchmarking(euro_call_svjd, dx.B96_call_value)
"""
Explanation: The Fourier-based counterpart function for European call options is called B96_call_value.
End of explanation
"""
mohanprasath/Course-Work | machine_learning/learning_python_3.ipynb | gpl-3.0
print("Hello World!")
print("Hello Again")
print("I like typing this.")
print("This is fun.")
print('Yay! Printing.')
print("I'd much rather you 'not'.")
print('I "said" do not touch this.')
'''
Notes:
octothorpe, mesh, or pound #
'''
"""
Explanation: Learning Python3 from URL
https://learnpythonthehardway.org/python3/
Exercise 1: A Good First Program
https://learnpythonthehardway.org/python3/ex1.html
End of explanation
"""
# A comment, this is so you can read your program later.
# Anything after the # is ignored by python.
print("I could have code like this.") # and the comment after is ignored
# You can also use a comment to "disable" or comment out code:
# print("This won't run.")
print("This will run.")
"""
Explanation: Exercise 2: Comments and Pound Characters
https://learnpythonthehardway.org/python3/ex2.html
End of explanation
"""
# BODMAS
print("I will now count my chickens:")
print("Hens", 25 + 30 / 6)
print("Roosters", 100 - 25 * 3 % 4)
print("Now I will count the eggs:")
print(3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6)
print("Is it true that 3 + 2 < 5 - 7?")
print(3 + 2 < 5 - 7)
print("What is 3 + 2?", 3 + 2)
print("What is 5 - 7?", 5 - 7)
print("Oh, that's why it's False.")
print("How about some more.")
print("Is it greater?", 5 > -2)
print("Is it greater or equal?", 5 >= -2)
print("Is it less or equal?", 5 <= -2)
"""
Explanation: Exercise 3: Numbers and Math
https://learnpythonthehardway.org/python3/ex3.html
End of explanation
"""
cars = 100
space_in_a_car = 4.0
drivers = 30
passengers = 90
cars_not_driven = cars - drivers
cars_driven = drivers
carpool_capacity = cars_driven * space_in_a_car
average_passengers_per_car = passengers / cars_driven
print("There are", cars, "cars available.")
print("There are only", drivers, "drivers available.")
print("There will be", cars_not_driven, "empty cars today.")
print("We can transport", carpool_capacity, "people today.")
print("We have", passengers, "to carpool today.")
print("We need to put about", average_passengers_per_car,
"in each car.")
# assigning variables in a single line
a = b = c = 0
# this seems easier but when using basic objects like arrays or dictionaries it gets weirder
l1 = l2 = []
l1.append(1)
print(l1, l2)
l2.append(2)
print(l1, l2)
# Here l1 and l2 are two names bound to the same list object.
# This behaves differently from the following code:
l1 = []
l2 = []
l1.append(1)
print(l1, l2)
l2.append(2)
print(l1, l2)
"""
Explanation: Exercise 4: Variables and Names
https://learnpythonthehardway.org/python3/ex4.html
End of explanation
"""
my_name = 'Zed A. Shaw'
my_age = 35 # not a lie
my_height = 74 # inches
my_weight = 180 # lbs
my_eyes = 'Blue'
my_teeth = 'White'
my_hair = 'Brown'
print(f"Let's talk about {my_name}.")
print(f"He's {my_height} inches tall.")
print(f"He's {my_weight} pounds heavy.")
print("Actually that's not too heavy.")
print(f"He's got {my_eyes} eyes and {my_hair} hair.")
print(f"His teeth are usually {my_teeth} depending on the coffee.")
# this line is tricky, try to get it exactly right
total = my_age + my_height + my_weight
print(f"If I add {my_age}, {my_height}, and {my_weight} I get {total}.")
# f'' format string
# converting inches to centimeters and pounds to kilograms
def inches_to_centi_meters(inches):
centi_meters = inches * 2.54
return centi_meters
def pounds_to_kilo_grams(pounds):
kilo_grams = pounds * 0.453592
return kilo_grams
inches = 1.0
pounds = 1.0
print(inches, inches_to_centi_meters(inches))
print(pounds, pounds_to_kilo_grams(pounds))
"""
Explanation: Exercise 5: More Variables and Printing
https://learnpythonthehardway.org/python3/ex5.html
End of explanation
"""
types_of_people = 10
x = f"There are {types_of_people} types of people."
binary = "binary"
do_not = "don't"
y = f"Those who know {binary} and those who {do_not}."
print(x)
print(y)
print(f"I said: {x}")
print(f"I also said: '{y}'")
hilarious = False
joke_evaluation = "Isn't that joke so funny?! {}"
print(joke_evaluation.format(hilarious))
w = "This is the left side of..."
e = "a string with a right side."
print(w + e)
"""
Explanation: Exercise 6: Strings and Text
https://learnpythonthehardway.org/python3/ex6.html
End of explanation
"""
print("Mary had a little lamb.")
print("Its fleece was white as {}.".format('snow'))
print("And everywhere that Mary went.")
print("." * 10) # what'd that do?
end1 = "C"
end2 = "h"
end3 = "e"
end4 = "e"
end5 = "s"
end6 = "e"
end7 = "B"
end8 = "u"
end9 = "r"
end10 = "g"
end11 = "e"
end12 = "r"
# note the end=' ' argument at the end; try removing it to see what happens
print(end1 + end2 + end3 + end4 + end5 + end6, end=' ')
print(end7 + end8 + end9 + end10 + end11 + end12)
"""
Explanation: Exercise 7: More Printing
https://learnpythonthehardway.org/python3/ex7.html
End of explanation
"""
formatter = "{} {} {} {}"
print(formatter.format(1, 2, 3, 4))
print(formatter.format("one", "two", "three", "four"))
print(formatter.format(True, False, False, True))
print(formatter.format(formatter, formatter, formatter, formatter))
print(formatter.format(
"Try your",
"Own text here",
"Maybe a poem",
"Or a song about fear"
))
"""
Explanation: Exercise 8: Printing, Printing
https://learnpythonthehardway.org/python3/ex8.html
End of explanation
"""
print(":( You have to pay 30")
"""
Explanation: Exercise 9: Printing, Printing, Printing
https://learnpythonthehardway.org/python3/ex9.html
End of explanation
"""
|
frederickayala/session-based-recsys | Qvik Session-Based Recommender Systems/GRU4RecLab.ipynb | mit | # -*- coding: utf-8 -*-
import theano
import pickle
import sys
import os
sys.path.append('../..')
import numpy as np
import pandas as pd
import gru4rec #If this shows an error probably the notebook is not in GRU4Rec/examples/rsc15/
import evaluation
# Validate that the following assert makes sense in your platform
# This works on Windows with a NVIDIA GPU
# In other platforms theano.config.device gives other things than 'cuda' when using the GPU
assert 'cuda' in theano.config.device,("Theano is not configured to use the GPU. Please check .theanorc. "
"Check http://deeplearning.net/software/theano/tutorial/using_gpu.html")
"""
Explanation: Presented at <a href="http://qvik.fi/"><img style="height:100px" src="https://qvik.com/wp-content/themes/qvik/images/qvik-logo-dark.png"/></a>
Session-based recommender Systems: Hands-on GRU4Rec
Frederick Ayala Gómez, PhD Student in Computer Science at ELTE University. Visiting Researcher at Aalto's Data Mining Group
Let's keep in touch!
Twitter: https://twitter.com/fredayala <br/>
LinkedIn: https://linkedin.com/in/frederickayala <br/>
GitHub: https://github.com/frederickayala
<hr/>
A few notes:
This notebook was tested on Windows and presents how to use GRU4Rec
The paper of GRU4Rec is: B. Hidasi, et al. 2015 “Session-based recommendations with recurrent neural networks”. CoRR
The poster of this paper can be found in http://www.hidasi.eu/content/gru4rec_iclr16_poster.pdf
For macOS and Linux, CUDA, Theano and Anaconda 'might' need some extra steps
On Linux Desktop (e.g. Ubuntu Desktop ), be careful with installing CUDA and NVIDIA drivers. It 'might' break lightdm 🙈🙉🙊
An NVIDIA GEFORCE GTX 980M was used
The starting point of this notebook is the original python demo file from Balázs Hidasi's GRU4REC repository.
It's recommended to use Anaconda to make installation easier
Installation steps:
Install CUDA 8.0 from https://developer.nvidia.com/cuda-downloads
Optional: Install cuDNN https://developer.nvidia.com/cudnn
Install Anaconda 4.3.1 for Python 3.6 from https://www.continuum.io/downloads
Open Anaconda Navigator
Go to Enviroments / Create / Python Version 3.6 and give some name
In Channels, add: conda-forge then click on Update index...
Click on your enviroment Play arrow and choose Open Terminal
Install the libraries that we need:
conda install numpy scipy pandas mkl-service libpython m2w64-toolchain nose nose-parameterized sphinx pydot-ng
conda install theano pygpu
conda install matplotlib seaborn statsmodels
Create a .theanorc file in your home directory and add the following: <br/>
[global] <br/>
device = cuda <br/>
# Only if you want to use cuDNN <br/>
[dnn]<br/>
include_path=/path/to/cuDNN/include <br/>
library_path=/path/to/cuDNN/lib/x64
Get the GRU4Rec code and the dataset
GRU4Rec:
git clone https://github.com/hidasib/GRU4Rec.git
YOOCHOOSE Dataset:
http://2015.recsyschallenge.com/challenge.html
To get the training and testing files we have to preprocess the original dataset.
Go to the terminal that is running your anaconda enviroment
Navigate to the GRU4Rec folder
Edit the file GRU4Rec/examples/rsc15/preprocess.py and modify the following variables:
PATH_TO_ORIGINAL_DATA The path to the input raw dataset
PATH_TO_PROCESSED_DATA The path to where you want the output
Run the command: python preprocess.py
This will take some time; when the process ends, you will have the files rsc15_train_full.txt and rsc15_test.txt in your PATH_TO_PROCESSED_DATA path
Place this notebook in the folder GRU4Rec/examples/rsc15/
That's it! we are ready to run GRU4Rec
End of explanation
"""
PATH_TO_TRAIN = 'C:/Users/frede/datasets/recsys2015/rsc15_train_full.txt'
PATH_TO_TEST = 'C:/Users/frede/datasets/recsys2015/rsc15_test.txt'
data = pd.read_csv(PATH_TO_TRAIN, sep='\t', dtype={'ItemId':np.int64})
valid = pd.read_csv(PATH_TO_TEST, sep='\t', dtype={'ItemId':np.int64})
"""
Explanation: Update PATH_TO_TRAIN and PATH_TO_TEST to the path for rsc15_train_full.txt and rsc15_test.txt respectively
End of explanation
"""
%matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
"""
Explanation: Let's take a look to the datasets
End of explanation
"""
data.head()
valid.head()
sessions_training = set(data.SessionId)
print("There are %i sessions in the training dataset" % len(sessions_training))
sessions_testing = set(valid.SessionId)
print("There are %i sessions in the testing dataset" % len(sessions_testing))
assert len(sessions_testing.intersection(sessions_training)) == 0, ("Huhu! "
                                             "there are sessions from the testing set in "
                                             "the training set")
print("Sessions in the testing set do not exist in the training set")
items_training = set(data.ItemId)
print("There are %i items in the training dataset" % len(items_training))
items_testing = set(valid.ItemId)
print("There are %i items in the testing dataset" % len(items_testing))
assert items_testing.issubset(items_training), ("Huhu! "
                                               "there are items from the testing set "
                                               "that are not in the training set")
print("Items in the testing set exist in the training set")
df_visualization = data.copy()
df_visualization["value"] = 1
df_item_count = df_visualization[["ItemId","value"]].groupby("ItemId").sum()
# Most of the items are infrequent
df_item_count.describe().transpose()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_item_count["value"], ax=ax)
ax.set(xlabel='Item Frequency', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_item_count["value"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Item Frequency', ylabel='Cumulative Probability')
plt.show()
# Let's analyze the co-occurrence
df_cooccurrence = data.copy()
df_cooccurrence["next_SessionId"] = df_cooccurrence["SessionId"].shift(-1)
df_cooccurrence["next_ItemId"] = df_cooccurrence["ItemId"].shift(-1)
df_cooccurrence["next_Time"] = df_cooccurrence["Time"].shift(-1)
df_cooccurrence = df_cooccurrence.query("SessionId == next_SessionId").dropna()
df_cooccurrence["next_ItemId"] = df_cooccurrence["next_ItemId"].astype(int)
df_cooccurrence["next_SessionId"] = df_cooccurrence["next_SessionId"].astype(int)
df_cooccurrence.head()
df_cooccurrence["time_difference_minutes"] = np.round((df_cooccurrence["next_Time"] - df_cooccurrence["Time"]) / 60, 2)
df_cooccurrence[["time_difference_minutes"]].describe().transpose()
df_cooccurrence["value"] = 1
df_cooccurrence_sum = df_cooccurrence[["ItemId","next_ItemId","value"]].groupby(["ItemId","next_ItemId"]).sum().reset_index()
df_cooccurrence_sum[["value"]].describe().transpose()
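These transition counts are already enough to drive a naive "most frequent successor" baseline, a useful sanity check before training the GRU. A minimal sketch in plain Python (the helper names and toy item ids are illustrative, not part of GRU4Rec):

```python
from collections import Counter, defaultdict

def build_cooccurrence(pairs):
    """Count (item, next_item) transitions, mirroring df_cooccurrence_sum."""
    counts = defaultdict(Counter)
    for item, nxt in pairs:
        counts[item][nxt] += 1
    return counts

def recommend_next(counts, item, k=3):
    """Top-k most frequent successors of `item` (empty list if unseen)."""
    return [nxt for nxt, _ in counts[item].most_common(k)]

# Toy within-session transitions (item, next_item):
pairs = [(1, 2), (1, 2), (1, 3), (2, 3)]
counts = build_cooccurrence(pairs)
print(recommend_next(counts, 1))  # -> [2, 3]
```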
"""
Explanation: Sneak Peek at the dataset
End of explanation
"""
n_layers = 100
save_to = os.path.join(os.path.dirname(PATH_TO_TEST), "gru_" + str(n_layers) +".pickle")
if not os.path.exists(save_to):
print('Training GRU4Rec with ' + str(n_layers) + ' hidden units')
gru = gru4rec.GRU4Rec(layers=[n_layers], loss='top1', batch_size=50,
dropout_p_hidden=0.5, learning_rate=0.01, momentum=0.0)
gru.fit(data)
pickle.dump(gru, open(save_to, "wb"))
else:
print('Loading existing GRU4Rec model with ' + str(n_layers) + ' hidden units')
gru = pickle.load(open(save_to, "rb"))
"""
Explanation: Training GRU
End of explanation
"""
res = evaluation.evaluate_sessions_batch(gru, valid, None,cut_off=20)
print('The proportion of cases having the desired item within the top 20 (i.e., Recall@20): {}'.format(res[0]))
batch_size = 500
print("Now let's try to predict over the first %i items of our testing dataset" % batch_size)
df_valid = valid.head(batch_size)
df_valid["next_ItemId"] = df_valid["ItemId"].shift(-1)
df_valid["next_SessionId"] = df_valid["SessionId"].shift(-1)
session_ids = valid.head(batch_size)["SessionId"].values
input_item_ids = valid.head(batch_size)["ItemId"].values
predict_for_item_ids=None
%timeit gru.predict_next_batch(session_ids=session_ids, input_item_ids=input_item_ids, predict_for_item_ids=None, batch=batch_size)
df_preds = gru.predict_next_batch(session_ids=session_ids,
input_item_ids=input_item_ids,
predict_for_item_ids=None,
batch=batch_size)
df_valid.shape
df_preds.shape
df_preds.columns = df_valid.index.values
len(items_training)
df_preds
for c in df_preds:
df_preds[c] = df_preds[c].rank(ascending=False)
df_valid_preds = df_valid.join(df_preds.transpose())
df_valid_preds = df_valid_preds.query("SessionId == next_SessionId").dropna()
df_valid_preds["next_ItemId"] = df_valid_preds["next_ItemId"].astype(int)
df_valid_preds["next_SessionId"] = df_valid_preds["next_SessionId"].astype(int)
df_valid_preds["next_ItemId_at"] = df_valid_preds.apply(lambda x: x[int(x["next_ItemId"])], axis=1)
df_valid_preds_summary = df_valid_preds[["SessionId","ItemId","Time","next_ItemId","next_ItemId_at"]]
df_valid_preds_summary.head(20)
cutoff = 20
df_valid_preds_summary_ok = df_valid_preds_summary.query("next_ItemId_at <= @cutoff")
df_valid_preds_summary_ok.head(20)
recall_at_k = df_valid_preds_summary_ok.shape[0] / df_valid_preds_summary.shape[0]
print("The recall@%i for this batch is %f"%(cutoff,recall_at_k))
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_valid_preds_summary["next_ItemId_at"], ax=ax)
ax.set(xlabel='Next Desired Item @K', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_valid_preds_summary["next_ItemId_at"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Next Desired Item @K', ylabel='Cumulative Probability')
plt.show()
print("Statistics for the rank of the next desired item (lower is better)")
df_valid_preds_summary[["next_ItemId_at"]].describe()
"""
Explanation: Evaluating GRU
End of explanation
"""
|
fabriziocosta/pyMotif | meme_example.ipynb | mit | # Meme().display_meme_help()
from eden.util import configure_logging
import logging
configure_logging(logging.getLogger(),verbosity=2)
from utilities import Weblogo
wl = Weblogo(color_scheme='classic')
meme1 = Meme(alphabet="dna", # {ACGT}
gap_in_alphabet=False,
             mod="anr", # Any number of repetitions
output_dir="meme_anr",
nmotifs=3, # Number of motives to be found
weblogo_obj = wl
)
meme1.fit(fasta_file="seq18.fa")
predictions = meme1.predict(input_seqs=test, return_list=True)
for p in predictions: print p
predictions = meme1.predict(input_seqs="seq9.fa", return_list=False)
for p in predictions: print p
match = meme1.transform(input_seqs=test, return_match=True)
for m in match: print m
match = meme1.transform(input_seqs=test, return_match=False)
for m in match: print m
"""
Explanation: <h1>MEME Wrapper Example</h1>
End of explanation
"""
print meme1.e_values
"""
Explanation: <h3>E-value of each motif</h3>
End of explanation
"""
meme2 = Meme(alphabet="dna", mod="anr", nmotifs=3)
predictions = meme2.fit_predict(fasta_file="seq18.fa", return_list=True)
for p in predictions: print p
matches = meme2.fit_transform(fasta_file="seq18.fa", return_match=True)
for m in matches: print m
"""
Explanation: <h2>fit_predict() and fit_transform() example</h2>
End of explanation
"""
#printing motives as lists
for motif in meme1.motives_list:
for m in motif:
print m
print
"""
Explanation: <h3>Print motives as lists</h3>
End of explanation
"""
meme1.display_logo(do_alignment=False)
"""
Explanation: <h3>Display Sequence logo of un-aligned motives</h3>
End of explanation
"""
meme1.display_logo(motif_num=1)
"""
Explanation: <h3>Display Logo of specified motif</h3>
End of explanation
"""
meme1.align_motives() #MSA with Muscle
motives1=meme1.aligned_motives_list
for m in motives1:
for i in m:
print i
print
"""
Explanation: <h3>Multiple Sequence Alignment of motives with Muscle</h3>
Note: Motives in this example were already aligned, hence no dashes appear in the alignment
End of explanation
"""
meme1.display_logo(do_alignment=True)
"""
Explanation: <h3>Display sequence logo of aligned motives</h3>
End of explanation
"""
meme1.display()
meme1.matrix()
"""
Explanation: <h3>Position Weight Matrices for motifs</h3>
End of explanation
"""
meme1.display(motif_num=3)
"""
Explanation: <h4>Display PWM of single motif</h4>
End of explanation
"""
test_seq = 'GGAGAAAATACCGC' * 10
seq_score = meme1.score(motif_num=2, seq=test_seq)
print seq_score
"""
Explanation: <h4>Scoring a sequence w.r.t a motif</h4>
End of explanation
"""
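Conceptually, scoring a sequence against a motif slides the position weight matrix along the sequence, sums the per-position weight of each observed base, and keeps the best window. A minimal sketch (the toy PWM values and helper names are illustrative only, not the exact scheme used by score()):

```python
def pwm_window_score(pwm, window):
    # Sum the matrix entry for the observed base at each motif position.
    return sum(pwm[i][base] for i, base in enumerate(window))

def best_pwm_score(pwm, seq):
    # Slide the motif-length window along the sequence; keep the best score.
    w = len(pwm)
    return max(pwm_window_score(pwm, seq[i:i + w]) for i in range(len(seq) - w + 1))

# Toy 3-position PWM over {A, C, G, T}: one probability column per position.
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
print(best_pwm_score(pwm, "TTAGCTT"))  # the AGC window scores highest (~2.1)
```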
meme2 = Meme(alphabet="dna", scoring_criteria="hmm", k=1, threshold=1.0,mod="anr", nmotifs=3, minw=7, maxw=9)
matches = meme2.fit_transform(fasta_file="seq9.fa", return_match=True)
for m in matches: print m
%%time
# Markov Model score
mm_score = meme2.score(motif_num=2, seq="ACGT"*10)
print mm_score
"""
Explanation: <h3> Transform with HMM as scoring criteria</h3>
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
"""
Explanation: LAB 3c: BigQuery ML Model Deep Neural Network.
Learning Objectives
Create and evaluate DNN model with BigQuery ML.
Create and evaluate DNN model with feature engineering with ML.TRANSFORM.
Calculate predictions with BigQuery's ML.PREDICT.
Introduction
In this notebook, we will create multiple deep neural network models in BigQuery ML to predict the weight of a baby before it is born, first using no feature engineering and then applying the feature engineering from the previous lab.
We will create and evaluate a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM, and calculate predictions with BigQuery's ML.PREDICT. If you need a refresher, you can go back and look at how we made a baseline model in the notebook BQML Baseline Model, or how we combined linear models with feature engineering in the notebook BQML Linear Models with Feature Engineering.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
"""
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
"""
Explanation: Verify tables exist
Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_4
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
FROM
babyweight.babyweight_data_train
"""
Explanation: Lab Task #1: Model 4: Increase complexity of model using DNN_REGRESSOR
DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.
MODEL_TYPE="DNN_REGRESSOR"
hidden_units: List of hidden units per layer; all layers are fully connected. The number of elements in the array is the number of hidden layers. The default value for hidden_units is [Min(128, N / (𝜶(Ni+No)))] (one hidden layer), where N is the training data size, Ni and No are the numbers of input-layer and output-layer units, respectively, and 𝜶 is a constant with value 10. The upper bound in this rule helps ensure the model won't overfit. Note that there is currently a model size limit of 256 MB.
dropout: Probability to drop a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means we will not drop out any coordinate during training.
batch_size: Number of samples that will be served to train the network for each sub-iteration. The default value is Min(1024, num_examples) to balance training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues, and is not advised.
Create DNN_REGRESSOR model
Change model type to use DNN_REGRESSOR, add a list of integer HIDDEN_UNITS, and add an integer BATCH_SIZE.
* Hint: Create a model_4.
Note: Model creation takes around 40-50 minutes.
End of explanation
"""
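The default hidden_units rule quoted in the bullet list above can be evaluated directly; a small sketch (the helper name is mine, and the feature counts assume this lab's 4 inputs and 1 output):

```python
def default_hidden_units(n_examples, n_inputs, n_outputs, alpha=10):
    """Min(128, N / (alpha * (Ni + No))) -- the single-hidden-layer default."""
    return min(128, n_examples // (alpha * (n_inputs + n_outputs)))

# With millions of training rows the rule saturates at the 128-unit cap:
print(default_hidden_units(10_000_000, 4, 1))  # -> 128
# A tiny dataset gets a narrower default layer:
print(default_hidden_units(1_000, 4, 1))       # -> 20
```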
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
"""
Explanation: Get training information and evaluate
Let's first look at our training statistics.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Now let's evaluate our trained model on our eval dataset.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
babyweight.final_model
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
# TODO: Add FEATURE CROSS of:
# is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
"""
Explanation: Lab Task #2: Final Model: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the final model and run the query.
Note: Model creation takes around 40-50 minutes.
End of explanation
"""
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
"""
Explanation: Let's first look at our training statistics.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Now let's evaluate our trained model on our eval dataset.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from original dataset
))
"""
Explanation: Lab Task #3: Predict with final model.
Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using the BigQuery ML.PREDICT function.
Predict from final model using an example from original dataset
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from simulated dataset
))
"""
Explanation: Modify above prediction query using example from simulated dataset
Use the feature values you made up above, but set is_male to "Unknown" and plurality to "Multiple(2+)". This simulates not knowing the gender or the exact plurality.
End of explanation
"""
|
CORE-GATECH-GROUP/serpent-tools | examples/Detector.ipynb | mit | import os
pinFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
"fuelPin_det0.m",
)
bwrFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
"bwr_det0.m",
)
"""
Explanation: Copyright (c) 2017-2020 Serpent-Tools developer team, GTRC
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Data files are not included with the python package, but can be downloaded from the GitHub repository. For this tutorial, the files are placed in the directory identified with the SERPENT_TOOLS_DATA environment variable.
End of explanation
"""
%matplotlib inline
from matplotlib import pyplot
import serpentTools
pin = serpentTools.read(pinFile)
bwr = serpentTools.read(bwrFile)
print(pin.detectors)
print(bwr.detectors)
"""
Explanation: DetectorReader
Basic Operation
This notebook details how to utilize the serpentTools package for reading detector files, [input]_det[N].m produced by SERPENT [1].
Detectors can be defined with many binning parameters, listed on the SERPENT Wiki.
One could define a detector that has a spatial mesh, dx/dy/dz, but also includes reaction and material bins, dr, dm.
Detectors are stored on the reader object in the detectors dictionary as custom Detector objects. Here, all energy and spatial grid data are stored, including other binning information such as reaction, universe, and lattice bins.
End of explanation
"""
nodeFlx = pin.detectors['nodeFlx']
print(nodeFlx.bins.shape)
nodeFlx.bins[:3,:].T
"""
Explanation: These detectors were defined for a single fuel pin with 16 axial layers and a BWR assembly, with a description of the detectors provided in the output:
|Name| Description|
|----|------------|
|nodeFlx| One-group flux tallied in each axial layer |
|spectrum|CSEWG 239 group structure for flux and U-235 fission cross section|
|xymesh|Two-group flux for a 20x20 xy grid|
For each Detector object, the full tally matrix from the file is stored in the bins array.
End of explanation
"""
pin['nodeFlx']
bwr.get('spectrum')
"""
Explanation: Here, only three columns, shown as rows for readability, are changing:
column 0: universe column
column 10: tally column
column 11: errors
Detectors can also be obtained by indexing into the reader object, as
End of explanation
"""
assert nodeFlx.tallies.shape == (16, )
assert nodeFlx.errors.shape == (16, )
nodeFlx.tallies
nodeFlx.errors
"""
Explanation: Tally data is reshaped corresponding to the bin information provided by Serpent. The tally and error columns are recast into multi-dimensional arrays where each dimension is some unique bin type like energy or spatial bin index. For this case, since the only variable bin quantity is that of the changing universe, the tallies and errors attributes will be 1D arrays.
End of explanation
"""
nodeFlx.indexes
"""
Explanation: Note: Python and numpy arrays are zero-indexed, meaning the first item is accessed with array[0], rather than array[1].
Bin information is retained through the indexes attribute. Each entry indicates what bin type is changing along that dimension of tallies and errors. Here, universe is the first item and indicates that the first dimension of tallies corresponds to a changing universe bin.
End of explanation
"""
spectrum = bwr.detectors['spectrum']
print(spectrum.grids['E'][:5, :])
"""
Explanation: For detectors that include some grid matrices, such as spatial or energy meshes DET<name>E, these arrays are stored in the grids dictionary
End of explanation
"""
xy = bwr.detectors['xymesh']
xy.indexes
"""
Explanation: Multi-dimensional Detectors
The Detector objects are capable of reshaping the detector data into an array where each axis corresponds to a varying bin. In the above examples, the reshaped data was one-dimensional, because each detector only tallied data against one changing bin type: universe or energy. In the following example, the detector has been configured to tally the flux in two energy groups over an XY mesh.
End of explanation
"""
print(xy.bins.shape)
print(xy.tallies.shape)
print(xy.bins[:5, 10])
print(xy.tallies[0, 0, :5])
"""
Explanation: Traversing the first axis in the tallies array corresponds to changing the value of the energy. The second axis corresponds to changing ymesh values, and the final axis reflects changes in xmesh.
End of explanation
"""
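The mapping between the flat bins column and the reshaped tallies array is plain row-major (C-order) indexing over the indexes tuple. A small sketch using this detector's (energy, ymesh, xmesh) ordering and its 2x20x20 shape:

```python
def flat_index(e, y, x, shape):
    """Row-major position of tallies[e, y, x] in the flat bins column."""
    n_e, n_y, n_x = shape
    assert e < n_e and y < n_y and x < n_x
    return (e * n_y + y) * n_x + x

shape = (2, 20, 20)  # (energy, ymesh, xmesh) for the xymesh detector
print(flat_index(0, 0, 5, shape))  # -> 5: first energy group, first row
print(flat_index(1, 0, 0, shape))  # -> 400: second group starts after 20*20 bins
```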
spectrum.slice({'reaction': 1})[:20]
"""
Explanation: Slicing
As the detectors produced by SERPENT can contain multiple bin types, obtaining data from the tally data can become complicated. This retrieval can be simplified using the slice method. This method takes an argument indicating what bins (keys in indexes) to fix at what position.
If we want to retrive the tally data for the capture reaction in the spectrum detector, you would instruct the slice method to use column 1 along the axis that corresponds to the reaction bin, as the fission reaction corresponded to reaction tally 2 in the original matrix. Since python and numpy arrays are zero indexed, the second reaction tally is stored in column 1.
End of explanation
"""
spectrum.slice({'reaction': 1}, 'errors')[:20]
"""
Explanation: This method also works for slicing the error, or score, matrix
End of explanation
"""
nodeFlx.plot();
ax = nodeFlx.plot(steps=True, label='steps')
ax = nodeFlx.plot(sigma=100, ax=ax, c='k', alpha=0.6, marker='x', label='sigma')
"""
Explanation: Plotting Routines
Each Detector object is capable of simple 1D and 2D plotting routines.
The simplest 1D plot method is plot; however, a wide range of plot options is available.
|Option|Description|
|-|-|
|what|What data to plot|
|ax|Preprepared figure on which to add this plot|
|xdim|Quantity from indexes to use as x-axis|
|sigma|Confidence interval to place on errors - 1D|
|steps|Draw tally values as constant inside bin - 1D|
|xlabel|Label to apply to x-axis|
|ylabel|Label to apply to y-axis|
|loglog|Use a log scalling on both of the axes|
|logx|Use a log scaling on the x-axis|
|logy|Use a log scaling on the y-axis|
|legend|Place a legend on the figure|
|ncol|Number of columns to apply to the legend|
The plot routine also accepts various options, which can be found in the matplotlib.pyplot.plot documentation
End of explanation
"""
nodeFlx.plot(xdim='universe', what='errors',
ylabel='Relative tally error [%]');
"""
Explanation: Passing what='errors' to the plot method plots the associated relative errors, rather than the tally data on the y-axis.
Similarly, passing a key from indexes sets the x-axis to be that specific index.
End of explanation
"""
xy.meshPlot('x', 'y', fixed={'energy': 0},
cbarLabel='Mesh-integrated flux $[n/cm^2/s]$',
title="Fast spectrum flux $[>0.625 eV]$");
"""
Explanation: Mesh Plots
For data with dimensionality greater than one, the meshPlot method can be used to plot some 2D slice of the data on a Cartesian grid. Passing a dictionary as the fixed argument restricts the tally data down to two dimensions.
The X and Y axes can be quantities from grids or indexes. If the quantity to be used for an axis is in the grids dictionary, then the appropriate spatial or energetic grid from the detector file will be used. Otherwise, the axis will reflect changes in a specific bin type. The following keyword arguments can be used in conjunction with the above options to format the mesh plots.
|Option|Action|
|------|------|
|cmap|Colormap to apply to the figure|
|cbarLabel|Label to apply to the colorbar|
|logColor|If true, use a logarithmic scale for the colormap|
|normalizer|Apply a custom non-linear normalizer to the colormap|
The cmap argument must be something that matplotlib can understand as a valid colormap [2]. This can be a string of any of the colormaps supported by matplotlib.
Since the xymesh detector has three dimensions (energy, x, and y), we must pick an energy group to plot.
End of explanation
"""
ax = spectrum.meshPlot('e', 'reaction', what='errors',
ylabel='Reaction type', cmap='PuBu_r',
cbarLabel="Relative error $[\%]$",
xlabel='Energy [MeV]', logColor=True,
logx=True);
ax.set_yticks([0.5, 1.5]);
ax.set_yticklabels([r'$\psi$', r'$U-235 \sigma_f$'], rotation=90,
verticalalignment='center');
"""
Explanation: The meshPlot also supports a range of labeling and plot options.
Here, we attempt to plot the flux and U-235 fission reaction rate errors as a function of energy, with
the two reaction rates separated on the y-axis. Passing logColor=True applies a logarithmic color scale to all the positive data. Data that is zero is not shown, and errors will be raised if the data contain negative quantities.
Here we also apply custom y-tick labels to reflect the reaction that is being plotted.
End of explanation
"""
xy.plot(fixed={'energy': 1, 'xmesh': 1},
xlabel='Y position',
ylabel='Thermal flux along x={}'
.format(xy.grids['X'][1, 0]));
"""
Explanation: Using the slicing arguments allows access to the 1D plot methods from before
End of explanation
"""
fig, axes = pyplot.subplots(1, 3, figsize=(16, 4))
fix = {'reaction': 0}
spectrum.plot(fixed=fix, ax=axes[0]);
spectrum.spectrumPlot(fixed=fix, ax=axes[1], normalize=False);
spectrum.spectrumPlot(fixed=fix, ax=axes[2]);
"""
Explanation: Spectrum Plots
The Detector objects are also capable of energy spectrum plots, if an associated energy grid is given. The normalize option will normalize the data per unit lethargy. This plot makes some additional assumptions about scaling and labeling, but supports all the same controls as the line plots above.
The spectrum plot method is designed to prepare plots of energy spectra. Supported arguments for the spectrumPlot method include
|Option|Default|Description|
|-|-|-|
|normalize|True|Normalize tallies per unit lethargy|
|fixed| None|Dictionary that controls matrix reduction|
|sigma|3|Level of confidence for statistical errors|
|xscale|'log'|Set the x scale to be log or linear|
|yscale|'linear'|Set the y scale to be log or linear|
The figure below demonstrates the default options and controls of the spectrumPlot routine by
1. using the less-than-helpful plot routine with no formatting
2. using spectrumPlot without normalization, to show the default labels and scaling
3. using spectrumPlot with normalization
Since our detector has energy bins and reaction bins, we need to reduce down to one-dimension with the fixed command.
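To make the normalize option concrete, here is a hand-rolled sketch of per-unit-lethargy normalization (the bin edges and tallies below are made up, not the detector's actual data): each tally is divided by the lethargy width $\ln(E_{hi}/E_{lo})$ of its bin.

```python
import numpy as np

e_bounds = np.array([1e-9, 1e-6, 1e-3, 1.0, 10.0])  # illustrative MeV bin edges
tallies = np.array([4.0, 3.0, 2.0, 1.0])            # illustrative per-bin tallies
lethargy_widths = np.log(e_bounds[1:] / e_bounds[:-1])
per_lethargy = tallies / lethargy_widths            # what normalize=True reports
```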
End of explanation
"""
labels = (
'flux',
r'$\sigma_f^{U-235}\psi$') # render as mathtype
spectrum.plot(labels=labels, loglog=True);
spectrum.spectrumPlot(labels=labels, legend='above', ncol=2);
"""
Explanation: Multiple line plots
Plots can be made against multiple bins, such as the spectrum in different materials or reactions, with the plot and spectrumPlot
methods. Below are the flux spectrum and the spectrum of the U-235 fission reaction rate from the same detector. The labels argument is used to label each individual plot, in the order of the bin index.
End of explanation
"""
hexFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
'hexplot_det0.m',
)
hexR = serpentTools.read(hexFile)
hexR.detectors
"""
Explanation: Hexagonal Detectors
SERPENT allows the creation of hexagonal detectors with the dh card, like
det hex2 2 0.0 0.0 1 5 5 0.0 0.0 1
det hex3 3 0.0 0.0 1 5 5 0.0 0.0 1
which would create two hexagonal detectors with different orientations. Type 2 detectors have two faces perpendicular to the x-axis, while type 3 detectors have faces perpendicular to the y-axis.
For more information, see the dh card from SERPENT wiki.
serpentTools is capable of storing data tallies and grid structures from hexagonal detectors in HexagonalDetector objects.
End of explanation
"""
hex2 = hexR.detectors['hex2']
hex2.tallies
hex2.indexes
"""
Explanation: Here, two HexagonalDetector objects are produced, with similar tallies and slicing methods as demonstrated above.
End of explanation
"""
hex2.pitch = 1
hex2.hexType = 2
hex2.hexPlot();
hex3 = hexR.detectors['hex3']
hex3.pitch = 1
hex3.hexType = 3
hex3.hexPlot();
"""
Explanation: Creating hexagonal mesh plots with these objects requires setting the pitch and hexType attributes.
End of explanation
"""
|
ernestyalumni/MLgrabbag | supervised-theano.ipynb | mit | import theano
import theano.tensor as T
# cf. https://github.com/lisa-lab/DeepLearningTutorials/blob/c4db2098e6620a0ac393f291ec4dc524375e96fd/code/logistic_sgd.py
"""
Explanation: I started here: Deep Learning tutorial
End of explanation
"""
import cPickle, gzip, numpy
import os
os.getcwd()
os.listdir( os.getcwd() )
f = gzip.open('./Data/mnist.pkl.gz')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
type(train_set), type(valid_set), type(test_set)
type(train_set[0]), type(train_set[1])
def shared_dataset(data_xy):
""" Function that loads the dataset into shared variables
The reason we store our dataset in shared variables is to allow
Theano to copy it into the GPU memory (when code is run on GPU).
Since copying data into the GPU is slow, copying a minibatch everytime
is needed (the default behavior if the data is not in a shared
variable) would lead to a large decrease in performance.
"""
data_x, data_y = data_xy
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
# When storing data on the GPU it has to be stored as floats
# therefore we will store the labels as ``floatX`` as well
# (``shared_y`` does exactly that). But during our computations
# we need them as ints (we use labels as index, and if they are
# floats it doesn't make sense) therefore instead of returning
# ``shared_y`` we will have to cast it to int. This little hack
# lets us get around this issue
return shared_x, T.cast(shared_y, 'int32')
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
batch_size = 500 # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
dir(train_set_x)
"""
Explanation: cf. 3.2 Datasets, 3.2.1 MNIST Dataset
End of explanation
"""
os.listdir("../DeepLearningTutorials/code")
import subprocess
subprocess.call(['python','../DeepLearningTutorials/code/logistic_sgd.py'])
# the THEANO_FLAGS environment-variable prefix needs a shell to interpret it
subprocess.call('THEANO_FLAGS=device=gpu,floatX=float32 python '
                '../DeepLearningTutorials/code/logistic_sgd.py', shell=True)
execfile('../DeepLearningTutorials/code/logistic_sgd_b.py')
os.listdir( '../' )
import sklearn
"""
Explanation: GPU note
Using the GPU
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python myscriptIwanttorunonthegpu.py
From theano's documentation, "Using the GPU", "Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware, but float64 computations are still relatively slow (Jan 2010)." Hence floatX=float32.
I ran the script logistic_sgd.py locally, that's found in DeepLearningTutorials from lisa-lab's github
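Equivalently, the flags can be set from Python before theano is imported; this is only a sketch of the same THEANO_FLAGS mechanism, since the variable is read once at import time:

```python
import os

# Must run before `import theano`; changes made afterwards are ignored.
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
```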
End of explanation
"""
|
saudijack/unfpyboot | Day_02/00_Scipy/04_Breakout_trapezoid_rule_solution.ipynb | mit | %pylab inline
def trapz(x, y):
return 0.5*np.sum((x[1:]-x[:-1])*(y[1:]+y[:-1]))
"""
Explanation: Basic numerical integration: the trapezoid rule
Illustrates: basic array slicing, functions as first class objects.
In this exercise, you are tasked with implementing the simple trapezoid rule
formula for numerical integration. If we want to compute the definite integral
$$
\int_{a}^{b}f(x)dx
$$
we can partition the integration interval $[a,b]$ into smaller subintervals,
and approximate the area under the curve for each subinterval by the area of
the trapezoid created by linearly interpolating between the two function values
at each end of the subinterval:
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Trapezoidal_rule_illustration.png/316px-Trapezoidal_rule_illustration.png" />
The blue line represents the function $f(x)$ and the red line
is the linear interpolation. By subdividing the interval $[a,b]$, the area under $f(x)$ can thus be approximated as the sum of the areas of all
the resulting trapezoids.
If we denote by $x_{i}$ ($i=0,\ldots,n,$ with $x_{0}=a$ and
$x_{n}=b$) the abscissas where the function is sampled, then
$$
\int_{a}^{b}f(x)dx\approx\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)\left(f(x_{i})+f(x_{i-1})\right).
$$
The common case of using equally spaced abscissas with spacing $h=(b-a)/n$ reads simply
$$
\int_{a}^{b}f(x)dx\approx\frac{h}{2}\sum_{i=1}^{n}\left(f(x_{i})+f(x_{i-1})\right).
$$
One frequently receives the function values already precomputed, $y_{i}=f(x_{i}),$
so the equation above becomes
$$
\int_{a}^{b}f(x)dx\approx\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)\left(y_{i}+y_{i-1}\right).
$$
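As a quick sanity check of the last formula before writing any functions: with $x = [0, 1, 2]$ and $y = x^2$, the rule gives $\frac{1}{2}\left[(1)(0+1) + (1)(1+4)\right] = 3$, close to the exact value $\int_0^2 x^2 dx = 8/3$.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = x**2
approx = 0.5 * np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))
# approx is 3.0, versus the exact integral 8/3
```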
Exercises
1
Write a function trapz(x, y), that applies the trapezoid formula to pre-computed values,
where x and y are 1-d arrays.
End of explanation
"""
def trapzf(f, a, b, npts=100):
x = np.linspace(a, b, npts)
y = f(x)
return trapz(x, y)
"""
Explanation: 2
Write a function trapzf(f, a, b, npts=100) that accepts a function f, the endpoints a
and b and the number of samples to take npts. Sample the function uniformly at these
points and return the value of the integral.
End of explanation
"""
exact = 9.0
x = np.linspace(0, 3, 50)
y = x**2
print exact
print trapz(x, y)
def f(x): return x**2
print trapzf(f, 0, 3, 50)
"""
Explanation: 3
Verify that both functions above are correct by showing that they produces correct values
for a simple integral such as $\int_0^3 x^2$.
End of explanation
"""
npts = [5, 10, 20, 50, 100, 200]
err = []
for n in npts:
err.append(trapzf(f, 0, 3, n)-exact)
plt.semilogy(npts, np.abs(err))
plt.title(r'Trapezoid approximation to $\int_0^3 x^2$')
plt.xlabel('npts')
plt.ylabel('Error')
"""
Explanation: 4
Repeat the integration for several values of npts, and plot the error as a function of npts
for the integral in #3.
End of explanation
"""
def f(x):
return (x-3)*(x-5)*(x-7)+85
x = linspace(0, 10, 200)
y = f(x)
"""
Explanation: An illustration using matplotlib and scipy
We define a function with a little more complex look
End of explanation
"""
a, b = 1, 9
xint = x[logical_and(x>=a, x<=b)][::30]
yint = y[logical_and(x>=a, x<=b)][::30]
"""
Explanation: Choose a region to integrate over and take only a few points in that region
End of explanation
"""
plot(x, y, lw=2)
axis([0, 10, 0, 140])
fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
text(0.5 * (a + b), 30,r"$\int_a^b f(x)dx$", horizontalalignment='center', fontsize=20);
"""
Explanation: Plot both the function and the area below it in the trapezoid approximation
End of explanation
"""
from scipy.integrate import quad, trapz
integral, error = quad(f, 1, 9)
print "The integral is:", integral, "+/-", error
print "The trapezoid approximation with", len(xint), "points is:", trapz(yint, xint)
"""
Explanation: In practice, we don't need to implement numerical integration ourselves, as scipy has both basic trapezoid
rule integrators and more sophisticated ones. Here we illustrate both:
End of explanation
"""
|
borja876/Thinkful-DataScience-Borja | Amazon+Reviews 180108.ipynb | mit | #Import data from json file and create a list
data = []
with open('/home/borjaregueral/Digital_Music_5.json') as f:
for line in f:
data.append(json.loads(line))
#Create a dataframe with the columns that are interesting for this exercise
#Columns left out: 'helpful', 'reviewTime', 'reviewerID','reviewerName'
names = ["overall", "reviewText"]
amazonraw = pd.DataFrame(data, columns=names)
amazonraw['overall'] = amazonraw['overall'].astype(int)
amazonraw.head()
#Analyse the dataset: types, length of the dataframe and NaN
amazonraw.info()
amazonraw.dtypes
"""
Explanation: Import & Analyze Data
End of explanation
"""
amazonraw.overall.describe()
#Change the Overall variable into a categorical variable
#Ratings of 3 or lower are considered negative, since the mean rating is 4.25.
#The hypothesis is that, although the above-mentioned ratings could be considered positive, they are in fact negative
amazonraw.loc[amazonraw['overall'] <= 3, 'Sentiment'] = 0
amazonraw.loc[amazonraw['overall'] >=4 , 'Sentiment'] = 1
amazonraw.loc[amazonraw['Sentiment'] == 0, 'Category'] ='Negative'
amazonraw.loc[amazonraw['Sentiment'] == 1, 'Category'] = 'Positive'
#Count each of the categories
a = amazonraw['Category'].value_counts(normalize=True)
b = pd.value_counts(amazonraw['Category'].values, sort=False)
print('Number of ocurrencies:\n', b)
print('\n')
print('Frequency of each value:\n', a)
#Downsample majority class (due to computational restrictions we downsample the majority instead of upsampling the minority)
# Separate majority and minority classes
amazon_majority = amazonraw[amazonraw.Sentiment == 1]
amazon_minority = amazonraw[amazonraw.Sentiment == 0]
# Downsample majority class
amazon_majority_downsampled = resample(amazon_majority, replace=False, n_samples=12590, random_state=123)
# Combine minority class with downsampled majority class
amazon = pd.concat([amazon_majority_downsampled, amazon_minority])
# Display new class counts
amazon.Category.value_counts()
#Graphical representation of the positive and negative reviews
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
sns.set(style="white")
ax = sns.countplot(x="overall", data=amazonraw)
plt.title('Amazon Ratings')
plt.subplot(1, 2, 2)
sns.set(style="white")
ax = sns.countplot(x="Category", data=amazon)
plt.title('Categories in the downsampled dataset')
#Create new dataframe that has the Categories, Overall scores, Sentiment and ReviewText
names = ['Category',"overall",'Sentiment', "reviewText"]
amazon1 = pd.DataFrame(amazon, columns=names)
amazon.head()
#Rows are reshuffled; frac=1 keeps the full downsampled dataset
amazon2 = amazon1.sample(frac=1, random_state=7)
#Predictor and predicted variables are formed
X = amazon2['reviewText']
y = amazon2['Sentiment']
#Split the data set into train and test 70/30
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3, random_state=135)
#KFold for cross validation analysis
kf = KFold(5)
"""
Explanation: Build Sentiment Scores and Categories
End of explanation
"""
#Analysis starts with Bag of Words and common English words are extracted
vect = CountVectorizer(analyzer = 'word', stop_words='english').fit(X_train)
X_trainvec = vect.transform(X_train)
X_testvec = vect.transform(X_test)
#Count the number of english words and take a look at the type of words that are extracted
print("Number of stop words is :", len(ENGLISH_STOP_WORDS), "\n")
print("Examples: ", list(ENGLISH_STOP_WORDS)[::10])
#Take a look at the features identified by bag of words
features_names = vect.get_feature_names()
print(len(features_names))
print("\n")
# print first 20 features
print(features_names[:20])
print("\n")
# print last 20 features
print(features_names[-20:])
#Size of the X_trainvector sparse matrix
print(X_trainvec.shape)
X_trainvec
#Check the size of the y_train vector to avoid problems when running the logistic regression model
y_train.shape
"""
Explanation: Bag of Words
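As a minimal hand-rolled sketch of what a bag-of-words representation produces (a toy corpus, not the review data; the CountVectorizer call above does this at scale and also drops stop words):

```python
from collections import Counter

docs = ["great album great songs", "boring songs"]
vocab = sorted({word for doc in docs for word in doc.split()})
counts = [[Counter(doc.split())[word] for word in vocab] for doc in docs]
# vocab  -> ['album', 'boring', 'great', 'songs']
# counts -> [[1, 0, 2, 1], [0, 1, 0, 1]]
```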
End of explanation
"""
# Initialize and fit the model.
l3 = BernoulliNB()
l3.fit(X_trainvec, y_train)
# Predict on training set
predtrain_y = l3.predict(X_trainvec)
#Predict on the test set with the model fitted on the training data
#(refitting on the test set would leak the test labels)
predtest_y = l3.predict(X_testvec)
#Evaluation of the model (testing)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
'Bernoulli accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(l3,X_testvec,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Bernoulli
End of explanation
"""
# Initialize and fit the model.
lr = LogisticRegression()
lr.fit(X_trainvec, y_train)
#Once the model has been trained, evaluate it on the test dataset
# Predict on test set (using the model fitted on the training data)
predtest_y = lr.predict(X_testvec)
#Evaluate model (test set)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
'Logistics accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(lr,X_testvec,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Logistic Model
End of explanation
"""
vect2 = TfidfVectorizer(min_df=20, analyzer = 'word', stop_words = 'english',
ngram_range = (1,3)
).fit(X_train)
X_train_vectorized = vect2.transform(X_train)
X_test_vectorized = vect2.transform(X_test)
features_names = vect2.get_feature_names()
print(len(features_names))
"""
Explanation: TFIDF
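To make the difference from plain counts concrete, here is a hand-rolled sketch of the smoothed inverse-document-frequency weighting (matching scikit-learn's smooth_idf convention; the document frequencies below are made up):

```python
import math

n_docs = 4
doc_freq = {"the": 4, "album": 2, "masterpiece": 1}  # hypothetical frequencies
idf = {w: math.log((1 + n_docs) / (1 + df)) + 1 for w, df in doc_freq.items()}
# common words ("the") get the lowest weight, rare words the highest
```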
End of explanation
"""
# Initialize and fit the model.
lr2 = LogisticRegression(class_weight='balanced')
#Create range of values to fit parameters
k1 = ['l1', 'l2']
k2 = np.arange(50) + 1
k3 = ['balanced', None]
parameters = {'penalty': k1,
'C': k2,
'class_weight':k3}
#Fit parameters
lrr = GridSearchCV(lr2, param_grid=parameters, cv=kf)
#Fit on Training set
lrr.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", lrr.best_params_)
#Once the model has been tuned, evaluate it on the test dataset
# Predict on test set with the tuned model (fitted on the training data above)
predtest2_y = lrr.predict(X_test_vectorized)
#Evaluate model (test set)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest2_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest2_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest2_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
'Losgistics model accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(lr2,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Logistic Model
End of explanation
"""
# Initialize and fit the model.
l3 = BernoulliNB()
#Create range of values to fit parameters
k1 = np.arange(50) + 1
parameters = {'alpha': k1
}
#Fit parameters
l33 = GridSearchCV(l3, param_grid=parameters, cv=kf)
#Fit on Training set
l33.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", l33.best_params_)
# Predict on the test data set with the tuned model (fitted on the training data above)
predtest3_y = l33.predict(X_test_vectorized)
#Evaluation of the model (testing)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest3_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest3_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest3_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
'Bernoulli accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(l33,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Bernoulli Model
End of explanation
"""
# Initialize and fit the model
KNN = KNeighborsClassifier(n_jobs=-1)
#Create range of values to fit parameters
k1 = [1,3,5,7,9,11,13,15,17,19,21]
k3 = ['uniform', 'distance']
parameters = {'n_neighbors': k1,
'weights':k3}
#Fit parameters
clf = GridSearchCV(KNN, param_grid=parameters, cv=kf)
#Fit the tuned model
clf.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", clf.best_params_)
# Predict on the test dataset with the tuned model (fitted on the training data above)
predtest3_y = clf.predict(X_test_vectorized)
#Evaluate model on the test set
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest3_y, target_names=target_names))
#Create confusion matrix
confusion = confusion_matrix(y_test, predtest3_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest3_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
#Print Results
print((
'KNN accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(clf,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: KNN model
End of explanation
"""
#For the Random Forest hyperparameter tuning, due to computational restrictions,
#grid search is applied to one parameter at a time on the train set,
#updating the value as we move along the hyperparameter tuning
#Number of trees
param_test1 = {'n_estimators':range(300,400,20)}
gsearch1 = GridSearchCV(estimator = RandomForestClassifier(),
param_grid = param_test1, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch1.fit(X_train_vectorized, y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Max depth and min sample split
#Values of max_depth from 2-60 were tried, all scoring under 0.8641; the range 61-80 is searched here
#to find the value that improves accuracy.
#For min_samples_split, values from 50-500 were tried; values between 80-120 improve accuracy.
param_test2 = {'max_depth':range(61,80,2), 'min_samples_split': range(80,121,20)}
gsearch2 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360),
param_grid = param_test2, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch2.fit(X_train_vectorized, y_train)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Tune min_samples_leaf, keeping the min_samples_split found above
param_test3 = {'min_samples_leaf':range(2,33,10)}
gsearch3 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360, max_depth = 65 , min_samples_split = 80 ),
param_grid = param_test3, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch3.fit(X_train_vectorized, y_train)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
#Based on the results shown for min_samples_leaf, we leave it at the default value
#Tune the split criterion
param_test4 = {'criterion':['gini', 'entropy']}
gsearch4 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360, max_depth = 65 , min_samples_split = 80),
param_grid = param_test4, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch4.fit(X_train_vectorized, y_train)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
#Predict on the test dataset with the tuned model (fitted on the training data above)
predtestrf_y = gsearch4.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Random Forest accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(gsearch4,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Random Forest
End of explanation
"""
# Train model
OTM = DecisionTreeClassifier()
#Create range of values to fit parameters
k2 = ['auto', 'sqrt', 'log2']
parameters = {'max_features': k2
}
#Fit parameters
OTM1 = GridSearchCV(OTM, param_grid=parameters, cv=kf)
#Fit the tuned model
OTM1.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", OTM1.best_params_)
#Predict on the test dataset with the tuned model (fitted on the training data above)
predtestrf_y = OTM1.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Decision Tree accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(OTM1,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Decision Tree
End of explanation
"""
# Train model
svc = SVC()
#Create range of values to fit parameters
ks1 = np.arange(20)+1
ks4 = ['linear','rbf']
parameters = {'C': ks1,
'kernel': ks4}
#Fit parameters
svc1 = GridSearchCV(svc, param_grid=parameters, cv=kf)
#Fit the tuned model
svc1.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", svc1.best_params_)
#Predict on the test set with the tuned model (fitted on the training data above)
predtestsvc_y = svc1.predict(X_test_vectorized)
#Test Scores
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtestsvc_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestsvc_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestsvc_y, margins=True)
print((
'SVC accuracy:{}\n'
).format(cross_val_score(svc1,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: SVC
End of explanation
"""
#For the Gradient Boosting hyperparameter tuning, due to computational restrictions,
#grid search is applied to one parameter at a time on the train set,
#updating the value as we move along the hyperparameter tuning
#Number of trees
param_test1 = {'n_estimators':range(20,90,10)}
gsearch1 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, min_samples_split=500,min_samples_leaf=50,max_depth=8,max_features='sqrt',subsample=0.8,random_state=10),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch1.fit(X_train_vectorized, y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Max depth and min sample split
param_test2 = {'max_depth':range(5,20,2), 'min_samples_split':range(200,1001,200)}
gsearch2 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80, max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch2.fit(X_train_vectorized, y_train)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Re run the min_sample split with the min_sample leaf
param_test3 = {'min_samples_split':range(200,1001,200),'min_samples_leaf':range(30,71,10)}
gsearch3 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80,max_depth=19,min_samples_split=600,max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch3.fit(X_train_vectorized, y_train)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
#Max features considering the results obtained
#for the combination of the 'min_samples_split', 'min_samples_leaf' and 'max_depth'
#The value of 600 has been maintained as it is the one that gives a better accuracy for every value of 'max_depth'
param_test4 = {'max_features':range(60,74,2)}
gsearch4 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80,max_depth=19,min_samples_split=600,min_samples_leaf=40,max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch4.fit(X_train_vectorized, y_train)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
#Tuning the subsample
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9,0.95]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,
n_estimators=80,max_depth=19,min_samples_split=600,
min_samples_leaf=40,max_features=62,
subsample=0.8, random_state=10),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch5.fit(X_train_vectorized, y_train)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
#Instead of having a 10% learning rate, we halve the learning rate and double the number of trees to see if we
#can improve the accuracy
param_test5 = {'subsample':[0.8,0.85,0.9,0.95]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.05, n_estimators=160,
max_depth=19,min_samples_split=600,
min_samples_leaf=40,max_features=62,
subsample=0.9, random_state=10),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch5.fit(X_train_vectorized, y_train)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
# Predict on the test set with the tuned model (fitted on the training data above)
predtestrf_y = gsearch5.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Gradient Boosting accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(gsearch5,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
"""
Explanation: Gradient Boosting
End of explanation
"""
|
NervanaSystems/neon_course | 08 Overfitting Tutorial.ipynb | apache-2.0 | from neon.initializers import Gaussian
from neon.optimizers import GradientDescentMomentum, Schedule
from neon.layers import Conv, Dropout, Activation, Pooling, GeneralizedCost
from neon.transforms import Rectlin, Softmax, CrossEntropyMulti, Misclassification
from neon.models import Model
from neon.data import CIFAR10
from neon.callbacks.callbacks import Callbacks
from neon.backends import gen_backend
be = gen_backend(batch_size=128, backend='gpu')
# hyperparameters
learning_rate = 0.05
weight_decay = 0.001
num_epochs = 25
print "Loading Data"
dataset = CIFAR10(path='data/', normalize=False,
contrast_normalize=True, whiten=True,
pad_classes=True) # CIFAR10 has 10 classes, network has 16 outputs, so we pad some extra classes.
train_set = dataset.train_iter
valid_set = dataset.valid_iter
print "Building Model"
init_uni = Gaussian(scale=0.05)
opt_gdm = GradientDescentMomentum(learning_rate=float(learning_rate), momentum_coef=0.9,
wdecay=float(weight_decay),
schedule=Schedule(step_config=[200, 250, 300], change=0.1))
relu = Rectlin()
conv = dict(init=init_uni, batch_norm=False, activation=relu)
convp1 = dict(init=init_uni, batch_norm=False, activation=relu, padding=1)
convp1s2 = dict(init=init_uni, batch_norm=False, activation=relu, padding=1, strides=2)
layers = [
Conv((3, 3, 64), **convp1),
Conv((3, 3, 64), **convp1s2),
Conv((3, 3, 128), **convp1),
Conv((3, 3, 128), **convp1s2),
Conv((3, 3, 128), **convp1),
Conv((1, 1, 128), **conv),
Conv((1, 1, 16), **conv),
Pooling(8, op="avg"),
Activation(Softmax())]
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, output_file='data.h5', eval_set=valid_set, eval_freq=1)
print "Training"
mlp.fit(train_set, optimizer=opt_gdm, num_epochs=num_epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
"""
Explanation: Overfitting Tutorial
Many deep learning models run the risk of overfitting on the training set. When this happens, the model fails to generalize to unseen data, such as a separate validation set. Here we present a simple tutorial on how to recognize overfitting using our visualization tools, and on how to apply Dropout layers to prevent it.
We use a simple network of convolutional layers on the CIFAR-10 dataset, a dataset of images belonging to 10 categories.
The code below will build the model and train on the CIFAR-10 dataset for 25 epochs (~2 minutes on Titan X GPUs), displaying both the training cost as well as the cost on the validation set.
Note: We highly recommend users run this model on Maxwell GPUs.
End of explanation
"""
from neon.visualizations.figure import cost_fig, hist_fig, deconv_summary_page
from neon.visualizations.data import h5_cost_data, h5_hist_data, h5_deconv_data
from bokeh.plotting import output_notebook, show
cost_data = h5_cost_data('data.h5', False)
output_notebook()
show(cost_fig(cost_data, 400, 800, epoch_axis=False))
"""
Explanation: Overfitting
You should notice that in the logs above, after around Epoch 15, the model begins to overfit. Even though the cost on the training set continues to decrease, the validation loss flattens (even increasing slightly). We can visualize these effects using the code below.
Note: The same plots can be created using our nvis command line utility (see: http://neon.nervanasys.com/docs/latest/tools.html)
End of explanation
"""
layers = [
Conv((3, 3, 64), **convp1),
Conv((3, 3, 64), **convp1s2),
Dropout(keep=.5), # Added Dropout
Conv((3, 3, 128), **convp1),
Conv((3, 3, 128), **convp1s2),
Dropout(keep=.5), # Added Dropout
Conv((3, 3, 128), **convp1),
Conv((1, 1, 128), **conv),
Conv((1, 1, 16), **conv),
Pooling(8, op="avg"),
Activation(Softmax())]
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, output_file='data.h5', eval_set=valid_set, eval_freq=1)
print("Training")
mlp.fit(train_set, optimizer=opt_gdm, num_epochs=num_epochs, cost=cost, callbacks=callbacks)
print('Misclassification error = %.1f%%' % (mlp.eval(valid_set, metric=Misclassification())*100))
"""
Explanation: This situation illustrates the importance of plotting the validation loss (blue) in addition to the training cost (red). The training cost may mislead the user into thinking that model is continuing to perform well, but we can see from the validation loss that the model has begun to overfit.
Dropout layers
To correct overfitting, we introduce Dropout layers to the model, as shown below. Dropout layers randomly silence a subset of units for each minibatch, and are an effective means of preventing overfitting.
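As a minimal numpy sketch of the mechanism (not the neon implementation), inverted dropout keeps each unit with probability `keep` and rescales the survivors by `1/keep` so the expected activation is unchanged; at inference time it is the identity:

```python
import numpy as np

rng = np.random.RandomState(0)

def dropout_forward(x, keep=0.5, train=True):
    """Inverted dropout: randomly silence units during training and
    rescale the survivors by 1/keep; act as the identity at inference."""
    if not train:
        return x
    mask = (rng.rand(*x.shape) < keep) / keep  # entries are 0 or 1/keep
    return x * mask
```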
End of explanation
"""
from neon.visualizations.figure import cost_fig, hist_fig, deconv_summary_page
from neon.visualizations.data import h5_cost_data, h5_hist_data, h5_deconv_data
from bokeh.plotting import output_notebook, show
cost_data = h5_cost_data('data.h5', False)
output_notebook()
show(cost_fig(cost_data, 400, 800, epoch_axis=False))
"""
Explanation: We then plot the results of the training run below.
End of explanation
"""
|
fotis007/python_intermediate | Python_2_1.ipynb | gpl-3.0 | #List
a = [1, 5, 2, 84, 23]
b = list("hallo")
c = range(10)
list(c)
#dictionary
z = dict(a=2,b=5,c=1)
z
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Python-für-Fortgeschrittene" data-toc-modified-id="Python-für-Fortgeschrittene-1"><span class="toc-item-num">1 </span>Python für Fortgeschrittene</a></div><div class="lev2 toc-item"><a href="#Überblick-über-den-Kurs" data-toc-modified-id="Überblick-über-den-Kurs-11"><span class="toc-item-num">1.1 </span>Überblick über den Kurs</a></div><div class="lev2 toc-item"><a href="#1.-Sitzung:-Wiederholung" data-toc-modified-id="1.-Sitzung:-Wiederholung-12"><span class="toc-item-num">1.2 </span>1. Sitzung: Wiederholung</a></div><div class="lev3 toc-item"><a href="#Datenstrukturen-im-Überblick" data-toc-modified-id="Datenstrukturen-im-Überblick-121"><span class="toc-item-num">1.2.1 </span>Datenstrukturen im Überblick</a></div><div class="lev2 toc-item"><a href="#Aufgabe" data-toc-modified-id="Aufgabe-13"><span class="toc-item-num">1.3 </span>Aufgabe</a></div><div class="lev2 toc-item"><a href="#Programmsteuerung" data-toc-modified-id="Programmsteuerung-14"><span class="toc-item-num">1.4 </span>Programmsteuerung</a></div><div class="lev2 toc-item"><a href="#Aufgaben" data-toc-modified-id="Aufgaben-15"><span class="toc-item-num">1.5 </span>Aufgaben</a></div><div class="lev2 toc-item"><a href="#Funktionen" data-toc-modified-id="Funktionen-16"><span class="toc-item-num">1.6 </span>Funktionen</a></div><div class="lev2 toc-item"><a href="#Aufgaben" data-toc-modified-id="Aufgaben-17"><span class="toc-item-num">1.7 </span>Aufgaben</a></div><div class="lev2 toc-item"><a href="#Dateien-lesen-und-schreiben" data-toc-modified-id="Dateien-lesen-und-schreiben-18"><span class="toc-item-num">1.8 </span>Dateien lesen und schreiben</a></div><div class="lev2 toc-item"><a href="#Aufgabe" data-toc-modified-id="Aufgabe-19"><span class="toc-item-num">1.9 </span>Aufgabe</a></div><div class="lev2 toc-item"><a href="#Reguläre-Ausdrücke" data-toc-modified-id="Reguläre-Ausdrücke-110"><span class="toc-item-num">1.10 </span>Reguläre Ausdrücke</a></div><div 
class="lev2 toc-item"><a href="#Aufgabe" data-toc-modified-id="Aufgabe-111"><span class="toc-item-num">1.11 </span>Aufgabe</a></div>
# Python for Advanced Users
## Course Overview
1. Review of basic Python
2. Functional programming 1: iterators, list comprehensions, map and filter
3. Structuring programs; functional programming 2: generators
4. Graphs
5. Data analysis 1: numpy
6. Data analysis 2: pandas
7. Data analysis 3: matplotlib
8. Data analysis 4: machine learning
9. Working with XML: lxml
## Session 1: Review
<h3>Data Structures at a Glance</h3>
<ul>
<li>Sequences (ordered collections)</li>
<ul>
<li>String (contains a sequence of Unicode characters) <b>not</b> mutable</li>
<li>List (contains elements of the same data type; arbitrary length) mutable</li>
<li>Tuple (contains elements of different data types; fixed length) <b>not</b> mutable</li>
<li>namedtuple (a tuple whose fields have names) <b>not</b> mutable</li>
<li>Range (sequence of numbers) <b>not</b> mutable</li>
<li>deque (double-ended queue) mutable</li>
</ul>
<li>Maps (unordered mappings)</li>
<ul>
<li>Dictionary (contains key-value pairs)</li>
<li>Counter</li>
<li>OrderedDict</li>
</ul>
<li>Set (group of elements without duplicates)</li>
<ul>
<li>Set (contains unordered elements without duplicates; mutable)</li>
<li>Frozenset (like a set, but immutable)</li>
</ul>
</ul>
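One possible set of assignments, one per structure listed above, sketched with the standard library's collections module:

```python
from collections import namedtuple, deque, Counter, OrderedDict

s = "hallo"                       # str, immutable
lst = [1, 5, 2]                   # list, mutable
t = ("a", 1)                      # tuple, immutable
Point = namedtuple("Point", ["x", "y"])
p = Point(1, 2)                   # namedtuple, immutable
r = range(10)                     # range, immutable
d = deque([1, 2, 3])              # deque, mutable
m = {"a": 2, "b": 5}              # dict
c = Counter("hallo")              # Counter
o = OrderedDict(a=1, b=2)         # OrderedDict
st = {1, 2, 3}                    # set, mutable
fs = frozenset(st)                # frozenset, immutable
```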
End of explanation
"""
#tuple
a = ("a", 1)
#check
type(a)
"""
Explanation: <h2>Exercise</h2>
<ul><li>Write a comparable assignment for each of the data structures listed above.</li></ul>
End of explanation
"""
a = True
b = False
if a == True:
print("a is true")
else:
print("a is not true")
"""
Explanation: <h2>Program Control</h2>
<p>Program control: conditional branching.</p>
<p>With if, the program flow can branch depending on the truth value of a condition, e.g.:</p>
End of explanation
"""
#chr(x) returns the character with unicode code point x
for c in range(80,90):
print(chr(c),end=" ")
"""
Explanation: With <b>for</b> you can loop over anything iterable, e.g.:
End of explanation
"""
for c in range(65,112):
if c % 2 == 0:
print(chr(c))
"""
Explanation: <h2>Exercises</h2>
<ul><li>Print all letters from A to z whose Unicode code point is an even number.</li></ul>
End of explanation
"""
satz = "Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften."
print(satz.count("a"))
"""
Explanation: <ul><li>Count how often the letter "a" occurs in the following sentence: "Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften."</li></ul>
End of explanation
"""
#this function divides two numbers:
def div(a, b):
return a / b
#test
div(6,2)
"""
Explanation: <h2>Functions</h2>
<p>Functions serve to modularize a program and reduce its complexity. They enable code reuse and make it easier to track down errors.</p>
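As a small aside (not from the course notes), default and keyword arguments are a common way to make such helper functions more flexible; a sketch building on the division example:

```python
def div(a, b=2):
    """Divide a by b; b defaults to 2 when not given."""
    return a / b

print(div(6))       # uses the default divisor
print(div(6, b=3))  # keyword argument overrides it
```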
End of explanation
"""
a = "Hallo"
def count_vowels(s):
result = 0
for i in s:
if i in "AEIOUaeiou":
result += 1
return result
count_vowels(a)
s = "hallo"
for i in s:
print(i)
"""
Explanation: <h2>Exercises</h2>
<p>Write a function that counts the number of vowels in a string.</p>
End of explanation
"""
import re
words = []
with open("goethe.txt", "r", encoding="utf-8") as fin:
    for line in fin:
        words.extend(re.findall(r"\w+", line))
"""
Explanation: <h2>Reading and Writing Files</h2>
<p>With open(file, mode='r', encoding=None) you can read or write files.</p>
<p><b>modes:</b> <br/>
"r" - read (default)<br/>
"w" - write. Deletes any existing contents.<br/>
"a" - append. Appends new contents.<br/>
"t" - text (default) <br/>
"b" - binary. <br/>
"x" - exclusive. Opens the file for writing. Raises an error if the file already exists.</p>
<p> <b>encoding</b><br/>
"utf-8"<br/>
"ascii"<br/>
"cp1252"<br/>
"iso-8859-1"<br/></p>
"""
import re
s = "Dies ist ein Beispiel."
re.findall(r"[A-ZÄÖÜ]", s)
re.findall(r"\w+", s)
"""
Explanation: <h2>Exercise</h2>
<p>Write this text to a file named "goethe.txt" (utf-8):<br/>
<code>
Johann Wolfgang von Goethe (* 28. August 1749 in Frankfurt am Main; † 22. März 1832 in Weimar), geadelt 1782, gilt als einer der bedeutendsten Repräsentanten deutschsprachiger Dichtung.
Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften. Daneben ist sein umfangreicher Briefwechsel von literarischer Bedeutung. Goethe war Vorbereiter und wichtigster Vertreter des Sturm und Drang. Sein Roman Die Leiden des jungen Werthers machte ihn in Europa berühmt. Gemeinsam mit Schiller, Herder und Wieland verkörpert er die Weimarer Klassik. Im Alter wurde er auch im Ausland als Repräsentant des geistigen Deutschlands angesehen.
Am Hof von Weimar bekleidete er als Freund und Minister des Herzogs Carl August politische und administrative Ämter und leitete ein Vierteljahrhundert das Hoftheater.
Im Deutschen Kaiserreich wurde er „zum Kronzeugen der nationalen Identität der Deutschen“[1] und als solcher für den deutschen Nationalismus vereinnahmt. Es setzte damit eine Verehrung nicht nur des Werks, sondern auch der Persönlichkeit des Dichters ein, dessen Lebensführung als vorbildlich empfunden wurde. Bis heute zählen Gedichte, Dramen und Romane von ihm zu den Meisterwerken der Weltliteratur.</code>
<h2>Regular Expressions</h2>
<ul>
<li>Character classes<br/>e.g. '.' (here and in the following without quotes) = any character
<li>Quantifiers<br/>e.g. '+' = one or arbitrarily many of the preceding character: 'ab+' matches 'ab', 'abb', 'abbbbb', but not 'abab'
<li>Positions<br/>e.g. '^' = at the start of the line
<li>Miscellaneous<br/>groups (x), alternation '|', non-greedy: ?, '\' escape character
</ul>
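The list also mentions groups, alternation and non-greedy quantifiers; a small illustrative sketch of those three features:

```python
import re

s = "<b>bold</b> and <i>italic</i>"
print(re.findall(r"<.+>", s))     # greedy: one match spanning the whole string
print(re.findall(r"<.+?>", s))    # non-greedy: each tag matched separately
print(re.findall(r"<(b|i)>", s))  # a group with alternation returns the group contents
```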
<p>Example. Exercise: find all capital letters in a string s.
End of explanation
"""
|
bocklund/notebooks | thermodynamics/.ipynb_checkpoints/miscibility-gaps-checkpoint.ipynb | mit | import warnings
warnings.simplefilter('ignore') # ignore warnings for nicer output
import numpy as np
from sympy import symbols, log, lambdify, solve
import scipy.constants
from ipywidgets import interact
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure
from bokeh.models import Span, layouts
output_notebook()
"""
Explanation: Miscibility Gaps
End of explanation
"""
# define the Gibbs free energy symbolically
ua, ub, S, Hmix, T, xb = symbols('ua ub S Hmix T xb')
xa = 1 - xb
G = ua*xa + ub*xb + Hmix*xa*xb - T*S
# define the ideal mixing entropy and substitute that in our Gibbs energy function
ideal_mixing_entropy = -scipy.constants.R*(xa*log(xa)+xb*log(xb))
G = G.subs(S, ideal_mixing_entropy)
"""
Explanation: We define a symbolic expression G that gives $G$ for a given phase. $G$ is given as a regular solution for a binary A, B system. The mixing enthalpy, $H_\textrm{mix}$, is a changeable parameter (Hmix). The entropy, $S$, could conceivably be modeled in several ways, but here a simple ideal mixing approximation is substituted.
End of explanation
"""
xb_plot = np.linspace(0, 1, 1000) # discretized plotting domain
p = figure(title="Free Energy, $G$", plot_height=300, plot_width=600, x_range=(0,1))
p.xaxis.axis_label = 'X_B'
p.yaxis.axis_label = 'G'
r = p.line(xb_plot, lambdify(xb, G.subs({T: 300, Hmix:0, ua: -2000, ub:-1000}), 'numpy')(xb_plot), line_width=3)
s1 = Span(location=None, dimension='height', line_color='red', line_dash='dashed', line_width=3)
s2 = Span(location=None, dimension='height', line_color='red', line_dash='dashed', line_width=3)
p.add_layout(s1)
p.add_layout(s2)
phase_diag = figure(title="Phase diagram", plot_height=300, plot_width=600, x_range=(0,1), y_range=(0,2000))
phase_diag.xaxis.axis_label = 'X_B'
phase_diag.yaxis.axis_label = 'T'
xt = phase_diag.line(np.linspace(0,0,2000), np.tile(np.linspace(0,2000,1000), 2), line_width=3)
ps1 = Span(location=None, dimension='height', line_color='red', line_dash='dashed', line_width=3)
ps2 = Span(location=None, dimension='height', line_color='red', line_dash='dashed', line_width=3)
hspan = Span(location=300, dimension='width', line_color='red', line_dash='dashed', line_width=3)
phase_diag.add_layout(ps1)
phase_diag.add_layout(ps2)
phase_diag.add_layout(hspan)
fig = layouts.Column(p, phase_diag)
show(fig, notebook_handle=True)
@interact(T=(0,2000, 10), Hmix=(-10000,20000, 200), ua=(-10000, 0, 100), ub=(-10000, 0, 100))
def update(T=300, Hmix=0, ua=-2000, ub=-1000):
# we have to do a little wrangling to keep the labels nice.
# first alias the args with prefix underscores
_T, _Hmix, _ua, _ub = T, Hmix, ua, ub
# redefine as symbols for the substitution and update the curve plot
ua, ub, Hmix, T = symbols('ua ub Hmix T')
r.data_source.data['y'] = lambdify(xb, G.subs({T: _T, Hmix: _Hmix, ua: _ua, ub:_ub}), 'numpy')(xb_plot)
# find the roots of the second derivative and plot those as spans
roots = solve(G.subs({T: _T, Hmix: _Hmix, ua: _ua, ub:_ub}).diff(xb).diff(xb), xb)
if roots:
# calculate the Spans
if all([root.is_real for root in roots]):
s1.location = float(roots[0])
s2.location = float(roots[1])
ps1.location = float(roots[0])
ps2.location = float(roots[1])
else:
s1.location = None
s2.location = None
ps1.location = None
ps2.location = None
else:
s1.location = None
s2.location = None
ps1.location = None
ps2.location = None
# calculate the miscibility gap, leaving T free.
hspan.location = _T
misc_roots = solve(G.subs({Hmix: _Hmix, ua: _ua, ub:_ub}).diff(xb).diff(xb), xb)
# calculate the miscibility gap
misc_xs = np.ravel(lambdify(T, misc_roots)(np.linspace(0,2000,1000)))
xt.data_source.data['x'] = misc_xs
push_notebook()
"""
Explanation: Next we will create a plotting domain for $x_\textrm{B}$ and create a plot with widgets that allow the temperature, mixing enthalpy, and chemical potentials of the pure elements, $\mu_\textrm{A}$ and $\mu_\textrm{B}$, to be changed.
There are two plots below. The first is the free energy curve for the phase that is controlled by the temperature, mixing enthalpy and chemical potential sliders.
The bottom plot is the phase diagram for the phase. It plots the miscibility gap, if there is one. If there is no miscibility gap, the plot will be blank. The horizontal line indicates the temperature of the free energy curve.
Note: the root finding is currently broken for negative mixing enthalpies and the phase diagram will plot faulty lines
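For the ideal-mixing regular solution used above, the spinodal can also be obtained in closed form, which sidesteps the complex roots that break the plot for negative mixing enthalpies. The helper below is a sketch under that model, not a fix wired into the widget:

```python
import numpy as np
from scipy.constants import R  # gas constant, J/(mol K)

def spinodal_compositions(T, Hmix):
    """Roots of d2G/dx2 = -2*Hmix + R*T/(x*(1-x)) = 0 for the regular
    solution above. Real roots (a miscibility gap) exist only when
    Hmix > 2*R*T; otherwise return None."""
    if Hmix <= 0:
        return None  # no gap for ideal or negative mixing enthalpy
    disc = 1.0 - 2.0 * R * T / Hmix
    if disc < 0:
        return None  # temperature above the gap for this Hmix
    half = np.sqrt(disc) / 2.0
    return 0.5 - half, 0.5 + half
```

Masking out the `None` cases before plotting would avoid the faulty lines noted above.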
End of explanation
"""
|
cstrelioff/ARM-ipynb | Chapter3/chptr3.3-R.ipynb | mit | %%R
# I had to import foreign to get access to read.dta
library("foreign")
kidiq <- read.dta("../../ARM_Data/child.iq/kidiq.dta")
# I won't attach kidiq -- I generally don't attach, to avoid confusion
#attach(kidiq)
"""
Explanation: 3.3 Interactions
Read the data
Data are in the child.iq directory of the ARM_Data download-- you might have
to change the path I use below to reflect the path on your computer.
End of explanation
"""
%%R
library("arm")
"""
Explanation: Load the arm library-- see the Chapter 3.1 notebook if you need help.
End of explanation
"""
%%R
fit <- lm(kidiq$kid_score ~ kidiq$mom_hs + kidiq$mom_iq + kidiq$mom_hs:kidiq$mom_iq)
display(fit)
"""
Explanation: Regression-- interactions, Pg 34
End of explanation
"""
%%R
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
pch=20, xaxt="n", yaxt="n", type="n")
curve(coef(fit)[1] + coef(fit)[2] + (coef(fit)[3] + coef(fit)[4])*x,
add=TRUE, col="gray")
curve(coef(fit)[1] + coef(fit)[3]*x, add=TRUE)
points(kidiq$mom_iq[kidiq$mom_hs==0], kidiq$kid_score[kidiq$mom_hs==0],
pch=20)
points(kidiq$mom_iq[kidiq$mom_hs==1], kidiq$kid_score[kidiq$mom_hs==1],
col="gray", pch=20)
axis(1, c(80,100,120,140))
axis(2, c(20,60,100,140))
"""
Explanation: Figure 3.4 (a), Pg 35
End of explanation
"""
%%R
plot(kidiq$mom_iq, kidiq$kid_score,
xlab="Mother IQ score", ylab="Child test score",
pch=20, type="n", xlim=c(0,150), ylim=c(0,150))
curve(coef(fit)[1] + coef(fit)[2] + (coef(fit)[3] + coef(fit)[4])*x,
add=TRUE, col="gray")
curve(coef(fit)[1] + coef(fit)[3]*x, add=TRUE)
points(kidiq$mom_iq[kidiq$mom_hs==0], kidiq$kid_score[kidiq$mom_hs==0],
pch=20)
points(kidiq$mom_iq[kidiq$mom_hs==1], kidiq$kid_score[kidiq$mom_hs==1],
col="gray", pch=20)
"""
Explanation: Figure 3.4 (b), Pg 35
End of explanation
"""
|
Uberi/zen-and-the-art-of-telemetry | Moon Phase Correlation Analysis.ipynb | mit | import ujson as json
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import plotly.plotly as py
from moztelemetry import get_pings, get_pings_properties, get_one_ping_per_client
from moztelemetry.histogram import Histogram
import datetime as dt
%pylab inline
"""
Explanation: Moon Phase Correlation Analysis
End of explanation
"""
def approximate_moon_visibility(current_date):
days_per_synodic_month = 29.530588853 # change this if the moon gets towed away
days_since_known_new_moon = (current_date - dt.date(2015, 7, 16)).days
phase_fraction = (days_since_known_new_moon % days_per_synodic_month) / days_per_synodic_month
return (1 - phase_fraction if phase_fraction > 0.5 else phase_fraction) * 2
def date_string_to_date(date_string):
return dt.datetime.strptime(date_string, "%Y%m%d").date()
"""
Explanation: This Wikipedia article has a nice description of how to calculate the current phase of the moon. In code, that looks like this:
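A self-contained sanity check of that arithmetic (mirroring the helper above rather than importing it): visibility should be 0 at the reference new moon and close to 1 roughly 15 days later, about half a synodic month:

```python
import datetime as dt

SYNODIC = 29.530588853            # days per synodic month
KNOWN_NEW_MOON = dt.date(2015, 7, 16)

def visibility(d):
    """Same arithmetic as approximate_moon_visibility above."""
    frac = ((d - KNOWN_NEW_MOON).days % SYNODIC) / SYNODIC
    return (1 - frac if frac > 0.5 else frac) * 2

print(visibility(KNOWN_NEW_MOON))                           # 0.0
print(visibility(KNOWN_NEW_MOON + dt.timedelta(days=15)))   # near full moon
```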
End of explanation
"""
pings = get_pings(sc, app="Firefox", channel="nightly", submission_date=("20150705", "20150805"), fraction=0.1, schema="v4")
"""
Explanation: Let's randomly sample 10% of pings for nightly submissions made from 2015-07-05 to 2015-08-05:
End of explanation
"""
subset = get_pings_properties(pings, ["clientId", "meta/submissionDate", "payload/simpleMeasurements/firstPaint"])
subset = get_one_ping_per_client(subset)
cached = subset.cache()
"""
Explanation: Extract the startup time metrics with their submission date and make sure we only consider one submission per user:
End of explanation
"""
pairs = cached.map(lambda p: (approximate_moon_visibility(date_string_to_date(p["meta/submissionDate"])), p["payload/simpleMeasurements/firstPaint"]))
pairs = np.asarray(pairs.filter(lambda p: p[1] != None and p[1] < 100000000).collect())
"""
Explanation: Obtain an array of pairs, each containing the moon visibility and the startup time:
End of explanation
"""
plt.figure(figsize=(15, 7))
plt.scatter(pairs.T[0], pairs.T[1])
plt.xlabel("Moon visibility ratio")
plt.ylabel("Startup time (ms)")
plt.show()
"""
Explanation: Let's see what this data looks like:
End of explanation
"""
np.corrcoef(pairs.T)[0, 1]
"""
Explanation: The correlation coefficient is now easy to calculate:
End of explanation
"""
|
rishuatgithub/MLPy | nlp/3. Word Vectors + PCA + Cosine Similarity.ipynb | apache-2.0 | ## load the word embeddings from the google news vectors. Load it once.
#embeddings = KeyedVectors.load_word2vec_format('../../Data/GoogleNews-vectors-negative300.bin', binary=True)
## Building a word embeddings for the small subset that is required in here
f = open('../../Data/word_vectors/capitals.txt', 'r').read()
set_words = set(nltk.word_tokenize(f))
select_words = words = ['king', 'queen', 'oil', 'gas', 'happy', 'sad', 'city', 'town',
'village', 'country', 'continent', 'petroleum', 'joyful']
for w in select_words:
set_words.add(w)
def get_word_embeddings(embeddings):
'''
Get the word embeddings
'''
word_embeddings = {}
for word in embeddings.vocab:
if word in set_words:
word_embeddings[word] = embeddings[word]
return word_embeddings
word_embeddings = get_word_embeddings(embeddings)
print(len(word_embeddings))
pickle.dump(word_embeddings, open( "word_embeddings_subset.p", "wb" ) )
"""
Explanation: Loading the Google News embedding
End of explanation
"""
word_embeddings = pickle.load(open('word_embeddings_subset.p','rb'))
len(word_embeddings)
word_embeddings['Spain'].size ## The size of the word embeddings
"""
Explanation: Load Word embeddings from pickle file
End of explanation
"""
def cosine_similarity(A,B):
'''
Returns the cosine similarity between vectors A and B
'''
d = np.dot(A,B)
norm_a = np.sqrt(np.dot(A,A))
norm_b = np.sqrt(np.dot(B,B))
cos = d / (norm_a * norm_b)
return cos
king = word_embeddings['king']
queen = word_embeddings['queen']
cosine_similarity(king,queen) ## between 0 and 1 is similar
"""
Explanation: Predict relationships between words
The cosine similarity function is:
$$\cos (\theta)=\frac{\mathbf{A} \cdot \mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n} A_{i} B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}\tag{1}$$
$A$ and $B$ represent the word vectors and $A_i$ or $B_i$ represent element $i$ of that vector. Note that if $A$ and $B$ are identical, you will get $\cos(\theta) = 1$.
Otherwise, if they are total opposites, meaning $A = -B$, then you would get $\cos(\theta) = -1$.
If you get $\cos(\theta) = 0$, that means that they are orthogonal (or perpendicular).
Numbers between 0 and 1 indicate a similarity score.
Numbers between -1 and 0 indicate a dissimilarity score.
End of explanation
"""
def euclidean_distance(A,B):
'''
Calculate the euclidean distance between two vectors
'''
d = np.linalg.norm(A - B)
return d
king = word_embeddings['king']
queen = word_embeddings['queen']
euclidean_distance(king,queen) ## somewhat similar
"""
Explanation: You will now implement a function that computes the similarity between two vectors using the Euclidean distance. Euclidean distance is defined as:
$$ \begin{aligned} d(\mathbf{A}, \mathbf{B})=d(\mathbf{B}, \mathbf{A}) =\sqrt{\left(A_{1}-B_{1}\right)^{2}+\left(A_{2}-B_{2}\right)^{2}+\cdots+\left(A_{n}-B_{n}\right)^{2}} \ =\sqrt{\sum_{i=1}^{n}\left(A_{i}-B_{i}\right)^{2}} \end{aligned}$$
$n$ is the number of elements in the vector
$A$ and $B$ are the corresponding word vectors.
The more similar the words, the more likely the Euclidean distance will be close to 0.
End of explanation
"""
def get_country(city1, country1, city2, embeddings):
'''
Find the most likely country (country2) for a given set of inputs
'''
city1_embed = embeddings[city1]
country1_embed = embeddings[country1]
city2_embed = embeddings[city2]
# get embedding of country 2 (it's a combination of the embeddings of country 1, city 1 and city 2)
# Remember: King - Man + Woman = Queen
vec = country1_embed - city1_embed + city2_embed
similarity = -1
country = ''
group = set((city1, country1, city2))
## iterate through all words in the embedding
for word in embeddings.keys():
if word not in group:
word_embedd = embeddings[word]
cos_similarity = cosine_similarity(vec, word_embedd) ## find cos similarity
if cos_similarity > similarity:
similarity = cos_similarity
country = (word, similarity)
return country
get_country('Athens', 'Greece', 'Cairo', word_embeddings)
get_country('London', 'England', 'Moscow', word_embeddings)
"""
Explanation: Finding the capital of the country
End of explanation
"""
## load a sample country data set
data = pd.read_csv('../../Data/word_vectors/capitals.txt', sep=' ')
data.columns = ['city1', 'country1', 'city2', 'country2']
data.head()
def get_accuracy(data, embeddings):
'''
get the overall accuracy of the word embedding model
'''
correct = 0
for i, row in data.iterrows():
city1 = row['city1']
country1 = row['country1']
city2 = row['city2']
country2 = row['country2']
predict_country, predict_similarity = get_country(city1, country1, city2, embeddings)
if predict_country == country2:
correct += 1
total_data = len(data)
accuracy = correct / total_data
return accuracy
print(f'Model Accuracy : {get_accuracy(data, word_embeddings)*100:10.2f}')
"""
Explanation: Model Accuracy
End of explanation
"""
def compute_pca(X, n_components=2):
'''
Compute the PCA
Input:
X: of dimension (m,n) where each row corresponds to a word vector
n_components: Number of components you want to keep.
Output:
X_reduced: data transformed in 2 dims/columns + regenerated original data
'''
# mean center the data
X_demeaned = X - np.mean(X,axis=0)
print('X_demeaned.shape: ',X_demeaned.shape)
# calculate the covariance matrix
covariance_matrix = np.cov(X_demeaned, rowvar=False)
# calculate eigenvectors & eigenvalues of the covariance matrix
eigen_vals, eigen_vecs = np.linalg.eigh(covariance_matrix, UPLO='L')
# sort eigenvalue in increasing order (get the indices from the sort)
idx_sorted = np.argsort(eigen_vals)
# reverse the order so that it's from highest to lowest.
idx_sorted_decreasing = idx_sorted[::-1]
# sort the eigen values by idx_sorted_decreasing
eigen_vals_sorted = eigen_vals[idx_sorted_decreasing]
# sort eigenvectors using the idx_sorted_decreasing indices
eigen_vecs_sorted = eigen_vecs[:,idx_sorted_decreasing]
# select the first n eigenvectors (n is desired dimension
# of rescaled data array, or dims_rescaled_data)
eigen_vecs_subset = eigen_vecs_sorted[:,0:n_components]
X_reduced = np.dot(eigen_vecs_subset.transpose(),X_demeaned.transpose()).transpose()
return X_reduced
np.random.seed(1)
X = np.random.rand(3, 10)
print(X)
X_reduced = compute_pca(X, n_components=2)
print("Your original matrix was " + str(X.shape) + " and it became:")
print(X_reduced)
def get_vectors(embeddings, words):
"""
Input:
embeddings: a word
fr_embeddings:
words: a list of words
Output:
X: a matrix where the rows are the embeddings corresponding to the rows on the list
"""
m = len(words)
X = np.zeros((1, 300))
for word in words:
english = word
eng_emb = embeddings[english]
X = np.row_stack((X, eng_emb))
X = X[1:,:]
return X
words = ['oil', 'gas', 'happy', 'sad', 'city', 'town',
'village', 'country', 'continent', 'petroleum', 'joyful']
# given a list of words and the embeddings, it returns a matrix with all the embeddings
X = get_vectors(word_embeddings, words)
print('You have 11 words each of 300 dimensions thus X.shape is:', X.shape)
# We have done the plotting for you. Just run this cell.
result = compute_pca(X, 2)
plt.scatter(result[:, 0], result[:, 1])
for i, word in enumerate(words):
plt.annotate(word, xy=(result[i, 0] - 0.05, result[i, 1] + 0.1))
plt.show()
"""
Explanation: PCA
Now you will explore the distance between word vectors after reducing their dimension. The technique we will employ is known as principal component analysis (PCA). As we saw, we are working in a 300-dimensional space in this case. Although the computations are manageable in that space, it is impossible to visualize results in such high-dimensional spaces.
You can think of PCA as a method that projects our vectors into a space of reduced dimension, while keeping the maximum information about the original vectors in their reduced counterparts. In this case, by maximum information we mean that the Euclidean distance between the original vectors and their projected siblings is minimal. Hence vectors that were originally close in the embeddings dictionary will produce lower-dimensional vectors that are still close to each other.
You will see that when you map out the words, similar words will be clustered next to each other. For example, the words 'sad', 'happy', 'joyful' all describe emotion and are supposed to be near each other when plotted. The words: 'oil', 'gas', and 'petroleum' all describe natural resources. Words like 'city', 'village', 'town' could be seen as synonyms and describe a similar thing.
Before plotting the words, you need to first be able to reduce each word vector with PCA into 2 dimensions and then plot it. The steps to compute PCA are as follows:
- Mean normalize the data
- Compute the covariance matrix of your data ($\Sigma$).
- Compute the eigenvectors and the eigenvalues of your covariance matrix
- Multiply the first K eigenvectors by your normalized data. The transformation should look something as follows:
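Before fixing K (here n_components=2), it can help to check how much variance each principal component keeps. The small numpy helper below is sketched for that purpose and is not part of the assignment:

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance carried by each principal component,
    computed from the eigenvalues of the covariance matrix, sorted
    from largest to smallest."""
    Xd = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xd, rowvar=False))[::-1]
    return eigvals / eigvals.sum()
```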
End of explanation
"""
|
ganprad/rentorbuy | rentorbuy.ipynb | mit | import quandl
quandl.ApiConfig.api_key = '############'
"""
Explanation: Import Data:
Explore the data.
Pick a starting point and create visualizations that might help understand the data better.
Come back and explore other parts of the data and create more visualizations and models.
Quandl is a great place to start exploring datasets and has the Zillow Research Datasets (and many other datasets) that can be merged to create a customized dataset that might provide solutions to a specific problem.
It also has an easy-to-use API. I started with the Zillow data because it was the first real estate dataset on the Quandl site and it contains a lot of metrics that look interesting.
End of explanation
"""
zillow_codes = pd.read_csv('input/ZILLOW-datasets-codes.csv',header=None)
zillow_codes.columns = ['codes','description']
"""
Explanation: I downloaded the Zillow codes dataset: https://www.quandl.com/data/ZILLOW-Zillow-Real-Estate-Research/usage/export
This was useful while exploring area-specific codes and descriptions in the Zillow Research Dataset, which contains 1,318,489 datasets. One can use regular expressions, among other tools, during EDA.
End of explanation
"""
def cleanup_desc(df):
'''Function cleans up description column of Zillow codes dataframe.'''
df.description.values[:] = (df.loc[:,'description']).str.split(':').apply(lambda x: x[1].split('- San Francisco, CA')).apply(lambda x:x[0])
return df
def get_df(df,col='code',exp='/M'):
'''
Function takes in the zillow_codes dataframe and ouputs
a dataframe filtered by specified column and expression:
Inputs:
col: 'code' or 'description'
exp: string Reference: https://blog.quandl.com/api-for-housing-data
Ouputs:
pd.DataFrame
'''
indices = [i for i,val in enumerate(df[col].str.findall(exp)) if val != []]
print('Number of data points: {}'.format(len(indices)))
return df.iloc[indices,:]
def print_random_row(df):
randint = np.random.randint(df.shape[0])
print(df.codes.iloc[randint])
print(df.description.iloc[randint])
print_random_row(zillow_codes)
#Zip with Regex:
zip_df = get_df(zillow_codes,col='codes',exp=r'/Z94[0-9][0-9][0-9]_')
print_random_row(zip_df)
#Metro Code: '/M'
metro_df = get_df(zillow_codes,col='codes',exp='/M12')
print_random_row(metro_df)
"""
Explanation: API Reference:
https://blog.quandl.com/api-for-housing-data
End of explanation
"""
#Getting neighborhood level information: '/N'
#Getting metro level information: '/M'
neighborhoods = get_df(zillow_codes,col='codes',exp='/N')
zips = get_df(zillow_codes,col='codes',exp='/Z')
zips_chicago = get_df(zips,col='description',exp='Chicago, IL')
# neighborhoods_sfo = get_df(neighborhoods,col='description',exp=' San Francisco, CA')
neighborhoods_chicago = get_df(neighborhoods,col='description',exp='Chicago, IL')
print_random_row(neighborhoods_chicago)
#mspf = Median Sale Price per Square Foot
# mspf_neighborhoods_sfo = get_df(neighborhoods_sfo,col='codes',exp='_MSPFAH')
#prr = Price to rent ratio
# prr_neighborhoods_sfo = get_df(neighborhoods_sfo,col='codes',exp='_PRRAH')
#phi = Percent of Homes increasing in values - all homes
phiv_neighborhoods_chicago = get_df(neighborhoods_chicago,col='codes',exp='_PHIVAH')
phiv_zips_chicago = get_df(zips_chicago,col='codes',exp='_PHIVAH')
print_random_row(phiv_zips_chicago)
#Cleaning up descriptions:
neighborhood_names = phiv_neighborhoods_chicago.description.apply(lambda x: x.replace('Zillow Home Value Index (Neighborhood): Percent Of Homes Increasing In Values - All Homes - ',''))
zips_names = phiv_zips_chicago.description.apply(lambda x:x.replace('Zillow Home Value Index (Zip): Percent Of Homes Increasing In Values - All Homes - ',''))
zips_names[:1]
neighborhood_names[:1]
def get_quandl_data(df,names,filter_val=246):
quandl_get = [quandl.get(code) for i,code in enumerate(df['codes'])]
#Cleaned up DF and DF columns
cleaned_up = pd.concat([val for i,val in enumerate(quandl_get) if val.shape[0]>=filter_val],axis=1)
cleaned_names = [names.iloc[i] for i,val in enumerate(quandl_get) if val.shape[0]>=filter_val]
cleaned_up.columns = cleaned_names
#Some time series have fewer than 246 data points, ignoring these points for the moment.
#Saving the indices and time series in a separate anomaly dict with {name:ts}
anomaly_dict = {names.iloc[i]:val for i,val in enumerate(quandl_get) if val.shape[0]<filter_val}
return quandl_get,anomaly_dict,cleaned_up
# = get_quandl_data(phiv_neighborhoods_chicago)
phiv_quandl_get_list,phiv_anomaly_dict,phiv_chicago_neighborhoods_df = get_quandl_data(phiv_neighborhoods_chicago,neighborhood_names)
phiv_chicago_neighborhoods_df.sample(2)
phiv_quandl_get_list,phiv_anomaly_dict,phiv_chicago_zips_df = get_quandl_data(phiv_zips_chicago,zips_names)
phiv_chicago_zips_df.shape
phiv_chicago_zips_df.sample(10)
phiv_chicago_neighborhoods_df['Logan Square, Chicago, IL'].plot()
phiv_chicago_neighborhoods_df.to_csv('input/phiv_chicago_neighborhoods_df.csv')
phiv_chicago_zips_df.to_csv('input/phiv_zips_df.csv')
# phiv_chicago_neighborhoods = pd.read_csv('input/phiv_chicago_neighborhoods_df.csv')
phiv_chicago_fil_neigh_df = pd.read_csv('input/phiv_chicago_fil_neigh_df.csv')
phiv_chicago_fil_neigh_df.set_index('Date',inplace=True)
# phiv_chicago_neighborhoods_df.shape
gc.collect()
"""
Explanation: Percent of homes increasing in value:
Percent of homes increasing in value is a metric that may be useful while making the decision to buy or rent.
The code "/Z" is used to get data by zip code.
The description field is used to filter the data down to Chicago zip codes.
We then make API calls to Quandl to get the percent of homes increasing in value (all homes) by neighborhood in Chicago (http://www.realestatedecoded.com/zillow-percent-homes-increasing-value/).
End of explanation
"""
from fbprophet import Prophet
data = phiv_chicago_zips_df['60647, Chicago, IL']
m = Prophet(mcmc_samples=200,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4,seasonality_prior_scale=1)
data = phiv_chicago_neighborhoods_df['Logan Square, Chicago, IL']
m = Prophet(mcmc_samples=200,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4)
# data = np.log(data)
data = pd.DataFrame(data).reset_index()
data.columns=['ds','y']
# data = data[data['ds'].dt.year>2009]
data.sample(10)
# m.fit(data)
params = dict(mcmc_samples=200,interval_width=0.98,weekly_seasonality=False,changepoint_prior_scale=0.5)
def prophet_forecast(data,params,periods=4,freq='BMS'):
m = Prophet(**params)
data = pd.DataFrame(data).reset_index()
data.columns=['ds','y']
# data = data[data['ds'].dt.year>2008]
# print(data.sample(10))
m.fit(data)
    future = m.make_future_dataframe(periods=periods, freq=freq)  # freq='M' for month-end
# print(type(future))
forecast = m.predict(future)
return m,forecast
data = phiv_chicago_zips_df['60645, Chicago, IL']
m,forecast = prophet_forecast(data,params)
# forecast = m.predict()
m.plot(forecast)
# data
"""
Explanation: Using Prophet for time series forecasting:
"Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers." -
https://facebookincubator.github.io/prophet/
End of explanation
"""
def area_chart_create(fcst,cols,trend_name='PHIC(%)',title='60645'):
#Process data:
fcst = fcst[cols]
# fcst.loc[:,'ymin']=fcst[cols[2]]+fcst[cols[3]]#+fcst['trend']
# fcst.loc[:,'ymax']=fcst[cols[2]]+fcst[cols[4]]#+fcst['trend']
chart = alt.Chart(fcst).mark_area().encode(
x = alt.X(fcst.columns[0]+':T',title=title,
axis=alt.Axis(
ticks=20,
axisWidth=0.0,
format='%Y',
labelAngle=0.0,
),
scale=alt.Scale(
nice='month',
),
timeUnit='yearmonth',
),
y= alt.Y(fcst.columns[3]+':Q',title=trend_name),
y2=fcst.columns[4]+':Q')
return chart.configure_cell(height=200.0,width=700.0,)
# cols = ['ds','trend']+['yearly','yearly_lower','yearly_upper']
cols = ['ds','trend']+['yhat','yhat_lower','yhat_upper']
yhat_uncertainity = area_chart_create(forecast,cols=cols)
yhat_uncertainity
def trend_chart_create(fcst,trend_name='PHIC(%)trend'):
chart = alt.Chart(fcst).mark_line().encode(
color= alt.Color(value='#000'),
x = alt.X(fcst.columns[0]+':T',title='Logan Sq',
axis=alt.Axis(ticks=10,
axisWidth=0.0,
format='%Y',
labelAngle=0.0,
),
scale=alt.Scale(
nice='month',
),
timeUnit='yearmonth',
),
y=alt.Y(fcst.columns[2]+':Q',title=trend_name)
)
return chart.configure_cell(height=200.0,width=700.0,)
trend = trend_chart_create(forecast)
trend
"""
Explanation: Creating a chart using Altair:
This provides a pythonic interface to Vega-lite and makes it easy to create plots: https://altair-viz.github.io/
End of explanation
"""
layers = [yhat_uncertainity,trend]
lchart = alt.LayeredChart(forecast,layers = layers)
cols = ['ds','trend']+['yhat','yhat_lower','yhat_upper']
def create_unc_chart(fcst,cols=cols,tsname='% of Homes increasing in value',title='Logan Sq'):
'''
    Create a chart showing the trend and uncertainty in the forecasts.
'''
yhat_uncertainity = area_chart_create(fcst,cols=cols,trend_name=tsname,title=title)
trend = trend_chart_create(fcst,trend_name=tsname)
layers = [yhat_uncertainity,trend]
unc_chart = alt.LayeredChart(fcst,layers=layers)
return unc_chart
unc_chart = create_unc_chart(forecast,title='60645')
unc_chart
"""
Explanation: Layered Vega-lite chart created using Altair:
End of explanation
"""
from geopy.geocoders import Nominatim
geolocator = Nominatim()
#Example:
location = geolocator.geocode("60647")
location
"""
Explanation: Geopy:
Geopy is great for geocoding and geolocation:
It provides a geocoder wrapper class for the OpenStreetMap Nominatim service.
It is convenient and can be used for getting latitude and longitude information from addresses.
https://github.com/geopy/geopy
End of explanation
"""
from time import sleep
def get_lat_lon(location):
lat = location.latitude
lon = location.longitude
return lat,lon
def get_locations_list(address_list,geolocator,wait_time=np.arange(10,20,5)):
'''
Function returns the geocoded locations of addresses in address_list.
Input:
address_list : Python list
Output:
locations: Python list containing geocoded location objects.
'''
locations = []
    for addr in address_list:
        sleep(5)  # throttle requests to the geocoding service
        loc = geolocator.geocode(addr)
        lat, lon = get_lat_lon(loc)
        locations.append((addr, lat, lon))
        sleep(1)
return locations
zip_list = phiv_chicago_zips_df.columns.tolist()
zip_locations= get_locations_list(zip_list,geolocator)
zip_locations[:2]
zips_lat_lon = pd.DataFrame(zip_locations)
zips_lat_lon.columns=['zip','lat','lon']
zips_lat_lon.sample(2)
"""
Explanation: Getting the geocoder locations from addresses:
End of explanation
"""
import folium
from folium import plugins
params = dict(mcmc_samples=20,interval_width=0.95,weekly_seasonality=False,changepoint_prior_scale=4)
map_name='CHI_zips_cluster_phiv_forecast.html'
map_osm = folium.Map(location=[41.8755546,-87.6244212],zoom_start=10)
marker_cluster = plugins.MarkerCluster().add_to(map_osm)
for name,row in zips_lat_lon[:].iterrows():
address = row['zip']
if pd.isnull(address):
continue
data = phiv_chicago_zips_df[address]
m,forecast = prophet_forecast(data,params)
unc_chart = create_unc_chart(forecast,title=address)
unc_chart = unc_chart.to_json()
popchart = folium.VegaLite(unc_chart)
popup = folium.Popup(max_width=800).add_child(popchart)
lat = row['lat']
lon = row['lon']
folium.Marker(location=(lat,lon),popup=popup).add_to(marker_cluster)
map_osm.save(map_name)
"""
Explanation: Folium plots:
Using OpenStreetMap (https://www.openstreetmap.org) to create a map with popup forecasts on zip-code markers.
Folium makes it easy to plot data on interactive maps.
It provides an interface to the Leaflet.js library.
Open the saved .html file in a browser to view the interactive map
End of explanation
"""
# Source notebook: griffinfoster/fundamentals_of_interferometry, 1_Radio_Science/1_9_a_brief_introduction_to_interferometry.ipynb (license: GPL-2.0)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.8 Astronomical radio sources
Next: 1.10 The Limits of Single Dish Astronomy
Section status: <span style="background-color:green"> </span>
Import standard modules:
End of explanation
"""
from IPython.display import display
from ipywidgets import interact
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
def double_slit (p0=[0],a0=[1],baseline=1,d1=5,d2=5,wavelength=.1,maxint=None):
"""Renders a toy dual-slit experiment.
'p0' is a list or array of source positions (drawn along the vertical axis)
'a0' is an array of source intensities
'baseline' is the distance between the slits
'd1' and 'd2' are distances between source and plate and plate and screen
'wavelength' is wavelength
    'maxint' is the maximum intensity scale used to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of double_slit() into the same intensity scale, i.e. for comparison.
"""
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([]) and plt.yticks([])
plt.axhline(0, ls=':')
baseline /= 2.
## draw representation of slits
plt.arrow(0, 1,0, baseline-1, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0,-1,0, 1-baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, -baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
## draw representation of lightpath from slits to centre of screen
plt.arrow(0, baseline,d2,-baseline, length_includes_head=True)
plt.arrow(0,-baseline,d2, baseline, length_includes_head=True)
## draw representation of sinewave from the central position
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/2
plt.plot(xw,yw,'b')
## 'xs' is a vector of x cordinates on the screen
## and we accumulate the interference pattern for each source into 'pattern'
xs = np.arange(-1, 1, .01)
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for p,a in np.broadcast(p0,a0):
plt.plot(-d1, p, marker='o', ms=10, mfc='red', mew=0)
total_intensity += a
if p == p0[0] or p == p0[-1]:
plt.arrow(-d1, p, d1, baseline-p, length_includes_head=True)
plt.arrow(-d1, p, d1,-baseline-p, length_includes_head=True)
        # compute the two pathlengths
path1 = np.sqrt(d1**2 + (p-baseline)**2) + np.sqrt(d2**2 + (xs-baseline)**2)
path2 = np.sqrt(d1**2 + (p+baseline)**2) + np.sqrt(d2**2 + (xs+baseline)**2)
diff = path1 - path2
        # accumulate the interference pattern from this source
pattern = pattern + a*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
# show pattern for one source at 0
double_slit(p0=[0])
"""
Explanation: 1.9 A brief introduction to interferometry and its history
1.9.1 The double-slit experiment
The basics of interferometry date back to Thomas Young's double-slit experiment ➞ of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like nature of light, the waves passing through the two slits interfere, resulting in an interference pattern, or fringe, projected onto a screen behind the slits:
<img src="figures/514px-Doubleslit.svg.png" width="50%"/>
Figure 1.9.1: Schematic diagram of Young's double-slit experiment. Credit: Unknown.
The position on the screen $P$ determines the phase difference between the two arriving wavefronts. Waves arriving in phase interfere constructively and produce bright strips in the interference pattern. Waves arriving out of phase interfere destructively and result in dark strips in the pattern.
In this section we'll construct a toy model of a dual-slit experiment. Note that this model is not really physically accurate; it is literally just a "toy" to help us build some intuition for what's going on. A proper description of interfering electromagnetic waves will follow later.
Firstly, a monochromatic electromagnetic wave of wavelength $\lambda$ can be described at each point in time and space by a complex quantity, i.e. one having an amplitude and a phase, $A\mathrm{e}^{\imath\phi}$. For simplicity, let us assume a constant amplitude $A$ but allow the phase to vary as a function of time and position.
Now if the same wave travels along two paths of different lengths and recombines at point $P$, the resulting electric field is a sum:
$E=E_1+E_2 = A\mathrm{e}^{\imath\phi}+A\mathrm{e}^{\imath(\phi-\phi_0)},$
where the phase delay $\phi_0$ corresponds to the pathlength difference $\tau_0$:
$\phi_0 = 2\pi\tau_0/\lambda.$
What is actually "measured" on the screen, the brightness, is, physically, a time-averaged electric field intensity $EE^$, where the $^$ represents complex conjugation (this is exactly what our eyes, or a photographic plate, or a detector in the camera perceive as "brightness"). We can work this out as
$
EE^ = (E_1+E_2)(E_1+E_2)^ = E_1 E_1^ + E_2 E_2^ + E_1 E_2^ + E_2 E_1^ = A^2 + A^2
+ A^2 \mathrm{e}^{\imath\phi_0}
+ A^2 \mathrm{e}^{-\imath\phi_0} =
2A^2 + 2A^2 \cos{\phi_0}.
$
Note how phase itself has dropped out, and the only thing that's left is the phase delay $\phi_0$. The first part of the sum is constant, while the second part, the interfering term, varies with phase difference $\phi_0$, which in turn depends on position on the screen $P$. It is easy to see that the resulting intensity $EE^*$ is a purely real quantity that varies from 0 to $4A^2$. This is exactly what produces the alternating bright and dark stripes on the screen.
1.9.2 A toy double-slit simulator
Let us write a short Python function to (very simplistically) simulate a double-slit experiment. Note, understanding the code presented is not a requirement to understand the experiment. Those not interested in the code implementation should feel free to look only at the results.
End of explanation
"""
interact(lambda baseline,wavelength:double_slit(p0=[0],baseline=baseline,wavelength=wavelength),
baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
"""
Explanation: This function draws a double-slit setup, with a light source at position $p$ (in fact the function can render multiple sources, but we'll only use it for one source for the moment). The dotted blue line shows the optical axis ($p=0$). The sine wave (schematically) shows the wavelength. (Note that the units here are arbitrary, since it is only geometry relative to wavelength that determines the results). The black lines show the path of the light waves through the slits and onto the screen at the right. The strip on the right schematically renders the resulting interference pattern, and the red curve shows a cross-section through the pattern.
Inside the function, we simply compute the pathlength difference along the two paths, convert it to phase delay, and render the corresponding interference pattern.
<div class=warn>
<b>Warning:</b> Once again, let us stress that this is just a "toy" rendering of an interferometer. It serves to demonstrate the basic principles, but it is not physically accurate. In particular, it does not properly model diffraction or propagation. Also, since astronomical sources are effectively infinitely distant (compared to the size of the interferometer), the incoming light rays should be parallel (or equivalently, the incoming wavefront should be planar, as in the first illustration in this chapter).
</div>
1.9.3 Playing with the baseline
First of all, note how the properties of the interference pattern vary with baseline $B$ (the distance between the slits) and wavelength $\lambda$. Use the sliders below to adjust both. Note how increasing the baseline increases the frequency of the fringe, as does reducing the wavelength.
End of explanation
"""
interact(lambda position,baseline,wavelength:double_slit(p0=[position],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
"""
Explanation: 1.9.4 From the double-slit box to an interferometer
The original double-slit experiment was conceived as a demonstration of the wave-like nature of light. The role of the light source in the experiment was simply to illuminate the slits. Let us now turn it around and ask ourselves, given a working dual-slit setup, could we use it to obtain some information about the light source? Could we use the double-slit experiment as a measurement device, i.e. an interferometer?
1.9.4.1 Measuring source position
Obviously, we could measure source intensity -- but that's not very interesting, since we can measure that by looking at the source directly. Less obviously, we could measure the source position. Observe what happens when we move the source around, and repeat this experiment for longer and shorter baselines:
End of explanation
"""
double_slit([0],baseline=1.5,wavelength=0.1)
double_slit([0.69],baseline=1.5,wavelength=0.1)
"""
Explanation: Note that long baselines are very sensitive to changes in source position, while short baselines are less sensitive. As we'll learn in Chapter 4, the spatial resolution (i.e. the smallest separation at which we can distinguish sources) of an interferometer is given by $\lambda/B$, while the spatial resolution of a conventional telescope is given by $\lambda/D$, where $D$ is the dish (or mirror) aperture. This is a fortunate fact, as in practice it is much cheaper to build long baselines than large apertures!
On the other hand, due to the periodic nature of the interference pattern, the position measurement of a long baseline is ambiguous. Consider that two sources at completely different positions produce the same interference pattern:
End of explanation
"""
double_slit([0],baseline=0.5,wavelength=0.1)
double_slit([0.69],baseline=0.5,wavelength=0.1)
"""
Explanation: On the other hand, using a shorter baseline resolves the ambiguity:
End of explanation
"""
interact(lambda position,intensity,baseline,wavelength:
double_slit(p0=[0,position],a0=[1,intensity],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),intensity=(.2,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
"""
Explanation: Modern interferometers exploit this by using an array of elements, which provides a whole range of possible baselines.
1.9.4.2 Measuring source size
Perhaps less obviously, we can use an inteferometer to measure source size. Until now we have been simulating only point-like sources. First, consider what happens when we add a second source to the experiment (fortunately, we wrote the function above to accommodate such a scenario). The interference pattern from two (independent) sources is the sum of the individual interference patterns. This seems obvious, but will be shown more formally later on. Here we add a second source, with a slider to control its position and intensity. Try to move the second source around, and observe how the superimposed interference pattern can become attenuated or even cancel out.
End of explanation
"""
double_slit(p0=[0,0.25],baseline=1,wavelength=0.1)
double_slit(p0=[0,0.25],baseline=1.5,wavelength=0.1)
"""
Explanation: So we can already use our double-slit box to infer something about the structure of the light source. Note that with two sources of equal intensity, it is possible to have the interference pattern almost cancel out on any one baseline -- but never on all baselines at once:
End of explanation
"""
interact(lambda extent,baseline,wavelength:
double_slit(p0=np.arange(-extent,extent+.01,.01),baseline=baseline,wavelength=wavelength),
extent=(0,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
"""
Explanation: Now, let us simulate an extended source, by giving the simulator an array of closely spaced point-like sources. Try playing with the extent slider. What's happening here is that the many interference patterns generated by each little part of the extended source tend to "wash out" each other, resulting in a net loss of amplitude in the pattern. Note also how each particular baseline length is sensitive to a particular range of source sizes.
End of explanation
"""
double_slit(p0=[0],baseline=1,wavelength=0.1)
double_slit(p0=np.arange(-0.2,.21,.01),baseline=1,wavelength=0.1)
"""
Explanation: We can therefore measure source size by measuring the reduction in the amplitude of the interference pattern:
End of explanation
"""
interact(lambda d1,d2,position,extent: double_slit(p0=np.arange(position-extent,position+extent+.01,.01),d1=d1,d2=d2),
d1=(1,5,.1),d2=(1,5,.1),
position=(-1,1,.01),extent=(0,1,.01)) and None
"""
Explanation: In fact historically, this was the first application of interferometry in astronomy. In a famous experiment in 1920, a Michelson interferometer installed at Mount Wilson Observatory was used to measure the diameter of the red giant star Betelgeuse.
<div class=advice>
The historical origins of the term <em><b>visibility</b></em>, which you will become intimately familiar with in the course of these lectures, actually lie in the experiment described above. Originally, "visibility" was defined as just that, i.e. a measure of the contrast between the light and dark stripes of the interference pattern.
</div>
<div class=advice>
Modern interferometers deal in terms of <em><b>complex visibilities</b></em>, i.e. complex quantities. The amplitude of a complex visibility, or <em>visibility amplitude</em>, corresponds to the intensity of the interference pattern, while the <em>visibility phase</em> corresponds to its relative phase (in our simulator, this is the phase of the fringe at the centre of the screen). This one complex number is all the information we have about the light source. Note that while our double-slit experiment shows an entire pattern, the variation in that pattern across the screen is entirely due to the geometry of the "box" (generically, this is the instrument used to make the measurement) -- the informational content, as far as the light source is concerned, is just the amplitude and the phase!
</div>
<div class=advice>
In the single-source simulations above, you can clearly see that amplitude encodes source shape (and intensity), while phase encodes source position. <b>Visibility phase measures position, amplitude measures shape and intensity.</b> This is a recurring theme in radio interferometry, one that we'll revisit again and again in subsequent lectures.
</div>
Note that a size measurement is a lot simpler than a position measurement. The phase of the fringe pattern gives us a very precise measurement of the position of the source relative to the optical axis of the instrument. To get an absolute position, however, we would need to know where the optical axis is pointing in the first place -- for practical reasons, the precision of this is a lot less. The amplitude of the fringe pattern, on the other hand, is not very sensitive to errors in the instrument pointing. It is for this reason that the first astronomical applications of interferometry dealt with size measurements.
1.9.4.3 Measuring instrument geometry
Until now, we've only been concerned with measuring source properties. Obviously, the interference pattern is also quite sensitive to instrument geometry. We can easily see this in our toy simulator, by playing with the position of the slits and the screen:
End of explanation
"""
double_slit(p0=[0], a0=[0.4], maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, maxint=2)
"""
Explanation: This simple fact has led to many other applications for interferometers, from geodetic VLBI (where continental drift is tracked by measuring extremely accurate antenna positions via radio interferometry of known radio sources), to the recent gravitational wave detection by LIGO (where the light source is a laser, and the interference pattern is used to measure minuscule distortions in space-time -- and thus in the geometry of the interferometer -- caused by gravitational waves).
1.9.5 Practical interferometers
If you were given the job of constructing an interferometer for astronomical measurements, you would quickly find that the double-slit experiment does not translate into a very practical design. The baseline needs to be quite large; a box with slits and a screen is physically unwieldy. A more viable design can be obtained by playing with the optical path.
The basic design still used in optical interferometry to this day is the Michelson stellar interferometer mentioned above. This is schematically laid out as follows:
<IMG SRC="figures/471px-Michelson_stellar_interferometer.svg.png" width="50%"/>
Figure 1.9.2: Schematic of a Michelson interferometer. Credit: Unknown.
The outer set of mirrors plays the role of slits, and provides a baseline of length $d$, while the rest of the optical path serves to bring the two wavefronts together onto a common screen. The first such interferometer, used to carry out the Betelgeuse size measurement, looked like this:
<IMG SRC="figures/Hooker_interferometer.jpg" width="50%"/>
Figure 1.9.3: 100-inch Hooker Telescope at Mount Wilson Observatory in southern California, USA. Credit: Unknown.
In modern optical interferometers using the Michelson layout, the role of the "outer" mirrors is played by optical telescopes in their own right. For example, the Very Large Telescope operated by ESO can operate as an inteferometer, combining four 8.2m and four 1.8m individual telescopes:
<IMG SRC="figures/Hard_Day's_Night_Ahead.jpg" width="100%"/>
Figure 1.9.4: The Very Large Telescope operated by ESO. Credit: European Southern Observatory.
In the radio regime, the physics allow for more straightforward designs. The first radio interferometric experiment was the sea-cliff interferometer developed in Australia during 1945-48. This used reflection off the surface of the sea to provide a "virtual" baseline, with a single antenna measuring the superimposed signal:
<IMG SRC="figures/sea_int_medium.jpg" width="50%"/>
Figure 1.9.5: Schematic of the sea-cliff single antenna interferometer developed in Australia post-World War 2. Credit: Unknown.
In a modern radio interferometer, the "slits" are replaced by radio dishes (or collections of antennas called aperture arrays) which sample and digitize the incoming wavefront. The part of the signal path between the "slits" and the "screen" is then completely replaced by electronics. The digitized signals are combined in a correlator, which computes the corresponding complex visibilities. We will study the details of this process in further lectures.
In contrast to the delicate optical path of an optical interferometer, digitized signals have the advantage of being endlessly and losslessly replicable. This has allowed us to construct entire interferometric arrays. An example is the Jansky Very Large Array (JVLA, New Mexico, US), consisting of 27 dishes:
<IMG SRC="figures/USA.NM.VeryLargeArray.02.jpg" width="50%"/>
Figure 1.9.6: Telescope elements of the Jansky Very Large Array (JVLA) in New Mexico, USA. Credit: Unknown.
The MeerKAT telescope coming online in the Karoo, South Africa, will consist of 64 dishes. This is an aerial photo showing the dish foundations being prepared:
<IMG SRC="figures/2014_core_02.jpg" width="50%"/>
Figure 1.9.7: Layout of the core of the MeerKAT array in the Northern Cape, South Africa. Credit: Unknown.
In an interferometer array, each pair of antennas forms a different baseline. With $N$ antennas, the correlator can then simultaneously measure the visibilities corresponding to $N(N-1)/2$ baselines, with each pairwise antenna combination yielding a unique baseline.
1.9.5.1 Additive vs. multiplicative interferometers
The double-slit experiment, the Michelson interferometer, and the sea-cliff interferometer are all examples of additive interferometers, where the fringe pattern is formed up by adding the two interfering signals $E_1$ and $E_2$:
$$
EE^\ast = (E_1+E_2)(E_1+E_2)^\ast = E_1 E_1^\ast + E_2 E_2^\ast + E_1 E_2^\ast + E_2 E_1^\ast
$$
As we already discussed above, the first two terms in this sum are constant (corresponding to the total intensity of the two signals), while the cross-term $E_1 E_2^\ast$ and its complex conjugate are the *interfering* terms responsible for fringe formation.
Modern radio interferometers are multiplicative. Rather than adding the signals, the antennas measure $E_1$ and $E_2$ and feed these measurements into a cross-correlator, which directly computes the $E_1 E_2^\ast$ term.
1.9.6 Aperture synthesis vs. targeted experiments
Interferometry was born as a way of conducting specific, targeted, and rather exotic experiments. The 1920 Betelgeuse size measurement is a typical example. In contrast to a classical optical telescope, which could directly obtain an image of the sky containing information on hundreds to thousands of objects, an interferometer was a very delicate apparatus for indirectly measuring a single physical quantity (the size of the star in this case). The spatial resolution of that single measurement far exceeded anything available to a conventional telescope, but in the end it was always a specific, one-off measurement. The first interferometers were not capable of directly imaging the sky at that improved resolution.
In radio interferometry, all this changed in the late 1960s with the development of the aperture synthesis technique by Sir Martin Ryle's group in Cambridge. The crux of this technique lies in combining the information from multiple baselines.
To understand this point, consider the following. As you saw from playing with the toy double-slit simulator above, for each baseline length, the interference pattern conveys a particular piece of information about the sky. For example, the following three "skies" yield exactly the same interference pattern on a particular baseline, so a single measurement would be unable to distinguish between them:
End of explanation
"""
double_slit(p0=[0], a0=[0.4], baseline=0.5, maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], baseline=0.5, maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, baseline=0.5, maxint=2)
"""
Explanation: However, as soon as we take a measurement on another baseline, the difference becomes apparent:
End of explanation
"""
def michelson (p0=[0],a0=[1],baseline=50,maxbaseline=100,extent=0,d1=9,d2=1,d3=.2,wavelength=.1,fov=5,maxint=None):
"""Renders a toy Michelson interferometer with an infinitely distant (astronomical) source
'p0' is a list or array of source positions (as angles, in degrees).
'a0' is an array of source intensities
'extent' are source extents, in degrees
'baseline' is the baseline, in lambdas
'maxbaseline' is the max baseline to which the plot is scaled
'd1' is the plotted distance between the "sky" and the interferometer arms
'd2' is the plotted distance between arms and screen, in plot units
'd3' is the plotted distance between inner mirrors, in plot units
'fov' is the notionally rendered field of view radius (in degrees)
'wavelength' is wavelength, used for scale
    'maxint' is the maximum intensity scale used to render the fringe pattern. If None, the pattern
is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
of michelson() into the same intensity scale, i.e. for comparison.
"""
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([])
# label Y axis with degrees
yt,ytlab = plt.yticks()
plt.yticks(yt,["-%g"%(float(y)*fov) for y in yt])
plt.ylabel("Angle of Arrival (degrees)")
plt.axhline(0, ls=':')
## draw representation of arms and light path
maxbaseline = max(maxbaseline,baseline)
bl2 = baseline/float(maxbaseline) # coordinate of half a baseline, in plot units
plt.plot([0,0],[-bl2,bl2], 'o', ms=10)
plt.plot([0,d2/2.,d2/2.,d2],[-bl2,-bl2,-d3/2.,0],'-k')
plt.plot([0,d2/2.,d2/2.,d2],[ bl2, bl2, d3/2.,0],'-k')
    plt.text(0, 0, r'$b=%d\lambda$' % baseline, ha='right', va='bottom', size='xx-large')
## draw representation of sinewave from the central position
if isinstance(p0,(int,float)):
p0 = [p0]
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/(2.*fov)
plt.plot(xw,yw,'b')
## 'xs' is a vector of x coordinates on the screen
xs = np.arange(-1, 1, .01)
## xsdiff is corresponding pathlength difference
xsdiff = (np.sqrt(d2**2 + (xs-d3)**2) - np.sqrt(d2**2 + (xs+d3)**2))
## and we accumulate the interference pattern for each source into 'pattern'
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for pos,ampl in np.broadcast(p0,a0):
total_intensity += ampl
pos1 = pos/float(fov)
if extent: # simulate extent by plotting 100 sources of 1/100th intensity
positions = np.arange(-1,1.01,.01)*extent/fov + pos1
else:
positions = [pos1]
# draw arrows indicating lightpath
plt.arrow(-d1, bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
plt.arrow(-d1,-bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
for p in positions:
# compute the pathlength difference between slits and position on screen
plt.plot(-d1, p, marker='o', ms=10*ampl, mfc='red', mew=0)
# add pathlength difference at slits
diff = xsdiff + (baseline*wavelength)*np.sin(p*fov*np.pi/180)
# accumulate interference pattern from this source
pattern = pattern + (float(ampl)/len(positions))*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
print("visibility (Imax-Imin)/(Imax+Imin): ",(pattern.max()-pattern.min())/(total_intensity*2))
# show pattern for one source at 0
michelson(p0=[0])
"""
Explanation: With a larger number of baselines, we can gather enough information to reconstruct an image of the sky. This is because each baseline essentially measures one Fourier component of the sky brightness distribution (Chapter 4 will explain this in more detail); and once we know the Fourier components, we can compute a Fourier transform in order to recover the sky image. The advent of sufficiently powerful computers in the late 1960s made this technique practical, and turned radio interferometers from exotic contraptions into generic imaging instruments. With a few notable exceptions, modern radio interferometry is aperture synthesis.
This concludes our introduction to radio interferometry; the rest of this course deals with aperture synthesis in detail. The remainder of this notebook consists of a few more interactive widgets that you can use to play with the toy dual-slit simulator.
Appendix: Recreating the Michelson interferometer
For completeness, let us modify the function above to make a more realistic interferometer. We'll implement two changes:
we'll put the light source infinitely far away, as an astronomical source should be
we'll change the light path to mimic the layout of a Michelson interferometer.
End of explanation
"""
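The Fourier relationship described above can be sketched in a few lines. This is an illustrative toy, assuming an idealized, noiseless 1-D sky sampled at every baseline; the pixel count and source positions are invented for this example:

```python
import numpy as np

# Toy 1-D aperture synthesis: each FFT coefficient plays the role of the
# visibility measured on one baseline; the inverse transform is the imaging step.
npix = 64
sky = np.zeros(npix)
sky[20], sky[45] = 1.0, 0.5        # two point sources
vis = np.fft.fft(sky)              # "measure" one Fourier component per baseline
recovered = np.fft.ifft(vis).real  # recover the sky image
print(np.allclose(recovered, sky))
```

In a real array the visibilities are sampled only on the baselines actually available, which is why image reconstruction is harder than a plain inverse FFT.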
# single source
interact(lambda position, intensity, baseline:
michelson(p0=[position], a0=[intensity], baseline=baseline, maxint=2),
position=(-5,5,.01),intensity=(.2,1,.01),baseline=(10,100,.01)) and None
"""
Explanation: We have modified the setup as follows. First, the source is now infinitely distant, so we define the source position in terms of the angle of arrival of the incoming wavefront (with 0 meaning on-axis, i.e. along the vertical axis). We now define the baseline in terms of wavelengths. The phase difference of the wavefront arriving at the two arms of the interferometer is completely defined in terms of the angle of arrival. The two "rays" entering the outer arms of the interferometer indicate the angle of arrival.
The rest of the optical path consists of a series of mirrors to bring the two signals together. Note that the frequency of the fringe pattern is now completely determined by the internal geometry of the instrument (i.e. the distances between the inner set of mirrors and the screen); however the relative phase of the pattern is determined by source angle. Use the sliders below to get a feel for this.
Note that we've also modified the function to print the "visibility", as originally defined by Michelson.
End of explanation
"""
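As a concrete check of the geometry, here is a small sketch using the same convention as the function above, where the baseline is given in wavelengths (the specific numbers are invented for illustration):

```python
import numpy as np

# Geometric phase difference between the two arms for a wavefront arriving at
# angle theta: delta_phi = 2*pi*(b/lambda)*sin(theta). This is why the fringe
# phase tracks the source angle, while the fringe frequency stays fixed by the
# internal mirror geometry.
baseline_lambdas = 50.0            # b/lambda, as in the 'baseline' argument
theta_deg = 1.0                    # angle of arrival
delta_phi = 2 * np.pi * baseline_lambdas * np.sin(np.radians(theta_deg))
print("phase difference: %.3f rad" % delta_phi)
```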
interact(lambda position1,position2,intensity1,intensity2,baseline:
michelson(p0=[position1,position2], a0=[intensity1,intensity2], baseline=baseline, maxint=2),
position1=(-5,5,.01), position2=(-5,5,.01), intensity1=(.2,1,.01), intensity2=(.2,1,.01),
baseline=(10,100,.01)) and None
"""
Explanation: And here's the same experiment for two sources:
End of explanation
"""
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline:
michelson(p0=[0], a0=[1], extent=extent_arcsec*arcsec, maxint=1,
baseline=baseline,fov=1*arcsec),
extent_arcsec=(0,0.1,0.001),
baseline=(1e+4,1e+7,1e+4)
) and None
"""
Explanation: A.1 The Betelgeuse size measurement
For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted from a maximum of 6m downwards. Red light has a wavelength of ~650nm; this gave them a maximum baseline of about 10 million wavelengths.
For the experiment, they started with a baseline of 1m (1.5 million wavelengths), and verified that they could see fringes from Betelgeuse with the naked eye. They then adjusted the baseline up in small increments, until at 3m the fringes disappeared. From this, they inferred the diameter of Betelgeuse to be about 0.05".
You can repeat the experiment using the sliders below. You will probably find your toy Betelgeuse to be somewhat larger than 0.05". This is because our simulator is too simplistic -- in particular, it assumes a monochromatic source of light, which makes the fringes a lot sharper.
End of explanation
"""
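As a back-of-the-envelope check of the numbers quoted above, assuming a uniform-disk source (for which the fringes first vanish at an angular diameter of about 1.22 lambda/b):

```python
import numpy as np

wavelength = 650e-9   # red light, metres
baseline = 3.0        # metres, where Michelson & Pease lost the fringes
theta_rad = 1.22 * wavelength / baseline      # uniform-disk fringe-null condition
theta_arcsec = np.degrees(theta_rad) * 3600
print("inferred diameter: %.3f arcsec" % theta_arcsec)  # close to the quoted 0.05"
```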
|
Housebeer/Natural-Gas-Model | .ipynb_checkpoints/Fitting curve-checkpoint.ipynb | mit | import numpy as np
from scipy.optimize import leastsq
import pylab as plt
import pandas as pd
N = 1000 # number of data points
t = np.linspace(0, 4*np.pi, N)
data = 3.0*np.sin(t+0.001) + 0.5 + np.random.randn(N) # create artificial data with noise
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_std*np.sin(t+guess_phase) + guess_mean
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*np.sin(t+x[1]) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_std*np.sin(t+est_phase) + est_mean
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
"""
Explanation: Fitting curve to data
Within this notebook we do some data analytics on historical data to feed some real numbers into the model. Since we assume the consumer data to be resemble a sinus, due to the fact that demand is seasonal, we will focus on fitting data to this kind of curve.
End of explanation
"""
importfile = 'CBS Statline Gas Usage.xlsx'
df = pd.read_excel(importfile, sheetname='Month', skiprows=1)
df.drop(['Onderwerpen_1', 'Onderwerpen_2', 'Perioden'], axis=1, inplace=True)
df
# transpose
df = df.transpose()
new_header = df.iloc[0]
df = df[1:]
df.rename(columns = new_header, inplace=True)
#df.drop(['nan'], axis=0, inplace=True)
df
x = range(len(df.index))
df['Via regionale netten'].plot(figsize=(18,5))
plt.xticks(x, df.index, rotation='vertical')
plt.show()
"""
Explanation: import data for our model
This is data imported from statline CBS webportal.
End of explanation
"""
#b = self.base_demand
#m = self.max_demand
#y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))
b = 603
m = 3615
N = 84 # number of data points
t = np.linspace(0, 83, N)
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Via regionale netten'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 2*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print(est_std, est_phase, est_mean)
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Elektriciteitscentrales'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print(est_std, est_phase, est_mean)
#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise
data = np.array(df['Overige verbruikers'].values, dtype=np.float64)
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)
guess_phase = 0
guess_saving = .997
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = np.power(guess_saving,t) * (guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase))))
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data
est_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))
plt.plot(data, '.')
plt.plot(data_fit, label='after fitting')
plt.plot(data_first_guess, label='first guess')
plt.legend()
plt.show()
print('max_demand: %s, phase_shift: %s, base_demand:%s' %(est_std, est_phase, est_mean))
"""
Explanation: Now let's fit the different consumer groups
End of explanation
"""
|
google/neural-tangents | notebooks/function_space_linearization.ipynb | apache-2.0 | !pip install --upgrade pip
!pip install -q tensorflow-datasets
!pip install --upgrade jax[cuda11_cudnn805] -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q git+https://www.github.com/google/neural-tangents
"""
Explanation: <a href="https://colab.research.google.com/github/google/neural-tangents/blob/main/notebooks/function_space_linearization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Import & Utils
Install JAX, Tensorflow Datasets, and Neural Tangents.
The first line specifies the version of jaxlib that we would like to import. Note that "cp36" specifies the version of Python (version 3.6) used by JAX. Make sure your colab kernel matches this version.
End of explanation
"""
from jax import jit
from jax import grad
from jax import random
import jax.numpy as np
from jax.example_libraries.stax import logsoftmax
from jax.example_libraries import optimizers
import tensorflow_datasets as tfds
import neural_tangents as nt
from neural_tangents import stax
"""
Explanation: Import libraries
End of explanation
"""
def process_data(data_chunk):
"""Flatten the images and one-hot encode the labels."""
image, label = data_chunk['image'], data_chunk['label']
samples = image.shape[0]
image = np.array(np.reshape(image, (samples, -1)), dtype=np.float32)
image = (image - np.mean(image)) / np.std(image)
label = np.eye(10)[label]
return {'image': image, 'label': label}
@optimizers.optimizer
def momentum(learning_rate, momentum=0.9):
"""A standard momentum optimizer for testing.
Different from `jax.example_libraries.optimizers.momentum` (Nesterov).
"""
learning_rate = optimizers.make_schedule(learning_rate)
def init_fn(x0):
v0 = np.zeros_like(x0)
return x0, v0
def update_fn(i, g, state):
x, velocity = state
velocity = momentum * velocity + g
x = x - learning_rate(i) * velocity
return x, velocity
def get_params(state):
x, _ = state
return x
return init_fn, update_fn, get_params
"""
Explanation: Define helper functions for processing data and defining a vanilla momentum optimizer
End of explanation
"""
dataset_size = 64
ds_train, ds_test = tfds.as_numpy(
tfds.load('mnist:3.*.*', split=['train[:%d]' % dataset_size,
'test[:%d]' % dataset_size],
batch_size=-1)
)
train = process_data(ds_train)
test = process_data(ds_test)
"""
Explanation: Function Space Linearization
Create MNIST data pipeline using TensorFlow Datasets.
End of explanation
"""
learning_rate = 1e0
training_steps = np.arange(1000)
print_every = 100.0
"""
Explanation: Setup some experiment parameters.
End of explanation
"""
init_fn, f, _ = stax.serial(
stax.Dense(512, 1., 0.05),
stax.Erf(),
stax.Dense(10, 1., 0.05))
key = random.PRNGKey(0)
_, params = init_fn(key, (-1, 784))
"""
Explanation: Create a Fully-Connected Network.
End of explanation
"""
ntk = nt.batch(nt.empirical_ntk_fn(f, vmap_axes=0),
batch_size=64, device_count=0)
g_dd = ntk(train['image'], None, params)
g_td = ntk(test['image'], train['image'], params)
"""
Explanation: Construct the NTK.
End of explanation
"""
opt_init, opt_apply, get_params = optimizers.sgd(learning_rate)
state = opt_init(params)
"""
Explanation: Now that we have the NTK and a network we can compare against a number of different dynamics. Remember to reinitialize the network and NTK if you want to try a different dynamics.
Gradient Descent, MSE Loss
Create a optimizer and initialize it.
End of explanation
"""
loss = lambda fx, y_hat: 0.5 * np.mean((fx - y_hat) ** 2)
grad_loss = jit(grad(lambda params, x, y: loss(f(params, x), y)))
"""
Explanation: Create an MSE loss and a gradient.
End of explanation
"""
predictor = nt.predict.gradient_descent_mse(g_dd, train['label'],
learning_rate=learning_rate)
fx_train = f(params, train['image'])
"""
Explanation: Create an MSE predictor and compute the function space values of the network at initialization.
End of explanation
"""
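The closed-form dynamics the predictor evaluates can be illustrated on a toy kernel. This is an assumption-laden sketch of continuous-time gradient flow on the MSE, not the `nt.predict` internals; `Theta` here is an invented positive semi-definite matrix, not the NTK computed above:

```python
import numpy as np

# For loss 0.5 * mean((f - y)**2), the linearized train-set predictions follow
#   f(t) = y + expm(-eta * Theta * t / n) @ (f(0) - y),
# where Theta is the (constant) train-train kernel. For symmetric PSD Theta the
# matrix exponential can be taken through its eigendecomposition.
n = 4
rng = np.random.RandomState(0)
A = rng.randn(n, n)
Theta = A @ A.T                              # toy positive semi-definite kernel
y = rng.randn(n)
f0 = np.zeros(n)
eta, t = 1e-2, 500.0
w, V = np.linalg.eigh(Theta)                 # eigendecomposition of Theta
expm_neg = (V * np.exp(-eta * w * t / n)) @ V.T
ft = y + expm_neg @ (f0 - y)
print(np.linalg.norm(ft - y))                # shrinks toward 0 as t grows
```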
print ('Time\tLoss\tLinear Loss')
X, Y = train['image'], train['label']
predictions = predictor(training_steps, fx_train)
for i in training_steps:
params = get_params(state)
state = opt_apply(i, grad_loss(params, X, Y), state)
if i % print_every == 0:
exact_loss = loss(f(params, X), Y)
linear_loss = loss(predictions[i], Y)
print('{}\t{:.4f}\t{:.4f}'.format(i, exact_loss, linear_loss))
"""
Explanation: Train the network.
End of explanation
"""
opt_init, opt_apply, get_params = optimizers.sgd(learning_rate)
state = opt_init(params)
"""
Explanation: Gradient Descent, Cross Entropy Loss
Create a optimizer and initialize it.
End of explanation
"""
loss = lambda fx, y_hat: -np.mean(logsoftmax(fx) * y_hat)
grad_loss = jit(grad(lambda params, x, y: loss(f(params, x), y)))
"""
Explanation: Create an Cross Entropy loss and a gradient.
End of explanation
"""
predictor = nt.predict.gradient_descent(loss, g_dd, train['label'], learning_rate=learning_rate)
fx_train = f(params, train['image'])
"""
Explanation: Create a Gradient Descent predictor and compute the function space values of the network at initialization.
End of explanation
"""
print ('Time\tLoss\tLinear Loss')
X, Y = train['image'], train['label']
predictions = predictor(training_steps, fx_train)
for i in training_steps:
params = get_params(state)
state = opt_apply(i, grad_loss(params, X, Y), state)
if i % print_every == 0:
t = i * learning_rate
exact_loss = loss(f(params, X), Y)
linear_loss = loss(predictions[i], Y)
print('{:.0f}\t{:.4f}\t{:.4f}'.format(i, exact_loss, linear_loss))
"""
Explanation: Train the network.
End of explanation
"""
mass = 0.9
opt_init, opt_apply, get_params = momentum(learning_rate, mass)
state = opt_init(params)
"""
Explanation: Momentum, Cross Entropy Loss
Create a optimizer and initialize it.
End of explanation
"""
loss = lambda fx, y_hat: -np.mean(logsoftmax(fx) * y_hat)
grad_loss = jit(grad(lambda params, x, y: loss(f(params, x), y)))
"""
Explanation: Create a Cross Entropy loss and a gradient.
End of explanation
"""
predictor = nt.predict.gradient_descent(loss,
g_dd, train['label'], learning_rate=learning_rate, momentum=mass)
fx_train = f(params, train['image'])
"""
Explanation: Create a momentum predictor and initialize it.
End of explanation
"""
print ('Time\tLoss\tLinear Loss')
X, Y = train['image'], train['label']
predictions = predictor(training_steps, fx_train)
for i in training_steps:
params = get_params(state)
state = opt_apply(i, grad_loss(params, X, Y), state)
if i % print_every == 0:
exact_loss = loss(f(params, X), Y)
linear_loss = loss(predictions[i], Y)
print('{:.0f}\t{:.4f}\t{:.4f}'.format(i, exact_loss, linear_loss))
"""
Explanation: Train the network.
End of explanation
"""
|
krismolendyke/den | notebooks/Authorization.ipynb | mit | import os
DEN_CLIENT_ID = os.environ["DEN_CLIENT_ID"]
DEN_CLIENT_SECRET = os.environ["DEN_CLIENT_SECRET"]
"""
Explanation: Authorization
Following the Nest authorization documentation.
Setup
Get the values of Client ID and Client secret from the clients page and set them in the environment before running this IPython Notebook. The environment variable names should be DEN_CLIENT_ID and DEN_CLIENT_SECRET, respectively.
End of explanation
"""
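For reference, the variables can be set in the shell before launching the notebook. The values below are placeholders, not real credentials:

```shell
# Placeholder values; substitute the Client ID and Client secret
# from the Nest developer clients page.
export DEN_CLIENT_ID="your-client-id"
export DEN_CLIENT_SECRET="your-client-secret"
```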
import uuid
def _get_state():
"""Get a unique id string."""
return str(uuid.uuid1())
_get_state()
"""
Explanation: Get Authorization URL
Available per client. For Den it is:
https://home.nest.com/login/oauth2?client_id=54033edb-04e0-4fc7-8306-5ed6cb7d7b1d&state=STATE
Where STATE should be a value that is:
Used to protect against cross-site request forgery attacks
Format: any unguessable string
We strongly recommend that you use a new, unique value for each call
Create STATE helper
End of explanation
"""
API_PROTOCOL = "https"
API_LOCATION = "home.nest.com"
from urlparse import SplitResult, urlunsplit  # Python 2; on Python 3, use urllib.parse
from urllib import urlencode  # Python 2; on Python 3, urlencode lives in urllib.parse
def _get_url(path, query, netloc=API_LOCATION):
"""Get a URL for the given path and query."""
split = SplitResult(scheme=API_PROTOCOL, netloc=netloc, path=path, query=query, fragment="")
return urlunsplit(split)
def get_auth_url(client_id=DEN_CLIENT_ID):
"""Get an authorization URL for the given client id."""
path = "login/oauth2"
query = urlencode({"client_id": client_id, "state": _get_state()})
return _get_url(path, query)
get_auth_url()
"""
Explanation: Create Authorization URL Helper
End of explanation
"""
!open "{get_auth_url()}"
"""
Explanation: Get Authorization Code
get_auth_url() returns a URL that should be visited in the browser to get an authorization code.
For Den, this authorization code will be a PIN.
End of explanation
"""
pin = ""
"""
Explanation: Cut and paste that PIN here:
End of explanation
"""
def get_access_token_url(client_id=DEN_CLIENT_ID, client_secret=DEN_CLIENT_SECRET, code=pin):
"""Get an access token URL for the given client id."""
path = "oauth2/access_token"
query = urlencode({"client_id": client_id,
"client_secret": client_secret,
"code": code,
"grant_type": "authorization_code"})
return _get_url(path, query, "api." + API_LOCATION)
get_access_token_url()
"""
Explanation: Get Access Token
Use the pin code to request an access token. https://developer.nest.com/documentation/cloud/authorization-reference/
End of explanation
"""
import requests
r = requests.post(get_access_token_url())
print r.status_code
assert r.status_code == requests.codes.OK
r.json()
"""
Explanation: POST to that URL to get a response containing an access token:
End of explanation
"""
access_token = r.json()["access_token"]
access_token
"""
Explanation: It seems like the access token can only be created once and has a 10 year expiration time.
End of explanation
"""
|
Aniruddha-Tapas/Applied-Machine-Learning | Clustering/Seeds Clustering.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn import preprocessing
import matplotlib.pyplot as plt
cols = ['Area', 'Perimeter','Compactness','Kernel_Length','Kernel_Width','Assymetry_Coefficient','Kernel_Groove_Length', 'Class']
# read .csv from provided dataset
csv_filename="seeds_dataset.txt"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename,delim_whitespace=True,names=cols)
df.head()
features = df.columns[:-1]
features
X = df[features]
y = df['Class']
X.head()
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
X_train.shape, y_train.shape
"""
Explanation: Seeds Clustering
In this example, we will apply unsupervised learning techniques to cluster seeds into groups based on the attributes in the dataset.
You can download the dataset from: https://archive.ics.uci.edu/ml/datasets/seeds
To construct the data, seven geometric parameters of wheat kernels were measured:
area A,
perimeter P,
compactness C = 4*pi*A/P^2,
length of kernel,
width of kernel,
asymmetry coefficient
length of kernel groove.
The 3 Classes - three different varieties of wheat
All of these parameters are real-valued continuous.
End of explanation
"""
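The compactness formula above can be sanity-checked against the first kernel in the file; the numeric values are quoted here for illustration:

```python
import numpy as np

# First row of the seeds dataset: Area = 15.26, Perimeter = 14.84,
# Compactness = 0.871. Verify C = 4*pi*A / P**2.
A, P = 15.26, 14.84
C = 4 * np.pi * A / P ** 2
print(round(C, 3))   # matches the recorded compactness of 0.871
```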
y.unique()
len(features)
"""
Explanation: <hr>
Unsupervised Learning
<hr>
Feature Transformation
The first PCA dimension is the dimension in the data with highest variance. Intuitively, it corresponds to the 'longest' vector one can find in the 6-dimensional feature space that captures the data, that is, the eigenvector with the largest eigenvalue.
ICA, as opposed to PCA, finds the subcomponents that are statistically independent.
PCA
End of explanation
"""
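The eigenvector claim above can be verified on synthetic data. This is a standalone sketch with an invented random matrix, independent of the seeds data:

```python
import numpy as np

# The first principal component equals the eigenvector of the sample
# covariance matrix with the largest eigenvalue (up to sign).
rng = np.random.RandomState(0)
X_demo = rng.randn(200, 3) @ np.array([[3.0, 0.0, 0.0],
                                       [1.0, 1.0, 0.0],
                                       [0.0, 0.0, 0.3]])
Xc = X_demo - X_demo.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
first_pc = eigvecs[:, -1]              # eigh returns eigenvalues ascending

# Compare with the top right singular vector of the centered data (what PCA uses).
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.allclose(np.abs(first_pc), np.abs(Vt[0])))
```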
# Apply PCA with the same number of dimensions as variables in the dataset
from sklearn.decomposition import PCA
pca = PCA(n_components=7) #7 components for 7 variables
pca.fit(X)
# Print the components and the amount of variance in the data contained in each dimension
print(pca.components_)
print(pca.explained_variance_ratio_)
"""
Explanation: The number of components will be equal to the number of feature variables, i.e. 7.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(list(pca.explained_variance_ratio_),'-o')
plt.title('Explained variance ratio as function of PCA components')
plt.ylabel('Explained variance ratio')
plt.xlabel('Component')
plt.show()
features
"""
Explanation: The explained variance is high for the first two dimensions, but drops significantly beginning with the third dimension. Thus, the first two components already explain 86.5% of the variation in the data.
How many dimensions to choose for the analysis really depends on the goal of the analysis. Even though PCA reduces the feature space (with all advantages that brings, such as faster computations) and makes interpreting the data easier for us by projecting them down to a lower dimension, it necessarily comes with a loss of information that may or may not be desired.
In the case at hand, assuming interpretation is the goal (creating seed clusters) and given the sharp drop of the explained variance after the second component, we would choose the first two dimensions for analysis.
End of explanation
"""
X = df[features].values
y= df['Class'].values
pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)
red_x, red_y = [], []
blue_x, blue_y = [], []
green_x, green_y = [], []
for i in range(len(reduced_X)):
if y[i] == 1:
red_x.append(reduced_X[i][0])
red_y.append(reduced_X[i][1])
elif y[i] == 2:
blue_x.append(reduced_X[i][0])
blue_y.append(reduced_X[i][1])
else:
green_x.append(reduced_X[i][0])
green_y.append(reduced_X[i][1])
plt.scatter(red_x, red_y, c='r', marker='x')
plt.scatter(blue_x, blue_y, c='b', marker='D')
plt.scatter(green_x, green_y, c='g', marker='.')
plt.show()
"""
Explanation: The first dimension seems to basically represent only the 'Area'-feature, as this feature has a strong negative projection on the first dimension. The other features have rather weak (mostly negative) projections on the first dimension. That is, the first dimension basically tells us whether the 'area'-feature value is high or low, mixed with a little bit of information from the other features.
The second dimension is mainly represented by the features 'Perimeter' and 'Compactness', in the order of decreasing importance, and has rather low correlation with the other features.
There are two main uses of this information. The first use is feature interpretation and hypothesis formation. We could form initial conjectures about the seed clusters contained in the data. One conjecture could be that the bulk of seeds can be split into clusters that differ mainly in 'Area', and clusters that differ mainly in 'Perimeter' and 'Compactness'. The second use is that, given knowledge of the PCA components, new features can be engineered for further analysis of the problem. These features could be generated by applying an exact PCA-transformation or by using some heuristic based on the feature combinations recovered in PCA.
Applying PCA to visualize high-dimensional data
End of explanation
"""
# Import clustering modules
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
# First we reduce the data to two dimensions using PCA to capture variation
pca = PCA(n_components=2)
reduced_data = pca.fit_transform(X)
print(reduced_data[:10]) # print up to 10 elements
kmeans = KMeans(n_clusters=3)
clusters = kmeans.fit(reduced_data)
print(clusters)
# Plot the decision boundary by building a mesh grid to populate a graph.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
hx = (x_max-x_min)/1000.
hy = (y_max-y_min)/1000.
xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))
# Obtain labels for each point in mesh. Use last trained model.
Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()])
# Find the centroids for KMeans or the cluster means for GMM
centroids = kmeans.cluster_centers_
print('*** K MEANS CENTROIDS ***')
print(centroids)
# TRANSFORM DATA BACK TO ORIGINAL SPACE FOR ANSWERING 7
print('*** CENTROIDS TRANSFERED TO ORIGINAL SPACE ***')
print(pca.inverse_transform(centroids))
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('Clustering on the seeds dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
"""
Explanation: Clustering
In this section we will choose either K Means clustering or Gaussian Mixture Models clustering, which implements expectation-maximization. Then we will sample elements from the clusters to understand their significance.
Choosing a Cluster Type
K Means Clustering or Gaussian Mixture Models?
Before discussing the advantages of K Means vs Gaussian Mixture models, it is helpful to observe that both methods are actually very similar. The main difference is that Gaussian Mixture models make a probabilistic assignment of points to classes depending on some distance metric, whereas K Means makes a deterministic assignment depending on some metric. Now, when the variance of the Gaussian mixtures is very small, this method becomes very similar to K Means, since the assignment probabilities to a specific cluster converge to 0 or 1 for any point in the domain. Because of the probabilistic assignment, Gaussian Mixtures (in contrast to K Means) are often characterized as soft clustering algorithms.
An advantage of Gaussian Mixture models is that, if there is some a priori uncertainty about the assignment of a point to a cluster, this uncertainty is inherently reflected in the probabilistic model (soft assignment) and assignment probabilities can be computed for any data point after the model is trained. On the other hand, if a priori the clusters assignments are expected to be deterministic, K Means has advantages. An example would be a data generating process that actually is a mixture of Gaussians. Applying a Gaussian mixture model is more natural given this data generating process. When it comes to processing speed, the EM algorithm with Gaussian mixtures is generally slightly slower than Lloyd's algorithm for K Means, since computing the normal probability (EM) is generally slower than computing the L2-norm (K Means). A disadvantage of both methods is that they can get stuck in local minima (this can be considered as the cost of solving NP-hard problems (global min for k-means) approximately).
Since there is no strong indication that the data are generated from a mixture of normals (this assessment may be different given more information about the nature of the seed data) and the goal is to "hard"-cluster them (and not assign probabilities), I decided to use the general-purpose k-means algorithm.
A decision on the number of clusters will be made by visualizing the final clustering and deciding whether k equals the number of data centers found by visual inspection. Note that many other approaches for this task could be utilized, such as silhouette analysis (see for example http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html).
Below is some starter code to help you visualize some cluster data. The visualization is based on this demo from the sklearn documentation.
End of explanation
"""
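The hard vs. soft assignment contrast discussed above can be seen directly on a toy dataset. This is a standalone sketch; `GaussianMixture` is the current scikit-learn name for the mixture model (the `GMM` class imported earlier is its deprecated predecessor), and all numbers here are invented:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X_toy = np.concatenate([rng.normal(0.0, 1.0, 100),
                        rng.normal(5.0, 1.0, 100)]).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_toy)
gm = GaussianMixture(n_components=2, random_state=0).fit(X_toy)

point = np.array([[2.5]])          # halfway between the two cluster centres
print(km.predict(point))           # hard: exactly one label
print(gm.predict_proba(point))     # soft: probabilities over both components
```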
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
random_state=0)
km.fit(X)
distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.tight_layout()
#plt.savefig('./figures/elbow.png', dpi=300)
plt.show()
"""
Explanation: <hr>
Elbow Method
Using the elbow method to find the optimal number of clusters
One of the main challenges in unsupervised learning is that we do not know the definitive answer. We don't have the ground truth class labels in our dataset that allow us to apply the techniques in order to evaluate the performance of a supervised model. Thus, in order to quantify the quality of clustering, we need to use intrinsic metrics—such as the within-cluster SSE (distortion) to compare the performance of different k-means clusterings. Conveniently, we don't need to compute the within-cluster SSE explicitly as it is already accessible via the inertia_ attribute after fitting a KMeans model.
Based on the within-cluster SSE, we can use a graphical tool, the so-called elbow method, to estimate the optimal number of clusters k for a given task. Intuitively, we can say that, if k increases, the distortion will decrease, because the samples will be closer to the centroids they are assigned to. The idea behind the elbow method is to identify the value of k at which the distortion stops decreasing rapidly (the "elbow" of the curve), which becomes clearer if we plot the distortion for different values of k:
End of explanation
"""
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
km = KMeans(n_clusters=3,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(i / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
# plt.savefig('./figures/silhouette.png', dpi=300)
plt.show()
"""
Explanation: As we can see in the following plot, the elbow is located at k = 3, which provides evidence that k = 3 is indeed a good choice for this dataset.
Quantifying the quality of clustering via silhouette plots
Another intrinsic metric to evaluate the quality of a clustering is silhouette analysis, which can also be applied to clustering algorithms other than k-means that we will discuss later in this chapter. Silhouette analysis can be used as a graphical tool to plot a measure of how tightly grouped the samples in the clusters are. To calculate the silhouette coefficient of a single sample in our dataset, we can apply the following three steps:
1. Calculate the cluster cohesion a(i) as the average distance between a sample x(i) and all other points in the same cluster.
2. Calculate the cluster separation b(i) from the next closest cluster as the average distance between the sample x(i) and all samples in the nearest cluster.
3. Calculate the silhouette s(i) as the difference between cluster cohesion and separation divided by the greater of the two, as shown below:
s(i) = (b(i) - a(i)) / max(a(i), b(i))
The silhouette coefficient is bounded in the range -1 to 1. Based on the preceding formula, we can see that the silhouette coefficient is 0 if the cluster separation and cohesion are equal (b(i) = a(i)). Furthermore, we get close to an ideal silhouette coefficient of 1 if b(i) >> a(i), since b(i) quantifies how dissimilar a sample is to other clusters, while a(i) tells us how similar it is to the other samples in its own cluster.
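As a sanity check, the three steps can be carried out by hand for a single sample and compared against scikit-learn; a small sketch (not from the original notebook) on a toy two-cluster dataset:

```python
# Compute a(i), b(i) and s(i) for sample i = 0 by hand, then verify
# against scikit-learn's silhouette_samples. With only two clusters,
# the "nearest other cluster" in step 2 is unambiguous.
import numpy as np
from sklearn.metrics import silhouette_samples, pairwise_distances

X_toy = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])

D = pairwise_distances(X_toy)
i = 0
same = (labels == labels[i]) & (np.arange(len(X_toy)) != i)
a_i = D[i, same].mean()                  # cohesion: mean distance within own cluster
b_i = D[i, labels != labels[i]].mean()   # separation: mean distance to the other cluster
s_i = (b_i - a_i) / max(a_i, b_i)

print(s_i, silhouette_samples(X_toy, labels)[0])  # the two values agree
```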
The silhouette coefficient is available as silhouette_samples from scikit-learn's metric module, and optionally silhouette_score can be imported, which calculates the average silhouette coefficient across all samples and is equivalent to numpy.mean(silhouette_samples(...)). By executing the following code, we will now create a plot of the silhouette coefficients for a k-means clustering with k=3:
End of explanation
"""
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
"""
Explanation: Our clustering with 3 centroids is good.
Applying agglomerative clustering via scikit-learn
In this section, we saw how to perform agglomerative hierarchical clustering
using SciPy. However, there is also an AgglomerativeClustering implementation in scikit-learn, which allows us to choose the number of clusters that we want to return. This is useful if we want to prune the hierarchical cluster tree. By setting
the n_cluster parameter to 3, we will now cluster the samples into three groups using the complete linkage approach based on the Euclidean distance metric:
End of explanation
"""
from sklearn.model_selection import train_test_split
X = df[features]
y = df['Class']
X_train, X_test, y_train, y_test = train_test_split(X, y ,test_size=0.25, random_state=42)
"""
Explanation:
End of explanation
"""
from sklearn import cluster
clf = cluster.KMeans(init='k-means++', n_clusters=3, random_state=5)
clf.fit(X_train)
print( clf.labels_.shape)
print (clf.labels_)
# Predict clusters on testing data
y_pred = clf.predict(X_test)
from sklearn import metrics
print ("Adjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print ("Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) )
print ("Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)))
print ("Confusion matrix")
print (metrics.confusion_matrix(y_test, y_pred))
"""
Explanation: <hr>
Clustering using K Means:
End of explanation
"""
# Affinity propagation
aff = cluster.AffinityPropagation()
aff.fit(X_train)
print (aff.cluster_centers_indices_.shape)
y_pred = aff.predict(X_test)
from sklearn import metrics
print ("Adjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print ("Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) )
print ("Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)))
print ("Confusion matrix")
print (metrics.confusion_matrix(y_test, y_pred))
"""
Explanation: <hr>
Affinity Propagation
End of explanation
"""
ms = cluster.MeanShift()
ms.fit(X_train)
print( ms.cluster_centers_)
y_pred = ms.predict(X_test)
from sklearn import metrics
print ("Adjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print ("Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) )
print ("Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)))
print ("Confusion matrix")
print (metrics.confusion_matrix(y_test, y_pred))
"""
Explanation: <hr>
MeanShift
End of explanation
"""
from sklearn import mixture
# Define a heldout dataset to estimate covariance type
X_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split(
X_train, y_train,test_size=0.25, random_state=42)
for covariance_type in ['spherical','tied','diag','full']:
gm=mixture.GMM(n_components=3, covariance_type=covariance_type, random_state=42, n_init=5)
gm.fit(X_train_heldout)
y_pred=gm.predict(X_test_heldout)
print ("Adjusted rand score for covariance={}:{:.2}".format(covariance_type,
metrics.adjusted_rand_score(y_test_heldout, y_pred)))
gm = mixture.GMM(n_components=3, covariance_type='tied', random_state=42)
gm.fit(X_train)
# Print train clustering and confusion matrix
y_pred = gm.predict(X_test)
print ("Adjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print ("Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred)) )
print ("Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred)))
print ("Confusion matrix")
print (metrics.confusion_matrix(y_test, y_pred))
pl=plt
from sklearn import decomposition
# Reduce the training data to 2 dimensions with PCA for visualization
pca = decomposition.PCA(n_components=2).fit(X_train)
reduced_X_train = pca.transform(X_train)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .01  # step size of the mesh over [x_min, x_max] x [y_min, y_max]
# Plot the decision boundary. For that, we will assign a color to each point in the mesh.
x_min, x_max = reduced_X_train[:, 0].min() - 1, reduced_X_train[:, 0].max() + 1
y_min, y_max = reduced_X_train[:, 1].min() - 1, reduced_X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
gm.fit(reduced_X_train)
#print np.c_[xx.ravel(),yy.ravel()]
Z = gm.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
pl.figure(1)
pl.clf()
pl.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=pl.cm.Paired,
aspect='auto', origin='lower')
#print reduced_X_train.shape
pl.plot(reduced_X_train[:, 0], reduced_X_train[:, 1], 'k.', markersize=2)
# Plot the centroids as white dots
centroids = gm.means_
pl.scatter(centroids[:, 0], centroids[:, 1],
marker='.', s=169, linewidths=3,
color='w', zorder=10)
pl.title('Gaussian mixture model on the seeds dataset (PCA-reduced data)\n'
'Means are marked with white dots')
pl.xlim(x_min, x_max)
pl.ylim(y_min, y_max)
pl.xticks(())
pl.yticks(())
pl.show()
"""
Explanation: <hr>
Mixture of Gaussian Models
End of explanation
"""
|
AllenDowney/ThinkStats2 | code/chap07ex.ipynb | gpl-3.0 | from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import thinkstats2
import thinkplot
"""
Explanation: Chapter 7
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/brfss.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/CDBRFS08.ASC.gz")
import brfss
df = brfss.ReadBrfss(nrows=None)
"""
Explanation: Scatter plots
I'll start with the data from the BRFSS again.
End of explanation
"""
def SampleRows(df, nrows, replace=False):
indices = np.random.choice(df.index, nrows, replace=replace)
sample = df.loc[indices]
return sample
"""
Explanation: The following function selects a random subset of a DataFrame.
End of explanation
"""
sample = SampleRows(df, 5000)
heights, weights = sample.htm3, sample.wtkg2
"""
Explanation: I'll extract the height in cm and the weight in kg of the respondents in the sample.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=1)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: Here's a simple scatter plot with alpha=1, so each data point is fully saturated.
End of explanation
"""
def Jitter(values, jitter=0.5):
n = len(values)
return np.random.normal(0, jitter, n) + values
"""
Explanation: The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.
NOTE: The version of Jitter in the book uses noise with a uniform distribution. Here I am using a normal distribution. The normal distribution does a better job of blurring artifacts, but the uniform distribution might be more true to the data.
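For comparison, a sketch of the uniform-noise version the note refers to (an assumption about the book's implementation based on the text above):

```python
import numpy as np

def JitterUniform(values, jitter=0.5):
    """Add uniform noise in [-jitter, jitter] to each value."""
    n = len(values)
    return np.random.uniform(-jitter, jitter, n) + values
```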
End of explanation
"""
heights = Jitter(heights, 1.3)
weights = Jitter(weights, 0.5)
"""
Explanation: Heights were probably rounded off to the nearest inch, which is about 2.6 cm, so I'll add random values from -1.3 to 1.3.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=1.0)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: And here's what the jittered data look like.
End of explanation
"""
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: The columns are gone, but now we have a different problem: saturation. Where there are many overlapping points, the plot is not as dark as it should be, which means that the outliers are darker than they should be, which gives the impression that the data are more scattered than they actually are.
This is a surprisingly common problem, even in papers published in peer-reviewed journals.
We can usually solve the saturation problem by adjusting alpha and the size of the markers, s.
End of explanation
"""
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.
An alternative to a scatter plot is something like a HexBin plot, which breaks the plane into bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.
End of explanation
"""
cleaned = df.dropna(subset=['htm3', 'wtkg2'])
"""
Explanation: In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
Exercise: So far we have been working with a subset of only 5000 respondents. When we include the entire dataset, making an effective scatter plot can be tricky. As an exercise, experiment with Scatter and HexBin to make a plot that represents the entire dataset well.
Plotting percentiles
Sometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.
First I'll drop any rows that are missing height or weight.
End of explanation
"""
bins = np.arange(135, 210, 5)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
"""
Explanation: Then I'll divide the dataset into groups by height.
End of explanation
"""
for i, group in groups:
print(i, len(group))
"""
Explanation: Here are the number of respondents in each group:
End of explanation
"""
mean_heights = [group.htm3.mean() for i, group in groups]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
"""
Explanation: Now we can compute the CDF of weight within each group.
End of explanation
"""
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
"""
Explanation: And then extract the 25th, 50th, and 75th percentile from each group.
End of explanation
"""
def Cov(xs, ys, meanx=None, meany=None):
xs = np.asarray(xs)
ys = np.asarray(ys)
if meanx is None:
meanx = np.mean(xs)
if meany is None:
meany = np.mean(ys)
cov = np.dot(xs-meanx, ys-meany) / len(xs)
return cov
"""
Explanation: Exercise: Yet another option is to divide the dataset into groups and then plot the CDF for each group. As an exercise, divide the dataset into a smaller number of groups and plot the CDF for each group.
Correlation
The following function computes the covariance of two variables using NumPy's dot function.
End of explanation
"""
heights, weights = cleaned.htm3, cleaned.wtkg2
Cov(heights, weights)
"""
Explanation: And here's an example:
End of explanation
"""
def Corr(xs, ys):
xs = np.asarray(xs)
ys = np.asarray(ys)
meanx, varx = thinkstats2.MeanVar(xs)
meany, vary = thinkstats2.MeanVar(ys)
corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)
return corr
"""
Explanation: Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
End of explanation
"""
Corr(heights, weights)
"""
Explanation: The correlation of height and weight is about 0.51, which is a moderately strong correlation.
End of explanation
"""
np.corrcoef(heights, weights)
"""
Explanation: NumPy provides a function that computes correlations, too:
End of explanation
"""
import pandas as pd
def SpearmanCorr(xs, ys):
xranks = pd.Series(xs).rank()
yranks = pd.Series(ys).rank()
return Corr(xranks, yranks)
"""
Explanation: The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
Pearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.
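The robustness claim is easy to demonstrate on synthetic data; in this sketch (not part of the original notebook), a single extreme outlier inflates Pearson's correlation while Spearman's barely moves:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
xs = rng.normal(size=100)
ys = rng.normal(size=100)    # independent, so both correlations should be near 0
xs[0], ys[0] = 100.0, 100.0  # inject a single extreme outlier

pearson = stats.pearsonr(xs, ys)[0]
spearman = stats.spearmanr(xs, ys)[0]
print(pearson, spearman)  # Pearson is pulled close to 1; Spearman stays small
```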
Spearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. Here's a function that computes Spearman's correlation:
End of explanation
"""
SpearmanCorr(heights, weights)
"""
Explanation: For heights and weights, Spearman's correlation is a little higher:
End of explanation
"""
def SpearmanCorr(xs, ys):
xs = pd.Series(xs)
ys = pd.Series(ys)
return xs.corr(ys, method='spearman')
"""
Explanation: A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.
End of explanation
"""
SpearmanCorr(heights, weights)
"""
Explanation: The result is the same as for the function we wrote.
End of explanation
"""
Corr(cleaned.htm3, np.log(cleaned.wtkg2))
"""
Explanation: An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.
End of explanation
"""
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
"""
Explanation: Exercises
Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables?
End of explanation
"""
|
RaoUmer/lightning-example-notebooks | plots/circle.ipynb | mit | from lightning import Lightning
from numpy import random, asarray
"""
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Circle plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
"""
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
"""
Explanation: Connect to server
End of explanation
"""
connections = random.rand(50,50)
connections[connections<0.98] = 0
lgn.circle(connections)
"""
Explanation: <hr> Just connections
Circle plots show connections between nodes in a graph as lines between points around a circle. Let's make one for a set of random, sparse connections.
End of explanation
"""
connections = random.rand(50,50)
connections[connections<0.98] = 0
lgn.circle(connections, labels=['node ' + str(x) for x in range(50)])
"""
Explanation: We can add a text label to each node. Here we'll just add a numeric identifier. Clicking on a node label highlights its connections -- try it!
End of explanation
"""
connections = random.rand(50,50)
connections[connections<0.98] = 0
group = (random.rand(50) * 3).astype('int')
lgn.circle(connections, labels=['group ' + str(x) for x in group], group=group)
"""
Explanation: <hr> Adding groups
Circle plots are useful for visualizing hierarchical relationships. You can specify multiple levels of grouping using a nested list. Let's start with one.
End of explanation
"""
connections = random.rand(50,50)
connections[connections<0.98] = 0
group1 = (random.rand(50) * 3).astype('int')
group2 = (random.rand(50) * 4).astype('int')
lgn.circle(connections, labels=['group ' + str(x) for x in group2], group=[group1, group2])
"""
Explanation: <hr> Nested groups
And now try adding a second level. We'll label by the second group to make clear what's going on. If you click on any of the outermost arcs, it will highlight connections to/from that group.
End of explanation
"""
|
mdeff/ntds_2016 | project/reports/airbnb_booking/Main Machine Learning.ipynb | mit | import pandas as pd
import numpy as np
import time
import machine_learning_helper as machine_learning_helper
import metrics_helper as metrics_helper
import sklearn.neighbors, sklearn.linear_model, sklearn.ensemble, sklearn.naive_bayes
from sklearn.model_selection import KFold, train_test_split, ShuffleSplit
from sklearn import model_selection
from sklearn import ensemble
from xgboost.sklearn import XGBClassifier
import scipy as sp
import xgboost as xgb
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import learning_curve
from sklearn import linear_model, datasets
import os
"""
Explanation: Machine Learning
This notebook builds the training dataset from the multiple CSV files of the Kaggle challenge. It applies four different prediction models and evaluates the importance of the 156 engineered features and the learning curves of the models.
End of explanation
"""
dataFolder = 'cleaned_data'
resultFolder = 'results'
filenameAdress_train_user = 'cleaned_train_user.csv'
filenameAdress_test_user = 'cleaned_test_user.csv'
filenameAdress_time_mean_user_id = 'time_mean_user_id.csv'
filenameAdress_time_total_user_id = 'time_total_user_id.csv'
filenameAdress_total_action_user_id = 'total_action_user_id.csv'
df_train_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_train_user))
df_test_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_test_user))
df_time_mean_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_mean_user_id))
df_time_total_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_total_user_id))
df_total_action_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_total_action_user_id))
"""
Explanation: Read .csv files
End of explanation
"""
df_total_action_user_id.columns = ['id','action']
df_sessions = pd.merge(df_time_mean_user_id, df_time_total_user_id, on='id', how='outer')
df_sessions = pd.merge(df_sessions, df_total_action_user_id, on='id', how='outer')
df_sessions.columns = ['id','time_mean_user','time_total_user','action']
"""
Explanation: Construct sessions data frame
This dataframe contains the features that were extracted from the sessions file. For more information about these features, see the Main preprocessing notebook.
End of explanation
"""
y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
"""
Explanation: 1. From data frame to matrix : Construct y_train
The destination countries, currently strings, are encoded in int format: each country will be assigned to an int.
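The helper itself is not shown in this notebook; the following is a plausible minimal sketch of what buildTargetMat might do, using scikit-learn's LabelEncoder (the column name country_destination is an assumption):

```python
from sklearn.preprocessing import LabelEncoder

def build_target(df_train):
    # Encode destination-country strings as integers and keep the
    # fitted encoder so predictions can be decoded back later.
    label_enc = LabelEncoder()
    y = label_enc.fit_transform(df_train['country_destination'].values)
    return y, label_enc
```

label_enc.inverse_transform is used later in the notebook to turn predicted class ids back into country codes before writing the submission files.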
End of explanation
"""
X_train, X_test = machine_learning_helper.buildFeatsMat(df_train_users, df_test_users, df_sessions)
#X_train = X_train[200000:201000]
#y_labels = y_labels[200000:201000]
"""
Explanation: 2. From data frame to matrix : Construct X_train & X_test
Feature engineering.
Added features :
- time_mean_user
- time_total_user
- total_action_user
- Date created account
- Date first active
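buildFeatsMat is a project helper whose code is not shown here; below is a hedged sketch of the kind of encoding it presumably performs, merging the session features onto the users and one-hot encoding categorical columns (the categorical column names are assumptions):

```python
import pandas as pd

def build_features(df_users, df_sessions):
    # Left-join session aggregates onto the user table, then one-hot
    # encode whatever categorical columns are present.
    df = pd.merge(df_users, df_sessions, on='id', how='left')
    categorical = ['gender', 'signup_method', 'language', 'first_browser']
    present = [c for c in categorical if c in df.columns]
    return pd.get_dummies(df.drop('id', axis=1), columns=present)
```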
End of explanation
"""
X_train_sparse = sp.sparse.csr_matrix(X_train.values)
"""
Explanation: For Memory purpose, the train matrix is formatted in sparse
End of explanation
"""
cv = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
"""
Explanation: 3. Cross validation setup
5 folds cross validation, shuffled.
End of explanation
"""
number_trees = [125, 300, 500, 600 ]
max_depth = [5, 8, 12, 16, 20]
rf_score_trees = []
rf_score_depth = []
rf_param_trees = []
rf_param_depth = []
#Loop for hyperparameter number_trees
for number_trees_idx, number_trees_value in enumerate(number_trees):
print('number_trees_idx: ',number_trees_idx+1,'/',len(number_trees),', value: ', number_trees_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=number_trees_value, max_depth=14)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_trees.append(scores.mean())
rf_param_trees.append(number_trees_value)
print('Mean NDCG for this number_trees = ', scores.mean())
# best number of trees from above
print()
print('best NDCG:')
print(np.max(rf_score_trees))
print('best parameter num_trees:')
idx_best = np.argmax(rf_score_trees)
best_num_trees_RF = rf_param_trees[idx_best]
print(best_num_trees_RF)
#Loop for hyperparameter max_depth
for max_depth_idx, max_depth_value in enumerate(max_depth):
print('max_depth_idx: ',max_depth_idx+1,'/',len(max_depth),', value: ', max_depth_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=max_depth_value)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_depth.append(scores.mean())
rf_param_depth.append(max_depth_value)
print('Mean NDCG for this max_depth = ', scores.mean())
# best max_depth from above
print()
print('best NDCG:')
print(np.max(rf_score_depth))
print('best parameter max_depth:')
idx_best = np.argmax(rf_score_depth)
best_max_depth_RF = rf_param_depth[idx_best]
print(best_max_depth_RF)
"""
Explanation: 4. Machine Learning
Several models are tried, and their parameters optimized through cross-validation. The code is parallelized to run on 12 processors at the same time. The metric used is the NDCG. Because of the computational complexity, the for loops for the cross-validations were not nested.
Models that were tried:
- Random Forest
- eXtreme Gradient Boosting (XGB)
- 2-layer stack model:
  - Logistic regression
  - eXtreme Gradient Boosting (XGB)
- Voting classifier:
  - Random Forest
  - eXtreme Gradient Boosting (XGB)
  - 2-layer stack model
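The scoring function metrics_helper.ndcg_scorer is a project helper; the following is a minimal sketch of NDCG@5 for this task, where exactly one destination is correct per user so the ideal DCG is 1 (an assumption about what the helper computes, matching the competition's metric definition):

```python
import numpy as np

def ndcg_at_5(y_true, y_proba):
    # Rank the classes by predicted probability; a user scores
    # 1 / log2(rank + 2) if the true class is in the top 5, else 0.
    top5 = np.argsort(y_proba, axis=1)[:, ::-1][:, :5]
    scores = []
    for true_label, ranking in zip(y_true, top5):
        hits = np.where(ranking == true_label)[0]
        scores.append(1.0 / np.log2(hits[0] + 2) if hits.size else 0.0)
    return float(np.mean(scores))
```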
Model 1 : RandomForest
End of explanation
"""
best_num_trees_RF = 600
best_max_depth_RF = 16
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=best_max_depth_RF)
rand_forest_model.fit(X_train_sparse,y_labels)
y_pred1 = rand_forest_model.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
# Save to csv
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_RF.csv'),index=False)
"""
Explanation: Random forest 600 trees, 16 depth
- NDCG = 0.821472784776
- Kaggle Private Leader Board NDCG = 0.86686
Predict Countries and convert to CSV for submission for the RF model
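get5likelycountries is a project helper not shown here; a hedged sketch of what it likely does, taking the five most probable classes per user and repeating the user id for each:

```python
import numpy as np

def top5_countries(y_proba, ids):
    # For each user, keep the five classes with the highest predicted
    # probability and duplicate the user id five times to match.
    cts, ids_out = [], []
    for i, user_id in enumerate(ids):
        best5 = np.argsort(y_proba[i])[::-1][:5]
        cts.extend(best5.tolist())
        ids_out.extend([user_id] * 5)
    return cts, ids_out
```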
End of explanation
"""
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_sparse, y_labels, cv,max_depth,n_estimators,learning_rates,gamma)
"""
Explanation: Model 2 : eXtreme Gradient Boosting (XGB)
5-fold cross validation, using NDCG as the scoring metric.
Grid search to find the best parameters.
End of explanation
"""
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 75
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_XGB.csv'),index=False)
"""
Explanation: XGBoost - learning_rate = 0.1, gamma = 1, depth = 7, estimators = 75
- NDCG = 0.826134
- Kaggle Private Leader Board NDCG = 0.86967 (rank 756)
XGBoost - learning_rate = 0.1, gamma = 0.7, depth = 5, estimators = 75
- NDCG = 0.826394
- Kaggle Private Leader Board NDCG = 0.86987 (rank 698)
Predict Countries and convert to CSV for submission of the XGB model
End of explanation
"""
# Build 1st layer training matrix, text matrix, target vector
y_labels_binary, X_train_layer1, X_test_layer1 = machine_learning_helper.buildFeatsMatBinary(df_train_users, df_test_users, df_sessions)
#y_labels_binary = y_labels_binary[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
y_labels_binary = y_labels_binary.astype(np.int8)
# Build 1st layer model
# Cross validation with parameter C
C = [0.1, 1.0, 10, 100, 1000]
logistic_score_C = []
logistic_param_C = []
#Loop for hyperparameter
for C_idx, C_value in enumerate(C):
print('C_idx: ',C_idx+1,'/',len(C),', value: ', C_value)
# Logistic
model = linear_model.LogisticRegression(C = C_value)
#Scores
scores = model_selection.cross_val_score(model, X_train_layer1, y_labels_binary, cv=cv, verbose = 10, scoring='f1', n_jobs = 12)
logistic_score_C.append(scores.mean())
logistic_param_C.append(C_value)
print('Mean f1 for this C = ', scores.mean())
# best C from above
print()
print('best f1:')
print(np.max(logistic_score_C))
print('best parameter C:')
idx_best = np.argmax(logistic_score_C)
best_C_logistic = logistic_param_C[idx_best]
print(best_C_logistic)
# Build model with best parameter from cross validation
logreg_layer1 = linear_model.LogisticRegression(C = best_C_logistic)
logreg_layer1.fit(X_train_layer1, y_labels_binary)
score_training = logreg_layer1.predict(X_train_layer1)
# 1st layer model prediction
prediction_layer_1 = logreg_layer1.predict(X_test_layer1)
"""
Explanation: Model 3 : Stacking
As seen previously, the classes in this dataset are unbalanced. Indeed, half of the users didn't book. We are going to try to make good use of that information.
This model is composed of 2 layers :
- In a first layer, a logistic regression determines if a user is going to book or not. This binary classification model is trained on the training set. The prediction on the test set by this model is added to a second layer, as a meta feature.
A small mistake : For the training of the 1st layer, the features of the date_account_created and timestamp_first_active were not used.
The second layer is an XGBoost algorithm. It is trained on the new training set, which is made on the original one connected with the output of the first layer under the column 'meta_layer_1'.
<img src="https://s23.postimg.org/8g018p4a3/1111.png">
Layer 1 : Logistic regression
This logistic regression will determine whether a user booked or not. It is a binary classification problem.
End of explanation
"""
from sklearn import metrics
metrics.accuracy_score(y_labels_binary,score_training)
"""
Explanation: Training accuracy:
End of explanation
"""
# Build 2nd layer training matrix, text matrix, target vector
#df_train_users.reset_index(inplace=True,drop=True)
#y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
#y_labels = y_labels[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
X_train_layer2 = X_train_layer1
X_train_layer2['meta_layer_1'] = pd.Series(y_labels_binary).astype(np.int8)
X_test_layer2 = X_test_layer1
X_test_layer2['meta_layer_1'] = pd.Series(prediction_layer_1).astype(np.int8)
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
cv2 = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_layer2, y_labels, cv2,max_depth,n_estimators,learning_rates,gamma)
"""
Explanation: Layer 2: XGBoost
Using the previous result as a meta feature, this model will determine the 5 most likely countries to which a user will travel.
End of explanation
"""
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 50
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train_layer2,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test_layer2)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_stacking.csv'),index=False)
"""
Explanation: 2-layer stack model - learning_rate = 0.1, gamma = 0.7, depth = 5, estimators = 75
- Kaggle Private Leader Board NDCG = 0.82610
Predict countries and convert to CSV for submission of the 2-layer stack model
End of explanation
"""
# Create the sub models
estimators = []
model1 = ensemble.RandomForestClassifier(max_depth=best_max_depth_RF, n_estimators= best_num_trees_RF)
estimators.append(('random_forest', model1))
model2 = XGBClassifier(max_depth=best_num_depth_XCG,learning_rate=best_learning_rate_XCG,n_estimators= best_num_estimators_XCG,
objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
estimators.append(('xgb', model2))
model3 = XGB_model
estimators.append(('2layer', model3))
# Create Voting classifier
finalModel = ensemble.VotingClassifier(estimators,voting='soft')
# Run cross validation score
results = model_selection.cross_val_score(finalModel, X_train, y_labels, cv=cv, scoring = metrics_helper.ndcg_scorer, verbose = 10, n_jobs=12)
print("Voting Classifier Cross Validation Score found:")
print(results.mean())
"""
Explanation: 4. Voting Model
Now we are going to let the 3 models, each optimized with its best parameters, vote.
End of explanation
"""
finalModel.fit(X_train,y_labels)
y_pred1 = finalModel.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_Voting.csv'),index=False)
"""
Explanation: Voting classifier
- NDCG = TODO
- Kaggle Private Leader Board NDCG = TODO
Predict countries from Voting model and export
End of explanation
"""
model = XGBClassifier(max_depth=5, learning_rate=0.1, n_estimators=75,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma=0.7 )
model.fit(X_train,y_labels)
machine_learning_helper.plotFeaturesImportance(model,X_train)
"""
Explanation: 5. Evaluating features importance
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(model,height=0.7, ax=ax)
machine_learning_helper.plotFeaturesImportance(XGB_model,X_train_layer2)
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(XGB_model,height=0.7, ax=ax)
"""
Explanation: The figure above shows the 20 most important features according to the model's feature importance. The age feature is by far the most important one.
The figure below shows the most important features using the F score.
End of explanation
"""
|
khrapovs/metrix | notebooks/doppler_nonparametrics.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as ss
import sympy as sp
sns.set_context('notebook')
%matplotlib inline
"""
Explanation: Nonparametric estimation of the Doppler function
End of explanation
"""
x = np.linspace(.01, .99, num=1000)
doppler = lambda x : np.sqrt(x * (1 - x)) * np.sin(1.2 * np.pi / (x + .05))
plt.plot(x, doppler(x))
plt.show()
"""
Explanation: Doppler function
$$r\left(x\right)=\sqrt{x\left(1-x\right)}\sin\left(\frac{1.2\pi}{x+.05}\right),\quad x\in\left[0,1\right]$$
End of explanation
"""
from sympy.utilities.lambdify import lambdify
from IPython.display import display, Math, Latex
u = sp.Symbol('u')
sym_doppler = lambda x : (x * (1 - x))**.5 * sp.sin(1.2 * sp.pi / (x + .05))
d_doppler = sym_doppler(u).diff(u)
dd_doppler = sym_doppler(u).diff(u, 2)
display(Math(sp.latex(d_doppler)))
d_doppler = np.vectorize(lambdify(u, d_doppler))
dd_doppler = np.vectorize(lambdify(u, dd_doppler))
plt.plot(x, d_doppler(x))
plt.show()
"""
Explanation: Derivative of Doppler function
End of explanation
"""
def f_rtexp(x, lmbd=1, b=1):
return np.exp(-x / lmbd) / lmbd / (1 - np.exp(-b / lmbd))
def f_ltexp(x, lmbd=1, b=1):
return np.exp(x / lmbd) / lmbd / (np.exp(b / lmbd) - 1)
def right_trunc_exp(lmbd=1, b=1, size=1000):
X = np.sort(np.random.rand(size))
return - lmbd * np.log(1 - X * (1 - np.exp(-b / lmbd)))
def left_trunc_exp(lmbd=1, b=1, size=1000):
X = np.sort(np.random.rand(size))
return lmbd * np.log(1 - X * (1 - np.exp(b / lmbd)))
# Equivalent using SciPy:
# Y = ss.truncexpon.rvs(1, size=1000)
lmbd = .2
Y1 = right_trunc_exp(lmbd=lmbd)
Y2 = left_trunc_exp(lmbd=lmbd)
density1 = ss.gaussian_kde(Y1)
density2 = ss.gaussian_kde(Y2)
U = np.linspace(0, 1, num=1000)
"""
Explanation: Left and right truncated exponentials
Right truncated:
$$f\left(x\right)=\frac{e^{-x/\lambda}/\lambda}{1-e^{-b/\lambda}},\quad F\left(x\right)=\frac{1-e^{-x/\lambda}}{1-e^{-b/\lambda}},\quad F^{-1}\left(x\right)=-\lambda\log\left(1-x\left(1-e^{-b/\lambda}\right)\right)$$
Left truncated:
$$f\left(x\right)=\frac{e^{x/\lambda}/\lambda}{e^{b/\lambda}-1},\quad F\left(x\right)=\frac{1-e^{x/\lambda}}{1-e^{b/\lambda}},\quad F^{-1}\left(x\right)=\lambda\log\left(1-x\left(1-e^{b/\lambda}\right)\right)$$
End of explanation
"""
fig = plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.hist(Y1, density=True, bins=20, label='Histogram')
plt.plot(U, f_rtexp(U, lmbd=lmbd), lw=4, color=[0, 0, 0], label='True density')
plt.plot(U, density1(U), lw=4, color='red', label='Kernel density')
plt.legend()
plt.title('Right truncated')
plt.subplot(1, 2, 2)
plt.hist(Y2, density=True, bins=20, label='Histogram')
plt.plot(U, f_ltexp(U, lmbd=lmbd), lw=4, color=[0, 0, 0], label='True density')
plt.plot(U, density2(U), lw=4, color='red', label='Kernel density')
plt.legend()
plt.title('Left truncated')
plt.show()
"""
Explanation: Draw the densities
End of explanation
"""
def indicator(x):
return np.asfarray((np.abs(x) <= 1.) & (np.abs(x) >= 0.))
def kernel(x, ktype='Truncated'):
if ktype == 'Truncated':
return .5 * indicator(x)
if ktype == 'Epanechnikov':
return 3./4. * (1 - x**2) * indicator(x)
if ktype == 'Biweight':
return 15./16. * (1 - x**2)**2 * indicator(x)
if ktype == 'Triweight':
return 35./36. * (1 - x**2)**3 * indicator(x)
if ktype == 'Gaussian':
return 1./np.sqrt(2. * np.pi) * np.exp(- .5 * x**2)
def roughness(ktype='Truncated'):
if ktype == 'Truncated':
return 1./2.
if ktype == 'Epanechnikov':
return 3./5.
if ktype == 'Biweight':
return 5./7.
if ktype == 'Triweight':
return 350./429.
if ktype == 'Gaussian':
return np.pi**(-.5)/2.
def sigmak(ktype='Truncated'):
if ktype == 'Truncated':
return 1./3.
if ktype == 'Epanechnikov':
return 1./5.
if ktype == 'Biweight':
return 1./7.
if ktype == 'Triweight':
return 1./9.
if ktype == 'Gaussian':
return 1.
x = np.linspace(0., 2., 100)
names = ['Truncated', 'Epanechnikov', 'Biweight', 'Triweight', 'Gaussian']
for name in names:
plt.plot(x, kernel(x, ktype=name), label=name, lw=2)
plt.legend()
plt.show()
"""
Explanation: Kernels
Truncated (Uniform): $k_{0}\left(u\right)=\frac{1}{2}1\left(\left|u\right|\leq1\right)$
Epanechnikov: $k_{1}\left(u\right)=\frac{3}{4}\left(1-u^{2}\right)1\left(\left|u\right|\leq1\right)$
Biweight: $k_{2}\left(u\right)=\frac{15}{16}\left(1-u^{2}\right)^{2}1\left(\left|u\right|\leq1\right)$
Triweight: $k_{2}\left(u\right)=\frac{35}{36}\left(1-u^{2}\right)^{3}1\left(\left|u\right|\leq1\right)$
Gaussian: $k_{\phi}\left(u\right)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}u^2\right)$
End of explanation
"""
def weight(U, X, h=.1, ktype='Truncated'):
# X - N-array
# U - M-array
# XmU - M*N-array
XmU = (X - np.atleast_2d(U).T) / h
# K - M*N-array
K = kernel(XmU, ktype)
# K.sum(1) - M-array
# K.T - N*M-array
# K.T / K.sum(1) - N*M-array
return (K.T / K.sum(1)).T
"""
Explanation: Nadaraya-Watson (NW) or local constant estimator
Local weighting
For each observed data $X$ ($N$-vector) and grid $U$ ($M$-vector) this function returns $N\times M$-matrix of weights
End of explanation
"""
def NW(U, X, Y, h=.1, ktype='Truncated'):
return np.dot(weight(U, X, h, ktype), Y)
"""
Explanation: Nadaraya-Watson (NW)
$$\hat{m}\left(x\right)=\frac{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)Y_{i}}{\sum_{i=1}^{n}k\left(\frac{X_{i}-x}{h}\right)}$$
End of explanation
"""
def generate_data(N=1000, M=500, lmbd=1, trunc='left'):
if trunc == 'left':
X = left_trunc_exp(lmbd=lmbd, size=N)
if trunc == 'right':
X = right_trunc_exp(lmbd=lmbd, size=N)
e = np.random.normal(0, .1, N)
Y = doppler(X) + e
U = np.linspace(.01, .99, M)
return X, Y, U
"""
Explanation: Generate data
$$Y_{i}=m\left(X_{i}\right)+\epsilon_{i},\quad\epsilon_{i}\sim NID\left(0,\sigma=0.1\right)$$
End of explanation
"""
X, Y, U = generate_data()
# Nadaraya-Watson estimator
Yhat = NW(U, X, Y, h=.05, ktype='Truncated')
fig = plt.figure(figsize=(10, 6))
plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.plot(U, Yhat, lw=2, color='red', label='Fitted')
plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
plt.xlim([0, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
"""
Explanation: Perform estimation and plot the results
End of explanation
"""
def fx(x, lmbd=1, b=1):
return sp.exp(-x / lmbd) / lmbd / (1 - sp.exp(-b / lmbd))
dfx = fx(u).diff()
fx = np.vectorize(lambdify(u, fx(u)))
dfx = np.vectorize(lambdify(u, dfx))
def bias(U, etype='NW', h=.05, ktype='Gaussian'):
if etype == 'NW':
bias = .5 * dd_doppler(U) + d_doppler(U) * dfx(U) / fx(U)
if etype == 'LL':
bias = .5 * dd_doppler(U) * fx(U)
return bias * h**2 * sigmak(ktype)
h = .05
ktype = 'Gaussian'
fig = plt.figure(figsize=(15, 6))
X, Y, U = generate_data()
Yhat = NW(X, X, Y, h=h, ktype=ktype)
Ynobias = Yhat - bias(X, etype='NW', h=h, ktype=ktype)
plt.plot(X, doppler(X), lw=2, color='blue', label='True')
plt.plot(X, Yhat, lw=2, color='red', label='Fitted')
plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
plt.plot(X, Ynobias, lw=2, color='green', label='No Bias')
plt.xlim([0, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
"""
Explanation: Bias correction
For the bias computation we take the density of $X$ and the first two derivatives of the conditional mean $m(x)$ as known. In practice, they would have to be estimated.
End of explanation
"""
def LL(U, X, Y, h=.1, ktype='Truncated'):
# X - N-array
# U - M-array
# K - M*N-array
W = weight(U, X, h, ktype)
alpha = np.empty(U.shape[0])
beta = np.empty(U.shape[0])
for i in range(U.shape[0]):
# N*N-array
K = np.diag(W[i])
# N-array
Z1 = (X - U[i]) / h
Z0 = np.ones(Z1.shape)
# 2*N-array
Z = np.vstack([Z0, Z1]).T
# 2*2-array
A = np.dot(Z.T, np.dot(K, Z))
# 2-array
B = np.dot(Z.T, np.dot(K, Y))
# 2-array
coef = np.dot(np.linalg.inv(A), B)
alpha[i] = coef[0]
beta[i] = coef[1]
return alpha, beta
"""
Explanation: Local Linear (LL) estimator
$$\left(\begin{array}{c}
\hat{\alpha}\left(x\right)\\
\hat{\beta}\left(x\right)
\end{array}\right)=\left(\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Z_{i}\left(x\right)^{\prime}\right)^{-1}\sum_{i=1}^{n}k_{i}\left(x\right)Z_{i}\left(x\right)Y_{i}$$
$$\left(\begin{array}{c}
\hat{\alpha}\left(x\right)\\
\hat{\beta}\left(x\right)
\end{array}\right)
=\left(Z\left(x\right)^{\prime}K\left(x\right)Z\left(x\right)\right)^{-1}Z\left(x\right)^{\prime}K\left(x\right)Y$$
$K(x)$ - $N\times N$
$Z(x)$ - $N\times 2$
$Y$ - $N\times 1$
End of explanation
"""
X, Y, U = generate_data()
Yhat, dYhat = LL(U, X, Y, h=.05, ktype='Gaussian')
fig = plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.plot(U, Yhat, lw=2, color='red', label='Fitted')
plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
plt.xlim([0, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.title('Doppler function')
plt.subplot(1, 2, 2)
plt.plot(U, d_doppler(U), lw=2, color='blue', label='True')
plt.plot(U, dYhat, lw=2, color='red', label='Fitted')
plt.xlim([0, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.title('Doppler function derivative')
plt.show()
"""
Explanation: Perform estimation and plot the results
End of explanation
"""
X1, Y1, U = generate_data(lmbd=.1, trunc='left')
X2, Y2, U = generate_data(lmbd=.1, trunc='right')
ktype = 'Gaussian'
h = .05
Y1hat = NW(U, X1, Y1, h=h, ktype=ktype)
Y2hat = NW(U, X2, Y2, h=h, ktype=ktype)
fig = plt.figure(figsize=(15, 10))
plt.subplot(2, 2, 1)
plt.hist(X1, density=True, bins=20, label='Histogram')
plt.ylabel('X1')
plt.subplot(2, 2, 2)
plt.hist(X2, density=True, bins=20, label='Histogram')
plt.ylabel('X2')
plt.subplot(2, 2, 3)
plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.plot(U, Y1hat, lw=2, color='red', label='Fitted')
plt.scatter(X1, Y1, s=15, lw=.5, facecolor='none', label='Realized')
plt.xlim([0, 1])
plt.xlabel('X1')
plt.ylabel('Y1')
plt.legend()
plt.subplot(2, 2, 4)
plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.plot(U, Y2hat, lw=2, color='red', label='Fitted')
plt.scatter(X2, Y2, s=15, lw=.5, facecolor='none', label='Realized')
plt.xlim([0, 1])
plt.xlabel('X2')
plt.ylabel('Y2')
plt.legend()
plt.show()
"""
Explanation: Comparison for different DGP of X
End of explanation
"""
def error(Y, X, h, ktype):
ehat = np.empty(X.shape)
for i in range(X.shape[0]):
ehat[i] = Y[i] - NW(X[i], np.delete(X, i), np.delete(Y, i), h=h, ktype=ktype)
return np.array(ehat)
"""
Explanation: Conditional variance and confidence intervals
Leave-one-out errors
End of explanation
"""
N = 500
X, Y, U = generate_data(N=N, lmbd=.2)
h = .05
ktype = 'Epanechnikov'
Yhat = NW(U, X, Y, h=h, ktype=ktype)
ehat = error(Y, X, h, ktype)
sigma2hat = NW(U, X, ehat**2, h=.1, ktype=ktype)
fxhat = ss.gaussian_kde(X)(U)
V2hat = roughness(ktype) * sigma2hat / fxhat / N / h
shat = V2hat**.5
"""
Explanation: Estimate variance
End of explanation
"""
fig = plt.figure(figsize = (10, 10))
plt.subplot(3, 1, 1)
plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
#plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.fill_between(U, Yhat - 2*shat, Yhat + 2*shat, lw=0, color='red', alpha=.2, label='+2s')
plt.plot(U, Yhat, lw=2, color='red', label='Fitted')
plt.ylabel('Y')
plt.legend()
plt.xlim([0, 1])
ylim = plt.gca().get_ylim()
plt.title('Data')
plt.subplot(3, 1, 2)
plt.scatter(X, ehat, s=15, lw=.5, facecolor='none', label='Errors')
plt.axhline(color='black')
plt.ylim(ylim)
plt.xlim([0, 1])
plt.title('Errors')
plt.subplot(3, 1, 3)
plt.plot(U, sigma2hat**.5, lw=2, color='red', label='Estimate')
plt.plot(U, .1 * np.ones(U.shape), lw=2, color='blue', label='True')
plt.ylim([0, .4])
plt.xlim([0, 1])
plt.legend()
plt.xlabel('X')
plt.title('Conditional variance')
plt.tight_layout()
plt.show()
"""
Explanation: Plot the results
End of explanation
"""
N = 500
X, Y, U = generate_data(N=N)
ktype = 'Gaussian'
H = np.linspace(.001, .05, 100)
CV = np.array([])
for h in H:
ehat = error(Y, X, h, ktype)
CV = np.append(CV, np.mean(ehat**2))
h = H[CV.argmin()]
Yhat = NW(U, X, Y, h=h, ktype=ktype)
ehat = error(Y, X, h, ktype)
sigma2hat = NW(U, X, ehat ** 2, h=h, ktype=ktype)
fxhat = ss.gaussian_kde(X)(U)
V2hat = roughness(ktype) * sigma2hat / fxhat / N / h
shat = V2hat**.5
plt.figure(figsize=(10, 5))
plt.plot(H, CV)
plt.scatter(h, CV.min(), facecolor='none', lw=2, s=100)
plt.xlim([H.min(), H.max()])
plt.xlabel('Bandwidth, h')
plt.ylabel('cross-validation, CV')
plt.show()
"""
Explanation: Bandwidth selection
Cross-validation criterion
$$\tilde{e}_{i}\left(h\right)=Y_{i}-\tilde{m}_{-i}\left(X_{i},h\right)$$
$$CV\left(h\right)=\frac{1}{n}\sum_{i=1}^{n}\tilde{e}_{i}\left(h\right)^{2}$$
$$\hat{h}=\arg\min_{h\geq h_{l}}CV\left(h\right)$$
End of explanation
"""
plt.figure(figsize=(10, 5))
#plt.plot(U, doppler(U), lw=2, color='blue', label='True')
plt.fill_between(U, Yhat - 2*shat, Yhat + 2*shat, lw=0, color='red', alpha=.2, label='+2s')
plt.plot(U, Yhat, lw=2, color='red', label='Fitted')
plt.scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
plt.xlim([0, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()
plt.show()
"""
Explanation: Plot the (optimized) fit
End of explanation
"""
|
albahnsen/ML_RiskManagement | exercises/03-IncomePrediction.ipynb | mit | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# read the data and set the datetime as the index
import zipfile
with zipfile.ZipFile('../datasets/income.csv.zip', 'r') as z:
f = z.open('income.csv')
income = pd.read_csv(f, index_col=0)
income.head()
income.shape
"""
Explanation: Exercise 03
Estimate a regression using the Income data
Forecast of income
We'll be working with a dataset from the US Census income data (data dictionary).
Many businesses would like to personalize their offers based on a customer's income. High-income customers could, for instance, be offered premium products. As a customer's income is not always explicitly known, a predictive model can estimate it from other information.
Our goal is to create a predictive model that outputs an estimate of a person's income.
End of explanation
"""
income.plot(x='Age', y='Income', kind='scatter')
"""
Explanation: Exercise 4.1
What is the relation between the age and Income?
For a one percent increase in the Age how much the income increases?
Using sklearn estimate a linear regression and predict the income when the Age is 30 and 40 years
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mohc/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
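The overlap assumptions listed above differ in how individual layer cloud fractions combine into a total cloud cover. A small illustrative sketch for just two layers (not any particular model's implementation, which treats N layers):

```python
# Illustrative only: total cloud cover for two layers with fractions c1 and c2
# under two of the overlap assumptions listed above.
def total_cover_random(c1, c2):
    # Random overlap: layers are statistically independent.
    return 1.0 - (1.0 - c1) * (1.0 - c2)

def total_cover_maximum(c1, c2):
    # Maximum overlap: clouds are assumed vertically aligned.
    return max(c1, c2)

c1, c2 = 0.5, 0.5
print(total_cover_random(c1, c2))   # 0.75
print(total_cover_maximum(c1, c2))  # 0.5
```

Maximum-random overlap combines the two: adjacent cloudy layers overlap maximally, while layers separated by clear air overlap randomly.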
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISCCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISCCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
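Note that the value is requested in Hz, while cloud radars are usually quoted in GHz (e.g. a 94 GHz W-band spaceborne radar). A trivial conversion sketch (the helper name is hypothetical):

```python
# Convert a radar frequency quoted in GHz to the Hz value this field expects.
def ghz_to_hz(freq_ghz):
    return freq_ghz * 1e9

print(ghz_to_hz(94.0))  # 94000000000.0, e.g. a W-band (94 GHz) cloud radar
```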
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
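For orientation only: recent total solar irradiance composites put the solar constant close to 1361 W m-2 (older datasets often used ~1365-1366 W m-2), and dividing by 4 gives the global-mean top-of-atmosphere insolation, since a sphere's surface area is four times its cross-section. A quick sketch of that arithmetic (variable names are illustrative, not ES-DOC fields):

```python
# Illustrative values only (not taken from this document):
# recent TSI composites put the solar constant near 1361 W m-2.
solar_constant = 1361.0                      # W m-2, assumed value
mean_toa_insolation = solar_constant / 4.0   # sphere area = 4 x cross-section
print(mean_toa_insolation)                   # 340.25 W m-2
```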
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
EstevesDouglas/UNICAMP-FEEC-IA369Z | dev/checkpoint/2017-05-09-estevesdouglas-notebook.ipynb | gpl-3.0 | -- Campainha IoT - LHC - v1.1
-- The ESP initializes its pins, configures and connects to WiFi, opens a TCP connection,
-- and, on receiving a "Tocou" (rang) reply, goes into deep-sleep mode to save battery.
-- If no reply is received within 15 seconds, the ESP is put into deep sleep anyway.
led_pin = 3
status_led = gpio.LOW
ip_servidor = "192.168.1.10"
ip_campainha = "192.168.1.20"
voltagem = 3333 -- placeholder value; overwritten by adc.readvdd33() in read_voltage()
function desliga_circuito()
print("Putting ESP into deep sleep")
node.dsleep(0)
end
function read_voltage()
-- Disconnect from WiFi so the ESP supply voltage can be read.
wifi.sta.disconnect()
voltagem = adc.readvdd33()
print("Voltage: "..voltagem)
-- Bring WiFi back up and connect to the server.
print("Initializing WiFi")
init_wifi()
end
function pisca_led()
gpio.write(led_pin, status_led)
if status_led == gpio.LOW then
status_led = gpio.HIGH
else
status_led = gpio.LOW
end
end
function init_pins()
gpio.mode(led_pin, gpio.OUTPUT)
gpio.write(led_pin, status_led)
end
function init_wifi()
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "password")
wifi.sta.connect()
wifi.sta.setip({ip=ip_campainha,netmask="255.255.255.0",gateway="192.168.1.1"})
-- Wait for the WiFi connection before sending the request.
function try_connect()
if (wifi.sta.status() == 5) then
tmr.stop(0)
print("Connected, sending request")
manda_request()
-- If no confirmation is received within 15 seconds, power the ESP down.
tmr.alarm(2,15000,0, desliga_circuito)
else
print("Connecting...")
end
end
tmr.alarm(0,1000,1, function() try_connect() end )
end
function manda_request()
tmr.alarm(1, 200, 1, pisca_led)
print("Sending request")
-- Create the TCP connection
conn=net.createConnection(net.TCP,false)
-- Send the doorbell press and battery voltage to the server
conn:on("connection", function(conn)
conn:send("GET /?bateria=" ..voltagem.. " HTTP/1.0\r\n\r\n")
end)
-- If the server replies "Tocou", power the ESP down.
conn:on("receive", function(conn, data)
if data:find("Tocou") ~= nil then
desliga_circuito()
end
end)
-- Connect to the server
conn:connect(9999,ip_servidor)
end
print("Initializing pins")
init_pins()
print("Reading voltage")
read_voltage()
"""
Explanation: IA369Z - Reprodutibilidade em Pesquisa Computacional (Reproducibility in Computational Research).
Description of the device and data-collection code
Client device code
ESP8266 running a program written in Lua.
End of explanation
"""
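The Lua client above assembles its HTTP request by hand rather than using an HTTP library. As a minimal illustration, the same request line can be built in Python (the `build_request` helper is hypothetical, not part of the project):

```python
def build_request(voltage):
    # Mirrors the string the Lua client sends:
    # "GET /?bateria=" .. voltagem .. " HTTP/1.0\r\n\r\n"
    return "GET /?bateria=%d HTTP/1.0\r\n\r\n" % voltage

print(build_request(3333))  # the dummy voltage used before the first ADC read
```

HTTP/1.0 with no Host header is enough here, because the server only inspects the query string.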
# !/usr/bin/python2
import time
import BaseHTTPServer
import os
import random
import string
import requests
from urlparse import parse_qs, urlparse
HOST_NAME = '0.0.0.0'
PORT_NUMBER = 9999
# MP3_DIR is built from the user's HOME directory plus Music/Campainha
# (e.g. /home/user/Music/Campainha)
MP3_DIR = os.path.join(os.getenv('HOME'), 'Music', 'Campainha')
VALID_CHARS = set(string.ascii_letters + string.digits + '_.')
CHAVE_THINGSPEAK = 'XYZ11ZYX99XYZ1XX'
# Save the log file in the user's home directory (e.g. /home/user/campainha.log)
ARQUIVO_LOG = os.path.join(os.getenv('HOME'), 'campainha.log')
def filtra(mp3):
if not mp3.endswith('.mp3'):
return False
for c in mp3:
if not c in VALID_CHARS:
return False
return True
def log(msg, output_file=None):
if output_file is None:
output_file = open(ARQUIVO_LOG, 'a')
output_file.write('%s: %s\n' % (time.asctime(), msg))
output_file.flush()
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(s):
        query = urlparse(s.path).query
        if not query:
            s.send_response(404)
            s.send_header("Content-type", "text/plain")
            s.end_headers()
            s.wfile.write('Not found')
            return
        components = dict(qc.split('=') for qc in query.split('&'))
        if 'bateria' not in components:
            s.send_response(404)
            s.send_header("Content-type", "text/plain")
            s.end_headers()
            s.wfile.write('Not found')
            return
        # send_response must be called before send_header; the original code
        # called send_header first, before any response line was written.
        s.send_response(200)
        s.send_header("Content-type", "text/plain")
        s.end_headers()
        s.wfile.write('Tocou')
        s.wfile.flush()
        log("Updating ThingSpeak")
        r = requests.post('https://api.thingspeak.com/update',
                          data={'api_key': CHAVE_THINGSPEAK, 'field1': components['bateria']})
        log("ThingSpeak returned: %d" % r.status_code)
        log("Playing MP3")
mp3s = [f for f in os.listdir(MP3_DIR) if filtra(f)]
mp3 = random.choice(mp3s)
os.system("mpv " + os.path.join(MP3_DIR, mp3))
if __name__ == '__main__':
server_class = BaseHTTPServer.HTTPServer
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
log("Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER))
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
log("Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER))
"""
Explanation: Local server: plays the doorbell sound on the local machine.
Python program
End of explanation
"""
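The handler above parses the query string manually with `dict(qc.split('=') ...)`. A small sketch comparing that parse with the stdlib `parse_qs` (illustrative values only; the try/except import keeps the sketch runnable on both Python 2 and 3):

```python
try:
    from urllib.parse import parse_qs  # Python 3
except ImportError:
    from urlparse import parse_qs      # Python 2, as in the server above

query = "bateria=3333"

# Manual split, exactly as do_GET does it:
components = dict(qc.split('=') for qc in query.split('&'))

# Stdlib alternative; values come back as lists and are URL-decoded:
parsed = parse_qs(query)

print(components['bateria'], parsed['bateria'][0])
```

`parse_qs` also tolerates repeated keys and percent-encoded values, which the manual split does not.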
import csv

# Print each row of the exported CSV (Python 2 print statement, as above)
with open('database.csv', 'rb') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
    for row in spamreader:
        print ', '.join(row)
"""
Explanation: Database exported from the IoT device dashboard
CSV file
End of explanation
"""
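If the exported file has a header row, `csv.DictReader` gives named access to the columns. The column names below are assumptions about the dashboard export, demonstrated on an inline sample rather than `database.csv` (Python 3 syntax):

```python
import csv
import io

# Hypothetical export snippet; the real export's column names may differ.
sample = "created_at,field1\n2017-05-09 10:00:00,3333\n"
rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(row['created_at'], row['field1'])
```

Each row comes back as a dict keyed by the header, so the battery reading is `row['field1']` instead of a positional index.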
|