tedunderwood/fiction | bert/interpret_results.ipynb | mit
# modules needed
import pandas as pd
from scipy.stats import pearsonr
import numpy as np
"""
Explanation: # Interpret results through aggregation
Since I'm working with long documents, I'm not really concerned with BERT's raw predictions about individual text chunks. Instead I need to know how good the predictions are when aggregated at the volume level.
This notebook answers that question, pairing BERT's predictions with a metadata file that got spun off when data was originally created. For a given TASK, this file will be named, for instance, bertmeta/dev_rows_{TASK_NAME}.tsv. This metadata file lists the index of each text chunk but also the docid (usually, volume-level ID) associated with a larger document.
We can then group predictions by docid and evaluate accuracy at the volume level. I have tried doing this by averaging logits, as well as binary voting.
My tentative conclusion is that in most cases binary voting is preferable; I'm not sure whether the logits are scaled in a way that produces a reliable mean.
End of explanation
"""
pred = pd.read_csv('reports/sf512max/predictions.tsv', sep = '\t', header = None, names = ['real', 'pred'])
pred.head()
meta = pd.read_csv('bertmeta/dev_rows_SF512max.tsv', sep = '\t')
meta.head()
pred.shape
meta.shape
# Here we're aligning the dataframes by setting the index of "pred"
# to match the idx column of "meta."
pred = pred.assign(idx = meta['idx'])
pred = pred.set_index('idx')
pred.head()
"""
Explanation: Aggregate results; use binary voting
The general strategy here is to create a dataframe called pred that holds the predictions, and another one called meta that holds indexes paired with volume IDs (or review IDs when we're doing this for the sentiment dataset).
Then we align the dataframes.
End of explanation
"""
correct = []
right = 0
for idx, row in pred.iterrows():
if row['pred'] == row['real']:
correct.append(True)
right += 1
else:
correct.append(False)
print(right / len(pred))
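The same chunk-level accuracy can be computed without an explicit loop; this sketch uses a small made-up stand-in for the pred dataframe loaded above:

```python
import pandas as pd

# Toy stand-in for the pred dataframe (made-up values)
pred = pd.DataFrame({'real': [1, 0, 1, 1], 'pred': [1, 0, 0, 1]})

# Elementwise comparison; the mean of the booleans is the accuracy
correct = pred['pred'] == pred['real']
print(correct.mean())  # 0.75
```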
"""
Explanation: Measure accuracy at the chunk level
End of explanation
"""
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
for idx, row in df.iterrows():
total += 1
true_class = row['class']
predicted_class = pred.loc[idx, 'pred']
assert true_class == pred.loc[idx, 'real']
if true_class == predicted_class:
right += 1
if predicted_class:
positive += 1
bertprobs[vol] = positive/total
if right / total >= 0.5:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
"""
Explanation: And now at the document level
End of explanation
"""
pred = pd.read_csv('reports/sf512max/logits.tsv', sep = '\t', header = None, names = ['real', 'pred'])
pred.head()
right = 0
for idx, row in pred.iterrows():
if row['pred'] >= 0:
predclass = 1
else:
predclass = 0
if predclass == row['real']:
correct.append(True)
right += 1
else:
correct.append(False)
print(right / len(pred))
# Here we're aligning the dataframes by setting the index of "pred"
# to match the idx column of "meta."
pred = pred.assign(idx = meta['idx'])
pred = pred.set_index('idx')
pred.head()
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predictions = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predictions.append(predict)
true_class = row['class']
volmean = sum(predictions) / len(predictions)
if volmean >= 0:
predicted_class = 1
else:
predicted_class = 0
if true_class == predicted_class:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
"""
Explanation: Adding logits
The same process as above, except we load predictions from a file called logits.tsv.
End of explanation
"""
def corrdist(filename, bertprobs):
'''
Checks for correlation.
'''
# This is just a sanity check, so the code is not especially
# elegant: we simply loop over all ten bag-of-words models
# and collect their probabilities per volume.
root = '../temp/' + filename
logisticprob = dict()
for i in range(0, 10):
# note the range endpoints
tt_df = pd.read_csv(root + str(i) + '.csv', index_col = 'docid')
for key, value in bertprobs.items():
if key in tt_df.index:
l_prob = tt_df.loc[key, 'probability']
if key not in logisticprob:
logisticprob[key] = []
logisticprob[key].append(l_prob)
a = []
b = []
for key, value in logisticprob.items():
aval = sum(value) / len(value)
bval = bertprobs[key]
a.append(aval)
b.append(bval)
print(pearsonr(a, b))
print(len(a), len(b))
corrdist('BoWSF', bertprobs)
thisprobs = dict()
lastprobs = dict()
root = '../temp/BoWSF'
for i in range(0, 10):
df = pd.read_csv(root + str(i) + '.csv', index_col = 'docid')
a = []
b = []
for idx, row in df.iterrows():
thisprobs[idx] = row.probability
if idx in lastprobs:
a.append(lastprobs[idx])
b.append(thisprobs[idx])
if len(a) > 0:
print(pearsonr(a, b))
lastprobs = thisprobs
thisprobs = dict()
met = pd.read_csv('bertmeta/dev_rows_SF0_500.tsv', sep = '\t')
met.head()
# regression
byvol = meta.groupby('docid')
volpred = []
volreal = []
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predictions = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predictions.append(predict)
true_class = float(row['class'])
volmean = sum(predictions) / len(predictions)
volpred.append(volmean)
volreal.append(true_class)
print()
print('Correlation (r, p):', pearsonr(volpred, volreal))
"""
Explanation: Random curiosity
I was interested to know how closely BERT predictions correlate with bag-of-words modeling, and whether it's less closely than BoW models with each other. The answer is, yes, the correlation is less strong, and there's potential here for an ensemble model.
End of explanation
"""
ajaybhat/DLND | Project 1/Project-1.ipynb | apache-2.0
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
"""
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
"""
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
"""
rides[:24*10].plot(x='dteday', y='cnt')
"""
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
"""
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
"""
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
"""
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
"""
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
"""
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
"""
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
"""
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
"""
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
"""
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
### Backward pass ###
output_errors = targets-final_outputs
output_grad = output_errors
hidden_errors = np.dot(self.weights_hidden_to_output.T,output_errors)
hidden_grad = self.activation_function_derivative(hidden_inputs)
self.weights_hidden_to_output += self.lr*np.dot(output_grad,hidden_outputs.T)
self.weights_input_to_hidden += self.lr*np.dot(hidden_errors* hidden_grad,inputs.T)
def activation_function(self,x):
return 1/(1 + np.exp(-x))
def activation_function_derivative(self,x):
return self.activation_function(x)*(1-self.activation_function(x))
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
hidden_inputs = np.dot(self.weights_input_to_hidden,inputs)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output,hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
"""
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
"""
import sys
### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.008
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
if e%(epochs/10) == 0:
sys.stdout.write("\nProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
"""
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], 'r',label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, 'g', label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
"""
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
"""
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
def runTest(self):
pass
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner(verbosity=1).run(suite)
"""
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model predicts the data fairly well for the limited amount it is provided. It fails massively for the period December 23-28, because real-world scenarios indicate most people would not get bikes at that time, and would be staying home. However, the network predicts the same behavior as the other days of the month, and so it fails. This could be avoided by feeding it similar data of the type seen in the sample (as part of the training data).
Another way is to experiment with the activation function itself: sigmoid can be replaced by tanh or Leaky ReLU. See http://cs231n.github.io/neural-networks-1/
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
"""
spro/practical-pytorch | conditional-char-rnn/conditional-char-rnn.ipynb | mit
import glob
import unicodedata
import string
all_letters = string.ascii_letters + " .,;'-"
n_letters = len(all_letters) + 1 # Plus EOS marker
EOS = n_letters - 1
# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicode_to_ascii("O'Néàl"))
# Read a file and split into lines
def read_lines(filename):
lines = open(filename).read().strip().split('\n')
return [unicode_to_ascii(line) for line in lines]
# Build the category_lines dictionary, a list of lines per category
category_lines = {}
all_categories = []
for filename in glob.glob('../data/names/*.txt'):
category = filename.split('/')[-1].split('.')[0]
all_categories.append(category)
lines = read_lines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
print('# categories:', n_categories, all_categories)
"""
Explanation: Practical PyTorch: Generating Names with a Conditional Character-Level RNN
In the last tutorial we used a RNN to classify names into their language of origin. This time we'll turn around and generate names from languages. This model will improve upon the RNN we used to generate Shakespeare one character at a time by adding another input (representing the language) so we can specify what kind of name to generate.
```
python generate.py Russian
Rovakov
Uantov
Shavakov
python generate.py German
Gerren
Ereng
Rosher
python generate.py Spanish
Salla
Parer
Allan
python generate.py Chinese
Chan
Hang
Iun
```
Being able to "prime" the generator with a specific category brings us a step closer to the Sequence to Sequence model used for machine translation.
Recommended Reading
I assume you have at least installed PyTorch, know Python, and understand Tensors:
http://pytorch.org/ For installation instructions
Deep Learning with PyTorch: A 60-minute Blitz to get started with PyTorch in general
jcjohnson's PyTorch examples for an in depth overview
Introduction to PyTorch for former Torchies if you are former Lua Torch user
It would also be useful to know about RNNs and how they work:
The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real life examples
Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general
I also suggest the previous tutorials:
Classifying Names with a Character-Level RNN for using an RNN to classify text into categories
Generating Shakespeare with a Character-Level RNN for using an RNN to generate one character at a time
Preparing the Data
See Classifying Names with a Character-Level RNN for more detail - we're using the exact same dataset. In short, there are a bunch of plain text files data/names/[Language].txt with a name per line. We split lines into an array, convert Unicode to ASCII, and end up with a dictionary {language: [names ...]}.
End of explanation
"""
import torch
import torch.nn as nn
from torch.autograd import Variable
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size)
self.o2o = nn.Linear(hidden_size + output_size, output_size)
self.dropout = nn.Dropout(0.1)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, category, input, hidden):
input_combined = torch.cat((category, input, hidden), 1)
hidden = self.i2h(input_combined)
output = self.i2o(input_combined)
output_combined = torch.cat((hidden, output), 1)
output = self.o2o(output_combined)
output = self.dropout(output)
return output, hidden
def init_hidden(self):
return Variable(torch.zeros(1, self.hidden_size))
"""
Explanation: Creating the Network
This network extends the last tutorial's RNN with an extra argument for the category tensor, which is concatenated along with the others. The category tensor is a one-hot vector just like the letter input.
We will interpret the output as the probability of the next letter. When sampling, the most likely output letter is used as the next input letter.
I added a second linear layer o2o (after combining hidden and output) to give it more muscle to work with. There's also a dropout layer, which randomly zeros parts of its input with a given probability (here 0.1) and is usually used to fuzz inputs to prevent overfitting. Here we're using it towards the end of the network to purposely add some chaos and increase sampling variety.
End of explanation
"""
import random
# Get a random category and random line from that category
def random_training_pair():
category = random.choice(all_categories)
line = random.choice(category_lines[category])
return category, line
"""
Explanation: Preparing for Training
First of all, helper functions to get random pairs of (category, line):
End of explanation
"""
# One-hot vector for category
def make_category_input(category):
li = all_categories.index(category)
tensor = torch.zeros(1, n_categories)
tensor[0][li] = 1
return Variable(tensor)
# One-hot matrix of first to last letters (not including EOS) for input
def make_chars_input(chars):
tensor = torch.zeros(len(chars), n_letters)
for ci in range(len(chars)):
char = chars[ci]
tensor[ci][all_letters.find(char)] = 1
tensor = tensor.view(-1, 1, n_letters)
return Variable(tensor)
# LongTensor of second letter to end (EOS) for target
def make_target(line):
letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))]
letter_indexes.append(n_letters - 1) # EOS
tensor = torch.LongTensor(letter_indexes)
return Variable(tensor)
"""
Explanation: For each timestep (that is, for each letter in a training word) the inputs of the network will be (category, current letter, hidden state) and the outputs will be (next letter, next hidden state). So for each training set, we'll need the category, a set of input letters, and a set of output/target letters.
Since we are predicting the next letter from the current letter for each timestep, the letter pairs are groups of consecutive letters from the line - e.g. for "ABCD<EOS>" we would create ("A", "B"), ("B", "C"), ("C", "D"), ("D", "EOS").
The category tensor is a one-hot tensor of size <1 x n_categories>. When training we feed it to the network at every timestep - this is a design choice, it could have been included as part of initial hidden state or some other strategy.
End of explanation
"""
# Make category, input, and target tensors from a random category, line pair
def random_training_set():
category, line = random_training_pair()
category_input = make_category_input(category)
line_input = make_chars_input(line)
line_target = make_target(line)
return category_input, line_input, line_target
"""
Explanation: For convenience during training we'll make a random_training_set function that fetches a random (category, line) pair and turns them into the required (category, input, target) tensors.
End of explanation
"""
def train(category_tensor, input_line_tensor, target_line_tensor):
hidden = rnn.init_hidden()
optimizer.zero_grad()
loss = 0
for i in range(input_line_tensor.size()[0]):
output, hidden = rnn(category_tensor, input_line_tensor[i], hidden)
loss += criterion(output, target_line_tensor[i])
loss.backward()
optimizer.step()
return output, loss.data[0] / input_line_tensor.size()[0]
"""
Explanation: Training the Network
In contrast to classification, where only the last output is used, we are making a prediction at every step, so we are calculating loss at every step.
The magic of autograd allows you to simply sum these losses at each step and call backward at the end. Initializing loss with the plain integer 0 works because the first `loss += criterion(...)` promotes it to a tensor; from then on the running sum is tracked by autograd like any other tensor.
End of explanation
"""
import time
import math
def time_since(t):
now = time.time()
s = now - t
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
"""
Explanation: To keep track of how long training takes I am adding a time_since(t) function which returns a human readable string:
End of explanation
"""
n_epochs = 100000
print_every = 5000
plot_every = 500
all_losses = []
loss_avg = 0 # Zero every plot_every epochs to keep a running average
learning_rate = 0.0005
rnn = RNN(n_letters, 128, n_letters)
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
start = time.time()
for epoch in range(1, n_epochs + 1):
output, loss = train(*random_training_set())
loss_avg += loss
if epoch % print_every == 0:
print('%s (%d %d%%) %.4f' % (time_since(start), epoch, epoch / n_epochs * 100, loss))
if epoch % plot_every == 0:
all_losses.append(loss_avg / plot_every)
loss_avg = 0
"""
Explanation: Training is business as usual - call train a bunch of times and wait a few minutes, printing the current time and loss every print_every epochs, and keeping store of an average loss per plot_every epochs in all_losses for plotting later.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
plt.figure()
plt.plot(all_losses)
"""
Explanation: Plotting the Network
Plotting the historical loss from all_losses shows the network learning:
End of explanation
"""
max_length = 20
# Generate given a category and starting letter
def generate_one(category, start_char='A', temperature=0.5):
category_input = make_category_input(category)
chars_input = make_chars_input(start_char)
hidden = rnn.init_hidden()
output_str = start_char
for i in range(max_length):
output, hidden = rnn(category_input, chars_input[0], hidden)
# Sample as a multinomial distribution
output_dist = output.data.view(-1).div(temperature).exp()
top_i = torch.multinomial(output_dist, 1)[0]
# Stop at EOS, or add to output_str
if top_i == EOS:
break
else:
char = all_letters[top_i]
output_str += char
chars_input = make_chars_input(char)
return output_str
# Get multiple samples from one category and multiple starting letters
def generate(category, start_chars='ABC'):
for start_char in start_chars:
print(generate_one(category, start_char))
generate('Russian', 'RUS')
generate('German', 'GER')
generate('Spanish', 'SPA')
generate('Chinese', 'CHI')
"""
Explanation: Sampling the Network
To sample we give the network a letter and ask what the next one is, feed that in as the next letter, and repeat until the EOS token.
Create tensors for input category, starting letter, and empty hidden state
Create a string output_str with the starting letter
Up to a maximum output length,
Feed the current letter to the network
Get the next letter from highest output, and next hidden state
If the letter is EOS, stop here
If a regular letter, add to output_str and continue
Return the final name
Note: Rather than supplying a starting letter every time we generate, we could have trained with a "start of string" token and had the network choose its own starting letter.
End of explanation
"""
eecs445-f16/umich-eecs445-f16 | lecture14_unsupervised-learning-pca-clustering/lecture14_unsupervised-learning-pca-clustering.ipynb | mit
from __future__ import division
# plotting
%matplotlib inline
from matplotlib import pyplot as plt;
import matplotlib as mpl;
from mpl_toolkits.mplot3d import Axes3D
# scientific
import numpy as np;
import sklearn as skl;
import sklearn.datasets;
import sklearn.cluster;
import sklearn.mixture;
# ipython
import IPython;
# python
import os;
import random;
#####################################################
# image processing
import PIL;
import PIL.Image;
import PIL.ImageChops;
# trim and scale images
def trim(im, percent=100):
print("trim:", percent);
bg = PIL.Image.new(im.mode, im.size, im.getpixel((0,0)))
diff = PIL.ImageChops.difference(im, bg)
diff = PIL.ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
x = im.crop(bbox)
return x.resize(((x.size[0]*percent)//100,
(x.size[1]*percent)//100),
PIL.Image.ANTIALIAS);
"""
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
End of explanation
"""
def plot_plane():
# random samples
n = 200;
data = np.random.random((3,n));
data[2,:] = 0.4 * data[1,:] + 0.6 * data[0,:];
# plot plane
fig = plt.figure(figsize=(10,6));
ax = fig.add_subplot(111, projection="3d");
ax.scatter(*data);
plot_plane()
"""
Explanation: EECS 445: Machine Learning
Lecture 14: Unsupervised Learning: PCA and Clustering
Instructor: Jacob Abernethy
Date: November 2, 2016
Midterm exam information
Statistics:
<span>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Total Score</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>141.0</td>
</tr>
<tr>
<th>mean</th>
<td>48.7</td>
</tr>
<tr>
<th>std</th>
<td>12.4</td>
</tr>
<tr>
<th>min</th>
<td>15.0</td>
</tr>
<tr>
<th>25%</th>
<td>40.0</td>
</tr>
<tr>
<th>50%</th>
<td>50.0</td>
</tr>
<tr>
<th>75%</th>
<td>58.0</td>
</tr>
<tr>
<th>max</th>
<td>72.0</td>
</tr>
</tbody>
</table>
</span>
Score histogram
<img src="images/midterm_dist.png">
Announcements
We will be updating the HW4 to give you all an extra week
We will add a "free form" ML challenge via Kaggle as well
Want a regrade on the midterm? We'll post a policy soon.
References
[MLAPP] Murphy, Kevin. Machine Learning: A Probabilistic Perspective. 2012.
[PRML] Bishop, Christopher. Pattern Recognition and Machine Learning. 2006.
Goal Today: Methods for Unsupervised Learning
We generally call a problem "unsupervised" when we don't have any labels!
Outline
Principal Component Analysis
Classical View
Low dimensional representation of data
Relationship to SVD
Clustering
Core idea behind clustering
K-means algorithm
K-means++ etc.
Principal Components Analysis
Uses material from [MLAPP] and [PRML]
Dimensionality Reduction
High-dimensional data may have low-dimensional structure.
- We only need two dimensions to describe a rotated plane in 3d!
End of explanation
"""
## scikit example: Faces recognition example using eigenfaces and SVMs
from __future__ import print_function
from time import time
import matplotlib.pyplot as plt
# Note: sklearn.cross_validation and sklearn.grid_search were removed in
# scikit-learn 0.20; their contents now live in sklearn.model_selection,
# and RandomizedPCA became PCA(svd_solver='randomized').
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC
###############################################################################
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the 2 data directly (as relative pixel
# positions info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
###############################################################################
# Split into a training set and a test set using a stratified k fold
# split into a training and testing set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
###############################################################################
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150
#print("Extracting the top %d eigenfaces from %d faces"
# % (n_components, X_train.shape[0]))
#t0 = time()
from sklearn.decomposition import PCA  # RandomizedPCA was removed in newer scikit-learn
pca = PCA(n_components=n_components, whiten=True, svd_solver='randomized').fit(X_train)
#print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
#print("Projecting the input data on the eigenfaces orthonormal basis")
#t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
#print("done in %0.3fs" % (time() - t0))
###############################################################################
# Train a SVM classification model
#print("Fitting the classifier to the training set")
#t0 = time()
param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], }
clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
clf = clf.fit(X_train_pca, y_train)
#print("done in %0.3fs" % (time() - t0))
#print("Best estimator found by grid search:")
#print(clf.best_estimator_)
###############################################################################
# Quantitative evaluation of the model quality on the test set
#print("Predicting people's names on the test set")
#t0 = time()
y_pred = clf.predict(X_test_pca)
#print("done in %0.3fs" % (time() - t0))
#print(classification_report(y_test, y_pred, target_names=target_names))
#print(confusion_matrix(y_test, y_pred, labels=range(n_classes)))
###############################################################################
# Qualitative evaluation of the predictions using matplotlib
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
"""Helper function to plot a gallery of portraits"""
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
# plot the result of the prediction on a portion of the test set
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
return 'predicted: %s\ntrue: %s' % (pred_name, true_name)
prediction_titles = [title(y_pred, y_test, target_names, i)
for i in range(y_pred.shape[0])]
plot_gallery(X_test, prediction_titles, h, w)
"""
Explanation: Dimensionality Reduction
Data may even be embedded in a low-dimensional nonlinear manifold.
- How can we recover a low-dimensional representation?
<img src="images/swiss-roll.png">
Dimensionality Reduction
As an even more extreme example, consider a dataset consisting of the same image translated and rotate in different directions:
- Only 3 degrees of freedom for a 100x100-dimensional dataset!
<img src="images/pca_1.png" align="middle">
Principal Components Analysis
Given a set $X = {x_n}$ of observations
* in a space of dimension $D$,
* find a linear subspace of dimension $M < D$
* that captures most of its variability.
PCA can be described in two equivalent ways:
* maximizing the variance of the projection, or
* minimizing the squared approximation error.
PCA: Equivalent Descriptions
Maximize variance or minimize squared projection error:
<img src="images/pca_2.png" height = "300px" width = "300px" align="middle">
PCA: Equivalent Descriptions
With the mean at the origin, $ c_i^2 = a_i^2 + b_i^2 $, where $c_i^2$ is constant
Minimizing $b_i^2$ maximizes $a_i^2$ and vice versa
<img src="images/pca_3.png" height = "300px" width = "300px" align="middle">
PCA: First Principal Component
Given data points ${x_n}$ in $D$-dim space.
* Mean $\bar x = \frac{1}{N} \sum_{n=1}^{N} x_n $
* Data covariance ($D \times D$ matrix):
$ S = \frac{1}{N} \sum_{n=1}^{N}(x_n - \bar x)(x_n - \bar x)^T$
Let $u_1$ be the principal component we want.
* Unit length $u_1^T u_1 = 1$
* Projection of $x_n$ is $u_1^T x_n$
PCA: First Principal Component
Goal: Maximize the projection variance over directions $u_1$:
$$ \frac{1}{N} \sum_{n=1}^{N}\left\{u_1^T x_n - u_1^T \bar x \right\}^2 = u_1^T S u_1$$
Use a Lagrange multiplier to enforce $u_1^T u_1 = 1$
Maximize: $u_1^T S u_1 + \lambda(1-u_1^T u_1)$
Derivative is zero when $ Su_1 = \lambda u_1$
That is, $u_1^T S u_1 = \lambda $
So $u_1$ is eigenvector of $S$ with largest eigenvalue.
PCA: Maximizing Variance
The top $M$ eigenvectors of the empirical covariance matrix $S$ give the $M$ principal components of the data.
- Minimizes squared projection error
- Maximizes projection variances
Recall: These are the top $M$ left singular vectors of the data matrix $\hat X$, where $\hat X := X - \bar x \mathbb{1}_N$, i.e. we shift $X$ to ensure 0-mean rows.
Key points for computing SVD
$$\def\bX{\bar X}$$
Let $X$ be the $n \times m$ data matrix ($n$ rows, one for each example). We want to represent our data using only the top $k$ principal components.
1. Mean-center the data, so that $\bX$ is $X$ with each row subtracted by the mean row $\frac 1 n \sum_i X_{i:}$
1. Compute the SVD of $\bX$, i.e. $\bX = U \Sigma V^\top$
1. We can construct $\Sigma_k$ which drops all but the top $k$ singular values from $\Sigma$
1. We can represent $\bX$ either in terms of the principal components, $\Sigma_k V^\top$ or we can look at the data in the original representation after dropping the lower components, which is $U \Sigma_k V^\top$
Example: Eigenfaces
<img src="images/pca_9.png" align="middle">
Example: Face Recognition via Eigenfaces
End of explanation
"""
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
"""
Explanation: Example: Face Recognition
End of explanation
"""
X, y = skl.datasets.make_blobs(1000, cluster_std=[1.0, 2.5, 0.5], random_state=170)
plt.scatter(X[:,0], X[:,1])
"""
Explanation: Break time!
<img src="images/finger_cat.gif"/>
Soon to come: Latent Variable Models
Uses material from [MLAPP] §10.1-10.4, §11.1-11.2
Latent Variable Models
In general, the goal of probabilistic modeling is to
Use what we know to make inferences about what we don't know.
Graphical models provide a natural framework for this problem.
- Assume observed variables are correlated due to the influence of unobserved latent variables.
- Latent variables encode beliefs about the generative process.
Example to Come: Gaussian Mixture Models
This dataset is hard to explain with a single distribution.
- Underlying density is complicated overall...
- But it's clearly three Gaussians!
End of explanation
"""
|
eroicaleo/LearningPython | HandsOnML/ch03/ex01.ipynb | mit | from scipy.io import loadmat
mnist = loadmat('./datasets/mnist-original.mat')
mnist
X, y = mnist['data'], mnist['label']
X = X.T
X.shape
y = y.T
y.shape
type(y)
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
"""
Explanation: 3.1 Problem description
Try to build a classifier for the MNIST dataset that achieves over 97% accuracy
on the test set. Hint: the KNeighborsClassifier works quite well for this task;
you just need to find good hyperparameter values (try a grid search on the
weights and n_neighbors hyperparameters).
Load the data
End of explanation
"""
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
len(X_train)
shuffle_index = np.random.permutation(len(X_train))
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
"""
Explanation: Split test and training data
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_clf.fit(X_train, y_train)
forest_pred = forest_clf.predict(X_test)
forest_pred = forest_pred.reshape(10000,1)
accuracy = (forest_pred == y_test).sum() / len(y_test)
print(accuracy)
"""
Explanation: 3.2 Training a Random Forest Classifier for baseline
The reason to use a Random Forest classifier is that it runs faster than a linear model.
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1)
knn_clf.fit(X_train, y_train)
knn_clf.predict([X_test[0]])
# for i in range(1000):
# knn_clf.predict([X_test[i]])
knn_pred = knn_clf.predict(X_test)
knn_pred = knn_pred.reshape(10000, 1)
accuracy = (knn_pred == y_test).sum() / len(y_test)
print(accuracy)
"""
Explanation: 3.3 Training a KNeighborsClassifier Classifier with default settings
Seems like we have to set n_jobs = -1 so the prediction runs within a reasonable time.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_jobs': [-1], 'n_neighbors': [3, 5, 11, 19], 'weights': ['uniform', 'distance']}
]
grid_search = GridSearchCV(knn_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
grid_search.fit(X_train, y_train)
"""
Explanation: 3.4 GridSearchCV
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/11. 추정 및 검정/6) MLE 모수 추정의 예.ipynb | mit | import numpy as np
import scipy as sp
import scipy.stats
theta0 = 0.6
x = sp.stats.bernoulli(theta0).rvs(1000)
N0, N1 = np.bincount(x, minlength=2)
N = N0 + N1
theta = N1/N
theta
"""
Explanation: Examples of MLE Parameter Estimation
Parameter Estimation for the Bernoulli Distribution
The probability of each trial $x_i$ follows a Bernoulli distribution:
$$ P(x | \theta ) = \text{Bern}(x | \theta ) = \theta^x (1 - \theta)^{1-x}$$
Given $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Log-likelihood:
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \\
&=& \sum_{i=1}^N \big\{ x_i \log\theta + (1-x_i)\log(1 - \theta) \big\} \\
&=& \sum_{i=1}^N x_i \log\theta + \left( N-\sum_{i=1}^N x_i \right) \log( 1 - \theta )
\end{eqnarray}
$$
Since $x = 1$ (success) or $x = 0$ (failure), write
the total number of trials as $N$
and the number of successes as $N_1 = \sum_{i=1}^N x_i$.
The log-likelihood then becomes
$$
\begin{eqnarray}
\log L
&=& N_1 \log\theta + (N-N_1) \log(1 - \theta)
\end{eqnarray}
$$
Setting the derivative of the log-likelihood to zero:
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \theta}
&=& \dfrac{\partial}{\partial \theta} \big\{ N_1 \log\theta + (N-N_1) \log(1 - \theta) \big\} \\
&=& \dfrac{N_1}{\theta} - \dfrac{N-N_1}{1-\theta} = 0
\end{eqnarray}
$$
$$
\dfrac{N_1}{\theta} = \dfrac{N-N_1}{1-\theta}
$$
$$
\dfrac{1-\theta}{\theta} = \dfrac{N-N_1}{N_1}
$$
$$
\dfrac{1}{\theta} - 1 = \dfrac{N}{N_1} - 1
$$
$$
\theta = \dfrac{N_1}{N}
$$
End of explanation
"""
theta0 = np.array([0.1, 0.3, 0.6])
x = np.random.choice(np.arange(3), 1000, p=theta0)
N0, N1, N2 = np.bincount(x, minlength=3)
N = N0 + N1 + N2
theta = np.array([N0, N1, N2]) / N
theta
"""
Explanation: Parameter Estimation for the Categorical Distribution
The probability of each trial $x_i$ follows a categorical distribution:
$$ P(x | \theta ) = \text{Cat}(x | \theta) = \prod_{k=1}^K \theta_k^{x_k} $$
$$ \sum_{k=1}^K \theta_k = 1 $$
Given $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Log-likelihood:
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \\
&=& \sum_{i=1}^N \sum_{k=1}^K x_{i,k} \log\theta_k \\
&=& \sum_{k=1}^K \log\theta_k \sum_{i=1}^N x_{i,k}
\end{eqnarray}
$$
Write $N_k = \sum_{i=1}^N x_{i,k}$ for the number of times outcome $k$ occurs.
The log-likelihood then becomes
$$
\begin{eqnarray}
\log L
&=& \sum_{k=1}^K N_k \log\theta_k
\end{eqnarray}
$$
subject to the additional constraint
$$ \sum_{k=1}^K \theta_k = 1 $$
Log-likelihood derivative with a Lagrange multiplier:
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \theta_k}
&=& \dfrac{\partial}{\partial \theta_k} \left\{ \sum_{k=1}^K N_k \log\theta_k + \lambda \left(1- \sum_{k=1}^K \theta_k\right) \right\} = 0 \\
\dfrac{\partial \log L}{\partial \lambda}
&=& \dfrac{\partial}{\partial \lambda} \left\{ \sum_{k=1}^K N_k \log\theta_k + \lambda \left(1- \sum_{k=1}^K \theta_k \right) \right\} = 0
\end{eqnarray}
$$
$$
\dfrac{N_1}{\theta_1} = \dfrac{N_2}{\theta_2} = \cdots = \dfrac{N_K}{\theta_K} = \lambda
$$
$$
\sum_{k=1}^K N_k = N
$$
$$
\lambda \sum_{k=1}^K \theta_k = \lambda = N
$$
$$
\theta_k = \dfrac{N_k}{N}
$$
End of explanation
"""
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)
xbar = x.mean()
s2 = x.var(ddof=0)  # MLE variance uses the 1/N normalization
xbar, s2
"""
Explanation: Parameter Estimation for the Normal Distribution
The probability of each trial $x_i$ follows a Gaussian normal distribution:
$$ P(x | \theta ) = N(x | \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
Given $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Log-likelihood:
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \\
&=& \sum_{i=1}^N \left\{ -\dfrac{1}{2}\log(2\pi\sigma^2) - \dfrac{(x_i-\mu)^2}{2\sigma^2} \right\} \\
&=& -\dfrac{N}{2} \log(2\pi\sigma^2) - \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2
\end{eqnarray}
$$
Setting the log-likelihood derivatives to zero:
$$
\begin{eqnarray}
\dfrac{\partial \log L}{\partial \mu}
&=& -\dfrac{\partial}{\partial \mu} \left\{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right\} = 0 \\
\dfrac{\partial \log L}{\partial \sigma^2}
&=& -\dfrac{\partial}{\partial \sigma^2} \left\{ \dfrac{N}{2} \log(2\pi\sigma^2) + \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \right\} = 0
\end{eqnarray}
$$
$$
\dfrac{2}{2\sigma^2}\sum_{i=1}^N (x_i-\mu) = 0
$$
$$
N \mu = \sum_{i=1}^N x_i
$$
$$
\mu = \dfrac{1}{N}\sum_{i=1}^N x_i = \bar{x}
$$
$$
\dfrac{N}{2\sigma^2} - \dfrac{1}{2(\sigma^2)^2}\sum_{i=1}^N (x_i-\mu)^2 = 0
$$
$$
\sigma^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i-\bar{x})^2 = s^2
$$
End of explanation
"""
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)
xbar = x.mean(axis=0)
S2 = np.cov(x, rowvar=0, bias=True)  # bias=True gives the 1/N MLE normalization
print(xbar)
print(S2)
"""
Explanation: Parameter Estimation for the Multivariate Normal Distribution
MLE for the Multivariate Gaussian Normal Distribution
The probability of each trial $x_i$ follows a multivariate normal distribution:
$$ P(x | \theta ) = N(x | \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
Given $N$ samples, the likelihood is
$$ L = P(x_{1:N}|\theta) = \prod_{i=1}^N \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right)$$
Log-likelihood:
$$
\begin{eqnarray}
\log L
&=& \log P(x_{1:N}|\theta) \\
&=& \sum_{i=1}^N \left\{ -\log\left((2\pi)^{D/2} |\Sigma|^{1/2}\right) - \dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right\} \\
&=& C -\dfrac{N}{2} \log|\Sigma| - \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Sigma^{-1} (x_i-\mu)
\end{eqnarray}
$$
In terms of the precision matrix $\Lambda = \Sigma^{-1}$,
$$
\begin{eqnarray}
\log L
&=& C + \dfrac{N}{2} \log|\Lambda| - \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Lambda (x_i-\mu)
\end{eqnarray}
$$
$$ \dfrac{\partial \log L}{\partial \mu} = - \dfrac{\partial}{\partial \mu} \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Lambda (x_i-\mu) = \sum_{i=1}^N \Lambda (x_i - \mu) = 0 $$
$$ \mu = \dfrac{1}{N}\sum_{i=1}^N x_i $$
$$ \dfrac{\partial \log L}{\partial \Lambda} = \dfrac{\partial}{\partial \Lambda} \dfrac{N}{2} \log|\Lambda| - \dfrac{\partial}{\partial \Lambda} \dfrac{1}{2} \sum_{i=1}^N \text{tr}\left( (x_i-\mu)(x_i-\mu)^T\Lambda\right) = 0 $$
$$ \dfrac{N}{2} \Lambda^{-T} = \dfrac{1}{2}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
$$ \Sigma = \dfrac{1}{N}\sum_{i=1}^N (x_i-\mu)(x_i-\mu)^T $$
End of explanation
"""
|
hauser-tristan/heatwave-defcomp-examples | scripts/evaluate_definitions.ipynb | mit | #--- Libraries
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from pandas.tseries.offsets import *
from scipy.stats import beta
# place graphics in the notebook document
%matplotlib inline
# use 'casual' graphic style
_ = plt.xkcd()
# set more style defaults
sns.set_palette('deep')
#--- Load data
# read file of observed meteorological events
warnings = pd.read_pickle('../data/event_sequences.pkl')
# read file of reported heatwaves
disrec = pd.read_csv('../data/Heatwaves_database.csv')
# repair region name with space before name
disrec.loc[disrec.Region==' Tamil Nadu','Region'] = 'Tamil Nadu'
#--- Define functions for evaluating distributions
# ==============================================================
# Function to calculate the mode of the beta distribution
def mode(z) :
if (z[0]>1) & (z[1]>1) :
return (z[0]-1)/(z[0]+z[1]-2)
elif (z[0]>1) & (z[1] < 1) :
return 1.0
else :
return 0.0
# ==============================================================
# ==============================================================
# Functions to calculate the central 90% credible interval (5th-95th percentiles) of the beta distribution
def credint_low(z) :
return beta.ppf(0.05, z[0], z[1])
def credint_upp(z) :
return beta.ppf(0.95, z[0], z[1])
# ==============================================================
"""
Explanation: Evaluate Delay Periods
Here, we want to evaluate what delay thresholds to allow between defined meteorological states and disaster reports. The procedure is to evaluate the definitions for a range of delays and optimize the probability of detection (POD).
End of explanation
"""
#--- See how DesInventar reports correspond to time periods marked as dangerous
# list out regions (in '_' instead of ' ' format used in data frame)
regions = ["_".join(x.title().split()) for x in set(disrec.Region.where(disrec.Country=='India'))
if str(x) != 'nan']
# choose max number of days to link to warning
max_thrs = 15
# loop over regions
for region in regions :
print('---------\n\n')
print(region)
print('---------')
# list dates of heat disasters reported for the region (need to use name format with spaces)
dis_dates = set(disrec['Date (YMD)'].where(disrec.Region == ' '.join(region.split('_'))))
# remove nans (from regions that aren't in the selected country
dis_dates = [x for x in dis_dates if str(x) != 'nan']
# check for bad dates
bd = list()
for i in range(len(dis_dates)) :
if (dis_dates[i].split("/")[2] == '0') :
bd.extend([i])
# remove bad dates
for i in sorted(bd, reverse=True):
dis_dates.pop(i)
# set as date object
dis_dates = pd.to_datetime(dis_dates,format='%Y/%m/%d')
# loop through definitions
definitions = list(warnings[region].columns)
for deflab in definitions :
print(deflab)
# loop over different impact/reporting delay thresholds
for dely_thrs in range(1,max_thrs) :
# start counters
detc = 0 ; falm = 0 ; flag = 0 ; reps = 0
event_dates = []
# loop through reports
for i in range(len(dis_dates)) :
# something bad has been reported
reps += 1
# consider the range of possible dates for 'event'
event_range = pd.date_range(end=dis_dates[i],freq='D',periods=dely_thrs)
# keep track of all possible dates
event_dates.extend(event_range)
# was it expected?
if ( max(warnings[region][deflab][event_range]) == 1 ) :
detc += 1
# loop through days that had a warning
warnings[region]['date'] = warnings[region].index
for d in warnings[region]['date'][~(warnings[region]['date'].where(warnings[region][deflab] == 1)).isnull()] :
# a day got flagged with a warning
flag += 1
# is there any reason to believe something happened then?
if ( d not in event_dates ) :
falm += 1
# use Jefferys conjugate prior to estimate signficance of stats
# POD
alpha_pod = 0.5 + detc
beta_pod = 0.5 + reps - detc
# FAR
alpha_far = 0.5 + falm
beta_far = 0.5 + flag - falm
# create dictionary, if doesn't already exist
if not ('post_stats' in globals()) :
post_stats = {deflab: pd.DataFrame({'pod':[[alpha_pod,beta_pod]], 'far':[[alpha_far,beta_far]]})}
# if this definition has no entry, then create one
elif deflab not in post_stats :
post_stats[deflab] = pd.DataFrame({'pod':[[alpha_pod,beta_pod]],
'far':[[alpha_far,beta_far]]})
# if this definition already has an entry, then append to it
elif deflab in post_stats :
post_stats[deflab] = post_stats[deflab].append({'pod':[alpha_pod,beta_pod],
'far':[alpha_far,beta_far]},ignore_index=True)
    # remove the separate date column from the table
    warnings[region].drop(columns='date', inplace=True)
# create (if doesn't already exist) dictionary to hold data for each region
if not ('posterior_prms' in globals()) :
posterior_prms = {region : post_stats}
del(post_stats)
else :
posterior_prms[region] = post_stats
del(post_stats)
#
pd.to_pickle(posterior_prms, '../data/post_prms.pkl')
"""
Explanation: Calculate statistics
End of explanation
"""
# read dictionary of parameters
posterior_prms = pd.read_pickle('../data/post_prms.pkl')
# read region names from dictionary headers
regions = list(posterior_prms.keys())
# read definition names from meteorological warnings file
definitions = list(warnings[regions[0]].columns)
"""
Explanation: If the statistics have already been calculated can just load saved data file rather than rerunning the code block above.
End of explanation
"""
#--- Create matrix of optimal delay values
# set empty matrix
opt_delay_thrs = pd.DataFrame(np.zeros([len(regions),len(definitions)]))
# set rows as regions
opt_delay_thrs.index = regions
# set columns as definitions
opt_delay_thrs.columns = definitions
# cycle through all the regions and definitions
for region in regions :
for deflab in definitions :
        # find the empirical POD (mode of the posterior distribution)
modes = [mode(posterior_prms[region][deflab].pod[i]) for i in np.array(range(max_thrs-2))]
# find the lowest threshold value which optimizes the POD
opt_delay_thrs[deflab][region] = modes.index(max(modes))
#--- Show table
opt_delay_thrs
#--- Chart values
# set background
sns.set_style('whitegrid')
# adjust font size
sns.set_context("notebook", font_scale=1.5)
# reset data for plotting
df = opt_delay_thrs.copy(deep=True)
df['region'] = [ ' '.join(x.split('_')) for x in opt_delay_thrs.index ]
df = pd.melt(df,id_vars='region')
df['delay threshold'] = df['value'].astype('int')
# create plot
chart = sns.FacetGrid(df,col='region',size=4)
_ = chart.map(sns.countplot,'delay threshold')
sns.despine(left=True)
# save figure
plt.savefig('../figures/delay_threshold_values.png')
"""
Explanation: Find optimal threshold
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/tensorflow_extended/solutions/penguin_tfdv.ipynb | apache-2.0 | # Install the TensorFlow Extended library
!pip install -U tfx
"""
Explanation: Data validation using TFX Pipeline and TensorFlow Data Validation
Learning Objectives
Understand the data types, distributions, and other information (e.g., mean value, or number of uniques) about each feature.
Generate a preliminary schema that describes the data.
Identify anomalies and missing values in the data with respect to given schema.
Introduction
In this notebook, we will create and run TFX pipelines
to validate input data and create an ML model. This notebook is based on the
TFX pipeline we built in
Simple TFX Pipeline Tutorial.
If you have not read that tutorial yet, you should read it before proceeding
with this notebook.
In this notebook, we will create two TFX pipelines.
First, we will create a pipeline to analyze the dataset and generate a
preliminary schema of the given dataset. This pipeline will include two new
components, StatisticsGen and SchemaGen.
Once we have a proper schema of the data, we will create a pipeline to train
an ML classification model based on the pipeline from the previous tutorial.
In this pipeline, we will use the schema from the first pipeline and a
new component, ExampleValidator, to validate the input data.
The three new components, StatisticsGen, SchemaGen and ExampleValidator, are
TFX components for data analysis and validation, and they are implemented
using the
TensorFlow Data Validation library.
Please see
Understanding TFX Pipelines
to learn more about various concepts in TFX.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Install TFX
End of explanation
"""
# Load necessary libraries
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
"""
Explanation: Restart the kernel (Kernel > Restart kernel > Restart). Please ignore any incompatibility warnings and errors.
Check the TensorFlow and TFX versions.
End of explanation
"""
import os
# We will create two pipelines. One for schema generation and one for training.
SCHEMA_PIPELINE_NAME = "penguin-tfdv-schema"
PIPELINE_NAME = "penguin-tfdv"
# Output directory to store artifacts generated from the pipeline.
SCHEMA_PIPELINE_ROOT = os.path.join('pipelines', SCHEMA_PIPELINE_NAME)
PIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)
# Path to a SQLite DB file to use as an MLMD storage.
SCHEMA_METADATA_PATH = os.path.join('metadata', SCHEMA_PIPELINE_NAME,
'metadata.db')
METADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')
# Output directory where created models from the pipeline will be exported.
SERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)
from absl import logging
logging.set_verbosity(logging.INFO) # Set default logging level.
"""
Explanation: Set up variables
There are some variables used to define a pipeline. You can customize these
variables as you want. By default all output from the pipeline will be
generated under the current directory.
End of explanation
"""
import urllib.request
import tempfile
# TODO
DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.
_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_filepath = os.path.join(DATA_ROOT, "data.csv")
urllib.request.urlretrieve(_data_url, _data_filepath)
"""
Explanation: Prepare example data
We will download the example dataset for use in our TFX pipeline. The dataset
we are using is
Palmer Penguins dataset
which is also used in other
TFX examples.
There are four numeric features in this dataset:
culmen_length_mm
culmen_depth_mm
flipper_length_mm
body_mass_g
All features were already normalized to have range [0,1]. We will build a
classification model which predicts the species of penguins.
Because the TFX ExampleGen component reads inputs from a directory, we need
to create a directory and copy the dataset to it.
End of explanation
"""
# Print the first ten lines of the file
!head {_data_filepath}
"""
Explanation: Take a quick look at the CSV file.
End of explanation
"""
def _create_schema_pipeline(pipeline_name: str,
pipeline_root: str,
data_root: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Creates a pipeline for schema generation."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# TODO
# NEW: Computes statistics over data for visualization and schema generation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# TODO
# NEW: Generates schema based on the generated statistics.
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'], infer_feature_shape=True)
components = [
example_gen,
statistics_gen,
schema_gen,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
"""
Explanation: You should be able to see five feature columns. species is one of 0, 1 or 2,
and all other features should have values between 0 and 1. We will create a TFX
pipeline to analyze this dataset.
Generate a preliminary schema
TFX pipelines are defined using Python APIs. We will create a pipeline to
generate a schema from the input examples automatically. This schema can be
reviewed by a human and adjusted as needed. Once the schema is finalized it can
be used for training and example validation in later tasks.
In addition to CsvExampleGen which is used in
Simple TFX Pipeline Tutorial,
we will use StatisticsGen and SchemaGen:
StatisticsGen calculates
statistics for the dataset.
SchemaGen examines the
statistics and creates an initial data schema.
See the guides for each component or
TFX components tutorial
to learn more about these components.
Write a pipeline definition
We define a function to create a TFX pipeline. A Pipeline object
represents a TFX pipeline, which can be run using one of the pipeline
orchestration systems that TFX supports.
End of explanation
"""
# run the pipeline using Local TFX DAG runner
tfx.orchestration.LocalDagRunner().run(
_create_schema_pipeline(
pipeline_name=SCHEMA_PIPELINE_NAME,
pipeline_root=SCHEMA_PIPELINE_ROOT,
data_root=DATA_ROOT,
metadata_path=SCHEMA_METADATA_PATH))
"""
Explanation: Run the pipeline
We will use LocalDagRunner as in the previous tutorial.
End of explanation
"""
from ml_metadata.proto import metadata_store_pb2
# Non-public APIs, just for showcase.
from tfx.orchestration.portable.mlmd import execution_lib
def get_latest_artifacts(metadata, pipeline_name, component_id):
"""Output artifacts of the latest run of the component."""
context = metadata.store.get_context_by_type_and_name(
'node', f'{pipeline_name}.{component_id}')
executions = metadata.store.get_executions_by_context(context.id)
latest_execution = max(executions,
key=lambda e:e.last_update_time_since_epoch)
return execution_lib.get_artifacts_dict(metadata, latest_execution.id,
[metadata_store_pb2.Event.OUTPUT])
# Non-public APIs, just for showcase.
from tfx.orchestration.experimental.interactive import visualizations
def visualize_artifacts(artifacts):
"""Visualizes artifacts using standard visualization modules."""
for artifact in artifacts:
visualization = visualizations.get_registry().get_visualization(
artifact.type_name)
if visualization:
visualization.display(artifact)
from tfx.orchestration.experimental.interactive import standard_visualizations
standard_visualizations.register_standard_visualizations()
"""
Explanation: You should see INFO:absl:Component SchemaGen is finished. if the pipeline
finished successfully.
We will examine the output of the pipeline to understand our dataset.
Review outputs of the pipeline
As explained in the previous tutorial, a TFX pipeline produces two kinds of
outputs: artifacts and a
metadata DB (MLMD), which contains
metadata of artifacts and pipeline executions. We defined the location of
these outputs in the above cells. By default, artifacts are stored under
the pipelines directory and metadata is stored as a sqlite database
under the metadata directory.
You can use MLMD APIs to locate these outputs programmatically. First, we will
define some utility functions to search for the output artifacts that were just
produced.
End of explanation
"""
# Non-public APIs, just for showcase.
from tfx.orchestration.metadata import Metadata
from tfx.types import standard_component_specs
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
SCHEMA_METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
# Find output artifacts from MLMD.
stat_gen_output = get_latest_artifacts(metadata_handler, SCHEMA_PIPELINE_NAME,
'StatisticsGen')
stats_artifacts = stat_gen_output[standard_component_specs.STATISTICS_KEY]
schema_gen_output = get_latest_artifacts(metadata_handler,
SCHEMA_PIPELINE_NAME, 'SchemaGen')
schema_artifacts = schema_gen_output[standard_component_specs.SCHEMA_KEY]
"""
Explanation: Now we can examine the outputs from the pipeline execution.
End of explanation
"""
# docs-infra: no-execute
visualize_artifacts(stats_artifacts)
"""
Explanation: It is time to examine the outputs from each component. As described above,
Tensorflow Data Validation (TFDV)
is used in StatisticsGen and SchemaGen, and TFDV also
provides visualization of the outputs from these components.
In this tutorial, we will use the visualization helper methods in TFX which
use TFDV internally to show the visualization.
Examine the output from StatisticsGen
End of explanation
"""
visualize_artifacts(schema_artifacts)
"""
Explanation:
You can see various stats for the input data. These statistics are supplied to
SchemaGen to construct an initial schema of data automatically.
Examine the output from SchemaGen
End of explanation
"""
import shutil
_schema_filename = 'schema.pbtxt'
SCHEMA_PATH = 'schema'
os.makedirs(SCHEMA_PATH, exist_ok=True)
_generated_path = os.path.join(schema_artifacts[0].uri, _schema_filename)
# Copy the 'schema.pbtxt' file from the artifact uri to a predefined path.
shutil.copy(_generated_path, SCHEMA_PATH)
"""
Explanation: This schema is automatically inferred from the output of StatisticsGen. You
should be able to see 4 FLOAT features and 1 INT feature.
Export the schema for future use
We need to review and refine the generated schema. The reviewed schema needs
to be persisted to be used in subsequent pipelines for ML model training. In
other words, you might want to add the schema file to your version control
system for actual use cases. In this tutorial, we will just copy the schema
to a predefined filesystem path for simplicity.
End of explanation
"""
print(f'Schema at {SCHEMA_PATH}-----')
!cat {SCHEMA_PATH}/*
"""
Explanation: The schema file uses
Protocol Buffer text format
and an instance of
TensorFlow Metadata Schema proto.
End of explanation
"""
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# We don't need to specify _FEATURE_KEYS and _FEATURE_SPEC any more.
# That information can be read from the given schema file.
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int = 200) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _build_keras_model(schema: schema_pb2.Schema) -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
# ++ Changed code: Uses all features in the schema except the label.
feature_keys = [f.name for f in schema.feature if f.name != _LABEL_KEY]
inputs = [keras.layers.Input(shape=(1,), name=f) for f in feature_keys]
# ++ End of the changed code.
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# ++ Changed code: Reads in schema file passed to the Trainer component.
schema = tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema())
# ++ End of the changed code.
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _build_keras_model(schema)
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
"""
Explanation: You should be sure to review and possibly edit the schema definition as
needed. In this tutorial, we will just use the generated schema unchanged.
Validate input examples and train an ML model
We will go back to the pipeline that we created in
Simple TFX Pipeline Tutorial,
to train an ML model and use the generated schema for writing the model
training code.
We will also add an
ExampleValidator
component which will look for anomalies and missing values in the incoming
dataset with respect to the schema.
Write model training code
We need to write the model code as we did in
Simple TFX Pipeline Tutorial.
The model itself is the same as in the previous tutorial, but this time we will
use the schema generated from the previous pipeline instead of specifying
features manually. Most of the code was not changed. The only difference is
that we do not need to specify the names and types of features in this file.
Instead, we read them from the schema file.
End of explanation
"""
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
schema_path: str, module_file: str, serving_model_dir: str,
metadata_path: str) -> tfx.dsl.Pipeline:
"""Creates a pipeline using predefined schema with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Computes statistics over data for visualization and example validation.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
# NEW: Import the schema.
schema_importer = tfx.dsl.Importer(
source_uri=schema_path,
artifact_type=tfx.types.standard_artifacts.Schema).with_id(
'schema_importer')
# NEW: Performs anomaly detection based on statistics and data schema.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'])
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
schema=schema_importer.outputs['result'], # Pass the imported schema.
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
# NEW: Following three components were added to the pipeline.
statistics_gen,
schema_importer,
example_validator,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
metadata_connection_config=tfx.orchestration.metadata
.sqlite_metadata_connection_config(metadata_path),
components=components)
"""
Explanation: Now you have completed all preparation steps to build a TFX pipeline for
model training.
Write a pipeline definition
We will add two new components, Importer and ExampleValidator. Importer
brings an external file into the TFX pipeline. In this case, it is a file
containing the schema definition. ExampleValidator will examine
the input data and validate whether all input data conforms to the data schema
we provided.
End of explanation
"""
tfx.orchestration.LocalDagRunner().run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
schema_path=SCHEMA_PATH,
module_file=_trainer_module_file,
serving_model_dir=SERVING_MODEL_DIR,
metadata_path=METADATA_PATH))
"""
Explanation: Run the pipeline
End of explanation
"""
metadata_connection_config = tfx.orchestration.metadata.sqlite_metadata_connection_config(
METADATA_PATH)
with Metadata(metadata_connection_config) as metadata_handler:
ev_output = get_latest_artifacts(metadata_handler, PIPELINE_NAME,
'ExampleValidator')
anomalies_artifacts = ev_output[standard_component_specs.ANOMALIES_KEY]
"""
Explanation: You should see INFO:absl:Component Pusher is finished if the pipeline
finished successfully.
Examine outputs of the pipeline
We have trained the classification model for penguins, and we also have
validated the input examples in the ExampleValidator component. We can analyze
the output from ExampleValidator as we did with the previous pipeline.
End of explanation
"""
visualize_artifacts(anomalies_artifacts)
"""
Explanation: ExampleAnomalies from the ExampleValidator can be visualized as well.
End of explanation
"""
|
Grzego/nn-workshop | 1-neural-networks-intro/neural-networks-empty.ipynb | mit | # we'll use a ready-made function to download this dataset
# fetch_mldata was removed from scikit-learn; fetch_openml is its replacement
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
"""
Explanation: Training data
Since we will need something to train our neural network on, we will use MNIST, a popular dataset in machine learning. It contains handwritten digits from 0 to 9, stored as small 28x28-pixel images.
Let's download and load the dataset.
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns # can be installed separately (it draws nicer plots)
import numpy as np
# enables plotting inside the notebook (no separate window opens)
%matplotlib inline
"""
Explanation: Let's import additional libraries for displaying plots/images, plus numpy, which is a package for matrix computations
End of explanation
"""
print(mnist.data.shape) # 28x28
print(mnist.target.shape)
"""
Explanation: Let's check how many examples are in the dataset
End of explanation
"""
for i in range(10):
    r = np.random.randint(0, len(mnist.data))
    plt.subplot(2, 5, i + 1)
    plt.axis('off')
    plt.title(mnist.target[r])
    plt.imshow(mnist.data[r].reshape((28, 28)))
plt.show()
"""
Explanation: Now let's display a few example images from the dataset
End of explanation
"""
input_layer = 784
hidden_layer = ...
output_layer = 10
"""
Explanation: Building the neural network
To keep things simple, we will build a network with three layers. First let's fix the number of neurons in each layer. Since each image is 28x28 pixels, we need 784 input neurons. The hidden layer can have any number of neurons. Since there are 10 possible digits, we put 10 neurons in the output layer.
End of explanation
"""
# load the already-trained weights (parameters)
import h5py
with h5py.File('weights.h5', 'r') as file:
    W1 = file['W1'][:]
    W2 = file['W2'][:]
def sigmoid(x):
    pass
"""
Explanation: The key elements of neural networks are the weights on the connections between neurons. For now we will simply load already-trained weights for the network.
End of explanation
"""
def forward_pass(x, w1, w2):
    # x - network input
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    pass
# run the network and check that it works
# use the forward_pass function on a few examples and see whether the network answers correctly
"""
Explanation: The computations performed by a neural network can be drawn as a computational graph, where each node represents some operation on its inputs. The network we use is shown in the graph below (@ denotes matrix multiplication):
End of explanation
"""
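Since forward_pass is left for the reader, here is one hedged sketch of the computation in the graph above (the sigmoid after each matrix multiplication and the 100-unit hidden size are assumptions for illustration, not the notebook's official solution):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, w1, w2):
    # hidden layer: matrix multiply, then sigmoid
    hidden = sigmoid(x @ w1)
    # output layer: matrix multiply, then sigmoid
    return sigmoid(hidden @ w2)

# quick shape check with random stand-in weights (sizes are assumptions)
x = np.random.rand(2, 784)
w1 = np.random.rand(784, 100) * 0.01
w2 = np.random.rand(100, 10) * 0.01
out = forward_pass(x, w1, w2)
```

Each row of `out` holds 10 numbers in (0, 1), one per digit class.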
x_train = ...
y_train = ...
"""
Explanation: Training the network (back-propagation)
We need to prepare the data for training. The main step is encoding mnist.target with one-hot encoding, i.e.: $$y = \left[ \begin{matrix} 0 \\ 1 \\ 2 \\ \vdots \\ 8 \\ 9 \end{matrix} \right] \Longrightarrow \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{matrix} \right]$$
Note: the data is currently sorted by label, so all zeros come first, then all the ones, and so on. Such an ordering can make training significantly harder, so the data should be shuffled at the start, taking care that the inputs still correspond to the same outputs.
End of explanation
"""
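One possible way to fill in the cell above is sketched below; the toy data and target arrays are stand-ins for mnist.data and mnist.target:

```python
import numpy as np

# toy stand-ins for mnist.data / mnist.target
data = np.random.rand(6, 784)
target = np.array([0, 1, 2, 7, 8, 9])

# one-hot encode the labels: row k of eye(10) is the encoding of digit k
y_onehot = np.eye(10)[target.astype(int)]

# shuffle inputs and labels with the same permutation,
# so each input still corresponds to its own label
perm = np.random.permutation(len(data))
x_train, y_train = data[perm], y_onehot[perm]
```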
W1 = ...
W2 = ...
"""
Explanation: At the start, the parameters are simply drawn at random. We will use the function np.random.rand(dim_1, dim_2, ..., dim_n), which draws numbers from the interval $[0, 1)$ and returns a tensor with the given dimensions.
Note: even though the function returns numbers from $[0, 1)$, our initial parameters should lie in the interval $(-0.01, 0.01)$
End of explanation
"""
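A hedged sketch of the initialization described above, shifting np.random.rand into $(-0.01, 0.01)$ (the hidden-layer size of 100 is an arbitrary assumption):

```python
import numpy as np

input_layer, hidden_layer, output_layer = 784, 100, 10

# rand gives [0, 1); shift to be centered at 0 and scale into (-0.01, 0.01)
W1 = (np.random.rand(input_layer, hidden_layer) - 0.5) * 0.02
W2 = (np.random.rand(hidden_layer, output_layer) - 0.5) * 0.02
```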
def loss_func(y_true, y_pred):
    # y_true - correct answer
    # y_pred - answer computed by the neural network
    pass
def sigmoid_derivative(x):
    # implementation
    pass
def loss_derivative(y_true, y_pred):
    # y_true - correct answer
    # y_pred - answer computed by the neural network
    pass
def back_prop(x, y, w1, w2):
    # x - network input
    # y - correct answers
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # replace the lines below with the code from the forward_pass function
    # >>>
    ...
    # <<<
    ...
    return loss, dw1, dw2
"""
Explanation: Implementing back-propagation
Just as when optimizing a function, we will use backprop to compute the gradients. The computational graph is a bit more complicated. (@ denotes matrix multiplication)
To implement the back_prop(...) function we will also need the derivatives of our functions, as well as a loss function.
End of explanation
"""
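For reference, one possible set of helpers for the blanks above (the mean-squared-error loss is an assumption; the notebook may intend a different loss):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def loss_func(y_true, y_pred):
    # mean squared error
    return np.mean((y_pred - y_true) ** 2)

def loss_derivative(y_true, y_pred):
    # gradient of the mean squared error with respect to y_pred
    return 2.0 * (y_pred - y_true) / y_true.size
```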
def apply_gradients(w1, w2, dw1, dw2, learning_rate):
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # dw1 - gradients for the hidden-layer parameters
    # dw2 - gradients for the output-layer parameters
    # learning_rate - optimization step size
    ...
    return w1, w2
"""
Explanation: We will also write a function that performs a single optimization step, for the given parameters and their gradients, with the given step size.
End of explanation
"""
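The step itself is plain gradient descent; a minimal sketch of apply_gradients (shown on scalars for clarity, but the same lines work on numpy arrays):

```python
def apply_gradients(w1, w2, dw1, dw2, learning_rate):
    # plain gradient descent: move each parameter against its gradient
    w1 = w1 - learning_rate * dw1
    w2 = w2 - learning_rate * dw2
    return w1, w2

new_w1, new_w2 = apply_gradients(1.0, 2.0, 0.5, -0.5, 0.1)
```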
def accuracy(x, y, w1, w2):
    # x - network input
    # y - correct answers
    # w1 - hidden-layer parameters
    # w2 - output-layer parameters
    # hint: use the forward_pass function and np.argmax
    pass
"""
Explanation: To better assess the network's progress while learning, we will write a function that computes what percentage of the answers given by the neural network is correct.
End of explanation
"""
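The heart of accuracy is an argmax comparison; a small sketch on toy score arrays (the real function would first call forward_pass to obtain the scores):

```python
import numpy as np

def fraction_correct(y_true_onehot, scores):
    # compare the predicted class (argmax of scores) with the true class
    return np.mean(np.argmax(scores, axis=1) == np.argmax(y_true_onehot, axis=1))

y = np.eye(3)                       # labels 0, 1, 2
scores = np.array([[0.9, 0.05, 0.05],
                   [0.2, 0.1, 0.7],   # wrong: predicts 2, label is 1
                   [0.1, 0.2, 0.7]])
acc = fraction_correct(y, scores)  # 2 of 3 rows correct
```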
nb_epoch = 5  # how many times we will iterate over the training data
learning_rate = 0.001
batch_size = 16  # how many examples we train on at once
losses = []
for epoch in range(nb_epoch):
    print('\nEpoch %d' % (epoch,))
    for i in range(0, len(x_train), batch_size):
        x_batch = ...
        y_batch = ...
        # run back_prop for a single batch
        ...
        # update the parameters
        ...
        losses.append(loss)
        print('\r[%5d/%5d] loss: %8.6f - accuracy: %10.6f' % (i + 1, len(x_train),
                                                              loss, accuracy(x_batch, y_batch, W1, W2)), end='')
plt.plot(losses)
plt.show()
print('Accuracy on the whole dataset:', accuracy(x_train, y_train, W1, W2))
"""
Explanation: Finally, we can move on to writing the main training loop.
End of explanation
"""
|
ivastar/clear | notebooks/forward_modeling/Extract_beam.ipynb | mit | from grizli import model
from grizli import multifit
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from shutil import copy
from astropy.table import Table
from astropy import wcs
from astropy.io import fits
from glob import glob
import os
## Seaborn is used to make plots look nicer.
## If you don't have it, you can comment it out and it won't affect the rest of the code
import seaborn as sea
sea.set(style='white')
sea.set(style='ticks')
sea.set_style({'xtick.direction': 'in', 'xtick.top': True, 'xtick.minor.visible': True,
               'ytick.direction': 'in', 'ytick.right': True, 'ytick.minor.visible': True})
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
"""
Explanation: Forward modeling tutorial using mosaic images
Extract BeamCutout
Here I will show you how to extract BeamCutouts. Saving these (FITS) files before modeling will make the entire process quicker. The BeamCutouts contain the orient information, which is necessary for better-fitting models. Here's what Gabe says about this in his grizli notebooks:
To interact more closely with an individual object, its information can be extracted from the full exposure with the BeamCutout class. This object will contain the high-level GrismDisperser object useful for generating the model spectra and it will also have tools for analyzing and fitting the observed spectra.
It also makes detailed cutouts of the parent direct and grism images preserving the native WCS information.
End of explanation
"""
Grism_flts = glob('/Volumes/Vince_CLEAR/Data/Grism_fields/ERSPRIME/*GrismFLT.fits')
grp = multifit.GroupFLT(grism_files = Grism_flts, verbose=False)
"""
Explanation: Set files and target
For this example I'll be using one of my quiescent galaxys from GOODS north.
End of explanation
"""
beams = grp.get_beams(39170)
pa = -1
for BEAM in beams:
    if pa != BEAM.get_dispersion_PA():
        print('Instrument : {0}, ORIENT : {1}'.format(BEAM.grism.filter, BEAM.get_dispersion_PA()))
        pa = BEAM.get_dispersion_PA()
# save out G102 - 345
BEAM = beams[16]
BEAM.write_fits(root='98', clobber=True)
fits.setval('98_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('98_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G102 - 78
BEAM = beams[4]
BEAM.write_fits(root='78', clobber=True)
fits.setval('78_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('78_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G102 - 48
BEAM = beams[8]
BEAM.write_fits(root='48', clobber=True)
fits.setval('48_39170.g102.A.fits', 'EXPTIME', ext=0,
value=fits.open('48_39170.g102.A.fits')[1].header['EXPTIME'])
# save out G141 - 345
BEAM = beams[0]
BEAM.write_fits(root='345', clobber=True)
fits.setval('345_39170.g141.A.fits', 'EXPTIME', ext=0,
value=fits.open('345_39170.g141.A.fits')[1].header['EXPTIME'])
## G102 cutouts
for i in glob('*.g102*'):
    g102_beam = model.BeamCutout(fits_file=i)
    plt.figure()
    plt.imshow(g102_beam.beam.direct)
    plt.xticks([])
    plt.yticks([])
    plt.title(i)
## G141 cutout
g141_beam = model.BeamCutout(fits_file='345_39170.g141.A.fits')
plt.figure()
plt.imshow(g141_beam.beam.direct)
plt.xticks([])
plt.yticks([])
plt.title('345_39170.g141.A.fits')
## G102 cutouts
for i in glob('*.g102*'):
    g102_beam = model.BeamCutout(fits_file=i)
    plt.figure()
    plt.imshow(g102_beam.grism.data['SCI'] - g102_beam.contam, vmin = -0.1, vmax=0.5)
    plt.xticks([])
    plt.yticks([])
    plt.title(i)
## G141 cutout
g141_beam = model.BeamCutout(fits_file='345_39170.g141.A.fits')
plt.figure()
plt.imshow(g141_beam.grism.data['SCI']- g141_beam.contam, vmin = -0.1, vmax=0.5)
plt.xticks([])
plt.yticks([])
plt.title('345_39170.g141.A.fits')
"""
Explanation: Use Grizli to extract beam
First you'll need to create a GrismFLT object.
Next run blot_catalog to create the catalog of objects in the field.
Another routine (photutils_detection) is used if you're not using mosaic images and segmentation maps,
but since we have them, you should do it this way.
End of explanation
"""
|
gfrubi/FM2 | Notebooks/Ejemplo-Difusion-Calor-1D.ipynb | gpl-3.0 | %matplotlib inline
from numpy import *
import matplotlib.pyplot as plt
from ipywidgets import interact
plt.style.use('classic')
"""
Explanation: 1D heat diffusion equation
We will discuss the solution of the one-dimensional heat diffusion equation,
$$
\alpha \frac{\partial^2 \psi}{\partial x^2}-\frac{\partial \psi}{\partial t}=0,
$$
on the domain $-\infty<x<\infty$, $t\ge 0$, using the spatial Fourier transform, together with the boundary/initial conditions
\begin{align}
\lim_{x \rightarrow \pm \infty}\psi(x,t)=0,\qquad
\psi(x,0)=\phi(x).
\end{align}
Example 1: Dirac delta initial profile
We first consider the simple solution obtained when the initial temperature profile is a Dirac delta,
$$
\phi(x)=\lambda\, \delta(x),
$$
in which case
$$
\psi_1(x,t)=\frac{\lambda}{2\sqrt{\pi \alpha t}}e^{-\frac{x^2}{4 \alpha t}},\qquad t>0.
$$
End of explanation
"""
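As a quick sanity check of this solution (an illustration, separate from the notebook's code): the total heat $\int \psi_1\,dx$ should stay equal to $\lambda$ for every $t>0$, and a simple midpoint rule confirms this numerically for $\lambda=1$:

```python
import math

# heat kernel psi_1 with lambda = 1
def psi(x, t, alpha=1.0):
    return math.exp(-x * x / (4.0 * alpha * t)) / (2.0 * math.sqrt(math.pi * alpha * t))

# midpoint-rule integral of psi over x in [lo, hi]
def total_heat(t, alpha=1.0, lo=-50.0, hi=50.0, n=5000):
    dx = (hi - lo) / n
    return sum(psi(lo + (i + 0.5) * dx, t, alpha) for i in range(n)) * dx

# total heat stays (approximately) equal to lambda = 1 for any t > 0
conserved = [total_heat(t) for t in (0.5, 2.0, 5.0)]
```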
def Psi1(x,t,alpha):
    return (1/(2*sqrt(pi*alpha*t)))*exp(-x**2/(4.*alpha*t))
"""
Explanation: We define a function to evaluate our solution
End of explanation
"""
x = linspace(-50,50,1000)
def g1(t,alpha):
    plt.plot(x,Psi1(x,t,alpha), label='$\psi(x,%d)$' %t)
    plt.xlim(-50,50)
    plt.ylim(0,Psi1(0,0.1,alpha))
    plt.grid(True)
    plt.xlabel('$x$',fontsize=15)
    plt.ylabel('$\psi(x,t)$',fontsize=15)
    plt.title('Difusión del calor $1D$, $\phi(x)=\delta(x)$, $t=%.1f$' %t)
    plt.legend()
"""
Explanation: And another function that plots it, for given values of $t$ and $\alpha$:
End of explanation
"""
g1(0.05,1)
"""
Explanation: For example, we can plot the solution for $t=0.05$ and $\alpha=1$
End of explanation
"""
interact(g1,t=(0.1,50,0.1),alpha=(0.1,10,0.1))
"""
Explanation: We now plot it interactively
End of explanation
"""
from scipy.special import erf
"""
Explanation: Example 2: Step initial profile
If we now consider
$$
\phi(x)=\left\lbrace\begin{array}{ll} T_0, & x\in(-a,a) \\ 0, & \text{otherwise} \end{array}\right.
$$
then the solution takes the form
$$
\psi_2(x,t) = \frac{T_0}{2}\left[erf\left(\frac{x+a}{\sqrt{4\alpha t}}\right) +erf\left(\frac{a-x}{\sqrt{4\alpha t}}\right)\right],
$$
where $erf(x)$ denotes the error function, defined by
$$
erf(x):= \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt.
$$
This function is predefined in the scipy.special module, which we can now import
End of explanation
"""
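As a side note, scipy.special.erf agrees with math.erf from the standard library; a small sketch checking the integral definition numerically (illustration only, not part of the notebook's pipeline):

```python
import math

# midpoint-rule approximation of erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt
def erf_approx(x, n=10000):
    dt = x / n
    acc = sum(math.exp(-((i + 0.5) * dt) ** 2) for i in range(n))
    return 2.0 / math.sqrt(math.pi) * acc * dt

val = erf_approx(1.0)  # close to math.erf(1.0)
```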
a = 1
alpha = 0.01
def Psi2(x,t):
    return (erf((x+a)/(sqrt(4*alpha*t))) + erf((a-x)/(sqrt(4*alpha*t))))/2.
"""
Explanation: With this, the solution takes the form (we take $a=1$, $T_0=1$ and $\alpha=0.01$)
End of explanation
"""
x = linspace(-5,5,1000)
def g2(t):
    plt.plot(x,Psi2(x,t),label='$\psi(x,%d)$' %t)
    plt.title('Difusión del calor $1D$')
    plt.xlim(-5,5)
    plt.ylim(0,1.2)
    plt.grid(True)
    plt.xlabel('$x$',fontsize=15)
    plt.ylabel('$\psi(x,t)$',fontsize=15)
    plt.legend()
interact(g2, t=(0,50,0.1))
"""
Explanation: We make the plots just as in the first case:
End of explanation
"""
from matplotlib.animation import FuncAnimation
alpha = 1
t = 0.01
x = linspace(-50,50,1000)
fig, ax = plt.subplots()
ax.set_ylim(0, Psi1(0,0.1,alpha))
ax.grid()
ax.set_xlim(-50,50)
ax.set_xlabel('$x$',fontsize=15)
ax.set_ylabel('$\psi(x,t)$',fontsize=15)
line = ax.plot(x, Psi1(x,t,alpha), lw=2)[0]
title = ax.set_title('Difusión del calor $1D$, $\phi(x)=\delta(x)$, $t=%.1f$' %t)
def animate(i):
    t = 1.*i
    ax.set_title('Difusión del calor $1D$, $\phi(x)=\delta(x)$, $t=%.1f$' %t)
    line.set_ydata(Psi1(x,t,alpha))
anim = FuncAnimation(fig, animate, interval=100, frames=150)  # 100 ms between frames
anim.save('difusion-calor-1D-Delta.gif', writer='imagemagick')
"""
Explanation: Exporting animations to .gif files
It is possible to create an animation of the solution using the FuncAnimation function. The following code generates the file difusion-calor-1D-Delta.gif with an animation of the first solution:
End of explanation
"""
x = linspace(-5,5,1000)
fig, ax = plt.subplots()
ax.set_ylim(0,1.2)
ax.grid()
ax.set_xlim(-5,5)
ax.set_xlabel('$x$',fontsize=15)
ax.set_ylabel("$\psi(x,t)$",fontsize=15)
line = ax.plot(x, Psi2(x,0), lw=2)[0]
def animate(i):
    t = 0.2*i
    ax.set_title('Difusión del calor $1D$: Escalón, $t=%.1f$' %t)
    line.set_ydata(Psi2(x,t))
anim = FuncAnimation(fig, animate, interval=100, frames=200)  # 100 ms between frames
anim.save('difusion-calor-1D-Escalon.gif', writer='imagemagick')
"""
Explanation: Analogously, for the second solution, we create the file difusion-calor-1D-Escalon.gif:
End of explanation
"""
fig = plt.figure(figsize=(10,0.2))
ax = plt.subplot()
ax.set_xlim(-5,5)
ax.set_xlabel(r'$x$',fontsize=15)
ax.set_yticklabels([])
x = linspace(-5,5,1000)
y = linspace(0,0.2,2)
X,Y = meshgrid(x,y)
Z = Psi2(X,0)
cax = ax.pcolormesh(X, Y, Z, cmap='plasma')
def animate(i):
    t = 0.05*i
    ax.set_title('Difusión del calor $1D$: Escalón, $t=%.1f$' %t)
    cax.set_array(Psi2(X,t).flatten())
anim = FuncAnimation(fig, animate, interval=50, frames=100)  # 50 ms between frames = 20 fps
anim.save('difusion-calor-1D-escalon-colores.gif', writer='imagemagick')
"""
Explanation: Another interesting way to represent the solution is a color plot:
End of explanation
"""
|
unnikrishnankgs/va | venv/lib/python3.5/site-packages/tensorflow/models/object_detection/object_detection_tutorial.ipynb | bsd-2-clause | import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
"""
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
End of explanation
"""
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
"""
Explanation: Env setup
End of explanation
"""
from utils import label_map_util
from utils import visualization_utils as vis_util
"""
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
"""
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
"""
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
"""
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
    file_name = os.path.basename(file.name)
    if 'frozen_inference_graph.pb' in file_name:
        tar_file.extract(file, os.getcwd())
"""
Explanation: Download Model
End of explanation
"""
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
"""
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
"""
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
End of explanation
"""
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
"""
Explanation: Helper code
End of explanation
"""
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the detected objects.
# Score is shown on the result image, together with the class label.
scores = detection_graph.get_tensor_by_name('detection_scores:0')
classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Actual detection.
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
"""
Explanation: Detection
End of explanation
"""
|
datactive/bigbang | examples/experimental_notebooks/Corr between centrality and community 0.1.ipynb | mit | %matplotlib inline
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.analysis.graph as graph
import bigbang.ingress.mailman as mailman
import bigbang.analysis.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
import numpy as np
import math
from itertools import repeat
urls = ["https://mm.icann.org/pipermail/cc-humanrights/",
"http://mm.icann.org/pipermail/future-challenges/"]
archives= [Archive(url,archive_dir="../archives") for url in urls]
"""
Explanation: An IPython notebook that explores the relationship (correlation) between betweenness centrality and community membership across a number of mailing lists in a given time period.
End of explanation
"""
min_date = []
max_date = []
for arch in archives:
if min_date == [] and max_date == []:
max_date = max(arch.data['Date'])
min_date = min(arch.data['Date'])
else:
        if max(arch.data['Date']) > max_date:
            max_date = max(arch.data['Date'])
        if min(arch.data['Date']) < min_date:
            min_date = min(arch.data['Date'])
max_date = [int(str(max_date)[:4]), int(str(max_date)[5:7])]
min_date = [int(str(min_date)[:4]), int(str(min_date)[5:7])]
date_from_whole = [2015,7]#min_date #set date_from_whole based on actual time frame in mailing lists
date_to_whole = max_date #set date_to_whole based on actual time frame in mailing lists
print(date_from_whole)
print(date_to_whole)
total_month = (date_to_whole[0] - date_from_whole[0])*12 + (date_to_whole[1]-date_from_whole[1]+1)
date_from = []
date_to = []
temp_year = date_from_whole[0]
temp_month = date_from_whole[1]
for i in range(total_month):
date_from.append(pd.datetime(temp_year,temp_month,1,tzinfo=pytz.utc))
if temp_month == 12:
temp_year += 1
temp_month = 0
date_to.append(pd.datetime(temp_year,temp_month+1,1,tzinfo=pytz.utc))
temp_month += 1
def filter_by_date(df,d_from,d_to):
return df[(df['Date'] > d_from) & (df['Date'] < d_to)]
IG = []
for k in range(total_month):
dfs = [filter_by_date(arx.data,
date_from[k],
date_to[k]) for arx in archives]
bdf = pd.concat(dfs)
IG.append(graph.messages_to_interaction_graph(bdf))
#RG = graph.messages_to_reply_graph(messages)
#IG = graph.messages_to_interaction_graph(bdf)
print(len(bdf))
bc = []
for j in range(total_month):
bc.append(pd.Series(nx.betweenness_centrality(IG[j])))
len(bc)
"""
Explanation: The following sets start month and end month, both inclusive.
End of explanation
"""
new_dict = [{} for i in repeat(None, total_month)]
new_dict1 = [{} for i in repeat(None, total_month)]
for t in range(total_month):
filtered_activity = []
for i in range(len(archives)):
df = archives[i].data
fdf = filter_by_date(df,date_from[t],date_to[t])
if len(fdf) >0:
filtered_activity.append(Archive(fdf).get_activity().sum())
for k in range(len(filtered_activity)):
for g in range(len(filtered_activity[k])):
original_key = list(filtered_activity[k].keys())[g]
new_key = (original_key[original_key.index("(") + 1:original_key.rindex(")")])
if new_key not in new_dict[t]:
new_dict[t][new_key] = 0
new_dict1[t][new_key] = 0
new_dict[t][new_key] += math.log(filtered_activity[k].get_values()[g]+1)
#can define community membership by changing the above line.
#example, direct sum of emails would be
new_dict1[t][new_key] += filtered_activity[k].get_values()[g]
# take the log of the summed counts in place (log of sum, vs. sum of logs in new_dict)
for i in range(len(new_dict1)):
    new_dict1[i] = {k: np.log(v + 1) for k, v in new_dict1[i].items()}
#check if there's name difference, return nothing if perfect.
for i in range(total_month):
set(new_dict[i].keys()).difference(bc[i].index.values)
set(bc[i].index.values).difference(list(new_dict[i].keys()))
set(new_dict1[i].keys()).difference(bc[i].index.values)
set(bc[i].index.values).difference(list(new_dict1[i].keys()))
#A list of corresponding betweenness centrality and community membership for all users, monthly
comparison = []
comparison1 = []
for i in range(len(new_dict)):
comparison.append(pd.DataFrame([new_dict[i], bc[i]]))
comparison1.append(pd.DataFrame([new_dict1[i], bc[i]]))
corr = []
corr1 = []
for i in range(len(new_dict)):
corr.append(np.corrcoef(comparison[i].get_values()[0],comparison[i].get_values()[1])[0,1])
corr1.append(np.corrcoef(comparison1[i].get_values()[0],comparison1[i].get_values()[1])[0,1])
corr1
#Blue as sum of log, red as log of sum, with respect to community membership
x = list(range(1,total_month+1))
y = corr
plt.plot(x, y, marker='o')
z = corr1
plt.plot(x, z, marker='o', linestyle='--', color='r')
"""
Explanation: new_dict is a dictionary with keys as users' names, and values of their community membership (which can have different interpretations)
Here the community membership for a user is defined as the sum of log(Ni + 1), where Ni is the number of emails the user sent to mailing list i.
End of explanation
"""
|
idc9/law-net | vertex_metrics_experiment/chalboards/federal_tdidf.ipynb | mit | top_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
import os
import sys
import time
from math import *
import copy
import cPickle as pickle
from glob import glob
import re
# data
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# graph
import igraph as ig
# our code
sys.path.append(top_directory + 'code/')
from pipeline.download_data import *
from pipeline.make_raw_case_metadata import *
from load_data import case_info
sys.path.append(top_directory + 'explore/vertex_metrics_experiment/code/')
from make_case_text_files import *
from bag_of_words import *
from similarity_matrix import *
from make_snapshots import *
# directory set up
data_dir = '/Users/iaincarmichael/Documents/courtlistener/data/'
experiment_data_dir = data_dir + 'federal/'
text_dir = experiment_data_dir + 'textfiles/'
# courts
courts = ['scotus', 'cafc', 'cadc']
courts += ['ca' + str(i+1) for i in range(11)]
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
"""
Explanation: This could be helpful:
http://stackoverflow.com/questions/31784011/scikit-learn-fitting-data-into-chunks-vs-fitting-it-all-at-once
End of explanation
"""
class textfile_chunks:
def __init__(self, paths, chunk_size):
self.i = 0
self.paths = paths
self.chunk_size = chunk_size
self.num_files = len(paths)
self.num_chunks = ceil(float(self.num_files)/self.chunk_size)
def __iter__(self):
return self
def next(self):
if self.i < self.num_chunks:
            # file paths for this chunk; advance by whole chunks, not single paths
            start = self.i * self.chunk_size
            file_paths = self.paths[start:min(start + self.chunk_size, self.num_files)]
# read in files and put them into dict
files = {}
for path in file_paths:
text = open(path, 'r').read()
files[path] = text_normalization(text)
self.i += 1
return files
else:
raise StopIteration()
class textfile_iter:
def __init__(self, paths):
self.i = 0
self.paths = paths
self.num_files = len(paths)
def __iter__(self):
return self
def next(self):
if self.i < self.num_files:
text = open(self.paths[self.i], 'r').read()
self.i += 1
return text_normalization(text)
else:
raise StopIteration()
"""
Explanation: make iterator
End of explanation
"""
file_paths = glob(text_dir + '*.txt')  # `glob` was imported as a function above
# text_chunker = textfile_chunks(file_paths, 3)
tf_iter = textfile_iter(file_paths)
bag_of_words = CountVectorizer()
%time BOW = bag_of_words.fit_transform(tf_iter)
vocab = bag_of_words.get_feature_names()
CLid_to_index = {re.search(r'(\d+)\.txt', file_paths[i]).group(1): i for i in range(len(file_paths))}
# save data
save_sparse_csr(experiment_data_dir + 'bag_of_words_matrix', BOW)
with open(experiment_data_dir + 'CLid_to_index.p', 'wb') as fp:
pickle.dump(CLid_to_index, fp)
with open(experiment_data_dir + 'vocab.p', 'wb') as fp:
pickle.dump(vocab, fp)
"""
Explanation: run bag of words
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/statespace_forecasting.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
macrodata = sm.datasets.macrodata.load_pandas().data
macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')
"""
Explanation: Forecasting in statsmodels
This notebook describes forecasting using time series models in statsmodels.
Note: this notebook applies only to the state space model classes, which are:
sm.tsa.SARIMAX
sm.tsa.UnobservedComponents
sm.tsa.VARMAX
sm.tsa.DynamicFactor
End of explanation
"""
endog = macrodata['infl']
endog.plot(figsize=(15, 5))
"""
Explanation: Basic example
A simple example is to use an AR(1) model to forecast inflation. Before forecasting, let's take a look at the series:
End of explanation
"""
# Construct the model
mod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')
# Estimate the parameters
res = mod.fit()
print(res.summary())
"""
Explanation: Constructing and estimating the model
The next step is to formulate the econometric model that we want to use for forecasting. In this case, we will use an AR(1) model via the SARIMAX class in statsmodels.
After constructing the model, we need to estimate its parameters. This is done using the fit method. The summary method produces several convenient tables showing the results.
End of explanation
"""
# The default is to get a one-step-ahead forecast:
print(res.forecast())
"""
Explanation: Forecasting
Out-of-sample forecasts are produced using the forecast or get_forecast methods from the results object.
The forecast method gives only point forecasts.
End of explanation
"""
# Here we construct a more complete results object.
fcast_res1 = res.get_forecast()
# Most results are collected in the `summary_frame` attribute.
# Here we specify that we want a confidence level of 90%
print(fcast_res1.summary_frame(alpha=0.10))
"""
Explanation: The get_forecast method is more general, and also allows constructing confidence intervals.
End of explanation
"""
print(res.forecast(steps=2))
fcast_res2 = res.get_forecast(steps=2)
# Note: since we did not specify the alpha parameter, the
# confidence level is at the default, 95%
print(fcast_res2.summary_frame())
"""
Explanation: The default confidence level is 95%, but this can be controlled by setting the alpha parameter, where the confidence level is defined as $(1 - \alpha) \times 100\%$. In the example above, we specified a confidence level of 90%, using alpha=0.10.
Specifying the number of forecasts
Both of the functions forecast and get_forecast accept a single argument indicating how many forecasting steps are desired. One option for this argument is always to provide an integer describing the number of steps ahead you want.
End of explanation
"""
print(res.forecast('2010Q2'))
fcast_res3 = res.get_forecast('2010Q2')
print(fcast_res3.summary_frame())
"""
Explanation: However, if your data included a Pandas index with a defined frequency (see the section at the end on Indexes for more information), then you can alternatively specify the date through which you want forecasts to be produced:
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 5))
# Plot the data (here we are subsetting it to get a better look at the forecasts)
endog.loc['1999':].plot(ax=ax)
# Construct the forecasts
fcast = res.get_forecast('2011Q4').summary_frame()
fcast['mean'].plot(ax=ax, style='k--')
ax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);
"""
Explanation: Plotting the data, forecasts, and confidence intervals
Often it is useful to plot the data, the forecasts, and the confidence intervals. There are many ways to do this, but here's one example
End of explanation
"""
# Step 1: fit model parameters w/ training sample
training_obs = int(len(endog) * 0.8)
training_endog = endog[:training_obs]
training_mod = sm.tsa.SARIMAX(
training_endog, order=(1, 0, 0), trend='c')
training_res = training_mod.fit()
# Print the estimated parameters
print(training_res.params)
# Step 2: produce one-step-ahead forecasts
fcast = training_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
"""
Explanation: Note on what to expect from forecasts
The forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive.
Prediction vs Forecasting
The results objects also contain two methods that allow for both in-sample fitted values and out-of-sample forecasting. They are predict and get_prediction. The predict method only returns point predictions (similar to forecast), while the get_prediction method also returns additional results (similar to get_forecast).
In general, if your interest is out-of-sample forecasting, it is easier to stick to the forecast and get_forecast methods.
Cross validation
Note: some of the functions used in this section were first introduced in statsmodels v0.11.0.
A common use case is to cross-validate forecasting methods by performing h-step-ahead forecasts recursively using the following process:
Fit model parameters on a training sample
Produce h-step-ahead forecasts from the end of that sample
Compare forecasts against test dataset to compute error rate
Expand the sample to include the next observation, and repeat
Economists sometimes call this a pseudo-out-of-sample forecast evaluation exercise, or time-series cross-validation.
Example
We will conduct a very simple exercise of this sort using the inflation dataset above. The full dataset contains 203 observations, and for expositional purposes we'll use the first 80% as our training sample and only consider one-step-ahead forecasts.
A single iteration of the above procedure looks like the following:
End of explanation
"""
# Step 1: append a new observation to the sample and refit the parameters
append_res = training_res.append(endog[training_obs:training_obs + 1], refit=True)
# Print the re-estimated parameters
print(append_res.params)
"""
Explanation: To add on another observation, we can use the append or extend results methods. Either method can produce the same forecasts, but they differ in the other results that are available:
append is the more complete method. It always stores results for all training observations, and it optionally allows refitting the model parameters given the new observations (note that the default is not to refit the parameters).
extend is a faster method that may be useful if the training sample is very large. It only stores results for the new observations, and it does not allow refitting the model parameters (i.e. you have to use the parameters estimated on the previous sample).
If your training sample is relatively small (less than a few thousand observations, for example) or if you want to compute the best possible forecasts, then you should use the append method. However, if that method is infeasible (for example, because you have a very large training sample) or if you are okay with slightly suboptimal forecasts (because the parameter estimates will be slightly stale), then you can consider the extend method.
A second iteration, using the append method and refitting the parameters, would go as follows (note again that the default for append does not refit the parameters, but we have overridden that with the refit=True argument):
End of explanation
"""
# Step 2: produce one-step-ahead forecasts
fcast = append_res.forecast()
# Step 3: compute root mean square forecasting error
true = endog.reindex(fcast.index)
error = true - fcast
# Print out the results
print(pd.concat([true.rename('true'),
fcast.rename('forecast'),
error.rename('error')], axis=1))
"""
Explanation: Notice that these estimated parameters are slightly different than those we originally estimated. With the new results object, append_res, we can compute forecasts starting from one observation further than the previous call:
End of explanation
"""
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.append(updated_endog, refit=False)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
"""
Explanation: Putting it altogether, we can perform the recursive forecast evaluation exercise as follows:
End of explanation
"""
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
"""
Explanation: We now have a set of three forecasts made at each point in time from 1999Q2 through 2009Q3. We can construct the forecast errors by subtracting each forecast from the actual value of endog at that point.
End of explanation
"""
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
"""
Explanation: To evaluate our forecasts, we often want to look at a summary value like the root mean square error. Here we can compute that for each horizon by first flattening the forecast errors so that they are indexed by horizon and then computing the root mean square error for each horizon.
End of explanation
"""
# Setup forecasts
nforecasts = 3
forecasts = {}
# Get the number of initial training observations
nobs = len(endog)
n_init_training = int(nobs * 0.8)
# Create model for initial training sample, fit parameters
init_training_endog = endog.iloc[:n_init_training]
mod = sm.tsa.SARIMAX(init_training_endog, order=(1, 0, 0), trend='c')
res = mod.fit()
# Save initial forecast
forecasts[init_training_endog.index[-1]] = res.forecast(steps=nforecasts)
# Step through the rest of the sample
for t in range(n_init_training, nobs):
# Update the results by appending the next observation
updated_endog = endog.iloc[t:t+1]
res = res.extend(updated_endog)
# Save the new set of forecasts
forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)
# Combine all forecasts into a dataframe
forecasts = pd.concat(forecasts, axis=1)
print(forecasts.iloc[:5, :5])
# Construct the forecast errors
forecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)
print(forecast_errors.iloc[:5, :5])
# Reindex the forecasts by horizon rather than by date
def flatten(column):
return column.dropna().reset_index(drop=True)
flattened = forecast_errors.apply(flatten)
flattened.index = (flattened.index + 1).rename('horizon')
print(flattened.iloc[:3, :5])
# Compute the root mean square error
rmse = (flattened**2).mean(axis=1)**0.5
print(rmse)
"""
Explanation: Using extend
We can check that we get similar forecasts if we instead use the extend method, but that they are not exactly the same as when we use append with the refit=True argument. This is because extend does not re-estimate the parameters given the new observation.
End of explanation
"""
print(endog.index)
"""
Explanation: By not re-estimating the parameters, our forecasts are slightly worse (the root mean square error is higher at each horizon). However, the process is faster, even with only 200 datapoints. Using the %%timeit cell magic on the cells above, we found a runtime of 570ms using extend versus 1.7s using append with refit=True. (Note that using extend is also faster than using append with refit=False).
Indexes
Throughout this notebook, we have been making use of Pandas date indexes with an associated frequency. As you can see, this index marks our data as being at a quarterly frequency, between 1959Q1 and 2009Q3.
End of explanation
"""
# Annual frequency, using a PeriodIndex
index = pd.period_range(start='2000', periods=4, freq='A')
endog1 = pd.Series([1, 2, 3, 4], index=index)
print(endog1.index)
# Quarterly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='QS')
endog2 = pd.Series([1, 2, 3, 4], index=index)
print(endog2.index)
# Monthly frequency, using a DatetimeIndex
index = pd.date_range(start='2000', periods=4, freq='M')
endog3 = pd.Series([1, 2, 3, 4], index=index)
print(endog3.index)
"""
Explanation: In most cases, if your data has an associated date/time index with a defined frequency (like quarterly, monthly, etc.), then it is best to make sure your data is a Pandas series with the appropriate index. Here are three examples of this:
End of explanation
"""
index = pd.DatetimeIndex([
'2000-01-01 10:08am', '2000-01-01 11:32am',
'2000-01-01 5:32pm', '2000-01-02 6:15am'])
endog4 = pd.Series([0.2, 0.5, -0.1, 0.1], index=index)
print(endog4.index)
"""
Explanation: In fact, if your data has an associated date/time index, it is best to use that even if it does not have a defined frequency. An example of that kind of index is as follows - notice that it has freq=None:
End of explanation
"""
mod = sm.tsa.SARIMAX(endog4)
res = mod.fit()
"""
Explanation: You can still pass this data to statsmodels' model classes, but you will get the following warning, that no frequency data was found:
End of explanation
"""
res.forecast(1)
"""
Explanation: What this means is that you cannot specify forecasting steps by dates, and the output of the forecast and get_forecast methods will not have associated dates. The reason is that without a given frequency, there is no way to determine what date each forecast should be assigned to. In the example above, there is no pattern to the date/time stamps of the index, so there is no way to determine what the next date/time should be (should it be in the morning of 2000-01-02? the afternoon? or maybe not until 2000-01-03?).
For example, if we forecast one-step-ahead:
End of explanation
"""
# Here we'll catch the exception to prevent printing too much of
# the exception trace output in this notebook
try:
res.forecast('2000-01-03')
except KeyError as e:
print(e)
"""
Explanation: The index associated with the new forecast is 4, because if the given data had an integer index, that would be the next value. A warning is given letting the user know that the index is not a date/time index.
If we try to specify the steps of the forecast using a date, we will get the following exception:
KeyError: 'The `end` argument could not be matched to a location related to the index of the data.'
End of explanation
"""
|
biothings/biothings_explorer | jupyter notebooks/Multiomics + Service.ipynb | apache-2.0 | from biothings_explorer.query.predict import Predict
from biothings_explorer.query.visualize import display_graph
import nest_asyncio
nest_asyncio.apply()
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: Use case
Description: For a patient with disease X, what are some factors (such as genetic features, comorbidities, etc) that could cause sensitivity or resistance to drug Y?
End of explanation
"""
from biothings_explorer.hint import Hint
ht = Hint()
luad = ht.query("lung adenocarcinoma")['Disease'][0]
luad
"""
Explanation: Use Case 1: Mutations in what genes cause sensitivity to drug Y in patients with Lung Adenocarcinoma
Step 1: Retrieve representation of Lung Adenocarcinoma in BTE
End of explanation
"""
query_config = {
"filters": [
{
"frequency": {
">": 0.1
},
},
{
"disease_context": {
"=": "MONDO:0005061"
},
"pvalue": {
"<": 0.05
},
"effect_size": {
"<": 0.00
}
}
],
"predicates": [None, None, "physically_interacts_with"]
}
pd = Predict(
input_objs=[luad],
intermediate_nodes =['Gene', 'ChemicalSubstance'],
output_types =['Gene'],
config=query_config
)
pd.connect(verbose=True)
df = pd.display_table_view()
df
df[["input_label", "pred1", "pred1_source", "node1_label", "pred2", "pred2_source", "node2_label", "pred3", "pred3_source", "output_label"]]
df1 = df.query("node1_id == output_id").drop_duplicates()
df1.head()
res = display_graph(df1)
"""
Explanation: Step 2: Use BTE to perform query
End of explanation
"""
paad = ht.query("MONDO:0006047")['Disease'][0]
paad
query_config = {
"filters": [
{
"frequency": {
">": 0.1
},
},
{
"disease_context": {
"=": "MONDO:0006047"
},
"pvalue": {
"<": 0.05
},
"effect_size": {
"<": 0.00
}
}
],
"predicates": [None, None, "physically_interacts_with"]
}
pd = Predict([paad], ['Gene', 'ChemicalSubstance'], ['Gene'], config=query_config)
pd.connect(verbose=True)
df1 = pd.display_table_view()
df1
df2 = df1.query("node1_id == output_id").drop_duplicates()
df2[["input_label", "pred1", "pred1_source", "node1_label", "pred2", "pred2_source", "node2_label", "pred3", "pred3_source", "output_label"]]
"""
Explanation: Use Case 2: Mutations in what genes cause sensitivity to drug Y in patients with pancreatic adenocarcinoma
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.determinize.ipynb | gpl-3.0 | import vcsn
lady4 = vcsn.context('lal_char(abc), b').ladybird(3)
lady4
lady4d = lady4.determinize()
lady4d
"""
Explanation: automaton.determinize
Compute the (accessible part of the) determinization of an automaton.
Preconditions:
- its labelset is free
- its weightset features a division operator (which is the case for $\mathbb{B}$).
Postconditions:
- the result is deterministic
- the result is equivalent to the input automaton
- the result is accessible
- the result need not be complete
Caveats:
- might not terminate if the weightset is not $\mathbb{B}$ (see automaton.has_twins_property)
See also:
- automaton.has_twins_property
- automaton.is_deterministic
Examples
Ladybird
Ladybird is a well known example that shows the exponential growth on the number of resulting states.
End of explanation
"""
%%automaton a
context = "lal_char(a), b"
0 -> 1 a
a.determinize()
"""
Explanation: The resulting automaton has states labeled with subsets of the input automaton set of states.
Empty input
If the input automaton has an empty accessible part (i.e., it has no initial state), then the output is empty (which is genuinely displayed as nothing below).
End of explanation
"""
%%automaton a
context = "lal_char, q"
$ -> 0 <2>
0 -> 1 <2>a
0 -> 2 <3>a
1 -> 1 <3>b
1 -> 3 <5>c
2 -> 2 <3>b
2 -> 3 <3>d
3 -> $
d = a.determinize()
d
"""
Explanation: Weighted automata
The algorithm we use requires a division operator. The following example has weights in $\mathbb{Q}$ (and was chosen because the algorithm terminates on it).
End of explanation
"""
a.is_equivalent(d)
"""
Explanation: Which is obviously deterministic. Of course they are equivalent:
End of explanation
"""
%%automaton a
context = "lal_char, zmin"
$ -> 0 <0>
0 -> 1 <1>a
0 -> 2 <2>a
1 -> 1 <3>b
1 -> 3 <5>c
2 -> 2 <3>b
2 -> 3 <6>d
3 -> $ <0>
d = a.determinize()
d
"""
Explanation: The next example works in $\mathbb{Z}_{\min}$, which features a division: the usual subtraction.
End of explanation
"""
a.shortest(10)
d.shortest(10)
"""
Explanation: Of course, they are equivalent. However, we cannot use automaton.is_equivalent here.
End of explanation
"""
vcsn.context('lal_char, q').expression('a*+(<2>a)*').automaton()
"""
Explanation: Caveats
Some weighted automata cannot be determinized. See automaton.has_twins_property for a possible means to detect whether the procedure will terminate.
The following automaton, for example, admits no equivalent deterministic automaton.
End of explanation
"""
|
arviz-devs/arviz | doc/source/getting_started/WorkingWithInferenceData.ipynb | apache-2.0 | import arviz as az
import numpy as np
import xarray as xr
xr.set_options(display_expand_data=False, display_expand_attrs=False);
"""
Explanation: (working_with_InferenceData)=
Working with InferenceData
Here we present a collection of common manipulations you can use while working with InferenceData.
End of explanation
"""
idata = az.load_arviz_data("centered_eight")
idata
"""
Explanation: display_expand_data=False makes the default view for {class}xarray.DataArray fold the data values to a single line. To explore the values, click on the {fas}database icon on the left of the view, right under the xarray.DataArray text. It has no effect on Dataset objects that already default to folded views.
display_expand_attrs=False folds the attributes in both DataArray and Dataset objects to keep the views shorter. In this page we print DataArrays and Datasets several times and they always have the same attributes.
End of explanation
"""
post = idata.posterior
post
"""
Explanation: Get the dataset corresponding to a single group
End of explanation
"""
post["log_tau"] = np.log(post["tau"])
idata.posterior
"""
Explanation: :::{tip}
You'll have noticed we stored the posterior group in a new variable: post. As .copy() was not called, now using idata.posterior or post is equivalent.
Use this to keep your code short yet easy to read. Store the groups you'll need very often as separate variables to use explicitly, but don't delete the InferenceData parent. You'll need it for many ArviZ functions to work properly. For example: {func}~arviz.plot_pair needs data from sample_stats group to show divergences, {func}~arviz.compare needs data from both log_likelihood and posterior groups, {func}~arviz.plot_loo_pit needs not 2 but 3 groups: log_likelihood, posterior_predictive and posterior.
:::
Add a new variable
End of explanation
"""
stacked = az.extract_dataset(idata)
stacked
"""
Explanation: Combine chains and draws
End of explanation
"""
az.extract_dataset(idata, num_samples=100)
"""
Explanation: You can also use {meth}xarray.Dataset.stack if you only want to combine the chain and draw dimensions. {func}arviz.extract_dataset is a convenience function aimed at taking care of the most common subsetting operations with MCMC samples. It can:
- Combine chains and draws
- Return a subset of variables (with optional filtering with regular expressions or string matching)
- Return a subset of samples. Moreover by default it returns a random subset to prevent getting non-representative samples due to bad mixing.
- Access any group
(idata/random_subset)=
Get a random subset of the samples
End of explanation
"""
stacked.mu.values
"""
Explanation: :::{tip}
Use a random seed to get the same subset from multiple groups: az.extract_dataset(idata, num_samples=100, rng=3) and az.extract_dataset(idata, group="log_likelihood", num_samples=100, rng=3) will continue to have matching samples
:::
Obtain a NumPy array for a given parameter
Let's say we want to get the values for mu as a NumPy array.
End of explanation
"""
len(idata.observed_data.school)
"""
Explanation: Get the dimension lengths
Let’s check how many groups are in our hierarchical model.
End of explanation
"""
idata.observed_data.school
"""
Explanation: Get coordinate values
What are the names of the groups in our hierarchical model? You can access them from the coordinate name school in this case
End of explanation
"""
idata.sel(chain=[0, 2])
"""
Explanation: Get a subset of chains
Let’s keep only chain 0 and 2 here. For the subset to take effect on all relevant InferenceData groups: posterior, sample_stats, log_likelihood, posterior_predictive we will use the {meth}arviz.InferenceData.sel, the method of InferenceData instead of {meth}xarray.Dataset.sel.
End of explanation
"""
idata.sel(draw=slice(100, None))
"""
Explanation: Remove the first n draws (burn-in)
Let’s say we want to remove the first 100 samples, from all the chains and all InferenceData groups with draws.
End of explanation
"""
idata.sel(draw=slice(100, None), groups="posterior")
"""
Explanation: If you inspect the resulting object you will see that the groups posterior, posterior_predictive, prior and sample_stats have 400 draws, compared to idata, which has 500. The group observed_data has not been affected because it does not have the draw dimension. Alternatively, you can specify which group or groups you want to change.
End of explanation
"""
post.mean()
"""
Explanation: Compute posterior mean values along draw and chain dimensions
To compute the mean value of the posterior samples, do the following:
End of explanation
"""
post.mean(dim=['chain', 'draw'])
"""
Explanation: This computes the mean along all dimensions. This is probably what you want for mu and tau, which have two dimensions (chain and draw), but maybe not what you expected for theta, which has one more dimension, school.
You can specify along which dimension you want to compute the mean (or other functions).
End of explanation
"""
post["mlogtau"] = post["log_tau"].rolling({'draw': 50}).mean()
"""
Explanation: Compute and store posterior pushforward quantities
We use "posterior pushforward quantities" to refer to quantities that are not variables in the posterior but deterministic computations using posterior variables.
You can use xarray for these pushforward operations and store them as a new variable in the posterior group. You'll then be able to plot them with ArviZ functions, calculate stats and diagnostics on them (like the {func}~arviz.mcse) or save and share the inferencedata object with the pushforward quantities included.
Compute the rolling mean of $\log(\tau)$ with {meth}xarray.DataArray.rolling, storing the result in the posterior
End of explanation
"""
post['theta_school_diff'] = post.theta - post.theta.rename(school="school_bis")
"""
Explanation: Using xarray for pushforward calculations has all the advantages of working with xarray. It also inherits the disadvantages of working with xarray, but we believe those to be outweighed by the advantages, and we have already shown how to extract the data as NumPy arrays. Working with InferenceData is working mainly with xarray objects and this is what is shown in this guide.
Some examples of these advantages are specifying operations with named dimensions instead of positional ones (as seen in some previous sections),
automatic alignment and broadcasting of arrays (as we'll see now),
or integration with Dask (as shown in the {ref}dask_for_arviz guide).
In this cell you will compute pairwise differences between schools on their mean effects (variable theta).
To do so, subtract the variable theta from a copy of itself in which the school dimension has been renamed.
Xarray then aligns and broadcasts the two variables because they have different dimensions, and
the result is a 4d variable with all the pointwise differences.
Eventually, store the result in the theta_school_diff variable:
End of explanation
"""
post
"""
Explanation: :::{note}
:class: dropdown
This same operation using NumPy would require manual alignment of the two arrays to make sure they broadcast correctly. The code would be something like:
python
theta_school_diff = theta[:, :, :, None] - theta[:, :, None, :]
:::
The theta_school_diff variable in the posterior has kept the named dimensions and coordinates:
End of explanation
"""
post['theta_school_diff'].sel(school="Choate", school_bis="Deerfield")
"""
Explanation: Advanced subsetting
To select the value corresponding to the difference between the Choate and Deerfield schools do:
End of explanation
"""
school_idx = xr.DataArray(["Choate", "Hotchkiss", "Mt. Hermon"], dims=["pairwise_school_diff"])
school_bis_idx = xr.DataArray(["Deerfield", "Choate", "Lawrenceville"], dims=["pairwise_school_diff"])
post['theta_school_diff'].sel(school=school_idx, school_bis=school_bis_idx)
"""
Explanation: For more advanced subsetting (the equivalent to what is sometimes called "fancy indexing" in NumPy) you need to provide the indices as {class}~xarray.DataArray objects:
End of explanation
"""
post['theta_school_diff'].sel(
school=["Choate", "Hotchkiss", "Mt. Hermon"],
school_bis=["Deerfield", "Choate", "Lawrenceville"]
)
"""
Explanation: Using lists or NumPy arrays instead of DataArrays does column/row-based (outer) indexing. As you can see, the result has 9 values of theta_school_diff instead of the 3 pairwise differences we selected in the previous cell:
End of explanation
"""
idata_rerun = idata.sel(chain=[0, 1]).copy().assign_coords(coords={"chain":[4,5]},groups="posterior_groups")
"""
Explanation: Add new chains using concat
After checking the {func}~arviz.mcse and realizing you need more samples, you rerun the model with two chains
and obtain an idata_rerun object.
End of explanation
"""
idata_complete = az.concat(idata, idata_rerun, dim="chain")
idata_complete.posterior.dims["chain"]
"""
Explanation: You can combine the two into a single InferenceData object using {func}arviz.concat:
End of explanation
"""
rng = np.random.default_rng(3)
idata.add_groups(
{"predictions": {"obs": rng.normal(size=(4, 500, 2))}},
dims={"obs": ["new_school"]},
coords={"new_school": ["Essex College", "Moordale"]}
)
idata
"""
Explanation: Add groups to InferenceData objects
You can also add new groups to InferenceData objects with the {meth}~arviz.InferenceData.extend (if the new groups are already in an InferenceData object) or with {meth}~arviz.InferenceData.add_groups (if the new groups are dictionaries or xarray.Dataset objects).
End of explanation
"""
|
relopezbriega/mi-python-blog | content/notebooks/DistStatsPy.ipynb | gpl-2.0 | # <!-- collapse=True -->
# importando modulos necesarios
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats
import seaborn as sns
np.random.seed(2016) # replicar random
# parametros esteticos de seaborn
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
# Graficando histograma
mu, sigma = 0, 0.2 # media y desvio estandar
datos = np.random.normal(mu, sigma, 1000) #creando muestra de datos
# histograma de distribución normal.
cuenta, cajas, ignorar = plt.hist(datos, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma')
plt.show()
"""
Explanation: Probability distributions with Python
This notebook was originally created as a blog post by Raúl E. López Briega at Matemáticas, análisis de datos y python. The content is under the BSD license.
<img alt="Statistical distributions" title="Statistical distributions" src="https://relopezbriega.github.io/images/distribution.png" height=650px width=600px>
Introduction
Random variables have come to play an important role in nearly every field of study: in physics, chemistry and engineering, and especially in the biological and social sciences. These random variables are measured and analyzed in terms of their statistical and probabilistic properties, of which an underlying characteristic is their distribution function. Although the number of potential distributions can be very large, in practice a relatively small number are used, either because they have mathematical properties that make them easy to work with, because they resemble a portion of reality reasonably well, or for both reasons combined.
Why is it important to know the distributions?
Many results in the sciences are based on conclusions drawn about a general population from the study of a sample of that population. This process is known as statistical inference, and this kind of inference is frequently based on assumptions about how the data are distributed, or requires transforming the data so that they better fit one of the well-studied, known distributions.
Theoretical probability distributions are useful in statistical inference because their properties and characteristics are well known. If the actual distribution of a given dataset is reasonably close to a theoretical probability distribution, many calculations on the real data can be carried out using assumptions drawn from the theoretical distribution.
Plotting distributions
Histograms
One of the best ways to describe a variable is to report the values that appear in the dataset and how many times each value appears. The most common representation of a distribution is a histogram, a graph that shows the frequency of each value.
In Python we can easily plot a histogram with the help of matplotlib's hist function; we simply pass it the data and the number of bins to split them into. For example, we could plot the histogram of a normal distribution as follows.
End of explanation
"""
# Graficando FMP
n, p = 30, 0.4 # parametros de forma de la distribución binomial
n_1, p_1 = 20, 0.3 # parametros de forma de la distribución binomial
x = np.arange(stats.binom.ppf(0.01, n, p),
stats.binom.ppf(0.99, n, p))
x_1 = np.arange(stats.binom.ppf(0.01, n_1, p_1),
stats.binom.ppf(0.99, n_1, p_1))
fmp = stats.binom.pmf(x, n, p) # Función de Masa de Probabilidad
fmp_1 = stats.binom.pmf(x_1, n_1, p_1) # Función de Masa de Probabilidad
plt.plot(x, fmp, '--')
plt.plot(x_1, fmp_1)
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.vlines(x_1, 0, fmp_1, colors='g', lw=5, alpha=0.5)
plt.title('Función de Masa de Probabilidad')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
"""
Explanation: Probability Mass Function
Another way to represent discrete distributions is through their Probability Mass Function (PMF; fmp in the code cells), which maps each value to its probability rather than to its frequency as we saw before. This function is normalized so that the total probability equals 1. The advantage of using the PMF is that we can compare two distributions without being confused by differences in sample size. We should also keep in mind that the PMF works well when the number of values is small; but as the number of values grows, the probability associated with each value becomes ever smaller and the effect of random noise increases.
Let's look at an example in Python.
End of explanation
"""
# Graficando Función de Distribución Acumulada con Python
x_1 = np.linspace(stats.norm(10, 1.2).ppf(0.01),
stats.norm(10, 1.2).ppf(0.99), 100)
fda_binom = stats.binom.cdf(x, n, p) # Función de Distribución Acumulada
fda_normal = stats.norm(10, 1.2).cdf(x_1) # Función de Distribución Acumulada
plt.plot(x, fda_binom, '--', label='FDA binomial')
plt.plot(x_1, fda_normal, label='FDA nomal')
plt.title('Función de Distribución Acumulada')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.legend(loc=4)
plt.show()
"""
Explanation: Cumulative Distribution Function
If we want to avoid the problems the PMF has when the number of values is very large, we can use the Cumulative Distribution Function (CDF) to represent our distributions, both discrete and continuous. This function maps each value to its corresponding percentile; that is, it describes the probability that a random variable X, subject to a certain probability distribution law, takes a value less than or equal to x.
End of explanation
"""
# Graficando Función de Densidad de Probibilidad con Python
FDP_normal = stats.norm(10, 1.2).pdf(x_1) # FDP
plt.plot(x_1, FDP_normal, label='FDP nomal')
plt.title('Función de Densidad de Probabilidad')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
"""
Explanation: Probability Density Function
Finally, the equivalent of the PMF for continuous distributions is the Probability Density Function (PDF). This function is the derivative of the Cumulative Distribution Function.
For example, for the normal distribution we plotted earlier, its PDF is the following: the typical bell shape that characterizes this distribution.
End of explanation
"""
# Graficando Poisson
mu = 3.6 # parametro de forma
poisson = stats.poisson(mu) # Distribución
x = np.arange(poisson.ppf(0.01),
poisson.ppf(0.99))
fmp = poisson.pmf(x) # Función de Masa de Probabilidad
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Poisson')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = poisson.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Poisson')
plt.show()
"""
Explanation: Distributions
Now that we know how to represent distributions, let's analyze each of them in more detail to understand their shape, their main applications and their properties. Let's begin with the discrete distributions.
Discrete distributions
Discrete distributions are those in which the variable can take only certain specific values. The main members of this group are the following:
Poisson distribution
The Poisson distribution is given by the formula:
$$p(r; \mu) = \frac{\mu^r e^{-\mu}}{r!}$$
where $r$ is an integer ($r \ge 0$) and $\mu$ is a positive real number. The Poisson distribution describes the probability of finding exactly $r$ events in a span of time if the events occur independently at a constant rate $\mu$. It is one of the most widely used distributions in statistics, with many applications, such as describing the number of defects in a batch of materials or the number of arrivals per hour at a service center.
In Python we can generate it easily with the help of scipy.stats, the package we will use to represent all the remaining distributions throughout this article.
End of explanation
"""
# Graficando Binomial
N, p = 30, 0.4 # parametros de forma
binomial = stats.binom(N, p) # Distribución
x = np.arange(binomial.ppf(0.01),
binomial.ppf(0.99))
fmp = binomial.pmf(x) # Función de Masa de Probabilidad
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Binomial')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = binomial.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Binomial')
plt.show()
"""
Explanation: Binomial distribution
The binomial distribution is given by the formula:
$$p(r; N, p) = \left(\begin{array}{c} N \\ r \end{array}\right) p^r(1 - p)^{N - r}
$$
where $r$ (with $0 \le r \le N$) and the parameter $N$ ($N > 0$) are integers, and the parameter $p$ ($0 \le p \le 1$) is a real number. The binomial distribution describes the probability of exactly $r$ successes in $N$ trials when the probability of success in a single trial is $p$.
End of explanation
"""
# Graficando Geométrica
p = 0.3 # parametro de forma
geometrica = stats.geom(p) # Distribución
x = np.arange(geometrica.ppf(0.01),
geometrica.ppf(0.99))
fmp = geometrica.pmf(x) # Función de Masa de Probabilidad
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Geométrica')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = geometrica.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Geométrica')
plt.show()
"""
Explanation: Geometric distribution
The geometric distribution is given by the formula:
$$p(r; p) = p(1- p)^{r-1}
$$
where $r \ge 1$ and the parameter $p$ ($0 \le p \le 1$) is a real number. The geometric distribution expresses the probability of having to wait exactly $r$ trials to obtain the first success when the probability of success in a single trial is $p$. For example, in a hiring process it could describe the number of interviews we would have to conduct before finding the first acceptable candidate.
End of explanation
"""
# Graficando Hipergeométrica
M, n, N = 30, 10, 12 # parametros de forma
hipergeometrica = stats.hypergeom(M, n, N) # Distribución
x = np.arange(0, n+1)
fmp = hipergeometrica.pmf(x) # Función de Masa de Probabilidad
plt.plot(x, fmp, '--')
plt.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
plt.title('Distribución Hipergeométrica')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = hipergeometrica.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Hipergeométrica')
plt.show()
"""
Explanation: Hypergeometric distribution
The hypergeometric distribution is given by the formula:
$$p(r; n, N, M) = \frac{\left(\begin{array}{c} M \\ r \end{array}\right)\left(\begin{array}{c} N - M\\ n -r \end{array}\right)}{\left(\begin{array}{c} N \\ n \end{array}\right)}
$$
where the value of $r$ is bounded between $\max(0, n - N + M)$ and $\min(n, M)$ inclusive, and the parameters $n$ ($1 \le n \le N$), $N$ ($N \ge 1$) and $M$ ($M \ge 1$) are all integers. The hypergeometric distribution describes experiments in which elements are selected at random without replacement (the same element is never selected more than once). More precisely, suppose we have $N$ elements of which $M$ have a certain attribute (and $N - M$ do not). If we choose $n$ elements at random without replacement, $p(r)$ is the probability that exactly $r$ of the selected elements come from the group with the attribute.
End of explanation
"""
# Graficando Bernoulli
p = 0.5 # parametro de forma
bernoulli = stats.bernoulli(p)
x = np.arange(-1, 3)
fmp = bernoulli.pmf(x) # Función de Masa de Probabilidad
fig, ax = plt.subplots()
ax.plot(x, fmp, 'bo')
ax.vlines(x, 0, fmp, colors='b', lw=5, alpha=0.5)
ax.set_yticks([0., 0.2, 0.4, 0.6])
plt.title('Distribución Bernoulli')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = bernoulli.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Bernoulli')
plt.show()
"""
Explanation: Bernoulli distribution
The Bernoulli distribution is given by the formula:
$$p(r;p) = \left\{
\begin{array}{ll}
1 - p = q & \mbox{if } r = 0 \ \mbox{(failure)}\\
p & \mbox{if } r = 1 \ \mbox{(success)}
\end{array}
\right.$$
where the parameter $p$ is the probability of success in a single trial; the probability of failure is therefore $1 - p$ (often written as $q$). Both $p$ and $q$ are restricted to the interval from zero to one. The Bernoulli distribution describes a probabilistic experiment in which a trial has two possible outcomes, success or failure. Several probability density functions of other distributions based on a series of independent trials can be derived from it.
End of explanation
"""
# Graficando Normal
mu, sigma = 0, 0.2 # media y desvio estandar
normal = stats.norm(mu, sigma)
x = np.linspace(normal.ppf(0.01),
normal.ppf(0.99), 100)
fp = normal.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Normal')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = normal.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Normal')
plt.show()
"""
Explanation: Continuous distributions
Now that we know the main discrete distributions, we can move on to the continuous distributions; in these, unlike what we saw before, the variable can take any value within a specific interval. Within this group we find the following:
Normal distribution
The normal distribution, also called the Gaussian distribution, is applicable to a wide range of problems, which makes it the most widely used distribution in statistics; it is given by the formula:
$$p(x;\mu, \sigma^2) = \frac{1}{\sigma \sqrt{2 \pi}} e^{\frac{-1}{2}\left(\frac{x - \mu}{\sigma} \right)^2}
$$
where $\mu$ is the location parameter, equal to the arithmetic mean, $\sigma$ is the standard deviation and $\sigma^2$ the variance. Some examples of variables associated with natural phenomena that follow the normal distribution are:
* morphological characteristics of individuals, such as height;
* sociological characteristics, such as consumption of a certain product within a group of individuals;
* psychological characteristics, such as IQ;
* noise levels in telecommunications;
* errors made when measuring certain quantities;
* etc.
End of explanation
"""
# Graficando Uniforme
uniforme = stats.uniform()
x = np.linspace(uniforme.ppf(0.01),
uniforme.ppf(0.99), 100)
fp = uniforme.pdf(x) # Función de Probabilidad
fig, ax = plt.subplots()
ax.plot(x, fp, '--')
ax.vlines(x, 0, fp, colors='b', lw=5, alpha=0.5)
ax.set_yticks([0., 0.2, 0.4, 0.6, 0.8, 1., 1.2])
plt.title('Distribución Uniforme')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = uniforme.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Uniforme')
plt.show()
"""
Explanation: Uniform distribution
The uniform distribution is a very simple case, expressed by the function:
$$f(x; a, b) = \frac{1}{b -a} \ \mbox{for} \ a \le x \le b
$$
Its distribution function is then given by:
$$
p(x;a, b) = \left\{
\begin{array}{ll}
0 & \mbox{if } x \le a \\
\frac{x-a}{b-a} & \mbox{if } a \le x \le b \\
1 & \mbox{if } b \le x
\end{array}
\right.
$$
All values have practically the same probability.
End of explanation
"""
# Graficando Log-Normal
sigma = 0.6 # parametro
lognormal = stats.lognorm(sigma)
x = np.linspace(lognormal.ppf(0.01),
lognormal.ppf(0.99), 100)
fp = lognormal.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Log-normal')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = lognormal.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Log-normal')
plt.show()
"""
Explanation: Log-normal distribution
The log-normal distribution is given by the formula:
$$p(x;\mu, \sigma) = \frac{1}{ x \sigma \sqrt{2 \pi}} e^{\frac{-1}{2}\left(\frac{\ln x - \mu}{\sigma} \right)^2}
$$
where the variable $x > 0$ and the parameters $\mu$ and $\sigma > 0$ are all real numbers. The log-normal distribution applies to random variables that are bounded below by zero but have a few large values. It is a positively skewed distribution. Some of the contexts in which we often find it are:
* the weight of adults;
* the concentration of minerals in deposits;
* duration of sick leave;
* distribution of wealth;
* machine downtime.
End of explanation
"""
# Graficando Exponencial
exponencial = stats.expon()
x = np.linspace(exponencial.ppf(0.01),
exponencial.ppf(0.99), 100)
fp = exponencial.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Exponencial')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = exponencial.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Exponencial')
plt.show()
"""
Explanation: Exponential distribution
The exponential distribution is given by the formula:
$$p(x;\alpha) = \frac{1}{ \alpha} e^{\frac{-x}{\alpha}}
$$
where both the variable $x$ and the parameter $\alpha$ are positive real numbers. The exponential distribution has many applications, such as the decay of a radioactive atom or the time between events in a Poisson process, where events occur at a constant rate.
End of explanation
"""
# Graficando Gamma
a = 2.6 # parametro de forma.
gamma = stats.gamma(a)
x = np.linspace(gamma.ppf(0.01),
gamma.ppf(0.99), 100)
fp = gamma.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Gamma')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = gamma.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Gamma')
plt.show()
"""
Explanation: Gamma distribution
The gamma distribution is given by the formula:
$$p(x;a, b) = \frac{a(a x)^{b -1} e^{-ax}}{\Gamma(b)}
$$
where the parameters $a$ and $b$ and the variable $x$ are positive real numbers and $\Gamma(b)$ is the gamma function. The gamma distribution starts at the origin and has a fairly flexible shape. Other distributions are special cases of it.
End of explanation
"""
# Graficando Beta
a, b = 2.3, 0.6 # parametros de forma.
beta = stats.beta(a, b)
x = np.linspace(beta.ppf(0.01),
beta.ppf(0.99), 100)
fp = beta.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Beta')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = beta.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Beta')
plt.show()
"""
Explanation: Beta distribution
The beta distribution is given by the formula:
$$p(x;p, q) = \frac{1}{B(p, q)} x^{p-1}(1 - x)^{q-1}
$$
where the parameters $p$ and $q$ are positive real numbers, the variable $x$ satisfies $0 \le x \le 1$, and $B(p, q)$ is the beta function. Applications of the beta distribution include modeling random variables that have a finite range from $a$ to $b$. One example is the distribution of activity times in project networks. The beta distribution is also frequently used as a prior probability for [binomial](https://es.wikipedia.org/wiki/Distribuci%C3%B3n_binomial) proportions in Bayesian analysis.
End of explanation
"""
# Graficando Chi cuadrado
df = 34 # parametro de forma.
chi2 = stats.chi2(df)
x = np.linspace(chi2.ppf(0.01),
chi2.ppf(0.99), 100)
fp = chi2.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución Chi cuadrado')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = chi2.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma Chi cuadrado')
plt.show()
"""
Explanation: Chi-squared distribution
The chi-squared distribution is given by the function:
$$p(x; n) = \frac{\left(\frac{x}{2}\right)^{\frac{n}{2}-1} e^{\frac{-x}{2}}}{2\Gamma \left(\frac{n}{2}\right)}
$$
where the variable $x \ge 0$ and the parameter $n$, the number of degrees of freedom, is a positive integer. An important application of the chi-squared distribution is that when a dataset is represented by a theoretical model, this distribution can be used to test how well the values predicted by the model fit the actually observed data.
End of explanation
"""
# Graficando t de Student
df = 50 # parametro de forma.
t = stats.t(df)
x = np.linspace(t.ppf(0.01),
t.ppf(0.99), 100)
fp = t.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución t de Student')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = t.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma t de Student')
plt.show()
"""
Explanation: Student's t distribution
The Student's t distribution is given by the function:
$$p(t; n) = \frac{\Gamma(\frac{n+1}{2})}{\sqrt{n\pi}\Gamma(\frac{n}{2})} \left( 1 + \frac{t^2}{n} \right)^{-\frac{n+1}{2}}
$$
where the variable $t$ is a real number and the parameter $n$ is a positive integer. The Student's t distribution is used to test whether the difference between the means of two samples of observations is statistically significant. For example, the heights of a random sample of basketball players could be compared with the heights of a random sample of soccer players; this distribution could help us determine whether one group is significantly taller than the other.
End of explanation
"""
# Graficando Pareto
k = 2.3 # parametro de forma.
pareto = stats.pareto(k)
x = np.linspace(pareto.ppf(0.01),
pareto.ppf(0.99), 100)
fp = pareto.pdf(x) # Función de Probabilidad
plt.plot(x, fp)
plt.title('Distribución de Pareto')
plt.ylabel('probabilidad')
plt.xlabel('valores')
plt.show()
# histograma
aleatorios = pareto.rvs(1000) # genera aleatorios
cuenta, cajas, ignorar = plt.hist(aleatorios, 20)
plt.ylabel('frequencia')
plt.xlabel('valores')
plt.title('Histograma de Pareto')
plt.show()
"""
Explanation: Pareto distribution
The Pareto distribution is given by the function:
$$p(x; \alpha, k) = \frac{\alpha k^{\alpha}}{x^{\alpha + 1}}
$$
where the variable $x \ge k$ and the parameter $\alpha > 0$ are real numbers. This distribution was introduced by its inventor, Vilfredo Pareto, to explain the distribution of wages in society. The Pareto distribution is often described as the basis of the 80/20 rule. For example, 80% of customer complaints about the operation of their vehicles typically arise from 20% of the components.
End of explanation
"""
|
astro4dev/OAD-Data-Science-Toolkit | Teaching Materials/Programming/Python/PythonISYA2018/02.BasicPythonII/01_control_structures.ipynb | gpl-3.0 | a = 1
b = 4
if a < b:
print('a is smaller than b')
elif a > b:
print('a is larger than b')
else:
print('a is equal to b')
"""
Explanation: If (elif, else)
This is probably the most used conditional structure in programming.
Here is the syntax in Python
End of explanation
"""
5 in [1, 2, 4]
3 in [1, 2, 3]
not (2 == 5)
"""
Explanation: Important: the lines of code after the if, elif, else statements have to be indented by 4 spaces to indicate that they belong to the same instruction block.
The table below includes some of the operators that can be used inside the if conditions. They all return a boolean variable (True or False).
| Operator | Meaning
| -------- | ---------
| == | Equal to
| != | Not equal
| < | Smaller than
| > | Larger than
| <= | Smaller-equal than
| >= | Larger-equal than
| not | Logical negation
| in | Checks whether an element is in a list
| and | Checks whether two conditions are True
| or | Checks whether at least one condition is True
| is | Checks whether an element is identical to (the same object as) another.
Let's see some examples:
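A few more illustrative combinations of these operators (a quick sketch, not part of the original notebook):

```python
x = 7
# 'and' is True only if both conditions are True
print(x > 0 and x < 10)      # True
# 'or' is True if at least one condition is True
print(x < 0 or x % 2 == 1)   # True
# 'is' checks object identity, which is the recommended way to test for None
a = None
print(a is None)             # True
```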
End of explanation
"""
a = []
while len(a) < 10:
a.append(0) # add elements to the list until I have 10.
print(a)
# This is an example of random walk.
import random
position = 0
n_steps = 0
while abs(position) < 10: # I will walk until I am a distance of 10
step = 2.0*(random.random()-0.5) # this is a random variable between -1 and 1
position += step
n_steps += 1
print(position, n_steps) # this is the final position and the number of steps
"""
Explanation: while
This control structure also checks for a condition to be True in order to execute a series of instructions. The 4-space rule for indentation also applies.
End of explanation
"""
|
metabolite-atlas/metatlas | notebooks/reference/Workflow_Notebook_VS_Auto_RT_Predict_V2.ipynb | bsd-3-clause | from IPython.core.display import Markdown, display, clear_output, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib notebook
%matplotlib inline
%env HDF5_USE_FILE_LOCKING=FALSE
import sys, os
#### add a path to your private code if not using production code ####
#print ('point path to metatlas repo')
sys.path.insert(0,"/global/homes/v/vrsingan/repos/metatlas") #where your private code is
######################################################################
from metatlas.plots import dill2plots as dp
from metatlas.io import metatlas_get_data_helper_fun as ma_data
from metatlas.plots import chromatograms_mp_plots as cp
from metatlas.plots import chromplotplus as cpp
from metatlas.datastructures import metatlas_objects as metob
import time
import numpy as np
import multiprocessing as mp
import pandas as pd
import operator
import matplotlib.pyplot as plt
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', 100)
def printmd(string):
display(Markdown(string))
"""
Explanation: 1. Import Python Packages
To install the kernel used by NERSC-metatlas users, copy the following text to $HOME/.ipython/kernels/mass_spec_cori/kernel.json
{
    "argv": [
        "/global/common/software/m2650/python-cori/bin/python",
        "-m",
        "IPython.kernel",
        "-f",
        "{connection_file}"
    ],
    "env": {
        "PATH": "/global/common/software/m2650/python-cori/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    },
    "display_name": "mass_spec_cori",
    "language": "python"
}
End of explanation
"""
project_directory='/global/homes/FIRST-INITIAL-OF-USERNAME/USERNAME/PROJECTDIRECTORY/' # <- edit this line, do not copy the path directly from NERSC (ex. the u1, or u2 directories)
output_subfolder='HILIC_POS_20190830/' # <- edit this as 'chromatography_polarity_yyyymmdd/'
output_dir = os.path.join(project_directory,output_subfolder)
output_data_qc = os.path.join(output_dir,'data_QC')
if not os.path.exists(project_directory):
os.makedirs(project_directory)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(output_data_qc):
os.makedirs(output_data_qc)
"""
Explanation: 2. Set atlas, project and output directories from your nersc home directory
Create a project folder name for this analysis by replacing the PROJECTDIRECTORY string text in red below. Make sure to update the rest of the directory to point to your home directory. The pwd block will print out the directory where this Jupyter notebook is stored.
Create a subdirectory name for the output; on each run-through you may want to create a new output folder.
When you run the block the folders will be created in your home directory. If the directory already exists, the block will just set the path for use with future code blocks.
End of explanation
"""
groups = dp.select_groups_for_analysis(name = '%20201106%505892%HILIC%KLv1%',
most_recent = True,
remove_empty = True,
include_list = ['QC'], exclude_list = ['NEG']) #['QC','Blank']
groups = sorted(groups, key=operator.attrgetter('name'))
file_df = pd.DataFrame(columns=['file','time','group'])
for g in groups:
for f in g.items:
if hasattr(f, 'acquisition_time'):
file_df = file_df.append({'file':f, 'time':f.acquisition_time,'group':g}, ignore_index=True)
else:
file_df = file_df.append({'file':f, 'time':0,'group':g}, ignore_index=True)
file_df = file_df.sort_values(by=['time'])
for file_data in file_df.iterrows():
print(file_data[1].file.name)
"""
Explanation: 3. Select groups and get QC files
End of explanation
"""
# DO NOT EDIT THIS BLOCK
pos_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_POS',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_POS',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS']
neg_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_NEG',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG']
#Atlas File Name
QC_template_filename = pos_templates[1]
atlases = metob.retrieve('Atlas',name=QC_template_filename,
username='vrsingan')
names = []
for i,a in enumerate(atlases):
print(i,a.name,pd.to_datetime(a.last_modified,unit='s'),len(a.compound_identifications))
# #Alternatively use this block to create QC atlas from spreadsheet
# import datetime
#dp = reload(dp)
# QC_template_filename = " " #<- Give the template filename to be used for storing in Database
#myAtlas = dp.make_atlas_from_spreadsheet('/global/project/projectdirs/metatlas/projects/1_TemplateAtlases/TemplateAtlas_HILICz150mm_Annotation20190824_QCv3_Unlabeled_Positive.csv',
# QC_template_filename,
# filetype='csv',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
#atlases = dp.get_metatlas_atlas(name=QC_template_filename,do_print = True,most_recent=True)
myAtlas = atlases[-1]
atlas_df = ma_data.make_atlas_df(myAtlas)
atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
print(myAtlas.name)
print(myAtlas.username)
"""
Explanation: 4. Get template QC atlas from database
Available templates in Database:
Index Atlas_name(POS)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_POS\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_POS\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_POS\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_POS\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS
Index Atlas_name(NEG)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_NEG\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG
End of explanation
"""
# rt_allowance = 1.5
# atlas_df['rt_min'] = atlas_df['rt_peak'].apply(lambda rt: rt-rt_allowance)
# atlas_df['rt_max'] = atlas_df['rt_peak'].apply(lambda rt: rt+rt_allowance)
# for compound in range(len(myAtlas.compound_identifications)):
# rt_peak = myAtlas.compound_identifications[compound].rt_references[0].rt_peak
# myAtlas.compound_identifications[compound].rt_references[0].rt_min = rt_peak - rt_allowance
# myAtlas.compound_identifications[compound].rt_references[0].rt_max = rt_peak + rt_allowance
"""
Explanation: 4b. Uncomment the block below to adjust RT window
End of explanation
"""
all_files = []
for file_data in file_df.iterrows():
all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
pool = mp.Pool(processes=min(4, len(all_files)))
t0 = time.time()
metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
pool.close()
pool.terminate()
# If your code crashes here, make sure to terminate any processes left open.
print(time.time() - t0)
"""
Explanation: 5. Create metatlas dataset from QC files and QC atlas
End of explanation
"""
# dp = reload(dp)
# num_data_points_passing = 3
# peak_height_passing = 1e4
# atlas_df_passing = dp.filter_atlas(atlas_df=atlas_df, input_dataset=metatlas_dataset, num_data_points_passing = num_data_points_passing, peak_height_passing = peak_height_passing)
# print("# Compounds in Atlas: "+str(len(atlas_df)))
# print("# Compounds passing filter: "+str(len(atlas_df_passing)))
# atlas_passing = myAtlas.name+'_filteredby-datapnts'+str(num_data_points_passing)+'-pkht'+str(peak_height_passing)
# myAtlas_passing = dp.make_atlas_from_spreadsheet(atlas_df_passing,
# atlas_passing,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
# atlases = dp.get_metatlas_atlas(name=atlas_passing,do_print = True, most_recent=True)
# myAtlas = atlases[-1]
# atlas_df = ma_data.make_atlas_df(myAtlas)
# atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# print(myAtlas.name)
# print(myAtlas.username)
# metob.to_dataframe([myAtlas])#
# all_files = []
# for file_data in file_df.iterrows():
# all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
# pool = mp.Pool(processes=min(4, len(all_files)))
# t0 = time.time()
# metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
# pool.close()
# pool.terminate()
# # If your code crashes here, make sure to terminate any processes left open.
# print(time.time() - t0)
"""
Explanation: 5b Optional: Filter atlas for compounds with no or low signals
Uncomment the below 3 blocks to filter the atlas.
Please ensure that correct polarity is used for the atlases.
End of explanation
"""
from importlib import reload
dp=reload(dp)
rts_df = dp.make_output_dataframe(input_dataset = metatlas_dataset, fieldname='rt_peak', use_labels=True, output_loc = output_data_qc, summarize=True)
rts_df.to_csv(os.path.join(output_data_qc,"QC_Measured_RTs.csv"))
rts_df
"""
Explanation: 6. Summarize RT peak across files and make data frame
End of explanation
"""
import itertools
import math
from __future__ import division
from matplotlib import gridspec
import matplotlib.ticker as mticker
rts_df['atlas RT peak'] = [compound['identification'].rt_references[0].rt_peak for compound in metatlas_dataset[0]]
# number of columns in rts_df that are not values from a specific input file
num_not_files = len(rts_df.columns) - len(metatlas_dataset)
rts_df_plot = rts_df.sort_values(by='standard deviation', ascending=False, na_position='last') \
.drop(['#NaNs'], axis=1) \
.dropna(axis=0, how='all', subset=rts_df.columns[:-num_not_files])
fontsize = 2
pad = 0.1
cols = 8
rows = int(math.ceil((rts_df.shape[0]+1)/8))
fig = plt.figure()
gs = gridspec.GridSpec(rows, cols, figure=fig, wspace=0.2, hspace=0.4)
for i, (index, row) in enumerate(rts_df_plot.iterrows()):
ax = fig.add_subplot(gs[i])
ax.tick_params(direction='in', length=1, pad=pad, width=0.1, labelsize=fontsize)
ax.scatter(range(rts_df_plot.shape[1]-num_not_files),row[:-num_not_files], s=0.2)
ticks_loc = np.arange(0,len(rts_df_plot.columns)-num_not_files , 1.0)
ax.axhline(y=row['atlas RT peak'], color='r', linestyle='-', linewidth=0.2)
ax.set_xlim(-0.5,len(rts_df_plot.columns)-num_not_files+0.5)
ax.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc))
range_columns = list(rts_df_plot.columns[:-num_not_files])+['atlas RT peak']
ax.set_ylim(np.nanmin(row.loc[range_columns])-0.12,
np.nanmax(row.loc[range_columns])+0.12)
[s.set_linewidth(0.1) for s in ax.spines.values()]
# truncate name so it fits above a single subplot
ax.set_title(row.name[:33], pad=pad, fontsize=fontsize)
ax.set_xlabel('Files', labelpad=pad, fontsize=fontsize)
ax.set_ylabel('Actual RTs', labelpad=pad, fontsize=fontsize)
plt.savefig(os.path.join(output_data_qc, 'Compound_Atlas_RTs.pdf'), bbox_inches="tight")
for i,a in enumerate(rts_df.columns):
print(i, a)
selected_column=9
"""
Explanation: 7. Create Compound atlas RTs plot and choose file for prediction
End of explanation
"""
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_absolute_error as mae
actual_rts, pred_rts, polyfit_rts = [],[],[]
current_actual_df = rts_df.loc[:,rts_df.columns[selected_column]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
ransac = RANSACRegressor(random_state=42)
rt_model_linear = ransac.fit(current_pred_df, current_actual_df)
coef_linear = rt_model_linear.estimator_.coef_[0]
intercept_linear = rt_model_linear.estimator_.intercept_
poly_reg = PolynomialFeatures(degree=2)
X_poly = poly_reg.fit_transform(current_pred_df)
rt_model_poly = LinearRegression().fit(X_poly, current_actual_df)
coef_poly = rt_model_poly.coef_
intercept_poly = rt_model_poly.intercept_
for i in range(rts_df.shape[1]-5):
current_actual_df = rts_df.loc[:,rts_df.columns[i]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
"""
Explanation: 8. Create RT adjustment model - Linear & Polynomial Regression
End of explanation
"""
#User can change to use particular qc file
import itertools
import math
from __future__ import division
from matplotlib import gridspec
x = list(itertools.chain(*pred_rts))
y = list(itertools.chain(*actual_rts))
rows = int(math.ceil((rts_df.shape[1]+1)/5))
cols = 5
fig = plt.figure(constrained_layout=False)
gs = gridspec.GridSpec(rows, cols, figure=fig)
plt.rc('font', size=6)
plt.rc('axes', labelsize=6)
plt.rc('xtick', labelsize=3)
plt.rc('ytick', labelsize=3)
for i in range(rts_df.shape[1]-5):
x = list(itertools.chain(*pred_rts[i]))
y = actual_rts[i]
ax = fig.add_subplot(gs[i])
ax.scatter(x, y, s=2)
ax.plot(np.linspace(0, max(x),100), coef_linear*np.linspace(0,max(x),100)+intercept_linear, linewidth=0.5,color='red')
ax.plot(np.linspace(0, max(x),100), (coef_poly[1]*np.linspace(0,max(x),100))+(coef_poly[2]*(np.linspace(0,max(x),100)**2))+intercept_poly, linewidth=0.5,color='green')
ax.set_title("File: "+str(i))
ax.set_xlabel('predicted RTs')
ax.set_ylabel('actual RTs')
fig_legend = "FileIndex FileName"
for i in range(rts_df.shape[1]-5):
fig_legend = fig_legend+"\n"+str(i)+" "+rts_df.columns[i]
fig.tight_layout(pad=0.5)
plt.text(0,-0.03*rts_df.shape[1], fig_legend, transform=plt.gcf().transFigure)
plt.savefig(os.path.join(output_data_qc, 'Actual_vs_Predicted_RTs.pdf'), bbox_inches="tight")
"""
Explanation: 8. Plot actual vs. predicted RT values and fit a median coeff+intercept line
End of explanation
"""
qc_df = rts_df[[rts_df.columns[selected_column]]]
qc_df = qc_df.copy()
print("Linear Parameters :", coef_linear, intercept_linear)
print("Polynomial Parameters :", coef_poly,intercept_poly)
qc_df.columns = ['RT Measured']
atlas_df.index = qc_df.index
qc_df['RT Reference'] = atlas_df['rt_peak']
qc_df['RT Linear Pred'] = qc_df['RT Reference'].apply(lambda rt: coef_linear*rt+intercept_linear)
qc_df['RT Polynomial Pred'] = qc_df['RT Reference'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
qc_df['RT Diff Linear'] = qc_df['RT Measured'] - qc_df['RT Linear Pred']
qc_df['RT Diff Polynomial'] = qc_df['RT Measured'] - qc_df['RT Polynomial Pred']
qc_df.to_csv(os.path.join(output_data_qc, "RT_Predicted_Model_Comparison.csv"))
qc_df
# CHOOSE YOUR MODEL HERE (linear / polynomial).
#model = 'linear'
model = 'polynomial'
"""
Explanation: 9. Choose your model
End of explanation
"""
# Save model
with open(os.path.join(output_data_qc,'rt_model.txt'), 'w') as f:
if model == 'linear':
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_linear,
intercept_linear,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_linear.set_params()))
else:
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_poly,
intercept_poly,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_poly.set_params()))
"""
Explanation: 10. Save RT model (optional)
End of explanation
"""
pos_atlas_indices = [0,1,2,3,4]
neg_atlas_indices = [0,1,2,3,4]
free_text = '' # this will be appended to the end of the csv filename exported
save_to_db = False
for ix in pos_atlas_indices:
atlases = metob.retrieve('Atlas',name=pos_templates[ix], username='vrsingan')
prd_atlas_name = pos_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'positive',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
for ix in neg_atlas_indices:
atlases = metob.retrieve('Atlas',name=neg_templates[ix], username='vrsingan')
prd_atlas_name = neg_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'negative',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
"""
Explanation: 11. Auto RT adjust Template atlases
Available templates in Database:
Index Atlas_name(POS)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_POS\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_POS\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_POS\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_POS\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS
Index Atlas_name(NEG)\
0 HILICz150_ANT20190824_TPL_EMA_Unlab_NEG\
1 HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG\
2 HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG\
3 HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG\
4 HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG
End of explanation
"""
## Optional for custom template predictions
# atlas_name = '' #atlas name
# save_to_db = False
# atlases = metob.retrieve('Atlas',name=atlas_name, username='*')
# myAtlas = atlases[-1]
# PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
# PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# if model == 'linear':
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
# else:
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
# PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
# PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
# PRD_atlas_df.to_csv(os.path.join(output_data_qc, atlas_name.replace('TPL', 'PRD') + '.csv'), index=False)
# if save_to_db:
# dp.make_atlas_from_spreadsheet(PRD_atlas_df,
# PRD_atlas_name,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive', # NOTE - Please make sure you are choosing the correct polarity
# store=True,
# mz_tolerance = 12)
"""
Explanation: OPTIONAL BLOCK FOR RT PREDICTION OF CUSTOM ATLAS
End of explanation
"""
|
ioggstream/python-course | connexion-101/notebooks/04-01-connexion-writing-operationid.ipynb | agpl-3.0 | # connexion provides a predefined problem object
from connexion import problem
# Exercise: write a get_status() returning a successful response to problem.
help(problem)
def get_status():
    return problem(
        status=200,
        title="OK",
        detail="The application is working properly"
    )
"""
Explanation: Implementing methods
Once we have a mock server, we could already provide an interface to
external services mocking our replies.
This is very helpful because it lets
clients test our API and give quick feedback on data types and possible responses.
Now that we have the contract, we should start with the implementation!
OperationId
OperationId is the OAS3 field that maps the resource target to the Python function to call.
paths:
  /status:
    get:
      ...
      operationId: api.get_status
      ...
The method signature should reflect the function's one.
OAS allows passing parameters to the resource target via:
- query parameters
- http headers
- request body
Implement get_status
At first we'll just implement a get_status function in api.py that:
takes no input parameters;
returns a problem+json
End of explanation
"""
!curl http://localhost:5000/datetime/v1/status -kv
"""
Explanation: Now run the spec in a terminal using
cd /code/notebooks/oas3/
connexion run /code/notebooks/oas3/ex-03-02-path.yaml
play a bit with the Swagger UI
and try making a request!
End of explanation
"""
from random import randint
from connexion import problem
def get_status():
    headers = {"Cache-Control": "no-store"}
    p = randint(1, 5)
    if p == 5:
        return problem(
            status=503,
            title="Service Temporarily Unavailable",
            detail="Retry after the number of seconds specified in the Retry-After header.",
            headers=dict(**headers, **{'Retry-After': str(p)})
        )
    return problem(
        status=200,
        title="OK",
        detail="So far so good.",
        headers=headers
    )
"""
Explanation: Returning errors.
Our api.get_status implementation always returns 200 OK, but in the real world APIs could
return different kinds of errors.
An interoperable API should:
fail fast, preventing application errors from resulting in stuck connections;
implement a clean error semantic.
In our Service Management framework we expect that:
if the Service is unavailable, we must return 503 Service Unavailable http status
we must return the Retry-After header specifying after how many seconds
to retry.
TODO: ADD CIRCUIT_BREAKER IMAGE
To implement this we must:
add the returned headers in the OAS3 interface;
pass the headers to the flask Response
Exercise
Modify the OAS3 spec in ex-04-01-headers.yaml and:
add a 503 response to the /status path;
when a 503 is returned, the retry-after header is returned.
Hint: you can define a header in components/headers like that:
```
components:
  headers:
    Retry-After:
      description: |-
        Retry contacting the endpoint at least after seconds.
        See https://tools.ietf.org/html/rfc7231#section-7.1.3
      schema:
        format: int32
        type: integer
```
Or just $ref the Retry-After defined in https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/headers/Retry-After
Modify api.py:get_status such that:
returns a 503 on 20% of the requests;
when a 503 is returned, the retry-after header is returned;
on each response, return the Cache-Control: no-store header to avoid
caching on service status.
Bonus track: Google post on HTTP caching
End of explanation
"""
|
tbphu/Fachkurs_Bachelor_WS1617 | general/ode/Introduction_ODEs.ipynb | mit | import numpy as np
# define ODE (y,t,p)
"""
Explanation: Introduction
What is an ODE
Differential equations can be used to describe the time-dependent behaviour of a variable.
$$\frac{\text{d}\vec{x}}{\text{d}t} = f(\vec{x}, t)$$
The variable stands for a concentration or a number of individuals in a population.
In general, a first order ODE has two parts, the increasing (birth, production,...) and the decreasing (death, degradation, consumption,...) part:
$$\frac{\text{d}\vec{x}}{\text{d}t} = \sum \text{Rates}_{\text{production}} - \sum \text{Rates}_{\text{loss}}$$
You probably already know ways to solve a differential equation algebraically by 'separation of variables' (Trennung der Variablen) in the homogeneous case or 'variation of parameters' (Variation der Konstanten) in the inhomogeneous case. Here, we want to discuss the use of numerical methods to solve your ODE system.
Numerical integration
In principle, every numerical procedure to solve an ODE is based on the so-called "Euler" method. It's very easy to understand: you just have to read the $\frac{\text{d}\vec{x}}{\text{d}t}$ as a $\frac{\Delta \vec{x}}{\Delta t}$. Then you can multiply both sides of the equation by $\Delta t$ and you have an equation describing the change of your variables during a certain time interval $\Delta t$:
$$ \Delta \vec{x} = f(\vec{x}, t)\cdot \Delta t$$
Of course, the smaller you choose the time interval $\Delta t$, the more accurate your result will be in comparison to the analytical solution.
So it's clear, we choose a tiny one, right? Well, not exactly: the smaller your time interval, the longer the simulation will take. Therefore, we need a compromise, and here the provided software will help us by constantly testing and observing the numerical solution and adapting the "step size" $\Delta t$ automatically.
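The idea above can be sketched in a few lines of Python (an illustrative fixed-step Euler integrator; the function euler is our own helper, not a library routine):

```python
import numpy as np

def euler(f, x0, t_grid, p):
    """Fixed-step Euler scheme: x_{k+1} = x_k + f(x_k, t_k, p) * dt."""
    xs = [np.asarray(x0, dtype=float)]
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        xs.append(xs[-1] + np.asarray(f(xs[-1], t_grid[k], p)) * dt)
    return np.array(xs)

# quick check on exponential decay dx/dt = -a*x, whose exact solution is exp(-a*t)
decay = lambda x, t, p: [-p[0] * x[0]]
t = np.linspace(0, 5, 501)            # step size dt = 0.01
traj = euler(decay, [1.0], t, [1.0])
```

With $\Delta t = 0.01$ the endpoint traj[-1, 0] agrees with $e^{-5}$ to about three decimals; halving $\Delta t$ roughly halves the error, the expected first-order behaviour.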
Lotka-Volterra: A prey-predator model
Model equations:
$$ \frac{\mathrm{d}\,R}{\mathrm{d}\,t} = aR-bRW $$
$$ \frac{\mathrm{d}\,W}{\mathrm{d}\,t} = cWR-dW $$
Variables:
R: Rabbit population
W: Wolf population
Parameters:
a: rabbit's birth rate
b: predation rate
c: wolf's benefit
d: wolf's death rate
Let's start
We write a small function f that receives a list of the current values of our variables x, the current time t and the parameters p. The function has to evaluate the equations of our system, i.e. $\frac{\text{d}\vec{x}}{\text{d}t}$. Afterwards, it returns the values of the equations as another list.
Important
Since this function f is used by the solver, we are not allowed to change the input (arguments) or output (return value) of this function.
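As a sketch (try writing it yourself before peeking), f for the Lotka-Volterra system above could look like this:

```python
def f(x, t, p):
    """Right-hand side of the Lotka-Volterra system.
    x = [R, W] (rabbits, wolves), p = [a, b, c, d]."""
    R, W = x
    a, b, c, d = p
    dR_dt = a * R - b * R * W    # rabbit births minus predation
    dW_dt = c * W * R - d * W    # wolf benefit minus deaths
    return [dR_dt, dW_dt]
```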
End of explanation
"""
# initial values of variables
# define p = [a, b, c, d]
# time grid
"""
Explanation: Before we start the simulation of our model, we have to define our system.
We start with our static information:
1. Initial conditions for our variables
2. Values of the parameters
3. Simulation time and number of time points at which we want to have the values for our variables (the time grid). Use numpy!!
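For example (the numbers here are purely illustrative choices, not values prescribed by the course):

```python
import numpy as np

y0 = [10.0, 5.0]               # initial rabbit and wolf populations
p = [1.0, 0.1, 0.02, 0.5]      # a, b, c, d
t = np.linspace(0, 50, 1001)   # time grid from 0 to 50 with 1001 points
```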
End of explanation
"""
from scipy.integrate import odeint
# solve ODE using odeint (f, y0, t, (p,))
"""
Explanation: Last but not least, we need to import and call our solver. The result will be a matrix with our time courses as columns and their values at the specified time points. Since we have a value for every time point and every species, we can directly plot the results using matplotlib.
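A self-contained sketch of the solver call (the system and numbers below are illustrative stand-ins for your own definitions of f, y0, p and t):

```python
import numpy as np
from scipy.integrate import odeint

def f(x, t, p):
    """Lotka-Volterra right-hand side; x = [R, W], p = [a, b, c, d]."""
    R, W = x
    a, b, c, d = p
    return [a * R - b * R * W, c * W * R - d * W]

y0 = [10.0, 5.0]
p = [1.0, 0.1, 0.02, 0.5]
t = np.linspace(0, 50, 1001)

y = odeint(f, y0, t, (p,))   # one row per time point, one column per variable
```

y[:, 0] then holds the rabbit time course and y[:, 1] the wolf time course, ready for plt.plot(t, y).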
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Plot results
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Phase space diagram
End of explanation
"""
# solve ODE using odeint
y = odeint(f, y0, t, (p,))
"""
Explanation: Exercises
1. Implement the Goldbeter model:
$\frac{\mathrm{d}C}{\mathrm{d}t}=v_i-v_dX\frac{C}{K_d+C}-k_dC$
$\frac{\mathrm{d}M}{\mathrm{d}t}=V_1\frac{1-M}{K_1+(1-M)}-V_2\frac{M}{K_2+M}$
$\frac{\mathrm{d}X}{\mathrm{d}t}=V_3\frac{1-X}{K_3+(1-X)}-V_4\frac{X}{K_4+X}$
with $V_1=\frac{C}{K_c+C}V_{M1}$ and $V_3=MV_{M3}$
and $M+M^*=1$ and $X+X^*=1$
Tasks:
Plot the trajectories and the phase space diagrams
Perform parameter sampling (20 random parameter sets for the interval [0, 1] ) and plot trajectories and phase space diagrams for each parameter set
Scan the parameter 'Kd' in a range where the solution oscillates (~20 values) and plot trajectories and phase space diagrams for each parameter set
Determine frequency and amplitude of those solutions and plot them in dependency of the 'Kd'
Solution
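One way to translate the Goldbeter equations into a right-hand-side function for odeint (the parameter ordering in p is our own choice, not prescribed by the exercise):

```python
def goldbeter(y, t, p):
    """Goldbeter minimal mitotic oscillator.
    y = [C, M, X], p = [vi, vd, Kd, kd, VM1, V2, VM3, V4, K1, K2, K3, K4, Kc]."""
    C, M, X = y
    vi, vd, Kd, kd, VM1, V2, VM3, V4, K1, K2, K3, K4, Kc = p
    V1 = VM1 * C / (Kc + C)
    V3 = VM3 * M
    dC = vi - vd * X * C / (Kd + C) - kd * C
    dM = V1 * (1 - M) / (K1 + (1 - M)) - V2 * M / (K2 + M)
    dX = V3 * (1 - X) / (K3 + (1 - X)) - V4 * X / (K4 + X)
    return [dC, dM, dX]
```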
End of explanation
"""
|
karolaya/PDI | PS-01/problem_set_1.ipynb | mit | '''This is a definition script, so we do not have to rewrite code'''
import numpy as np
import cv2
import matplotlib.pyplot as mplt
# set matplotlib to print inline (Jupyter)
%matplotlib inline
# path prefix
pth = '../data/'
# files to be used as samples
# list *files* holds the names of the test images
files = ['cameraman.png', 'moon.jpg', 'rose.bmp', 'skull.bmp', 'Woman.bmp','hut.jpg']
# Useful helper function
def rg(img_path):
return cv2.imread(pth+img_path, cv2.IMREAD_GRAYSCALE)
"""
Explanation: Digital Image Processing - Problem Set 1
Student Names:
Karolay Ardila Salazar
Julián Sibaja García
Andrés Simancas Mateus
Instructions
This first Problem Set covers the topics of basic image manipulation, spatial resolution and intensity level resolution. <br>
Your solutions to the following problems should include commented source code and a short description of each function. You should test your functions with several input images, besides the ones provided here. Include the input and output images that you used for experimentation. Analyze your results. If you discover something interesting, let us know!
The following cell imports the required libraries and defines some global variables, such as the path to the images, a list of images to use, and a global function called rg, which returns an image as a numpy array (to shorten the code in the following sections).<br>
Before executing any other cell, run this one first.
Definitions
End of explanation
"""
img1 = rg(files[0])
hg, wd = img1.shape # shape is a (height, width) tuple
print 'height: ' + str(hg)
print 'width: '+ str(wd)
mplt.figure()
mplt.imshow(img1, cmap='gray')
mplt.title(files[0])
img2 = img1.copy()
cv2.imwrite(pth + 'new_cameraman.png', img2) # create a new file called 'new_cameraman.png'
if files[-1] != 'new_cameraman.png':
files.append('new_cameraman.png') # add the new element to the list
print files # checking
"""
Explanation: <b>1. </b>Load image from a file and display the image. Determine the size of the image. Finally, save a new copy of the image in a new file.<br /> <br />
To solve this problem, we first store the image we are going to use in the variable img1 by calling the function 'rg', which reads an image in grayscale. This function takes the name of the image as a parameter, in this case the first element of the list files, which holds the names of all the images used in this problem set. Then, with img1.shape, we create a tuple containing hg and wd, which are the height and width respectively; we then print the value of each of these variables to the console to learn the image size. Next, with mplt.figure we create a new figure, which displays img1, the original image in grayscale. Then we make a copy of img1 in img2 so that we can save it to a new file with cv2.imwrite. Finally, we append this new image, named 'new_cameraman.png', to the list files.
End of explanation
"""
img = rg(files[4])
def flip_image(im_data, flag):
    im_flip = cv2.flip(im_data, flag) # flag=1 flips around the y-axis; flag=0 flips around the x-axis
return im_flip
vert = flip_image(img, 1)
hor = flip_image(img, 0)
ls = [img, vert, hor]
for i in ls:
mplt.figure()
mplt.imshow(i, cmap='gray')
mplt.title(files[4])
"""
Explanation: <b>2. </b>Write a function <code>flip_image</code>, which flips an image either vertically or horizontally. The function should take two input parameters: the matrix storing the image data and a flag to indicate whether the image should be flipped vertically or horizontally. Use this function to flip an example image both vertically and horizontally. <b>Tip:</b> You can use numpy array indexing or OpenCV's <a href="http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#flip">flip</a> function to solve this problem. <br /> <br />
Here we first store the image to be used in the variable img. We then define the function flip_image, which takes two parameters: 'im_data', the image matrix (img), and 'flag', which indicates the axis about which to flip. A value of 1 flips about the y-axis (vertically), while 0 flips about the x-axis (horizontally). Inside the function we return im_flip, which holds the result of calling cv2.flip. The variable vert stores the result of calling our function with img and flag=1; likewise, hor stores the result with flag=0. To display the three images, we put img, vert, and hor in a list and show each one with mplt.imshow.
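As the tip in the problem statement notes, the same flips can also be done with plain numpy array indexing instead of OpenCV's flip. A minimal numpy-only sketch, using a small test array in place of the real image:

```python
import numpy as np

def flip_image_np(im_data, flag):
    """Flip with numpy slicing: flag=1 reverses columns (like cv2.flip(..., 1)),
    flag=0 reverses rows (like cv2.flip(..., 0))."""
    return im_data[:, ::-1] if flag == 1 else im_data[::-1, :]

img = np.arange(6).reshape(2, 3)  # stand-in for a grayscale image
print(flip_image_np(img, 1))      # columns reversed
print(flip_image_np(img, 0))      # rows reversed
```

Because slicing with a negative step only creates a reversed view, this version does not copy the pixel data.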
End of explanation
"""
img = rg(files[1])
def neg_im():
im = 255 - img
return im
image = neg_im()
mplt.figure()
mplt.imshow(img, cmap='gray')
mplt.title(files[1])
mplt.figure()
mplt.imshow(image, cmap='gray')
mplt.title(files[1]+'_negative')
"""
Explanation: <b>3. </b> Write a function to generate the negative of an image. This means that a new image is created in which the pixel values are all equal to 1.0 minus the pixel value in the original image. <br /> <br />
For this exercise, we first store the image we want to use in the variable img. We then define the function neg_im(), which takes no parameters and subtracts the original image from 255. We call the function and store the result in the variable image, then display it in a figure next to the original image to see the contrast between the two.
End of explanation
"""
def average_intensity(img):
return img.mean()
for avg, name in zip([average_intensity(rg(i)) for i in files], files):
print 'Average intensity of "', name, '":', avg
"""
Explanation: <b>4. </b>Write a function <code>average_intensity</code>, which calculates the average intensity level of an image. Use this function on example images and discuss your results. You can use images from section 2 and 3 <br /> <br />
The function average_intensity receives the grayscale image matrix as a parameter and returns its mean value, img.mean(). To test the function, we iterate over the files list as follows: a list comprehension builds a list of intensities [avg ... for i in ...], zip pairs each intensity with its corresponding filename into a tuple, and each tuple is unpacked in a for loop that prints the message to the user.
End of explanation
"""
def threshold_image(img, th):
# ignore retVal parameter, no Otsu binarization will be applied
_, t = cv2.threshold(img, th, 255, cv2.THRESH_BINARY_INV)
return t
img = threshold_image(rg(files[3]), 150)
mplt.figure()
mplt.imshow(img, cmap='gray')
mplt.title('Threshold of ' + files[3])
"""
Explanation: <b>5. </b>Write a function <code>threshold_image</code> which thresholds an image based on a threshold level given as a parameter to the function. The function should take two parameters: the image to be thresholded and the threshold level. The result of the function should be a new thresholded image. <br /> <br />
The function threshold_image takes as parameters the image matrix and an integer that sets the threshold level. Its body is simple: it calls OpenCV's threshold function with the option that makes the resulting image inverted binary. This function returns a tuple, of which only the second value interests us (the first would be the optimal threshold value, if threshold were called with th=0). The resulting image is then displayed with Matplotlib.
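Inverted binary thresholding can also be written with numpy alone, which makes the rule explicit: pixels above the threshold become 0 and the rest become the maximum value. This is just an illustrative sketch alongside the cv2-based version above:

```python
import numpy as np

def threshold_image_np(img, th):
    # inverted binary threshold (mirrors cv2.THRESH_BINARY_INV):
    # pixels strictly above th -> 0, all others -> 255
    return np.where(img > th, 0, 255).astype(np.uint8)

img = np.array([[10, 200], [150, 151]], dtype=np.uint8)
print(threshold_image_np(img, 150))
```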
End of explanation
"""
def avg_intensity_threshold_image(img):
return threshold_image(img, average_intensity(img))
ths = [avg_intensity_threshold_image(rg(i)) for i in files]
for t, name in zip(ths, files):
mplt.figure()
mplt.imshow(t, cmap='gray')
mplt.title('Threshold of ' + name)
"""
Explanation: <b>6. </b>Write a function avg_intensity_threshold_image which takes an image as its only parameter and thresholds this image based on the images average intensity value. <b>Hint:</b> Simply write a new function that uses the two functions just written. <br /> <br />
The function avg_intensity_threshold_image is very simple: it just calls the function from the previous exercise, with the image's average intensity (the function written in exercise 4) as the second parameter. The operation is then applied to several images, which are displayed on screen.
End of explanation
"""
def downSampling(src,N):
'''Downsample an image N times.
src: Image data
    N: Number of subsamples (must be lower than 9)
    [!] If N is too large, it will display the maximum number of subsamples
'''
if src.shape[:2] != (512,512) : # image(512x512) verification
print("[!] Image is not 512x512")
else:
# Plotting the base image
mplt.figure()
mplt.imshow(src, cmap='gray')
mplt.title("Image size: "+str(src.shape[:2]))
n = 1
# Loop for plotting the N subsamples
while(n <= N):
src = cv2.pyrDown(src) # subsample to 1/2
mplt.figure()
mplt.imshow(src, cmap='gray')
mplt.title("Image size: "+str(src.shape[:2]))
n += 1
if n >= 9: # Condition in the case of 'N' been too large
return
pic = rg(files[5])
downSampling(pic, 3)
"""
Explanation: <b>7. </b>Write a function which subsamples a grayscale image of size 512x512 by factors of 2, i.e., 256, 128, 64 and display your results. There are multiple ways to do this as discussed in the textbook. You may simply sub-sample, average, etc. Describe which you used and why.
<img style="float: left; margin: 0px 0px 15px 15px;" src="../data/rose.bmp" height="256" width="256">
<img style="float: left; margin: 0px 0px 15px 15px;" src="../data/rose.bmp" height="128" width="128">
<img style="float: left; margin: 0px 0px 15px 15px;" src="../data/rose.bmp" height="64" width="64">
<img style="float: left; margin: 0px 0px 15px 15px;" src="../data/rose.bmp" height="32" width="32">
First we check that the image has the required size; if it does, the base image is displayed. The image is then subsampled N times inside a loop; the subsampling is done with OpenCV's pyrDown function, which downsamples by a factor of 2. In case N is too large, the loop breaks at the ninth iteration.
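The problem statement also allows plain sub-sampling (taking every other pixel). A numpy-only sketch of that alternative, using a synthetic array instead of the 512x512 image; note that, unlike pyrDown, this does no Gaussian smoothing before decimation, so it can introduce aliasing:

```python
import numpy as np

def subsample(img, n):
    """Return img subsampled n times by keeping every 2nd row and column."""
    for _ in range(n):
        img = img[::2, ::2]
    return img

img = np.zeros((512, 512))
print(subsample(img, 3).shape)  # three halvings: 512 -> 256 -> 128 -> 64
```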
End of explanation
"""
img = rg(files[3])
# 1
num_levels = 4
# 2
step = 255.0/(num_levels-1)
list_levels = step*np.arange(0, num_levels)
# 3
orig_levels = np.arange(0,256)
# 4
distance = np.zeros((num_levels,256))
# 5
for level_id in range(0,num_levels):
distance[level_id, :] = abs(list_levels[level_id]-orig_levels)
# 6
LUT_quant = list_levels[np.argmin(distance, 0)]
# 7
LUT_quant = np.uint8(LUT_quant)
# 8
img_th = cv2.LUT(img, LUT_quant)
# 9
mplt.figure()
mplt.imshow(img_th, cmap='gray')
mplt.title('Quantized ' + files[3])
"""
Explanation: <b>8. </b>Keeping your original image resolution reduce the number of gray levels in your image from 256 to 2 in powers of 2. Display your results.
Let's start from the general idea: we want to reduce the number of gray levels to a power of 2, that is, we can choose among<br>
2, 4, 8, 16, 32, 64, 128 or 256<br>
gray levels. The idea used by the code in the cell below is the following: we have a conventional image with the usual values (0 to 255), and for each of these values we want to find its corresponding value in a reduced domain of powers of 2. This is solved with a lookup table. A lookup table works as follows:<br>
Imagine that 4 gray levels are desired:
<code>
These are [a, b, c, d]
</code>
Imagine there is a function that creates an array of 256 elements, but these elements can only be a, b, c or d:
<code>
A = crearTablaLookUP([a,b,c,d])
</code>
Since A has 256 elements, any intensity value of the image is a valid index into A. So, to solve the problem, it is enough to iterate over all the values of the image and replace each one with its corresponding value in A:
<code>
final-image = A[i] for every i in the original image.
Example: A[255] = d
A[0] = a
</code>
The above summarizes the code below; we now explain it section by section, from 1 to 9 (see the numbered comments in the code):
The desired number of levels is selected,
A list is created with the possible intensities for the desired levels, e.g. for 4 levels, possible levels = (255/3)*[0,1,2,3] = [0, 85, 170, 255]. That is, the levels are evenly spaced,
A list with the numbers 0 through 255 is created,
A distance matrix is created, with as many rows as desired levels and 256 columns (its purpose is explained below: it is used to build A),
Each row of the distance matrix is assigned the absolute value of the difference between that row's level and the list created in step 3. Visually it looks something like the following
<code>
distances = [
[ 0 - [0 1 2 ...85 ... 170 ... 255] ],
[ 85 - [0 1 2 ...85 ... 170 ... 255] ],
[ 170 - [0 1 2 ...85 ... 170 ... 255] ],
[ 255 - [0 1 2 ...85 ... 170 ... 255] ],
]
</code>
We compare the elements within each column of the distance matrix and select the index (counting from top to bottom within the column) where the smallest one sits. To make this clearer, observe:
<code>
distances = [
[ 0 - [0 1 2 ...85 ... 170 ... 255] ],
[ 85 - [0 1 2 ...85 ... 170 ... 255] ],
[ 170 - [0 1 2 ...85 ... 170 ... 255] ],
[ 255 - [0 1 2 ...85 ... 170 ... 255] ],
]
= [ # Remember these are absolute values
[ 0 1 2 ... 85 ... 170 ... 255],
[ 85 84 83 ... 0 ... 85 ... 170],
[170 169 168 ... 85 ... 0 ... 85],
[255 254 253 ...170 ... 85 ... 0],
# 0 1 2 0 0 0 smallest value in each column
# 0 0 0 ... 1 ... 2 ... 3 indices of the minima, read vertically down each column
]
</code>
All of the above is done by a numpy function called argmin, with 0 as its second argument so that it compares along columns. The array it returns clearly holds 256 numbers, but those numbers lie in the range<br>
0 to the desired number of levels - 1<br>
We replace each value of that list with its equivalent in the list of levels created in step 2. Example:
<code>
# l_actual = [0 0 0 0 ... 1 1 1 1 1 ... 2 2 2 2 2 ... 3 3 3 3 3]
l_nuevo = lista_niveles[l_actual]
# l_nuevo = [0 0 0 ... 85 85 85 ... 170 170 170 ... 255 255 255]
</code>
The resulting list is the list A mentioned at the beginning (the lookup table).
The list is cast to a numpy uint8 array,
Each value of the image is replaced by its corresponding value in A. This is done automatically by an OpenCV function called LUT,
The result is displayed.
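As a sanity check, the whole lookup-table construction can be reproduced on a toy example with numpy alone, using fancy indexing in place of cv2.LUT (the variable names mirror the cell above):

```python
import numpy as np

num_levels = 4
list_levels = (255.0 / (num_levels - 1)) * np.arange(num_levels)  # [0, 85, 170, 255]
orig_levels = np.arange(256)

# distance of every original intensity to every allowed level (4 x 256)
distance = np.abs(list_levels[:, None] - orig_levels[None, :])

# for each column, the nearest allowed level -> this is the table A
LUT_quant = np.uint8(list_levels[np.argmin(distance, 0)])

img = np.array([[0, 40, 43], [120, 130, 250]], dtype=np.uint8)
img_q = LUT_quant[img]  # numpy fancy indexing plays the role of cv2.LUT
print(img_q)
```

Each pixel snaps to whichever of the four levels is closest, e.g. 43 maps to 85 (distance 42) rather than 0 (distance 43).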
End of explanation
"""
|
saashimi/CPO-datascience | superseded/Normalized Dataset - Comparison.ipynb | mit | #Import required packages
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
def format_date(df_date):
"""
Splits Meeting Times and Dates into datetime objects where applicable using regex.
"""
df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\s]+)', expand=True)
df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\s]+)', expand=True)
df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)
df_date['Quarter'] = df_date['Term'].astype(str).str.slice(4,6)
df_date['Term_Date'] = pd.to_datetime(df_date['Year'] + df_date['Quarter'], format='%Y%m')
df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)
df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)
df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')
df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)
df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')
df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600
return df_date
def format_xlist(df_xl):
"""
revises % capacity calculations by using Max Enrollment instead of room capacity.
"""
df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '',
df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int),
df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int))
df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]
return df_xl
"""
Explanation: OLS Analysis Using Full PSU dataset
End of explanation
"""
pd.set_option('display.max_rows', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv', dtype={'Schedule': object, 'Schedule Desc': object})
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
#terms = [199104, 199204, 199304, 199404, 199504, 199604, 199704, 199804, 199904, 200004, 200104, 200204, 200304, 200404, 200504, 200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
terms = [200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
df = df.loc[df['Term'].isin(terms)]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
# Calculate number of days per week and treat Sunday condition
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df['Building'] = df['ROOM'].str.extract('([^\s]+)', expand=True)
df_cl = format_xlist(df)
df_cl['%_Empty'] = df_cl['Cap_Diff'].astype(float) / df_cl['Room_Capacity'].astype(float)
# Normalize the results
# NOTE: this overwrites the '%_Empty' column above with the occupancy ratio (enrolled / capacity)
df_cl['%_Empty'] = df_cl['Actual_Enrl'].astype(np.float32)/df_cl['Room_Capacity'].astype(np.float32)
df_cl = df_cl.replace([np.inf, -np.inf], np.nan).dropna()
from sklearn.preprocessing import LabelEncoder
df_cl = df_cl.sample(n = 15000)
# Save as a 1D array. Otherwise will throw errors.
y = np.asarray(df_cl['%_Empty'], dtype="|S6")
df_cl = df_cl[['Dept', 'Class', 'Days', 'Start_Time', 'ROOM', 'Term', 'Room_Capacity', 'Building']]
cat_columns = ['Dept', 'Class', 'Days', 'Start_Time', 'ROOM', 'Building']
for column in cat_columns:
room_mapping = {label: idx for idx, label in enumerate(np.unique(df_cl['{0}'.format(column)]))}
df_cl['{0}'.format(column)] = df_cl['{0}'.format(column)].map(room_mapping)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X = df_cl.iloc[:, 1:].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
"""
Explanation: Partitioning a dataset in training and test sets
End of explanation
"""
# Compare Algorithms
import pandas
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
# prepare configuration for cross validation test harness
seed = 7
# prepare models
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
# evaluate each model in turn
results = []
names = []
scoring = 'accuracy'
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state=seed)
cv_results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring, n_jobs=-1)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# boxplot algorithm comparison
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
"""
Explanation: Determine Feature Importances
Test Prediction Results
Class, Term, and Start Times are the three most important factors in determining the percentage of empty seats expected.
from sklearn import model_selection
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
seed = 7
kfold = model_selection.KFold(n_splits=10, random_state=seed)
cart = DecisionTreeClassifier()
num_trees = 25
model = BaggingClassifier(base_estimator=cart, n_estimators=num_trees, random_state=seed)
results = model_selection.cross_val_score(model, X, y, cv=kfold)
print(results.mean())
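The importance ranking quoted above (Class, Term, Start Times) can be read off a fitted tree model's feature_importances_ attribute. A minimal sketch on synthetic data; the feature names and the label rule here are illustrative stand-ins, not the notebook's real X and y:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(7)
X = rng.rand(200, 3)             # stand-ins for e.g. Class, Term, Start_Time
y = (X[:, 0] > 0.5).astype(int)  # label driven entirely by the first feature

tree = DecisionTreeClassifier(random_state=7).fit(X, y)
for name, imp in zip(['Class', 'Term', 'Start_Time'], tree.feature_importances_):
    print('%s: %.3f' % (name, imp))  # the first feature should dominate
```

The importances sum to 1, so they can be compared directly across features.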
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_forward_sensitivity_maps.ipynb | bsd-3-clause | # Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname, surf_ori=True)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
"""
Explanation: Display sensitivity maps for EEG and MEG sensors
Sensitivity maps can be produced from forward operators that
indicate how well different sensor types will be able to detect
neural currents from different regions of the brain.
To get started with forward modeling see :ref:`tut_forward`.
End of explanation
"""
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
"""
Explanation: Compute sensitivity maps
End of explanation
"""
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
plt.colorbar(im, ax=ax, cmap='RdBu_r')
plt.show()
plt.figure()
plt.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
plt.legend()
plt.title('Normal orientation sensitivity')
plt.xlabel('sensitivity')
plt.ylabel('count')
plt.show()
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
"""
Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map
End of explanation
"""
|
Benedicto/ML-Learning | Linear_Regression_4_ridge_regression_assignment_1.ipynb | gpl-3.0 | import graphlab
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
name_left = 'power_' + str(power-1)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature * poly_sframe[name_left]
return poly_sframe
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
poly_sframe = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly_sframe.column_names()
poly_sframe['price'] = sales['price']
model = graphlab.linear_regression.create(poly_sframe, 'price', features=my_features,
validation_set=None, l2_penalty=1e-5)
model.get('coefficients')
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.
End of explanation
"""
def fitAndPlot(data, degree, l2):
poly_data = polynomial_sframe(data['sqft_living'], degree)
my_features = poly_data.column_names() # get the name of the features
poly_data['price'] = data['price'] # add price to the data since it's the target
model = graphlab.linear_regression.create(poly_data, target = 'price', l2_penalty=l2, verbose=False,
features = my_features, validation_set = None)
plt.plot(poly_data['power_1'],poly_data['price'],'.',
poly_data['power_1'], model.predict(poly_data),'-')
model.get("coefficients").print_rows(num_rows = 16)
for data in [set_1, set_2, set_3, set_4]:
fitAndPlot(data, 15, l2_small_penalty)
# Observed power_1 coefficients (value, stderr) for the four sets above:
# power_1 | None | 1247.59037346 | 7944.94142547
# power_1 | None | -759.251889293 | 7591.2364892
# power_1 | None | 783.493762459 | nan
# power_1 | None | 585.865810528 | 2868.03758336
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
End of explanation
"""
for data in [set_1, set_2, set_3, set_4]:
fitAndPlot(data, 15, 1e5)
"""
Explanation: The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
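The variance-reduction effect of the penalty can be seen on a tiny synthetic problem: as the L2 penalty grows, the learned weights shrink toward zero. This sketch uses the closed-form ridge solution with plain numpy, purely for illustration (the notebook itself uses graphlab):

```python
import numpy as np

def ridge_fit(X, y, l2):
    # closed-form ridge solution: w = (X^T X + l2*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T.dot(X) + l2 * np.eye(d), X.T.dot(y))

rng = np.random.RandomState(0)
X = rng.randn(50, 2)
y = X.dot([3.0, -2.0]) + 0.1 * rng.randn(50)

for l2 in [1e-5, 1.0, 100.0, 1e5]:
    w = ridge_fit(X, y, l2)
    print('l2=%g  ||w||=%.4f' % (l2, np.linalg.norm(w)))  # norm shrinks as l2 grows
```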
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughtly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
Set aside segment 1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, and fit a model on rest of data, and evalutate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answer.)
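The k-fold recipe just described can be sketched in a few lines of plain numpy, with a closed-form ridge fit standing in for the graphlab model (everything here is illustrative synthetic data, not the house data):

```python
import numpy as np

def ridge_fit(X, y, l2):
    # closed-form ridge: w = (X^T X + l2*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T.dot(X) + l2 * np.eye(d), X.T.dot(y))

def k_fold_rss(X, y, k, l2):
    """Average validation RSS over k folds, using the (n*i/k, n*(i+1)/k) split."""
    n = len(y)
    total = 0.0
    for i in range(k):
        start, end = (n * i) // k, (n * (i + 1)) // k
        train = np.r_[0:start, end:n]  # everything outside the current fold
        w = ridge_fit(X[train], y[train], l2)
        err = X[start:end].dot(w) - y[start:end]
        total += (err * err).sum()
    return total / k

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X.dot([1.0, -2.0, 0.5]) + 0.01 * rng.randn(100)
print(k_fold_rss(X, y, 10, 1e-5))  # small average validation RSS
```

A very large penalty would shrink the weights so much that the average validation error rises, which is exactly the trade-off the cross-validation below is meant to balance.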
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
validation4 = train_valid_shuffled[5818:7758]
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
train4 = train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:])
"""
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
n = len(data)
RSS_total = 0.
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k
validation = data[start:end]
train = data[:start].append(data[end:])
model = graphlab.linear_regression.create(train, target=output_name, l2_penalty=l2_penalty,
verbose=False, features=features_list, validation_set=None)
predictions = model.predict(validation)
errors = predictions - validation[output_name]
RSS = (errors * errors).sum()
RSS_total = RSS_total + RSS
return RSS_total / k
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute starting and ending indices of segment i and call 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
train_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
feature_list = train_data.column_names()
train_data['price'] = train_valid_shuffled['price']
import numpy as np
validation_error = float('inf')
l2_min = 0
errors = []
l2_list = np.logspace(1, 7, num=13)
for l2_penalty in l2_list:
new_error = k_fold_cross_validation(10, l2_penalty, train_data, 'price', feature_list)
errors.append(new_error)
if new_error < validation_error:
l2_min = l2_penalty
validation_error = new_error
print l2_min
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.xscale('log')
plt.plot(np.logspace(1, 7, num=13), errors)
"""
Explanation: QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
final_model = graphlab.linear_regression.create(train_data, target='price', l2_penalty=1000,
verbose=False, features=feature_list, validation_set=None)
"""
Explanation: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
End of explanation
"""
test_data = polynomial_sframe(test['sqft_living'], 15)
test_data['price'] = test['price']
predictions = final_model.predict(test_data)
test_error = predictions - test['price']
RSS = (test_error * test_error).sum()
RSS
"""
Explanation: QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
End of explanation
"""
|
aKumpan/hse-shad-ml | 01-titanic/pandas/pandas.ipynb | apache-2.0 | sex_counts = df['Sex'].value_counts()
print('{} {}'.format(sex_counts['male'], sex_counts['female']))
"""
Explanation: 1. How many men and women were aboard the ship? As the answer, give two numbers separated by a space.
End of explanation
"""
survived_df = df['Survived']
count_of_survived = survived_df.value_counts()[1]
survived_percentage = 100.0 * count_of_survived / survived_df.value_counts().sum()
print("{:0.2f}".format(survived_percentage))
"""
Explanation: 2. What share of the passengers survived? Compute the proportion of surviving passengers. Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
End of explanation
"""
pclass_df = df['Pclass']
count_of_first_class_passengers = pclass_df.value_counts()[1]
first_class_percentage = 100.0 * count_of_first_class_passengers / survived_df.value_counts().sum()
print("{:0.2f}".format(first_class_percentage))
"""
Explanation: 3. What share of all passengers travelled in first class? Give the answer as a percentage (a number between 0 and 100, no percent sign needed), rounded to two decimal places.
End of explanation
"""
ages = df['Age'].dropna()
print("{:0.2f} {:0.2f}".format(ages.mean(), ages.median()))
"""
Explanation: 4. How old were the passengers? Compute the mean and median of the passengers' ages. As the answer, give two numbers separated by a space.
End of explanation
"""
correlation = df['SibSp'].corr(df['Parch'])
print("{:0.2f}".format(correlation))
"""
Explanation: 5. Does the number of siblings/spouses correlate with the number of parents/children? Compute the Pearson correlation between the SibSp and Parch features.
End of explanation
"""
import re

def clean_name(name):
# First word before comma is a surname
s = re.search('^[^,]+, (.*)', name)
if s:
name = s.group(1)
# get name from braces (if in braces)
s = re.search('\(([^)]+)\)', name)
if s:
name = s.group(1)
# Removing appeal
name = re.sub('(Miss\. |Mrs\. |Ms\. )', '', name)
# Get first left word and removing quotes
name = name.split(' ')[0].replace('"', '')
return name
names = df[df['Sex'] == 'female']['Name'].map(clean_name)
name_counts = names.value_counts()
name_counts.head()
print(name_counts.head(1).index.values[0])
"""
Explanation: 6. What is the most popular female name on the ship? Extract the passenger's first name from the full name (the Name column). This task is a typical example of what a data analyst faces: the data are very heterogeneous and noisy, yet the required information must be extracted from them. Try manually parsing a few values of the Name column and work out a rule for extracting first names, as well as for separating them into female and male.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/d418deb5d74ab4363c42409de6a8e6df/label_source_activations.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
label = 'Aud-lh'
label_fname = data_path + '/MEG/sample/labels/%s.label' % label
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
"""
Explanation: Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and we then compare different strategies
to average the times series in a label. We compare a simple average, with an
averaging using the dipoles normal (flip mode) and then a PCA,
also using a sign flip.
End of explanation
"""
pick_ori = "normal" # Get signed values to see the effect of sign flip
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
label = mne.read_label(label_fname)
stc_label = stc.in_label(label)
modes = ('mean', 'mean_flip', 'pca_flip')
tcs = dict()
for mode in modes:
tcs[mode] = stc.extract_label_time_course(label, src, mode=mode)
print("Number of vertices : %d" % len(stc_label.data))
"""
Explanation: Compute inverse solution
End of explanation
"""
fig, ax = plt.subplots(1)
t = 1e3 * stc_label.times
ax.plot(t, stc_label.data.T, 'k', linewidth=0.5, alpha=0.5)
pe = [path_effects.Stroke(linewidth=5, foreground='w', alpha=0.5),
path_effects.Normal()]
for mode, tc in tcs.items():
ax.plot(t, tc[0], linewidth=3, label=str(mode), path_effects=pe)
xlim = t[[0, -1]]
ylim = [-27, 22]
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Activations in Label %r' % (label.name),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
"""
Explanation: View source activations
End of explanation
"""
pick_ori = 'vector'
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
data = stc_vec.extract_label_time_course(label, src)
fig, ax = plt.subplots(1)
stc_vec_label = stc_vec.in_label(label)
colors = ['#EE6677', '#228833', '#4477AA']
for ii, name in enumerate('XYZ'):
color = colors[ii]
ax.plot(t, stc_vec_label.data[:, ii].T, color=color, lw=0.5, alpha=0.5,
zorder=5 - ii)
ax.plot(t, data[0, ii], lw=3, color=color, label='+' + name, zorder=8 - ii,
path_effects=pe)
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Mean vector activations in Label %r' % (label.name,),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
"""
Explanation: Using vector solutions
It's also possible to compute label time courses for a
:class:mne.VectorSourceEstimate, but only with mode='mean'.
End of explanation
"""
|
suvofalcon/MLPY-DataAnalysisVisualizations | _03A_Matplotlib Exercises - Solutions.ipynb | gpl-3.0 | import numpy as np
x = np.arange(0,100)
y = x*2
z = x**2
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Matplotlib Exercises - Solutions
Welcome to the exercises for reviewing matplotlib! Take your time with these; matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, so feel free to reference the solutions as you go along.
Also don't worry if you find the matplotlib syntax frustrating; we actually won't be using it that often throughout the course, as we will switch to using seaborn and pandas' built-in visualization capabilities. But those are built on top of matplotlib, which is why it is still important to get exposure to it!
NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP.
Exercises
Follow the instructions to recreate the plots using this data:
Data
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
# plt.show() for non-notebook users
"""
Explanation: Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?
End of explanation
"""
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
"""
Explanation: Exercise 1
Follow along with these steps:
* Create a figure object called fig using plt.figure()
* Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax.
* Plot (x,y) on that axes and set the labels and titles to match the plot below:
End of explanation
"""
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.2,.2])
"""
Explanation: Exercise 2
Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.
End of explanation
"""
ax1.plot(x,y)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax2.plot(x,y)
ax2.set_xlabel('x')
ax2.set_ylabel('y')
fig # Show figure object
"""
Explanation: Now plot (x,y) on both axes, and call your figure object to show it.
End of explanation
"""
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.4,.4])
"""
Explanation: Exercise 3
Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]
End of explanation
"""
ax.plot(x,z)
ax.set_xlabel('X')
ax.set_ylabel('Z')
ax2.plot(x,y)
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
ax2.set_title('zoom')
ax2.set_xlim(20,22)
ax2.set_ylim(30,50)
fig
"""
Explanation: Now use x,y, and z arrays to recreate the plot below. Notice the x limits and y limits on the inserted plot:
End of explanation
"""
# Empty canvas of 1 by 2 subplots
fig, axes = plt.subplots(nrows=1, ncols=2)
"""
Explanation: Exercise 4
Use plt.subplots(nrows=1, ncols=2) to create the plot below.
End of explanation
"""
axes[0].plot(x,y,color="blue", lw=3, ls='--')
axes[1].plot(x,z,color="red", lw=3, ls='-')
fig
"""
Explanation: Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(12,2))
axes[0].plot(x,y,color="blue", lw=5)
axes[0].set_xlabel('x')
axes[0].set_ylabel('y')
axes[1].plot(x,z,color="red", lw=3, ls='--')
axes[1].set_xlabel('x')
axes[1].set_ylabel('z')
"""
Explanation: See if you can resize the plot by adding the figsize() argument to plt.subplots(), copying and pasting your previous code.
End of explanation
"""
|
bmeaut/python_nlp_2017_fall | course_material/07_Tagging/07_Scipy_lab_solutions.ipynb | mit | import numpy, scipy
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg
%matplotlib inline
import matplotlib.pyplot
"""
Explanation: Python for mathematics, science and engineering
https://scipy.org/
Scipy
(pronounced "Sigh Pie")
Higher level algorithms on top of numpy
numerical integration
optimization
interpolation
Signal Processing
Linear Algebra
with sparse matrices
statistics
End of explanation
"""
from sklearn import datasets
iris = datasets.load_iris()
print('Target names:', iris.target_names)
print('Features:', iris.feature_names)
print(iris.data)
first = iris.data[iris.target == 0]
second = iris.data[iris.target == 1]
third = iris.data[iris.target == 2]
print(len(first), len(second), len(third))
print("first average:", first.mean(axis=0))
print("second average:", second.mean(axis=0))
print("third average:", third.mean(axis=0))
print("sepal width and length: ", scipy.stats.pearsonr(iris.data[:, 0], iris.data[:, 1])[0])
print("petal width and length: ", scipy.stats.pearsonr(iris.data[:, 2], iris.data[:, 3])[0])
print("")
print("sepal width and length for first class: ", scipy.stats.pearsonr(first[:, 0], first[:, 1])[0])
print("sepal width and length for second class: ", scipy.stats.pearsonr(second[:, 0], second[:, 1])[0])
print("sepal width and length for third class: ", scipy.stats.pearsonr(third[:, 0], third[:, 1])[0])
"""
Explanation: Iris
End of explanation
"""
As = scipy.sparse.diags([-0.5*numpy.ones(7), 0.5*numpy.ones(7)], [-1,1])
bs = numpy.ones(8)
print('[As|bs]:\n{}'.format(numpy.concatenate((As.toarray(), bs.reshape(-1,1)), axis=1)))
xs = scipy.sparse.linalg.spsolve(As.tocsr(), bs)
print(xs)
"""
Explanation: Sparse linalg
For comparison, the dense equivalent of the system solved above:
A = 0.5*(numpy.diag(numpy.ones(7), k=1) - numpy.diag(numpy.ones(7), k=-1))
b = numpy.ones(len(A))
print('[A|b]:\n{}'.format(numpy.concatenate((A, b.reshape(-1,1)), axis=1)))
x = scipy.linalg.solve(A, b)
End of explanation
"""
movie_descriptions = {}
vocab = {}
with open("movies.txt", "rb") as f:
for i, line in enumerate(f):
title, description = line.strip().split(b'\t')
movie_descriptions[title] = description.split(b' ')
for word in set(movie_descriptions[title]):
if word not in vocab:
new_id = len(vocab)
vocab[word] = new_id
print(len(vocab))
print(b" ".join(movie_descriptions[b"The Matrix"]))
movie_to_id = {k: i for i, k in enumerate(movie_descriptions.keys())}
id_to_movie = {i: k for k, i in movie_to_id.items()}
id_to_word = {i: w for w, i in vocab.items()}
print("The Matrix:", movie_to_id[b"The Matrix"])
print("0th movie:", id_to_movie[0])
print(len(movie_to_id)-1, "th movie:", id_to_movie[len(movie_to_id)-1])
print("word id of dog:", vocab[b"dog"])
print("0th word:", id_to_word[0])
"""
Explanation: Document-term matrix decomposition
Download the file and put it in the same folder as your notebook!
Task 1.
End of explanation
"""
from collections import Counter
i = []
j = []
k = []
for title, description in movie_descriptions.items():
words = Counter(description)
for w, c in words.items():
i.append(movie_to_id[title])
j.append(vocab[w])
k.append(c)
Matrix = scipy.sparse.csc_matrix((k, (i, j)), dtype="float32")
print(Matrix.shape)
"""
Explanation: Task 2.
End of explanation
"""
U, d, Vh = scipy.sparse.linalg.svds(Matrix, k=40)
U /= numpy.sqrt((U**2).sum(1))[:, None]
Vh /= numpy.sqrt((Vh**2).sum(0))[None, :]
print(U.shape)
print(Vh.shape)
"""
Explanation: Task 3.
End of explanation
"""
def closests(v, k=1):
return numpy.argpartition(((U - v[None, :])**2).sum(1), k, axis=0)[:k]
closests(numpy.ones(len(Vh)), 3)
"""
Explanation: Task 4.
End of explanation
"""
print([id_to_movie[i] for i in closests(U[movie_to_id[b"Monsters, Inc."]], 5)])
print([id_to_movie[i] for i in closests(U[movie_to_id[b"Popeye"]], 5)])
"""
Explanation: Now you can search similar movies!
End of explanation
"""
[id_to_movie[i] for i in closests(U[movie_to_id[b"Popeye"]] + U[movie_to_id[b"Monsters, Inc."]], 10)]
"""
Explanation: Or even mixture of movies by adding "movie vectors"!
End of explanation
"""
|
sebp/scikit-survival | doc/user_guide/random-survival-forest.ipynb | gpl-3.0 | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import OneHotEncoder
from sksurv.ensemble import RandomSurvivalForest
"""
Explanation: Using Random Survival Forests
This notebook demonstrates how to use Random Survival Forests introduced in scikit-survival 0.11.
As it's popular counterparts for classification and regression, a Random Survival Forest is an ensemble
of tree-based learners. A Random Survival Forest ensures that individual trees are de-correlated by 1)
building each tree on a different bootstrap sample of the original training data, and 2)
at each node, only evaluate the split criterion for a randomly selected subset of
features and thresholds. Predictions are formed by aggregating predictions of individual
trees in the ensemble.
To demonstrate Random Survival Forest, we are going to use data from the German Breast Cancer Study Group (GBSG-2) on the treatment of node-positive breast cancer patients. It contains data on 686 women
and 8 prognostic factors:
1. age,
2. estrogen receptor (estrec),
3. whether or not a hormonal therapy was administered (horTh),
4. menopausal status (menostat),
5. number of positive lymph nodes (pnodes),
6. progesterone receptor (progrec),
7. tumor size (tsize),
8. tumor grade (tgrade).
The goal is to predict recurrence-free survival time.
End of explanation
"""
X, y = load_gbsg2()
grade_str = X.loc[:, "tgrade"].astype(object).values[:, np.newaxis]
grade_num = OrdinalEncoder(categories=[["I", "II", "III"]]).fit_transform(grade_str)
X_no_grade = X.drop("tgrade", axis=1)
Xt = OneHotEncoder().fit_transform(X_no_grade)
Xt.loc[:, "tgrade"] = grade_num
"""
Explanation: First, we need to load the data and transform it into numeric values.
End of explanation
"""
random_state = 20
X_train, X_test, y_train, y_test = train_test_split(
Xt, y, test_size=0.25, random_state=random_state)
"""
Explanation: Next, the data is split into 75% for training and 25% for testing, so we can determine
how well our model generalizes.
End of explanation
"""
rsf = RandomSurvivalForest(n_estimators=1000,
min_samples_split=10,
min_samples_leaf=15,
max_features="sqrt",
n_jobs=-1,
random_state=random_state)
rsf.fit(X_train, y_train)
"""
Explanation: Training
Several split criterion have been proposed in the past, but the most widespread one is based
on the log-rank test, which you probably know from comparing survival curves among two or more
groups. Using the training data, we fit a Random Survival Forest comprising 1000 trees.
End of explanation
"""
rsf.score(X_test, y_test)
"""
Explanation: We can check how well the model performs by evaluating it on the test data.
End of explanation
"""
X_test_sorted = X_test.sort_values(by=["pnodes", "age"])
X_test_sel = pd.concat((X_test_sorted.head(3), X_test_sorted.tail(3)))
X_test_sel
"""
Explanation: This gives a concordance index of 0.68, which is a good value and matches the results
reported in the Random Survival Forests paper.
Predicting
For prediction, a sample is dropped down each tree in the forest until it reaches a terminal node.
Data in each terminal node is used to non-parametrically estimate the survival and cumulative hazard
function using the Kaplan-Meier and Nelson-Aalen estimator, respectively. In addition, a risk score
can be computed that represents the expected number of events for one particular terminal node.
The ensemble prediction is simply the average across all trees in the forest.
Let's first select a couple of patients from the test data
according to the number of positive lymph nodes and age.
End of explanation
"""
pd.Series(rsf.predict(X_test_sel))
"""
Explanation: The predicted risk scores indicate that risk for the last three patients is quite
a bit higher than that of the first three patients.
End of explanation
"""
surv = rsf.predict_survival_function(X_test_sel, return_array=True)
for i, s in enumerate(surv):
plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Survival probability")
plt.xlabel("Time in days")
plt.legend()
plt.grid(True)
"""
Explanation: We can have a more detailed insight by considering the predicted survival function. It shows that the biggest difference occurs roughly within the first 750 days.
End of explanation
"""
surv = rsf.predict_cumulative_hazard_function(X_test_sel, return_array=True)
for i, s in enumerate(surv):
plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Cumulative hazard")
plt.xlabel("Time in days")
plt.legend()
plt.grid(True)
"""
Explanation: Alternatively, we can also plot the predicted cumulative hazard function.
End of explanation
"""
from sklearn.inspection import permutation_importance
result = permutation_importance(
rsf, X_test, y_test, n_repeats=15, random_state=random_state
)
pd.DataFrame(
{k: result[k] for k in ("importances_mean", "importances_std",)},
index=X_test.columns
).sort_values(by="importances_mean", ascending=False)
"""
Explanation: Permutation-based Feature Importance
The implementation is based on scikit-learn's Random Forest implementation and inherits many
features, such as building trees in parallel. What's currently missing is feature importances
via the feature_importances_ attribute.
This is due to the way scikit-learn's implementation computes importances. It relies on
a measure of impurity for each child node, and defines importance as the amount of
decrease in impurity due to a split. For traditional regression, impurity would be measured by the variance, but for survival analysis
there is no per-node impurity measure due to censoring. Instead, one could use the
magnitude of the log-rank test statistic as an importance measure, but scikit-learn's
implementation doesn't seem to allow this.
Fortunately, this is not a big concern though, as scikit-learn's definition
of feature importance is non-standard and differs from what Leo Breiman
proposed in the original Random Forest paper.
Instead, we can use permutation to estimate feature importance, which is
preferred over scikit-learn's definition. This is implemented in the
permutation_importance
function of scikit-learn,
which is fully compatible with scikit-survival.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cmcc/cmip6/models/sandbox-2/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
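For illustration only (process names and stage numbers here are hypothetical), such a call order can be pictured as a mapping in which equal values mark processes calculated at the same time:

```python
# Hypothetical split-operator call order; equal values mean the
# processes are calculated at the same time.
call_order = {
    "turbulence": 1,
    "convection": 1,   # same stage as turbulence
    "emissions": 2,
    "gas_phase_chemistry": 3,
}
stages = sorted(set(call_order.values()))  # the distinct stages, in order
```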
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
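As a hedged aside (assuming an equatorial circumference of roughly 40,075 km), an angular resolution can be converted to an approximate distance at the equator, which helps when comparing the two styles of expression:

```python
# Rough conversion of an angular resolution (degrees) to kilometres
# at the equator; the circumference value is an approximation.
def deg_to_km(deg):
    return deg * 40075.0 / 360.0

deg_to_km(0.1)  # ≈ 11.1 km
```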
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
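For a hypothetical regular global latitude-longitude grid (the 0.5 degree resolution is chosen purely for illustration), the count follows directly from the resolution:

```python
# Illustrative gridpoint count for a regular 0.5 degree global grid.
nlon = int(360 / 0.5)        # 720 longitudes
nlat = int(180 / 0.5)        # 360 latitudes
n_horizontal = nlon * nlat   # total horizontal (XY) points
```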
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
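The photochemical steady-state assumption invoked here balances production $P$ against first-order loss $L\,[X]$, so the solver replaces the prognostic equation by its equilibrium value:

```latex
\frac{d[X]}{dt} = P - L\,[X] \approx 0
\quad\Longrightarrow\quad
[X]_{\mathrm{ss}} = \frac{P}{L}
```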
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.delay_automaton.ipynb | gpl-3.0 | import vcsn
ctx = vcsn.context("lat<law_char, law_char>, b")
ctx
a = ctx.expression(r"'abc, \e''d,v'*'\e,wxyz'").standard()
a
"""
Explanation: automaton.delay_automaton
Create a new transducer, equivalent to the first one, with each state labeled with its delay, i.e., the difference between the input lengths read on each tape.
Preconditions:
- The input automaton is a transducer.
- Input.has_bounded_lag
See also:
- automaton.has_bounded_lag
- automaton.is_synchronized
Examples
End of explanation
"""
a.delay_automaton()
"""
Explanation: The lag is bounded, because every cycle (here, the loop) produces a delay of 0.
End of explanation
"""
s = ctx.expression(r"(abc|x+ab|y)(d|z)").automaton()
s
s.delay_automaton()
"""
Explanation: State 1 has a delay of $(3, 0)$ because the first tape is 3 characters longer than the shortest tape (the second one) for all possible inputs leading to this state.
End of explanation
"""
|
NeuroDataDesign/pan-synapse | pipeline_1/background/Sparse_Arrays_Algorithms.md.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
import sys
sys.path.insert(0,'../code/functions/')
import tiffIO as tIO
import connectLib as cLib
import plosLib as pLib
import time
import scipy.ndimage as ndimage
import numpy as np
import time
import clusterComponents
class SparseArray:
def genClusters(self, image):
clusterList = []
        for label in range(1, np.max(image) + 1):  # label 0 is the background, so skip it
memberList = []
for z in range(len(image)):
for y in range(len(image[z])):
for x in range(len(image[z][y])):
if (self.readValue((z, y, x)) == label):
memberList.append((z, y, x))
self.addValue((z, y, x), 0)
clusterList.append(Cluster(memberList))
return clusterList
def __init__(self, input, threshold = 200):
self.elements = {}
imCC = clusterComponents.ClusterComponent(input)
imCC.volumeThreshold(threshold)
ccThresh = imCC.labeledIm
for z in range(len(ccThresh)):
for y in range(len(ccThresh[z])):
for x in range(len(ccThresh[z][y])):
self.addValue((z, y, x), ccThresh[z][y][x])
self.clusterList = self.genClusters(ccThresh)
def addValue(self, tuple, value):
if value > 0:
self.elements[tuple] = value
elif value == 0 and not(self.readValue(tuple) == 0):
del self.elements[tuple]
def readValue(self, tuple):
try:
value = self.elements[tuple]
except KeyError:
# could also be 0.0 if using floats...
value = 0
return value
"""
Explanation: Sparse Arrays Incorporated With Cluster Components
Motive
The purpose of this notebook is to determine if implimenting Sparse Arrays will speed up the Connected Components portion of our pipeline.
Pseudocode For Sparse Arrays Functions
** Initialization:** (inputs: image)
Give the Sparse Array a data member called "elements"
Run Connected Components on the input image
Run clusterComponents on input image
Threshold the clusterComponent by volume
For each index in this thresholded clusterComponent, add its value to elements
Give the Sparse Array a data member called "clusterList" by using the genClusters function
**genClusters:** (input: image)
create a variable called clusterList that will contain the Clusters
for each unique label in the input image:
find which indices in the Sparse Array are equal to that label
input these indices into the Cluster class and append this Cluster to clusterList
return clusterList
**addValue:** (inputs: tuple, value)
if value is greater than 0:
set self.elements at tuple equal to value
else if value equals 0 and there already exists a value at tuple:
delete self.elements at tuple
**readValue:** (inputs: tuple)
try:
value gets self.elements at tuple
catch KeyError:
value = 0 (a.k.a. if there exists no element with that tuple, return 0)
return value
The Code
End of explanation
"""
import numpy as np
import math
class Cluster:
def __init__(self, members):
self.members = members
self.volume = self.getVolume()
def getVolume(self):
return len(self.members)
def getCentroid(self):
unzipList = zip(*self.members)
listZ = unzipList[0]
listY = unzipList[1]
listX = unzipList[2]
return [np.average(listZ), np.average(listY), np.average(listX)]
def getMembers(self):
return self.members
"""
Explanation: Cluster Class For Reference
End of explanation
"""
dataSubset = tIO.unzipChannels(tIO.loadTiff('../data/SEP-GluA1-KI_tp1.tif'))[0][0:5]
plt.imshow(dataSubset[0], cmap="gray")
plt.axis('off')
plt.title('Raw Data Slice at z=0')
plt.show()
#finding the clusters after plosPipeline
plosOutSub = pLib.pipeline(dataSubset)
#binarize output of plos lib
bianOutSub = cLib.otsuVox(plosOutSub)
#dilate the output based on neigborhood size
bianOutSubDil = ndimage.morphology.binary_dilation(bianOutSub).astype(int)
sparse = SparseArray(bianOutSubDil)
intensities = []
index = 0
for z in range(len(bianOutSubDil)):
    for y in range(len(bianOutSubDil[z])):
        for x in range(len(bianOutSubDil[z][y])):
val = sparse.readValue((z, y, x))
if val > 0:
intensities.append(val)
index = index + 1
print 'Fraction of Volume\n\tActual: ' + str(1.0 * index / (1024 * 1024 * 3)) + '\tExpected: .015'
print 'Number of Clusters\n\tActual: ' + str(np.max(intensities)) + '\tExpected: 1500 - 2000'
displayIm = np.zeros_like(bianOutSubDil)
for z in range(len(bianOutSubDil)):
for y in range(len(bianOutSubDil[z])):
for x in range(len(bianOutSubDil[z][y])):
displayIm[z][y][x] = sparse.readValue((z, y, x))
plt.imshow(displayIm[3])
plt.show()
"""
Explanation: Trying on A Slice of 5
End of explanation
"""
start_time = time.time()
sparse = SparseArray(bianOutSubDil)
print time.time() - start_time
"""
Explanation: Results:
After storing the data as a sparse array and counting the non-zero values, I found that the non-zero values (i.e., voxels belonging to detected synapses) represented about 1.5% of the data. It also detected 1754 synapses, which is roughly how many were detected without using sparse arrays. Both of these signs indicate that the Sparse Array is storing values correctly.
The display image also looks as we would expect it to.
Now, let's determine if it helps the Connected Components speed:
End of explanation
"""
|
julienchastang/unidata-python-workshop | notebooks/XArray/XArray and CF.ipynb | mit | # Convention for import to get shortened namespace
import numpy as np
import xarray as xr
# Create some sample "temperature" data
data = 283 + 5 * np.random.randn(5, 3, 4)
data
"""
Explanation: <div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>XArray & CF Introduction</h1>
<h3>Unidata Sustainable Science Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://xarray.pydata.org/en/stable/_static/dataset-diagram-logo.png" alt="NumPy Logo" style="height: 250px;"></div>
Overview:
Teaching: 25 minutes
Exercises: 20 minutes
Questions
What is XArray?
How does XArray fit in with Numpy and Pandas?
What is the CF convention and how do we use it with Xarray?
Objectives
Create a DataArray.
Open netCDF data using XArray
Subset the data.
Write a CF-compliant netCDF file
XArray
XArray expands on the capabilities of NumPy arrays, providing a lot of streamlined data manipulation. It is similar in that respect to Pandas, but whereas Pandas excels at working with tabular data, XArray is focused on N-dimensional arrays of data (i.e. grids). Its interface is based largely on the netCDF data model (variables, attributes, and dimensions), but it goes beyond the traditional netCDF interfaces to provide functionality similar to netCDF-java's Common Data Model (CDM).
DataArray
The DataArray is one of the basic building blocks of XArray. It provides a NumPy ndarray-like object that expands to provide two critical pieces of functionality:
Coordinate names and values are stored with the data, making slicing and indexing much more powerful
It has a built-in container for attributes
End of explanation
"""
temp = xr.DataArray(data)
temp
"""
Explanation: Here we create a basic DataArray by passing it just a numpy array of random data. Note that XArray generates some basic dimension names for us.
End of explanation
"""
temp = xr.DataArray(data, dims=['time', 'lat', 'lon'])
temp
"""
Explanation: We can also pass in our own dimension names:
End of explanation
"""
# Use pandas to create an array of datetimes
import pandas as pd
times = pd.date_range('2018-01-01', periods=5)
times
# Sample lon/lats
lons = np.linspace(-120, -60, 4)
lats = np.linspace(25, 55, 3)
"""
Explanation: This is already improved upon from a numpy array, because we have names for each of the dimensions (or axes in NumPy parlance). Even better, we can take arrays representing the values for the coordinates for each of these dimensions and associate them with the data when we create the DataArray.
End of explanation
"""
temp = xr.DataArray(data, coords=[times, lats, lons], dims=['time', 'lat', 'lon'])
temp
"""
Explanation: When we create the DataArray instance, we pass in the arrays we just created:
End of explanation
"""
temp.attrs['units'] = 'kelvin'
temp.attrs['standard_name'] = 'air_temperature'
temp
"""
Explanation: ...and we can also set some attribute metadata:
End of explanation
"""
# For example, convert Kelvin to Celsius
temp - 273.15
"""
Explanation: Notice what happens if we perform a mathematical operation with the DataArray: the coordinate values persist, but the attributes are lost. This is done because it is very challenging to know if the attribute metadata is still correct or appropriate after arbitrary arithmetic operations.
End of explanation
"""
temp.sel(time='2018-01-02')
"""
Explanation: Selection
We can use the .sel method to select portions of our data based on these coordinate values, rather than using indices (this is similar to the CDM).
End of explanation
"""
from datetime import timedelta
temp.sel(time='2018-01-07', method='nearest', tolerance=timedelta(days=2))
"""
Explanation: .sel has the flexibility to also perform nearest neighbor sampling, taking an optional tolerance:
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise
.interp() works similarly to .sel(). Using .interp(), get an interpolated time series "forecast" for Boulder (40°N, 105°W) or your favorite latitude/longitude location. (Documentation for <a href="http://xarray.pydata.org/en/stable/interpolation.html">interp</a>).
End of explanation
"""
# %load solutions/interp_solution.py
"""
Explanation: Solution
End of explanation
"""
temp.sel(time=slice('2018-01-01', '2018-01-03'), lon=slice(-110, -70), lat=slice(25, 45))
"""
Explanation: Slicing with Selection
End of explanation
"""
# As done above
temp.loc['2018-01-02']
temp.loc['2018-01-01':'2018-01-03', 25:45, -110:-70]
# This *doesn't* work however:
#temp.loc[-110:-70, 25:45,'2018-01-01':'2018-01-03']
"""
Explanation: .loc
All of these operations can also be done within square brackets on the .loc attribute of the DataArray. This permits a much more numpy-looking syntax, though you lose the ability to specify the names of the various dimensions. Instead, the slicing must be done in the correct order.
End of explanation
"""
# Open sample North American Reanalysis data in netCDF format
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds
"""
Explanation: Opening netCDF data
With its close ties to the netCDF data model, XArray also supports netCDF as a first-class file format. This means it has easy support for opening netCDF datasets, so long as they conform to some of XArray's limitations (such as 1-dimensional coordinates).
End of explanation
"""
ds.isobaric1
"""
Explanation: This returns a Dataset object, which is a container that contains one or more DataArrays, which can also optionally share coordinates. We can then pull out individual fields:
End of explanation
"""
ds['isobaric1']
"""
Explanation: or
End of explanation
"""
ds_1000 = ds.sel(isobaric1=1000.0)
ds_1000
ds_1000.Temperature_isobaric
"""
Explanation: Datasets also support much of the same subsetting operations as DataArray, but will perform the operation on all data:
End of explanation
"""
u_winds = ds['u-component_of_wind_isobaric']
u_winds.std(dim=['x', 'y'])
"""
Explanation: Aggregation operations
Not only can you use the named dimensions for manual slicing and indexing of data, but you can also use it to control aggregation operations, like sum:
End of explanation
"""
# %load solutions/mean_profile.py
"""
Explanation: Exercise
Using the sample dataset, calculate the mean temperature profile (temperature as a function of pressure) over Colorado within this dataset. For this exercise, consider the bounds of Colorado to be:
* x: -182km to 424km
* y: -1450km to -990km
(37°N to 41°N and 102°W to 109°W projected to Lambert Conformal projection coordinates)
Solution
End of explanation
"""
# Import some useful Python tools
from datetime import datetime
# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])
# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y = np.arange(-100, 100, 3)
# Standard pressure levels in hPa
press = np.array([1000, 925, 850, 700, 500, 300, 250])
temps = np.random.randn(times.size, press.size, y.size, x.size)
"""
Explanation: Resources
There is much more in the XArray library. To learn more, visit the XArray Documentation
Introduction to Climate and Forecasting Metadata Conventions
To better enable reproducible data and research, the Climate and Forecasting (CF) metadata conventions were created to standardize the metadata attached to atmospheric data files. In the remainder of this notebook, we will introduce the CF data model and discuss some netCDF implementation details to consider when deciding how to write data with CF and netCDF. We will cover gridded data in this notebook, with more in-depth examples provided in the full CF notebook. Xarray makes the creation of netCDF files with proper metadata simple and straightforward, so we will use it instead of the netCDF4-Python library.
This assumes a basic understanding of netCDF.
<a name="gridded"></a>
Gridded Data
Let's say we're working with some numerical weather forecast model output. Let's walk through the steps necessary to store this data in netCDF, using the Climate and Forecasting metadata conventions to ensure that our data are available to as many tools as possible.
To start, let's assume the following about our data:
* It corresponds to forecast three dimensional temperature at several times
* The native coordinate system of the model is on a regular grid that represents the Earth on a Lambert conformal projection.
We'll also go ahead and generate some arrays of data below to get started:
End of explanation
"""
from cftime import date2num
time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
"""
Explanation: Time coordinates must contain a units attribute whose string value has a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended.
Before we can write data, we first need to convert our list of Python datetime instances to numeric values. We can use the cftime library to make this conversion easy, using the unit string defined above.
End of explanation
"""
ds = xr.Dataset({'temperature': (['time', 'z', 'y', 'x'], temps, {'units':'Kelvin'})},
coords={'x_dist': (['x'], x, {'units':'km'}),
'y_dist': (['y'], y, {'units':'km'}),
'pressure': (['z'], press, {'units':'hPa'}),
'forecast_time': (['time'], times)
})
ds
"""
Explanation: Now we can create the forecast_time variable just as we did before for the other coordinate variables:
Convert arrays into Xarray Dataset
End of explanation
"""
ds.forecast_time.encoding['units'] = time_units
"""
Explanation: Due to how xarray handles time units, we need to encode the units in the forecast_time coordinate.
End of explanation
"""
ds.temperature
"""
Explanation: If we look at our data variable, we can see the units printed out, so they were attached properly!
End of explanation
"""
ds.attrs['Conventions'] = 'CF-1.7'
ds.attrs['title'] = 'Forecast model run'
ds.attrs['institution'] = 'Unidata'
ds.attrs['source'] = 'WRF-1.5'
ds.attrs['history'] = str(datetime.utcnow()) + ' Python'
ds.attrs['references'] = ''
ds.attrs['comment'] = ''
ds
"""
Explanation: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
End of explanation
"""
ds.temperature.attrs['standard_name'] = 'air_temperature'
ds.temperature.attrs['long_name'] = 'Forecast air temperature'
ds.temperature.attrs['missing_value'] = -9999
ds.temperature
"""
Explanation: We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we have already set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.
NASA Dataset Interoperability Recommendations:
Section 2.2 - Include Basic CF Attributes
Include where applicable: units, long_name, standard_name, valid_min / valid_max, scale_factor / add_offset and others.
End of explanation
"""
ds.x.attrs['axis'] = 'X' # Optional
ds.x.attrs['standard_name'] = 'projection_x_coordinate'
ds.x.attrs['long_name'] = 'x-coordinate in projected coordinate system'
ds.y.attrs['axis'] = 'Y' # Optional
ds.y.attrs['standard_name'] = 'projection_y_coordinate'
ds.y.attrs['long_name'] = 'y-coordinate in projected coordinate system'
"""
Explanation: Coordinate variables
To properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called "Coordinate Variables". Generally, these are defined by creating a one dimensional variable with the same name as the respective dimension.
To start, we define variables which define our x and y coordinate values. These variables include standard_names which allow associating them with projections (more on this later) as well as an optional axis attribute to make clear what standard direction this coordinate refers to.
End of explanation
"""
ds.pressure.attrs['axis'] = 'Z' # Optional
ds.pressure.attrs['standard_name'] = 'air_pressure'
ds.pressure.attrs['positive'] = 'down' # Optional
ds.forecast_time.attrs['axis'] = 'T' # Optional
ds.forecast_time.attrs['standard_name'] = 'time' # Optional
ds.forecast_time.attrs['long_name'] = 'time'
"""
Explanation: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
End of explanation
"""
from pyproj import Proj
X, Y = np.meshgrid(x, y)
lcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,
'lat_1':25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
"""
Explanation: Auxiliary Coordinates
Our data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called "auxiliary coordinate variables", not because they are extra, but because they are not simple one-dimensional variables.
Below, we first generate longitude and latitude values from our projected coordinates using the pyproj library.
End of explanation
"""
ds = ds.assign_coords(lon = (['y', 'x'], lon))
ds = ds.assign_coords(lat = (['y', 'x'], lat))
ds
ds.lon.attrs['units'] = 'degrees_east'
ds.lon.attrs['standard_name'] = 'longitude' # Optional
ds.lon.attrs['long_name'] = 'longitude'
ds.lat.attrs['units'] = 'degrees_north'
ds.lat.attrs['standard_name'] = 'latitude' # Optional
ds.lat.attrs['long_name'] = 'latitude'
"""
Explanation: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
End of explanation
"""
ds
"""
Explanation: With the variables created, we identify them as coordinates for the Temperature variable; in the resulting netCDF file this is recorded as a coordinates attribute whose value is a space-separated list of the names of the auxiliary coordinate variables:
End of explanation
"""
ds['lambert_projection'] = int()
ds.lambert_projection.attrs['grid_mapping_name'] = 'lambert_conformal_conic'
ds.lambert_projection.attrs['standard_parallel'] = 25.
ds.lambert_projection.attrs['latitude_of_projection_origin'] = 40.
ds.lambert_projection.attrs['longitude_of_central_meridian'] = -105.
ds.lambert_projection.attrs['semi_major_axis'] = 6371000.0
ds.lambert_projection
"""
Explanation: Coordinate System Information
With our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a "grid mapping" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then reference the dummy variable with their grid_mapping attribute.
Below we create a variable and set it up for a Lambert conformal conic projection on a spherical earth. The grid_mapping_name attribute describes which of the CF-supported grid mappings we are specifying. The names of additional attributes vary between the mappings.
End of explanation
"""
ds.temperature.attrs['grid_mapping'] = 'lambert_projection'  # the name of the grid mapping variable
ds
"""
Explanation: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:
End of explanation
"""
ds.to_netcdf('test_netcdf.nc', format='NETCDF4')
!ncdump test_netcdf.nc
"""
Explanation: Write to NetCDF
Xarray has built-in support for a few flavors of netCDF. Here we'll write a netCDF4 file from our Dataset.
End of explanation
"""
|
DBWangGroupUNSW/COMP9318 | L6 - GaussianNB, KNN, and Cross-Validation.ipynb | mit | import pandas as pd
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import KFold  # deprecated; sklearn.model_selection in newer versions
from sklearn import preprocessing
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Predict the Sun Hours using Naive Bayes Classifier and KNN Classifier
import modules
End of explanation
"""
data = pd.read_csv('./asset/Daily_Weather_Observations.csv', sep=',')
data_missing_sun_hours = data[pd.isnull(data['Sun_hours'])]
data = data[pd.notnull(data['Sun_hours'])]
labels = ['Low','Med','High']
data['Sun_level'] = pd.cut(data.Sun_hours, [-1,5,10,25], labels=labels)
data = data.dropna(subset = ['CLD_at_9am', 'Max_wind_dir', 'Max_wind_spd', 'Max_wind_dir'])
bitmap1 = data.Evap.notnull()
bitmap2 = bitmap1.shift(1)
bitmap2[0] = True
data = data[bitmap1 & bitmap2]
data['Temps_diff'] = data['Temps_max'] - data['Temps_min']
print(data.shape)
"""
Explanation: Data Preprocessing
We use the same data resource as in L5, i.e., Daily Weather Observations of Sydney, New South Wales between Aug 2015 and Aug 2016.
We will handle the missing values, and category Sun_hours into three levels: High(>10), Med(>5 and <=10), and Low(<=5) as we did in L5.
Column Meanings
| Heading | Meaning | Units |
|-----------------|----------------------------------------------------------|---------------------|
| Day | Day of the week | first two letters |
| Temps_min | Minimum temperature in the 24 hours to 9am. | degrees Celsius |
| Temps_max | Maximum temperature in the 24 hours from 9am. | degrees Celsius |
| Rain | Precipitation (rainfall) in the 24 hours to 9am. | millimetres |
| Evap | Class A pan evaporation in the 24 hours to 9am | millimetres |
| Sun_hours | Bright sunshine in the 24 hours to midnight | hours |
| Max_wind_dir | Direction of strongest gust in the 24 hours to midnight | 16 compass points |
| Max_wind_spd | Speed of strongest wind gust in the 24 hours to midnight | kilometres per hour |
| Max_wind_time | Time of strongest wind gust | local time hh:mm |
| Temp_at_9am | Temperature at 9 am | degrees Celsius |
| RH_at_9am | Relative humidity at 9 am | percent |
| CLD_at_9am | Fraction of sky obscured by cloud at 9 am | eighths |
| Wind_dir_at_9am | Wind direction averaged over 10 minutes prior to 9 am | compass points |
| Wind_spd_at_9am | Wind speed averaged over 10 minutes prior to 9 am | kilometres per hour |
| MSLP_at_9am | Atmospheric pressure reduced to mean sea level at 9 am | hectopascals |
| Temp_at_3pm | Temperature at 3 pm | degrees Celsius |
| RH_at_3pm | Relative humidity at 3 pm | percent |
| CLD_at_3pm | Fraction of sky obscured by cloud at 3 pm | eighths |
| Wind_dir_at_3pm | Wind direction averaged over 10 minutes prior to 3 pm | compass points |
| Wind_spd_at_3pm | Wind speed averaged over 10 minutes prior to 3 pm | kilometres per hour |
| MSLP_at_3pm | Atmospheric pressure reduced to mean sea level at 3 pm | hectopascals |
End of explanation
"""
feature_list = ['CLD_at_9am', 'CLD_at_3pm', 'RH_at_9am', 'RH_at_3pm', 'Temps_diff']
"""
Explanation: We use CLD_at_9am, CLD_at_3pm, RH_at_9am, RH_at_3pm, and Temps_diff as features.
End of explanation
"""
X = data[feature_list]
X.tail()
y = data.Sun_level
y.tail()
"""
Explanation: We generate X and y based on the selected features and labels
End of explanation
"""
gnb = GaussianNB() # Note that there is no parameter allowed in GaussianNB()
gnb.fit(X, y)
gnb.score(X, y)
"""
Explanation: (Gaussian) Naive Bayes Classifier
End of explanation
"""
gnb.predict_proba(X)
"""
Explanation: You can get the probability estimates for the input vector X.
End of explanation
"""
min_max_scaler = preprocessing.MinMaxScaler()
X_scaled = min_max_scaler.fit_transform(X)
X_scaled.shape
"""
Explanation: KNN Classifier
Before classification, we need to normalize the data first.
End of explanation
"""
neigh = KNeighborsClassifier(n_neighbors=3,weights='uniform',p=2)
neigh.fit(X_scaled, y)
neigh.score(X_scaled, y)
"""
Explanation: The following parameters affects the performance (accuracy) of the classifier
* n_neighbors: Number of neighbors to use
* weights: weight function of the points. Possible values are:
* ‘uniform’ : uniform weights.
* ‘distance’ : weight points by the inverse of their distance.
* [callable] : a user-defined weight function.
* metric: The distance metric to use. The default metric is minkowski.
* p: Power parameter for the Minkowski metric (assume we use the default metric). Usual choices are:
* p = 2: Euclidean Distance
* p = 1: Manhattan Distance
End of explanation
"""
n_folds = 10
kf = KFold(n=len(X), n_folds=n_folds, shuffle=True, random_state=42)
def test_Gaussian_NB(train_X, train_y, test_X, test_y, debug_flag = False):
gnb = GaussianNB()
gnb.fit(train_X, train_y)
train_error = gnb.score(train_X, train_y)
test_error = gnb.score(test_X, test_y)
if debug_flag:
print('=============')
print('training error:\t{}'.format(train_error))
print('testing error:\t{}'.format(test_error))
return train_error, test_error
def test_KNN(train_X, train_y, test_X, test_y, n_neighbors=3, weights='uniform', p=2, debug_flag = False):
neigh = KNeighborsClassifier(n_neighbors=n_neighbors, weights=weights, p=p)
neigh.fit(train_X, train_y)
train_error = neigh.score(train_X, train_y)
test_error = neigh.score(test_X, test_y)
if debug_flag:
print('=============')
print('training error:\t{}'.format(train_error))
print('testing error:\t{}'.format(test_error))
return train_error, test_error
"""
Explanation: Cross Validation
We use K-fold cross validation to get a reliable estimate of how well a model performs on unseen data (i.e., the generalization error). This helps to (1) determine which model works well, or (2) how to set the values for the hyper parameters for a model. We will see an example of the latter usage in finding the best $k$ for kNN classifiers.
End of explanation
"""
train_error_total = 0
test_error_total = 0
for train, test in kf:
train_X = X.iloc[train]
test_X = X.iloc[test]
train_y = y.iloc[train]
test_y = y.iloc[test]
train_error, test_error = test_Gaussian_NB(train_X, train_y, test_X, test_y)
train_error_total += train_error
test_error_total += test_error
print('===================')
print('avg. training error (Gaussian NB):\t{}'.format(train_error_total/n_folds))
print('avg. testing error (Gaussian NB):\t{}'.format(test_error_total/n_folds))
"""
Explanation: CV on Naive Bayes Classifier
End of explanation
"""
def cv(n_neighbors=3,weights='uniform',p=2):
train_error_total = 0
test_error_total = 0
for train, test in kf:
train_X = X_scaled[train]
test_X = X_scaled[test]
train_y = y.iloc[train]
test_y = y.iloc[test]
train_error, test_error = test_KNN(train_X, train_y, test_X, test_y, n_neighbors, weights, p)
train_error_total += train_error
test_error_total += test_error
return train_error_total/n_folds, test_error_total/n_folds
# print('===================')
# print('avg. training error (kNN):\t{}'.format(train_error_total/n_folds))
# print('avg. testing error (kNN):\t{}'.format(test_error_total/n_folds))
# print()
def cv_plot(weights='uniform',p=2):
cv_res = []
for i in range(1,50):
train_error, test_error = cv(i, weights, p)
cv_res.append([i, train_error, test_error])
cv_res_arr = np.array(cv_res)
plt.figure(figsize=(16,9))
plt.title('Errors vs k for kNN classifiers')
plot_train, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,1], label='training')
plot_test, = plt.plot(cv_res_arr[:,0], cv_res_arr[:,2], label='testing')
plt.legend(handles=[plot_train, plot_test])
plt.ylim((min(min(cv_res_arr[:,1]), min(cv_res_arr[:,2])) - 0.05, max(max(cv_res_arr[:,1]), max(cv_res_arr[:,2]))+0.05))
cv_plot('uniform',2)
cv_plot('uniform',1)
cv_plot('distance',2)
cv_plot('distance',1)
"""
Explanation: CV on KNN Classifier
End of explanation
"""
neigh = KNeighborsClassifier(n_neighbors=27,weights='uniform',p=1)
neigh.fit(X_scaled, y)
neigh.score(X_scaled, y)
"""
Explanation: According to the above results, we decide to use k=27 and uniform weight with p=1. Then we use these parameters to train a classifier over all the data (i.e., X_scaled)
End of explanation
"""
data_missing_sun_hours['Temps_diff'] = data_missing_sun_hours['Temps_max'] - data_missing_sun_hours['Temps_min']
test_data = data_missing_sun_hours[feature_list]
test_data_scaled = min_max_scaler.transform(test_data)
test_data
"""
Explanation: We also need to normalize the data, but be aware that the testing data and the training data must be normalized in the same way (e.g., use min_max_scaler.transform())
End of explanation
"""
data_missing_sun_hours['Sun_level_pred_knn'] = neigh.predict(test_data_scaled)
data_missing_sun_hours
data_missing_sun_hours['Sun_level_pred_nb'] = gnb.predict(test_data)
data_missing_sun_hours
"""
Explanation: Now let's predict the sun_level using both the KNN classifier and the Naive Bayes classifier.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/aec45e1f20057e833cee12bb6bd292dc/10_evoked_overview.ipynb | bsd-3-clause | import os
import mne
"""
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut_creating_data_structures.
As usual we'll start by importing the modules we need:
End of explanation
"""
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
"""
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
"""
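In plain NumPy terms, what `epochs.average()` does to the data is collapse the epochs axis of a `(n_epochs, n_channels, n_times)` array into `(n_channels, n_times)`; a toy sketch (illustrative only, not MNE's implementation):

```python
import numpy as np

# toy epoched data: (n_epochs, n_channels, n_times)
epochs_data = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
evoked_data = epochs_data.mean(axis=0)  # -> (n_channels, n_times)
print(epochs_data.shape, '->', evoked_data.shape)
```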
print(f'Epochs baseline: {epochs.baseline}')
print(f'Evoked baseline: {evoked.baseline}')
"""
Explanation: You may have noticed that MNE informed us that "baseline correction" has been
applied. This happened automatically during creation of the
~mne.Epochs object, but may also be initiated (or disabled!) manually;
we will discuss this in more detail later.
The information about the baseline period of ~mne.Epochs is transferred to
derived ~mne.Evoked objects to maintain provenance as you process your
data:
End of explanation
"""
evoked.plot()
"""
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
"""
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
"""
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
"""
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
"""
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
"""
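The warning above — that spans of equal duration may not contain equal numbers of samples — comes from the rounding involved in mapping times to sample indices. A plain-NumPy sketch of that mapping on a uniform grid (illustrative; the sampling rate and helper below are assumptions, not MNE's implementation):

```python
import numpy as np

sfreq = 600.615                      # a non-integer sampling rate (illustrative)
times = np.arange(-0.3, 0.7, 1.0 / sfreq)

def t_to_index(t):
    # first sample at or after time t on the uniform grid
    return int(np.searchsorted(times, t))

# the number of samples in a 100 ms span depends on where the span starts
counts = {t_to_index(a + 0.1) - t_to_index(a) for a in np.arange(0.0, 0.5, 0.013)}
print(counts)
```

Because 100 ms corresponds to a non-integer number of samples here (about 60.06), identical durations round to either 60 or 61 samples depending on alignment.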
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
"""
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
"""
for evok in evokeds_list:
print(evok.comment)
"""
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
"""
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
"""
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
"""
evokeds_list[0].plot(picks='eeg')
"""
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
~mne.io.Raw object (the default baseline period is (None, 0),
which assured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
"""
# Original baseline (none set).
print(f'Baseline after loading: {evokeds_list[0].baseline}')
# Apply a custom baseline correction.
evokeds_list[0].apply_baseline((None, 0))
print(f'Baseline after calling apply_baseline(): {evokeds_list[0].baseline}')
# Visualize the evoked response.
evokeds_list[0].plot(picks='eeg')
"""
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
"""
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
"""
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
"""
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
"""
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
"""
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
"""
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
"""
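Numerically, the `'nave'`-weighted combination amounts to a weighted mean of the two data arrays, with weights proportional to the epoch counts; a toy sketch (illustrative arrays, not MNE's implementation):

```python
import numpy as np

# two "evoked" arrays averaged from different numbers of epochs
data_a, nave_a = np.array([1.0, 2.0, 3.0]), 145
data_b, nave_b = np.array([3.0, 2.0, 1.0]), 55

weights = np.array([nave_a, nave_b]) / (nave_a + nave_b)
combined = weights[0] * data_a + weights[1] * data_b
print(combined)            # each array weighted by its epoch count
print(nave_a + nave_b)     # effective number of averages
```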
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
"""
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation
"""
|
LxMLS/lxmls-toolkit | labs/notebooks/linear_classifiers/exercises.ipynb | mit | %load_ext autoreload
%autoreload 2
import lxmls.readers.sentiment_reader as srs
scr = srs.SentimentCorpus("books")
"""
Explanation: Exercise 1.1
In this exercise we will use the Amazon sentiment analysis data (Blitzer et al., 2007), where the goal is to classify text documents as expressing a positive or negative sentiment (i.e., a classification problem with two classes). We are going to focus on book reviews. To load the data, type:
End of explanation
"""
import lxmls.classifiers.multinomial_naive_bayes as mnbb
mnb = mnbb.MultinomialNaiveBayes()
params_nb_sc = mnb.train(scr.train_X,scr.train_y)
y_pred_train = mnb.test(scr.train_X,params_nb_sc)
acc_train = mnb.evaluate(scr.train_y, y_pred_train)
y_pred_test = mnb.test(scr.test_X,params_nb_sc)
acc_test = mnb.evaluate(scr.test_y, y_pred_test)
print("Multinomial Naive Bayes Amazon Sentiment Accuracy train: %f test: %f"%(acc_train,acc_test))
"""
Explanation: This will load the data in a bag-of-words representation where rare words (occurring less than 5 times in the training data) are removed.
Implement the Naive Bayes algorithm. Open the file multinomial_naive_bayes.py, which is inside the classifiers folder. In the MultinomialNaiveBayes class you will find the train method. We have already placed some code in that file to help you get started.
After implementing, run Naive Bayes with the multinomial model on the Amazon dataset (sentiment classification) and report results both for training and testing
End of explanation
"""
%matplotlib inline
import lxmls.readers.simple_data_set as sds
sd = sds.SimpleDataSet(
nr_examples=100,
g1=[[-1,-1],1],
g2=[[1,1],1],
balance=0.5,
split=[0.5,0,0.5]
)
"""
Explanation: Observe that words that were not observed at training time cause problems at testtime. Why? To solve this problem, apply a simple add-one smoothing technique: replace the expression in Eq. 1.9 for the estimation of the conditional probabilities by
${\hat P}(w_j|c_k) = \frac{1+\sum_ {m \in \mathcal{I}k} n_j(x^m)}{J + \sum{i=1}^J \sum_ {m\in \mathcal{I}_k} n_i(x^m)}.$
where $J$ is the number of distinct words. This is a widely used smoothing strategy which has a Bayesian interpretation: it corresponds to choosing a uniform prior for the word distribution on both classes, and to replace the maximum likelihood criterion by a maximum a posteriori approach. This is a form of regularization, preventing the model from overfitting on the training data. See e.g. Manning and Schütze (1999); Manning et al. (2008) for more information. Report the new accuracies.
Exercise 1.2
We provide an implementation of the perceptron algorithm in the class Perceptron
(file perceptron.py).
Run the following commands to generate a simple dataset
End of explanation
"""
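The add-one smoothed estimate above can be sketched directly in NumPy. This is an illustrative toy example (the count matrix, labels, and helper name are made up, not the toolkit's `MultinomialNaiveBayes`):

```python
import numpy as np

# toy term-count matrix: rows = documents, columns = vocabulary words
counts = np.array([[2, 0, 1],
                   [1, 1, 0],
                   [0, 0, 3]])
labels = np.array([0, 0, 1])
J = counts.shape[1]          # number of distinct words

def smoothed_word_probs(counts, labels, k, alpha=1.0):
    # P(w_j | c_k) with add-alpha smoothing (alpha=1 gives add-one / Laplace)
    class_counts = counts[labels == k].sum(axis=0)
    return (alpha + class_counts) / (alpha * J + class_counts.sum())

p0 = smoothed_word_probs(counts, labels, 0)
print(p0)  # every word gets nonzero probability, and the vector sums to 1
```

Note that even a word never seen in class 0 gets probability $1/(J + \sum_i n_i) > 0$, which is exactly what fixes the unseen-word problem at test time.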
import lxmls.classifiers.perceptron as percc
perc = percc.Perceptron()
params_perc_sd = perc.train(sd.train_X,sd.train_y)
y_pred_train = perc.test(sd.train_X,params_perc_sd)
acc_train = perc.evaluate(sd.train_y, y_pred_train)
y_pred_test = perc.test(sd.test_X,params_perc_sd)
acc_test = perc.evaluate(sd.test_y, y_pred_test)
print("Perceptron Simple Dataset Accuracy train: %f test: %f"%(acc_train, acc_test))
fig, axis = sd.plot_data("osx")
fig, axis = sd.add_line(fig, axis, params_perc_sd, "Perceptron", "blue")
"""
Explanation: Run the perceptron algorithm on the simple dataset previously generated and report its train and test set accuracy:
End of explanation
"""
import lxmls.classifiers.mira as mirac
mira = mirac.Mira()
mira.regularizer = 1.0 # This is lambda
params_mira_sd = mira.train(sd.train_X,sd.train_y)
y_pred_train = mira.test(sd.train_X,params_mira_sd)
acc_train = mira.evaluate(sd.train_y, y_pred_train)
y_pred_test = mira.test(sd.test_X,params_mira_sd)
acc_test = mira.evaluate(sd.test_y, y_pred_test)
print("Mira Simple Dataset Accuracy train: %f test: %f"%(acc_train, acc_test))
"""
Explanation: Change the code to save the intermediate weight vectors, and plot them every five iterations. What do you observe?
Exercise 1.3
We provide an implementation of the MIRA algorithm. Compare it with the perceptron for various values of $\lambda$
End of explanation
"""
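Saving intermediate weight vectors every five iterations, as the exercise asks, can be sketched with a standalone perceptron (illustrative synthetic data; this is not the toolkit's `Perceptron` class):

```python
import numpy as np

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(20, 2) - 1, rng.randn(20, 2) + 1])
y = np.array([-1] * 20 + [1] * 20)

w, b = np.zeros(2), 0.0
snapshots = []
for epoch in range(1, 21):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified -> perceptron update
            w += yi * xi
            b += yi
    if epoch % 5 == 0:               # keep a copy of the weights every 5 epochs
        snapshots.append((epoch, w.copy(), b))

for epoch, w_snap, b_snap in snapshots:
    print(epoch, w_snap, b_snap)
```

Plotting the separating line for each snapshot (as `sd.add_line` does) shows how the boundary moves as training proceeds.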
fig, axis = sd.add_line(fig, axis, params_mira_sd, "Mira","green")
fig
"""
Explanation: Compare the results achieved and separating hyperplanes found.
End of explanation
"""
import lxmls.classifiers.max_ent_batch as mebc
me_lbfgs = mebc.MaxEntBatch()
me_lbfgs.regularizer = 1.0
params_meb_sd = me_lbfgs.train(sd.train_X,sd.train_y)
y_pred_train = me_lbfgs.test(sd.train_X,params_meb_sd)
acc_train = me_lbfgs.evaluate(sd.train_y, y_pred_train)
y_pred_test = me_lbfgs.test(sd.test_X,params_meb_sd)
acc_test = me_lbfgs.evaluate(sd.test_y, y_pred_test)
print(
"Max-Ent batch Simple Dataset Accuracy train: %f test: %f" %
(acc_train,acc_test)
)
fig, axis = sd.add_line(fig, axis, params_meb_sd, "Max-Ent-Batch","orange")
fig
"""
Explanation: Exercise 1.4
We provide an implementation of the L-BFGS algorithm for training maximum entropy models in the class MaxEnt batch, as well as an implementation of the SGD algorithm in the class MaxEnt online.
End of explanation
"""
params_meb_sc = me_lbfgs.train(scr.train_X,scr.train_y)
y_pred_train = me_lbfgs.test(scr.train_X,params_meb_sc)
acc_train = me_lbfgs.evaluate(scr.train_y, y_pred_train)
y_pred_test = me_lbfgs.test(scr.test_X,params_meb_sc)
acc_test = me_lbfgs.evaluate(scr.test_y, y_pred_test)
print(
"Max-Ent Batch Amazon Sentiment Accuracy train: %f test: %f" %
(acc_train, acc_test)
)
"""
Explanation: Train a maximum entropy model using L-BFGS, on the Amazon dataset (try different values of $\lambda$) and report training and test set accuracy. What do you observe?
End of explanation
"""
import lxmls.classifiers.max_ent_online as meoc
me_sgd = meoc.MaxEntOnline()
me_sgd.regularizer = 1.0
params_meo_sc = me_sgd.train(scr.train_X,scr.train_y)
y_pred_train = me_sgd.test(scr.train_X,params_meo_sc)
acc_train = me_sgd.evaluate(scr.train_y, y_pred_train)
y_pred_test = me_sgd.test(scr.test_X,params_meo_sc)
acc_test = me_sgd.evaluate(scr.test_y, y_pred_test)
print(
"Max-Ent Online Amazon Sentiment Accuracy train: %f test: %f" %
(acc_train, acc_test)
)
"""
Explanation: Now, fix $\lambda$ = 1.0 and train with SGD (you might try to adjust the initial step). Compare the objective values obtained during training with those obtained with L-BFGS. What do you observe?
End of explanation
"""
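What "adjusting the initial step" means can be sketched with a tiny SGD loop on a regularized logistic loss, using a decaying step size $\eta_t = \eta_0/(1+t)$. All data and names here are illustrative, not the toolkit's `MaxEntOnline`:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
lam, eta0 = 1.0 / len(X), 0.5
objective = []
for t in range(5):                       # epochs
    eta = eta0 / (1.0 + t)               # decaying step size
    for i in rng.permutation(len(X)):
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))
        w -= eta * (lam * w + (p - y[i]) * X[i])
    # track the regularized objective once per epoch
    p_all = 1.0 / (1.0 + np.exp(-(X @ w)))
    nll = -np.mean(y * np.log(p_all + 1e-12) + (1 - y) * np.log(1 - p_all + 1e-12))
    objective.append(nll + 0.5 * lam * w @ w)
print(objective)
```

With a larger `eta0` the first epochs move faster but the objective can oscillate; L-BFGS, in contrast, decreases the batch objective monotonically.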
import lxmls.classifiers.svm as svmc
svm = svmc.SVM()
svm.regularizer = 1.0 # This is lambda
params_svm_sd = svm.train(sd.train_X,sd.train_y)
y_pred_train = svm.test(sd.train_X,params_svm_sd)
acc_train = svm.evaluate(sd.train_y, y_pred_train)
y_pred_test = svm.test(sd.test_X,params_svm_sd)
acc_test = svm.evaluate(sd.test_y, y_pred_test)
print("SVM Online Simple Dataset Accuracy train: {} test: {}".format(acc_train,acc_test))
fig,axis = sd.add_line(fig,axis,params_svm_sd,"SVM","orange")
params_svm_sc = svm.train(scr.train_X,scr.train_y)
y_pred_train = svm.test(scr.train_X,params_svm_sc)
acc_train = svm.evaluate(scr.train_y, y_pred_train)
y_pred_test = svm.test(scr.test_X,params_svm_sc)
acc_test = svm.evaluate(scr.test_y, y_pred_test)
print("SVM Online Amazon Sentiment Accuracy train: {} test: {}".format(acc_train,acc_test))
"""
Explanation: Exercise 1.5
Run the SVM primal algorithm. Then, repeat the MaxEnt exercise now using SVMs, for several values of $\lambda$:
End of explanation
"""
fig, axis = sd.add_line(fig, axis, params_svm_sd, "SVM", "yellow")
fig
"""
Explanation: Compare the results achieved and separating hyperplanes found.
End of explanation
"""
|
mmatera/qmnotebooks | Graficas 2 y 3d y gráficas de nivel con matplotlib.ipynb | gpl-3.0 | # Encabezado: cargar librerías
%matplotlib inline
import numpy as np # Librería para funciones matemáticas
import matplotlib.pyplot as plt # Librería para graficos
import matplotlib.cm as cm # Módulo para controlar mapas de colores
from mpl_toolkits.mplot3d import Axes3D # Módulo 3D
# Graficar una función
x = np.linspace(0,2*np.pi,100)
y = np.sin(x)
plt.plot(x,y)
# Esta instrucción muestra el gráfico. El último gráfico siempre se muestra
plt.show()
# Several plots can be drawn together
x = np.linspace(0,2*np.pi,100)
y = np.sin(x)
plt.plot(x,y,label="$\sin(x)$")
plt.plot(x,x,label="x")
plt.plot(x,x-x**3/6,label="$x-x^3/6$")
plt.plot(x,x-x**3/6+x**5/120,label="$x-x^3/6+x^5/120$")
# Set the plotted range
plt.ylim(-2,2)
plt.xlim(0,10)
plt.title("Convergence of the Taylor series for $\sin(x)$")
# Show the legend
plt.legend()
plt.show()
# The plot can also be saved as an image file
plt.savefig("taylor.png")
plt.savefig("taylor.pdf")
"""
Explanation: In this notebook you will find some examples of plotting curves and surfaces. To run a cell,
press the "Control" and "Enter" keys simultaneously. You can modify the code to see how the results change.
End of explanation
"""
xs = np.linspace(-5,5,100)
normal = np.exp(-xs**2/2)
xpositivas = np.array([x if x>0 else 0 for x in xs])
poisson = np.exp(-2*xpositivas) * xpositivas/.25
plt.scatter(xs, normal, label="normal curve", color="red", marker="*")
plt.scatter(xs, poisson, label="gamma-like curve", color="blue", marker=".")
plt.legend()
"""
Explanation: Scatter plots
End of explanation
"""
# Plot a parametric curve in two dimensions
t = np.linspace(0,10*np.pi,100)
x = t * np.cos(t)
y = t * np.sin(t)
plt.plot(x,y)
# Plot a parametric curve in three dimensions
# Generate a list of values for the parameter
t = np.linspace(0,10*np.pi,100)
# Generate the curve
x = t * np.cos(t)
y = t * np.sin(t)
z = t
# Build the figure and the axes
fig = plt.figure()
ax = fig.gca(projection='3d')
# Plot the parametric curve
ax.plot(x, y, z, label='parametric curve 1')
ax.plot(x, z, y, label='parametric curve 2')
ax.legend()
# Plot a parametric surface in three dimensions
u = np.linspace(0,np.pi,100)
v = np.linspace(0,2*np.pi,100)
# Build a grid of values from the pair of parameters
U, V = np.meshgrid(u, v)
# From the grid, build the set of points. In this case, a sphere.
x = np.sin(U)*np.cos(V)
y = np.sin(U)*np.sin(V)
z = np.cos(U)
# Build the figure and the axes
fig = plt.figure()
ax = fig.gca(projection='3d')
# Plot a (filled) surface
ax.plot_surface(x, y, z, color="orange")
# Here we build a hyperboloid
x = np.sinh(.5*U)*np.cos(V)
y = np.sinh(.5*U)*np.sin(V)
z = np.cosh(.5*U)
# Plot a wireframe (no fill)
ax.plot_wireframe(x, y, z, label='hyperboloid',color="red")
ax.legend()
"""
Explanation: Plots of parametric curves and surfaces
End of explanation
"""
u = np.linspace(-np.pi,np.pi,100)
v = np.linspace(-np.pi,np.pi,100)
# Build a grid of values from the pair of parameters
x, y = np.meshgrid(u, v)
z = np.cos(x)*np.sin(y)
plt.pcolor(x,y,z,cmap=cm.coolwarm)
plt.show()
plt.contour(x,y,z,cmap=cm.coolwarm)
plt.show()
plt.contourf(x,y,z,cmap=cm.coolwarm)
"""
Explanation: Contour (elevation) plots
End of explanation
"""
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make the grid
x, y, z = np.meshgrid(np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.8))
# Make the direction data for the arrows
u = np.sin(np.pi * x) * np.cos(np.pi * y) * np.cos(np.pi * z)
v = -np.cos(np.pi * x) * np.sin(np.pi * y) * np.cos(np.pi * z)
w = (np.sqrt(2.0 / 3.0) * np.cos(np.pi * x) * np.cos(np.pi * y) *
np.sin(np.pi * z))
ax.quiver(x, y, z, u, v, w, length=0.1, normalize=True)
plt.show()
fig = plt.figure(figsize=(8,8))
ax = fig.gca(projection='3d')
# Make the grid
theta, phi = np.meshgrid(np.arange(0, 3.1416, 0.1), np.arange(-3.1415, 3.1416, 0.2))
theta2, phi2 = np.meshgrid(np.arange(0, 3.1416, 0.5), np.arange(-3.1415, 3.1416, 0.5))
xs = np.sin(theta)*np.cos(phi)
ys = np.sin(theta)*np.sin(phi)
zs = np.cos(theta)
xs2 = np.sin(theta2)*np.cos(phi2)
ys2 = np.sin(theta2)*np.sin(phi2)
zs2 = np.cos(theta2)
ax.plot_surface(xs,ys,zs)
size = 3
ax.quiver(xs2, ys2, zs2, size*xs2, size*ys2, size*zs2, length=0.1, normalize=False,color="red")
plt.show()
"""
Explanation: Vector fields
End of explanation
"""
|
dbrattli/OSlash | notebooks/Reader.ipynb | apache-2.0 | from oslash import Reader
unit = Reader.unit
"""
Explanation: The Reader Monad
The Reader monad pass the state you want to share between functions. Functions may read that state, but can't change it. The reader monad lets us access shared immutable state within a monadic context. In the Reader monad this shared state is called the environment.
The Reader is just a fancy name for a wrapped function, so this monad could also be called the Function monad, or perhaps the Callable monad. Reader is all about composing wrapped functions.
This IPython notebook uses the OSlash library for Python 3.4, aka Ø. You can install Ø using:
```bash
pip3 install oslash
```
End of explanation
"""
r = Reader(lambda name: "Hi %s!" % name)
"""
Explanation: A Reader wraps a function, so it takes a callable:
End of explanation
"""
r("Dag")
"""
Explanation: In Python you can call this wrapped function as any other callable:
End of explanation
"""
r = unit(42)
r("Ignored")
"""
Explanation: Unit
Unit is a constructor that takes a value and returns a Reader that ignores the environment. That is it ignores any value that is passed to the Reader when it's called:
End of explanation
"""
r = Reader(lambda name: "Hi %s!" % name)
b = r | (lambda x: unit(x.replace("Hi", "Hello")))
b("Dag")
"""
Explanation: Bind
You can bind a Reader to a monadic function using the pipe | operator (The bind operator is called >>= in Haskell). A monadic function is a function that takes a value and returns a monad, and in this case it returns a new Reader monad:
End of explanation
"""
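Since a Reader is just a wrapped function, bind amounts to composing functions that all read the same environment. A minimal plain-Python sketch of the idea (the helper names `unit_fn` and `bind` are illustrative, not OSlash's implementation):

```python
def unit_fn(value):
    # ignore the environment, always return the value
    return lambda env: value

def bind(reader, fn):
    # run reader on env, feed its result to fn, run the new reader on the same env
    return lambda env: fn(reader(env))(env)

greet = lambda name: "Hi %s!" % name
polite = bind(greet, lambda s: unit_fn(s.replace("Hi", "Hello")))
print(polite("Dag"))  # -> Hello Dag!
```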
r = Reader(lambda name: "Hi %s!" % name)
a = Reader.pure(lambda x: x + "!!!") * r
a("Dag")
"""
Explanation: Applicative
Apply (*) is a beefed up map. It takes a Reader that has a function in it and another Reader, and extracts that function from the first Reader and then maps it over the second one (basically composes the two functions).
End of explanation
"""
from oslash import MonadReader
asks = MonadReader.asks
ask = MonadReader.ask
"""
Explanation: MonadReader
The MonadReader class provides a number of convenience functions that are very useful when working with a Reader monad.
End of explanation
"""
r = ask() | (lambda x: unit("Hi %s!" % x))
r("Dag")
"""
Explanation: Ask
Provides a way to easily access the environment. Ask lets us read the environment and then play with it:
End of explanation
"""
r = asks(len)
r("banana")
"""
Explanation: Asks
Given a function it returns a Reader which evaluates that function and returns the result.
End of explanation
"""
from oslash import Reader, MonadReader
ask = MonadReader.ask
def hello():
return ask() | (lambda name:
unit("Hello, " + name + "!"))
def bye():
return ask() | (lambda name:
unit("Bye, " + name + "!"))
def convo():
return hello() | (lambda c1:
bye() | (lambda c2:
unit("%s %s" % (c1, c2))))
r = convo()
print(r("dag"))
"""
Explanation: A Longer Example
This example has been translated to Python from https://gist.github.com/egonSchiele/5752172.
End of explanation
"""
|
anhaidgroup/py_entitymatching | notebooks/guides/end_to_end_em_guides/.ipynb_checkpoints/Basic EM Workflow Restaurants - 1-checkpoint.ipynb | bsd-3-clause | import sys
sys.path.append('/Users/pradap/Documents/Research/Python-Package/anhaid/py_entitymatching/')
import py_entitymatching as em
import pandas as pd
import os
# Display the versions
print('python version: ' + sys.version )
print('pandas version: ' + pd.__version__ )
print('magellan version: ' + em.__version__ )
"""
Explanation: Basic EM workflow 1 (Restaurants data set)
Introduction
This IPython notebook explains a basic workflow to match two tables using py_entitymatching. Our goal is to match restaurants from Fodors and Zagat sites. The datasets contain information about the restaurants.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
# Get the paths
path_A = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/fodors.csv'
path_B = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/zagats.csv'
A = em.read_csv_metadata(path_A, key='id')
B = em.read_csv_metadata(path_B, key='id')
print('Number of tuples in A: ' + str(len(A)))
print('Number of tuples in B: ' + str(len(B)))
print('Number of tuples in A X B (i.e the cartesian product): ' + str(len(A)*len(B)))
A.head()
B.head()
# Display the keys of the input tables
em.get_key(A), em.get_key(B)
"""
Explanation: Matching two tables typically consists of the following three steps:
1. Reading the input tables
2. Blocking the input tables to get a candidate set
3. Matching the tuple pairs in the candidate set
Read input tables
We begin by loading the input tables. For the purpose of this guide, we use the datasets that are included with the package.
End of explanation
"""
ob = em.OverlapBlocker()
C = ob.block_tables(A, B, 'name', 'name',
l_output_attrs=['name', 'addr', 'city', 'phone'],
r_output_attrs=['name', 'addr', 'city', 'phone'],
overlap_size=1, show_progress=False)
C.head()
"""
Explanation: Block tables to get candidate set
Before we do the matching, we would like to remove the obviously non-matching tuple pairs from the input tables. This would reduce the number of tuple pairs considered for matching.
py_entitymatching provides four different blockers: (1) attribute equivalence, (2) overlap, (3) rule-based, and (4) black-box. The user can mix and match these blockers to form a blocking sequence applied to input tables.
For the matching problem at hand, we know that two restaurants with no overlap between the names will not match. So we decide to apply blocking over names:
End of explanation
"""
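To make the overlap idea concrete, here is a rough toy sketch of what token-overlap blocking does (my own illustration with made-up restaurant names, not the real py_entitymatching API):

```python
# Keep only pairs whose name fields share at least `size` word tokens;
# everything else is pruned from the cartesian product before matching.
A_toy = [(1, 'Spago'), (2, 'Art Cafe')]
B_toy = [(10, 'Spago (Los Angeles)'), (11, 'Cafe Bizou')]

def overlaps(a, b, size=1):
    return len(set(a.lower().split()) & set(b.lower().split())) >= size

pairs = [(ida, idb)
         for ida, name_a in A_toy
         for idb, name_b in B_toy
         if overlaps(name_a, name_b)]
print(pairs)  # [(1, 10), (2, 11)] -- the 2x2 cartesian product is pruned
```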
# Sample candidate set
S = em.sample_table(C, 450)
"""
Explanation: Match tuple pairs in candidate set
In this step, we want to match the tuple pairs in the candidate set. Specifically, we use a learning-based method for matching purposes.
This typically involves the following three steps:
Sampling and labeling the candidate set
Train matcher using labeled data
Predict the matches in the candidate set using trained matcher
Sampling and labeling the candidate set
First, we randomly sample 450 tuple pairs for labeling purposes.
End of explanation
"""
# Label S
G = em.label_table(S, 'gold')
"""
Explanation: Next, we label the sampled candidate set. Specifically, we would enter 1 for a match and 0 for a non-match.
End of explanation
"""
path_G = em.get_install_path() + os.sep + 'datasets' + os.sep + 'end-to-end' + os.sep + 'restaurants/lbl_restnt_wf1.csv'
G = em.read_csv_metadata(path_G,
key='_id',
ltable=A, rtable=B,
fk_ltable='ltable_id', fk_rtable='rtable_id')
len(G)
"""
Explanation: For the purposes of this guide, we will load in a pre-labeled dataset (of 450 tuple pairs) included in this package.
End of explanation
"""
# Generate features automatically
feature_table = em.get_features_for_matching(A, B)
"""
Explanation: Train matcher using labeled data
First, we need to create a set of features. py_entitymatching provides a way to automatically generate features based on the attributes in the input tables. For the purposes of this guide, we use the automatically generated features.
End of explanation
"""
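The auto-generated features are attribute-level string-similarity measures. As a hand-rolled sketch of one such feature, here is Jaccard similarity over word tokens (a hypothetical stand-in for illustration, not the package's own implementation):

```python
def jaccard_word_sim(s1, s2):
    """Jaccard similarity between the word-token sets of two strings."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    if not t1 and not t2:
        return 0.0
    return len(t1 & t2) / len(t1 | t2)

print(jaccard_word_sim('art cafe', 'art cafe grill'))  # 2/3
```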
# Select the attrs. to be included in the feature vector table
attrs_from_table = ['ltable_name', 'ltable_addr', 'ltable_city', 'ltable_phone',
'rtable_name', 'rtable_addr', 'rtable_city', 'rtable_phone']
# Convert the labeled data to feature vectors using the feature table
H = em.extract_feature_vecs(G,
feature_table=feature_table,
attrs_before = attrs_from_table,
attrs_after='gold',
show_progress=False)
"""
Explanation: Next, we convert the labeled data to feature vectors using the feature table
End of explanation
"""
# Instantiate the RF Matcher
rf = em.RFMatcher()
# Get the attributes to be projected while training
attrs_to_be_excluded = []
attrs_to_be_excluded.extend(['_id', 'ltable_id', 'rtable_id', 'gold'])
attrs_to_be_excluded.extend(attrs_from_table)
# Train using feature vectors from the labeled data.
rf.fit(table=H, exclude_attrs=attrs_to_be_excluded, target_attr='gold')
"""
Explanation: Then, we train the learning-based matcher using the feature vectors. For the purposes of the guide, we will use Random Forest matcher that is included in the py_entitymatching package.
End of explanation
"""
# Select the attrs. to be included in the feature vector table
attrs_from_table = ['ltable_name', 'ltable_addr', 'ltable_city', 'ltable_phone',
'rtable_name', 'rtable_addr', 'rtable_city', 'rtable_phone']
# Convert the candidate set to feature vectors using the feature table
L = em.extract_feature_vecs(C, feature_table=feature_table,
attrs_before= attrs_from_table,
show_progress=False)
"""
Explanation: Predict the matches in the candidate set using trained matcher
Now, we use the trained matcher to predict matches in the candidate set. To do that, first we need to convert the candidate set to feature vectors.
End of explanation
"""
# Get the attributes to be excluded while predicting
attrs_to_be_excluded = []
attrs_to_be_excluded.extend(['_id', 'ltable_id', 'rtable_id'])
attrs_to_be_excluded.extend(attrs_from_table)
# Predict the matches
predictions = rf.predict(table=L, exclude_attrs=attrs_to_be_excluded,
append=True, target_attr='predicted', inplace=False)
predictions.head()
"""
Explanation: Next, we predict the matches in the candidate set using the trained matcher and the feature vectors.
End of explanation
"""
# Get the attributes to be projected out
attrs_proj = []
attrs_proj.extend(['_id', 'ltable_id', 'rtable_id'])
attrs_proj.extend(attrs_from_table)
attrs_proj.append('predicted')
# Project the attributes
predictions = predictions[attrs_proj]
predictions.head()
"""
Explanation: Finally, project the attributes and the predictions from the predicted table.
End of explanation
"""
|
hktxt/MachineLearning | Map,_Filter,_and_Reduce_Functions.ipynb | gpl-3.0 | # 计算一系列半径的圆的面积
import math
# 计算面积
def area(r):
"""area of a circle with radius 'r'."""
return math.pi * (r**2)
# 半径
radii = [2, 5, 7, 1, 0.3, 10]
# method 1
areas = []
for r in radii:
a = area(r)
areas.append(a)
areas
# method 2
[area(r) for r in radii]
# method 3, with the map function; map takes 2 arguments: first a function, second a list/tuple or other iterable
map(area, radii)
list(map(area, radii))
# more examples
temps = [("Berlin", 29), ("Cairo", 36), ("Buenos Aires", 19), ("Los Angeles", 26), ("Tokyo", 27), ("New York", 28), ("London", 22), ("Bejing", 32)]
c_to_f = lambda data: (data[0], (9/5)*data[1] + 32)
list(map(c_to_f, temps))
"""
Explanation: Map, Filter, and Reduce Functions
https://www.youtube.com/watch?v=hUes6y2b--0
End of explanation
"""
# let's select all data that above the mean
import statistics
data = [1.3, 2.7, 0.8, 4.1, 4.3, -0.1]
avg = statistics.mean(data);avg
# like map, filter takes a function first, then a list/tuple or other iterable
filter(lambda x: x>avg, data)
list(filter(lambda x: x>avg, data))
list(filter(lambda x: x<avg, data))
# remove empty data
countries = ["", "China", "USA", "Chile", "", "", "Brazil"]
list(filter(None, countries))
"""
Explanation: filter function is used to select certain pieces of data from a list/tuple or other collection of data.
End of explanation
"""
# data: [a1, a2, a3, ....., an]
# function: f(x, y)
# reduce(f, data):
# step 1: val1 = f(a1, a2)
# step 2: val2 = f(val1, a3)
# step 3: val3 = f(val2, a4)
# ......
# step n-1: valn-1= f(valn-2, an)
# return valn-1
from functools import reduce
# multiply all numbers in a list
data = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
multiplier = lambda x, y: x*y
reduce(multiplier, data)
# use for loop instead
product = 1
for i in data:
product = product * i
product
# sum
sum(data)
# use reduce to sum
reduce(lambda x, y: x + y, data)
"""
Explanation: reduce
End of explanation
"""
|
TomAugspurger/PracticalPandas | Practical Pandas 03 - EDA.ipynb | mit | %matplotlib inline
import os
import datetime
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_hdf(os.path.join('data', 'cycle_store.h5'), key='with_weather')
df.head()
"""
Explanation: Welcome back. As a reminder:
In part 1 we got dataset with my cycling data from last year merged and stored in an HDF5 store
In part 2 we did some cleaning and augmented the cycling data with data from http://forecast.io.
Today we'll use pandas, seaborn, and
matplotlib to do some exploratory data analysis.
For fun, we'll make some maps at the end using folium.
End of explanation
"""
df.duplicated().sum()
"""
Explanation: Upon further inspection, it looks like some of our rows are duplicated.
End of explanation
"""
df.time.duplicated().sum()
"""
Explanation: The problem is actually a bit more severe than that. The app I used to collect the data sometimes records multiple observations per second, but only reports the results at the second frequency.
End of explanation
"""
df = df.drop_duplicates(subset=['time']).set_index('time')
df.index.is_unique
"""
Explanation: What to do here? We could change the frequency to micro- or nanosecond resolution, and add, say, a half-second onto the duplicated observations.
Since this is just for fun though, I'm going to do the easy thing and throw out the duplicates (in real life you'll want to make sure this doesn't affect your analysis).
Then we can set the time column to be our index, which will make our later analysis a bit simpler.
End of explanation
"""
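The alternative mentioned above, nudging each duplicated timestamp by a sub-second amount, could be sketched like this (a hypothetical fix on a toy frame, not what we actually do here):

```python
import pandas as pd

# Toy frame with a duplicated second-resolution timestamp.
t = pd.to_datetime(['2014-01-01 08:00:00', '2014-01-01 08:00:00',
                    '2014-01-01 08:00:01'])
df_toy = pd.DataFrame({'time': t})

# Within each group of identical timestamps, shift the k-th duplicate
# by k * 500 milliseconds so the column becomes unique.
offset = pd.to_timedelta(df_toy.groupby('time').cumcount() * 500, unit='ms')
df_toy['time'] = df_toy['time'] + offset
print(df_toy['time'].is_unique)  # True
```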
df = df.tz_localize('UTC').tz_convert('US/Central')
df.head()
"""
Explanation: Because of a bug in pandas, we lost our timezone information when we filled in our missing values. Until that's fixed we'll have to manually add back the timezone info and convert. The actual values stored were UTC (which is good practice whenever you have timezone-aware timestamps); pandas just doesn't know that they're UTC.
End of explanation
"""
df['time'] = df.index.time
"""
Explanation: Timelines
We'll store the time part of the DatetimeIndex in a column called time.
End of explanation
"""
ax = df.plot(x='time', y='distance_miles')
"""
Explanation: With these, let's plot how far along I was in my ride (distance_miles) at the time of day.
End of explanation
"""
df['is_morning'] = df.time < datetime.time(12)
ax = df[df.is_morning].plot(x='time', y='distance_miles')
"""
Explanation: There's a couple problems. First of all, the data are split into morning and afternoon.
Let's create a new, boolean, column indicating whether the ride took place in the morning or afternoon.
End of explanation
"""
axes = df[df.is_morning].groupby(df.ride_id).plot(x='time',
y='distance_miles',
color='k',
figsize=(12, 5))
"""
Explanation: Better, but this still isn't quite what we want. When we call .plot(x=..., y=...) the data are sorted before being plotted. This means that an observation from one ride gets mixed up with another. So we'll need to group by ride, and then plot.
End of explanation
"""
axes = df[~df.is_morning].groupby(df.ride_id).plot(x='time',
y='distance_miles',
color='k',
figsize=(12, 5))
"""
Explanation: Much better. Groupby is one of the most powerful operations in pandas, and it pays to understand it well. Here's the same thing for the evening.
End of explanation
"""
ride_time = df.groupby(['ride_id', 'is_morning'])['ride_time_secs'].agg('max')
mean_time = ride_time.groupby(level=1).mean().rename(
index={True: 'morning', False: 'evening'})
mean_time / 60
"""
Explanation: Fun. The horizontal distance is the length of time it took me to make the ride. The starting point on the horizontal axis conveys the time that I set out.
I like this chart because it also conveys the start time of each ride.
The plot shows that the morning ride typically took longer, but we can verify that.
End of explanation
"""
fig, ax = plt.subplots()
morning_color = sns.xkcd_rgb['amber']
evening_color = sns.xkcd_rgb['dusty purple']
_ = df[~df.is_morning].groupby(df.ride_id).plot(x='time', y='distance_miles',
color=evening_color, figsize=(12, 5),
ax=ax, alpha=.9, grid=False)
ax2 = ax.twiny()
_ = df[df.is_morning].groupby(df.ride_id).plot(x='time', y='distance_miles',
color=morning_color, figsize=(12, 5),
ax=ax2, alpha=.9, grid=False)
# Create fake lines for our custom legend.
morning_legend = plt.Line2D([0], [0], color=morning_color)
evening_legend = plt.Line2D([0], [0], color=evening_color)
ax.legend([morning_legend, evening_legend], ['Morning', 'Evening'])
"""
Explanation: So the morning ride is typically shorter! But I think I know what's going on. Our earlier plots were misleading, since the ranges of the horizontal axes weren't identical. Always check the axes!
At risk of raising the ire of Hadley Wickham, we'll plot these on the same plot, with a secondary x-axis. (I think it's OK in this case since the second is just a transformation – a 10 hour or so shift – of the first).
We'll plot evening first, use matplotlib's twiny method, and plot the morning on the second axes.
End of explanation
"""
long_ride_id = df.groupby('ride_id')['distance_miles'].max().argmax()
long_ride_id
"""
Explanation: There's a bit of boilerplate at the end. pandas tries to add a legend element for each ride ID. It doesn't know that we only care whether it's morning or evening. So instead we just fake it, creating two lines that are invisible, and labeling them appropriately.
Anyway, we've accomplished our original goal. The steeper slope on the evening rides shows that they typically took me less time. I guess I wasn't too excited to get to school in the morning. The joys of being a grad student.
I'm sure I'm not the only one noticing that long evening ride sticking out from the rest. Let's note its ride ID and follow up. We need the ride_id, so group by that. It's the longest ride, so take the max of the distance. And we want the ride_id of the maximum distance, so take the argmax of that. These last three sentences can be beautifully chained together into a single line that reads like poetry.
End of explanation
"""
def inline_map(map):
"""
Embeds the HTML source of the map directly into the IPython notebook.
This method will not work if the map depends on any files (json data). Also this uses
the HTML5 srcdoc attribute, which may not be supported in all browsers.
"""
from IPython.display import HTML
map._build_map()
    return HTML('<iframe srcdoc="{srcdoc}" style="width: 100%; height: 510px; border: none"></iframe>'.format(srcdoc=map.HTML.replace('"', '&quot;')))
"""
Explanation: Cartography
We'll use Folium to do a bit of map plotting.
If you're using python3 (like I am) you'll need to use this pull request from tbicr,
or just clone the master of my fork, where I've merged the changes.
Since this is a practical pandas post, and not an intro to folium, I won't delve into the details here. The basics are that we initialize a Map with some coordinates and tiles, and then add lines to that map. The lines will come from the latitude and longitude columns of our DataFrame.
Here's a small helper function from birdage to inline the map in the notebook. This allows it to be viewable on nbviewer (which is rendering this blog post).
End of explanation
"""
import folium
folium.initialize_notebook()
lat, lon = df[['latitude', 'longitude']].mean()
mp = folium.Map(location=(lat, lon), tiles='OpenStreetMap', zoom_start=13)
mp.line(locations=df.loc[df.ride_id == 42, ['latitude', 'longitude']].values)
mp.line(locations=df.loc[df.ride_id == long_ride_id, ['latitude', 'longitude']].values,
line_color='#800026')
inline_map(mp)
"""
Explanation: I've plotted two rides, a hopefully representative ride (#42) and the long ride from above.
End of explanation
"""
mp = folium.Map(location=(lat, lon), tiles='OpenStreetMap', zoom_start=13)
for ride_id in df.ride_id.unique():
mp.line(locations=df.loc[df.ride_id == ride_id, ['latitude', 'longitude']].values,
line_weight=1, line_color='#111', line_opacity=.3)
inline_map(mp)
"""
Explanation: If you pan around a bit, it looks like the GPS receiver on my phone was just going crazy.
But without visualizing the data (as a map), there'd be no way to know that.
For fun, we can plot all the rides.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_brainstorm_phantom_ctf.ipynb | bsd-3-clause | # Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import fit_dipole
from mne.datasets.brainstorm import bst_phantom_ctf
from mne.io import read_raw_ctf
print(__doc__)
"""
Explanation: Brainstorm CTF phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm CTF phantom
tutorial dataset. For comparison, see [1]_ and:
http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf
References
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
End of explanation
"""
data_path = bst_phantom_ctf.data_path()
# Switch to these to use the higher-SNR data:
# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')
# dip_freq = 7.
raw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')
dip_freq = 23.
erm_path = op.join(data_path, 'emptyroom_20150709_01.ds')
raw = read_raw_ctf(raw_path, preload=True)
"""
Explanation: The data were collected with a CTF system at 2400 Hz.
End of explanation
"""
sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]
plt.figure()
plt.plot(times[times < 1.], sinusoid.T[times < 1.])
"""
Explanation: The sinusoidal signal is generated on channel HDAC006, so we can use
that to obtain precise timing.
End of explanation
"""
events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp
events = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T
"""
Explanation: Let's create some events using this signal by thresholding the sinusoid.
End of explanation
"""
raw.plot()
"""
Explanation: The CTF software compensation works reasonably well:
End of explanation
"""
raw.apply_gradient_compensation(0) # must un-do software compensation first
mf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)
raw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)
raw.plot()
"""
Explanation: But here we can get slightly better noise suppression, lower localization
bias, and a better dipole goodness of fit with spatio-temporal (tSSS)
Maxwell filtering:
End of explanation
"""
tmin = -0.5 / dip_freq
tmax = -tmin
epochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,
baseline=(None, None))
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.crop(0., 0.)
"""
Explanation: Our choice of tmin and tmax should capture exactly one cycle, so
we can make the unusual choice of baselining using the entire epoch
when creating our evoked data. We also then crop to a single time point
(at t=0) because this is a peak in our signal.
End of explanation
"""
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)
mne.viz.plot_alignment(raw.info, subject='sample',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain'])
del raw, epochs
"""
Explanation: Let's use a sphere head geometry model and see the coordinate
alignment and the sphere location.
End of explanation
"""
raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)
raw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',
**mf_kwargs)
cov = mne.compute_raw_covariance(raw_erm)
del raw_erm
dip, residual = fit_dipole(evoked, cov, sphere)
"""
Explanation: To do a dipole fit, let's use the covariance provided by the empty room
recording.
End of explanation
"""
expected_pos = np.array([18., 0., 49.])
diff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))
print('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))
print('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))
print('Difference: %0.1f mm' % diff)
print('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))
print('GOF: %0.1f %%' % dip.gof[0])
"""
Explanation: Compare the actual position with the estimated one.
End of explanation
"""
|
maxentile/equilibrium-sampling-tinker | Bidirectional AIS for free energy computations tinker.ipynb | mit | import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
plt.rc('font', family='serif')
%matplotlib inline
# initial system
def harmonic_osc_potential(x):
''' quadratic energy well '''
return np.sum(x**2)
# alchemical perturbation
def egg_crate_potential(x,freq=20):
''' bumpy potential '''
return np.sum(np.sin(freq*x))
# target system
def total_potential(x,lam=0):
''' bumpy energy well at lam=1, smooth well at lam=0 '''
return harmonic_osc_potential(x)+ lam* egg_crate_potential(x)
# reduced potential for the ensemble of interest
def reduced_potential(x,lam=0.0,temp=1.0):
''' assuming a heat bath '''
return total_potential(x,lam) / temp
# probability of a state
def boltzmann_weight(x,lam=0.0,temp=1.0):
''' at equilibrium, the log probability (density) of a state is proportional to
minus its reduced potential '''
return np.exp(-reduced_potential(x,lam,temp))
x = np.linspace(-3,3,10000)
temp=1.0
# potential
u_0 = np.array([total_potential(x_,lam=0) for x_ in x])
u_1 = np.array([total_potential(x_,lam=1) for x_ in x])
plt.plot(x,u_0,label=r'$\lambda=0$')
plt.plot(x,u_1,label=r'$\lambda=1$')
plt.xlabel(r'$x$')
plt.ylabel(r'$U(x)$')
plt.title('Potential energy')
plt.legend(loc='best')
plt.figure()
# boltzmann distribution
p_0 = np.array([boltzmann_weight(x_,lam=0) for x_ in x])
p_1 = np.array([boltzmann_weight(x_,lam=1) for x_ in x])
plt.plot(x,p_0,label=r'$\lambda=0$')
plt.plot(x,p_1,label=r'$\lambda=1$')
plt.xlabel(r'$x$')
plt.ylabel(r'$p(x) = \mathcal{Z}^{-1} \exp[-U(x)/T]$')
plt.title('Equilibrium distribution')
plt.legend(loc='best')
# let's collect a ton of approximate samples using MCMC,
# so we can pretend we have exact samples later
# this is analogous to the situation if we can collect approximate
# samples at the desired end states using MD or MCMC
import emcee
n_walkers=10
n_samples=50000
burn_in = (n_samples * n_walkers) // 3  # integer division: used as a slice index below
log_p0 = lambda x:-reduced_potential(x,lam=0,temp=temp)
sampler_0 = emcee.EnsembleSampler(n_walkers,1,log_p0)
_ = sampler_0.run_mcmc(npr.randn(n_walkers,1),n_samples)
samples_0 = sampler_0.flatchain[burn_in:]
def sample_from_initial():
return samples_0[npr.randint(len(samples_0))]
log_p1 = lambda x:-reduced_potential(x,lam=1,temp=temp)
sampler_1 = emcee.EnsembleSampler(n_walkers,1,log_p1)
_ = sampler_1.run_mcmc(npr.randn(n_walkers,1),n_samples)
samples_1 = sampler_1.flatchain[burn_in:]
def sample_from_target():
return samples_1[npr.randint(len(samples_1))]
# how many samples do we have
len(samples_1)
plt.hist(samples_0,range=(-3,3),bins=200,histtype='stepfilled');
plt.xlabel(r'$x$')
plt.ylabel('# samples collected')
plt.title(r'Approximate samples from $p_{\lambda=0}$')
plt.hist(samples_1,range=(-3,3),bins=200,histtype='stepfilled');
plt.xlabel(r'$x$')
plt.ylabel('# samples collected')
plt.title(r'Approximate samples from $p_{\lambda=1}$')
# changing temperature
temp=1.0
p_0 = np.exp(-u_0/temp)
plt.plot(x,p_0)
for temp_ in np.logspace(0,3,10)[:-1]*temp:
p_t = np.exp(-u_1/temp_)
plt.plot(x,p_t,alpha=0.5)
plt.plot(x,p_1)
plt.xlabel(r'$x$')
plt.ylabel(r'$p(x)$')
plt.title('Varying temperature')
plt.plot(x,np.cumsum(p_0)*(x[1]-x[0]),label=r'$\lambda=0$')
plt.plot(x,np.cumsum(p_1)*(x[1]-x[0]),label=r'$\lambda=1$')
plt.xlabel(r'$x$')
plt.ylabel(r'$\int_{-\infty}^x p_\lambda(a) da$')
plt.title(r'Integrating the equilibrium density from $-\infty$ to $x$')
plt.legend(loc='best')
# alchemical intermediates
temp=1.0
for lam in np.linspace(0,1.0,10):
u_1am = np.array([total_potential(x_,lam=lam) for x_ in x])
p_1am = np.exp(-u_1am/temp)
plt.plot(x,p_1am)
plt.xlabel(r'$x$')
plt.ylabel(r'$p_\lambda(x)$')
plt.title(r'Perturbed potentials (varying $\lambda$)')
# show geometric averages
from annealing_distributions import GeometricMean
num_intermediates = 1
betas = np.linspace(0,1,12)
annealing_distributions = [p_0**(1-beta) * p_1**beta for beta in betas]
for dist in annealing_distributions:
plt.plot(x,dist)
plt.title('Geometric averages')
plt.xlabel(r'$x$')
plt.ylabel(r'$p_0(x)^{1-\beta}p_1(x)^{\beta}$')
# also show: (1) boost potential? or (2) Neal's optimal interpolants?
# numerically integrate using a deterministic (trapezoidal) quadrature rule
x = np.linspace(-10, 10, 10000)
# re-tabulate the densities on this wider grid before integrating,
# so the sampled values and the quadrature nodes actually line up
p_0 = np.array([boltzmann_weight(x_, lam=0) for x_ in x])
p_1 = np.array([boltzmann_weight(x_, lam=1) for x_ in x])
Z_0 = np.trapz(p_0, x)
Z_1 = np.trapz(p_1, x)
Z_0, Z_1
# the ratio of partition functions
Z_ratio = Z_1/Z_0
Z_ratio,np.log(Z_ratio)
"""
Explanation: 0. Motivation
Here we'll work with a toy model of alchemy!
Given two unnormalized probability distributions, our goal is to estimate the ratio of their normalizing constants. This comes up in, for example, computing drug binding affinities. The binding affinity is related to the relative free energies of the bound vs. unbound states, which is related to the log of the ratio of the normalizing constants.
Each system is described by a probability distribution over available states $x$. This probability distribution is induced by an energy function that assigns each state a number. Higher energy states are much less probable than lower energy states, and their relative probabilities are given by the Boltzmann distribution.
1. Defining the systems
Let's set up a potential function $U_\text{target}(x;\lambda) = U_\text{initial}(x) + \lambda U_\text{alchemy}(x)$ in terms of an initial potential $U_\text{initial}$ and an alchemical perturbation $U_\text{alchemy}$ that transforms the initial potential energy surface into the target.
We can now easily create a family of intermediate potentials by varying $\lambda$ between 0 and 1.
Our goal will be to estimate $$\frac{\mathcal{Z}_T}{\mathcal{Z}_1}$$
where $$\mathcal{Z}_T = \int e^{- U_\text{target}(x) / T} dx \equiv \int p_{\lambda=1}(x) dx$$ and $$\mathcal{Z}_1 = \int e^{- U_\text{initial}(x) / T} dx \equiv \int p_{\lambda=0}(x) dx$$.
2. Annealed importance sampling
[This largely follows the review in section 3 of: Sandwiching the marginal likelihood using bidirectional Monte Carlo (Grosse, Ghahramani, and Adams, 2015)]
$\newcommand{\x}{\mathbf{x}}
\newcommand{\Z}{\mathcal{Z}}$
Goal:
We want to estimate the normalizing constant $\Z = \int p_T(\x) d\x$ of a complicated distribution $p_T$ we know only up to a normalizing constant.
A basic strategy:
Importance sampling, i.e. draw each sample from an easy distribution $\x^{(k)} \sim p_1$, then reweight by $w^{(k)}\equiv p_T(\x)/p_1(\x)$. After drawing $K$ such samples, we can estimate the normalizing constant as $$\hat{\Z} = \frac{1}{K} \sum_{k=1}^K w^{(k)} \equiv \frac{1}{K} \sum_{k=1}^K \frac{p_T(\x^{(k)} )}{p_1(\x^{(k)})}$$.
Problem:
Although importance sampling will eventually work as $K \to \infty$ as long as the support of $p_1$ contains the support of $p_T$, this will be extremely inefficient if $p_1$ and $p_T$ are very different.
Actual strategy:
Instead of doing the importance reweighting computation in one step, gradually convert a sample from the simpler distribution $p_1$ to the target distribution $p_T$ by introducing a series of intermediate distributions $p_1,p_2,\dots,p_{T}$, chosen so that no $p_t$ and $p_{t+1}$ are dramatically different. We can then estimate the overall importance weight as a product of more reasonable ratios.
Inputs:
- Desired number of samples $K$
- An initial distribution $p_1(\x)$ for which we can:
- Draw samples: $\x_s \sim p_1(\x)$
- Evaluate the normalizing constant: $\Z_1$
- A target (unnormalized) distribution function: $f_T(\x)$
- A sequence of annealing distribution functions $f_1,\dots,f_T$. These can be almost arbitrary, but here are some options:
- We can construct these generically by taking geometric averages of the initial and target distributions: $f_t(\x) = f_1(\x)^{1-\beta_t} f_T(\x)^{\beta_t}$
- In the case of a target distribution $f_T(\x) \propto \exp(-U(\x) \beta)$ (where $\beta$ is the inverse temperature), we could also construct the annealing distributions as Boltzmann distributions at decreasing temperatures.
- In the case of a target distribution defined in terms of a force field, we could also construct the annealing distributions by starting from an alchemically softened form of the potential and gradually turning on various parts of the potential.
- Could use "boost potentials" from accelerated MD (http://www.ks.uiuc.edu/Research/namd/2.9/ug/node63.html)
- If we have some way to make dimension-matching proposals, we might use coarse-grained potentials as intermediates.
- A sequence of Markov transition kernels $\mathcal{T}_1,\dots,\mathcal{T}_T$, where each $\mathcal{T}_t$ leaves its corresponding distribution $p_t$ invariant. These can be almost arbitrary, but here are some options:
- Random-walk Metropolis
- Symplectic integrators of Hamiltonian dynamics
- NCMC
Outputs:
- A collection of weights $w^{(k)}$, from which we can compute an unbiased estimate of the normalizing constant of $f_t$ by $\hat{\Z}=\sum_{k=1}^K w^{(k)} / K$
Algorithm:
for $k=1$ to $K$:
1. $\x_1 \leftarrow$ sample from $p_1(\x)$
2. $w^{(k)} \leftarrow \Z_1$
3. for $t=2$ to $T$:
- $w^{(k)} \leftarrow w^{(k)} \frac{f_t(\x_{t-1})}{f_{t-1}(\x_{t-1})}$
- $\x_t \leftarrow$ sample from $\mathcal{T}_t(\x_t \mid \x_{t-1})$
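The algorithm above can be sketched in a few lines of numpy. This is a generic toy sketch (my own illustration, not tied to this notebook's alchemical example): it anneals between two unnormalized Gaussians, $f_1(x) = e^{-x^2/2}$ and $f_T(x) = e^{-2x^2}$, whose true log ratio of normalizers is $\log(1/2)$, using Gaussian random-walk Metropolis transitions:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f(x, beta):
    # geometric bridge: f_1(x) = exp(-x^2/2) at beta=0,
    # f_T(x) = exp(-2 x^2) at beta=1, so log f_beta = -x^2 (1 + 3 beta) / 2
    return -x**2 * (1 + 3 * beta) / 2

betas = np.linspace(0, 1, 50)
K = 2000
x = rng.standard_normal(K)      # exact samples from p_1 = N(0, 1)
log_w = np.zeros(K)
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_f(x, b) - log_f(x, b_prev)   # incremental importance weights
    # one Metropolis random-walk sweep leaving f_b invariant
    prop = x + 0.5 * rng.standard_normal(K)
    accept = np.log(rng.random(K)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

# log(Z_T / Z_1) estimated by a log-mean-exp of the weights;
# the true value in this toy is log(1/2), about -0.693
est = np.logaddexp.reduce(log_w) - np.log(K)
print(est)
```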
3. Bidirectional Monte Carlo
$\newcommand{\x}{\mathbf{x}}
\newcommand{\Z}{\mathcal{Z}}$
- Note that AIS can be interpreted as an instance of simple importance sampling over an extended state space:
- The full set of states visited by the algorithm ($x_1,\dots,x_{T-1}$) has a joint distribution:
$$ \newcommand{\T}{\mathcal{T}}
q_{\text{forward}} (x_1,\dots,x_{T-1}) = p_1(x_1) \T_2 (x_2 | x_1) \dots \T_{T-1} (x_{T-1} | x_{T-2})
$$
- Where $x_1$ is an exact sample from $p_1$
- We could postulate a reverse chain:
$$q_{\text{reverse}} (x_1,\dots,x_{T-1}) = p_T(x_{T-1}) \T_{T-1} (x_{T-2} | x_{T-1}) \dots \T_2 (x_1 | x_2)
$$
- Where $x_{T-1}$ is an exact sample from $p_T$
- Note that:
$$\frac{q_{\text{reverse}}}{q_{\text{forward}}} = \frac{\Z_1}{\Z_T}w$$
where $w$ is the weight computed in AIS, and $\mathbb{E}[w] = \Z_T / \Z_1$
We would like to define consistent stochastic upper and lower bounds on log marginal likelihood for simulated data
Stochastic lower bounds can be computed using existing methods
Stochastic upper bounds can be obtained from exact samples from the posterior distribution
3.1. Obtaining stochastic lower bounds
AIS and SIS are unbiased estimators $\newcommand{\Z}{\mathcal{Z}} \mathbb{E}[\hat{\Z}] = \Z$
Since $\Z$ can vary over many orders of magnitude, we may try to estimate $\log \Z$ instead of $\Z$
Unbiased estimators of $\Z$ can be biased estimators of $\log \Z$
Unbiased estimators of $\Z$ are stochastic lower bounds on $\log \Z$ since (1) they are unlikely to over-estimate $\log \Z$ and (2) they cannot overestimate $\log \Z$ in expectation ($\mathbb{E}[\log \hat{\Z}] \leq \log \Z$).
Because they are nonnegative estimators of a nonnegative quantity, they are extremely unlikely to overestimate $\log \Z$ by more than a few nats:
$$\Pr(\log \hat{\Z} \geq \log \mathcal{Z} + a) \leq \exp(-a)$$
Proof:
Markov's inequality states that $\Pr(X \geq a) \leq \frac{\mathbb{E}[X]}{a}$ for $a>0$ and $X$ a nonnegative r.v.
Since these are unbiased estimators, $\mathbb{E}[\hat{\Z}] = \Z$, so we have that $\Pr(\hat{\Z} \geq a) \leq \frac{\Z}{a}$.
By replacing $a$ with $a\Z$, we have $\Pr(\hat{\Z} \geq a \Z) \leq 1/a$. (Note: in the paper, it's written as $<$, but I think it should be $\leq$.)
By taking the log, we have that:
$$\Pr(\log \hat{\Z} \geq \log \mathcal{Z} + a) \leq \exp(-a)$$
- There's also Jensen's inequality:
$$ \mathbb{E}[\log \hat{\Z}] \leq \log \mathbb{E}[\hat{\Z}] = \log \Z$$
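As a quick numerical illustration of this downward bias (a toy sketch of my own, not part of the original derivation), take an unbiased importance-sampling estimator of a Gaussian normalizer $\Z = \sqrt{2\pi}$ and average many noisy log-estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
true_log_Z = 0.5 * np.log(2 * np.pi)   # Z = integral of exp(-x^2/2) = sqrt(2 pi)

def log_Z_hat(n):
    # unbiased importance-sampling estimate of Z with proposal q = N(0, 2^2)
    x = 2.0 * rng.standard_normal(n)
    log_q = -x**2 / 8 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
    log_w = (-x**2 / 2) - log_q
    return np.logaddexp.reduce(log_w) - np.log(n)

# E[Z_hat] = Z, but by Jensen E[log Z_hat] <= log Z: the average of many
# small-sample log-estimates sits below the truth
ests = np.array([log_Z_hat(10) for _ in range(2000)])
print(ests.mean(), true_log_Z)
```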
3.2. Obtaining stochastic upper bounds
It's harder to get good upper bounds than to get good lower bounds:
Lower bound: point to regions of high probability mass
Upper bound: demonstrate absence of any additional probability mass
Applying the arguments from 4.1, unbiased estimators of $\frac{1}{\Z}$ are stochastic upper bounds on $\log \Z$.
The harmonic mean estimator is an unbiased estimator of $\frac{1}{\Z}$, but has two problems:
Need exact samples. If approximate posterior samples are used, the estimator is not a stochastic upper bound.
Loose bound. Even if exact samples are used, the bound is very bad.
We can get exact posterior samples for simulated data, since we know the parameters used to generate the data. Note that there are two ways to sample the joint distribution $p(\theta,y)$:
We can sample $\theta \sim p(\theta)$ then $y \sim p(y|\theta)$
We can sample $y \sim p(y)$ then $\theta \sim p(\theta|y)$
Thus, for a simulated dataset $y \sim p(y|\theta)$, we have an exact posterior sample: the $\theta$ used to generate the dataset.
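A minimal sketch of this identity with a Beta-Bernoulli model (a hypothetical stand-in for the actual model): sampling in the forward order and keeping the $\theta$'s paired with $y = 1$ yields exact draws from $p(\theta|y=1)$, which for a Beta(1,1) prior is Beta(2,1), with mean $2/3$.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200000

# Forward order: theta ~ p(theta), then y ~ p(y | theta).
theta = rng.beta(1.0, 1.0, size=n)         # uniform Beta(1, 1) prior
y = (rng.rand(n) < theta).astype(int)      # y | theta ~ Bernoulli(theta)

# Each (theta, y) pair is a joint sample, so the thetas paired with y == 1
# are exact draws from the posterior p(theta | y = 1) = Beta(2, 1).
posterior_draws = theta[y == 1]
print(posterior_draws.mean())   # should be close to the Beta(2, 1) mean, 2/3
```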
We can get a tighter bound if we do HME but with multiple intermediate distributions instead of simple importance sampling. We'll do this two ways: reverse AIS and sequential HME.
3.2.1. Reverse AIS
We can interpret AIS as simple importance sampling over an extended state space (where each state is a sequence of $x$'s), where the proposal and target distributions are forward and backward annealing chains.
In the case of simulated data, we can sample the reverse chain with an exact sample from $p_T(x)$.
The importance weights for the forward chain using the backward chain as a proposal are then:
$$\frac{q_{\text{forward}}}{q_{\text{back}}} = \frac{\Z_T}{\Z_1}w$$
where
$$ w \equiv \frac{f_{T-1}(x_{T-1})}{f_T(x_{T-1})} \cdots \frac{f_1(x_1)}{f_2(x_1)}$$
We then obtain the following estimate of $\Z$:
$$\hat{\Z} = \frac{K}{\sum_{k=1}^K w^{(k)}}$$
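For intuition (a toy check, not one of the paper's experiments), both kinds of bounds can be demonstrated on a 1-D Gaussian with unnormalized density $f(x) = e^{-x^2/2}$, so $\Z = \sqrt{2\pi}$: simple importance sampling yields a stochastic lower bound on $\log \Z$, while the generalized harmonic mean identity $\mathbb{E}_p[q(x)/f(x)] = 1/\Z$ (for any normalized $q$), evaluated on exact samples from $p$, yields a stochastic upper bound. Note the asymmetric tail requirements on the proposals, one way to see why upper bounds are harder.

```python
import numpy as np

rng = np.random.RandomState(0)
K = 50000

def log_normpdf(x, sigma):
    # log density of N(0, sigma^2)
    return -0.5 * (x / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2.0 * np.pi)

log_f = lambda x: -0.5 * x ** 2          # unnormalized N(0,1); Z = sqrt(2*pi)
true_log_Z = 0.5 * np.log(2.0 * np.pi)

# Stochastic lower bound: simple importance sampling. The proposal needs
# heavier tails than the target for finite-variance weights.
x_q = rng.normal(0.0, 2.0, size=K)
log_w = log_f(x_q) - log_normpdf(x_q, 2.0)
log_Z_lower = np.log(np.mean(np.exp(log_w)))

# Stochastic upper bound: generalized harmonic mean estimator on *exact*
# samples x ~ p. Here q needs *lighter* tails than p for finite variance.
x_p = rng.normal(0.0, 1.0, size=K)
log_inv_w = log_normpdf(x_p, 0.8) - log_f(x_p)   # q(x)/f(x) estimates 1/Z
log_Z_upper = -np.log(np.mean(np.exp(log_inv_w)))

print(log_Z_lower, true_log_Z, log_Z_upper)
```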
End of explanation
"""
# let's go with perturbed hamiltonians for now
protocol = np.linspace(0,1,10)
class AlchemicalIntermediate():
def __init__(self,lam=0.0):
self.lam=lam
def __call__(self,x):
return boltzmann_weight(x,lam=self.lam)
annealing_distributions = [AlchemicalIntermediate(lam) for lam in protocol]
"""
Explanation: 4. Choose annealing distributions for AIS
End of explanation
"""
%%time
from bidirectional_mc import bidirectional_ais
from transition_kernels import gaussian_random_walk
transition_kernels = [gaussian_random_walk]*len(annealing_distributions)
npr.seed(1)
ub,lb,forward_results,reverse_results \
= bidirectional_ais(sample_from_initial,
sample_from_target,
transition_kernels,
annealing_distributions,
n_samples=10000)
"""
Explanation: 5. Run forward and reverse AIS
To get stochastic upper and lower bounds on $\log ( \mathcal{Z}_T / \mathcal{Z}_1)$.
End of explanation
"""
plt.plot(ub,label='Stochastic upper bound')
plt.plot(lb,label='Stochastic lower bound')
plt.hlines(np.log(Z_ratio),0,len(ub),label='Trapezoid rule')
plt.xlabel('# samples')
plt.ylabel(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1)$')
plt.title(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1)$')
plt.legend(loc='best')
#plt.ylim(0.21,0.25)
"""
Explanation: 5. Plot bounds as a function of # samples collected
End of explanation
"""
plt.plot(np.abs(ub-lb))
print(np.abs(ub-lb)[-1])
plt.xlabel('# samples')
plt.ylabel('|Upper bound - lower bound|')
plt.title('Gap between upper and lower bounds')
"""
Explanation: 5.1. Plot gap between upper and lower bounds
End of explanation
"""
z_hat_f,xs_f,weights_f,ratios_f=forward_results
z_hat_r,xs_r,weights_r,ratios_r=reverse_results
mean_f = ratios_f.mean(0)
err_f = ratios_f.std(0)
plt.plot(mean_f,color='blue',label='Forward');
plt.fill_between(range(len(mean_f)),mean_f-err_f,mean_f+err_f,alpha=0.3,color='blue')
plt.hlines(1.0,0,len(mean_f)-1,linestyle='--')
#plt.figure()
mean_r = ratios_r.mean(0)[::-1]
err_r = ratios_r.std(0)[::-1]
plt.plot(mean_r,color='green',label='Reverse');
plt.fill_between(range(len(mean_r)),mean_r-err_r,mean_r+err_r,alpha=0.3,color='green')
plt.legend(loc='best')
plt.xlabel(r'Alchemical intermediate $t$')
plt.ylabel('Estimated $\mathcal{Z}_{t+1}/\mathcal{Z}_{t}$')
plt.title('Intermediate importance weights')
"""
Explanation: 6. Plot intermediate $\mathcal{Z}_{t}/\mathcal{Z}_{t-1}$'s
End of explanation
"""
from openmmtools import testsystems
from alchemy import AbsoluteAlchemicalFactory
test_systems = dict()
name='alanine dipeptide in vacuum with annihilated sterics'
ala = {
'test' : testsystems.AlanineDipeptideVacuum(),
'factory_args' : {'ligand_atoms' : range(0,22), 'receptor_atoms' : range(22,22),
'annihilate_sterics' : True, 'annihilate_electrostatics' : True }}
system = ala['test']
ref_sys,pos = system.system,system.positions
factory = AbsoluteAlchemicalFactory(ref_sys,**ala['factory_args'])
protocol = factory.defaultSolventProtocolImplicit()[::-1]
systems = factory.createPerturbedSystems(protocol)
plt.xlabel(r'Alchemical intermediate $t$')
plt.ylabel(r'$\lambda$')
plt.title('Alchemical protocol')
plt.plot([p['lambda_sterics'] for p in protocol],label=r'$\lambda$ sterics')
#the plot for lambda electrostatics is the same
%%time
from time import time
t = time()
from simtk import openmm
from simtk.unit import *
n_samples = 100
n_steps = 500
temperature = 300.0 * kelvin
collision_rate = 5.0 / picoseconds
timestep = 2.0 * femtoseconds
kB = BOLTZMANN_CONSTANT_kB * AVOGADRO_CONSTANT_NA
kT = (kB * temperature)
new_int = lambda:openmm.VerletIntegrator(timestep)
integrator_0 = new_int()
integrator_1 = new_int()
# collect roughly equilibrated samples at the start and end points
context_0 = openmm.Context(systems[0], integrator_0)
context_0.setPositions(pos)
context_1 = openmm.Context(systems[-1], integrator_1)
context_1.setPositions(pos)
pos_samples_0 = []
energies_0 = []
pos_samples_1 = []
energies_1 = []
for i in range(n_samples):
if i % 10 == 0:
print(i,time() - t)
integrator_0.step(n_steps)
state = context_0.getState(getEnergy=True, getPositions=True)
potential = state.getPotentialEnergy()
pos = state.getPositions()
pos_samples_0.append(pos)
energies_0.append(potential)
#context_0.setVelocitiesToTemperature(300)
integrator_1.step(n_steps)
state = context_1.getState(getEnergy=True, getPositions=True)
potential = state.getPotentialEnergy()
pos = state.getPositions()
pos_samples_1.append(pos)
energies_1.append(potential)
#context_1.setVelocitiesToTemperature(300)
print(time() - t)
protocol[0],protocol[-1]
plt.title('Potential energy of samples')
plt.plot([e.value_in_unit(kilojoule / mole) for e in energies_0],label='Annihilated sterics')
plt.plot([e.value_in_unit(kilojoule / mole) for e in energies_1],label='Full potential')
plt.legend(loc='best')
plt.xlabel('Sample #')
plt.ylabel(r'$U(x)$ (kJ/mol)')
array_pos_0 = np.vstack([np.array(pos.value_in_unit(nanometer)).flatten() for pos in pos_samples_0])
array_pos_1 = np.vstack([np.array(pos.value_in_unit(nanometer)).flatten() for pos in pos_samples_1])
def sample_from_initial():
return array_pos_0[npr.randint(len(array_pos_0))]
def sample_from_target():
return array_pos_1[npr.randint(len(array_pos_1))]
n_atoms = array_pos_0.shape[1] // 3  # 3 coordinates per atom; integer division
n_atoms
temp = temperature.value_in_unit(kelvin)
%%time
integrator_0.step(1000)
"""
Explanation: 7. Now let's try a toy chemical system
Instead of defining a simple 1D potential as above, let's use a potential energy function given by a real forcefield.
Here $\vec{x}$ will be the positions of all 22 atoms in alanine dipeptide. The target potential $U_\text{target}(\vec{x})$ will be the full potential energy function of $\vec{x}$ (including terms for bonds, sterics, electrostatics, angles, and torsions). The initial potential $U_\text{initial}(\vec{x})$ will be a simpler potential energy function, where the terms for steric and electrostatic interactions are turned off. A family of alchemical intermediates will again be constructed by tuning a parameter $\lambda$ between 0 and 1.
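The interpolation idea can be sketched with a toy 1-D analogue (the potentials and names here are illustrative; the real intermediates come from AbsoluteAlchemicalFactory):

```python
import numpy as np

def U_initial(x):
    # toy reference potential (interaction terms "off")
    return 0.5 * x ** 2

def U_target(x):
    # toy full potential: reference plus an extra interaction term
    return 0.5 * x ** 2 + 0.3 * x ** 4

def U_lambda(x, lam):
    # alchemical intermediate: lam = 0 -> initial, lam = 1 -> target
    return (1.0 - lam) * U_initial(x) + lam * U_target(x)

def weight(x, lam, kT=1.0):
    # unnormalized Boltzmann weight of the intermediate
    return np.exp(-U_lambda(x, lam) / kT)

for lam in np.linspace(0.0, 1.0, 5):
    print(lam, U_lambda(0.7, lam))
```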
End of explanation
"""
contexts = [openmm.Context(sys,new_int()) for sys in systems]
class MarkovKernel():
''' verlet with momentum thermalization '''
def __init__(self,index=0):
self.index=index
def __call__(self,position_vec,target_f=None,n_steps=500):
context = contexts[self.index]
context.setPositions(position_vec.reshape(n_atoms,3))
context.setVelocitiesToTemperature(temp)
integrator = context.getIntegrator()
integrator.step(n_steps)
return np.array(context.getState(getPositions=True).getPositions().value_in_unit(nanometer)).flatten()
class BoltzmannWeight():
''' thin wrapper for turning a flat array into a boltzmann weight'''
def __init__(self,index=0):
self.index=index
def __call__(self,position_vec):
context = contexts[self.index]
context.setPositions(position_vec.reshape(n_atoms,3))
energy = context.getState(getEnergy=True).getPotentialEnergy().value_in_unit(kilojoule/mole)
        # Boltzmann factor is exp(-U/kT); convert kT (defined earlier as kB*T)
        # to kJ/mol so the exponent is dimensionless, rather than dividing the
        # energy by the bare temperature in kelvin.
        return np.exp(-energy / kT.value_in_unit(kilojoule / mole))
annealing_distributions = [BoltzmannWeight(i) for i in range(len(systems))]
transition_kernels = [MarkovKernel(i) for i in range(len(systems))]
len(systems)
%%time
ub,lb,forward_results,reverse_results \
= bidirectional_ais(sample_from_initial,
sample_from_target,
transition_kernels,
annealing_distributions,
n_samples=10)
ub[-1],lb[-1]
plt.plot(ub,label='Stochastic upper bound')
plt.plot(lb,label='Stochastic lower bound')
plt.xlabel('# samples')
plt.ylabel(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1)$')
plt.title(r'Estimated $\log ( \mathcal{Z}_T / \mathcal{Z}_1)$')
plt.legend(loc='best')
"""
Explanation: 8. Define each annealing distribution and its Markov transition kernel
End of explanation
"""
ub.mean(),lb.mean()
np.mean((ub[-1],lb[-1]))
plt.plot(ub-lb)
plt.ylim(0,np.max(ub-lb))
plt.xlabel('# samples')
plt.ylabel('Gap between upper and lower bound')
z_hat_f,xs_f,weights_f,ratios_f=forward_results
z_hat_r,xs_r,weights_r,ratios_r=reverse_results
ratios_f.shape
mean_f = ratios_f.mean(0)
err_f = ratios_f.std(0)
plt.plot(mean_f,color='blue',label='Forward');
plt.fill_between(range(len(mean_f)),mean_f-err_f,mean_f+err_f,alpha=0.3,color='blue')
plt.hlines(1.0,0,len(mean_f)-1,linestyle='--')
#plt.figure()
mean_r = ratios_r.mean(0)[::-1]
err_r = ratios_r.std(0)[::-1]
plt.plot(mean_r,color='green',label='Reverse');
plt.fill_between(range(len(mean_r)),mean_r-err_r,mean_r+err_r,alpha=0.3,color='green')
plt.legend(loc='best')
plt.xlabel(r'Alchemical intermediate $t$')
plt.ylabel('Estimated $\mathcal{Z}_{t+1}/\mathcal{Z}_{t}$')
plt.title('Intermediate importance weights')
"""
Explanation: Do we know the correct value here?
Question: is any remaining gap here a function of the non-equilibrium work being performed?
End of explanation
"""
|
scotthuang1989/Python-3-Module-of-the-Week | networking/Unix Domain Sockets.ipynb | apache-2.0 | # %load socket_echo_server_uds.py
import socket
import sys
import os
server_address = './uds_socket'
# Make sure the socket does not already exist
try:
os.unlink(server_address)
except OSError:
if os.path.exists(server_address):
raise
# Create a UDS socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# Bind the socket to the address
print('starting up on {}'.format(server_address))
sock.bind(server_address)
# Listen for incoming connections
sock.listen(1)
while True:
# Wait for a connection
print('waiting for a connection')
connection, client_address = sock.accept()
try:
print('connection from', client_address)
# Receive the data in small chunks and retransmit it
while True:
data = connection.recv(16)
print('received {!r}'.format(data))
if data:
print('sending data back to the client')
connection.sendall(data)
else:
print('no data from', client_address)
break
finally:
# Clean up the connection
connection.close()
"""
Explanation: From the programmer's perspective there are two essential differences between using a Unix domain socket and a TCP/IP socket. First, the address of the socket is a path on the file system, rather than a tuple containing the server name and port. Second, the node created in the file system to represent the socket persists after the socket is closed, and needs to be removed each time the server starts up. The echo server example from earlier can be updated to use UDS by making a few changes in the setup section.
The socket needs to be created with address family AF_UNIX. Binding the socket and managing the incoming connections work the same as with TCP/IP sockets.
End of explanation
"""
# %load socket_echo_client_uds.py
import socket
import sys
# Create a UDS socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# Connect the socket to the port where the server is listening
server_address = './uds_socket'
print('connecting to {}'.format(server_address))
try:
sock.connect(server_address)
except socket.error as msg:
print(msg)
sys.exit(1)
try:
# Send data
message = b'This is the message. It will be repeated.'
print('sending {!r}'.format(message))
sock.sendall(message)
amount_received = 0
amount_expected = len(message)
while amount_received < amount_expected:
data = sock.recv(16)
amount_received += len(data)
print('received {!r}'.format(data))
finally:
print('closing socket')
sock.close()
"""
Explanation: The client setup also needs to be modified to work with UDS. It should assume the file system node for the socket exists, since the server creates it by binding to the address. Sending and receiving data works the same way in the UDS client as the TCP/IP client from before.
End of explanation
"""
import socket
import os
parent, child = socket.socketpair()
pid = os.fork()
if pid:
print('in parent, sending message')
child.close()
parent.sendall(b'ping')
response = parent.recv(1024)
print('response from child:', response)
parent.close()
else:
print('in child, waiting for message')
parent.close()
message = child.recv(1024)
print('message from parent:', message)
child.sendall(b'pong')
child.close()
"""
Explanation: Permissions
Since the UDS socket is represented by a node on the file system, standard file system permissions can be used to control access to the server.
Running the client as a user other than root now results in an error because the process does not have permission to open the socket.
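Because the socket is an ordinary file-system node, restricting access is just an os.chmod() on its path, as in this short sketch (the socket path is illustrative):

```python
import os
import socket
import stat
import tempfile

server_address = os.path.join(tempfile.mkdtemp(), 'restricted_socket')

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(server_address)

# Allow only the owning user to connect; group and others get nothing.
os.chmod(server_address, 0o600)

mode = stat.S_IMODE(os.stat(server_address).st_mode)
print('socket permissions: {:o}'.format(mode))

sock.close()
os.unlink(server_address)
```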
Communication Between Parent and Child Processes
The socketpair() function is useful for setting up UDS sockets for inter-process communication under Unix. It creates a pair of connected sockets that can be used to communicate between a parent process and a child process after the child is forked.
End of explanation
"""
|
matsuyamax/Recipes | examples/ImageNet Pretrained Network (VGG_S).ipynb | mit | !wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg_cnn_s.pkl
"""
Explanation: Introduction
This example demonstrates using a network pretrained on ImageNet for classification. The model used was converted from the VGG_CNN_S model (http://arxiv.org/abs/1405.3531) in Caffe's Model Zoo.
For details of the conversion process, see the example notebook "Using a Caffe Pretrained Network - CIFAR10".
License
The model is licensed for non-commercial use only
Download the model (393 MB)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import lasagne
from lasagne.layers import InputLayer, DenseLayer, DropoutLayer
from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer
from lasagne.layers import MaxPool2DLayer as PoolLayer
from lasagne.layers import LocalResponseNormalization2DLayer as NormLayer
from lasagne.utils import floatX
"""
Explanation: Setup
End of explanation
"""
net = {}
net['input'] = InputLayer((None, 3, 224, 224))
net['conv1'] = ConvLayer(net['input'], num_filters=96, filter_size=7, stride=2)
net['norm1'] = NormLayer(net['conv1'], alpha=0.0001) # caffe has alpha = alpha * pool_size
net['pool1'] = PoolLayer(net['norm1'], pool_size=3, stride=3, ignore_border=False)
net['conv2'] = ConvLayer(net['pool1'], num_filters=256, filter_size=5)
net['pool2'] = PoolLayer(net['conv2'], pool_size=2, stride=2, ignore_border=False)
net['conv3'] = ConvLayer(net['pool2'], num_filters=512, filter_size=3, pad=1)
net['conv4'] = ConvLayer(net['conv3'], num_filters=512, filter_size=3, pad=1)
net['conv5'] = ConvLayer(net['conv4'], num_filters=512, filter_size=3, pad=1)
net['pool5'] = PoolLayer(net['conv5'], pool_size=3, stride=3, ignore_border=False)
net['fc6'] = DenseLayer(net['pool5'], num_units=4096)
net['drop6'] = DropoutLayer(net['fc6'], p=0.5)
net['fc7'] = DenseLayer(net['drop6'], num_units=4096)
net['drop7'] = DropoutLayer(net['fc7'], p=0.5)
net['fc8'] = DenseLayer(net['drop7'], num_units=1000, nonlinearity=lasagne.nonlinearities.softmax)
output_layer = net['fc8']
"""
Explanation: Define the network
End of explanation
"""
import pickle
model = pickle.load(open('vgg_cnn_s.pkl', 'rb'))
CLASSES = model['synset words']
MEAN_IMAGE = model['mean image']
lasagne.layers.set_all_param_values(output_layer, model['values'])
"""
Explanation: Load the model parameters and metadata
End of explanation
"""
import urllib
index = urllib.urlopen('http://www.image-net.org/challenges/LSVRC/2012/ori_urls/indexval.html').read()
image_urls = index.split('<br>')
np.random.seed(23)
np.random.shuffle(image_urls)
image_urls = image_urls[:5]
"""
Explanation: Trying it out
Get some test images
We'll download the ILSVRC2012 validation URLs and pick a few at random
End of explanation
"""
import io
import skimage.transform
def prep_image(url):
ext = url.split('.')[-1]
im = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)
# Resize so smallest dim = 256, preserving aspect ratio
h, w, _ = im.shape
if h < w:
im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
else:
im = skimage.transform.resize(im, (h*256/w, 256), preserve_range=True)
# Central crop to 224x224
h, w, _ = im.shape
im = im[h//2-112:h//2+112, w//2-112:w//2+112]
rawim = np.copy(im).astype('uint8')
# Shuffle axes to c01
im = np.swapaxes(np.swapaxes(im, 1, 2), 0, 1)
# Convert to BGR
im = im[::-1, :, :]
im = im - MEAN_IMAGE
return rawim, floatX(im[np.newaxis])
"""
Explanation: Helper to fetch and preprocess images
End of explanation
"""
for url in image_urls:
try:
rawim, im = prep_image(url)
prob = np.array(lasagne.layers.get_output(output_layer, im, deterministic=True).eval())
top5 = np.argsort(prob[0])[-1:-6:-1]
plt.figure()
plt.imshow(rawim.astype('uint8'))
plt.axis('off')
for n, label in enumerate(top5):
plt.text(250, 70 + n * 20, '{}. {}'.format(n+1, CLASSES[label]), fontsize=14)
except IOError:
print('bad url: ' + url)
"""
Explanation: Process test images and print top 5 predicted labels
End of explanation
"""
|
GoogleCloudPlatform/analytics-componentized-patterns | retail/recommendation-system/bqml-scann/ann01_create_index.ipynb | apache-2.0 | import base64
import datetime
import logging
import os
import json
import pandas as pd
import time
import sys
import grpc
import google.auth
import numpy as np
import tensorflow.io as tf_io
from google.cloud import bigquery
from typing import List, Optional, Text, Tuple
"""
Explanation: Low-latency item-to-item recommendation system - Creating ANN index
Overview
This notebook is a part of the series that describes the process of implementing a Low-latency item-to-item recommendation system.
The notebook demonstrates how to create and deploy an ANN index using item embeddings created in the preceding notebooks. In this notebook you go through the following steps.
Exporting embeddings from BigQuery into the JSONL formated file.
Creating an ANN Index using the exported embeddings.
Creating and ANN Endpoint.
Deploying the ANN Index to the ANN Endpoint.
Testing the deployed ANN Index.
This notebook was designed to run on AI Platform Notebooks. Before running the notebook make sure that you have completed the setup steps as described in the README file.
Setting up the notebook's environment
Import notebook dependencies
End of explanation
"""
ANN_GRPC_ENDPOINT_STUB = 'ann_grpc'
if ANN_GRPC_ENDPOINT_STUB not in sys.path:
sys.path.append(ANN_GRPC_ENDPOINT_STUB)
import ann_grpc.match_pb2_grpc as match_pb2_grpc
import ann_grpc.match_pb2 as match_pb2
"""
Explanation: In the experimental release, the Online Querying API of the ANN service is exposed through the gRPC interface. The ann_grpc folder contains the gRPC client stub used to interface with the API.
End of explanation
"""
PROJECT_ID = 'jk-mlops-dev' # <-CHANGE THIS
PROJECT_NUMBER = '895222332033' # <-CHANGE THIS
BQ_DATASET_NAME = 'song_embeddings' # <- CHANGE THIS
BQ_LOCATION = 'US' # <- CHANGE THIS
DATA_LOCATION = 'gs://jk-ann-staging/embeddings' # <-CHANGE THIS
VPC_NAME = 'default' # <-CHANGE THIS
EMBEDDINGS_TABLE = 'item_embeddings'
REGION = 'us-central1'
MATCH_SERVICE_PORT = 10000
"""
Explanation: Configure GCP environment
Set the following constants to the values reflecting your environment:
PROJECT_ID - your GCP project ID.
PROJECT_NUMBER - your GCP project number.
BQ_DATASET_NAME - the name of the BigQuery dataset that contains the item embeddings table.
BQ_LOCATION - the dataset location
DATA_LOCATION - a GCS location for the exported embeddings (JSONL) files.
VPC_NAME - a name of the GCP VPC to use for the index deployments. Use the name of the VPC prepared during the initial setup.
REGION - a compute region. Don't change the default, us-central1, while the ANN Service is in the experimental stage.
End of explanation
"""
client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)
query = f"""
SELECT COUNT(*) embedding_count
FROM {BQ_DATASET_NAME}.item_embeddings;
"""
query_job = client.query(query)
query_job.to_dataframe()
"""
Explanation: Exporting the embeddings
In the preceding notebooks you trained the Matrix Factorization BQML model and exported the embeddings to the item_embeddings table.
In this step you will extract the embeddings to a set of JSONL files in the format required by the ANN service.
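Each exported line is a single JSON object, with the repeated embedding field serialized as a JSON array, roughly the shape sketched below (the id and values are made up; the authoritative schema requirements are those of the ANN service itself):

```python
import json

# One exported row, as BigQuery's NEWLINE_DELIMITED_JSON format emits it for
# an (id, repeated-float embedding) schema. The values here are made up.
line = '{"id": "song_123", "embedding": [0.12, -0.48, 0.05]}'

record = json.loads(line)
print(record['id'], len(record['embedding']))
```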
Verify the number of embeddings
End of explanation
"""
file_name_pattern = 'embedding-*.json'
destination_uri = f'{DATA_LOCATION}/{file_name_pattern}'
table_id = 'item_embeddings'
destination_format = 'NEWLINE_DELIMITED_JSON'
dataset_ref = bigquery.DatasetReference(PROJECT_ID, BQ_DATASET_NAME)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.job.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
extract_job = client.extract_table(
table_ref,
destination_uris=destination_uri,
job_config=job_config,
#location=BQ_LOCATION,
)
extract_job.result()
"""
Explanation: Export the embeddings
You will use the BigQuery export job to export the embeddings table.
End of explanation
"""
! gsutil ls {DATA_LOCATION}
"""
Explanation: Inspect the extracted files.
End of explanation
"""
import datetime
import logging
import json
import time
import google.auth
class ANNClient(object):
"""Base ANN Service client."""
def __init__(self, project_id, project_number, region):
credentials, _ = google.auth.default()
self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)
self.ann_endpoint = f'{region}-aiplatform.googleapis.com'
self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'
self.project_id = project_id
self.project_number = project_number
self.region = region
def wait_for_completion(self, operation_id, message, sleep_time):
"""Waits for a completion of a long running operation."""
api_url = f'{self.ann_parent}/operations/{operation_id}'
start_time = datetime.datetime.utcnow()
while True:
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.json())
if 'done' in response.json().keys():
logging.info('Operation completed!')
break
elapsed_time = datetime.datetime.utcnow() - start_time
logging.info('{}. Elapsed time since start: {}.'.format(
message, str(elapsed_time)))
time.sleep(sleep_time)
return response.json()['response']
class IndexClient(ANNClient):
"""Encapsulates a subset of control plane APIs
that manage ANN indexes."""
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_index(self, display_name, description, metadata):
"""Creates an ANN Index."""
api_url = f'{self.ann_parent}/indexes'
request_body = {
'display_name': display_name,
'description': description,
'metadata': metadata
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_indexes(self, display_name=None):
"""Lists all indexes with a given display name or
all indexes if the display_name is not provided."""
if display_name:
api_url = f'{self.ann_parent}/indexes?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexes'
response = self.authed_session.get(api_url).json()
return response['indexes'] if response else []
def delete_index(self, index_id):
"""Deletes an ANN index."""
api_url = f'{self.ann_parent}/indexes/{index_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
class IndexDeploymentClient(ANNClient):
"""Encapsulates a subset of control plane APIs
that manage ANN endpoints and deployments."""
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_endpoint(self, display_name, vpc_name):
"""Creates an ANN endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints'
network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'
request_body = {
'display_name': display_name,
'network': network_name
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_endpoints(self, display_name=None):
"""Lists all ANN endpoints with a given display name or
all endpoints in the project if the display_name is not provided."""
if display_name:
api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexEndpoints'
response = self.authed_session.get(api_url).json()
return response['indexEndpoints'] if response else []
def delete_endpoint(self, endpoint_id):
"""Deletes an ANN endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
return response.json()
def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):
"""Deploys an ANN index to an endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'
index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'
request_body = {
'deployed_index': {
'id': deployment_id,
'index': index_name,
'display_name': display_name
}
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def get_deployment_grpc_ip(self, endpoint_id, deployment_id):
"""Returns a private IP address for a gRPC interface to
an Index deployment."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
endpoint_ip = None
if 'deployedIndexes' in response.json().keys():
for deployment in response.json()['deployedIndexes']:
if deployment['id'] == deployment_id:
endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']
return endpoint_ip
def delete_deployment(self, endpoint_id, deployment_id):
"""Undeployes an index from an endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'
request_body = {
'deployed_index_id': deployment_id
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
return response
index_client = IndexClient(PROJECT_ID, PROJECT_NUMBER, REGION)
deployment_client = IndexDeploymentClient(PROJECT_ID, PROJECT_NUMBER, REGION)
"""
Explanation: Creating an ANN index deployment
Deploying an ANN index is a 3 step process:
1. Creating an index from source files
2. Creating an endpoint to access the index
3. Deploying the index to the endpoint
You will use the REST interface to invoke the AI Platform ANN Service Control Plane API that manages indexes, endpoints, and deployments.
After the index has been deployed you can submit matching requests using Online Querying API. In the experimental stage this API is only accessible through the gRPC interface.
Define helper classes to encapsulate the ANN Service REST API.
Currently, there is no Python client that encapsulates the ANN Service Control Plane API. The below code snippet defines a simple wrapper that encapsulates a subset of REST APIs used in this notebook.
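At its core, wait_for_completion is just the standard long-running-operation polling loop; stripped of auth and HTTP it reduces to a pattern like this (the fake operation below is purely for illustration):

```python
import time

def wait_for_completion(get_status, sleep_time=0.01, timeout=5.0):
    # Poll a long-running operation until its status payload contains 'done'.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if 'done' in status:
            return status
        time.sleep(sleep_time)
    raise TimeoutError('operation did not complete in time')

# Fake operation that completes on the third poll.
calls = {'n': 0}
def fake_status():
    calls['n'] += 1
    return {'done': True, 'response': 'ok'} if calls['n'] >= 3 else {}

result = wait_for_completion(fake_status)
print(result, calls['n'])
```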
End of explanation
"""
indexes = index_client.list_indexes()
if not indexes:
print('There are not any indexes registered with the service')
for index in indexes:
print(index['name'])
"""
Explanation: Create an ANN index
List all indexes registered with the ANN service
End of explanation
"""
index_display_name = 'Song embeddings'
index_description = 'Song embeddings created BQML Matrix Factorization model'
index_metadata = {
'contents_delta_uri': DATA_LOCATION,
'config': {
'dimensions': 50,
'approximate_neighbors_count': 50,
'distance_measure_type': 'DOT_PRODUCT_DISTANCE',
'feature_norm_type': 'UNIT_L2_NORM',
'tree_ah_config': {
'child_node_count': 1000,
'max_leaves_to_search': 100
}
}
}
logging.getLogger().setLevel(logging.INFO)
operation_id = index_client.create_index(index_display_name,
index_description,
index_metadata)
response = index_client.wait_for_completion(operation_id, 'Creating index', 20)
print(response)
"""
Explanation: Configure and create a new index based on the exported embeddings
Index creation is a long running operation. Be patient.
End of explanation
"""
indexes = index_client.list_indexes(index_display_name)
for index in indexes:
print(index['name'])
if indexes:
index_id = index['name'].split('/')[-1]
print(f'Index: {index_id} will be used for deployment')
else:
print('No indexes available for deployment')
"""
Explanation: Verify that the index was created
End of explanation
"""
endpoints = deployment_client.list_endpoints()
if not endpoints:
print('There are not any endpoints registered with the service')
for endpoint in endpoints:
print(endpoint['name'])
"""
Explanation: Create the index deployment
List all endpoints registered with the ANN service
End of explanation
"""
deployment_display_name = 'Song embeddings endpoint'
operation_id = deployment_client.create_endpoint(deployment_display_name, VPC_NAME)
response = index_client.wait_for_completion(operation_id, 'Waiting for endpoint', 10)
print(response)
"""
Explanation: Create an index endpoint
End of explanation
"""
endpoints = deployment_client.list_endpoints(deployment_display_name)
for endpoint in endpoints:
print(endpoint['name'])
if endpoints:
endpoint_id = endpoint['name'].split('/')[-1]
print(f'Endpoint: {endpoint_id} will be used for deployment')
else:
print('No endpoints available for deployment')
"""
Explanation: Verify that the endpoint was created
End of explanation
"""
deployment_display_name = 'Song embeddings deployed index'
deployed_index_id = 'songs_embeddings_deployed_index'
"""
Explanation: Deploy the index to the endpoint
Set the deployed index ID
The ID of the deployed index must be unique within your project.
End of explanation
"""
operation_id = deployment_client.create_deployment(deployment_display_name,
deployed_index_id,
endpoint_id,
index_id)
response = index_client.wait_for_completion(operation_id, 'Waiting for deployment', 10)
print(response)
"""
Explanation: Deploy the index
Be patient. Index deployment is a long running operation
End of explanation
"""
deployed_index_ip = deployment_client.get_deployment_grpc_ip(endpoint_id, deployed_index_id)
endpoint = f'{deployed_index_ip}:{MATCH_SERVICE_PORT}'
print(f'gRPC endpoint for the: {deployed_index_id} deployment is: {endpoint}')
"""
Explanation: Querying the ANN service
You will use the gRPC interface to query the deployed index.
Retrieve the gRPC private endpoint for the ANN Match service
End of explanation
"""
class MatchService(object):
"""This is a wrapper around Online Querying gRPC interface."""
def __init__(self, endpoint, deployed_index_id):
self.endpoint = endpoint
self.deployed_index_id = deployed_index_id
def single_match(
self,
embedding: List[float],
num_neighbors: int) -> List[Tuple[str, float]]:
"""Requests a match for a single embedding."""
match_request = match_pb2.MatchRequest(deployed_index_id=self.deployed_index_id,
float_val=embedding,
num_neighbors=num_neighbors)
        with grpc.insecure_channel(self.endpoint) as channel:
stub = match_pb2_grpc.MatchServiceStub(channel)
response = stub.Match(match_request)
return [(neighbor.id, neighbor.distance) for neighbor in response.neighbor]
def batch_match(
self,
embeddings: List[List[float]],
num_neighbors: int) -> List[List[Tuple[str, float]]]:
"""Requests matches ofr a list of embeddings."""
match_requests = [
match_pb2.MatchRequest(deployed_index_id=self.deployed_index_id,
float_val=embedding,
num_neighbors=num_neighbors)
for embedding in embeddings]
batches_per_index = [
match_pb2.BatchMatchRequest.BatchMatchRequestPerIndex(
deployed_index_id=self.deployed_index_id,
requests=match_requests)]
batch_match_request = match_pb2.BatchMatchRequest(
requests=batches_per_index)
        with grpc.insecure_channel(self.endpoint) as channel:
stub = match_pb2_grpc.MatchServiceStub(channel)
response = stub.BatchMatch(batch_match_request)
matches = []
for batch_per_index in response.responses:
for match in batch_per_index.responses:
matches.append(
[(neighbor.id, neighbor.distance) for neighbor in match.neighbor])
return matches
match_service = MatchService(endpoint, deployed_index_id)
"""
Explanation: Create a helper wrapper around the Match Service gRPC API.
The wrapper uses the pre-generated gRPC stub to the Online Querying gRPC interface.
End of explanation
"""
%%bigquery df_embeddings
SELECT id, embedding
FROM `recommendations.item_embeddings`
LIMIT 10
sample_embeddings = [list(embedding) for embedding in df_embeddings['embedding']]
sample_embeddings[0]
"""
Explanation: Prepare sample data
Retrieve a few embeddings from the BigQuery embedding table
End of explanation
"""
%%time
single_match = match_service.single_match(sample_embeddings[0], 10)
single_match
"""
Explanation: Run a single match query
The following call requests 10 closest neighbours for a single embedding.
End of explanation
"""
%%time
batch_match = match_service.batch_match(sample_embeddings[0:5], 3)
batch_match
"""
Explanation: Run a batch match query
The following call requests 3 closest neighbours for each of the embeddings in a batch of 5.
End of explanation
"""
for endpoint in deployment_client.list_endpoints():
endpoint_id = endpoint['name'].split('/')[-1]
if 'deployedIndexes' in endpoint.keys():
for deployment in endpoint['deployedIndexes']:
print(' Deleting index deployment: {} in the endpoint: {} '.format(deployment['id'], endpoint_id))
deployment_client.delete_deployment(endpoint_id, deployment['id'])
print('Deleting endpoint: {}'.format(endpoint['name']))
deployment_client.delete_endpoint(endpoint_id)
"""
Explanation: Clean up
WARNING
The below code will delete all ANN deployments, endpoints, and indexes in the configured project.
Delete index deployments and endpoints
End of explanation
"""
for index in index_client.list_indexes():
index_id = index['name'].split('/')[-1]
print('Deleting index: {}'.format(index['name']))
index_client.delete_index(index_id)
"""
Explanation: Delete indexes
End of explanation
"""
|
jordanopensource/data-science-bootcamp | session4/L0_Machine learning.ipynb | mit | import scipy.stats as st
def generateData(true_p, dataset_size):
    return ['H' if i == 1 else 'T' for i in st.bernoulli.rvs(true_p, size=dataset_size)]
def estimate_p(data):
return sum([1.0 if observation == 'H' else 0.0 for observation in data]) / len(data)
#simulate data
true_p = 0.3
dataset_size = 20000
data = generateData(true_p, dataset_size)
p_hat = estimate_p(data)
#print (data)
print ("true p:", true_p)
print ("learned p:", p_hat)
"""
Explanation: Probability is amazing - Machine learning
Learning from data
Given a dataset and a model we have these interesting probabilities
The likelihood $P(Data|Model)$
Model fitting: $P(Model|Data)$
Predictive distribution: $P(New data| Model, Data)$
Together these retell the machine learning story in probabilistic terms
Simplest learning algorithm never told (the counting)!
Suppose we have a dataset of coin flips $D$, and we want to learn $P(C=H)$, here is the simple algorithm
- Count the heads
- Divide by the total trials
End of explanation
"""
#Your code here !
"""
Explanation: Exercise:
Plot accuracy vs. dataset size
End of explanation
"""
import itertools
import scipy.stats as st
import numpy as np
import functools
def generateData(true_p, dataset_size):
data= np.array([ ['H' if j == 1 else 'T' for j in st.bernoulli.rvs(true_p[i] ,size=dataset_size)] for i in range(0, len(true_p))])
return data.T.tolist()
def estimate_p(data):
d = len(data[0])
if d > 23:
raise 'too many dims'
toCross = [['H', 'T'] for i in range(0, d)]
omega = list(itertools.product(*toCross))
combos = {tuple(x): 0 for x in omega}
for i in data:
combos[tuple(i)] += 1
n = len(data)
p = {k: float(v)/n for (k,v) in combos.items()}
return p
d = 12
pp = 0.25
data = generateData([pp for i in range(0, d)], 10000)
%%timeit -n 1 -r 1
p = estimate_p(data)
print ("true p: " , pp**d)
print (p[tuple(['H' for i in range(0, d)])])
def estimate_p_ind(data):
d = len(data[0])
omega = [(i, 'H') for i in range(0, d)] + [(i, 'T') for i in range(0, d)]
combos = {x: 0 for x in omega}
for i in data:
for ix, j in enumerate(i):
combos[(ix, j)] += 1
n = len(data)
p = {k: float(v)/n for (k,v) in combos.items()}
return p
%%timeit -n 1 -r 1
p_ind = estimate_p_ind(data)
#p('H', 'H', 'H', 'H') = prod P('H')
t = [p_ind[(i, 'H')] for i in range(0, d)]
p_Hs = functools.reduce(lambda x, y: x* y, t, 1)
print ("true p: " , pp**d)
print(p_Hs)
"""
Explanation: The counting algorithm origin
First we could model coins by bernoulli random variable $c$ ~ $Ber(\theta)$
now the likelihood of the data is $$likelihood = P(Data|\theta) = \prod{\theta^x * (1 - \theta)^{1 - x}}$$
And we want $\theta$ that makes the likelihood maximum $$\theta_{mle} = argmax_{\theta}\prod{\theta^x * (1 - \theta)^{1 - x}}$$
This $\theta$ is called MLE or the maximum likelihood estimator, if you do the math (derive likelihood and find the zeros) then you will find that $$\theta_{mle} = \frac{\sum(C=H)}{N}$$
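As a quick numeric sanity check (a sketch, not part of the original derivation), maximizing the log-likelihood over a grid of $\theta$ values lands exactly on the counting estimate heads/N:

```python
import math

def log_likelihood(theta, heads, n):
    # log of theta^heads * (1 - theta)^(n - heads)
    return heads * math.log(theta) + (n - heads) * math.log(1 - theta)

heads, n = 30, 100
grid = [i / 1000 for i in range(1, 1000)]
theta_best = max(grid, key=lambda t: log_likelihood(t, heads, n))
print(theta_best)  # 0.3, i.e. heads / n
```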
"All models are wrong, but some models are useful. — George Box""
For the next iterations, the bias may change, or the coin may get bent; so why can we rely on the MLE estimator?
The answer is subtle but very fundamental to machine learning: it is a theorem called the No Free Lunch theorem. It states that we cannot learn without making assumptions.
Exercise:
What is the assumption we made to make it happen ?
How does machine learning scale ? (Independencies)
End of explanation
"""
|
taesiri/noteobooks | old:misc/object-detection/object_detection_virtual_camera.ipynb | mit | import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
import cv2
"""
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
README
First of all, look here for more information about the tensorflow object detection model. You need to clone that repository and follow the installation instructions
In my demo, GTA V runs on another machine and I used Steam in-home streaming to stream video of the game on my Ubuntu machine.
I used v4l2loopback and ffmpeg to create a virtual webcam that captures part of my desktop screen.
Creating virtual camera:
sudo modprobe v4l2loopback
ffmpeg -f x11grab -r 60 -s 800x600 -i :0.0+65,50 -vcodec rawvideo -pix_fmt bgr24 -threads 0 -f v4l2 /dev/video0
Imports
End of explanation
"""
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
"""
Explanation: Env setup
End of explanation
"""
from utils import label_map_util
from utils import visualization_utils as vis_util
"""
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
"""
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
"""
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
"""
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
"""
Explanation: Download Model
End of explanation
"""
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
"""
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
"""
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
End of explanation
"""
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
"""
Explanation: Helper code
End of explanation
"""
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
        # Define input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
cam = cv2.VideoCapture('/dev/video0')
cv2.namedWindow('camera')
cv2.moveWindow('camera', 50, 700)
while True:
image_np = cam.read()[1]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
cv2.imshow('camera', image_np)
if cv2.waitKey(25) & 0xff == ord('q'):
cv2.destroyAllWindows()
break
"""
Explanation: Detection
End of explanation
"""
|
chinapnr/python_study | Python 基础课程/Python Basic 练习题A.ipynb | gpl-3.0 | a = 24
b = 16
for i in range(min(a,b), 0, -1):
if a % i == 0 and b % i ==0:
print(i)
break
while b:
a,b=b,a%b
print(a)
a, b = 24, 16  # reset: the Euclid loop above overwrote a and b
print(max([x for x in range(1,a+1) if a%x==0 and b%x==0]))
"""
Explanation: Python Basic Exercises A
v1.1, 2020.4, 2020.5, 2020.6, edit by David Yi
Problem 1: Given two positive integers a and b, output their greatest common divisor.
For example: a = 24, b = 16
Expected output: 8
End of explanation
"""
for i in range(1, 10):
for j in range(1, i+1):
print ("%d x %d = %d" % (i, j, i*j),"\t",end="")
if i==j:
print("")
i=0
j=0
while i<9:
i+=1
while j<9:
j+=1
print(j,"x",i,"=",i*j,"\t",end="")
if i==j:
j=0
print("")
break
print('\n'.join(['\t'.join(["%2s x%2s = %2s"%(j,i,i*j) for j in range(1,i+1)]) for i in range(1,10)]))
"""
Explanation: Problem 2: Print the 9*9 multiplication table
Program analysis: think in terms of rows and columns, 9 of each, with i controlling the row and j controlling the column.
End of explanation
"""
def func1(a,b,c):
if (a+b)>c and (b+c)>a and (a+c)>b:
return 'yes'
else:
return 'no'
print(func1(1,2,3))
def func2(a,b,c):
if max(a,b,c)<(a+b+c)/2.0:
return 'yes'
else:
return 'no'
print(func2(3,4,5))
# usage of the conditional (if/else) expression
x, y = 4, 5
smaller = x if x < y else y
print(smaller)
# usage of the anonymous function (lambda)
a = lambda x, y=2 : x + y*3
print(a(3) + a(3, 5))
# the count method of list
l = ['tom', 'steven', 'jerry', 'steven']
print(l.count('tom') + l.count('steven'))
# usage of the os module
# count how many files are in the current directory
import os
a = os.listdir()
print(len(a))
# get the size of a path or file
import os
os.path.getsize('/Users')
# list comprehension usage
# add a condition after the list comprehension to keep only even values
n = 0
for i in [x * x for x in range(8, 11) if x % 2 == 0]:
n = n + i
print(n)
# simplified calling style
# no need for try...finally; the with statement handles cleanup automatically
filename = 'test.txt'
with open(filename, 'r') as f:
print(f.read())
# Python's fun and flexible variable unpacking
first, second, *rest = (1,2,3,4,5,6,7,8)
print(first + second + rest[2])
# for/continue example
# along with equality, inequality, and "or" logic checks
for l in 'computer':
if l != 't' or l == 'u':
continue
print('letter:', l)
# find x, y, z satisfying x*x + y*y == z*z
# multiple nested loops
n = 0
for x in range(1,10):
for y in range(x,10):
for z in range(y,10):
if x*x + y*y == z*z:
print(x,y,z)
n= n + 1
print('count:',n)
# Python built-in functions
print(max((1,2),(2,3),(2,4),(2,4,1)))
l = ['amy', 'tom', 'jerry ']
for i, item in enumerate(l):
print(i, item)
# question 1
l = [ x * 2 + 3 for x in range(1, 11)]
print(l)
print(l[1]+l[3])
# question 2
import os
os.getcwd()
# question 3
first, second, *rest = (1,2,3,4,5)
print(*rest)
# question 4
from collections import deque
q = deque(['a', 'b'])
q.appendleft('b','a')
print(q.count('b'))
# question 5
x, y = 3, 6
if 1 < x < 4 < y:
print('yes')
else:
print('no')
# question 6
d = dict([('a', 1), ('b', 2)])
print(d.get('a',2))
# question 7
a = [0, 1, 2, 3, 4, 5]
n = 0
for i in a[::-2]:
n=n+i
print(n)
# question 8
a = ['a', '-', 'a']
print('-'.join(a))
# question 9
from functools import reduce
def f(x,y=2):
return x + y
r = map(f, [1, 2, 3])
s = reduce(f,r)
print(s)
# question 10
if 2 in [1,2,3]:
print('small')
else:
print('big')
# question 11
s1 = 'abcde'
s2 = '12'
s3 = '12s'
s4 = '#s'
print(s1.isalpha())
print(s2.isalpha())
print(s3.isalpha())
print(s4.isalpha())
"""
Explanation: Problem 3: Write a function that, given three integers a, b, c, determines whether they can form the three side lengths of a triangle. If they can, output YES; otherwise output NO
End of explanation
"""
|
esa-as/2016-ml-contest | HouMath/Face_classification_HouMath_XGB_04.ipynb | apache-2.0 | %matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score, roc_auc_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold, cross_val_score, LeavePGroupsOut
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
#use a fixed seed for reproducibility
seed = 123
np.random.seed(seed)
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
"""
Explanation: In this notebook, we mainly utilize extreme gradient boosting (XGBoost) to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boosting can be viewed as an enhanced version of gradient boosting that uses a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found
Our work will be organized in the following order:
•Background
•Exploratory Data Analysis
•Data Preparation and Model Selection
•Final Results
Background
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1.Nonmarine sandstone
2.Nonmarine coarse siltstone
3.Nonmarine fine siltstone
4.Marine siltstone and shale
5.Mudstone (limestone)
6.Wackestone (limestone)
7.Dolomite
8.Packstone-grainstone (limestone)
9.Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies/ Label/ Adjacent Facies
1 SS 2
2 CSiS 1,3
3 FSiS 2
4 SiSh 5
5 MS 4,6
6 WS 5,7
7 D 6,8
8 PS 6,7,9
9 BS 7,8
Exploratory Data Analysis
After the background introduction, we start by importing the pandas library for some basic data analysis and manipulation. The matplotlib and seaborn libraries are imported for data visualization.
End of explanation
"""
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
"""
Explanation: Set columns 'Well Name' and 'Formation' to be category
End of explanation
"""
plt.figure(figsize=(5,5))
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
"""
Explanation: Check distribution of classes in whole dataset
End of explanation
"""
wells = training_data['Well Name'].unique()
plt.figure(figsize=(15,9))
for index, w in enumerate(wells):
ax = plt.subplot(2,5,index+1)
facies_counts = pd.Series(np.zeros(9), index=range(1,10))
facies_counts = facies_counts.add(training_data[training_data['Well Name']==w]['Facies'].value_counts().sort_index())
#facies_counts.replace(np.nan,0)
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title=w)
ax.set_ylim(0,160)
"""
Explanation: Check distribution of classes in each well
End of explanation
"""
plt.figure(figsize=(5,5))
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
"""
Explanation: We can see that classes are very imbalanced in each well
End of explanation
"""
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
features = ['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']
"""
Explanation: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
End of explanation
"""
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
"""
Explanation: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
End of explanation
"""
skf = StratifiedKFold(n_splits=5)
cv = skf.split(X_train, Y_train)
def modelfit(alg, Xtrain, Ytrain, useTrainCV=True, cv_fold=skf):
#Fit the algorithm on the data
alg.fit(Xtrain, Ytrain,eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(Xtrain)
#dtrain_predprob = alg.predict_proba(Xtrain)[:,1]
    #Print model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(Ytrain,dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(Ytrain,dtrain_prediction,average='micro'))
#Perform cross-validation:
if useTrainCV:
cv_score = cross_val_score(alg, Xtrain, Ytrain, cv=cv_fold, scoring='f1_micro')
print ("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" %
(np.mean(cv_score), np.std(cv_score), np.min(cv_score), np.max(cv_score)))
    #Print Feature Importance
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
"""
Explanation: Before processing further, we define a function which will help us create XGBoost models and perform cross-validation.
End of explanation
"""
xgb1= XGBClassifier(
learning_rate=0.05,
objective = 'multi:softmax',
nthread = 4,
seed = seed
)
xgb1
modelfit(xgb1, X_train, Y_train)
"""
Explanation: General Approach for Parameter Tuning
We are going to perform the steps as follows:
1.Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems.
2.Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
3.Tune tree-based parameters(max_depth, min_child_weight, gamma, subsample, colsample_bytree) for decided learning rate and number of trees.
4.Tune regularization parameters(lambda, alpha) for xgboost which can help reduce model complexity and enhance performance.
5.Lower the learning rate and decide the optimal parameters.
Step 1:Fix learning rate and number of estimators for tuning tree-based parameters
In order to decide on boosting parameters, we need to set some initial values of other parameters. Lets take the following values:
1.max_depth = 5
2.min_child_weight = 1
3.gamma = 0
4.subsample, colsample_bytree = 0.8 : This is a commonly used start value.
5.scale_pos_weight = 1
Please note that all the above are just initial estimates and will be tuned later. Lets take the default learning rate of 0.1 here and check the optimum number of trees using cv function of xgboost. The function defined above will do it for us.
End of explanation
"""
param_test1={
'n_estimators':range(20, 100, 10)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs1.fit(X_train, Y_train)
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
gs1.best_estimator_
param_test2={
'max_depth':range(5,16,2),
'min_child_weight':range(1,15,2)
}
gs2 = GridSearchCV(gs1.best_estimator_,param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs2.fit(X_train, Y_train)
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
modelfit(gs2.best_estimator_, X_train, Y_train)
"""
Explanation: Step 2: Tune max_depth and min_child_weight
End of explanation
"""
param_test3={
'gamma':[0,.05,.1,.15,.2,.3,.4],
'subsample':[0.6,.7,.75,.8,.85,.9],
'colsample_bytree':[i/10.0 for i in range(4,10)]
}
gs3 = GridSearchCV(gs2.best_estimator_,param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs3.fit(X_train, Y_train)
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
gs3.best_estimator_
modelfit(gs3.best_estimator_,X_train,Y_train)
"""
Explanation: Step 3: Tune gamma
End of explanation
"""
param_test4={
'reg_alpha':[0, 1e-5, 1e-2, 0.1, 0.2],
'reg_lambda':[0, .25,.5,.75,.1]
}
gs4 = GridSearchCV(gs3.best_estimator_,param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs4.fit(X_train, Y_train)
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
modelfit(gs4.best_estimator_,X_train, Y_train)
gs4.best_estimator_
param_test5={
'reg_alpha':[.15,0.2,.25,.3,.4],
}
gs5 = GridSearchCV(gs4.best_estimator_,param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs5.fit(X_train, Y_train)
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
modelfit(gs5.best_estimator_, X_train, Y_train)
gs5.best_estimator_
"""
Explanation: Step 5: Tuning Regularization Parameters
End of explanation
"""
xgb4 = XGBClassifier(
learning_rate = 0.025,
n_estimators=120,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb4,X_train, Y_train)
xgb5 = XGBClassifier(
learning_rate = 0.00625,
n_estimators=480,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb5,X_train, Y_train)
"""
Explanation: Step 6: Reducing Learning Rate
End of explanation
"""
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
X_train_nowell = X_train.drop(['Well Name'], axis=1)
Y_train = data['Facies' ] - 1
# Final recommended model based on the extensive parameters search
model_final = gs5.best_estimator_
model_final.fit( X_train_nowell , Y_train , eval_metric = 'merror' )
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
    # Train the model based on training data (clone the tuned estimator so the
    # all-data fit above stays untouched for the later test-set prediction)
    from sklearn.base import clone
    model = clone(model_final)
    model.fit(train_X, train_Y, eval_metric='merror')
    # Predict on the test set
    predictions = model.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
"""
Explanation: Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
End of explanation
"""
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction4.csv')
test_data[test_data['Well Name']=='STUART'].head()
test_data[test_data['Well Name']=='CRAWFORD'].head()
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors)
"""
Explanation: Use final model to predict the given test data set
End of explanation
"""
n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0, 0.25, n)
y = 0.75*x + 1 + noise
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.scatter(x, y)
ax.set_xlim([0,2])
ax.set_ylim([0,3.1])
"""
Explanation: Introduction
Linear regression, like any regression method, consists of two parts:
* a regression model = a structure with parameters
* a regression method to estimate which parameters minimize a residual criterion between the points and the model
When least squares methods are used, that criterion is the sum of the squares of the distances between each data point and the corresponding model value.
The linear regression model
Simple linear regression
Let us consider a problem with only one independent (explanatory) variable $x$ and one dependent variable $y$.
With $n \in \mathbb{N}^*$, let ${(x_i,y_i),\, i=1,\dots,n}$ be a set of points.
End of explanation
"""
def fit(x, y, with_constant=True):
    # use ddof=1 throughout so the covariance and variance estimators are consistent
    beta = np.cov(x, y, ddof=1)[0][1] / np.var(x, ddof=1)
    if with_constant:
        alpha = np.mean(y) - beta * np.mean(x)
    else:
        alpha = 0
    r = np.corrcoef(x, y)[0][1]  # Pearson correlation coefficient
    mse = np.sum((y - alpha - beta*x)**2) / len(x)
    return([beta, alpha, r, mse])
beta,alpha,r,mse = fit(x,y)
print('alpha: {:.2f}, beta: {:.2f}'.format(alpha,beta))
print('r squared: {:.2f}'.format(r*r))
print('MSE: {:.2f}'.format(mse))
def fit_plot(x,y,noise,beta,alpha):
fig, ax = plt.subplots(1, 3, figsize=(18,4))
ax[0].scatter(x, y)
x_ = np.linspace(0,2,5)
ax[0].plot(x_, alpha + beta*x_, color='orange', linewidth=2)
ax[0].set_xlim([0,2])
ax[0].set_ylim([0,3.1])
    mse = np.sum((y-alpha-beta*x)**2)/len(x)
ax[0].set_title('MSE: {:.2f}'.format(mse), fontsize=14)
ax[1].hist(noise, alpha=0.5, bins=np.arange(-0.7,0.7,0.1));
ax[1].hist(y - alpha - beta*x, alpha=0.5, bins=np.arange(-0.7,0.7,0.1))
ax[1].legend(['original noise', 'residual']);
stats.probplot(y - alpha - beta*x, dist="norm", plot=ax[2]);
fit_plot(x,y,noise,beta,alpha)
"""
Explanation: Simple linear regression considers the model function
\begin{equation}
y = \alpha + \beta x
\end{equation}
We can write the relation between $y_i$ and $x_i$ as:
\begin{equation}
\forall i, \quad y_i = \alpha + \beta x_i + \varepsilon_i
\end{equation}
where the $\varepsilon_i$, called residuals, are what's missing between the model and the data.
(For the sake of demonstration, the points above were generated as a sample of $100$ values of $x$ drawn from a normal distribution with mean $1$ and standard deviation $0.5$, and the $y$'s were computed as $y = \frac{3}{4}x+1+\varepsilon$, where $\varepsilon$ is normally distributed with mean $0$ and standard deviation $0.25$.)
Now comes the fun part: we are looking for the $(\alpha,\beta)$ pair that provides the best fit between model and data. Best, but in which sense?
General form of linear regression
In the general case, each dependent variable $y_i,\, i = 1, \dots, n$ is associated with a $p$-vector of regressors $\mathbf{x}_i$.
The regression model above then takes the form
\begin{equation}
y_i = \beta_0 1 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \varepsilon_i,\quad i = 1, \dots, n
\end{equation}
which can be more concisely written as:
\begin{equation}
y_i = \mathbf{x}_i^T \mathbf{\beta} + \varepsilon_i,\quad i = 1, \dots, n
\end{equation}
and in vector form:
\begin{equation}
\mathbf{y} = X \mathbf{\beta} + \mathbf{\varepsilon}
\end{equation}
You may have noticed that there is no more $\alpha$ in this form: it has been replaced by $\beta_0$, which is the multiplication factor for the constant value $1$. This makes $X$ an $n \times (p+1)$ matrix and $\mathbf{\beta}$ a $(p+1)$-vector. This is more understandable with an explicit display of $X$, $\mathbf{y}$, $\mathbf{\beta}$ and $\mathbf{\varepsilon}$:
$$\mathbf{y} = \left(
\begin{array}{c}
y_1\\
y_2\\
\vdots\\
y_n\\
\end{array}\right),
\quad
X = \left(
\begin{array}{cccc}
1&x_{11}&\cdots&x_{1p}\\
1&x_{21}&\cdots&x_{2p}\\
\vdots&\vdots&\ddots&\vdots\\
1&x_{n1}&\cdots&x_{np}\\
\end{array}\right),
\quad
\mathbf{\beta} = \left(
\begin{array}{c}
\beta_0\\
\beta_1\\
\beta_2\\
\vdots\\
\beta_p\\
\end{array}\right),
\quad
\mathbf{\varepsilon} = \left(
\begin{array}{c}
\varepsilon_1\\
\varepsilon_2\\
\vdots\\
\varepsilon_n\\
\end{array}\right)
$$
Fitting the model
The best known family of regression methods rely this definition of the 'best fit': the best fit is the one that minimizes the sum of the squares of the residuals $\varepsilon_i$.
In the case of the simple linear regression, that is:
\begin{equation}
\textrm{Find} \min_{\alpha,\beta} Q(\alpha,\beta)\quad Q(\alpha,\beta) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2
\end{equation}
Ordinary / Linear least squares (OLS)
In the general case we are looking for $\hat{\beta}$ that minimizes:
\begin{equation}
S(b) = (y - Xb)^T(y-Xb)
\end{equation}
We are then looking for $\hat{\beta}$ such that
\begin{equation}
0 = \frac{dS}{db}(\hat{\beta}) = \frac{d}{db}\left.\left(y^T y - b^TX^Ty-y^TXb + b^TX^TXb\right)\right|_{b=\hat{\beta}}
\end{equation}
By matrix calculus:
$$
\begin{array}{rcl}
\frac{d}{db}(-b^TX^Ty) &=& -\frac{d}{db}(b^TX^Ty) = -X^Ty\\
\frac{d}{db}(-y^TXb) &=& -\frac{d}{db}(y^TXb) = -(y^TX)^T = -X^Ty\\
\frac{d}{db}(b^TX^TXb) &=& 2X^TXb\\
\end{array}
$$
So that we get
$$-2X^Ty + 2X^TX\hat{\beta} = 0$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: $X$ has full column rank</b><br>
That is, features should not be linearly dependent.
Under this assumption $X^TX$ is invertible, and we can write the least squares estimator for $\beta$:
\begin{equation}
\hat{\beta} = (X^TX)^{-1}X^Ty
\end{equation}
</div>
For the simple linear regression, we get:
$$\hat{\beta} = \frac{\textrm{Cov}(x,y)}{\textrm{Var}(x)},\quad \hat{\alpha} = \bar{y} - \hat{\beta} \bar{x}$$
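These closed-form expressions are easy to check numerically; here is a minimal sketch on synthetic data (the variable names and the seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(1, 0.5, n)
y = 0.75 * x + 1 + rng.normal(0, 0.25, n)

# Matrix route: build X with a column of ones and solve the normal equations.
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # [alpha_hat, beta_hat]

# Simple-regression route: beta = Cov(x, y) / Var(x), alpha = ybar - beta * xbar.
b = np.cov(x, y, ddof=1)[0][1] / np.var(x, ddof=1)
a = np.mean(y) - b * np.mean(x)
```

Both routes give the same coefficients up to floating-point error.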
End of explanation
"""
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(0, 0.25, n)
y = 0.75*x + 1 + noise
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.scatter(x, y)
ax.set_xlim([0,10])
ax.set_ylim([0,10])
x_train = x[:-40]
x_test = x[-40:]
y_train = y[:-40]
y_test = y[-40:]
"""
Explanation: Expected value
$$
\begin{array}{rcl}
\mathbf{E}[\hat{\beta}] &=& \mathbf{E}\left[(X^TX)^{-1}X^T(X\beta + \varepsilon)\right]\\
&=& \beta + \mathbf{E}\left[(X^TX)^{-1}X^T \varepsilon\right]\\
&=& \beta + \mathbf{E}\left[\mathbf{E}\left[(X^TX)^{-1}X^T \varepsilon|X\right]\right] \quad \textit{(law of total expectation)}\\
&=& \beta + \mathbf{E}\left[(X^TX)^{-1}X^T \mathbf{E}\left[\varepsilon|X\right]\right]\\
\end{array}
$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: strict exogeneity: $\mathbf{E}\left[\varepsilon|X\right] = 0$</b><br>
Under this assumption, $\mathbf{E}[\hat{\beta}] = \beta$, the ordinary least squares estimator is unbiased.
</div>
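When exogeneity holds, repeated simulation shows the average of $\hat{\beta}$ landing on the true $\beta$; a sketch (the design, seed and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
true_beta, n, n_sims = 0.75, 50, 2000
x = rng.normal(1, 0.5, n)                 # fixed design across replications
sxx = np.sum((x - x.mean())**2)

betas = np.empty(n_sims)
for i in range(n_sims):
    eps = rng.normal(0, 0.25, n)          # E[eps | X] = 0: exogeneity holds
    y = true_beta * x + 1 + eps
    betas[i] = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # OLS slope
```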
Variance
$$
\begin{array}{rcl}
\textrm{Var}(\hat{\beta}) &=& \mathbf{E}\left[\left(\hat{\beta} - \mathbf{E}[\hat{\beta}]\right)\left(\hat{\beta} - \mathbf{E}[\hat{\beta}]\right)^T\right]\\
&=& \mathbf{E}\left[(\hat{\beta} - \beta)(\hat{\beta} - \beta)^T\right]\\
&=& \mathbf{E}\left[((X^TX)^{-1}X^T\varepsilon)((X^TX)^{-1}X^T\varepsilon)^T\right]\\
&=& \mathbf{E}\left[(X^TX)^{-1}X^T\varepsilon\varepsilon^TX(X^TX)^{-1}\right]\\
&=& (X^TX)^{-1}X^T\mathbf{E}\left[\varepsilon\varepsilon^T\right]X(X^TX)^{-1}\\
\end{array}
$$
<div style="background-color:rgba(255, 0, 0, 0.1); vertical-align: middle; padding:10px 10px;">
<b>Assumption: homoskedasticity: $\mathbf{E}\left[\varepsilon_i^2|X\right] = \sigma^2$</b><br>
</div>
Then we have:
$$
\begin{array}{rcl}
\textrm{Var}(\hat{\beta}) &=& \sigma^2 (X^TX)^{-1}X^TX(X^TX)^{-1}\\
&=& \sigma^2 (X^TX)^{-1}\\
\end{array}
$$
For simple linear regression, this gives ${\displaystyle\textrm{Var}(\hat{\beta}) = \frac{\sigma^2}{\sum_{i=1}^n(x_i - \overline{x})^2}}$
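This variance formula can be checked against a simulation as well (a sketch; the design and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_sims, sigma = 50, 4000, 0.25
x = rng.normal(1, 0.5, n)                          # fixed design
sxx = np.sum((x - x.mean())**2)
theory = sigma**2 / sxx                            # Var(beta_hat) from the formula

betas = np.empty(n_sims)
for i in range(n_sims):
    y = 0.75 * x + 1 + rng.normal(0, sigma, n)
    betas[i] = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # OLS slope
```

The empirical variance of the simulated slopes matches the theoretical value to within sampling error.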
Confidence intervals for coefficients
Here we will go through the case of the simple linear regression. There are two situations: either we can make the assumption that the residuals are normally distributed, or that the number of points in the dataset is "large enough", so that the law of large numbers and the central limit theorem become applicable.
Under normality assumption
Under a normality assumption for $\varepsilon$, $\hat{\beta}$ is also normally distributed.
Then,
$$Z = \frac{\hat{\beta}-\beta}{\sqrt{\frac{\sigma^2}{\sum_{i=1}^n (x_i - \overline{x})^2}}} \sim \mathcal{N}(0,1)$$
Let $S = \frac{1}{\sigma^2} \sum_{i=1}^n \hat{\varepsilon_i}^2$. $S$ has a chi-squared distribution with $n-2$ degrees of freedom.
Then $T = \frac{Z}{\sqrt{\frac{S}{n-2}}}$ has a Student's t-distribution with $n-2$ degrees of freedom.
$T$ will be easier to use if we write it as:
$$T = \frac{\hat{\beta} - \beta}{s_{\hat{\beta}}}$$
where
$$s_{\hat{\beta}} = \sqrt{\frac{\frac{1}{n-2} \sum_{i=1}^n \hat{\varepsilon_i}^2}{\sum_{i=1}^n (x_i - \overline{x})^2}}$$
With this setting, the confidence interval for $\beta$ at confidence level $1-\alpha$ is
$$\left[\hat{\beta} - t_{1-\alpha/2}^{n-2} s_{\hat{\beta}}, \hat{\beta} + t_{1-\alpha/2}^{n-2} s_{\hat{\beta}}\right]$$
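A sketch of computing this interval with `scipy.stats` on synthetic data (a 95% interval, i.e. $\alpha = 0.05$; names are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100
x = rng.normal(1, 0.5, n)
y = 0.75 * x + 1 + rng.normal(0, 0.25, n)

sxx = np.sum((x - x.mean())**2)
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / sxx
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x
s_beta = np.sqrt(np.sum(resid**2) / (n - 2) / sxx)    # standard error of beta_hat

level = 0.05
t_crit = stats.t.ppf(1 - level / 2, df=n - 2)         # quantile t_{1-alpha/2}^{n-2}
ci = (beta_hat - t_crit * s_beta, beta_hat + t_crit * s_beta)
```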
<span style='color: red;'>TODO: confidence interval for $\alpha$</span>
Under asymptotic assumption
<span style='color: red;'>TODO</span>
Using regression as a machine learning predictor
End of explanation
"""
beta,alpha,r,mse = fit(x_train, y_train)
"""
Explanation: We can fit a linear regression model on the training data:
End of explanation
"""
y_pred = alpha + beta*x_test
def pred_plot(y_pred,y_test):
fig, ax = plt.subplots(1, 2, figsize=(12,4))
ax[0].scatter(y_test, y_pred)
ax[0].plot([0,10],[0,10],color='g')
ax[0].set_xlim([0,10])
    mse_pred = np.sum((y_test-y_pred)**2)/len(y_test)
ax[0].set_title('MSE: {:.2f}'.format(mse_pred), fontsize=14)
ax[1].hist(y_test-y_pred);
pred_plot(y_pred,y_test)
"""
Explanation: And use it to predict $y$ on test data:
End of explanation
"""
n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0.25, 0.25, n)
y = 0.75*x + noise
beta,alpha,r,mse = fit(x,y, with_constant=False)
fit_plot(x,y,noise,beta,alpha)
"""
Explanation: What could possibly go wrong? Assumptions for OLS
$X$ has full column rank
If we break this assumption, we can't even compute the OLS estimator.
strict exogeneity: $\mathbf{E}\left[\varepsilon|X\right] = 0$
If we break this one, then the OLS estimator gets biased.
End of explanation
"""
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(1, 0.25, n)
y = 0.75*x + noise
x_train = x[:-40]
x_test = x[-40:]
y_train = y[:-40]
y_test = y[-40:]
beta,alpha,r,mse = fit(x_train, y_train, with_constant=False)
y_pred = beta*x_test
pred_plot(y_pred,y_test)
"""
Explanation: What would it mean for a prediction model?
End of explanation
"""
n = 100
x = np.random.normal(1, 0.5, n)
noise = np.random.normal(0.25, 0.25, n)
y = 0.75*x + noise
beta,alpha,r,mse = fit(x,y, with_constant=True)
fit_plot(x,y,noise,beta,alpha)
n = 200
x = np.random.normal(5, 2, n)
noise = np.random.normal(1, 0.25, n)
y = 0.75*x + noise
x_train = x[:-40]
x_test = x[-40:]
y_train = y[:-40]
y_test = y[-40:]
beta,alpha,r,mse = fit(x_train, y_train, with_constant=True)
y_pred = alpha+beta*x_test
pred_plot(y_pred, y_test)
"""
Explanation: But if we have a constant in our model, it is unbiased:
End of explanation
"""
n = 100
x = np.linspace(0, 2, n)
sigma2 = 0.1*x**2
noise = np.random.normal(0, np.sqrt(sigma2), n)
y = 0.75*x + 1 + noise
beta,alpha,r,mse = fit(x,y)
fit_plot(x,y,noise,beta,alpha)
"""
Explanation: <span style='color: red;'>What happened?</span>
Homoscedasticity: $\mathbf{E}\left[\varepsilon_i^2|X\right] = \sigma^2$
What happens if we break this assumption?
End of explanation
"""
n = 100
x = np.linspace(0, 2, n)
sigma2 = 0.1*((6*x).astype(int) % 4)
noise = np.random.normal(0, np.sqrt(sigma2), n)
y = 0.75*x + 1 + noise
beta,alpha,r,mse = fit(x,y)
fit_plot(x,y,noise,beta,alpha)
"""
Explanation: When this assumption is violated, we will turn to a weighted least squares estimator.
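As a sketch of what that looks like, the weighted least squares estimator $\hat{\beta}_{WLS} = (X^TWX)^{-1}X^TWy$ with inverse-variance weights (assuming, as an idealisation, that the per-point variances $\sigma_i^2$ are known; with $W = I$ it reduces to OLS):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = np.linspace(0.1, 2, n)                 # start at 0.1 so every variance is > 0
sigma2 = 0.1 * x**2                        # heteroskedastic noise, as in the example above
y = 0.75 * x + 1 + rng.normal(0, np.sqrt(sigma2))

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / sigma2)                  # weight each point by its inverse variance

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [intercept, slope]
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

Down-weighting the noisy points restores an efficient estimate where plain OLS is merely unbiased.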
No autocorrelation: $\mathbf{E}\left[\varepsilon_i \varepsilon_j|X\right] = 0 \textrm{ if } i \neq j$
What happens if we break this assumption?
End of explanation
"""
from tools.stack import Stack
from obspy import read
%matplotlib inline
"""
Explanation: Stacking Waveforms Examples
The following notebook contains examples of using the stack.py toolbox for stacking raw and band-pass filtered seismic waveforms. Currently the script can only operate on MSEED files, but support for other formats such as SAC, SEED and SUDS will likely be added in the future. The user specifies the input path to the raw waveform, which should contain more than one trace (e.g. BHZ, BHE and BHN in a single stream). A Stack object is created from this input, and the output is produced with one of the contained functions.
Each trace in the stream is stacked in a specified way. Three options are currently available: Linear Stack, Phase Stack and Phase-Weighted Stack.
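For reference, the three schemes can be sketched with numpy/scipy in the spirit of Schimmel and Paulssen (1997); the array layout and the sharpness exponent nu are illustrative assumptions, not the toolbox's actual implementation:

```python
import numpy as np
from scipy.signal import hilbert

def stacks(traces, nu=2):
    # traces: (n_traces, n_samples) array; returns linear, phase, phase-weighted stacks
    lin = traces.mean(axis=0)                      # linear stack: plain average
    analytic = hilbert(traces, axis=1)             # analytic signal of each trace
    phasors = np.exp(1j * np.angle(analytic))      # unit phasors of instantaneous phase
    phase = np.abs(phasors.mean(axis=0))           # phase stack: coherence in [0, 1]
    pws = lin * phase**nu                          # phase-weighted stack
    return lin, phase, pws

t = np.linspace(0, 1, 512)
trace = np.sin(2 * np.pi * 5 * t)
traces = np.tile(trace, (10, 1))   # ten identical traces: perfectly coherent
lin, phase, pws = stacks(traces)
```

For perfectly coherent traces the phase stack is 1 everywhere and the phase-weighted stack reduces to the linear stack; incoherent noise drives the phase stack toward 0 and is suppressed.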
End of explanation
"""
# here is a list of all of the functions and variables that the Stack class contains
help(Stack)
# set the path to the desired waveform, the example HOLS.mseed is provided.
example_path = 'tools/examples/HOLS.mseed'
# plot the original waveform to show what form it's in.
st = read(example_path); st.plot()
"""
Explanation: The Stack object requires the MSEED path as the only necessary input. Other options include 'filter_waveform', which defaults to False, and 'band_lims', which defaults to [1,10]. If you wish to band-pass filter the waveform before stacking, set 'filter_waveform' to True and specify the frequency range to filter; 'band_lims' contains [min. frequency, max. frequency] for the band-pass filter.
End of explanation
"""
# create the stack object
STACK = Stack(example_path, filter_waveform=True, band_lims=[1,10])
STACK.lin_stack()
print('List of Linearly Stacked Seismic Waveforms: ', STACK.LS)
"""
Explanation: To create a list of the linearly stacked waveforms, run STACK.lin_stack and the list is stored in STACK.LS
End of explanation
"""
STACK.phase_stack()
print('List of Phase Stacked Seismic Waveforms: ', STACK.PS)
"""
Explanation: To create a list of the phase stacked waveforms, run STACK.phase_stack and the list is stored in STACK.PS using the method from Schimmel and Paulssen (1997).
End of explanation
"""
STACK.pw_stack()
print('List of Phase-Weighted Stacked Seismic Waveforms: ', STACK.PWS)
"""
Explanation: To create a list of the phase-weighted stacked waveforms, run STACK.pw_stack and the list is stored in STACK.PWS using the method from Schimmel and Paulssen (1997).
End of explanation
"""
STACK.plot_lin_stack(LS=None, show=True, save=False)
"""
Explanation: To create a plot of the linearly stacked waveforms, run STACK.plot_lin_stack. Options are to save or show the waveforms, and to plot either from the initialised stack or from another STACK.LS created previously. If save=True, the waveform is saved to the same location as the example, renamed to ?.lin_stack.jpg, where ? is the basename of the original MSEED input file without its extension.
End of explanation
"""
STACK.plot_phase_stack(PS=None, show=True, save=False)
"""
Explanation: To create a plot of the phase stacked waveforms, run STACK.plot_phase_stack. Options are to save or show the waveforms, and to plot either from the initialised stack or from another STACK.PS created previously. If save=True, the waveform is saved to the same location as the example, renamed to ?.phase_stack.jpg, where ? is the basename of the original MSEED input file without its extension.
End of explanation
"""
STACK.plot_pw_stack(PWS=None, show=True, save=False)
"""
Explanation: To create a plot of the phase-weighted stacked waveforms, run STACK.plot_pw_stack. Options are to save or show the waveforms, and to plot either from the initialised stack or from another STACK.PWS created previously. If save=True, the waveform is saved to the same location as the example, renamed to ?.pw_stack.jpg, where ? is the basename of the original MSEED input file without its extension.
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-h', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-H
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but where a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
LucaFoschini/UCSBDataScienceBootcamp2015 | Day01_ComputerBasics/notebooks/01 - Data Science.ipynb | cc0-1.0 | from IPython.display import Image
Image(url='http://static.squarespace.com/static/5150aec6e4b0e340ec52710a/t/51525c33e4b0b3e0d10f77ab/1364352052403/Data_Science_VD.png?format=750w')
"""
Explanation: Data Science
What's that?
End of explanation
"""
Image(url='https://upload.wikimedia.org/wikipedia/en/c/cb/Windows_Explorer_Windows_7.png')
"""
Explanation: Read the full story:
http://lucafoschini.com/notebooks/Agile%20Data%20Science%20Meetup.slides.html#/
Source (Another feature of Notebooks: Make slides out of them!):
https://github.com/LucaFoschini/lucafoschini.github.io/blob/master/notebooks/Agile%20Data%20Science%20Meetup.ipynb
Getting started
This section focuses on the 'Hacking Skills' part of the Venn diagram -- we're going to get you up and running on Linux, using git and GitHub, and starting to work on Jupyter notebooks. This is all stuff which will be used for future sections in this bootcamp, so take notes!
First up: opening the terminal. Either search for a program named 'terminal' on your computer (more on that in the links below), or use 'Applications (top left corner of your screen) -> System Tools -> Terminal'
The terminal is a powerful and efficient replacement for the file explorer:
End of explanation
"""
|
5agado/data-science-learning | machine learning/Tensorflow - Intro.ipynb | apache-2.0 | import os
import sys
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from pathlib import Path
import tensorflow as tf
%matplotlib notebook
#%matplotlib inline
models_data_folder = Path.home() / "Documents/models/"
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Intro" data-toc-modified-id="Intro-1"><span class="toc-item-num">1 </span>Intro</a></span></li><li><span><a href="#Core-(Low-Level-APIs)" data-toc-modified-id="Core-(Low-Level-APIs)-2"><span class="toc-item-num">2 </span>Core (Low Level APIs)</a></span></li><li><span><a href="#Eager-Execution" data-toc-modified-id="Eager-Execution-3"><span class="toc-item-num">3 </span>Eager Execution</a></span></li><li><span><a href="#Dataset-API" data-toc-modified-id="Dataset-API-4"><span class="toc-item-num">4 </span>Dataset API</a></span></li><li><span><a href="#Save-and-Restore-Variables" data-toc-modified-id="Save-and-Restore-Variables-5"><span class="toc-item-num">5 </span><a href="https://www.tensorflow.org/programmers_guide/saved_model" target="_blank">Save and Restore Variables</a></a></span></li><li><span><a href="#Save-and-Restore-a-Model" data-toc-modified-id="Save-and-Restore-a-Model-6"><span class="toc-item-num">6 </span><a href="https://www.tensorflow.org/programmers_guide/saved_model" target="_blank">Save and Restore a Model</a></a></span></li><li><span><a href="#Serving-Client" data-toc-modified-id="Serving-Client-7"><span class="toc-item-num">7 </span>Serving Client</a></span></li></ul></div>
Intro
Notebook revolving around the use and concepts of Tensorflow (v1.12.0).
End of explanation
"""
# create and add up two constants
a = tf.constant(3.0, dtype=tf.float32)
b = tf.constant(4.0)
total = a + b
print(a)
print(b)
print(total)
# execute graph via a Session
sess = tf.Session()
print(sess.run(total))
print(sess.run({'ab': (a, b), 'total': total})) # request multiple tensors
# variables
x = tf.placeholder(tf.float32, name='x')
y = tf.placeholder(tf.float32, name='y')
z = x + y
sess = tf.Session()
print(sess.run(z, feed_dict={x: 3, y: 4}))
"""
Explanation: Core (Low Level APIs)
Tensorflow operations are arranged into a computational graph: the graph is about building, the session is about running.
The graph nodes are represented by Operation objects, while the edges can be seen as Tensor objects flowing between the nodes. A Tensor does not hold values; it is just a handle returned by a function.
End of explanation
"""
tf.enable_eager_execution() # enable eager mode, need to be run at start
a = 3.0
b = 4.0
res = tf.multiply(a, b)
res
np.multiply(res, res)
"""
Explanation: Eager Execution
Available from Tensorflow v1.7, eager execution provides an imperative programming environment that evaluates operations immediately. In this environment a Tensor object actually references concrete values that can be used in other Python contexts, such as the debugger or NumPy.
End of explanation
"""
tf.enable_eager_execution() # enable eager mode, need to be run at start
dataset = tf.data.Dataset.range(10)
print(dataset.output_types)
print(dataset.output_shapes)
# apply custom function to each element of the dataset
dataset = dataset.map(lambda x : x + 1)
for i in dataset:
print(i)
# define repeatition, batching and buffers
dataset = tf.data.Dataset.range(10)
dataset = dataset.repeat(2)
dataset = dataset.batch(2)
iterator = dataset.make_one_shot_iterator()
for i in iterator:
print(i)
"""
Explanation: Dataset API
tf.data is a means to build input/pre-processing pipelines. It introduces the Dataset (a sequence of elements) and Iterator (access to the elements of a dataset) abstractions.
In eager mode you can iterate over a dataset as in common Python code. In a session you instead need to instantiate and initialize an iterator over the dataset.
End of explanation
"""
# dummy variables
#v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
#v2 = tf.get_variablea("v2", shape=[5], initializer=tf.zeros_initializer)
v1 = tf.Variable(tf.constant(0), name='v1')
v2 = tf.Variable(tf.constant(5), name='v2')
# dummy operations
inc_v1 = v1.assign(v1+1)
dec_v2 = v2.assign(v2-1)
# Save variables
# def init op and saver
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
# run some operations and save sessions
with tf.Session() as sess:
sess.run(init_op)
inc_v1.op.run()
dec_v2.op.run()
save_path = saver.save(sess,
str(models_data_folder / 'tmp' / "model.ckpt"))
print("Model saved in {}".format(save_path))
# test behavior in new session (need to rerun initializer)
with tf.Session() as sess:
sess.run(init_op)
print(v1.eval())
print(inc_v1.eval())
print(v1.eval())
# Restore Variables
# need to redefine the variable
v1 = tf.Variable(tf.constant(0), name='v1')
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess,
str(models_data_folder / 'tmp' / "model.ckpt"))
#now v1 should have the value we previously saved
print(v1.eval())
"""
Explanation: Save and Restore Variables
End of explanation
"""
# directory where model will be exported
# include version info in model path as required by TF
version = 0
export_dir = str(models_data_folder / "tf_test_models_export" / str(version))
# dummy model
x = tf.Variable(tf.constant(0), name='x')
y = tf.Variable(tf.constant(5), name='y')
f = tf.multiply(x, y, name='f')
# save model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#consider difference between eval and run
#see: https://stackoverflow.com/questions/33610685/in-tensorflow-what-is-the-difference-between-session-run-and-tensor-eval
#sess.run(f, feed_dict={x:3.0, y:5.0})
fval = f.eval(feed_dict={x:3.0, y:5.0})
print(fval)
# Init builder
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
# Build info for inputs and outputs tensors
#??Is the key associated with the tensor name?
inputs = {
'x' : tf.saved_model.utils.build_tensor_info(x),
'y' : tf.saved_model.utils.build_tensor_info(y)
}
outputs = {
'f' : tf.saved_model.utils.build_tensor_info(f)
}
# Define signature (set of inputs and outputs for the graph)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
# method used for the inference
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)
)
# Add meta-graph (dataflow graph, variables, assets, and signatures)
# to the builder
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
# ??
signature_def_map={
'predict' : prediction_signature
},
# ??
#legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
)
# Finally save builder
builder.save()
# Restore model
# redefine target
x = tf.Variable(tf.constant(1), name='x')
y = tf.Variable(tf.constant(5), name='y')
#f = tf.Operation(None, name='f')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#print(f.eval())
mg = tf.saved_model.loader.load(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
export_dir=export_dir
)
f = tf.get_default_graph().get_operation_by_name("f")
# ??Why session graph keeps getting new operations?
# isn't it clean every time we exit the "with" scope
#print(sess.graph.get_operations())
print(sess.run(f))
"""
Explanation: Save and Restore a Model
Uses SavedModelBuilder instead of Saver. Should this be done only for serving? In what way can I reload a model saved with the former and retrain?
End of explanation
"""
from grpc.beta import implementations
# reference local copy of Tensorflow Serving API Files
sys.path.append(os.path.join(os.getcwd(), os.pardir, os.pardir, 'ext_libs'))
import lib.predict_pb2 as predict_pb2
import lib.prediction_service_pb2 as prediction_service_pb2
host='127.0.0.1'
port=9000
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
# build request
request = predict_pb2.PredictRequest()
request.model_spec.name = 'ed' # model name, as given to bazel script
request.model_spec.signature_name = 'predict' # as defined in ModelBuilder
# define inputs
x = 3
y = 4
x_tensor = tf.contrib.util.make_tensor_proto(x, dtype=tf.int32)
y_tensor = tf.contrib.util.make_tensor_proto(y, dtype=tf.int32)
request.inputs['x'].CopyFrom(x_tensor)
request.inputs['y'].CopyFrom(y_tensor)
# call prediction on the server
result = stub.Predict(request, timeout=10.0)
result
"""
Explanation: Serving Client
Needs
pip install grpcio grpcio-tools
Plus Tensorflow Serving API files.
End of explanation
"""
|
eshlykov/mipt-day-after-day | labs/term-5/lab-3-3.ipynb | unlicense | import pandas
nI1 = pandas.read_excel('lab-3-3.xlsx', 'tab-1', header=None)
nI1.head(5)
nI2 = pandas.DataFrame(nI1.values[[0, 5, 6, 7, 8], :])
nI2.head()
nI3 = pandas.DataFrame(nI1.values[[0, 9, 10, 11, 12], :])
nI3.head()
import matplotlib.pyplot
r1, r500, r3000 = nI1.values, nI2.values, nI3.values
matplotlib.pyplot.figure(figsize=(18, 9))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('Resonance curves at $R = 1,\,500,\,3000\,$ Ohm', fontweight='bold')
matplotlib.pyplot.xlabel('$f$, kHz')
matplotlib.pyplot.ylabel('$I_0$, mA')
matplotlib.pyplot.errorbar(r1[0, 1:], r1[4, 1:], xerr=[0.05] * 11, yerr=r1[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.errorbar(r500[0, 1:], r500[4, 1:], xerr=[0.05] * 11, yerr=r500[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.errorbar(r3000[0, 1:], r3000[4, 1:], xerr=[0.05] * 11, yerr=r3000[2, 1:] * 4 / 3, fmt='o', c='black', lw=3)
matplotlib.pyplot.show()
"""
Explanation: Lab 2.2. Study of Forced Oscillations in an Oscillatory Circuit
Goal of the work: to study how the current in an oscillatory circuit depends on the frequency of the EMF source connected to the circuit, and to measure the circuit's resonant frequency.
Instruments and equipment: audio generator G6-46, electronic oscilloscope, module FPE-11, resistance box, capacitance box.
Setup parameters:
$C = 3$ nF;
$R_1 = 75$ Ohm;
at $L = 100$ mH the resonant frequency is approximately $9.2$ kHz.
Formulas: $U_0~[\text{V}]=U_0~[\text{div.}]\cdot k~[\text{V/div.}]$; $I_0 = \frac{U_0}{R_1}$.
Error: $\Delta I_0 = \frac{0.1k}{R_1}$ mA.
End of explanation
"""
nII = pandas.read_excel('lab-3-3.xlsx', 'tab-2', header=None)
nII.head()
import numpy
f = nII.values
x = f[0, 1:]
y = f[2, 1:]
l = numpy.mean(x * y) / numpy.mean(x ** 2)
dl = ((numpy.mean(x ** 2) * numpy.mean(y ** 2) - (numpy.mean(x * y) ** 2)) / (len(x) * (numpy.mean(x ** 2) ** 2))) ** 0.5
fff = numpy.linspace(0, 10, 100)
matplotlib.pyplot.figure(figsize=(18, 9))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('Resonant frequency as a function of capacitance', fontweight='bold')
matplotlib.pyplot.xlabel('$C$, nF')
matplotlib.pyplot.ylabel('$F$, ms$^2$')
matplotlib.pyplot.errorbar(f[0, 1:], f[2, 1:], xerr=f[0, 1:] * 0.05, yerr=f[3, 1:], fmt='o', c='black', lw=3)
matplotlib.pyplot.plot(fff, l * fff, '--', c='black', lw=2)
matplotlib.pyplot.show()
l * 1000, dl * 1000, 1 / (2 * numpy.pi * (3 * l * 10 ** (-9)) ** 0.5)
"""
Explanation: Thus, the resonant frequency is approximately $f_p = 6.9$ kHz and does not depend on the resistance. This disagrees with the expected value; most likely our coil's inductance is higher or lower than 100 mH.
The quality factor at $R = 1$ Ohm is approximately $Q \approx \frac{4.667}{7.6-6.3} \approx 5.31$, and at $R = 500$ Ohm it is $Q \approx \frac{3.467}{8.2-6.4} \approx 3.83$.
Formulas: $F = (2 \pi f_p)^{-2}~[s^2]$
Errors: $\varepsilon C = 5\%$, $\Delta F = F \sqrt{2} \frac{0.05~\text{Hz}}{f}$.
End of explanation
"""
|
schemaorg/schemaorg | software/scripts/dashboard.ipynb | apache-2.0 | # Import libraries
import unittest
import os
import pprint
from os import path, getenv
from os.path import expanduser
import logging # https://docs.python.org/2/library/logging.html#logging-levels
import glob
import argparse
import StringIO
import sys
# 3rd party, see e.g. http://pbpython.com/simple-graphing-pandas.html
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.__version__
%matplotlib inline
# Locally,
print os.getcwd()
if os.getcwd().endswith('schemaorg'):
print "Working from: ", os.getcwd()
else:
#print "cd .."
os.chdir("..")
# We'll need our lib/ dir, plus AppEngine's files:
sys.path.append( os.getcwd() )
sdk_path = getenv('APP_ENGINE', expanduser("~") + '/google-cloud-sdk/platform/google_appengine/')
sys.path.insert(0, sdk_path)
print "GAE: ", sdk_path
# previous useful:
import dev_appserver
dev_appserver.fix_sys_path()
# pprint.pprint(os.environ.copy())
import rdflib
from rdflib import Graph
from rdflib import RDF, RDFS
from rdflib.term import URIRef, Literal
from rdflib.parser import Parser
from rdflib.serializer import Serializer
from rdflib.plugins.sparql import prepareQuery
from rdflib.compare import graph_diff
import threading
from api import inLayer,read_file, full_path, read_schemas, read_extensions, read_examples, namespaces, DataCache, getMasterStore
from api import setInTestHarness, GetAllTypes, Unit, GetImmediateSubtypes # old API
from apirdflib import getNss
# Setup
setInTestHarness(True)
import sdoapp
rdflib.plugin.register("json-ld", Serializer, "rdflib_jsonld.serializer", "JsonLDSerializer")
store = getMasterStore()
read_schemas(loadExtensions=True)
read_extensions(sdoapp.ENABLED_EXTENSIONS)
graphs = list(store.graphs())
# Utilities
def findGraph(guri):
myg = ""
for g in graphs:
#print g.identifier
if str(g.identifier) == guri:
myg = g
# print "Found graph %s for graph URI %s" % (g, guri)
# print myg
if myg == "":
print "Didn't find graph %s." % guri
return None
else:
return myg
# Convert SPARQL results to Pandas DataFrame:
from pandas import DataFrame
def sparql2df(a, cast_to_numeric=True):
c = []
for b in a.bindings:
rowvals=[]
for k in a.vars:
rowvals.append(b[k])
c.append(rowvals)
df = DataFrame(c)
df.columns = [str(v) for v in a.vars]
if cast_to_numeric:
df = df.apply(lambda x: pd.to_numeric(x, errors='ignore'))
return df
"""
Explanation: This is an experimental (stub...) Schema.org dashboard using Jupyter/iPython
See corresponding Github issue #896.
End of explanation
"""
sdocore = findGraph("http://schema.org/")
bibex = findGraph("http://bib.schema.org/")
auto = findGraph("http://auto.schema.org/")
# Test OLD API (skip this, unless debugging; slow on first run.)
pprint.pprint( "Found %s types." % len( GetAllTypes() ) )
for t in ["Article", "CreativeWork", "Product", "MedicalEntity", "Event"]:
someType = Unit.GetUnit(t)
p_subtypes = GetImmediateSubtypes(someType)
pprint.pprint( "Direct subtypes of %s: %s" % ( someType.id, ', '.join([str(x.id) for x in p_subtypes]) ) )
renamed = sdocore.query("select ?x ?y where { ?x <http://schema.org/supersededBy> ?y }")
for (old, new) in renamed:
print "older: %s -> newer: %s" % (old, new)
a = bibex.query("select ?x ?p ?y where { ?x ?p ?y } LIMIT 3")
sparql2df(a)
sparql2df( sdocore.query("select ?item ?type where { ?item a ?type }") )
s="""SELECT ?child (count(?grandchild) as ?nGrandchildren) where {
?child rdfs:subClassOf schema:Thing .
OPTIONAL { ?grandchild rdfs:subClassOf ?child }
}
GROUP BY ?child order by desc(count(?grandchild))"""
ets = sparql2df( sdocore.query(s)) # Count subtypes
print ets
ets.plot(kind='bar')
# ets['nGrandchildren']
ets
"""
Explanation: Overview
We have two APIs, the original pseudo-RDF unit/node structure, plus also now RDFLib.
It is better to use the latter as it gives access to SPARQL, parsers/serializers etc.
End of explanation
"""
# Properties on each class (outgoing)
# What's wrong with this query?
#s = "SELECT distinct ?t (count(?prop) as ?n) WHERE { ?prop schema:domainIncludes ?t . } GROUP BY ?prop ORDER BY DESC(?n)"
s = """SELECT ?t (count(?prop) as ?n) WHERE { ?prop schema:domainIncludes ?t .
FILTER (?t = <http://schema.org/Movie> ) . } GROUP BY ?t ORDER BY DESC(?n)"""
sparql2df( sdocore.query( s))
# Something is wrong here. Movie should have 10 properties (2 of which are superseded: actors, directors).
# Why are there multiple rows?
# Properties on each class (*incoming*)
s = "SELECT ?t (count(?prop) as ?n) WHERE { ?prop schema:rangeIncludes ?t . } GROUP BY ?prop ORDER BY DESC(?n)"
sparql2df( sdocore.query( s))
sparql2df( sdocore.query("select ?x where { ?x rdfs:subClassOf <http://schema.org/Event> } LIMIT 30 ") )
sdocore
s = "select ?x where { ?x rdfs:subClassOf <http://schema.org/CreativeWork> } LIMIT 60 "
a = sdocore.query(s)
sparql2df( a )
# Unit Tests
# Seems we could run them here, https://amodernstory.com/2015/06/28/running-unittests-in-the-ipython-notebook/
# unittest.TextTestRunner().run(suite)
s="""
PREFIX schema: <http://schema.org/>
SELECT distinct ?t (count(?prop) as ?n)
WHERE {
?prop schema:domainIncludes ?t . FILTER (?t = <http://schema.org/Movie> ) .
}
GROUP BY ?prop ORDER BY DESC(?n)"""
sparql2df( sdocore.query(s) )
"""
Explanation: Property counts e.g. Movie
<a id="moviecount"></a>
End of explanation
"""
s = "SELECT distinct ?t (count(?prop) as ?n) WHERE { ?prop schema:domainIncludes ?t . } GROUP BY ?prop ORDER BY DESC(?n) LIMIT 5"
sparql2df( sdocore.query( s))
s = "SELECT distinct * WHERE { ?prop schema:domainIncludes <http://schema.org/Movie> . }"
sparql2df( sdocore.query( s))
"""
Explanation: Debugging.
<a id="debugcounts"></a>
Something is wrong with the count queries above.
The count query thinks 11 property terms apply directly to Movie
Querying finds just 10
http://webschemas.org/Movie shows just 8, because 'actors' and 'directors' were suppressed. But why 11 not 10?
Trying in Dydra, see http://dydra.com/danbri/schema-org-3-1-sdo-makemake/@query#counting-props-on-types
End of explanation
"""
s = """SELECT distinct ?t (count(?prop) as ?n) ?prop2
WHERE {
OPTIONAL { ?prop schema:domainIncludes ?t . }
?prop2 schema:supersededBy ?prop .
FILTER (?t = <http://schema.org/Movie>) .
}
GROUP BY ?prop ORDER BY DESC(?n)
"""
# sparql2df( sdocore.query( s))
s = """SELECT distinct * WHERE {
?prop schema:domainIncludes <http://schema.org/Movie> .
?prop2 schema:supersededBy ?prop .
}"""
sparql2df( sdocore.query( s))
"""
Explanation: debugging counts
<a id="debug_counts2"></a>
End of explanation
"""
|
krishnatray/data_science_project_portfolio | galvanize/TechnicalExcercise/Q3 Split Test Analysis_SS.ipynb | mit | # read data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# I have stored the data in a csv file. Let's load the data in pandas dataframe
split_test_df = pd.read_csv("split_test.csv")
split_test_df['conversion_rate'] = split_test_df['Quotes'] /split_test_df['Views']
# Let's look at the dataframe
split_test_df
#split_test_df.sort_values('conversion_rate').plot.bar(x = 'Bucket', y='conversion_rate')
split_test_df.plot(kind='bar', x = 'Bucket', y='conversion_rate')
# percent difference from baseline
split_test_df['percent_diff_from_base'] = (split_test_df['conversion_rate'] - split_test_df['conversion_rate'][0]) * 100 / split_test_df['conversion_rate'][0]
split_test_df
# Let's plot percent difference from base in a barchart
split_test_df.plot(kind='bar', x = 'Bucket', y='percent_diff_from_base')
"""
Explanation: Sushil K Sharma --- Q3 Split Test Analysis
Problem Statement:
Over the course of a week, I divided invites from about 3000 requests among four new variations of the quote form as well as the baseline form we've been using for the last year. Here are my results:
Baseline: 32 quotes out of 595 viewers
Variation 1: 30 quotes out of 599 viewers
Variation 2: 18 quotes out of 622 viewers
Variation 3: 51 quotes out of 606 viewers
Variation 4: 38 quotes out of 578 viewers
What's your interpretation of these results? What conclusions would you draw? What questions would you ask me about my goals and methodology? Do you have any thoughts on the experimental design? Please provide statistical justification for your conclusions and explain the choices you made in your analysis.
For the sake of your analysis, you can make whatever assumptions are necessary to make the experiment valid, so long as you state them. So, for example, your response might follow the form "I would ask you A, B and C about your goals and methodology. Assuming the answers are X, Y and Z, then here's my analysis of the results... If I were to run it again, I would consider changing...".
1. Questions to Ask
What are the assumptions of the above experiment design?
Are the above samples of baseline vs variations comparable (i.e. random)? Are the viewers for all variations taken from the same sample (e.g. gender, region, location, etc.)? It is possible that a particular location has a higher conversion rate.
Did we run the experiment for baseline vs variations on the same days? The conversion rate may be different on different days (e.g. weekday vs weekend).
Are we testing only minor variations in the invites vs a whole new design/customer experience? A/B testing is not suitable for a whole new design, as it would take some time for customers to adjust to the new interface.
2. Assumptions
Assuming variations are independent, and binomially distributed with a true population conversion rate.
A 95% confidence level is acceptable for this testing
3. Goals and Methodology:
I am going to follow the following process for finding a solution to the above Q3:
3(a) Step1: Explore Dataset
3(b) Step2: Perform Statistical Tests
3(c) Step3: Results
3(d) Step4: Final Thoughts
3(a) Step1: Explore Dataset
End of explanation
"""
# Proportion of variation 3 (i.e. comversion rate)
p1_hat = split_test_df['conversion_rate'][3]
# Sample size of variation 3 (i.e. views)
n1 = split_test_df['Views'][3]
# Proportion of baseline (i.e. comversion rate)
p2_hat = split_test_df['conversion_rate'][0]
# Sample size of baseline (i.e. views)
n2 = split_test_df['Views'][0]
print (p1_hat, p2_hat, n1, n2)
# two-proportion z statistic (note the first term uses n1)
z = (p1_hat - p2_hat) / np.sqrt((p1_hat * (1 - p1_hat) / n1) + (p2_hat * (1 - p2_hat) / n2))
print(f"Z score is: {z}")
"""
Explanation: Observations
Simply looking at the data and the bar charts above, variations 3 & 4 have higher conversion rates compared to the baseline.
Variation 1 shows about a 7% and variation 2 about a 46% drop in conversion from the baseline
Variation 3 shows about a 56% and variation 4 about a 22% lift in conversion over the baseline
Variation 3 seems to be the winner. However, we need to test whether the difference is statistically significant.
3(b) Step2: Perform Statistical Tests
To test the significance of the difference in conversion between the baseline and variation 3, I am going to use the z-test. The null hypothesis is that the difference is not statistically significant (i.e. the difference in quotes has occurred by chance). The alternate hypothesis is that the difference did not occur by random chance. As the sample sizes are large, by the central limit theorem the distribution of the test statistic can be approximated as normal. For this test I am assuming an alpha / significance level of 5% (or 0.05) would be sufficient. The critical z-score for a two-sided test at alpha=0.05 is 1.96.
Null Hypothesis, H0: p1 = p2, i.e. There is no significant difference in proportions
Where: p1 is the proportion from the first population and p2 the proportion from the second.
Alternate Hypothesis, HA: p1 not = p2
Equation for calculating Z static:
Python Code to calculate Z-static
End of explanation
"""
|
Hasil-Sharma/Neural-Networks-CS231n | assignment1/two_layer_net.ipynb | gpl-3.0 | # A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
"""
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
"""
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
"""
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
"""
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
"""
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
"""
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
"""
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
"""
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
"""
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
"""
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
"""
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
"""
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
"""
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
"""
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
"""
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
"""
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
"""
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
"""
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
input_size = 32 * 32 * 3
hidden_sizes = [500]
# learning_rates = [1e-4, 2e-4, 3e-4, 4e-4, 5e-4, 6e-4, 1e-3]
learning_rates = [3e-3]
regularization_strengths = [0.8]
num_classes = 10
best_val_acc = 0.0
best_hidden_size = None
best_learning_rate = None
best_regularization_strength = None
for hidden_size in hidden_sizes:
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=5000, batch_size=500,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=regularization_strength, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
if best_val_acc < val_acc:
best_val_acc = val_acc
best_net = net
best_hidden_size = hidden_size
best_learning_rate = learning_rate
best_regularization_strength = regularization_strength
print 'Validation accuracy: ', val_acc
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
"""
Explanation: Tune your hyperparameters
What's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
|Hidden Dim|Lr|RS|Val Acc|Num Iter|Test Acc|
|---|--|--|--|--|--| --|
|500|1e-4|0.5|0.302|1000||
|500|2e-4|0.5|0.374|1000||
|500|3e-4|0.5|0.409|1000||
|500|4e-4|0.5|0.443|1000||
|500|5e-4|0.5|0.463|1000||
|500|6e-4|0.5|0.467|1000||
|500|1e-3|0.5|0.498|1000||
|500|1e-4|0.5|0.383|5000||
|500|2e-4|0.5|0.461|5000||
|500|3e-4|0.5|0.469|5000||
|500|4e-4|0.5|0.497|5000||
|500|5e-4|0.5|0.506|5000||
|500|6e-4|0.5|0.516|5000||
|500|1e-3|0.5|0.541|5000||
|500|2e-3|0.5|0.567|5000||
|500|3e-3|0.5|0.578|5000|0.568|
|500|3e-3|0.8|0.569|5000|0.565|
|500|3e-3|0.5|0.574|10000|0.569|
End of explanation
"""
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
"""
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation
"""
|
coolharsh55/advent-of-code | 2016/python3/Day10.ipynb | mit | with open('../inputs/day10.txt', 'r') as f:
data = [line.strip() for line in f.readlines()]
"""
Explanation: Day 10: Balance Bots
author: Harshvardhan Pandit
license: MIT
link to problem statement
You come upon a factory in which many robots are zooming around handing small microchips to each other.
Upon closer examination, you notice that each bot only proceeds when it has two microchips, and once it does, it gives each one to a different bot or puts it in a marked "output" bin. Sometimes, bots take microchips from "input" bins, too.
Inspecting one of the microchips, it seems like they each contain a single number; the bots must use some logic to decide what to do with each chip. You access the local control computer and download the bots' instructions (your puzzle input).
Some of the instructions specify that a specific-valued microchip should be given to a specific bot; the rest of the instructions indicate what a given bot should do with its lower-value or higher-value chip.
For example, consider the following instructions:
- value 5 goes to bot 2
- bot 2 gives low to bot 1 and high to bot 0
- value 3 goes to bot 1
- bot 1 gives low to output 1 and high to bot 0
- bot 0 gives low to output 2 and high to output 0
- value 2 goes to bot 2
Which leads to:
- Initially, bot 1 starts with a value-3 chip, and bot 2 starts with a value-2 chip and a value-5 chip.
- Because bot 2 has two microchips, it gives its lower one (2) to bot 1 and its higher one (5) to bot 0.
- Then, bot 1 has two microchips; it puts the value-2 chip in output 1 and gives the value-3 chip to bot 0.
- Finally, bot 0 has two microchips; it puts the 3 in output 2 and the 5 in output 0.
In the end, output bin 0 contains a value-5 microchip, output bin 1 contains a value-2 microchip, and output bin 2 contains a value-3 microchip. In this configuration, bot number 2 is responsible for comparing value-5 microchips with value-2 microchips.
Based on your instructions, what is the number of the bot that is responsible for comparing value-61 microchips with value-17 microchips?
Solution logic
Let us first understand the requirements.
There are bots, which have certain properties, and actions. This is a good use case for modeling them using class.
Each bot can handle two chips.
If at any time a bot has two chips, the bot follows the rules specified in the instructions to hand over one or both chips to some other bots, or to the output bin as specified.
The question is to find out which bot (number) is responsible for handling chips 61 and 17, which means that both chips are with the bot at the same time.
We model the bot class as:
Bot:
attributes:
number
chips - 0, 1, 2
handle chips:
give low to <>
give high to <>
actions:
take chip:
add chip to chips
handle chips:
if there are two chips:
resolve which is lower and which is higher
give chips to bots/bins specified by the rules
Reading in the rules line by line, we add bots to the bot-list, and check at each iteration if a bot has two chips, whether they are the specified chips, and if yes, we get the answer as the bot number. We maintain two lists, one for the bots, and the other for output bins. The bots will be a dictionary addressed by the bot's number whose keys will be the class object. The output bins be a dictionary whose keys will be lists for holding chips in that particular output bin.
Note: We always maintain the bot's chips in sorted order so that chips[0] is always the lower-value chip and chips[1] the higher-value one.
Reading in the input
End of explanation
"""
class Bot:
def __init__(self, number):
self.number = number
self.chips = []
self.give_low = None
self.give_high = None
def __repr__(self):
return 'number: %s chips: %s low: %s high: %s' % (self.number, self.chips, self.give_low, self.give_high)
"""
Explanation: Class Bot to hold the bot instances
End of explanation
"""
from collections import namedtuple
instruction_type = namedtuple('instruction_type', ('type', 'number'))
import re
bot_instruction = re.compile('(\w+) (\d+)')
input_instruction = re.compile('(\d+)')
bot_list = {}
output_bins = {}
"""
Explanation: Creating the necessary data structures and regex patterns.
The bot instructions each contain a word specifying who the chip is from/going to and the chip number
We use this information to create the instruction_type structure to hold the type and number of the instruction.
End of explanation
"""
def ensure_bot_is_registered(bot_number):
if bot_number not in bot_list:
bot_list[bot_number] = Bot(bot_number)
def ensure_bin_is_registered(bin_number):
if bin_number not in output_bins:
output_bins[bin_number] = []
"""
Explanation: Helper functions
The ensure... functions are there to help (and clean up the code) when accessing bots and output bins as they are encountered in the instructions. Another (arguably more elegant) way to do this would be to wrap each dictionary access in a try...except and catch KeyError for missing keys.
End of explanation
"""
def bot_chip_handler(bot_number):
# retrieve the bot
bot = bot_list[bot_number]
# check if it has two chips
if len(bot.chips) < 2:
return False
# check if it has the specified chip
if 61 in bot.chips and 17 in bot.chips:
print('answer', bot.number)
# check if it has no instructions, which means no actions
if bot.give_low is None and bot.give_high is None:
return False
# give low chip
if bot.give_low.type == 'bot':
# give chip to bot
bot_list[bot.give_low.number].chips.append(bot.chips[0])
bot_list[bot.give_low.number].chips.sort()
else:
# give chip to output bin
output_bins[bot.give_low.number].append(bot.chips[0])
# give high chip
if bot.give_high.type == 'bot':
# give chip to bot
bot_list[bot.give_high.number].chips.append(bot.chips[1])
bot_list[bot.give_high.number].chips.sort()
else:
# give chip to output bin
output_bins[bot.give_high.number].append(bot.chips[1])
# remove chips from bot
bot.chips = []
    # return that the action was successful
return True
"""
Explanation: Actions for bots - giving out chips
Each bot will only act when it has two chips, and it will give those two chips according to the rules specified. The function returns True if the specified action was performed, False otherwise.
End of explanation
"""
for line in data:
if line.startswith('bot'):
# this is an instruction for the bot
# get the tokens from the line
main_bot, give_low, give_high = [
# the first is a word, the second is the bot/bin number
instruction_type(token[0].strip(), int(token[1]))
for token in bot_instruction.findall(line)]
# retrieve the bot
ensure_bot_is_registered(main_bot.number)
main_bot = bot_list[main_bot.number]
# handle low/high chip rules
# and whether they are for bots or bins
if give_low.type == 'bot':
ensure_bot_is_registered(give_low.number)
else:
ensure_bin_is_registered(give_low.number)
main_bot.give_low = give_low
if give_high.type == 'bot':
ensure_bot_is_registered(give_high.number)
else:
ensure_bin_is_registered(give_high.number)
main_bot.give_high = give_high
else:
# this is an action where bots are given chips from the input bin
# there are only two values - the chip value and the bot number
chip_number, bot_number = map(int, input_instruction.findall(line))
ensure_bot_is_registered(bot_number)
bot = bot_list[bot_number]
bot.chips.append(chip_number)
bot.chips.sort()
# run handler on this bot since it could now have two chips
bot_chip_handler(bot.number)
# run handlers on all bots in an increasing order
# this is an arbitrary choice; it is not specified in the input
condition = True
while condition:
condition = False
for bot_number in sorted(bot_list.keys()):
returned_flag = bot_chip_handler(bot_number)
if returned_flag == True:
condition = True
break
"""
Explanation: main
Here we take each line of the given input and identify whether it is an instruction for a bot, or an action where a bot is given a chip from the input bin. The instruction lines start with bot and the action lines start with value.
For bot instructions, the appropriate bot is retrieved from the list and its rules are added to it.
For actions, the bot is retrieved and the chip is added to it. The chip handler function is then run to determine what to do with the bot.
At the end, we run the chip handler on each bot in increasing order of bot number; if any bot gives out its chips, we start again, since a bot can give chips to bots with a lower number than its own.
End of explanation
"""
from functools import reduce
from operator import mul
print('answer', reduce(mul, [output_bins[i][0] for i in range(3)]))
"""
Explanation: Part Two
What do you get if you multiply together the values of one chip in each of outputs 0, 1, and 2?
Solution logic
We need to multiply one of the values in output bins 0, 1, and 2. This means either that each bin contains only one value or that we have to choose only the first one. In either case, we access only the first element in each bin.
reduce(mul, [output_bins[i][0] for i in range(3)])
This is the functional way to write -
n = 1
for i in range(3):
    n *= output_bins[i][0]
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/sklearn_ensae_course/00_introduction_machine_learning_and_data.ipynb | mit | # Start pylab inline mode, so figures will appear in the notebook
%matplotlib inline
# Import the example plot from the figures directory
from plot_sgd_separator import plot_sgd_separator
plot_sgd_separator()
"""
Explanation: 2A.ML101.0: What is machine learning?
Machine Learning is about building programs with tunable parameters
that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task: the figure shows a
collection of two-dimensional data, colored according to two different class
labels. A classification algorithm may be used to draw a dividing boundary
between the two clusters of points:
End of explanation
"""
from plot_linear_regression import plot_linear_regression
plot_linear_regression()
"""
Explanation: By drawing this separating line, we have learned a model which can generalize to new
data: if you were to drop another point onto the plane which is unlabeled, this algorithm
could now predict whether it's a blue or a red point.
The next simple task we'll look at is a regression task: a simple best-fit line
to a set of data:
End of explanation
"""
from IPython.display import Image, display
display(Image(filename='iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='iris_virginica.jpg'))
print("Iris Virginica")
"""
Explanation: Again, this is an example of fitting a model to data, but our focus here is that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data:
here, we might be given an x-value, and the model would
allow us to predict the y value.
Data in scikit-learn
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in a database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However, it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the iris data stored by scikit-learn.
The data consists of measurements of three different species of irises. There are three species of iris
in the dataset, which we can picture here:
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
"""
Explanation: Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number i must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
"""
n_samples, n_features = iris.data.shape
print(n_samples)
print(n_features)
print(iris.data[0])
"""
Explanation: The features of each sample flower are stored in the data attribute of the dataset:
End of explanation
"""
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
"""
Explanation: The information about the class of each sample is stored in the target attribute of the dataset:
End of explanation
"""
print(iris.target_names)
"""
Explanation: The names of the classes are stored in the last attribute, namely target_names:
End of explanation
"""
from matplotlib import pyplot as plt
x_index = 0
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index]);
"""
Explanation: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot. Again, we'll start by enabling
matplotlib inline mode:
End of explanation
"""
|
dynaryu/rmtk | rmtk/vulnerability/derivation_fragility/equivalent_linearization/vidic_etal_1994/vidic_etal_1994.ipynb | agpl-3.0 | import vidic_etal_1994
from rmtk.vulnerability.common import utils
%matplotlib inline
"""
Explanation: Vidic, Fajfar and Fischinger (1994)
This procedure, proposed by Vidic, Fajfar and Fischinger (1994), aims to determine displacements from an inelastic design spectrum for systems with a given ductility factor. The inelastic displacement spectrum is determined by applying a reduction factor, which depends on the natural period of the system, its ductility factor, the hysteretic behaviour, the damping, and the frequency content of the ground motion.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
"""
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
"""
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT)
"""
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
"""
damage_model_file = "../../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
"""
damping_model = "mass"
damping_ratio = 0.05
hysteresis_model = 'Q'
PDM, Sds = vidic_etal_1994.calculate_fragility(capacity_curves, gmrs,
damage_model, damping_ratio,
hysteresis_model, damping_model)
"""
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_model: This parameter defines the type of damping model to be used in the analysis. The valid options are "mass" and "stiffness".
2. damping_ratio: This parameter defines the damping ratio for the structure.
3. hysteresis_model: The valid options are 'Q' or "bilinear".
End of explanation
"""
IMT = "Sa"
period = 2.0
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
"""
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
period: this parameter defines the time period of the fundamental mode of vibration of the structure.
regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
"""
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
"""
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
"""
utils.plot_vulnerability_model(vulnerability_model)
"""
Explanation: Plot vulnerability function
End of explanation
"""
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
"""
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the function. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/LC.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: 'lc' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
"""
b.add_dataset('lc')
print(b.get_dataset(kind='lc', check_visible=False))
"""
Explanation: Dataset Parameters
Let's add a lightcurve dataset to the Bundle (see also the lc API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
"""
print(b.get_parameter(qualifier='times'))
"""
Explanation: times
End of explanation
"""
print(b.get_parameter(qualifier='fluxes'))
"""
Explanation: fluxes
End of explanation
"""
print(b.get_parameter(qualifier='sigmas'))
"""
Explanation: sigmas
End of explanation
"""
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
"""
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
"""
print(b.get_parameter(qualifier='phases_t0'))
"""
Explanation: NOTE: phases_t0 was called compute_phases_t0 before the 2.3 release.
End of explanation
"""
print(b.get_parameter(qualifier='ld_mode', component='primary'))
"""
Explanation: ld_mode
See the Limb Darkening tutorial
End of explanation
"""
b.set_value('ld_mode', component='primary', value='lookup')
print(b.get_parameter(qualifier='ld_func', component='primary'))
"""
Explanation: ld_func
ld_func will only be available if ld_mode is not 'interp', so let's set it to 'lookup'. See the limb darkening tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='ld_coeffs_source', component='primary'))
"""
Explanation: ld_coeffs_source
ld_coeffs_source will only be available if ld_mode is 'lookup'. See the limb darkening tutorial for more details.
End of explanation
"""
b.set_value('ld_mode', component='primary', value='manual')
print(b.get_parameter(qualifier='ld_coeffs', component='primary'))
"""
Explanation: ld_coeffs
ld_coeffs will only be available if ld_mode is set to 'manual'. See the limb darkening tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='passband'))
"""
Explanation: passband
See the Atmospheres & Passbands tutorial
End of explanation
"""
print(b.get_parameter(qualifier='intens_weighting'))
"""
Explanation: intens_weighting
See the Intensity Weighting tutorial
End of explanation
"""
print(b.get_parameter(qualifier='pblum_mode'))
"""
Explanation: pblum_mode
See the Passband Luminosity tutorial
End of explanation
"""
b.set_value('pblum_mode', value='component-coupled')
print(b.get_parameter(qualifier='pblum_component'))
"""
Explanation: pblum_component
pblum_component is only available if pblum_mode is set to 'component-coupled'. See the passband luminosity tutorial for more details.
End of explanation
"""
b.set_value('pblum_mode', value='dataset-coupled')
print(b.get_parameter(qualifier='pblum_dataset'))
"""
Explanation: pblum_dataset
pblum_dataset is only available if pblum_mode is set to 'dataset-coupled'. In this case we'll get a warning because there is only one dataset. See the passband luminosity tutorial for more details.
End of explanation
"""
b.set_value('pblum_mode', value='decoupled')
print(b.get_parameter(qualifier='pblum', component='primary'))
"""
Explanation: pblum
pblum is only available if pblum_mode is set to 'decoupled' (in which case there is a pblum entry per-star) or 'component-coupled' (in which case there is only an entry for the star chosen by pblum_component). See the passband luminosity tutorial for more details.
End of explanation
"""
print(b.get_parameter(qualifier='l3_mode'))
"""
Explanation: l3_mode
See the "Third" Light tutorial
End of explanation
"""
b.set_value('l3_mode', value='flux')
print(b.get_parameter(qualifier='l3'))
"""
Explanation: l3
l3 is only available if l3_mode is set to 'flux'. See the "Third" Light tutorial for more details.
End of explanation
"""
b.set_value('l3_mode', value='fraction')
print(b.get_parameter(qualifier='l3_frac'))
"""
Explanation: l3_frac
l3_frac is only available if l3_mode is set to 'fraction'. See the "Third" Light tutorial for more details.
End of explanation
"""
print(b.get_compute())
"""
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to computing fluxes and the LC dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision are explained in the section on the mesh dataset
End of explanation
"""
print(b.get_parameter(qualifier='irrad_method'))
"""
Explanation: irrad_method
End of explanation
"""
print(b.get_parameter(qualifier='boosting_method'))
"""
Explanation: For more details on irradiation, see the Irradiation tutorial
boosting_method
End of explanation
"""
print(b.get_parameter(qualifier='atm', component='primary'))
"""
Explanation: For more details on boosting, see the Beaming and Boosting example script
atm
End of explanation
"""
b.set_value('times', phoebe.linspace(0,1,101))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', kind='lc', context='model'))
print(b.get_parameter(qualifier='fluxes', kind='lc', context='model'))
"""
Explanation: For more details on atmospheres, see the Atmospheres & Passbands tutorial
Synthetics
End of explanation
"""
afig, mplfig = b.plot(show=True)
"""
Explanation: Plotting
By default, LC datasets plot as flux vs time.
End of explanation
"""
afig, mplfig = b.plot(x='phases', show=True)
"""
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
"""
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
"""
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
"""
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['intensities@lc01',
'abs_intensities@lc01',
'normal_intensities@lc01',
'abs_normal_intensities@lc01',
'pblum_ext@lc01',
'boost_factors@lc01'])
b.run_compute()
print(b.get_model().datasets)
"""
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, light-curve (i.e. passband-dependent) per-element quantities can be exposed and plotted.
Let's add a single mesh at the first time of the light-curve and re-call run_compute
End of explanation
"""
print(b.filter(dataset='lc01', kind='mesh', context='model').twigs)
"""
Explanation: These new columns are stored with the lc's dataset tag, but with the 'mesh' dataset-kind.
End of explanation
"""
afig, mplfig = b.filter(kind='mesh').plot(fc='intensities', ec='None', show=True)
"""
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the mesh dataset).
End of explanation
"""
print(b.get_parameter(qualifier='pblum_ext',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: Now let's look at each of the available fields.
pblum
For more details, see the tutorial on Passband Luminosities
End of explanation
"""
print(b.get_parameter(qualifier='abs_normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: pblum_ext is the extrinsic passband luminosity of the entire star/mesh - this is a single value (unlike most of the parameters in the mesh) and does not have per-element values.
abs_normal_intensities
End of explanation
"""
print(b.get_parameter(qualifier='normal_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: abs_normal_intensities are the absolute normal intensities per-element.
normal_intensities
End of explanation
"""
print(b.get_parameter(qualifier='abs_intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: normal_intensities are the relative normal intensities per-element.
abs_intensities
End of explanation
"""
print(b.get_parameter(qualifier='intensities',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: abs_intensities are the projected absolute intensities (towards the observer) per-element.
intensities
End of explanation
"""
print(b.get_parameter(qualifier='boost_factors',
component='primary',
dataset='lc01',
kind='mesh',
context='model'))
"""
Explanation: intensities are the projected relative intensities (towards the observer) per-element.
boost_factors
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/plots_boxplots.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
"""
Explanation: Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violinplot and beanplot.
End of explanation
"""
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
"""
Explanation: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
End of explanation
"""
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()
def beanplot(data, plot_opts={}, jitter=False):
"""helper function to try out different plot options
"""
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
plot_opts_.update(plot_opts)
sm.graphics.beanplot(data, ax=ax, labels=labels,
jitter=jitter, plot_opts=plot_opts_)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
"""
Explanation: Group age by party ID, and create bean plots with it:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]
# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small',
'label_rotation':30, 'violin_fc':(0.8, 0.8, 0.8),
'jitter_marker':'.', 'jitter_marker_size':3, 'bean_color':'#FF6F00',
'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16 # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]
ix = data.exog['income'] >= 20 # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = r'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Show all plots.
#plt.show()
"""
Explanation: Advanced Box Plots
Based on the example script example_enhanced_boxplots.py (by Ralf Gommers)
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_algo/td1a_quicksort_correction.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.algo - quicksort - correction
A graph-style implementation of quicksort.
End of explanation
"""
class NoeudTri (object):
def __init__(self,s):
self.mot = s
NoeudTri("a")
"""
Explanation: Q1: class
End of explanation
"""
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n" # \n: newline
print(NoeudTri("a"))
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n" # \n: newline
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
NoeudTri("a")
"""
Explanation: Q2: str, repr
End of explanation
"""
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
return self.mot + "\n"
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
self.avant = NoeudTri(s) # add a successor
elif s > self.mot:
self.apres = NoeudTri(s) # add a successor
else:
# equal: do nothing
pass
n = NoeudTri("a")
n.insere("b")
"""
Explanation: Q3: avant, apres
End of explanation
"""
class NoeudTri (object):
def __init__(self,s):
self.mot = s
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
self.avant = NoeudTri(s) # add a successor
elif s > self.mot:
self.apres = NoeudTri(s) # add a successor
else:
# equal: do nothing
pass
n = NoeudTri("a")
n.insere("b")
print(n)
"""
Explanation: The insere method deliberately does nothing when the word s passed as an argument is equal to the mot attribute: this amounts to ignoring duplicates in the list of words to sort.
Q4: str
End of explanation
"""
class SecondeInsertion (AttributeError):
"raised when the same word is inserted a second time"
class NoeudTri :
def __init__(self,s):
self.mot = s
# creating a new node has been moved into a method
def nouveau_noeud(self, s) :
return self.__class__(s)
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
if hasattr(self, "avant"):
self.avant.insere(s) # delegate to the child node
else:
self.avant = self.nouveau_noeud(s) # create a new node
elif s > self.mot:
if hasattr(self, "apres"):
self.apres.insere(s) # delegate to the child node
else:
self.apres = self.nouveau_noeud(s) # create a new node
else:
raise SecondeInsertion(s)
li = ["un", "deux", "unite", "dizaine", "exception", "dire", \
"programme", "abc", "xyz", "opera", "quel"]
racine = None
for mot in li:
if racine is None:
# first case: no word yet --> create the first node
racine = NoeudTri(mot)
else:
# second case: a tree already exists, add the next word to it
racine.insere(mot)
print(racine)
"""
Explanation: Inserting the words given in the exercise produces the following code:
Q5, Q6
It remains to complete the insere function so that it can find the right node at which to insert a new word. This method is recursive: if a node has the two attributes avant and apres, the new word must be inserted further down, in the nodes attached to either avant or apres. The insere method therefore picks one of the two attributes and delegates the problem to that node's own insere method.
End of explanation
"""
class NoeudTri :
def __init__(self,s):
self.mot = s
# creating a new node has been moved into a method
def nouveau_noeud(self, s) :
return self.__class__(s)
def __str__(self):
s = ""
if hasattr(self, "avant"):
s += self.avant.__str__ ()
s += self.mot + "\n"
if hasattr(self, "apres"):
s += self.apres.__str__()
return s
def __repr__(self):
return "NoeudTri('{0}')".format(self.mot)
def insere(self,s):
if s < self.mot:
if hasattr(self, "avant"):
self.avant.insere(s) # delegate to the child node
else:
self.avant = self.nouveau_noeud(s) # create a new node
elif s > self.mot:
if hasattr(self, "apres"):
self.apres.insere(s) # delegate to the child node
else:
self.apres = self.nouveau_noeud(s) # create a new node
else:
raise SecondeInsertion(s)
def dessin(self):
s = ""
if hasattr(self, "avant"):
s += 'n{0} -> n{1} [label="-"];\n'.format(id(self), id(self.avant))
s += self.avant.dessin()
s += 'n{0} [label="{1}"];\n'.format(id(self), self.mot)
if hasattr(self, "apres"):
s += 'n{0} -> n{1} [label="+"];\n'.format(id(self), id(self.apres))
s += self.apres.dessin()
return s
li = ["un", "deux", "unite", "dizaine", "exception", "dire", \
"programme", "abc", "xyz", "opera", "quel"]
racine = None
for mot in li:
if racine is None:
# first case: no word yet --> create the first node
racine = NoeudTri(mot)
else:
# second case: a tree already exists, add the next word to it
racine.insere(mot)
print(racine.dessin())
from pyensae.graphhelper import draw_diagram
img = draw_diagram("""
blockdiag {{
{0}
}}
""".format(racine.dessin()))
img
"""
Explanation: Q7: dessin
End of explanation
"""
|
SnShine/aima-python | learning.ipynb | mit | from learning import *
from notebook import *
"""
Explanation: LEARNING
This notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from learning.py. Let's start by importing everything from the module:
End of explanation
"""
%psource DataSet
"""
Explanation: CONTENTS
Machine Learning Overview
Datasets
Iris Visualization
Distance Functions
Plurality Learner
k-Nearest Neighbours
Decision Tree Learner
Naive Bayes Learner
Perceptron
Learner Evaluation
MACHINE LEARNING OVERVIEW
In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.
An agent is learning if it improves its performance on future tasks after making observations about the world.
There are three types of feedback that determine the three main types of learning:
Supervised Learning:
In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.
Example: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings.
Unsupervised Learning:
In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is clustering: detecting potential useful clusters of input examples.
Example: A taxi agent would develop a concept of good traffic days and bad traffic days without ever being given labeled examples.
Reinforcement Learning:
In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.
Example: Let's talk about an agent to play the popular Atari game—Pong. We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out its actions prior to reinforcement were most responsible for it.
DATASETS
For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasets are the following:
Fisher's Iris: Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica.
Zoo: The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean).
To make using the datasets easier, we have written a class, DataSet, in learning.py. The tutorials found here make use of this class.
Let's have a look at how it works before we get started with the algorithms.
Intro
A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use on aima-data. Two examples are the datasets mentioned above (iris.csv and zoo.csv). You can find plenty of datasets online, and a good repository of such datasets is UCI Machine Learning Repository.
In such files, each line corresponds to one item/measurement. Each individual value in a line represents a feature and usually there is a value denoting the class of the item.
You can find the code for the dataset here:
End of explanation
"""
iris = DataSet(name="iris")
"""
Explanation: Class Attributes
examples: Holds the items of the dataset. Each item is a list of values.
attrs: The indexes of the features (by default in the range of [0,f), where f is the number of features. For example, item[i] returns the feature at index i of item.
attrnames: An optional list with attribute names. For example, item[s], where s is a feature name, returns the feature of name s in item.
target: The attribute a learning algorithm will try to predict. By default the last attribute.
inputs: This is the list of attributes without the target.
values: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially None, it gets computed (by the function setproblem) from the examples.
distance: The distance function used in the learner to calculate the distance between two items. By default mean_boolean_error.
name: Name of the dataset.
source: The source of the dataset (url or other). Not used in the code.
exclude: A list of indexes to exclude from inputs. The list can include either attribute indexes (attrs) or names (attrnames).
Class Helper Functions
These functions help modify a DataSet object to your needs.
sanitize: Takes as input an example and returns it with non-input (target) attributes replaced by None. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned.
classes_to_numbers: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string.
remove_examples: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers).
Importing a Dataset
Importing from aima-data
Datasets uploaded on aima-data can be imported with the following line:
End of explanation
"""
print(iris.examples[0])
print(iris.inputs)
"""
Explanation: To check that we imported the correct dataset, we can do the following:
End of explanation
"""
iris2 = DataSet(name="iris",exclude=[1])
print(iris2.inputs)
"""
Explanation: This correctly prints the first line in the csv file and the list of attribute indexes.
When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter exclude to the attribute index or name.
End of explanation
"""
print(iris.examples[:3])
"""
Explanation: Attributes
Here we showcase the attributes.
First we will print the first three items/examples in the dataset.
End of explanation
"""
print("attrs:", iris.attrs)
print("attrnames (by default same as attrs):", iris.attrnames)
print("target:", iris.target)
print("inputs:", iris.inputs)
"""
Explanation: Then we will print attrs, attrnames, target, inputs. Notice how attrs holds values in [0,4], but since the fourth attribute is the target, inputs holds values in [0,3].
End of explanation
"""
print(iris.values[0])
"""
Explanation: Now we will print all the possible values for the first feature/attribute.
End of explanation
"""
print("name:", iris.name)
print("source:", iris.source)
"""
Explanation: Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty.
End of explanation
"""
print(iris.values[iris.target])
"""
Explanation: A useful combination of the above is dataset.values[dataset.target] which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it:
End of explanation
"""
print("Sanitized:",iris.sanitize(iris.examples[0]))
print("Original:",iris.examples[0])
"""
Explanation: Helper Functions
We will now take a look at the auxiliary functions found in the class.
First we will take a look at the sanitize function, which sets the non-input values of the given example to None.
In this case we want to hide the class of the first example, so we will sanitize it.
Note that the function doesn't actually change the given example; it returns a sanitized copy of it.
End of explanation
"""
iris2 = DataSet(name="iris")
iris2.remove_examples("virginica")
print(iris2.values[iris2.target])
"""
Explanation: Currently the iris dataset has three classes, setosa, virginica and versicolor. We want though to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will utilize the helper function remove_examples.
End of explanation
"""
print("Class of first example:",iris2.examples[0][iris2.target])
iris2.classes_to_numbers()
print("Class of first example:",iris2.examples[0][iris2.target])
"""
Explanation: We also have classes_to_numbers. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers.
End of explanation
"""
means, deviations = iris.find_means_and_deviations()
print("Setosa feature means:", means["setosa"])
print("Versicolor mean for first feature:", means["versicolor"][0])
print("Setosa feature deviations:", deviations["setosa"])
print("Virginica deviation for second feature:",deviations["virginica"][1])
"""
Explanation: As you can see "setosa" was mapped to 0.
Finally, we take a look at find_means_and_deviations. It finds the means and standard deviations of the features for each class.
End of explanation
"""
iris = DataSet(name="iris")
show_iris()
show_iris(0, 1, 3)
show_iris(1, 2, 3)
"""
Explanation: IRIS VISUALIZATION
Since we will use the iris dataset extensively in this notebook, below we provide a visualization tool that helps in comprehending the dataset and thus how the algorithms work.
We plot the dataset in a 3D space using matplotlib and the function show_iris from notebook.py. The function takes as input three parameters, i, j and k, which are indices to the iris features, "Sepal Length", "Sepal Width", "Petal Length" and "Petal Width" (0 to 3). By default we show the first three features.
End of explanation
"""
def manhattan_distance(X, Y):
return sum([abs(x - y) for x, y in zip(X, Y)])
distance = manhattan_distance([1,2], [3,4])
print("Manhattan Distance between (1,2) and (3,4) is", distance)
"""
Explanation: You can play around with the values to get a good look at the dataset.
DISTANCE FUNCTIONS
In a lot of algorithms (like the k-Nearest Neighbors algorithm), there is a need to compare items, finding how similar or close they are. For that we have many different functions at our disposal. Below are the functions implemented in the module:
Manhattan Distance (manhattan_distance)
One of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates x and y. In that grid we have two items, at the squares positioned at (1,2) and (3,4). The difference between their two coordinates is 3-1=2 and 4-2=2. If we sum these up we get 4. That means to get from (1,2) to (3,4) we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.
End of explanation
"""
def euclidean_distance(X, Y):
return math.sqrt(sum([(x - y)**2 for x, y in zip(X,Y)]))
distance = euclidean_distance([1,2], [3,4])
print("Euclidean Distance between (1,2) and (3,4) is", distance)
"""
Explanation: Euclidean Distance (euclidean_distance)
Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items.
End of explanation
"""
def hamming_distance(X, Y):
return sum(x != y for x, y in zip(X, Y))
distance = hamming_distance(['a','b','c'], ['a','b','b'])
print("Hamming Distance between 'abc' and 'abb' is", distance)
"""
Explanation: Hamming Distance (hamming_distance)
This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too.
End of explanation
"""
def mean_boolean_error(X, Y):
return mean(int(x != y) for x, y in zip(X, Y))
distance = mean_boolean_error([1,2,3], [1,4,5])
print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance)
"""
Explanation: Mean Boolean Error (mean_boolean_error)
To calculate this distance, we find the ratio of different elements over all elements of two items. For example, if the two items are (1,2,3) and (1,4,5), the ratio of different/all elements is 2/3, since they differ in two out of three elements.
End of explanation
"""
def mean_error(X, Y):
return mean([abs(x - y) for x, y in zip(X, Y)])
distance = mean_error([1,0,5], [3,10,5])
print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
"""
Explanation: Mean Error (mean_error)
This function finds the mean difference of single elements between two items. For example, if the two items are (1,0,5) and (3,10,5), their error distance is (3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12. The mean error distance therefore is 12/3=4.
End of explanation
"""
def ms_error(X, Y):
return mean([(x - y)**2 for x, y in zip(X, Y)])
distance = ms_error([1,0,5], [3,10,5])
print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance)
"""
Explanation: Mean Square Error (ms_error)
This is very similar to the Mean Error, but instead of calculating the difference between elements, we are calculating the square of the differences.
End of explanation
"""
def rms_error(X, Y):
return math.sqrt(ms_error(X, Y))
distance = rms_error([1,0,5], [3,10,5])
print("Root of Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
"""
Explanation: Root of Mean Square Error (rms_error)
This is the square root of Mean Square Error.
End of explanation
"""
psource(PluralityLearner)
"""
Explanation: PLURALITY LEARNER CLASSIFIER
Overview
The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification.
Let's see how the classifier works with the plot above. There are three classes: Class A (orange dots), Class B (blue dots) and Class C (green dots). Every point in this plot has two features (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star, and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem.
The Plurality Learner will find the class most represented in the plot. Class A has four items, Class B has three and Class C has seven. The most popular class is Class C. Therefore, the item will get classified in Class C, despite the fact that it is closer to the other two classes.
Implementation
Below follows the implementation of the PluralityLearner algorithm:
End of explanation
"""
zoo = DataSet(name="zoo")
pL = PluralityLearner(zoo)
print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]))
"""
Explanation: It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in.
The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class.
Example
For this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset.
End of explanation
"""
psource(NearestNeighborLearner)
"""
Explanation: The output for the above code is "mammal", since that is the most popular and common class in the dataset.
K-NEAREST NEIGHBOURS CLASSIFIER
Overview
The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on Scholarpedia.
Let's see how kNN works with a simple plot shown in the above picture.
We have the co-ordinates (we call them features in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of k is arbitrary. k is one of the hyperparameters of the kNN algorithm. We choose this number based on our dataset, and choosing a particular number is known as hyperparameter tuning/optimizing. We learn more about this in coming topics.
Let's put k = 3. It means we need to find the 3 nearest neighbors of this red star and classify this new point into the majority class. Observe the smaller circle, which contains three points other than the test point (red star). As there are two violet points, which form the majority, we predict the class of the red star as violet (Class B).
Similarly, if we put k = 5, you can observe that there are three yellow points, which form the majority. So, we classify our test point as yellow (Class A).
In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one.
Implementation
Below follows the implementation of the kNN algorithm:
End of explanation
"""
iris = DataSet(name="iris")
kNN = NearestNeighborLearner(iris,k=3)
print(kNN([5.1,3.0,1.1,0.1]))
"""
Explanation: It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item.
To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from example (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class.
Example
We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. To do that, we write the following:
End of explanation
"""
pseudocode("Decision Tree Learning")
"""
Explanation: The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species.
DECISION TREE LEARNER
Overview
Decision Trees
A decision tree is a flowchart that uses a tree of decisions and their possible consequences for classification. At each non-leaf node of the tree an attribute of the input is tested, and the outcome of the test selects the branch leading to the corresponding child node. At a leaf node the input is classified based on the class label of that leaf node. The paths from root to leaves represent classification rules based on which leaf nodes are assigned class labels.
Decision Tree Learning
Decision tree learning is the construction of a decision tree from class-labeled training data. The data is expected to be a tuple in which each record of the tuple is an attribute used for classification. The decision tree is built top-down, by choosing a variable at each step that best splits the set of items. There are different metrics for measuring the "best split". These generally measure the homogeneity of the target variable within the subsets.
Gini Impurity
Gini impurity of a set is the probability of a randomly chosen element to be incorrectly labeled if it was randomly labeled according to the distribution of labels in the set.
$$I_G(p) = \sum{p_i(1 - p_i)} = 1 - \sum{p_i^2}$$
We select the split that minimizes the Gini impurity in the child nodes.
Information Gain
Information gain is based on the concept of entropy from information theory. Entropy is defined as:
$$H(p) = -\sum{p_i \log_2{p_i}}$$
Information gain is the difference between the entropy of the parent and the weighted sum of the entropies of the children. The feature used for splitting is the one that provides the most information gain.
Pseudocode
You can view the pseudocode by running the cell below:
End of explanation
"""
psource(DecisionFork)
"""
Explanation: Implementation
The nodes of the tree constructed by our learning algorithm are stored using either DecisionFork or DecisionLeaf based on whether they are a parent node or a leaf node respectively.
End of explanation
"""
psource(DecisionLeaf)
"""
Explanation: DecisionFork holds the attribute, which is tested at that node, and a dict of branches. The branches store the child nodes, one for each of the attribute's values. Calling an object of this class as a function with input tuple as an argument returns the next node in the classification path based on the result of the attribute test.
End of explanation
"""
psource(DecisionTreeLearner)
"""
Explanation: The leaf node stores the class label in result. All input tuples' classification paths end on a DecisionLeaf whose result attribute decide their class.
End of explanation
"""
dataset = iris
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])
for gv in target_vals
for attr in dataset.inputs}
for example in dataset.examples:
targetval = example[dataset.target]
target_dist.add(targetval)
for attr in dataset.inputs:
attr_dists[targetval, attr].add(example[attr])
print(target_dist['setosa'])
print(attr_dists['setosa', 0][5.0])
"""
Explanation: The implementation of DecisionTreeLearner provided in learning.py uses information gain as the metric for selecting which attribute to test for splitting. The function builds the tree top-down in a recursive manner. Based on the input it makes one of the four choices:
<ol>
<li>If the input at the current step has no training data, we return the mode of classes of the input data received in the parent step (previous level of recursion).</li>
<li>If all values in training data belong to the same class it returns a `DecisionLeaf` whose class label is the class which all the data belongs to.</li>
<li>If the data has no attributes that can be tested we return the class with highest plurality value in the training data.</li>
<li>We choose the attribute which gives the highest amount of entropy gain and return a `DecisionFork` which splits based on this attribute. Each branch recursively calls `decision_tree_learning` to construct the sub-tree.</li>
</ol>
NAIVE BAYES LEARNER
Overview
Theory of Probabilities
The Naive Bayes algorithm is a probabilistic classifier, making use of Bayes' Theorem. The theorem states that the conditional probability of A given B equals the conditional probability of B given A multiplied by the probability of A, divided by the probability of B.
$$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$
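As a quick numeric sanity check of the theorem, with made-up probabilities:

```python
# P(A|B) = P(B|A) * P(A) / P(B), with made-up example probabilities
p_b_given_a = 0.8
p_a = 0.1
p_b = 0.2
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # approximately 0.4
```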
From the theory of probabilities we have the Multiplication Rule: if the events $X_{i}$ are independent, the following is true:
$$P(X_{1} \cap X_{2} \cap ... \cap X_{n}) = P(X_{1})P(X_{2})...*P(X_{n})$$
For conditional probabilities this becomes:
$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)P(X_{2}|Y)...*P(X_{n}|Y)$$
Classifying an Item
How can we use the above to classify an item though?
We have a dataset with a set of classes (C) and we want to classify an item with a set of features (F). Essentially what we want to do is predict the class of an item given the features.
For a specific class, Class, we will find the conditional probability given the item features:
$$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$
We will do this for every class and we will pick the maximum. This will be the class the item is classified in.
The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:
$$P(Class|F) = \dfrac{P(Class)P(F_{1}|Class)P(F_{2}|Class)...P(F_{n}|Class)}{P(F_{1})P(F_{2})...*P(F_{n})}$$
The calculation of the conditional probability then depends on the calculation of the following:
a) The probability of Class in the dataset.
b) The conditional probability of each feature occurring in an item classified in Class.
c) The probabilities of each individual feature.
For a), we will count how many times Class occurs in the dataset (aka how many items are classified in a particular class).
For b), if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy. In fact, it can be proven that any distribution tends to the Gaussian the larger the population gets (see Central Limit Theorem).
NOTE: If the values are continuous but we use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0' and '5.1', with the discrete approach they will be two completely different values, despite being so close. Second, if we are trying to classify an item with a feature value of '5.15', and the value does not appear for the feature, its probability will be 0. This might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.
The last one, c), is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be minuscule differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).
So as we cannot calculate the feature value probabilities, what are we going to do?
Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, A and B, we want to know which one is greater:
$$\dfrac{P(F|A)P(A)}{P(F)} vs. \dfrac{P(F|B)P(B)}{P(F)}$$
Wait, P(F) is the same for both classes! In fact, it is the same for every class, because P(F) does not depend on the class at all.
So, for c), we actually don't need to calculate it at all.
Wrapping It Up
Classifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is something very desirable and computationally delicious.
Remember though that all the above are true because we made the assumption that the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm is very efficient even with that assumption. That is why the algorithm is called the Naive Bayes Classifier. We (naively) assume that the features are independent to make computations easier.
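To make the mechanics concrete before looking at the library implementation, here is a minimal from-scratch sketch of the discrete case. The toy data and names are invented for illustration, and there is no smoothing, so unseen feature values get probability 0:

```python
from collections import Counter, defaultdict

# Toy training data: (feature tuple, class label); values are invented
examples = [(('Tall', 'Blue'), 'A'), (('Tall', 'Red'), 'A'),
            (('Short', 'Blue'), 'B'), (('Short', 'Red'), 'B'),
            (('Short', 'Blue'), 'B')]

class_counts = Counter(label for _, label in examples)
# feature_counts[label][i][v]: how often feature i takes value v within class label
feature_counts = defaultdict(lambda: defaultdict(Counter))
for feats, label in examples:
    for i, v in enumerate(feats):
        feature_counts[label][i][v] += 1

def predict(feats):
    def score(label):
        # unnormalized posterior: P(class) * prod_i P(feature_i value | class)
        p = class_counts[label] / len(examples)
        for i, v in enumerate(feats):
            p *= feature_counts[label][i][v] / class_counts[label]
        return p
    return max(class_counts, key=score)

print(predict(('Tall', 'Blue')))  # A
```

Note how the denominator P(F) never appears: only the unnormalized scores are compared, exactly as argued above.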
Implementation
The implementation of the Naive Bayes Classifier is split in two: Learning and Simple. The learning classifier takes as input a dataset and learns the needed distributions from it. It is itself split into two, for discrete and continuous features. The simple classifier takes as input not a dataset, but already calculated distributions (a dictionary of CountingProbDist objects).
Discrete
The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a CountingProbDist object.
With the below code you can see the probabilities of the class "Setosa" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, "Tall", "3", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution.
End of explanation
"""
def predict(example):
def class_probability(targetval):
return (target_dist[targetval] *
product(attr_dists[targetval, attr][example[attr]]
for attr in dataset.inputs))
return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
"""
Explanation: First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of CountingProbDist objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities.
Having calculated the different probabilities, we will move on to the predicting function. It will receive as input an item and output the most likely class. Using the above formula, it will multiply the probability of the class appearing, with the probability of each feature value appearing in the class. It will return the max result.
End of explanation
"""
psource(NaiveBayesDiscrete)
"""
Explanation: You can view the complete code by executing the next line:
End of explanation
"""
means, deviations = dataset.find_means_and_deviations()
target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
print(means["setosa"])
print(deviations["versicolor"])
"""
Explanation: Continuous
In the implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of features for each class. We make use of the find_means_and_deviations Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach.
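The gaussian helper used by the prediction code is just the normal density; a stand-alone sketch, assuming the standard normal-density formula rather than the exact helper implementation, looks like this:

```python
import math

def gaussian(mean, st_dev, x):
    """Density of x under a normal distribution with the given mean and st_dev."""
    return (1 / (math.sqrt(2 * math.pi) * st_dev)) * math.exp(-0.5 * ((x - mean) / st_dev) ** 2)

# The density peaks at the mean and is symmetric around it
print(gaussian(0, 1, 0))  # ≈ 0.3989
```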
End of explanation
"""
def predict(example):
def class_probability(targetval):
prob = target_dist[targetval]
for attr in dataset.inputs:
prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])
return prob
return argmax(target_vals, key=class_probability)
print(predict([5, 3, 1, 0.1]))
"""
Explanation: You can see the means of the features for the "Setosa" class and the deviations for "Versicolor".
The prediction function will work similarly to the Discrete algorithm. It will multiply the probability of the class occurring with the conditional probabilities of the feature values for the class.
Since we are using the Gaussian distribution, we will input the value for each feature into the Gaussian function, together with the mean and deviation of the feature. This will return the probability of the particular feature value for the given class. We will repeat for each class and pick the max value.
End of explanation
"""
psource(NaiveBayesContinuous)
"""
Explanation: The complete code of the continuous algorithm:
End of explanation
"""
psource(NaiveBayesSimple)
"""
Explanation: Simple
The simple classifier (chosen with the argument simple) does not learn from a dataset; instead it takes as input a dictionary of already calculated CountingProbDist objects and returns a predictor function. The dictionary is in the following form: (Class Name, Class Probability): CountingProbDist Object.
Each class has its own probability distribution. Given a list of features, the classifier calculates the probability of the input for each class and returns the max. The only pre-processing work is to create dictionaries for the distribution of classes (named targets) and attributes/features.
The complete code for the simple classifier:
End of explanation
"""
nBD = NaiveBayesLearner(iris, continuous=False)
print("Discrete Classifier")
print(nBD([5, 3, 1, 0.1]))
print(nBD([6, 5, 3, 1.5]))
print(nBD([7, 3, 6.5, 2]))
nBC = NaiveBayesLearner(iris, continuous=True)
print("\nContinuous Classifier")
print(nBC([5, 3, 1, 0.1]))
print(nBC([6, 5, 3, 1.5]))
print(nBC([7, 3, 6.5, 2]))
"""
Explanation: This classifier is useful when you already have calculated the distributions and you need to predict future items.
Examples
We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:
End of explanation
"""
bag1 = 'a'*50 + 'b'*30 + 'c'*15
dist1 = CountingProbDist(bag1)
bag2 = 'a'*30 + 'b'*45 + 'c'*20
dist2 = CountingProbDist(bag2)
bag3 = 'a'*20 + 'b'*20 + 'c'*35
dist3 = CountingProbDist(bag3)
"""
Explanation: Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.
Let's now take a look at the simple classifier. First we will come up with a sample problem to solve. Say we are given three bags. Each bag contains three letters ('a', 'b' and 'c') of different quantities. We are given a string of letters and we are tasked with finding from which bag the string of letters came.
Since we know the probability distribution of the letters for each bag, we can use the Naive Bayes classifier to make our prediction.
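For readers without the CountingProbDist helper at hand, the same three-bag prediction can be sketched with plain dictionaries. This uses the same letter counts and the same priors as the dictionary set up below (0.5 / 0.3 / 0.2), but omits the smoothing that CountingProbDist applies:

```python
from collections import Counter

bags = {'First': Counter('a' * 50 + 'b' * 30 + 'c' * 15),
        'Second': Counter('a' * 30 + 'b' * 45 + 'c' * 20),
        'Third': Counter('a' * 20 + 'b' * 20 + 'c' * 35)}
priors = {'First': 0.5, 'Second': 0.3, 'Third': 0.2}

def guess_bag(letters):
    def score(bag):
        total = sum(bags[bag].values())
        p = priors[bag]
        for ch in letters:
            p *= bags[bag][ch] / total  # relative frequency of the letter in the bag
        return p
    return max(bags, key=score)

print(guess_bag('aab'))    # First
print(guess_bag('ccbcc'))  # Third
```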
End of explanation
"""
dist = {('First', 0.5): dist1, ('Second', 0.3): dist2, ('Third', 0.2): dist3}
nBS = NaiveBayesLearner(dist, simple=True)
"""
Explanation: Now that we have the CountingProbDist objects for each bag/class, we will create the dictionary. Each class key carries the prior probability of picking from that bag (here 0.5, 0.3 and 0.2).
End of explanation
"""
print(nBS('aab')) # We can handle strings
print(nBS(['b', 'b'])) # And lists!
print(nBS('ccbcc'))
"""
Explanation: Now we can start making predictions:
End of explanation
"""
psource(PerceptronLearner)
"""
Explanation: The results make intuitive sense. The first bag has a high amount of 'a's, the second has a high amount of 'b's and the third has a high amount of 'c's. The classifier seems to confirm this intuition.
Note that the simple classifier doesn't distinguish between discrete and continuous values. It just takes whatever it is given. Also, the simple option on the NaiveBayesLearner overrides the continuous argument. NaiveBayesLearner(d, simple=True, continuous=False) just creates a simple classifier.
PERCEPTRON CLASSIFIER
Overview
The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First it trains its weights given a dataset and then it can classify a new item by running it through the network.
Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (one for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index.
Note that in classification problems each node represents a class. The final classification is the class/node with the max output value.
Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g.
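The forward pass described above can be sketched for a single layer in a few lines of plain Python. The weights here are made up for illustration; in the real learner they come from BackPropagationLearner:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(features, weights_per_node):
    """Return the index (class) of the output node with the largest activation."""
    activations = [sigmoid(sum(f * w for f, w in zip(features, ws)))
                   for ws in weights_per_node]
    return max(range(len(activations)), key=activations.__getitem__)

# Two output nodes (classes), three input features; the weights are made up
weights = [[0.5, -0.2, 0.1],
           [-0.4, 0.3, 0.9]]
print(forward([1.0, 2.0, 0.5], weights))  # 1
```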
Implementation
First, we train (calculate) the weights given a dataset, using the BackPropagationLearner function of learning.py. We then return a function, predict, which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.
End of explanation
"""
iris = DataSet(name="iris")
iris.classes_to_numbers()
perceptron = PerceptronLearner(iris)
print(perceptron([5, 3, 1, 0.1]))
"""
Explanation: Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in BackPropagationLearner, we will pass no hidden layers. From that function we get our network, which is just one layer, with the weights calculated.
That function predict passes the input/example through the network, calculating the dot product of the input and the weights for each node and returns the class with the max dot product.
Example
We will train the Perceptron on the iris dataset. Because though the BackPropagationLearner works with integer indexes and not strings, we need to convert class names to integers. Then, we will try and classify the item/flower with measurements of 5, 3, 1, 0.1.
End of explanation
"""
iris = DataSet(name="iris")
"""
Explanation: The correct output is 0, which means the item belongs in the first class, "setosa". Note that the Perceptron algorithm is not perfect and may produce false classifications.
LEARNER EVALUATION
In this section we will evaluate and compare algorithm performance. The dataset we will use will again be the iris one.
End of explanation
"""
nBD = NaiveBayesLearner(iris, continuous=False)
print("Error ratio for Discrete:", err_ratio(nBD, iris))
nBC = NaiveBayesLearner(iris, continuous=True)
print("Error ratio for Continuous:", err_ratio(nBC, iris))
"""
Explanation: Naive Bayes
First up we have the Naive Bayes algorithm. First we will test how well the Discrete Naive Bayes works, and then how the Continuous fares.
End of explanation
"""
kNN_1 = NearestNeighborLearner(iris, k=1)
kNN_3 = NearestNeighborLearner(iris, k=3)
kNN_5 = NearestNeighborLearner(iris, k=5)
kNN_7 = NearestNeighborLearner(iris, k=7)
print("Error ratio for k=1:", err_ratio(kNN_1, iris))
print("Error ratio for k=3:", err_ratio(kNN_3, iris))
print("Error ratio for k=5:", err_ratio(kNN_5, iris))
print("Error ratio for k=7:", err_ratio(kNN_7, iris))
"""
Explanation: The error for the Naive Bayes algorithm is very, very low; close to 0. There is also very little difference between the discrete and continuous version of the algorithm.
k-Nearest Neighbors
Now we will take a look at kNN, for different values of k. Note that k should have odd values, to break any ties between two classes.
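The voting logic itself is short; a minimal sketch of a kNN predictor in plain Python (hypothetical one-dimensional data, not the NearestNeighborLearner implementation) is:

```python
from collections import Counter

def knn_predict(points, x, k):
    """points: list of (feature_list, label). Majority vote among the k nearest."""
    nearest = sorted(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

points = [([0.0], 'a'), ([0.1], 'a'), ([1.0], 'b'), ([1.1], 'b'), ([0.9], 'b')]
print(knn_predict(points, [0.2], k=3))  # a
```

With an odd k and two classes, the vote in the last line can never tie, which is exactly why odd values of k are recommended above.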
End of explanation
"""
iris2 = DataSet(name="iris")
iris2.classes_to_numbers()
perceptron = PerceptronLearner(iris2)
print("Error ratio for Perceptron:", err_ratio(perceptron, iris2))
"""
Explanation: Notice how the error became larger and larger as k increased. This is generally the case with datasets where classes are spaced out, as is the case with the iris dataset. If items from different classes were closer together, classification would be more difficult. Usually a value of 1, 3 or 5 for k suffices.
Also note that since the training set is also the testing set, for k equal to 1 we get a perfect score, since the item we want to classify each time is already in the dataset and its closest neighbor is itself.
Perceptron
For the Perceptron, we first need to convert class names to integers. Let's see how it performs in the dataset.
End of explanation
"""
|
pombredanne/gensim | docs/notebooks/topic_coherence_tutorial.ipynb | lgpl-2.1 | import numpy as np
import logging
import pyLDAvis.gensim
import json
import warnings
warnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
from gensim.models.wrappers import LdaVowpalWabbit, LdaMallet
from gensim.corpora.dictionary import Dictionary
from numpy import array
"""
Explanation: Demonstration of the topic coherence pipeline in Gensim
Introduction
We will be using the u_mass and c_v coherence for two different LDA models: a "good" and a "bad" LDA model. The good LDA model will be trained over 50 iterations and the bad one for 1 iteration. Hence, in theory, the good LDA model will be able to come up with better, more human-understandable topics. Therefore the coherence measure output for the good LDA model should be higher (better) than that for the bad LDA model, simply because the good LDA model usually comes up with topics that are more human-interpretable.
End of explanation
"""
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
"""
Explanation: Set up logging
End of explanation
"""
texts = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Set up corpus
As stated in table 2 from this paper, this corpus essentially has two classes of documents. First five are about human-computer interaction and the other four are about graphs. We will be setting up two LDA models. One with 50 iterations of training and the other with just 1. Hence the one with 50 iterations ("better" model) should be able to capture this underlying pattern of the corpus better than the "bad" LDA model. Therefore, in theory, our topic coherence for the good LDA model should be greater than the one for the bad LDA model.
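The Dictionary/doc2bow step amounts to a token-to-id map plus per-document counts; a stdlib sketch of the same transformation (id assignment here is first-seen order, which need not match gensim's ordering) is:

```python
from collections import Counter

texts = [['human', 'interface', 'computer'],
         ['graph', 'minors', 'survey']]

# Assign an integer id to each token in first-seen order
token2id = {}
for text in texts:
    for tok in text:
        token2id.setdefault(tok, len(token2id))

def doc2bow(text):
    """Return sorted (token_id, count) pairs for one document."""
    counts = Counter(token2id[tok] for tok in text)
    return sorted(counts.items())

print(doc2bow(['graph', 'graph', 'human']))  # [(0, 1), (3, 2)]
```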
End of explanation
"""
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
"""
Explanation: Set up two topic models
We'll be setting up two different LDA Topic models. A good one and bad one. To build a "good" topic model, we'll simply train it using more iterations than the bad one. Therefore the u_mass coherence should in theory be better for the good model than the bad one since it would be producing more "human-interpretable" topics.
End of explanation
"""
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
"""
Explanation: Using U_Mass Coherence
End of explanation
"""
print(goodcm)
"""
Explanation: View the pipeline parameters for one coherence model
Following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions being used to calculate segmentation, probability estimation, confirmation measure and aggregation as shown in figure 1 in this paper.
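As a rough illustration of what the pipeline computes, u_mass confirmation is a smoothed conditional log-probability over ordered pairs of a topic's top words. The following toy version over document frequencies is a sketch of the formula from the paper, not gensim's implementation, and it assumes every top word occurs in at least one document:

```python
import math

def u_mass(top_words, docs):
    """docs: list of token sets. Sums log((D(wi, wj) + 1) / D(wj)) over ordered pairs,
    where D counts the documents containing all of the given words."""
    def D(*words):
        return sum(all(w in doc for w in words) for doc in docs)
    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            score += math.log((D(top_words[i], top_words[j]) + 1) / D(top_words[j]))
    return score

docs = [{'graph', 'trees'}, {'graph', 'minors', 'trees'}, {'user', 'system'}]
# Words that co-occur in documents score higher than words that never do
print(u_mass(['graph', 'trees'], docs))
print(u_mass(['graph', 'user'], docs))
```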
End of explanation
"""
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
print(goodcm.get_coherence())
print(badcm.get_coherence())
"""
Explanation: Interpreting the topics
As we will see below using LDA visualization, the better model comes up with two topics composed of the following words:
1. goodLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "eps", "interface" etc which captures the first set of documents.
- Topic 2: More weightage assigned to words such as "graph", "trees", "survey" which captures the topic in the second set of documents.
2. badLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "trees", "graph" which doesn't make the topic clear enough.
- Topic 2: More weightage assigned to words such as "system", "trees", "graph", "user" which is similar to the first topic. Hence both topics are not human-interpretable.
Therefore, the topic coherence for the goodLdaModel should be greater than that for the badLdaModel, since the topics it comes up with are more human-interpretable. We will see this using the u_mass and c_v topic coherence measures.
Visualize topic models
End of explanation
"""
goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
badcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
"""
Explanation: Using C_V coherence
End of explanation
"""
print(goodcm)
"""
Explanation: Pipeline parameters for C_V coherence
End of explanation
"""
print(goodcm.get_coherence())
print(badcm.get_coherence())
"""
Explanation: Print coherence values
End of explanation
"""
model1 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=50)
model2 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=1)
cm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')
cm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')
print(cm1.get_coherence())
print(cm2.get_coherence())
model1 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=50)
model2 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=1)
cm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')
cm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')
print(cm1.get_coherence())
print(cm2.get_coherence())
"""
Explanation: Support for wrappers
This API supports gensim's ldavowpalwabbit and ldamallet wrappers as input parameter to model.
End of explanation
"""
|
kringen/IOT-Back-Brace | data_collection/ProcessSensorReadings.ipynb | apache-2.0 | import json
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import explode
from pyspark.ml.feature import VectorAssembler
from pyspark.mllib.tree import RandomForest, RandomForestModel
#custom modules
import MySQLConnection
"""
IMPORTANT: MUST use class paths when using spark-submit
$SPARK_HOME/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.2,mysql:mysql-connector-java:5.1.28 ProcessSensorReadings.py
"""
"""
Explanation: Script to Process the Sensor Readings - ProcessSensorReadings.py
Overview:
This Script uses Spark Streaming to read Kafka topics as they come in and then insert them into a MySQL database. There are two main methods:
Read actual sensor readings: Kafka Topic (LumbarSensorReadings) -> writeLumbarReadings -> MySQL table: SensorReadings
Read Training sensor readings: Kafka Topic (LumbarSensorTrainingReadings) -> writeLumbarTrainingReadings -> MySQL table: SensorTrainingReadings
This script requires the JDBC Driver in order to connect to to a MySQL database.
End of explanation
"""
def writeLumbarReadings(time, rdd):
try:
# Convert RDDs of the words DStream to DataFrame and run SQL query
connectionProperties = MySQLConnection.getDBConnectionProps('/home/erik/mysql_credentials.txt')
sqlContext = SQLContext(rdd.context)
if rdd.isEmpty() == False:
lumbarReadings = sqlContext.jsonRDD(rdd)
lumbarReadingsIntermediate = lumbarReadings.selectExpr("readingID","readingTime","deviceID","metricTypeID","uomID","actual.y AS actualYaw","actual.p AS actualPitch","actual.r AS actualRoll","setPoints.y AS setPointYaw","setPoints.p AS setPointPitch","setPoints.r AS setPointRoll")
assembler = VectorAssembler(
inputCols=["actualPitch"], # Must be in same order as what was used to train the model. Testing using only pitch since model has limited dataset.
outputCol="features")
lumbarReadingsIntermediate = assembler.transform(lumbarReadingsIntermediate)
predictions = loadedModel.predict(lumbarReadingsIntermediate.map(lambda x: x.features))
predictionsDF = lumbarReadingsIntermediate.map(lambda x: x.readingID).zip(predictions).toDF(["readingID","positionID"])
combinedDF = lumbarReadingsIntermediate.join(predictionsDF, lumbarReadingsIntermediate.readingID == predictionsDF.readingID).drop(predictionsDF.readingID)
combinedDF = combinedDF.drop("features")
combinedDF.show()
combinedDF.write.jdbc("jdbc:mysql://localhost/biosensor", "SensorReadings", properties=connectionProperties)
except:
pass
"""
Explanation: The "writeLumbarReadings" method takes the rdd received from Spark Streaming as an input. It then extracts the JSON data and converts it to a SQLContext dataframe.
After this it creates a new column in the dataframe that contains the "feature vector" that will be used to predict the posture.
The prediction process uses a model that was created and saved previously; it uses the feature vector to predict the posture.
Finally, the extra feature column is dropped and the final dataframe is inserted into the MySQL database using JDBC.
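Outside of Spark, the flattening that selectExpr performs can be sketched with the stdlib. The JSON payload shape below is inferred from the field names in the query and is hypothetical:

```python
import json

# Hypothetical reading in the shape implied by the selectExpr fields above
raw = ('{"readingID": 1, "readingTime": "2016-05-01T12:00:00", "deviceID": "dev-1", '
       '"metricTypeID": 1, "uomID": 1, '
       '"actual": {"y": 10.0, "p": -4.2, "r": 0.5}, '
       '"setPoints": {"y": 0.0, "p": 0.0, "r": 0.0}}')

def flatten(msg):
    d = json.loads(msg)
    return {'readingID': d['readingID'],
            'deviceID': d['deviceID'],
            'actualYaw': d['actual']['y'],
            'actualPitch': d['actual']['p'],
            'actualRoll': d['actual']['r'],
            'setPointYaw': d['setPoints']['y'],
            'setPointPitch': d['setPoints']['p'],
            'setPointRoll': d['setPoints']['r']}

print(flatten(raw)['actualPitch'])  # -4.2
```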
End of explanation
"""
def writeLumbarTrainingReadings(time, rddTraining):
try:
# Convert RDDs of the words DStream to DataFrame and run SQL query
connectionProperties = MySQLConnection.getDBConnectionProps('/home/erik/mysql_credentials.txt')
sqlContext = SQLContext(rddTraining.context)
if rddTraining.isEmpty() == False:
lumbarTrainingReading = sqlContext.jsonRDD(rddTraining)
lumbarTrainingReadingFinal = lumbarTrainingReading.selectExpr("deviceID","metricTypeID","uomID","positionID","actual.y AS actualYaw","actual.p AS actualPitch","actual.r AS actualRoll","setPoints.y AS setPointYaw","setPoints.p AS setPointPitch","setPoints.r AS setPointRoll")
lumbarTrainingReadingFinal.write.jdbc("jdbc:mysql://localhost/biosensor", "SensorTrainingReadings", properties=connectionProperties)
except:
pass
"""
Explanation: The "writeLumbarTrainingReadings" method also accepts an RDD from Spark Streaming but does not need to do any machine learning processing since we already know the posture from the JSON data.
Readings are simply transformed to a SQLContext dataframe and then inserted into the MySQL training readings table.
End of explanation
"""
if __name__ == "__main__":
sc = SparkContext(appName="Process Lumbar Sensor Readings")
ssc = StreamingContext(sc, 2) # 2 second batches
loadedModel = RandomForestModel.load(sc, "../machine_learning/models/IoTBackBraceRandomForest.model")
#Process Readings
streamLumbarSensor = KafkaUtils.createDirectStream(ssc, ["LumbarSensorReadings"], {"metadata.broker.list": "localhost:9092"})
lineSensorReading = streamLumbarSensor.map(lambda x: x[1])
lineSensorReading.foreachRDD(writeLumbarReadings)
#Process Training Readings
streamLumbarSensorTraining = KafkaUtils.createDirectStream(ssc, ["LumbarSensorTrainingReadings"], {"metadata.broker.list": "localhost:9092"})
lineSensorTrainingReading = streamLumbarSensorTraining.map(lambda x: x[1])
lineSensorTrainingReading.foreachRDD(writeLumbarTrainingReadings)
# Run and then wait for termination signal
ssc.start()
ssc.awaitTermination()
"""
Explanation: In the main part of the script the machine learning model is loaded, and a single StreamingContext is created with two Kafka direct streams: one for actual device readings and one for training readings. The appropriate write methods are then called on each batch.
End of explanation
"""
|
albahnsen/ML_SecurityInformatics | notebooks/09_EnsembleMethods_Bagging.ipynb | mit | import numpy as np
# set a seed for reproducibility
np.random.seed(1234)
# generate 1000 random numbers (between 0 and 1) for each model, representing 1000 observations
mod1 = np.random.rand(1000)
mod2 = np.random.rand(1000)
mod3 = np.random.rand(1000)
mod4 = np.random.rand(1000)
mod5 = np.random.rand(1000)
# each model independently predicts 1 (the "correct response") if random number was at least 0.3
preds1 = np.where(mod1 > 0.3, 1, 0)
preds2 = np.where(mod2 > 0.3, 1, 0)
preds3 = np.where(mod3 > 0.3, 1, 0)
preds4 = np.where(mod4 > 0.3, 1, 0)
preds5 = np.where(mod5 > 0.3, 1, 0)
# print the first 20 predictions from each model
print(preds1[:20])
print(preds2[:20])
print(preds3[:20])
print(preds4[:20])
print(preds5[:20])
# average the predictions and then round to 0 or 1
ensemble_preds = np.round((preds1 + preds2 + preds3 + preds4 + preds5)/5.0).astype(int)
# print the ensemble's first 20 predictions
print(ensemble_preds[:20])
# how accurate was each individual model?
print(preds1.mean())
print(preds2.mean())
print(preds3.mean())
print(preds4.mean())
print(preds5.mean())
# how accurate was the ensemble?
print(ensemble_preds.mean())
"""
Explanation: 09 - Ensemble Methods - Bagging
by Alejandro Correa Bahnsen
version 0.2, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Why are we learning about ensembling?
Very popular method for improving the predictive performance of machine learning models
Provides a foundation for understanding more sophisticated models
Lesson objectives
Students will be able to:
Define ensembling and its requirements
Identify the two basic methods of ensembling
Decide whether manual ensembling is a useful approach for a given problem
Explain bagging and how it can be applied to decision trees
Explain how out-of-bag error and feature importances are calculated from bagged trees
Explain the difference between bagged trees and Random Forests
Build and tune a Random Forest model in scikit-learn
Decide whether a decision tree or a Random Forest is a better model for a given problem
Part 1: Introduction
Ensemble learning is a widely studied topic in the machine learning community. The main idea behind
the ensemble methodology is to combine several individual base classifiers in order to have a
classifier that outperforms each of them.
Nowadays, ensemble methods are one
of the most popular and well studied machine learning techniques, and it can be
noted that since 2009 all the first-place and second-place winners of the KDD-Cup https://www.sigkdd.org/kddcup/ used ensemble methods. The core
principle in ensemble learning, is to induce random perturbations into the learning procedure in
order to produce several different base classifiers from a single training set, then combining the
base classifiers in order to make the final prediction. In order to induce the random permutations
and therefore create the different base classifiers, several methods have been proposed, in
particular:
* bagging
* pasting
* random forests
* random patches
Finally, after the base classifiers
are trained, they are typically combined using either:
* majority voting
* weighted voting
* stacking
There are three main reasons regarding why ensemble
methods perform better than single models: statistical, computational and representational. First, from a statistical point of view, when the learning set is too
small, an algorithm can find several good models within the search space that achieve the same
performance on the training set $\mathcal{S}$. Nevertheless, without a validation set, there is
a risk of choosing the wrong model. The second reason is computational; in general, algorithms
rely on some local search optimization and may get stuck in a local optima. Then, an ensemble may
solve this by focusing different algorithms to different spaces across the training set. The last
reason is representational. In most cases, for a learning set of finite size, the true function
$f$ cannot be represented by any of the candidate models. By combining several models in an
ensemble, it may be possible to obtain a model with a larger coverage across the space of
representable functions.
Example
Let's pretend that instead of building a single model to solve a binary classification problem, you created five independent models, and each model was correct about 70% of the time. If you combined these models into an "ensemble" and used their majority vote as a prediction, how often would the ensemble be correct?
End of explanation
"""
# read in and prepare the vehicle training data
import zipfile
import pandas as pd
with zipfile.ZipFile('../datasets/vehicles_train.csv.zip', 'r') as z:
f = z.open('vehicles_train.csv')
train = pd.io.parsers.read_table(f, index_col=False, sep=',')
with zipfile.ZipFile('../datasets/vehicles_test.csv.zip', 'r') as z:
f = z.open('vehicles_test.csv')
test = pd.io.parsers.read_table(f, index_col=False, sep=',')
train['vtype'] = train.vtype.map({'car':0, 'truck':1})
# read in and prepare the vehicle testing data
test['vtype'] = test.vtype.map({'car':0, 'truck':1})
train.head()
"""
Explanation: Note: As you add more models to the voting process, the probability of error decreases, which is known as Condorcet's Jury Theorem.
What is ensembling?
Ensemble learning (or "ensembling") is the process of combining several predictive models in order to produce a combined model that is more accurate than any individual model.
Regression: take the average of the predictions
Classification: take a vote and use the most common prediction, or take the average of the predicted probabilities
For ensembling to work well, the models must have the following characteristics:
Accurate: they outperform the null model
Independent: their predictions are generated using different processes
The big idea: If you have a collection of individually imperfect (and independent) models, the "one-off" mistakes made by each model are probably not going to be made by the rest of the models, and thus the mistakes will be discarded when averaging the models.
There are two basic methods for ensembling:
Manually ensemble your individual models
Use a model that ensembles for you
Theoretical performance of an ensemble
If we assume that each one of the $T$ base classifiers has a probability $\rho$ of
being correct, the probability of an ensemble making the correct decision, assuming independence,
denoted by $P_c$, can be calculated using the binomial distribution
$$P_c = \sum_{j>T/2}^{T} {{T}\choose{j}} \rho^j(1-\rho)^{T-j}.$$
Furthermore, as shown, if $T\ge3$ then:
$$
\lim_{T \to \infty} P_c= \begin{cases}
1 &\mbox{if } \rho>0.5 \\
0 &\mbox{if } \rho<0.5 \\
0.5 &\mbox{if } \rho=0.5 ,
\end{cases}
$$
leading to the conclusion that
$$
\rho \ge 0.5 \quad \text{and} \quad T\ge3 \quad \Rightarrow \quad P_c\ge \rho.
$$
Part 2: Manual ensembling
What makes a good manual ensemble?
Different types of models
Different combinations of features
Different tuning parameters
Machine learning flowchart created by the winner of Kaggle's CrowdFlower competition
End of explanation
"""
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
models = {'lr': LinearRegression(),
'dt': DecisionTreeRegressor(),
'nb': GaussianNB(),
'nn': KNeighborsRegressor()}
# Train all the models
X_train = train.iloc[:, 1:]
X_test = test.iloc[:, 1:]
y_train = train.price
y_test = test.price
for model in models.keys():
models[model].fit(X_train, y_train)
# predict test for each model
y_pred = pd.DataFrame(index=test.index, columns=models.keys())
for model in models.keys():
y_pred[model] = models[model].predict(X_test)
# Evaluate each model
from sklearn.metrics import mean_squared_error
for model in models.keys():
print(model,np.sqrt(mean_squared_error(y_pred[model], y_test)))
"""
Explanation: Train different models
End of explanation
"""
np.sqrt(mean_squared_error(y_pred.mean(axis=1), y_test))
"""
Explanation: Evaluate the error of the mean of the predictions
End of explanation
"""
# set a seed for reproducibility
np.random.seed(1)
# create an array of 1 through 20
nums = np.arange(1, 21)
print(nums)
# sample that array 20 times with replacement
print(np.random.choice(a=nums, size=20, replace=True))
"""
Explanation: Comparing manual ensembling with a single model approach
Advantages of manual ensembling:
Increases predictive accuracy
Easy to get started
Disadvantages of manual ensembling:
Decreases interpretability
Takes longer to train
Takes longer to predict
More complex to automate and maintain
Small gains in accuracy may not be worth the added complexity
Part 3: Bagging
The primary weakness of decision trees is that they don't tend to have the best predictive accuracy. This is partially due to high variance, meaning that different splits in the training data can lead to very different trees.
Bagging is a general purpose procedure for reducing the variance of a machine learning method, but is particularly useful for decision trees. Bagging is short for bootstrap aggregation, meaning the aggregation of bootstrap samples.
What is a bootstrap sample? A random sample with replacement:
End of explanation
"""
# set a seed for reproducibility
np.random.seed(123)
n_samples = train.shape[0]
n_B = 10
# create ten bootstrap samples (will be used to select rows from the DataFrame)
samples = [np.random.choice(a=n_samples, size=n_samples, replace=True) for _ in range(n_B)]
samples
# show the rows for the first decision tree
train.iloc[samples[0], :]
"""
Explanation: How does bagging work (for decision trees)?
Grow B trees using B bootstrap samples from the training data.
Train each tree on its bootstrap sample and make predictions.
Combine the predictions:
Average the predictions for regression trees
Take a vote for classification trees
Notes:
Each bootstrap sample should be the same size as the original training set.
B should be a large enough value that the error seems to have "stabilized".
The trees are grown deep so that they have low bias/high variance.
Bagging increases predictive accuracy by reducing the variance, similar to how cross-validation reduces the variance associated with train/test split (for estimating out-of-sample error) by splitting many times and averaging the results.
End of explanation
"""
from sklearn.tree import DecisionTreeRegressor
# grow each tree deep
treereg = DecisionTreeRegressor(max_depth=None, random_state=123)
# DataFrame for storing predicted price from each tree
y_pred = pd.DataFrame(index=test.index, columns=list(range(n_B)))
# grow one tree for each bootstrap sample and make predictions on testing data
for i, sample in enumerate(samples):
X_train = train.iloc[sample, 1:]
y_train = train.iloc[sample, 0]
treereg.fit(X_train, y_train)
y_pred[i] = treereg.predict(X_test)
y_pred
"""
Explanation: Build one tree for each sample
End of explanation
"""
for i in range(n_B):
print(i, np.sqrt(mean_squared_error(y_pred[i], y_test)))
"""
Explanation: Results of each tree
End of explanation
"""
y_pred.mean(axis=1)
np.sqrt(mean_squared_error(y_test, y_pred.mean(axis=1)))
"""
Explanation: Results of the ensemble
End of explanation
"""
# define the training and testing sets
X_train = train.iloc[:, 1:]
y_train = train.iloc[:, 0]
X_test = test.iloc[:, 1:]
y_test = test.iloc[:, 0]
# instruct BaggingRegressor to use DecisionTreeRegressor as the "base estimator"
from sklearn.ensemble import BaggingRegressor
bagreg = BaggingRegressor(DecisionTreeRegressor(), n_estimators=500,
bootstrap=True, oob_score=True, random_state=1)
# fit and predict
bagreg.fit(X_train, y_train)
y_pred = bagreg.predict(X_test)
y_pred
# calculate RMSE
np.sqrt(mean_squared_error(y_test, y_pred))
"""
Explanation: Bagged decision trees in scikit-learn (with B=500)
End of explanation
"""
# show the first bootstrap sample
samples[0]
# show the "in-bag" observations for each sample
for sample in samples:
print(set(sample))
# show the "out-of-bag" observations for each sample
for sample in samples:
print(sorted(set(range(n_samples)) - set(sample)))
"""
Explanation: Estimating out-of-sample error
For bagged models, out-of-sample error can be estimated without using train/test split or cross-validation!
On average, each bagged tree uses about two-thirds of the observations. For each tree, the remaining observations are called "out-of-bag" observations.
End of explanation
"""
# compute the out-of-bag R-squared score (not MSE, unfortunately!) for B=500
bagreg.oob_score_
"""
Explanation: How to calculate "out-of-bag error":
For every observation in the training data, predict its response value using only the trees in which that observation was out-of-bag. Average those predictions (for regression) or take a vote (for classification).
Compare all predictions to the actual response values in order to compute the out-of-bag error.
When B is sufficiently large, the out-of-bag error is an accurate estimate of out-of-sample error.
End of explanation
"""
# read in the data
import zipfile
with zipfile.ZipFile('../datasets/hitters.csv.zip', 'r') as z:
f = z.open('hitters.csv')
hitters = pd.read_csv(f, sep=',', index_col=False)
# remove rows with missing values
hitters.dropna(inplace=True)
hitters.head()
# encode categorical variables as integers
hitters['League'] = pd.factorize(hitters.League)[0]
hitters['Division'] = pd.factorize(hitters.Division)[0]
hitters['NewLeague'] = pd.factorize(hitters.NewLeague)[0]
hitters.head()
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# scatter plot of Years versus Hits colored by Salary
hitters.plot(kind='scatter', x='Years', y='Hits', c='Salary', colormap='jet', xlim=(0, 25), ylim=(0, 250))
# define features: exclude career statistics (which start with "C") and the response (Salary)
feature_cols = hitters.columns[hitters.columns.str.startswith('C') == False].drop('Salary')
feature_cols
# define X and y
X = hitters[feature_cols]
y = hitters.Salary
"""
Explanation: Estimating feature importance
Bagging increases predictive accuracy, but decreases model interpretability because it's no longer possible to visualize the tree to understand the importance of each feature.
However, we can still obtain an overall summary of feature importance from bagged models:
Bagged regression trees: calculate the total amount that MSE is decreased due to splits over a given feature, averaged over all trees
Bagged classification trees: calculate the total amount that Gini index is decreased due to splits over a given feature, averaged over all trees
Part 4: Random Forests
Random Forests is a slight variation of bagged trees that has even better performance:
Exactly like bagging, we create an ensemble of decision trees using bootstrapped samples of the training set.
However, when building each tree, each time a split is considered, a random sample of m features is chosen as split candidates from the full set of p features. The split is only allowed to use one of those m features.
A new random sample of features is chosen for every single tree at every single split.
For classification, m is typically chosen to be the square root of p.
For regression, m is typically chosen to be somewhere between p/3 and p.
What's the point?
Suppose there is one very strong feature in the data set. When using bagged trees, most of the trees will use that feature as the top split, resulting in an ensemble of similar trees that are highly correlated.
Averaging highly correlated quantities does not significantly reduce variance (which is the entire goal of bagging).
By randomly leaving out candidate features from each split, Random Forests "decorrelates" the trees, such that the averaging process can reduce the variance of the resulting model.
Part 5: Building and tuning decision trees and Random Forests
Major League Baseball player data from 1986-87: data, data dictionary (page 7)
Each observation represents a player
Goal: Predict player salary
End of explanation
"""
# list of values to try for max_depth
max_depth_range = range(1, 21)
# list to store the average RMSE for each value of max_depth
RMSE_scores = []
# use 10-fold cross-validation with each value of max_depth
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in 0.20
for depth in max_depth_range:
treereg = DecisionTreeRegressor(max_depth=depth, random_state=1)
    MSE_scores = cross_val_score(treereg, X, y, cv=10, scoring='neg_mean_squared_error')
RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot max_depth (x-axis) versus RMSE (y-axis)
plt.plot(max_depth_range, RMSE_scores)
plt.xlabel('max_depth')
plt.ylabel('RMSE (lower is better)')
# show the best RMSE and the corresponding max_depth
sorted(zip(RMSE_scores, max_depth_range))[0]
# max_depth=2 was best, so fit a tree using that parameter
treereg = DecisionTreeRegressor(max_depth=2, random_state=1)
treereg.fit(X, y)
# compute feature importances
pd.DataFrame({'feature':feature_cols, 'importance':treereg.feature_importances_}).sort_values('importance')
"""
Explanation: Predicting salary with a decision tree
Find the best max_depth for a decision tree using cross-validation:
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
rfreg = RandomForestRegressor()
rfreg
"""
Explanation: Predicting salary with a Random Forest
End of explanation
"""
# list of values to try for n_estimators
estimator_range = range(10, 310, 10)
# list to store the average RMSE for each value of n_estimators
RMSE_scores = []
# use 5-fold cross-validation with each value of n_estimators (WARNING: SLOW!)
for estimator in estimator_range:
rfreg = RandomForestRegressor(n_estimators=estimator, random_state=1, n_jobs=-1)
    MSE_scores = cross_val_score(rfreg, X, y, cv=5, scoring='neg_mean_squared_error')
RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot n_estimators (x-axis) versus RMSE (y-axis)
plt.plot(estimator_range, RMSE_scores)
plt.xlabel('n_estimators')
plt.ylabel('RMSE (lower is better)')
"""
Explanation: Tuning n_estimators
One important tuning parameter is n_estimators, which is the number of trees that should be grown. It should be a large enough value that the error seems to have "stabilized".
End of explanation
"""
# list of values to try for max_features
feature_range = range(1, len(feature_cols)+1)
# list to store the average RMSE for each value of max_features
RMSE_scores = []
# use 10-fold cross-validation with each value of max_features (WARNING: SLOW!)
for feature in feature_range:
rfreg = RandomForestRegressor(n_estimators=150, max_features=feature, random_state=1, n_jobs=-1)
    MSE_scores = cross_val_score(rfreg, X, y, cv=10, scoring='neg_mean_squared_error')
RMSE_scores.append(np.mean(np.sqrt(-MSE_scores)))
# plot max_features (x-axis) versus RMSE (y-axis)
plt.plot(feature_range, RMSE_scores)
plt.xlabel('max_features')
plt.ylabel('RMSE (lower is better)')
# show the best RMSE and the corresponding max_features
sorted(zip(RMSE_scores, feature_range))[0]
"""
Explanation: Tuning max_features
The other important tuning parameter is max_features, which is the number of features that should be considered at each split.
End of explanation
"""
# max_features=8 is best and n_estimators=150 is sufficiently large
rfreg = RandomForestRegressor(n_estimators=150, max_features=8, oob_score=True, random_state=1)
rfreg.fit(X, y)
# compute feature importances
pd.DataFrame({'feature':feature_cols, 'importance':rfreg.feature_importances_}).sort_values('importance')
# compute the out-of-bag R-squared score
rfreg.oob_score_
"""
Explanation: Fitting a Random Forest with the best parameters
End of explanation
"""
# check the shape of X
X.shape
# set a threshold for which features to include
# (the old rfreg.transform(X, threshold=...) API was removed; use SelectFromModel instead)
from sklearn.feature_selection import SelectFromModel
print(SelectFromModel(rfreg, threshold=0.1, prefit=True).transform(X).shape)
print(SelectFromModel(rfreg, threshold='mean', prefit=True).transform(X).shape)
print(SelectFromModel(rfreg, threshold='median', prefit=True).transform(X).shape)
# create a new feature matrix that only includes important features
X_important = SelectFromModel(rfreg, threshold='mean', prefit=True).transform(X)
# check the RMSE for a Random Forest that only includes important features
rfreg = RandomForestRegressor(n_estimators=150, max_features=3, random_state=1)
scores = cross_val_score(rfreg, X_important, y, cv=10, scoring='neg_mean_squared_error')
np.mean(np.sqrt(-scores))
"""
Explanation: Reducing X to its most important features
End of explanation
"""
|
Geosyntec/pycvc | examples/medians/0 - Setup NSQD Median computation.ipynb | bsd-3-clause | import numpy
import wqio
import pynsqd
import pycvc
def get_cvc_parameter(nsqdparam):
try:
cvcparam = list(filter(
lambda p: p['nsqdname'] == nsqdparam, pycvc.info.POC_dicts
))[0]['cvcname']
except IndexError:
cvcparam = numpy.nan
return cvcparam
def fix_nsqd_bacteria_units(df, unitscol='units'):
df[unitscol] = df[unitscol].replace(to_replace='MPN/100 mL', value='CFU/100 mL')
return df
nsqd_params = [
p['nsqdname']
for p in pycvc.info.POC_dicts
]
"""
Explanation: Load, filter, export the NSQD Dataset
The cell below imports the libraries we need and defines some functions that help us clean up the NSQD dataset.
End of explanation
"""
raw_data = pynsqd.NSQData().data
clean_data = (
raw_data
.query("primary_landuse != 'Unknown'")
.query("parameter in @nsqd_params")
.query("fraction == 'Total'")
.query("epa_rain_zone == 1")
.assign(station='outflow')
.assign(cvcparam=lambda df: df['parameter'].apply(get_cvc_parameter))
.assign(season=lambda df: df['start_date'].apply(wqio.utils.getSeason))
.drop('parameter', axis=1)
.rename(columns={'cvcparam': 'parameter'})
.pipe(fix_nsqd_bacteria_units)
.query("primary_landuse == 'Residential'")
)
"""
Explanation: Create a raw data set, then compute season and apply basic filters
(also export to CSV file)
End of explanation
"""
clean_data.groupby(by=['parameter', 'season']).size().unstack(level='season')
"""
Explanation: Show the sample counts for each parameter
End of explanation
"""
(
clean_data
.query("parameter == 'Total Suspended Solids'")
.to_csv('NSQD_Res_TSS.csv', index=False)
)
"""
Explanation: Export TSS to a CSV file
End of explanation
"""
|
SlipknotTN/udacity-deeplearning-nanodegree | tv-script-generation/dlnd_tv_script_generation_deep_dante.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/divina_commedia.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
#text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words_ordered = sorted(set(text))
# TODO: Implement Function
vocab_to_int = {word: index for index, word in enumerate(words_ordered)}
int_to_vocab = {index: word for index, word in enumerate(words_ordered)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = dict()
token_dict['.'] = "||Period||"
token_dict[','] = "||Comma||"
token_dict['"'] = "||Quotation_Mark||"
token_dict[';'] = "||Semicolon||"
token_dict['!'] = "||Exclamation_Mark||"
token_dict['?'] = "||Question_Mark||"
token_dict['('] = "||Left_Parentheses||"
token_dict[')'] = "||Right_Parentheses||"
token_dict['--'] = "||Dash||"
token_dict['\n'] = "||Return||"
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None,None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
    lstm_layers = 2  # two stacked layers (the unit test expects this final_state shape)
    # create a separate cell per layer; [lstm] * lstm_layers would reuse one cell object
    # and share its weights, which recent TensorFlow versions reject
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(lstm_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
# print(initial_state)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, ids=input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
#print(cell)
#print(inputs)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
# Shape is lstm_layers x 2 (inputs and targets) x None (batch_size) x lstm_units
#print(final_state)
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim=embed_dim)
# outputs shape is batch_size x seq_len x lstm_units
outputs, final_state = build_rnn(cell, inputs=embed)
#print(outputs.shape)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# logits shape is batch_size x seq_len x vocab_size
#print(logits.shape)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
#print("Batch_size: " + str(batch_size))
#print("Seq length: " + str(seq_length))
# Consider that targets is shifted by 1
num_batches = len(int_text)//(batch_size * seq_length + 1)
#print("Num batches: " + str(num_batches))
#print("Text length: " + str(len(int_text)))
batches = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int32)
#print(batches.shape)
# TODO: Add a smarter check
for batch_index in range(0, num_batches):
for in_batch_index in range(0, batch_size):
start_x = (batch_index * seq_length) + (seq_length * num_batches * in_batch_index)
start_y = start_x + 1
x = int_text[start_x : start_x + seq_length]
y = int_text[start_y : start_y + seq_length]
#print("batch_index: " + str(batch_index))
#print("in_batch_index: " + str(in_batch_index))
#print("start_x: " + str(start_x))
#print(x)
batches[batch_index][0][in_batch_index] = np.asarray(x)
batches[batch_index][1][in_batch_index] = np.asarray(y)
#print(batches)
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# FINAL LOSS: 0.213 - Seq length 20, LR 0.001, Epochs 250
# Number of Epochs
num_epochs = 250
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 99
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
# probs shape is batch_size x seq_len x vocab_size
probs = tf.nn.softmax(logits, name='probs')
#print(probs.shape)
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
# x and y shapes are batch_size x seq_len
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), \
loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
#print(vocab_to_int)
gen_length = 200
#prime_word = 'Inferno: Canto I'
prime_word = 'vuolsi'
prime_word = str.lower(prime_word)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = prime_word.split()
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
georgetown-analytics/machine-learning | archive/notebook/Clustering Flag Data.ipynb | mit | import os
import requests
import numpy as np
import pandas as pd
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from sklearn import manifold
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_samples, silhouette_score
from sklearn.metrics.pairwise import euclidean_distances
from time import time
%matplotlib inline
pd.set_option('max_columns', 500)
"""
Explanation: Predicting Religion from Country Flags
Professor Bengfort put together a notebook using the UCI Machine Learning Repository flags dataset to predict the religion of a country based on the attributes of their flags.
What if we had the same data, without the religion column? Can we use unsupervised machine learning to draw some conclusions about the data?
🇦🇫🇦🇽🇦🇱🇩🇿🇦🇸🇦🇩🇦🇴🇦🇮🇦🇶🇦🇬🇦🇷🇦🇲🇦🇼🇦🇺🇦🇹🇦🇿🇧🇸🇧🇭🇧🇩🇧🇧🇧🇾🇧🇪🇧🇿🇧🇯🇧🇲🇧🇹🇧🇴🇧🇶🇧🇦🇧🇼🇧🇷🇮🇴
Here is some information about our dataset:
Data Set Information:
This data file contains details of various nations and their flags. In this file the fields are separated by spaces (not commas). With this data you can try things like predicting the religion of a country from its size and the colours in its flag.
10 attributes are numeric-valued. The remainder are either Boolean- or nominal-valued.
Attribute Information:
name: Name of the country concerned
landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania
zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
area: in thousands of square km
population: in round millions
language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
bars: Number of vertical bars in the flag
stripes: Number of horizontal stripes in the flag
colours: Number of different colours in the flag
red: 0 if red absent, 1 if red present in the flag
green: same for green
blue: same for blue
gold: same for gold (also yellow)
white: same for white
black: same for black
orange: same for orange (also brown)
mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
circles: Number of circles in the flag
crosses: Number of (upright) crosses
saltires: Number of diagonal crosses
quarters: Number of quartered sections
sunstars: Number of sun or star symbols
crescent: 1 if a crescent moon symbol present, else 0
triangle: 1 if any triangles present, 0 otherwise
icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
topleft: colour in the top-left corner (moving right to decide tie-breaks)
botright: colour in the bottom-right corner (moving left to decide tie-breaks)
End of explanation
"""
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data"
def fetch_data(fname='flags.txt'):
"""
    Helper method to retrieve the ML Repository dataset.
"""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:
f.write(response.content)
return outpath
# Fetch the data if required
DATA = fetch_data()
# Load data and do some simple data management
# We are going to define the names from the features and build a dictionary to convert our categorical features.
FEATS = [
"name", "landmass", "zone", "area", "population", "language", "religion", "bars",
"stripes", "colours", "red", "green", "blue", "gold", "white", "black", "orange",
"mainhue", "circles", "crosses", "saltires", "quarters", "sunstars", "crescent",
"triangle", "icon", "animate", "text", "topleft", "botright",
]
COLOR_MAP = {"red": 1, "blue": 2, "green": 3, "white": 4, "gold": 5, "black": 6, "orange": 7, "brown": 8}
# Load Data
df = pd.read_csv(DATA, header=None, names=FEATS)
df.head()
#df['mainhue'] = df['mainhue'].map(COLOR_MAP)
#df['topleft'] = df['topleft'].map(COLOR_MAP)
#df['botright'] = df['botright'].map(COLOR_MAP)
# Now we will use the dictionary to convert categoricals into int values
for k,v in COLOR_MAP.items():
df.loc[df.mainhue == k, 'mainhue'] = v
for k,v in COLOR_MAP.items():
df.loc[df.topleft == k, 'topleft'] = v
for k,v in COLOR_MAP.items():
df.loc[df.botright == k, 'botright'] = v
df.mainhue = df.mainhue.apply(int)
df.topleft = df.topleft.apply(int)
df.botright = df.botright.apply(int)
df.head()
df.describe()
"""
Explanation: Let's grab the data and set it up for analysis.
End of explanation
"""
feature_names = [
"landmass", "zone", "area", "population", "language", "bars",
"stripes", "colours", "red", "green", "blue", "gold", "white", "black", "orange",
"mainhue", "circles", "crosses", "saltires", "quarters", "sunstars", "crescent",
"triangle", "icon", "animate", "text", "topleft", "botright",
]
X = df[feature_names]
y = df.religion
"""
Explanation: Clustering
Clustering is an unsupervised machine learning method. This means we don't have to have a value we are predicting.
You can use clustering when you know this information as well. Scikit-learn provides a number of metrics you can employ with a "known ground truth" (i.e. the values you are predicting). We won't cover them here, but you can use this notebook to add some cells, create your "y" value, and explore the metrics described here.
In the case of the flags data, we do have our "known ground truth". However, for the purpose of this exercise we are going to drop that information out of our data set. We will use it later with Agglomerative Clustering.
End of explanation
"""
# Code adapted from https://www.packtpub.com/books/content/clustering-k-means
K = range(1,10)
meandistortions = []
for k in K:
elbow = KMeans(n_clusters=k, n_jobs=-1, random_state=1)
elbow.fit(X)
meandistortions.append(sum(np.min(euclidean_distances(X, elbow.cluster_centers_), axis=1)) / X.shape[0])
plt.plot(K, meandistortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Average distortion')
plt.title('Selecting k with the Elbow Method')
plt.show()
"""
Explanation: KMeans Clustering
Let's look at KMeans clustering first.
"K-means is a simple unsupervised machine learning algorithm that groups a dataset into a user-specified number (k) of clusters. The algorithm is somewhat naive--it clusters the data into k clusters, even if k is not the right number of clusters to use. Therefore, when using k-means clustering, users need some way to determine whether they are using the right number of clusters."
One way to determine the number of clusters is through the "elbow" method. Using this method, we try a range of values for k and evaluate the "variance explained as a function of the number of clusters".
End of explanation
"""
kmeans = KMeans(n_clusters=5, n_jobs=-1, random_state=1)
kmeans.fit(X)
labels = kmeans.labels_
silhouette_score(X, labels, metric='euclidean')
kmeans = KMeans(n_clusters=4, n_jobs=-1, random_state=1)
kmeans.fit(X)
labels = kmeans.labels_
silhouette_score(X, labels, metric='euclidean')
"""
Explanation: If the line chart looks like an arm, then the "elbow" on the arm is the value of k that is the best. Our goal is to choose a small value of k that still has a low variance. The elbow usually represents where we start to have diminishing returns by increasing k.
However, the elbow method doesn't always work well, especially if the data are not strongly clustered.
Based on our plot, it looks like k=4 and k=5 are worth looking at. How do we measure which might be better? We can use the Silhouette Coefficient. A higher Silhouette Coefficient score relates to a model with better defined clusters.
End of explanation
"""
kmeans = KMeans(n_clusters=8, n_jobs=-1, random_state=1)
kmeans.fit(X)
labels = kmeans.labels_
silhouette_score(X, labels, metric='euclidean')
"""
Explanation: We can see above that k=4 has a better score.
As implemented in scikit-learn, KMeans will use 8 clusters by default. Given our data, it makes sense to try this out since our data actually has 8 potential labels (look at "religion" in the data description above). Based on the plot above, we should expect the silhouette score for k=8 to be less than for k=4.
End of explanation
"""
# Code adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
def silhouette_plot(X, range_n_clusters = range(2, 12, 2)):
for n_clusters in range_n_clusters:
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1
ax1.set_xlim([-.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X.iloc[:, 0], X.iloc[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors)
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1],
marker='o', c="white", alpha=1, s=200)
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50)
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
silhouette_plot(X)
"""
Explanation: We can also visualize what our clusters look like. The function below will plot the clusters and visualize their silhouette scores.
End of explanation
"""
# Code adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_digits_linkage.html
# Visualize the clustering
def plot_clustering(X_red, X, labels, title=None):
x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
X_red = (X_red - x_min) / (x_max - x_min)
plt.figure(figsize=(6, 4))
for i in range(X_red.shape[0]):
plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),
color=plt.cm.nipy_spectral(labels[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
plt.xticks([])
plt.yticks([])
if title is not None:
plt.title(title, size=17)
plt.axis('off')
plt.tight_layout()
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
for linkage in ('ward', 'average', 'complete'):
clustering = AgglomerativeClustering(linkage=linkage, n_clusters=8)
t0 = time()
clustering.fit(X_red)
print("%s : %.2fs" % (linkage, time() - t0))
plot_clustering(X_red, X, clustering.labels_, "%s linkage" % linkage)
plt.show()
"""
Explanation: If we had just used silhouette scores, we would have missed that a lot of our data is actually not clustering very well. The plots above should make us reevaluate whether clustering is the right thing to do on our data.
Hierarchical clustering
Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. See the Wikipedia page for more details.
Agglomerative Clustering
The AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together.
The linkage criteria determines the metric used for the merge strategy:
* Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.
* Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
* Average linkage minimizes the average of the distances between all observations of pairs of clusters.
AgglomerativeClustering can also scale to large numbers of samples when it is used jointly with a connectivity matrix, but it is computationally expensive when no connectivity constraints are added between samples: it considers at each step all the possible merges.
End of explanation
"""
|
NuGrid/NuPyCEE | DOC/Capabilities/Fitting_curves_to_STELLAB_data.ipynb | bsd-3-clause | # Import Python packages
%matplotlib inline
import matplotlib
import matplotlib.pyplot as pl
import numpy as np
# Import the STELLAB module
from NuPyCEE import stellab as st
"""
Explanation: Fitting Curves to STELLAB Data
Prepared by @Marco Pignatari
This notebook shows how to extract observational data from the STELLAB module in order to perform a polynomial fit to recover the global chemical evolution trends.
End of explanation
"""
def func_rsquare(x, y, ord_polyfit):
"function calculating R-square and R-square adjusted, for a given polynomial order of the regression line"
# Calculate the polynomial coefficients
coeffs = np.polyfit(x, y, ord_polyfit)
p = np.poly1d(coeffs)
# Fit values, and mean
calc_val = p(x)
av_val = np.sum(y)/float(len(y))
# Sum of the squared residuals
ssreg = np.sum((y-calc_val)**2)
# Sum of the squared differences from the mean of the dependent variable
sstot = np.sum((y - av_val)**2)
# Calculate R-squared
rsquare = 1. - ssreg / sstot
# Calculate R-squared adjusted, to keep into account order of polynomial and number of points to fit
rsquare_adj = (1. - ssreg / sstot * ((float(len(x))-1.)/(float(len(x))-float(ord_polyfit)-1.)))
return(rsquare,rsquare_adj)
"""
Explanation: Define Fitting Function and R-Squared Calculation
End of explanation
"""
# Create a STELLAB instance
stellar_data = st.stellab()
# Define the X and Y axis (abundance ratios)
x_label = '[Fe/H]'
y_label = '[O/Fe]'
# Select the stellar data references
# You can also type "stellar_data.list_ref_papers()" to see all references
obs = ['stellab_data/milky_way_data/Frebel_2010_Milky_Way_stellab',
'stellab_data/milky_way_data/Venn_et_al_2004_stellab',
'stellab_data/milky_way_data/Akerman_et_al_2004_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2007_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2008_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2010_stellab',
'stellab_data/milky_way_data/Bensby_et_al_2005_stellab',
'stellab_data/milky_way_data/Bihain_et_al_2004_stellab',
'stellab_data/milky_way_data/Bonifacio_et_al_2009_stellab',
'stellab_data/milky_way_data/Caffau_et_al_2005_stellab',
'stellab_data/milky_way_data/Cayrel_et_al_2004_stellab',
'stellab_data/milky_way_data/Fabbian_et_al_2009_stellab',
'stellab_data/milky_way_data/Gratton_et_al_2003_stellab',
'stellab_data/milky_way_data/Israelian_et_al_2004_stellab',
'stellab_data/milky_way_data/Lai_et_al_2008_stellab',
'stellab_data/milky_way_data/Nissen_et_al_2007_stellab',
'stellab_data/milky_way_data/Reddy_et_al_2006_stellab',
'stellab_data/milky_way_data/Reddy_et_al_2003_stellab',
'stellab_data/milky_way_data/Spite_et_al_2005_stellab',
'stellab_data/milky_way_data/Battistini_Bensby_2016_stellab',
'stellab_data/milky_way_data/Nissen_et_al_2014_stellab']
# Extract all selected data from STELLAB using "return_xy=True"
x,y = stellar_data.plot_spectro(xaxis=x_label, yaxis=y_label, obs=obs, return_xy=True)
"""
Explanation: Extract Data from STELLAB
End of explanation
"""
# Define the maximal polynomial order for the fit
pol_order = 5
results = []
results_adj = []
# For all polynomial order..
for i in range(pol_order):
# Calculate and keep in memory the fit
dum,dum1 = func_rsquare(x,y,i+1)
results.append(dum)
results_adj.append(dum1)
"""
Explanation: Fit Curves to Data
End of explanation
"""
# Define the line colours and styles
lines = ['k-','m-','g-','b-','r-','c-','y-','k--','m--','g--','b--','r--','c--']
# Plot the observed data
%matplotlib inline
pl.plot(x,y,'mo',label='data')
# Plot the regression curves for each polynomial order
# Symbols and labels are automatically set
x_fit = np.arange(min(x)-0.01, max(x)+0.01, 0.001)
for i in range(pol_order):
coeffs = np.polyfit(x, y, i+1)
p = np.poly1d(coeffs)
pl.plot(x_fit,p(x_fit),lines[i],label='polyfit ord '+ str(i+1)+', R$^2$ adj='+'%2.3f' %results_adj[i])
# Set labels and digits size
pl.ylabel(y_label, fontsize=15.)
pl.xlabel(x_label, fontsize=15.)
matplotlib.rcParams.update({'font.size': 14.0})
# Legend outside the plot box
pl.legend(numpoints=1, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize=14)
pl.show()
# Create the polynomial order array
order = np.arange(1,pol_order+1,1)
# Plot R-squared and its maximal value as a function of polynomial order
pl.plot(order, results, 'k-o', label='R$^2$')
pl.plot(order[np.argmax(results)], results[np.argmax(results)],'k-o',markersize=15.)
# Plot R-squared adjusted and its maximal value as a function of polynomial order
#pl.plot(order,results_adj,'r-s',label='R$^2$ adj')
#pl.plot(order[np.argmax(results_adj)],results_adj[np.argmax(results_adj)],'r-s',markersize=15.)
# Set the X-axis range and xticks (only integers)
pl.xlim(min(order)-0.5, max(order)+0.5)
pl.xticks(np.arange(min(order), max(order)+1, 1.0))
# Set labels and digits size
pl.ylabel('R$^2$', fontsize=15.)
pl.xlabel('Order polynomial', fontsize=15.)
matplotlib.rcParams.update({'font.size': 14.0})
# Legend
pl.legend(numpoints=1, loc='upper left', fontsize=14)
pl.show()
"""
Explanation: Plot Results
End of explanation
"""
|
KIPAC/StatisticalMethods | tutorials/missing_data.ipynb | gpl-2.0 | exec(open('tbc.py').read()) # define TBC and TBC_above
from io import StringIO
import numpy as np
from pygtc import plotGTC
import emcee
import incredible as cr
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Tutorial: Coping with Missing Information
O-ring failure rates prior to the Challenger shuttle loss
In this tutorial, we will use a real data set where unwise data selection had serious consequences to illustrate how such selection effects could be modeled.
Background
On January 28, 1986, the Space Shuttle Challenger was destroyed in an explosion during launch. The cause was eventually found to be the failure of an O-ring seal that normally prevents hot gas from leaking between two segments of the solid rocket motors during their burn. The ambient atmospheric temperature of just 36 degrees Fahrenheit, significantly colder than any previous launch, was determined to be a significant factor in the failure.
A relevant excerpt from the Report of the Presidential Commission on the Space Shuttle Challenger Accident reads:
Temperature Effects
The record of the fateful series of NASA and Thiokol meetings, telephone conferences, notes, and facsimile transmissions on January 27th, the night before the launch of flight 51L, shows that only limited consideration was given to the past history of O-ring damage in terms of temperature. The managers compared as a function of temperature the flights for which thermal distress of O-rings had been observed-not the frequency of occurrence based on all flights (Figure 6). In such a comparison, there is nothing irregular in the distribution of O-ring "distress" over the spectrum of joint temperatures at launch between 53 degrees Fahrenheit and 75 degrees Fahrenheit. When the entire history of flight experience is considered, including"normal" flights with no erosion or blow-by, the comparison is substantially different (Figure 7).
This comparison of flight history indicates that only three incidents of O-ring thermal distress occurred out of twenty flights with O-ring temperatures at 66 degrees Fahrenheit or above, whereas, all four flights with O-ring temperatures at 63 degrees Fahrenheit or below experienced O-ring thermal distress.
Consideration of the entire launch temperature history indicates that the probability of O-ring distress is increased to almost a certainty if the temperature of the joint is less than 65.
<table>
<tr>
<td><img src="graphics/v1p146.jpg" width=75%></td>
</tr>
</table>
The data above show the number of incidents of O-ring damage found in previous missions as a function of the temperature at launch; these have been transcribed below.
Setup
Let's import some things.
End of explanation
"""
oring_data_string = \
"""# temperature incidents
53 3
56 1
57 1
63 1
66 0
67 0
67 0
67 0
68 0
69 0
70 1
70 1
70 0
70 0
72 0
73 0
75 2
75 0
76 0
76 0
78 0
79 0
80 0
81 0
"""
oring_data = np.loadtxt(StringIO(oring_data_string), skiprows=1)
oring_temps = oring_data[:,0]
oring_incidents = oring_data[:,1]
"""
Explanation: The data in the figure above are transcribed and read into an array here. We put the launch temperatures in oring_temps and the corresponding number of incidents in oring_incidents.
End of explanation
"""
plt.rcParams['figure.figsize'] = (6, 4)
plt.plot(oring_temps, oring_incidents, 'bo');
plt.xlabel('temperature (F)');
plt.ylabel('Number of incidents');
"""
Explanation: Here's a quick plot to show that we did that right (cf above).
End of explanation
"""
failure_temps = oring_temps[np.where(oring_incidents > 0)[0]]
Nfailure = len(failure_temps)
success_temps = oring_temps[np.where(oring_incidents == 0)[0]]
Nsuccess = len(success_temps)
print('temperatures corresponding to failures:', failure_temps)
print('temperatures corresponding to successes:', success_temps)
"""
Explanation: For this notebook, we will simplify the data for each launch from integer (how many incidents of O-ring damage) to boolean (was there any damage, or not). This cell stores the temperatures corresponding to "failure" (any incidents) and "success" (no incidents).
End of explanation
"""
def P_success(T, T0, beta, Pcold, Phot):
"""
Evaluate Psuccess as given above, as a function of T, for parameters T0, beta, Pcold, Phot.
"""
TBC()
TBC_above()
"""
Explanation: 1. Defining a model
Before worrying about missing data, let's define a model that we might want to fit to these data. We're interested in whether the probability of having zero O-ring incidents (or non-zero incidents, conversely) is a function of temperature. One possible parametrization that allows this is the logistic function, which squeezes the real line onto the range (0,1).
For reasons that may be clear later, I suggest defining the model in terms of the probability of success (zero incidents)
$P_\mathrm{success}(T|T_0,\beta,P_\mathrm{cold},P_\mathrm{hot}) = P_\mathrm{cold} + \frac{P_\mathrm{hot} - P_\mathrm{cold}}{1 + e^{-\beta(T-T_0)}}$,
with parameters $T_0$ and $\beta$ respectively determining the center and width of the logistic function, and $P_\mathrm{cold}$ and $P_\mathrm{hot}$ determine the probabilities of success at very low and high temperatures (which need not be 0 and 1).
As we'll see in a moment, a model like this provides a linear-ish transition between two extreme values, without imposing the strong prior that $P_\mathrm{success}$ must drop to zero at some point, for example.
1a. Implement this function and have a look
End of explanation
"""
plt.rcParams['figure.figsize'] = (6, 4)
T_axis = np.arange(32., 100.)
plt.plot(T_axis, P_success(T_axis, 70.0, 0.3, 0.0, 1.0));
plt.plot(T_axis, P_success(T_axis, 65.0, 0.1, 0.4, 0.9));
plt.plot(T_axis, P_success(T_axis, 45.0, 1.0, 0.1, 0.5));
plt.plot(T_axis, P_success(T_axis, 80.0, 0.5, 0.9, 0.2));
plt.xlabel('temperature (F)');
plt.ylabel('probability of a clean launch');
"""
Explanation: Plot the function for a few different parameter values. If you've never worked with the logistic function (or a similar sigmoid function) before, this will give you an idea of how flexible it is.
End of explanation
"""
def ln_prior(T0, beta, Pcold, Phot):
"""
Return the log-prior density for parameters T0, beta, Pcold, Phot
"""
TBC()
TBC_above()
"""
Explanation: 1b. PGM and priors
Given the definition of the data and model above, draw the PGM for this problem, and write down an expression for the likelihood (assuming we have the complete data set).
TBC: your answer here
Choosing priors is a little tricky because we're interested in the model's predictions at $T=36$ degrees F, which is an extrapolation even for the complete data set.
We'd like our model to be consistent with no trend a priori - that way we can see relatively straightforwardly whether the data require there to be a trend. A pleasingly symmetric way to allow this is to put identical, independent priors on $P_\mathrm{cold}$ and $P_\mathrm{hot}$, in particular including the possibility that $P_\mathrm{cold} > P_\mathrm{hot}$ even though that isn't what we're looking for. Thus, a solution with $P_\mathrm{cold}=P_\mathrm{hot}$, i.e. no trend, is perfectly allowed.
Our temperature data are given in integer degrees, so it doesn't make sense to allow values of $\beta$ too much greater than 1, since the data would not resolve such a sudden change (which would increasingly make $P_\mathrm{success}$ resemble a step function). By definition, $\beta>0$ (it's an inverse scale parameter).
In principle, we might allow $T_0$ to take any value. But, arguably, the most sensible thing we can do with such limited information is test whether there is evidence for a trend in the probability of O-ring failure within the range of the available data (or, a little more casually, the range of the figure from the report, above). Given the flexibility already provided by the choices above, there's little obvious benefit to allowing $T_0$ to vary more than this.
In summary, my suggestion is the following uniform priors:
* $0<P_\mathrm{cold}<1$
* $0<P_\mathrm{hot}<1$
* $0<\beta<3$
* $45<T_0<80$
You're welcome to mess around with other priors if you disagree. Either way, implement a log-prior function below.
End of explanation
"""
class Model:
def __init__(self, log_prior, log_likelihood):
self.log_prior = log_prior
self.log_likelihood = log_likelihood
self.param_names = ['T0', 'beta', 'Pcold', 'Phot']
self.param_labels = [r'$T_0$', r'$\beta$', r'$P_\mathrm{cold}$', r'$P_\mathrm{hot}$']
self.sampler = None
self.samples = None
def log_posterior(self, pvec=None, **params):
'''
Our usual log-posterior function, able to take a vector argument to satisfy emcee
'''
if pvec is not None:
pdict = {k:pvec[i] for i,k in enumerate(self.param_names)}
return self.log_posterior(**pdict)
lnp = self.log_prior(**params)
if lnp != -np.inf:
lnp += self.log_likelihood(**params)
return lnp
def sample_posterior(self, nwalkers=8, nsteps=10000, guess=[65.0, 0.1, 0.25, 0.75], threads=1):
# use emcee to sample the posterior
npars = len(self.param_names)
self.sampler = emcee.EnsembleSampler(nwalkers, npars, self.log_posterior, threads=threads)
start = np.array([np.array(guess)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
self.sampler.run_mcmc(start, nsteps)
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(self.sampler.chain[:min(8,nwalkers),:,:], ax, labels=self.param_labels);
def check_chains(self, burn=500, maxlag=500):
'''
Ignoring `burn` samples from the front of each chain, compute convergence criteria and
effective number of samples.
'''
nwalk, nsteps, npars = self.sampler.chain.shape
if burn < 1 or burn >= nsteps:
return
tmp_samples = [self.sampler.chain[i,burn:,:] for i in range(nwalk)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
def remove_burnin(self, burn=500):
'''
Remove `burn` samples from the front of each chain, and concatenate.
Store the result in self.samples.
'''
nwalk, nsteps, npars = self.sampler.chain.shape
if burn < 1 or burn >= nsteps:
return
self.samples = self.sampler.chain[:,burn:,:].reshape(nwalk*(nsteps-burn), npars)
def posterior_prediction_Pfailure(self, temperatures=np.arange(30., 85.), probs=[0.5, 0.16, 0.84]):
'''
For the given temperatures, compute and store quantiles of the posterior predictive distribution for O-ring failure.
By default, return the median and a 68% credible interval (defined via quantiles).
'''
Pfail = np.array([1.0-P_success(T, self.samples[:,0], self.samples[:,1], self.samples[:,2], self.samples[:,3]) for T in temperatures])
res = {'T':temperatures, 'p':[str(p) for p in probs]}
for p in probs:
res[str(p)] = np.quantile(Pfail, p, axis=1)
self.post_failure = res
def plot_Pfailure(self, color, label):
'''
Plot summaries of the posterior predictive distribution for O-ring failure.
Show the center as a solid line and credible interval(s) bounded by dashed lines.
'''
plt.plot(self.post_failure['T'], self.post_failure[self.post_failure['p'][0]], color+'-', label=label)
n = len(self.post_failure['p'])
if n > 1:
for j in range(1,n):
plt.plot(self.post_failure['T'], self.post_failure[self.post_failure['p'][j]], color+'--')
"""
Explanation: 1c. Model fitting code
Since the point of this tutorial is model design rather than carrying out a fit, most of the code below is given. Naturally, you should ensure that you understand what the code is doing, even if there's nothing to add.
Here we follow a similar, though simpler, approach to the object oriented code used in the model evaluation/selection notebooks, since the models we'll compare all have the same set of free parameters. The Model object will take log-prior and log-likelihood functions as inputs in its constructor (instead of deriving new classes corresponding to different likelihoods), and will deal with the computational aspects of fitting the parameters. It will also provide a posterior prediction for the thing we actually care about, the failure probability at a given temperature. To do this, we need to marginalize over the model parameters; that is, we compute the posterior-weighted average of $1-P_\mathrm{success}$, at some temperature of interest, over the parameter space.
End of explanation
"""
def ln_like_complete(T0, beta, Pcold, Phot):
"""
Return the log-likelihood corresponding to a complete data set
"""
TBC()
TBC_above()
"""
Explanation: 2. Solution for complete data
First, let's see what the solution looks like when there is no missing data. Complete the likelihood function appropriate for a complete data set below.
End of explanation
"""
complete_model = Model(ln_prior, ln_like_complete)
"""
Explanation: Now we put the Model code to work. The default options below should work well enough, but do keep an eye on the usual basic diagnostics provided below, and make any necessary changes. First we instantiate the model:
End of explanation
"""
%%time
complete_model.sample_posterior(nwalkers=8, nsteps=10000, guess=[65.0, 0.1, 0.25, 0.75])
"""
Explanation: ... and run the fit. Note that the parameters are not likely to be individually well constrained by the data, compared with the prior. We don't necessarily care about this - the important question is what the posterior predictions for the probability of failure end up looking like.
End of explanation
"""
complete_model.check_chains(burn=1000, maxlag=2000)
"""
Explanation: Here are the usual diagnostics:
End of explanation
"""
complete_model.remove_burnin(burn=1000)
plotGTC(complete_model.samples, paramNames=complete_model.param_labels,
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
"""
Explanation: Finally, remove burn-in and plot the marginal posteriors:
End of explanation
"""
complete_model.posterior_prediction_Pfailure()
plt.rcParams['figure.figsize'] = (6., 4.)
complete_model.plot_Pfailure('C0', 'complete')
plt.xlabel(r'$T$');
plt.ylabel(r'$P_\mathrm{failure}(T)$');
plt.legend();
"""
Explanation: Assuming that went well, let's visualize the predicted failure probability as a function of temperature.
End of explanation
"""
j = np.where(complete_model.post_failure['T']==36.)[0]
np.concatenate([complete_model.post_failure[p][j] for p in complete_model.post_failure['p']])
"""
Explanation: Does this curve make sense compared with inspection of the data? Any surprises?
TBC: observations
Checkpoint: Let's look at the probability of failure at 36 F. This will print the posterior prediction median and CI lower and upper bounds. For comparison, I find approximately $0.82_{-0.16}^{+0.13}$.
End of explanation
"""
TBC()
# success_Tmin =
# success_Tmax =
"""
Explanation: 3. Censored (but somewhat informed) success temperatures
Imagine we are in a slightly better situation than that shown in the top panel of Figure 6 from the report. Namely, we are given
1. the temperatures of launches where there were O-ring failures (failure_temps and Nfailure above),
2. the number of launches with no failures (Nsuccess),
3. a range of temperatures containing the successful launches, but not the precise temperatures of each.
For (3), I suggest using the actual min and max of success_temps, and implementing the prior on unknown temperatures as uniform in this range. In the next section, we'll look at the results with a less informed prior on the success temperatures.
End of explanation
"""
def ln_like_censored(T0, beta, Pcold, Phot):
"""
Return the log-likelihood for the case of censored success temperatures.
This prototype assumes the success temperatures will be marginalized over within this function; otherwise
they would need to be additional parameters to be sampled.
"""
TBC()
TBC_above()
"""
Explanation: 3a. Censored model definition
Work out how to adjust your PGM and expression for the likelihood to reflect our ignorance of the temperatures of successful launches.
TBC: adjusted PGM and expressions
Implement the (log)-likelihood for the censored model. Here are some hints/suggestions if you want them:
1. This doesn't require as dramatic a change to the model as truncation would, more a re-definition of the sampling distribution for the censored points.
2. A model component that was previously fixed by observation or effectively determined precisely is now indeterminate.
3. We can marginalize over our newfound ignorance analytically, taking advantage of the fact that the integral of the logistic function is analytic (see Wikipedia).
End of explanation
"""
censored_model = Model(ln_prior, ln_like_censored)
%%time
censored_model.sample_posterior(nwalkers=8, nsteps=10000, guess=[65.0, 0.1, 0.25, 0.75])
censored_model.check_chains(burn=1000, maxlag=2000)
censored_model.remove_burnin(burn=1000)
plotGTC([complete_model.samples, censored_model.samples], paramNames=complete_model.param_labels,
chainLabels=['complete', 'censored'],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
"""
Explanation: 3b. Censored model fit
We can now carry out the usual steps:
End of explanation
"""
censored_model.posterior_prediction_Pfailure()
plt.rcParams['figure.figsize'] = (6., 4.)
complete_model.plot_Pfailure('C0', 'complete')
censored_model.plot_Pfailure('C1', 'censored')
plt.xlabel(r'$T$');
plt.ylabel(r'$P_\mathrm{failure}(T)$');
plt.legend();
"""
Explanation: Now let's compare the posterior predictions to the previous result.
End of explanation
"""
j = np.where(censored_model.post_failure['T']==75.)[0]
np.concatenate([censored_model.post_failure[p][j] for p in censored_model.post_failure['p']])
"""
Explanation: Does your censored model manage to make predictions consistent with the model fitted to the complete data? If there are clear differences, do they make sense in light of what information has been hidden?
TBC: your comments here
Checkpoint: Looking at a balmy temperature of 75 F this time, I find a failure probability of approximately $0.18_{-0.08}^{+0.10}$.
End of explanation
"""
success_Tmin = 45.0
success_Tmax = 80.0
verycensored_model = Model(ln_prior, ln_like_censored)
%%time
verycensored_model.sample_posterior(nwalkers=8, nsteps=10000, guess=[65.0, 0.1, 0.25, 0.75])
verycensored_model.check_chains(burn=1000, maxlag=2000)
verycensored_model.remove_burnin(burn=1000)
plotGTC([complete_model.samples, censored_model.samples, verycensored_model.samples], paramNames=complete_model.param_labels,
chainLabels=['complete', 'censored', 'very censored'],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
"""
Explanation: 4. Censored (less informed) success temperatures
As a point of comparison, let's fit a model in which the temperature range for the censored (success) data is much less well constrained. This is arguably more analogous to what we might do by eye if presented with the first figure in this notebook, knowing that successful launches were absent from the figure, but without the context that those launches had all taken place in warm weather.
In particular, let's take the prior on the success temperatures to be uniform over the range shown in the figure. We followed poor practice by defining success_Tmin and success_Tmax at global scope earlier, and then using them from global scope in ln_like_censored, but it does mean we can just redefine them below and then re-use the likelihood function. If your implementation differs (i.e., was more sensible), you might need to change some more details.
End of explanation
"""
verycensored_model.posterior_prediction_Pfailure()
plt.rcParams['figure.figsize'] = (7., 5.)
complete_model.plot_Pfailure('C0', 'complete')
censored_model.plot_Pfailure('C1', 'censored')
verycensored_model.plot_Pfailure('C2', 'very censored')
plt.axvline(36.0, color='k', linestyle='dotted')
plt.xlabel(r'$T$');
plt.ylabel(r'$P_\mathrm{failure}(T)$');
plt.legend();
"""
Explanation: This seems like it will lead to somewhat different posterior predictions. Let's check.
End of explanation
"""
j = np.where(censored_model.post_failure['T']==60.)[0]
np.concatenate([verycensored_model.post_failure[p][j] for p in verycensored_model.post_failure['p']])
"""
Explanation: The vertical, dotted line added in this plot marks the ambient temperature of 36 F at the Challenger launch.
Does this more censored model manage to make predictions consistent with the model fitted to the complete data? If there are clear differences, do they make sense in light of what information has been hidden?
TBC: your comments
Checkpoint: looking now at 60 F, I find a failure probability of approximately $0.32_{-0.10}^{+0.11}$.
End of explanation
"""
|
syednasar/datascience | deeplearning/sentiment-analysis/sentiment_network/.ipynb_checkpoints/Sentiment Classification - Mini Project 2-checkpoint.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
|
wtgme/labeldoc2vec | docs/notebooks/distance_metrics.ipynb | lgpl-2.1 | from gensim.corpora import Dictionary
from gensim.models import ldamodel
from gensim.matutils import kullback_leibler, jaccard, hellinger, sparse2full
import numpy
# you can use any corpus; this one is just for illustration
texts = [['bank','river','shore','water'],
['river','water','flow','fast','tree'],
['bank','water','fall','flow'],
['bank','bank','water','rain','river'],
['river','water','mud','tree'],
['money','transaction','bank','finance'],
['bank','borrow','money'],
['bank','finance'],
['finance','money','sell','bank'],
['borrow','sell'],
['bank','loan','sell']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
numpy.random.seed(1) # setting random seed to get the same results each time.
model = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2)
model.show_topics()
"""
Explanation: New Distance Metrics for Probability Distribution and Bag of Words
A small tutorial to illustrate the new distance functions.
We would need this mostly when comparing how similar two probability distributions are, and in the case of gensim, usually for LSI or LDA topic distributions after we have an LDA model.
Gensim already has functionalities for this, in the sense of getting most similar documents - this, this and this are such examples of documentation and tutorials.
What this tutorial shows is a building block of these larger methods, which are a small suite of distance metrics.
We'll start by setting up a small corpus and showing off the methods.
End of explanation
"""
doc_water = ['river', 'water', 'shore']
doc_finance = ['finance', 'money', 'sell']
doc_bank = ['finance', 'bank', 'tree', 'water']
# now let's make these into a bag of words format
bow_water = model.id2word.doc2bow(doc_water)
bow_finance = model.id2word.doc2bow(doc_finance)
bow_bank = model.id2word.doc2bow(doc_bank)
# we can now get the LDA topic distributions for these
lda_bow_water = model[bow_water]
lda_bow_finance = model[bow_finance]
lda_bow_bank = model[bow_bank]
"""
Explanation: Let's take a few sample documents and get them ready to test Similarity. Let's call the 1st topic the water topic and the second topic the finance topic.
Note: these all behave as distance measures, with values closer to 0 indicating a smaller 'distance' and therefore greater similarity. Hellinger and Jaccard return values in the range [0,1]; KL divergence is unbounded above, but the same intuition holds.
End of explanation
"""
hellinger(lda_bow_water, lda_bow_finance)
hellinger(lda_bow_finance, lda_bow_bank)
"""
Explanation: Hellinger and Kullback–Leibler
We're now ready to apply our distance metrics.
Let's start with the popular Hellinger distance.
The Hellinger distance metric gives an output in the range [0,1] for two probability distributions, with values closer to 0 meaning they are more similar.
End of explanation
"""
kullback_leibler(lda_bow_water, lda_bow_bank)
kullback_leibler(lda_bow_finance, lda_bow_bank)
"""
Explanation: Makes sense, right? In the first example, Document 1 and Document 2 are hardly similar, so we get a value of roughly 0.5.
In the second case, the documents are a lot more similar, semantically. Trained with the model, they give a much smaller distance value.
Let's run through similar examples with Kullback–Leibler.
End of explanation
"""
# As you can see, the values are not equal. We'll get more into the details of this later on in the notebook.
kullback_leibler(lda_bow_bank, lda_bow_finance)
"""
Explanation: NOTE!
KL is not a Distance Metric in the mathematical sense, and hence is not symmetrical.
This means that kullback_leibler(lda_bow_finance, lda_bow_bank) is not equal to kullback_leibler(lda_bow_bank, lda_bow_finance).
End of explanation
"""
# just to confirm our suspicion that the bank bow is more to do with finance:
model.get_document_topics(bow_bank)
"""
Explanation: In our previous examples we saw that there were lower distance values between bank and finance than for bank and water, even if it wasn't by a huge margin. What does this mean?
The bank document is a combination of both water- and finance-related terms - but as bank in this context is likely to belong to the finance topic, the distance is smaller between the finance and bank bows.
End of explanation
"""
jaccard(bow_water, bow_bank)
jaccard(doc_water, doc_bank)
jaccard(['word'], ['word'])
"""
Explanation: It's evident that while it isn't too skewed, it is more towards the finance topic.
Distance metrics (also referred to as similarity metrics), as suggested in the examples above, are mainly for probability distributions, but the methods can accept a bunch of formats for input. You can do some further reading on Kullback Leibler and Hellinger to figure out what suits your needs.
Jaccard
Let us now look at the Jaccard Distance metric for similarity between bags of words (i.e, documents)
End of explanation
"""
topic_water, topic_finance = model.show_topics()
# some pre processing to get the topics in a format acceptable to our distance metrics
def make_topics_bow(topic):
# takes the string returned by model.show_topics()
# split on strings to get topics and the probabilities
topic = topic.split('+')
# list to store topic bows
topic_bow = []
for word in topic:
# split probability and word
prob, word = word.split('*')
# get rid of spaces
word = word.replace(" ","")
# convert to word_type
word = model.id2word.doc2bow([word])[0][0]
topic_bow.append((word, float(prob)))
return topic_bow
finance_distribution = make_topics_bow(topic_finance[1])
water_distribution = make_topics_bow(topic_water[1])
# the finance topic in bag of words format looks like this:
finance_distribution
"""
Explanation: The three examples above feature 2 different input methods.
In the first case, we present to jaccard document vectors already in bag of words format. The distance can be defined as 1 minus the size of the intersection upon the size of the union of the vectors.
We can see (on manual inspection as well), that the distance is likely to be high - and it is.
The last two examples illustrate the ability for jaccard to accept even lists (i.e, documents) as inputs.
In the last case, because they are the same vectors, the value returned is 0 - this means the distance is 0 and they are very similar.
Distance Metrics for Topic Distributions
While there are already standard methods to identify similarity of documents, our distance metrics have one more interesting use case: topic distributions.
Let's say we want to find out how similar our two topics are, water and finance.
End of explanation
"""
hellinger(water_distribution, finance_distribution)
"""
Explanation: Now that we've got our topics in a format more acceptable by our functions, let's use a Distance metric to see how similar the word distributions in the topics are.
End of explanation
"""
# 16 here is the number of features the probability distribution draws from
kullback_leibler(water_distribution, finance_distribution, 16)
"""
Explanation: Our value of roughly 0.36 means that the topics are not TOO distant with respect to their word distributions.
This makes sense again, because of overlapping words like bank and a small dictionary.
Some things to take care of
In our previous example we didn't use Kullback–Leibler to test for similarity for a reason - KL is not a distance 'metric' in the technical sense (you can see what a metric is here). Its mathematical nature also means we must be a little careful before using it: since it involves the log function, a zero can mess things up. For example:
End of explanation
"""
# return ALL the words in the dictionary for the topic-word distribution.
topic_water, topic_finance = model.show_topics(num_words=len(model.id2word))
# do our bag of words transformation again
finance_distribution = make_topics_bow(topic_finance[1])
water_distribution = make_topics_bow(topic_water[1])
# and voila!
kullback_leibler(water_distribution, finance_distribution)
"""
Explanation: That wasn't very helpful, right? This just means that we have to be a bit careful about our inputs. Our old example didn't work out because there were missing values for some words (because show_topics() only returned the top 10 words per topic).
This can be remedied, though.
End of explanation
"""
# normal Hellinger
hellinger(water_distribution, finance_distribution)
# we swap finance and water distributions and get the same value. It is indeed symmetric!
hellinger(finance_distribution, water_distribution)
# if we pass the same values, it is zero.
hellinger(water_distribution, water_distribution)
# for triangle inequality let's use LDA document distributions
hellinger(lda_bow_finance, lda_bow_bank)
# Triangle inequality works too!
hellinger(lda_bow_finance, lda_bow_water) + hellinger(lda_bow_water, lda_bow_bank)
"""
Explanation: You may notice that the distance for this is quite small, indicating a high similarity. This may be a bit off because of the small size of the corpus, where all topics are likely to contain a decent overlap of word probabilities. You will likely get a better value for a bigger corpus.
So, just remember, if you intend to use KL as a metric to measure similarity or distance between two distributions, avoid zeros by returning the ENTIRE distribution. Since it's unlikely any probability distribution will ever have absolute zeros for any feature/word, returning all the values like we did will make you good to go.
So - what exactly are Distance Metrics?
Having seen the practical usages of these measures (i.e, to find similarity), let's learn a little about what exactly Distance Measures and Metrics are.
I mentioned in the previous section that KL was not a distance metric. There are 4 conditions for a distance measure to be a metric:
d(x,y) >= 0
d(x,y) = 0 <=> x = y
d(x,y) = d(y,x)
d(x,z) <= d(x,y) + d(y,z)
That is: it must be non-negative; if x and y are the same, distance must be zero; it must be symmetric; and it must obey the triangle inequality law.
Simple enough, right?
Let's test these out for our measures.
End of explanation
"""
kullback_leibler(finance_distribution, water_distribution)
kullback_leibler(water_distribution, finance_distribution)
"""
Explanation: So Hellinger is indeed a metric. Let's check out KL.
End of explanation
"""
|
GoogleCloudPlatform/tensorflow-without-a-phd | tensorflow-rnn-tutorial/old-school-tensorflow/tutorial/00_RNN_predictions_estimator_solution.ipynb | apache-2.0 | import numpy as np
import tensorflow as tf
from tensorflow.python.platform import tf_logging as logging
logging.set_verbosity(logging.INFO)
logging.log(logging.INFO, "Tensorflow version " + tf.__version__)
import utils_datagen
from matplotlib import pyplot as plt
import utils_display
"""
Explanation: An RNN for short-term predictions
This model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data.
<div class="alert alert-block alert-info">
This version packages the model into an Estimator interface.
It can be executed on Google cloud ML Engine using the code in this bash notebook: [../run-on-cloud-ml-engine.ipynb](../run-on-cloud-ml-engine.ipynb)
</div>
End of explanation
"""
DATA_SEQ_LEN = 1024*128
data = np.concatenate([utils_datagen.create_time_series(waveform, DATA_SEQ_LEN) for waveform in utils_datagen.Waveforms])
utils_display.picture_this_1(data, DATA_SEQ_LEN)
"""
Explanation: Generate fake dataset
End of explanation
"""
RNN_CELLSIZE = 32 # size of the RNN cells
SEQLEN = 16 # unrolled sequence length
BATCHSIZE = 32 # mini-batch size
"""
Explanation: Hyperparameters
End of explanation
"""
utils_display.picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences
"""
Explanation: Visualize training sequences
This is what the neural network will see during training.
End of explanation
"""
# three simplistic predictive models: can you beat them?
def simplistic_models(X):
# "random" model
Yrnd = tf.random_uniform([tf.shape(X)[0]], -2.0, 2.0) # tf.shape(X)[0] is the batch size
# "same as last" model
Ysal = X[:,-1]
# "trend from last two" model
Ytfl = X[:,-1] + (X[:,-1] - X[:,-2])
return Yrnd, Ysal, Ytfl
def bad_model(X):
Yr = X * tf.Variable(tf.ones([]), name="dummy") # shape [BATCHSIZE, SEQLEN]
Yout = Yr[:,-1:SEQLEN] # Last item in sequence. Yout [BATCHSIZE, 1]
return Yout
# linear model (RMSE: 0.36, with shuffling: 0.17)
def linear_model(X):
Yout = tf.layers.dense(X, 1) # output shape [BATCHSIZE, 1]
return Yout
# 2-layer dense model (RMSE: 0.38, with shuffling: 0.15-0.18)
def DNN_model(X):
Y = tf.layers.dense(X, SEQLEN//2, activation=tf.nn.relu)
Yout = tf.layers.dense(Y, 1, activation=None) # output shape [BATCHSIZE, 1]
return Yout
# convolutional (RMSE: 0.31, with shuffling: 0.16)
def CNN_model(X):
X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for conv model
Y = tf.layers.conv1d(X, filters=8, kernel_size=4, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=16, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=1, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.conv1d(Y, filters=8, kernel_size=3, activation=tf.nn.relu, padding="same") # [BATCHSIZE, SEQLEN//2, 8]
Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//4, 8]
# mis-using a conv layer as linear regression :-)
Yout = tf.layers.conv1d(Y, filters=1, kernel_size=SEQLEN//4, activation=None, padding="valid") # output shape [BATCHSIZE, 1, 1]
Yout = tf.squeeze(Yout, axis=-1) # output shape [BATCHSIZE, 1]
return Yout
# RNN model (RMSE: 0.38, with shuffling 0.14, the same with loss on last 8)
def RNN_model(X, n=1):
# 2-layer RNN
X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for RNN model
cell1 = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
cell2 = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2], state_is_tuple=False)
Yn, H = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32) # Yn [BATCHSIZE, SEQLEN, RNN_CELLSIZE]
# regression head
batchsize = tf.shape(X)[0]
Yn = tf.reshape(Yn, [batchsize*SEQLEN, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 1) # Yr [BATCHSIZE*SEQLEN, 1]
Yr = tf.reshape(Yr, [batchsize, SEQLEN, 1]) # Yr [BATCHSIZE, SEQLEN, 1]
    # In this RNN model, you can compute the loss on the last predicted item or the last n predicted items.
# Last n is slightly better.
Yout = Yr[:,-n:SEQLEN,:] # last item(s) in sequence: output shape [BATCHSIZE, n, 1]
Yout = tf.squeeze(Yout, axis=-1)
return Yout
def RNN_model_N(X): return RNN_model(X, n=SEQLEN//2)
def configurable_model_fn(features, labels, mode, model=CNN_model):
X = features # shape [BATCHSIZE, SEQLEN]
Y = model(X)
Yout = Y[:,-1]
loss = train_op = eval_metrics = None
if mode != tf.estimator.ModeKeys.PREDICT:
last_label = labels[:, -1] # last item in sequence: the target value to predict
        last_labels = labels[:, -tf.shape(Y)[1]:SEQLEN] # last n items in sequence (as many as in Y), useful for RNN_model(X, n>1)
loss = tf.losses.mean_squared_error(Y, last_labels) # loss computed on last label(s)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss)
Yrnd, Ysal, Ytfl = simplistic_models(X)
eval_metrics = {"RMSE": tf.metrics.root_mean_squared_error(Y, last_labels),
                        # compare against three simplistic predictive models: can you beat them?
"RMSErnd": tf.metrics.root_mean_squared_error(Yrnd, last_label),
"RMSEsal": tf.metrics.root_mean_squared_error(Ysal, last_label),
"RMSEtfl": tf.metrics.root_mean_squared_error(Ytfl, last_label)}
return tf.estimator.EstimatorSpec(
mode = mode,
predictions = {"Yout":Yout},
loss = loss,
train_op = train_op,
eval_metric_ops = eval_metrics
)
def model_fn(features, labels, mode): return configurable_model_fn(features, labels, mode, model=RNN_model_N)
"""
Explanation: The model definition
When executed, this function instantiates the Tensorflow graph for our model.
End of explanation
"""
# training to predict the same sequence shifted by one (next value)
labeldata = np.roll(data, -1)
# slice data into sequences
traindata = np.reshape(data, [-1, SEQLEN])
labeldata = np.reshape(labeldata, [-1, SEQLEN])
# also make an evaluation dataset by randomly subsampling our fake data
EVAL_SEQUENCES = DATA_SEQ_LEN*4//SEQLEN//4
joined_data = np.stack([traindata, labeldata], axis=1) # new shape is [N_sequences, 2(train/eval), SEQLEN]
joined_evaldata = joined_data[np.random.choice(joined_data.shape[0], EVAL_SEQUENCES, replace=False)]
evaldata = joined_evaldata[:,0,:]
evallabels = joined_evaldata[:,1,:]
def train_dataset():
# Dataset API for batching, shuffling, repeating
dataset = tf.data.Dataset.from_tensor_slices((traindata, labeldata))
dataset = dataset.repeat() # indefinitely
dataset = dataset.shuffle(DATA_SEQ_LEN*4//SEQLEN) # important ! Number of sequences in shuffle buffer: all of them
dataset = dataset.batch(BATCHSIZE)
samples, labels = dataset.make_one_shot_iterator().get_next()
return samples, labels
def eval_dataset():
# Dataset API for batching
evaldataset = tf.data.Dataset.from_tensor_slices((evaldata, evallabels))
evaldataset = evaldataset.repeat(1)
evaldataset = evaldataset.batch(EVAL_SEQUENCES) # just one batch with everything
samples, labels = evaldataset.make_one_shot_iterator().get_next()
return samples, labels
NB_EPOCHS = 5
nbsteps = DATA_SEQ_LEN*4//SEQLEN//BATCHSIZE * NB_EPOCHS
train_spec = tf.estimator.TrainSpec(input_fn=train_dataset, max_steps=nbsteps)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_dataset, throttle_secs=20, start_delay_secs=1, steps=None) # eval until dataset exhausted (1 batch)
training_config = tf.estimator.RunConfig(model_dir="./outputdir")
estimator=tf.estimator.Estimator(model_fn=model_fn, config=training_config)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
evals = estimator.evaluate(eval_dataset)
results = estimator.predict(eval_dataset)
Yout_ = [result["Yout"] for result in results]
utils_display.picture_this_3(Yout_, evaldata, evallabels, SEQLEN) # execute multiple times to see different sample sequences
"""
Explanation: prepare training dataset
End of explanation
"""
|
yingchi/fastai-notes | deeplearning1/nbs/statefarm-yc.ipynb | apache-2.0 | from theano.sandbox import cuda
cuda.use('gpu0')
%matplotlib inline
from __future__ import print_function, division
from importlib import reload
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
LESSON_HOME_DIR='/home/ubuntu/fastai-notes/deeplearning1/nbs/'
path = LESSON_HOME_DIR+'data/state/'
batch_size=64
"""
Explanation: Statefarm - Whole Dataset
End of explanation
"""
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
"""
Explanation: Setup batches
End of explanation
"""
trn = get_data(path+'train')
val = get_data(path+'valid')
save_array(path+'results/val.dat', val)
save_array(path+'results/trn.dat', trn)
??get_data()
"""
def get_data(path, target_size=(224,224)):
batches = get_batches(path, shuffle=False, batch_size=1, class_mode=None, target_size=target_size)
return np.concatenate([batches.next() for i in range(batches.nb_sample)])
"""
??save_array()
"""
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
"""
val = load_array(path+'results/val.dat')
trn = load_array(path+'results/trn.dat')
"""
Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
End of explanation
"""
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
# model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
# nb_val_samples=val_batches.nb_sample)
model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model = conv1(batches)
model.save_weights(path+'models/model1.h1')
"""
Explanation: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
End of explanation
"""
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
"""
Explanation: Data augmentation
End of explanation
"""
vgg = Vgg16()
model = vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# batches shuffle must be set to False when pre-computing features
batches = get_batches(path+'train', batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
test_batches = get_batches(path+'test', batch_size=batch_size, shuffle=False)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
# save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
# save_array(path+'results/conv_feat.dat', conv_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_val_feat.shape
"""
Explanation: Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
End of explanation
"""
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
val_preds = model.predict(val, batch_size = batch_size)
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.96)).eval()
# test_batches = get_batches(path+'test', batch_size=batch_size, shuffle=False)
test = get_data(path+'test')
preds = model.predict(test, batch_size=batch_size*2)
subm = do_clip(preds,0.96)
subm_name = path+'results/subm.gz'
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
"""
Explanation: Batchnorm dense layers on pretrained conv layers
Submit
We'll find a good clipping amount using the validation set, prior to submitting.
End of explanation
"""
|
mdeff/ntds_2016 | toolkit/02_ex_exploitation.ipynb | mit | import pandas as pd
import numpy as np
from IPython.display import display
import os.path
folder = os.path.join('..', 'data', 'social_media')
# Your code here.
"""
Explanation: A Python Tour of Data Science: Data Acquisition & Exploration
Michaël Defferrard, PhD student, EPFL LTS2
Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
This notebook is the second part of the exercise. Given the data we collected from Facebook and Twitter in the last exercise, we will construct an ML model and evaluate how good it is at predicting the number of likes of a post / tweet given the content.
1 Data importation
Use pandas to import the facebook.sqlite and twitter.sqlite databases.
Print the 5 first rows of both tables.
The facebook.sqlite and twitter.sqlite SQLite databases can be created by running the data acquisition and exploration exercise.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
nwords = 100
# Your code here.
"""
Explanation: 2 Vectorization
First step: transform the data into a format understandable by the machine. What to do with text? A common choice is the so-called bag-of-words model, where we represent each word as an integer and simply count the number of appearances of each word in a document.
Example
Let's say we have a vocabulary represented by the following correspondence table.
| Integer | Word |
|:-------:|---------|
| 0 | unknown |
| 1 | dog |
| 2 | school |
| 3 | cat |
| 4 | house |
| 5 | work |
| 6 | animal |
Then we can represent the following document
I have a cat. Cats are my preferred animals.
by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.
Tasks
Construct a vocabulary of the 100 most occurring words in your dataset.
Build a vector $x \in \mathbb{R}^{100}$ for each document (post or tweet).
Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.
This raises a first data-cleaning question: we may have some text in French and some in English. What do we do?
End of explanation
"""
# Your code here.
"""
Explanation: Exploration question: what are the 5 most used words? Exploring your data while playing with it is a useful sanity check.
End of explanation
"""
# Your code here.
"""
Explanation: 3 Pre-processing
The independant variables $X$ are the bags of words.
The target $y$ is the number of likes.
Split in half for training and testing sets.
End of explanation
"""
import scipy.sparse
# Your code here.
"""
Explanation: 4 Linear regression
Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class LinearRegression with two methods:
1. fit learns the parameters $w$ and $b$ of the model given the training examples.
2. predict gives the estimated number of likes of a post / tweet. That will be used to evaluate the model on the testing set.
To evaluate the classifier, create an accuracy(y_pred, y_true) function which computes the mean squared error $\frac1n \| \hat{y} - y \|_2^2$.
Hint: you may want to use the function scipy.sparse.linalg.spsolve().
End of explanation
"""
# Your code here.
"""
Explanation: Interpretation: what are the most important words a post / tweet should include ?
End of explanation
"""
import ipywidgets
from IPython.display import clear_output
# Your code here.
"""
Explanation: 5 Interactivity
Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
Print the accuracy for each change on the slider.
End of explanation
"""
from sklearn import linear_model, metrics
# Your code here.
"""
Explanation: 6 Scikit learn
Fit and evaluate the linear regression model using sklearn.
Evaluate the model with the mean squared error metric provided by sklearn.
Compare with your implementation.
End of explanation
"""
import os
os.environ['KERAS_BACKEND'] = 'theano' # tensorflow
import keras
# Your code here.
"""
Explanation: 7 Deep Learning
Try a simple deep learning model!
Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence word by word.
End of explanation
"""
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# Your code here.
"""
Explanation: 8 Evaluation
Use matplotlib to plot a performance visualization. E.g. the predicted number of likes versus the real number of likes for all posts / tweets.
What do you observe ? What are your suggestions to improve the performance ?
End of explanation
"""
|
skkandrach/foundations-homework | .ipynb_checkpoints/Homeowrk_3-checkpoint.ipynb | mit | from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
"""
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
"""
type(document)
h3_tag = document.find('h3')
h3_tag
h3_tag.string
h3_tags = []
h3_tags = document.find_all('h3')
for item in h3_tags:
print(item.string)
print(len(h3_tags))
"""
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
"""
a_tag = document.find('a', {'class': 'tel'})
print(a_tag)
"""
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
"""
w_tag = document.find_all('td', {'class': 'wname'})
#print(w_tag)
for item in w_tag:
print(item.string)
"""
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
"""
widgets = []
entire_widgets = document.find_all('tr', {'class':"winfo"})
for item in entire_widgets:
dictionaries = {}
partno_tag = item.find('td',{'class':'partno'})
wname_tag = item.find('td',{'class':'wname'})
price_tag = item.find('td',{'class':'price'})
quantity_tag = item.find('td',{'class':'quantity'})
dictionaries['partno']= partno_tag.string
dictionaries['wname']= wname_tag.string
dictionaries['price']= price_tag.string
dictionaries['quantity']= quantity_tag.string
widgets.append(dictionaries)
print(widgets)
print(widgets[5]['partno'])
"""
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
"""
widgets = []
entire_widgets = document.find_all('tr', {'class':"winfo"})
for item in entire_widgets:
dictionaries = {}
partno_tag = item.find('td',{'class':'partno'})
wname_tag = item.find('td',{'class':'wname'})
price_tag = item.find('td',{'class':'price'})
quantity_tag = item.find('td',{'class':'quantity'})
for price_tag_item in price_tag:
price= price_tag_item[1:] #getting rid of the dollar sign
for quantity_tag_item in quantity_tag:
quantity= quantity_tag_item
dictionaries['partno']= partno_tag.string
dictionaries['wname']= wname_tag.string
dictionaries['price']= float(price)
dictionaries['quantity']= int(quantity_tag_item)
widgets.append(dictionaries)
widgets
"""
Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
[{'partno': 'C1-9476',
'price': 2.7,
'quantity': 512,
'widgetname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': 9.36,
'quantity': 967,
'widgetname': 'Widget For Furtiveness'},
... some items omitted ...
{'partno': '5B-941/F',
'price': 13.26,
'quantity': 919,
'widgetname': 'Widget For Cinema'}]
(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)
End of explanation
"""
total_widget_list= []
for item in widgets:
total_widget_list.append(item['quantity'])
sum(total_widget_list)
"""
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
"""
for item in widgets:
if item['price'] > 9.30:
print(item['wname'])
"""
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
"""
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
"""
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
"""
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
"""
for item in document.find_all('h3'):
if item.string == "Hallowed widgets":
hallowed_widgets_table = item.find_next_sibling('table', {'class': 'widgetlist'})
#print(type(hallowed_widgets_table))
td_list = hallowed_widgets_table.find_all('td', {'class': 'partno'})
for td in td_list:
print(td.string)
"""
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
"""
category_counts = {}
all_categories = document.find_all('h3')
for item in all_categories:
category = item.string
widget_table = item.find_next_sibling('table', {'class': 'widgetlist'})
widget_quantity = widget_table.find_all('tr', {'class':'winfo'})
category_counts[category]= len(widget_quantity)
category_counts
"""
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation
"""
|
CGATOxford/CGATPipelines | CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report_insert_sizes.ipynb | mit | import sqlite3
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
#import CGATPipelines.Pipeline as P
import os
import statistics
#import collections
#load R and the R packages required
#%load_ext rpy2.ipython
#%R require(ggplot2)
# use these functions to display tables nicely as html
from IPython.display import display, HTML
plt.style.use('ggplot')
#plt.style.available
"""
Explanation: Peakcalling Bam Stats and Filtering Report - Insert Sizes
This notebook is for the analysis of outputs from the peakcalling pipeline
There are severals stats that you want collected and graphed (topics covered in this notebook in bold).
These are:
how many reads input
how many reads removed at each step (numbers and percentages)
how many reads left after filtering
inset size distribution pre filtering for PE reads
how many reads mapping to each chromosome before filtering?
how many reads mapping to each chromosome after filtering?
X:Y reads ratio
insert size distribution after filtering for PE reads
samtools flags - check how many reads are in categories they shouldn't be
picard stats - check how many reads are in categories they shouldn't be
This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics
It assumes a file directory of:
location of database = project_folder/csvdb
location of this notebook = project_folder/notebooks.dir/
Firstly let's load all the things that might be needed
Insert size distribution
This section gets the size distribution of the fragments that have been sequenced in paired-end sequencing. The pipeline calculates the size distribution as the distance between the most 5' positions of both reads: for those mapping to the + strand this is the leftmost coordinate, for those mapping to the - strand it is the rightmost coordinate.
This plot is especially useful for ATAC-Seq experiments, as good samples should show peaks with a period approximately equivalent to the length of a nucleosome (~ 146bp). A lack of this phasing might indicate poor quality samples, and either over-integration (if there are lots of small fragments) or under-integration (if there is an excess of large fragments) by the transposase.
End of explanation
"""
!pwd
!date
"""
Explanation: This is where we are and when the notebook was run
End of explanation
"""
database_path = '../csvdb'
output_path = '.'
#database_path= "/ifs/projects/charlotteg/pipeline_peakcalling/csvdb"
"""
Explanation: First let's set the output path where we want our plots to be saved, set the database path, and see what tables the database contains
End of explanation
"""
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
"""
Explanation: This code adds a button to see/hide code in html
End of explanation
"""
def getTableNamesFromDB(database_path):
# Create a SQL connection to our SQLite database
con = sqlite3.connect(database_path)
cur = con.cursor()
# the result of a "cursor.execute" can be iterated over by row
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
available_tables = (cur.fetchall())
#Be sure to close the connection.
con.close()
return available_tables
db_tables = getTableNamesFromDB(database_path)
print('Tables contained by the database:')
for x in db_tables:
print('\t\t%s' % x[0])
#This function retrieves a table from sql database and indexes it with track name
def getTableFromDB(statement,database_path):
'''gets table from sql database depending on statement
and set track as index if contains track in column names'''
conn = sqlite3.connect(database_path)
df = pd.read_sql_query(statement,conn)
if 'track' in df.columns:
df.index = df['track']
return df
"""
Explanation: The code below provides functions for accessing the project database and extracting table names, so you can see what tables have been loaded into the database and are available for plotting. It also has a function for getting a table from the database and indexing it by track name
End of explanation
"""
insert_df = getTableFromDB('select * from insert_sizes;',database_path)
insert_df = insert_df[insert_df["filename"].str.contains('pseudo')==False].copy()
insert_df = insert_df[insert_df["filename"].str.contains('pooled')==False].copy()
def add_expt_to_insertdf(dataframe):
    ''' splits a track name, for example HsTh1-RATotal-R1.star, into experiment
    features: expt, sample_treatment and replicate, and adds these as
    columns to the dataframe'''
expt = []
treatment = []
replicate = []
for value in dataframe.filename:
x = value.split('/')[-1]
x = x.split('_insert')[0]
# split into design features
y = x.split('-')
expt.append(y[-3])
treatment.append(y[-2])
replicate.append(y[-1])
if len(expt) == len(treatment) and len(expt)== len(replicate):
print ('all values in list correctly')
else:
print ('error in loading values into lists')
#add collums to dataframe
dataframe['expt_name'] = expt
dataframe['sample_treatment'] = treatment
dataframe['replicate'] = replicate
return dataframe
insert_df = add_expt_to_insertdf(insert_df)
insert_df
"""
Explanation: Insert Size Summary
1) Let's get the insert_sizes table from the database
Firstly let's look at the summary statistics that give us the mean fragment size, sequencing type and mean read length. This table is produced using macs2 for PE data, or bamtools for SE data
If IDR has been run the insert_size table will contain entries for the pooled and pseudo replicates too - we don't really want this as it will duplicate the data from the original samples, so we subset this out
End of explanation
"""
ax = insert_df.boxplot(column='fragmentsize_mean', by='sample_treatment')
ax.set_title('for mean fragment size',size=10)
ax.set_ylabel('mean fragment length')
ax.set_xlabel('sample treatment')
ax = insert_df.boxplot(column='tagsize', by='sample_treatment')
ax.set_title('for tag size',size=10)
ax.set_ylabel('tag size')
ax.set_xlabel('sample treatment')
ax.set_ylim(((insert_df.tagsize.min()-2),(insert_df.tagsize.max()+2)))
"""
Explanation: Let's graph the mean fragment length and tag size grouped by sample so we can see whether they differ much
End of explanation
"""
def getFraglengthTables(database_path):
'''Takes path to sqlite3 database and retrieves fraglengths tables for individual samples
, returns a dictionary where keys = sample table names, values = fraglengths dataframe'''
frag_tabs = []
db_tables = getTableNamesFromDB(database_path)
for table_name in db_tables:
if 'fraglengths' in str(table_name[0]):
tab_name = str(table_name[0])
statement ='select * from %s;' % tab_name
df = getTableFromDB(statement,database_path)
frag_tabs.append((tab_name,df))
print('detected fragment length distribution tables for %s files: \n' % len(frag_tabs))
for val in frag_tabs:
print(val[0])
return frag_tabs
def getDFofFragLengths(database_path):
''' this takes a path to database and gets a dataframe where length of fragments is the index,
each column is a sample and values are the number of reads that have that fragment length in that
sample
'''
fraglength_dfs_list = getFraglengthTables(database_path)
dfs=[]
for item in fraglength_dfs_list:
track = item[0].split('_filtered_fraglengths')[0]
df = item[1]
#rename collumns so that they are correct - correct this in the pipeline then delete this
#df.rename(columns={'frequency':'frag_length', 'frag_length':'frequency'}, inplace=True)
df.index = df.frag_length
df.drop('frag_length',axis=1,inplace=True)
df.rename(columns={'frequency':track},inplace=True)
dfs.append(df)
frag_length_df = pd.concat(dfs,axis=1)
frag_length_df.fillna(0, inplace=True)
return frag_length_df
#Note the frequency and fragment lengths are around the wrong way!
#frequency is actually fragment length, and fragement length is the frequency
#This gets the tables from db and makes master df of all fragment length frequencies
frag_length_df = getDFofFragLengths(database_path)
#plot fragment length frequencies
ax = frag_length_df.divide(1000).plot()
ax.set_ylabel('Number of fragments\n(thousands)')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim()
"""
Explanation: OK, now let's get the fragment length distributions for each sample and plot them
End of explanation
"""
ax = frag_length_df.divide(1000).plot(figsize=(9,9))
ax.set_ylabel('Number of fragments\n(thousands)')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim((0,800))
"""
Explanation: Now let's zoom in on the interesting region of the plot (the default in the code looks at fragment lengths from 0 to 800bp - you can change this below by setting the tuple in the ax.set_xlim() function)
End of explanation
"""
percent_frag_length_df = pd.DataFrame(index=frag_length_df.index)
for column in frag_length_df:
total_frags = frag_length_df[column].sum()
percent_frag_length_df[column] = frag_length_df[column].divide(total_frags)*100
ax = percent_frag_length_df.plot(figsize=(9,9))
ax.set_ylabel('Percentage of fragments')
ax.legend(loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('percentage fragment length distribution')
ax.set_xlabel('fragment length (bp)')
ax.set_xlim((0,800))
"""
Explanation: It is a bit tricky to see differences between samples of different library sizes, so let's look at whether the proportion of reads at each fragment length is similar
End of explanation
"""
insert_df = getTableFromDB('select * from picard_stats_insert_size_metrics;',database_path)
for c in insert_df.columns:
print (c)
insert_df
"""
Explanation: SUMMARISE HERE
From these plots you should be able to tell whether there are any distinctive patterns in the size of the fragment lengths; this is especially important for ATAC-Seq data, as in successful experiments you should be able to detect nucleosome phasing - the plots can also indicate over-fragmentation or biases in cutting.
Let's also look at the Picard insert size metrics
End of explanation
"""
|
nitin-cherian/LifeLongLearning | Python/Python_Morsels_Revised/11.lstrip/let_me_try/lstrip.ipynb | mit | def lstrip(iterable, obj):
stop = False
for item in iterable:
if stop:
yield item
elif item != obj:
yield item
stop = True
x = lstrip([0, 1, 2, 3, 0], 0)
x
list(x)
"""
Explanation: Bonus1: return an iterator (for example a generator) from your lstrip function instead of a list.
End of explanation
"""
def lstrip(iterable, obj):
lstrip_stop = False
for item in iterable:
if lstrip_stop:
yield item
else:
if not callable(obj):
if item != obj:
yield item
lstrip_stop = True
else:
if not obj(item):
yield item
lstrip_stop = True
def is_falsey(value): return not bool(value)
list(lstrip(['', 0, 1, 0, 2, 'h', ''], is_falsey))
list(lstrip([-4, -2, 2, 4, -6], lambda n: n < 0))
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
list(lstrip(numbers, is_even))
list(lstrip([0, 0, 1, 0, 2, 3], 0))
list(lstrip(' hello ', ' '))
"""
Explanation: Bonus2: allow your lstrip function to accept a function as its second argument which will determine whether the item should be stripped
End of explanation
"""
import unittest
class LStripTests(unittest.TestCase):
"""Tests for lstrip."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_list(self):
self.assertIterableEqual(lstrip([1, 1, 2, 3], 1), [2, 3])
def test_nothing_to_strip(self):
self.assertIterableEqual(lstrip([1, 2, 3], 0), [1, 2, 3])
def test_string(self):
self.assertIterableEqual(lstrip(' hello', ' '), 'hello')
def test_empty_iterable(self):
self.assertIterableEqual(lstrip([], 1), [])
def test_strip_all(self):
self.assertIterableEqual(lstrip([1, 1, 1], 1), [])
def test_none_values(self):
self.assertIterableEqual(lstrip([None, 1, 2, 3], 0), [None, 1, 2, 3])
def test_iterator(self):
squares = (n**2 for n in [0, 0, 1, 2, 3])
self.assertIterableEqual(lstrip(squares, 0), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
stripped = lstrip((1, 2, 3), 1)
self.assertEqual(iter(stripped), iter(stripped))
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_function_given(self):
numbers = [0, 2, 4, 1, 3, 5, 6]
def is_even(n): return n % 2 == 0
self.assertIterableEqual(lstrip(numbers, is_even), [1, 3, 5, 6])
if __name__ == "__main__":
unittest.main(argv=['ignore-first-arg'], exit=False)
"""
Explanation: Unit Tests
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb | apache-2.0 | import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
! pip3 install tensorflow-hub $USER_FLAG -q
"""
Explanation: E2E ML on GCP: MLOps stage 6 : Get started with TensorFlow serving functions with Vertex AI Prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_tf_serving_function.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to add a serving function to a model deployed to a Vertex AI Endpoint.
Objective
In this tutorial, you learn how to use Vertex AI Prediction on a Vertex AI Endpoint resource with a serving function.
This tutorial uses the following Google Cloud ML services and resources:
Vertex AI Prediction
Vertex AI Models
Vertex AI Endpoints
The steps performed include:
Download a pretrained image classification model from TensorFlow Hub.
Create a serving function to receive compressed image data, and output decompressed, preprocessed data for the model input.
Upload the TensorFlow Hub model and serving function as a Vertex AI Model resource.
Create an Endpoint resource.
Deploy the Model resource to the Endpoint resource.
Make an online prediction to the Model resource instance deployed to the Endpoint resource.
Dataset
This tutorial uses a pretrained image classification model from TensorFlow Hub, which was trained on the ImageNet dataset.
Learn more about the ResNet V2 pretrained model.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the following packages to execute this notebook.
End of explanation
"""
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
import tensorflow as tf
import tensorflow_hub as hub
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: GPU builds of TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by the static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 (or later) with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Set pre-built containers
Set the pre-built Docker container image for prediction.
For the latest list, see Pre-built containers for prediction.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", DEPLOY_COMPUTE)
"""
Explanation: Set machine type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
tfhub_model = tf.keras.Sequential(
[hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")]
)
tfhub_model.build([None, 224, 224, 3])
tfhub_model.summary()
"""
Explanation: Get pretrained model from TensorFlow Hub
For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource.
Download the pretrained model
First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model.
End of explanation
"""
MODEL_DIR = BUCKET_URI + "/model"
tfhub_model.save(MODEL_DIR)
"""
Explanation: Save the model artifacts
At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location.
End of explanation
"""
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(224, 224))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(tfhub_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(tfhub_model, MODEL_DIR, signatures={"serving_default": serving_fn})
"""
Explanation: Upload the model for serving
Next, you upload your TF.Keras model to the Vertex AI Model service, which creates a Vertex AI Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by a HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error while compiling the serving function indicating that you are using an EagerTensor, which is not supported.
Serving function for image data
Preprocessing
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes, and then preprocessed to match the model input requirements, before it is passed as input to the deployed model.
To resolve this, you define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:
io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).
image.convert_image_dtype - Changes integer pixel values to float 32, and rescales pixel data between 0 and 1.
image.resize - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (m_call), via a concrete function. The serving function is a static graph, while the model is a dynamic graph. The concrete function performs the tasks of marshalling the input data from the serving function to the model, and marshalling the prediction result from the model back to the serving function.
End of explanation
"""
loaded = tf.saved_model.load(MODEL_DIR)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
"""
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as an HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
"""
model = aip.Model.upload(
display_name="example_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
)
print(model)
"""
Explanation: Upload the TensorFlow Hub model to a Vertex AI Model resource
Finally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource.
Note: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image.
End of explanation
"""
endpoint = aip.Endpoint.create(
display_name="example_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint)
"""
Explanation: Creating an Endpoint resource
You create an Endpoint resource using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. Optionally, you can specify the project and location (region); otherwise the settings are inherited by the values you set when you initialized the Vertex AI SDK with the init() method.
In this example, the following parameters are specified:
display_name: A human readable name for the Endpoint resource.
project: Your project ID.
location: Your region.
labels: (optional) User defined metadata for the Endpoint in the form of key/value pairs.
This method returns an Endpoint object.
Learn more about Vertex AI Endpoints.
End of explanation
"""
response = endpoint.deploy(
model=model,
deployed_model_display_name="example_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
print(endpoint)
"""
Explanation: Deploying Model resources to an Endpoint resource.
You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed has its own deployment container for the serving binary.
Note: For this example, you specified the deployment container for the TFHub model in the previous step of uploading the model artifacts to a Vertex AI Model resource.
In the next example, you deploy the Vertex AI Model resource to a Vertex AI Endpoint resource. The Vertex AI Model resource already has defined for it the deployment container image. To deploy, you specify the following additional configuration settings:
The machine type.
The (if any) type and number of GPUs.
Static, manual or auto-scaling of VM instances.
In this example, you deploy the model with the minimal amount of specified parameters, as follows:
model: The Model resource.
deployed_model_display_name: The human readable name for the deployed model instance.
machine_type: The machine type for each VM instance.
Due to the requirements to provision the resource, this may take up to a few minutes.
End of explanation
"""
! gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg
import base64
with open("test.jpg", "rb") as f:
data = f.read()
b64str = base64.b64encode(data).decode("utf-8")
"""
Explanation: Prepare test data for prediction
Next, you will load a compressed JPEG image into memory and then base64 encode it. For demonstration purposes, you use an image from the Flowers dataset.
End of explanation
"""
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
"""
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
Since in this example your test item is in a Cloud Storage bucket, you first copy it locally with gsutil, then open and read the image bytes. To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ serving_input: { 'b64': base64_encoded_bytes } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The predicted confidence, between 0 and 1, per class label.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
End of explanation
"""
delete_bucket = False
delete_model = True
delete_endpoint = True
if delete_endpoint:
try:
endpoint.undeploy_all()
endpoint.delete()
except Exception as e:
print(e)
if delete_model:
try:
model.delete()
except Exception as e:
print(e)
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -rf {BUCKET_URI}
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
"""
|
ProfessorKazarinoff/staticsite | content/code/functions/functions_in_python.ipynb | gpl-3.0 | out = sum([2, 3])
"""
Explanation: Functions are pieces of reusable code. Each function contains three discrete elements: a name, input, and output. Functions take in input, called arguments or input arguments, and produce output. A function is called in Python by coding
output = function_name(input)
Note that the output is written first, followed by the equals sign = and then the function name; the input is enclosed in parentheses ( ).
Python has many useful built-in functions such as min(), max(), abs() and sum(). We can use these built-in functions without importing any modules.
For example:
End of explanation
"""
type(sum)
"""
Explanation: In the function above, the input is the list [2, 3]. The output of the function is assigned to the out variable and the function name is sum().
We can write our own functions in Python using the general form:
```
def function_name(input):
code to run indented
return output
```
A couple of important points about the general form above: the def keyword starts the function definition. Without the word def you are writing a regular line of Python code. After the function name, the input is enclosed in parentheses ( ) followed by a colon :. Don't forget the colon :. Without the colon : your function will not run. The code within the body of the function must be indented. Finally, the keyword return needs to be at the end of your function followed by the output. The input and output variables can be anything you want, but the def, : and return must be used.
Let's create our own function to convert kilograms (kg) to grams (g). Let's call our function kg2g.
The first thing to do is make sure that our function name kg2g is not already assigned to another function or keyword by Python. We can check whether the name kg2g has already been defined using Python's type() function. We know that sum() is a function and def is a keyword; how about kg2g?
Let's first check if sum is already a function name:
End of explanation
"""
from keyword import iskeyword
iskeyword('def')
"""
Explanation: Now let's check if def is a keyword.
End of explanation
"""
type(kg2g)
"""
Explanation: OK, so how about kg2g? Is that a function name?
End of explanation
"""
from keyword import iskeyword
iskeyword('kg2g')
"""
Explanation: kg2g is not a function name. Now let's test whether kg2g is a keyword in Python:
End of explanation
"""
def kg2g(kg):
g = kg*1000
return g
"""
Explanation: Once we know that our function name is available, we can build our function. Remember the parentheses, colon, and return statement.
End of explanation
"""
kg2g(1.3)
"""
Explanation: Now let's try to use our function. How many grams is 1.3 kilograms? We expect the output to be 1300 grams.
End of explanation
"""
def kg2g(kg):
"""
Function kg2g converts between kg and g
input: a measurement in kilograms (kg), int or float
output: measurment in grams (g), float
Example:
>>> kg2g(1.3)
1300.0
"""
g = kg * 1000
return g
help(kg2g)
kg2g(1.3)
"""
Explanation: It is good practice to add a doc string to our function. A doc string is used to give the user an idea of what a function does. The doc string is shown when Python's help() function is invoked. A typical doc string includes the following:
A summary of the function
the function input, including data type
the function output, including data type
An example of the function run with sample input and the output produced
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bnu/cmip6/models/bnu-esm-1-1/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: BNU
Source ID: BNU-ESM-1-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Dissolved Organic Matter
Dissolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particles
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, the method for calculating the sinking speed of particles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled?
End of explanation
"""
# SHDShim/pytheos, examples/6_p_scale_test_Dorogokupets2015_Pt.ipynb (apache-2.0)
%config InlineBackend.figure_format = 'retina'
"""
Explanation: For high dpi displays.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
"""
Explanation: 0. General note
This example compares pressures calculated with pytheos against values from the original publication for the platinum scale by Dorogokupets 2015.
1. Global setup
End of explanation
"""
eta = np.linspace(1., 0.70, 7)
print(eta)
dorogokupets2015_pt = eos.platinum.Dorogokupets2015()
help(eos.platinum.Dorogokupets2015)
dorogokupets2015_pt.print_equations()
dorogokupets2015_pt.print_parameters()
v0 = 60.37930856339099
dorogokupets2015_pt.three_r
v = v0 * eta
temp = 3000.
p = dorogokupets2015_pt.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
"""
Explanation: 3. Compare
End of explanation
"""
v = dorogokupets2015_pt.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print((v/v0))
"""
Explanation: The table is not given in this publication.
End of explanation
"""
# Benedicto/ML-Learning, Linear_Regression_1_simple_regression.ipynb (gpl-3.0)
import graphlab
"""
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems, but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
train_data,test_data = sales.random_split(.8,seed=0)
"""
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
"""
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, use the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
"""
Explanation: Useful SFrame summary functions
In order to make use of the closed-form solution and take advantage of graphlab's built-in functions, we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
"""
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # prices_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
"""
Explanation: As we see we get the same answer both ways
End of explanation
"""
def simple_linear_regression(input_feature, output):
N = len(input_feature)
# compute the sum of input_feature and output
sum_input = input_feature.sum()
sum_output = output.sum()
# compute the product of the output and the input_feature and its sum
product = input_feature * output
product_sum = product.sum()
# compute the squared value of the input_feature and its sum
input_square = input_feature * input_feature
sum_input_square = input_square.sum()
# use the formula for the slope
slope = (product_sum - (sum_output * sum_input) * 1.0/N) / (sum_input_square - sum_input*sum_input*1.0/N)
# use the formula for the intercept
intercept = (sum_output - slope * sum_input) / N
return (intercept, slope)
"""
Explanation: Aside: the python notation x.xxe+yy means x.xx * 10^(yy), e.g. 100 = 10^2 = 1*10^2 = 1e2.
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
End of explanation
"""
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
"""
Explanation: We can test that our function works by passing it something where we know the answer. In particular, we can generate a feature and put the output exactly on a line: output = 1 + 1*input_feature. Then we know both our slope and intercept should be 1.
End of explanation
"""
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
"""
Explanation: Now that we know it works, let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
"""
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
"""
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
"""
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
"""
Explanation: Now that we can calculate a prediction given the slope and intercept, let's make one. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), what is the predicted price for a house with 2650 sqft?
End of explanation
"""
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = output - predictions
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
"""
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions, let's evaluate it using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and a residual is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
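As a sanity check, the RSS computation can be sketched with plain Python lists; rss_sketch is an assumed illustrative name, separate from the function asked for above:

```python
# Sum of squared residuals for a line y_hat = intercept + slope * x.
# Illustrative sketch on plain lists, not the SArray-based assignment code.
def rss_sketch(x, y, intercept, slope):
    return sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

# Points exactly on the line y = 1 + 1*x give zero residuals:
print(rss_sketch([0, 1, 2], [1, 2, 3], 1.0, 1.0))  # -> 0.0
```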
End of explanation
"""
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
"""
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
"""
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
"""
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
"""
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept) * 1.0 / slope
return estimated_feature
"""
Explanation: Predict the squarefeet given price
What if we want to predict the squarefeet given the price? Since we have an equation y = a + b*x we can solve the function for x. So if we have the intercept (a), the slope (b), and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
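Solving y = intercept + slope*x for x gives x = (y - intercept)/slope. A round-trip check of that inversion can be sketched as follows (inverse_prediction_sketch is an assumed name for illustration):

```python
# Invert y = intercept + slope * x to recover x from a target output y.
# Illustrative sketch; assumes slope != 0.
def inverse_prediction_sketch(output, intercept, slope):
    return (output - intercept) / float(slope)

intercept, slope = 1.0, 2.0
y = intercept + slope * 10.0  # forward prediction for x = 10
print(inverse_prediction_sketch(y, intercept, slope))  # -> 10.0
```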
End of explanation
"""
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
"""
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
"""
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
br_intercept, br_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
"""
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters for predicting price based on number of bedrooms. Use the training data!
End of explanation
"""
# Compute RSS when using bedrooms on TEST data:
print 'RSS (bedrooms): ' + str(get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], br_intercept, br_slope))
# Compute RSS when using squarefeet on TEST data:
print 'RSS (squarefeet): ' + str(get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope))
"""
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
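The comparison boils down to computing each model's RSS on the same held-out points and picking the smaller one. A toy sketch of that comparison — the data and the two (intercept, slope) pairs below are made up for illustration, not the assignment's fitted values:

```python
def rss(x, y, intercept, slope):
    # sum of squared residuals on held-out data
    return sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

x_test = [1.0, 2.0, 3.0]
y_test = [2.1, 4.0, 5.9]
model_a = (0.2, 1.9)  # (intercept, slope): tracks the data closely
model_b = (3.0, 0.0)  # a flat line near the mean of y_test
rss_a = rss(x_test, y_test, *model_a)
rss_b = rss(x_test, y_test, *model_b)
print(rss_a < rss_b)  # -> True: model_a has the lower test RSS
```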
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ncar/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
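Throughout this notebook the pattern is always `DOC.set_id(...)` followed by one or more `DOC.set_value(...)` calls (one call per item for properties with cardinality 1.N). The mechanics can be sketched with a minimal stand-in object; this is purely illustrative, since the real pyesdoc `DOC` also validates values against the CMIP6 controlled vocabulary:

```python
class DocStub:
    """Illustrative stand-in for the notebook's DOC object; the real pyesdoc
    DOC also validates values against the CMIP6 controlled vocabulary."""
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, identifier):
        # Select the property that subsequent set_value() calls target.
        self._current = identifier
        self.values.setdefault(identifier, [])

    def set_value(self, value):
        self.values[self._current].append(value)

DOC = DocStub()
DOC.set_id('cmip6.aerosol.model.processes')
DOC.set_value("Dry deposition")   # cardinality 1.N: one call per selected item
DOC.set_value("Coagulation")
print(DOC.values['cmip6.aerosol.model.processes'])
```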
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
vitojph/kschool-nlp | notebooks-py2/nltk-analyzers.ipynb | gpl-3.0 | from __future__ import print_function
from __future__ import division
import nltk
"""
Explanation: NLTK summary: syntactic parsing
This summary corresponds to chapter 8 of the NLTK Book, Analyzing Sentence Structure. Reading the chapter is highly recommended.
In this summary we review how to create grammars with NLTK and how to build tools that let us parse simple sentences.
To begin, we need to import the nltk module, which gives us access to all its functionality:
End of explanation
"""
g1 = """
S -> NP VP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
"""
Explanation: Context-Free Grammars (CFG)
Noam Chomsky defined a hierarchy of languages and grammars that is commonly used in linguistics and computer science to classify formal languages and grammars. When we want to model linguistic phenomena of natural languages, the most suitable kind of grammar is known as Type 2, or Context-Free Grammars (CFG).
We will define a grammar simply as a set of rewrite (transformation) rules. Without going into detail about the restrictions that Type 2 grammar rules must satisfy, keep the following in mind:
Formal grammars handle two kinds of alphabets.
Non-terminal symbols are the intermediate components we use in the rules. Every non-terminal symbol must be defined as a sequence of other symbols. In our case, the non-terminals will be the syntactic categories.
Terminal symbols are the final components recognized by the grammar. In our case, the terminals will be the words of the sentences we want to parse.
Every rule of a formal grammar has the form Symbol1 -> Symbol2, Symbol3... SymbolN, read as: Symbol1 is defined as / consists of / rewrites to the sequence formed by Symbol2, Symbol3, etc.
In context-free grammars, the part to the left of the arrow -> is always a single non-terminal symbol.
Generative Grammars in NLTK
To define our grammars in NLTK, we can write them in a separate file or as a text string, following the formalism of Chomsky's generative grammars. Let's define a simple grammar capable of recognizing the famous Marx Brothers line I shot an elephant in my pajamas, and store it as a text string in the variable g1.
End of explanation
"""
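To make the idea of rewrite rules concrete before handing g1 to NLTK, here is a minimal pure-Python sketch (independent of NLTK, and assuming the grammar has no empty productions) of a brute-force recognizer for the same rules:

```python
# Rules of g1 as plain data: keys are non-terminals, values are the
# alternative right-hand sides. Symbols absent from GRAMMAR are terminals.
GRAMMAR = {
    "S":   [("NP", "VP")],
    "NP":  [("Det", "N"), ("Det", "N", "PP"), ("I",)],
    "VP":  [("V", "NP"), ("VP", "PP")],
    "PP":  [("P", "NP")],
    "Det": [("an",), ("my",)],
    "N":   [("elephant",), ("pajamas",)],
    "V":   [("shot",)],
    "P":   [("in",)],
}

def derives(sym, words):
    """True if `sym` rewrites to exactly the word sequence `words`."""
    if sym not in GRAMMAR:                        # terminal: match one word
        return len(words) == 1 and words[0] == sym
    return any(matches(rhs, words) for rhs in GRAMMAR[sym])

def matches(rhs, words):
    if not rhs:
        return not words
    head, rest = rhs[0], rhs[1:]
    # Every remaining symbol must consume at least one word (no empty rules),
    # which also keeps the left-recursive rule VP -> VP PP from looping.
    for i in range(1, len(words) - len(rest) + 1):
        if derives(head, words[:i]) and matches(rest, words[i:]):
            return True
    return False

print(derives("S", "I shot an elephant in my pajamas".split()))  # True
print(derives("S", "shot an pajamas elephant my I".split()))     # False
```

Unlike NLTK's chart parser, this sketch only answers yes/no; it does not build the parse trees.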
grammar1 = nltk.CFG.fromstring(g1)
"""
Explanation: Note how we defined our grammar:
We enclosed everything in triple double quotes. Remember that this Python syntax creates strings that can contain line breaks and span more than one line.
For the non-terminals we use the usual conventions for syntactic structures and word categories, writing them in uppercase. The labels are self-explanatory, even though they are in English.
Terminals are written in single quotes.
When a non-terminal can be defined in more than one way, we mark the disjunction with the vertical bar |.
The rules read as follows: a sentence is defined as a noun phrase followed by a verb phrase; a noun phrase is defined as a determiner and a noun, or a determiner, a noun, and a prepositional phrase, or the word I, etc.
From our grammar stored as a text string, we need to create a parser we can use later. To do so, we must first parse the grammar with the nltk.CFG.fromstring() method.
End of explanation
"""
analyzer = nltk.ChartParser(grammar1)
"""
Explanation: With the grammar1 object created, we build the parser with the nltk.ChartParser method.
End of explanation
"""
oracion = "I shot an elephant in my pajamas".split()
# store all possible parses in trees
trees = analyzer.parse(oracion)
for tree in trees:
print(tree)
"""
Explanation: Once our parser is created, we can use it. The .parse method is available to parse any sentence given as a string of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. Printing the result of the method gives us the parse tree.
End of explanation
"""
print(analyzer.parse_one(oracion))
"""
Explanation: In case you hadn't noticed, the sentence I shot an elephant in my pajamas is ambiguous in English: it is the classic example of PP attachment (knowing exactly which node a prepositional phrase modifies). There is a double interpretation of the prepositional phrase in my pajamas: at the moment of the shot, who was wearing the pajamas? The elephant or me? Our grammar captures this ambiguity and can parse the sentence in two different ways, as shown in the previous cell.
If we are only interested in generating one of the possible parses, we can use the parse_one() method, as shown below.
End of explanation
"""
print(analyzer.parse(oracion))
"""
Explanation: Remember that to print the parse tree you must iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
End of explanation
"""
g1v2 = """
S -> NP VP
NP -> Det N | Det N PP | PRO
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
PRO -> 'I' | 'you'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1v2 = nltk.CFG.fromstring(g1v2)
analyzer1v2 = nltk.ChartParser(grammar1v2)
# iterate over the structure returned by parse()
for tree in analyzer1v2.parse(oracion):
print(tree)
print("\n", "-------------------------------", "\n")
for tree in analyzer1v2.parse("you shot my elephant".split()):
print(tree)
"""
Explanation: Next I slightly modify my grammar g1 to include a new grammatical category, PRO, and add some new vocabulary. Compare the two examples:
End of explanation
"""
for tree in analyzer.parse("shot an pajamas elephant my I".split()):
print("The parse tree is the following")
print(tree)
"""
Explanation: IMPORTANT NOTE on errors and the behavior of parse()
When a parser recognizes all the vocabulary of an input sentence but is unable to parse it, the parse() method does not raise an error; it returns an empty object. In this case, the sentence is ungrammatical according to our grammar.
End of explanation
"""
for tree in analyzer.parse("our time is running out".split()):
print("The parse tree is the following")
print(tree)
"""
Explanation: However, when the parser does not recognize all the vocabulary (because we use a word not defined in the lexicon), the parse() method fails and shows a ValueError message like the following. Look only at the last line:
End of explanation
"""
g2 = u"""
O -> SN SV
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
"""
grammar2 = nltk.CFG.fromstring(g2)
analizador2 = nltk.ChartParser(grammar2)
"""
Explanation: Keep this in mind when debugging your code.
Grammars in Spanish
Having seen a first CFG example, let's switch languages and create a parser for simple sentences in Spanish. The procedure is the same: we define our grammar in Chomsky's format in a separate file or in a text string, parse it with the nltk.CFG.fromstring() method, and create a parser with the nltk.ChartParser() method:
End of explanation
"""
oraciones = u"""Ana salta
la niña pela una manzana verde con el cuchillo
Juan come un cuchillo bonito desde el niño
un manzana bonito salta el cuchillo desde el niño verde
el cuchillo verde persigue a la pequeña manzana de Ana
el cuchillo verde persigue a Ana""".split("\n")
for oracion in oraciones:
print(oracion)
for tree in analizador2.parse(oracion.split()):
print(tree, "\n")
"""
Explanation: Let's test whether it can parse several sentences in Spanish. To make it more fun, we store a few sentences separated by newlines (represented by the metacharacter \n) in a list of strings called oraciones. We iterate over those sentences, print each one, split it into a list of words (with the .split() method), and print the result of parsing it with our parser.
End of explanation
"""
g3 = u"""
O -> SN SV | O Conj O
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
Conj -> 'y' | 'pero'
"""
# Note how we now create the parser in a single step
# compare it with the previous examples
analizador3 = nltk.ChartParser(nltk.CFG.fromstring(g3))
for tree in analizador3.parse(u"""la manzana salta y el niño come pero el cuchillo
verde persigue a la pequeña manzana de Ana""".split()):
print(tree)
"""
Explanation: Let's broaden the coverage of our grammar so that it can recognize and parse coordinated sentences. To do so, we modify the rule that defines the sentence, adding a recursive definition of a sentence as a sequence of a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary: a couple of conjunctions.
End of explanation
"""
# careful: these are simple, but they contain impersonal sentences, copular verbs, and elided subjects
oraciones = u"""mañana es viernes
hoy es jueves
tenéis sueño
hace frío
Pepe hace sueño""".split("\n")
# write your grammar in this cell
g4 = """
"""
analyzer4 = nltk.ChartParser(nltk.CFG.fromstring(g4))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer4.parse(oracion.split()):
print(tree, "\n")
"""
Explanation: Remember that a grammar is not a program: it is simply a description that establishes which syntactic structures are well formed (grammatical sentences) and which are not (ungrammatical sentences). When a sentence is recognized by a grammar (and is therefore well formed), the parser can represent its structure as a tree.
NLTK provides access to different kinds of parsers (dependency trees, probabilistic grammars, etc.), although we have only used the simplest of them: nltk.ChartParser(). These parsers are small programs that read a grammar and parse the sentences provided as input to the parse() method.
Another example
In class we improvised a bit and proposed the following grammar example. We will make it more complex incrementally. Let's start with a few example sentences.
End of explanation
"""
oraciones = u"""Pepe cree que mañana es viernes
María dice que Pepe cree que mañana es viernes""".split("\n")
# write your grammar extension in this cell
g5 = """
"""
analyzer5 = nltk.ChartParser(nltk.CFG.fromstring(g5))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer5.parse(oracion.split()):
print(tree, "\n")
"""
Explanation: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I mean sentences like: Pepe cree que mañana es viernes, María dice que Pepe cree que mañana es viernes, etc.
Expand your vocabulary by adding as many terminals as you need.
End of explanation
"""
|
oliverlee/pydy | examples/chaos_pendulum/chaos_pendulum.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import sympy as sm
import sympy.physics.mechanics as me
from pydy.system import System
from pydy.viz import Cylinder, Plane, VisualizationFrame, Scene
%matplotlib nbagg
me.init_vprinting(use_latex='mathjax')
"""
Explanation: Introduction
This example gives a simple demonstration of chaotic behavior in a two body system. The system is made up of a slender rod that is connected to the ceiling at one end with a revolute joint that rotates about the $\hat{\mathbf{n}}_y$ unit vector. At the other end of the rod a flat plate is attached via a second revolute joint, allowing the plate to rotate about the rod's axis, which aligns with the $\hat{\mathbf{a}}_z$ unit vector.
Setup
End of explanation
"""
mA, mB, lB, w, h, g = sm.symbols('m_A, m_B, L_B, w, h, g')
"""
Explanation: Define Variables
First define the system constants:
$m_A$: Mass of the slender rod.
$m_B$: Mass of the plate.
$l_B$: Distance from $N_o$ to $B_o$ along the slender rod's axis.
$w$: The width of the plate.
$h$: The height of the plate.
$g$: The acceleration due to gravity.
End of explanation
"""
theta, phi = me.dynamicsymbols('theta, phi')
omega, alpha = me.dynamicsymbols('omega, alpha')
"""
Explanation: There are two time varying generalized coordinates:
$\theta(t)$: The angle of the slender rod with respect to the ceiling.
$\phi(t)$: The angle of the plate with respect to the slender rod.
The two generalized speeds will then be defined as:
$\omega(t)=\dot{\theta}$: The angular rate of the slender rod with respect to the ceiling.
$\alpha(t)=\dot{\phi}$: The angular rate of the plate with respect to the slender rod.
End of explanation
"""
kin_diff = (omega - theta.diff(), alpha - phi.diff())
kin_diff
"""
Explanation: The kinematical differential equations are defined in this fashion for the KanesMethod class:
$$0 = \omega - \dot{\theta} \\
0 = \alpha - \dot{\phi}$$
End of explanation
"""
N = me.ReferenceFrame('N')
A = me.ReferenceFrame('A')
B = me.ReferenceFrame('B')
"""
Explanation: Define Orientations
There are three reference frames. These are defined as such:
End of explanation
"""
A.orient(N, 'Axis', (theta, N.y))
B.orient(A, 'Axis', (phi, A.z))
"""
Explanation: The frames are oriented with respect to each other by simple revolute rotations. The following lines set the orientations:
End of explanation
"""
No = me.Point('No')
Ao = me.Point('Ao')
Bo = me.Point('Bo')
"""
Explanation: Define Positions
Three points are necessary to define the problem:
$N_o$: The fixed point which the slender rod rotates about.
$A_o$: The center of mass of the slender rod.
$B_o$: The center of mass of the plate.
End of explanation
"""
lA = (lB - h / 2) / 2
Ao.set_pos(No, lA * A.z)
Bo.set_pos(No, lB * A.z)
"""
Explanation: The two centers of mass positions can be set relative to the fixed point, $N_o$.
End of explanation
"""
A.set_ang_vel(N, omega * N.y)
B.set_ang_vel(A, alpha * A.z)
"""
Explanation: Specify the Velocities
The generalized speeds should be used in the definition of the linear and angular velocities when using Kane's method. For simple rotations and the defined kinematical differential equations the angular rates are:
End of explanation
"""
No.set_vel(N, 0)
Ao.v2pt_theory(No, N, A)
Bo.v2pt_theory(No, N, A)
"""
Explanation: Once the angular velocities are specified, the linear velocities can be computed using the two point velocity theorem, starting with the origin point having a velocity of zero.
End of explanation
"""
IAxx = sm.S(1) / 12 * mA * (2 * lA)**2
IAyy = IAxx
IAzz = 0
IA = (me.inertia(A, IAxx, IAyy, IAzz), Ao)
"""
Explanation: Inertia
The central inertia of the symmetric slender rod with respect to its reference frame is a function of its length and its mass.
End of explanation
"""
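As a quick numeric sanity check of the slender-rod formula above (using the constant values that are assigned later in this notebook, so the numbers here are only illustrative):

```python
# Slender rod: I = m * L**2 / 12 about a central transverse axis.
# With the notebook's later constants, lA = (lB - h/2)/2 = (0.2 - 0.05)/2 = 0.075,
# so the rod length is L = 2 * lA = 0.15 m and its mass is mA = 0.01 kg.
mA = 0.01            # kg
L = 2 * 0.075        # m
IAxx = mA * L**2 / 12
print(IAxx)          # approximately 1.875e-05 kg m^2
```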
IA[0].to_matrix(A)
"""
Explanation: This gives the inertia tensor:
End of explanation
"""
IBxx = sm.S(1)/12 * mB * h**2
IByy = sm.S(1)/12 * mB * (w**2 + h**2)
IBzz = sm.S(1)/12 * mB * w**2
IB = (me.inertia(B, IBxx, IByy, IBzz), Bo)
IB[0].to_matrix(B)
"""
Explanation: The central inertia of the symmetric plate with respect to its reference frame is a function of its width and height.
End of explanation
"""
rod = me.RigidBody('rod', Ao, A, mA, IA)
plate = me.RigidBody('plate', Bo, B, mB, IB)
"""
Explanation: All of the information needed to define the two rigid bodies is now available. This information is used to create an object for the rod and one for the plate.
End of explanation
"""
rod_gravity = (Ao, mA * g * N.z)
plate_gravity = (Bo, mB * g * N.z)
"""
Explanation: Loads
The only loads in this problem is the force due to gravity that acts on the center of mass of each body. These forces are specified with a tuple containing the point of application and the force vector.
End of explanation
"""
kane = me.KanesMethod(N, q_ind=(theta, phi), u_ind=(omega, alpha), kd_eqs=kin_diff)
"""
Explanation: Equations of motion
Now that the kinematics, kinetics, and inertia have all been defined the KanesMethod class can be used to generate the equations of motion of the system. In this case the independent generalized speeds, independent generalized speeds, the kinematical differential equations, and the inertial reference frame are used to initialize the class.
End of explanation
"""
bodies = (rod, plate)
loads = (rod_gravity, plate_gravity)
fr, frstar = kane.kanes_equations(loads, bodies)
sm.trigsimp(fr)
sm.trigsimp(frstar)
"""
Explanation: The equations of motion are then generated by passing in all of the loads and bodies to the kanes_equations method. This produces $f_r$ and $f_r^*$.
End of explanation
"""
sys = System(kane)
sys.constants = {lB: 0.2, # meters
h: 0.1, # meters
w: 0.2, # meters
mA: 0.01, # kilograms
mB: 0.1, # kilograms
g: 9.81} # meters per second squared
sys.initial_conditions = {theta: np.deg2rad(45),
phi: np.deg2rad(0.5),
omega: 0,
alpha: 0}
sys.times = np.linspace(0, 10, 500)
"""
Explanation: Simulation
The equations of motion can now be simulated numerically. Values for the constants, initial conditions, and time are provided to the System class along with the symbolic KanesMethod object.
End of explanation
"""
x = sys.integrate()
"""
Explanation: The trajectories of the states are found with the integrate method.
End of explanation
"""
def plot():
plt.figure()
plt.plot(sys.times, np.rad2deg(x[:, :2]))
plt.legend([sm.latex(s, mode='inline') for s in sys.coordinates])
plot()
"""
Explanation: The angles can be plotted to see how they change with respect to time given the initial conditions.
End of explanation
"""
sys.initial_conditions[phi] = np.deg2rad(1.0)
x = sys.integrate()
plot()
"""
Explanation: Chaotic Behavior
Now change the initial condition of the plate angle just slightly to see if the behavior of the system is similar.
End of explanation
"""
sys.initial_conditions[theta] = np.deg2rad(90)
sys.initial_conditions[phi] = np.deg2rad(0.5)
x = sys.integrate()
plot()
"""
Explanation: Seems all good, very similar behavior. But now set the rod angle to $90^\circ$ and try the same slight change in plate angle.
End of explanation
"""
sys.initial_conditions[phi] = np.deg2rad(1.0)
x = sys.integrate()
plot()
"""
Explanation: First note that the plate behaves wildly. What happens when the initial plate angle is altered slightly?
End of explanation
"""
rod_shape = Cylinder(2 * lA, 0.005, color='red')
plate_shape = Plane(h, w, color='blue')
v1 = VisualizationFrame('rod',
A.orientnew('rod', 'Axis', (sm.pi / 2, A.x)),
Ao,
rod_shape)
v2 = VisualizationFrame('plate',
B.orientnew('plate', 'Body', (sm.pi / 2, sm.pi / 2, 0), 'XZX'),
Bo,
plate_shape)
scene = Scene(N, No, v1, v2, system=sys)
"""
Explanation: The behavior does not look similar to the previous simulation. This is an example of chaotic behavior. The plate angle cannot be reliably predicted because slight changes in the initial conditions cause the behavior of the system to vary widely.
Visualization
Finally, the system can be animated by attaching a cylinder and a plane shape to the rigid bodies. To properly align the coordinate axes of the shapes with the bodies, simple rotations are used.
End of explanation
"""
scene.display_ipython()
%load_ext version_information
%version_information numpy, sympy, scipy, matplotlib, pydy
"""
Explanation: The following method opens up a simple GUI that shows a 3D animation of the system.
End of explanation
"""
|
sbenthall/bigbang | examples/experimental_notebooks/Assortativity Study.ipynb | agpl-3.0 | from bigbang.archive import Archive
urls = [#"analytics",
"conferences",
"design",
"education",
"gendergap",
"historic",
"hot",
"ietf-privacy",
"ipython-dev",
"ipython-user",
"languages",
"maps-l",
"numpy-discussion",
"playground",
"potlatch-dev",
"python-committers",
"python-dev",
"scipy-dev",
"scipy-user",
"social-media",
"spambayes",
#"wikien-l",
"wikimedia-l"]
archives= [(url,Archive(url,archive_dir="../archives")) for url in urls]
archives = dict(archives)
"""
Explanation: Everything is a network. Assortativity is an interesting property of networks. It is the tendency of nodes in a network to be attached to other nodes that are similar in some way. In social networks, this is sometimes called "homophily."
One kind of assortativity that is particularly descriptive of network topology is degree assortativity. This is what it sounds like: the assortativity (tendency of nodes to attach to other nodes that are similar) of degree (the number of edges a node has).
A suggestive observation by Newman (2002) is that social networks such as academic coauthorship networks and film collaborations tend to have positive degree assortativity, while technical and biological networks tend to have negative degree assortativity. Another way of saying this is that they are disassortatively mixed. This has implications for the ways we model these networks forming as well as the robustness of these networks to the removal of nodes.
Looking at open source software collaboration as a sociotechnical system, we can ask whether and to what extent the networks of activity are assortatively mixed. Are these networks more like social networks or technical networks? Or are they something in between?
Email reply networks
One kind of network that we can extract from open source project data are networks of email replies from public mailing lists. Mailing lists and discussion forums are often the first point of contact for new community members and can be the site of non-technical social processes that are necessary for the maintenance of the community. Of all the communications media used in coordinating the cooperative work of open source development, mailing lists are the most "social".
We are going to look at the mailing lists associated with a number of open source and on-line collaborative projects. We will construct for each list a network for which nodes are email senders (identified by their email address) and edges are the number of times a sender has replied directly to another participant on the list. Keep in mind that these are public discussions and that in a sense every reply is sent to everybody.
End of explanation
"""
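Degree assortativity is just the Pearson correlation of the degrees found at either end of an edge. A self-contained sketch of the computation for a simple undirected, unweighted graph (the NetworkX call used later is more general, handling weights) looks like this:

```python
from collections import defaultdict

def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each edge
    (undirected, unweighted; undefined for regular graphs, where the
    degree variance is zero)."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    xs, ys = [], []
    for u, v in edges:
        xs += [degree[u], degree[v]]   # each edge contributes both orderings
        ys += [degree[v], degree[u]]
    n = len(xs)
    mx = sum(xs) / float(n)
    my = sum(ys) / float(n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)  # xs and ys hold the same multiset
    return cov / var

star = [(0, 1), (0, 2), (0, 3)]
print(degree_assortativity(star))   # -1.0: the hub attaches only to leaves
```

A star graph is maximally disassortative, which is the intuition behind hub-and-spoke technical networks having negative coefficients.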
import bigbang.graph as graph
igs = dict([(k,graph.messages_to_interaction_graph(v.data)) for (k,v) in archives.items()])
igs
"""
Explanation: The above code reads in preprocessed email archive data. These mailing lists are from a variety of different sources:
|List name | Project | Description |
|---|---|---|
|analytics| Wikimedia | |
|conferences| Python | |
|design| Wikimedia | |
|education| Wikimedia | |
|gendergap| Wikimedia | |
|historic| OpenStreetMap | |
|hot| OpenStreetMap | Humanitarian OpenStreetMap Team |
|ietf-privacy| IETF | |
|ipython-dev| IPython | Developer's list |
|ipython-user| IPython | User's list |
|languages| Wikimedia | |
|maps-l| Wikimedia | |
|numpy-discussion| Numpy | |
|playground| Python | |
|potlatch-dev| OpenStreetMap | |
|python-committers| Python | |
|python-dev| Python | |
|scipy-dev| SciPy | Developer's list|
|scipy-user| SciPy | User's list |
|social-media| Wikimedia | |
|spambayes| Python | |
|wikien-l| Wikimedia | English language Wikipedia |
|wikimedia-l| Wikimedia | |
End of explanation
"""
import networkx as nx
def draw_interaction_graph(ig):
pos = nx.graphviz_layout(ig,prog='neato')
node_size = [data['sent'] * 4 for name,data in ig.nodes(data=True)]
nx.draw(ig,
pos,
node_size = node_size,
node_color = 'b',
alpha = 0.4,
font_size=18,
font_weight='bold'
)
# edge width is proportional to replies sent
edgewidth=[d['weight'] for (u,v,d) in ig.edges(data=True)]
#overlay edges with width based on weight
nx.draw_networkx_edges(ig,pos,alpha=0.5,width=edgewidth,edge_color='r')
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(550,figsize=(12.5, 7.5))
for i, (ln, ig) in enumerate(igs.items(), start=1):
print ln
try:
plt.subplot(5, 5, i)
#print nx.degree_assortativity_coefficient(ig)
draw_interaction_graph(ig)
except:
print 'plotting failure'
plt.show()
"""
Explanation: Now we have processed the mailing lists into interaction graphs based on replies. This is what those graphs look like:
End of explanation
"""
for ln,ig in igs.items():
print ln, len(ig.nodes()), nx.degree_assortativity_coefficient(ig,weight='weight')
"""
Explanation: Well, that didn't work out so well...
I guess I should just go on to compute the assortativity directly.
This is every mailing list, with the total number of nodes and its degree assortativity computed.
End of explanation
"""
|
wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era | Keras_convolutional_MNIST.ipynb | gpl-3.0 | %autosave 120
import numpy as np
np.random.seed(1337)
import datetime
import graphviz
from IPython.display import SVG
import keras
from keras import activations
from keras import backend as K
from keras.datasets import mnist
from keras.layers import (
concatenate,
Concatenate,
Conv1D,
Conv2D,
Dense,
Dropout,
Embedding,
Flatten,
Input,
MaxPooling1D,
MaxPooling2D)
from keras.models import load_model, Model, Sequential
from keras_tqdm import TQDMNotebookCallback
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
import math
import matplotlib
from matplotlib import gridspec
import matplotlib.pylab as plt
from matplotlib.ticker import NullFormatter, NullLocator, MultipleLocator
import pandas as pd
import random
from scipy import stats
import seaborn as sns
from sklearn.datasets import load_iris
import sklearn.ensemble
import sklearn.tree
from sklearn.metrics import (
auc,
confusion_matrix,
roc_curve,
precision_score)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import sqlite3
import sys
import talos as ta
import tensorflow as tf
from tensorflow.python.client.device_lib import list_local_devices
from tqdm import tqdm_notebook
import uuid
from vis.utils import utils
from vis.visualization import visualize_activation
from vis.visualization import visualize_saliency
import warnings
pd.set_option("display.max_columns", 500)
pd.set_option("display.max_rows", 500)
sns.set_palette('husl')
sns.set(style='ticks')
warnings.filterwarnings("ignore")
print('Python version:', sys.version)
print('Matplotlib version:', matplotlib.__version__)
print('NumPy version:', np.__version__)
print('Keras version:', keras.__version__)
print('TensorFlow version:', tf.__version__)
list_local_devices()
%matplotlib inline
plt.rcParams['figure.figsize'] = [10, 10]
# input dimensions
img_x = 28
img_y = 28
# Load MNIST data into training and testing datasets. The x data are the features and the y data are the labels.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_classes = 10
# Reshape the data into a 4D tensor (sample_number, x_img_size, y_img_size, num_channels).
# MNIST is greyscale, which corresponds to a single channel/dimension.
# Alternatively, color, for example RGB, would correspond to three channels/dimensions.
x_train = x_train.reshape(x_train.shape[0], img_x, img_y, 1)
x_test = x_test.reshape(x_test.shape[0], img_x, img_y, 1)
input_shape = (img_x, img_y, 1)
# Cast the data as type float32.
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255
x_test = x_test / 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices for use in the categorical_crossentropy loss.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
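For reference, to_categorical is just one-hot encoding; a minimal pure-Python equivalent (illustrative only, since the real Keras utility returns a NumPy array) is:

```python
# One-hot encode integer class labels (illustrative stand-in for to_categorical).
def one_hot(labels, num_classes):
    return [[1.0 if j == y else 0.0 for j in range(num_classes)] for y in labels]

print(one_hot([1, 0, 2], 3))  # [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
```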
x_train.shape
input_shape
y_train[0] # labels
y_train.shape
for i in x_train[0].tolist():
print('\n', i)
plt.imshow(x_train[21].reshape(28, 28), cmap="Greys", interpolation="nearest");
model = Sequential()
model.add(Conv2D( # Add a 2D convolutional layer to process the 2D input (image) data.
32, # number of output channels
kernel_size = (3, 3), # kernel: 3 x 3 moving window
strides = (1, 1), # kernel strides in the x and y dimensions -- default: (1, 1)
activation = 'relu', # activation function: ReLU
input_shape = input_shape # input size/shape
))
model.add(MaxPooling2D( # Add a 2D max pooling layer.
pool_size = (2, 2), # size of the pooling in the x and y dimensions
strides = (2, 2) # strides in the x and y dimensions
))
# Add a convolutional layer. The input tensor for this layer is (batch_size, 28, 28, 32),
# where 28 x 28 corresponds to the input dimensions and 32 is the number of output channels from the previous layer.
model.add(Conv2D(
64, # number of output channels
(5, 5), # kernel: 5 x 5 moving window
strides = (1, 1), # kernel strides in x and y dimensions -- default: (1, 1)
activation = 'relu' # activation function: ReLU
))
model.add(Dropout(rate=0.5)) # Add a dropout layer.
model.add(MaxPooling2D( # Add a 2D max pooling layer.
pool_size = (2, 2) # size of the pooling in the x and y dimensions
))
# Flatten the output from convolutional layers to prepare them for input to fully-connected layers.
model.add(Flatten())
model.add(Dense( # Specify a fully-connected layer.
1000, # number of nodes
activation = 'relu' # activation function: ReLU
))
model.add(Dense( # Specify a fully-connected output layer.
num_classes, # number of classes
activation = 'softmax', # softmax classification
name = "preds"
))
#plot_model(model, to_file="model.png")
model.summary()
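The shapes reported by model.summary() follow from the usual size arithmetic, assuming Keras's default 'valid' padding: out = (in - kernel) // stride + 1, with the same formula for max pooling. A small sketch of that calculation for the layers defined above:

```python
# Output-size arithmetic for the model above (assumption: 'valid' padding).
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

s = conv_out(28, 3)    # 26: first Conv2D, 3x3 kernel, stride 1
s = conv_out(s, 2, 2)  # 13: 2x2 max pool, stride 2
s = conv_out(s, 5)     # 9:  second Conv2D, 5x5 kernel
s = conv_out(s, 2, 2)  # 4:  2x2 max pool (default stride = pool size)
print(s * s * 64)      # 1024 units feeding the Flatten layer
```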
SVG(model_to_dot(model).create(prog='dot', format='svg'));
model.compile(
loss = 'categorical_crossentropy',
optimizer = 'nadam',
metrics = ['accuracy']
)
checkpoint = keras.callbacks.ModelCheckpoint(
filepath = 'best_model.{epoch:02d}-{val_loss:.2f}.h5',
monitor = 'val_loss',
save_best_only = True
)
%%time
out = model.fit(
x_train,
y_train,
batch_size = 512,
epochs = 5,
verbose = True,
validation_data = (x_test, y_test),
callbacks = [checkpoint]
)
score = model.evaluate(x_test, y_test, verbose=False)
print('test loss:', score[0])
print('test accuracy:', score[1])
plt.plot(out.history['acc'], label='train')
plt.plot(out.history['val_acc'], label='validate')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.show();
"""
Explanation: example: Keras 2D convolutional neural network on MNIST
End of explanation
"""
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear # Swap softmax for linear.
model = utils.apply_modifications(model)
filter_idx = 0
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, verbose=False)
im = plt.imshow(img[..., 0])
plt.colorbar(im, fraction=0.0458, pad=0.04);
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear # Swap softmax for linear.
model = utils.apply_modifications(model)
filter_idx = 4
img = visualize_activation(model, layer_idx, filter_indices=filter_idx, verbose=False)
im = plt.imshow(img[..., 0])
plt.colorbar(im, fraction=0.0458, pad=0.04);
"""
Explanation: deep learning understanding
In deep learning research, researchers tend to focus on visualization of learned features of each neuron using various optimization-based algorithms. These optimization-based methods are currently divided into two categories: activation maximization and code inversion.
activation maximization
In a convolutional neural network, each convolution layer has several learned template-matching filters that maximize their output when a similar template pattern is found in an input image. The first convolution layer is straightforward to visualise: simply display the weights as an image. To see what the layer is doing, a simple option is to apply the filter over raw input pixels. Higher convolution filters operate on the outputs of lower convolution filters (which indicate the presence or absence of certain template patterns), making them harder to interpret.
The idea of activation maximization is to generate an input image that maximizes the filter's output activation. This approach enables us to see what sorts of input patterns activate a particular filter.
example
End of explanation
"""
grads = visualize_saliency(model, layer_idx, filter_indices=filter_idx, seed_input=x_test[13], backprop_modifier='guided')
im = plt.imshow(grads, cmap='jet')
plt.colorbar(im, fraction=0.0458, pad=0.04);
"""
Explanation: saliency
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (paper introducing the technique)
Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-wise Relevance Propagation
Understanding how convolutional neural networks work is hard. One two-step method, the Salient Relevance (SR) map, aims to illuminate how deep convolutional neural networks recognise inputs through so-called attention areas. First, a layer-wise relevance propagation (LRP) step estimates a 'pixel'-wise relevance map over the input. Then a context-aware saliency map is constructed from the LRP-generated map; it identifies regions close to the foci of attention rather than the isolated 'pixels' that LRP reveals. This arguably mirrors the human visual system, where information about regions matters more than information about individual pixels, and the original designers suggest that this saliency is something of a simulation of human visual recognition. The method identifies not only key pixels but also attention areas that contribute to the network's comprehension of its inputs. Overall, salient relevance is a visual interface that unveils some of the visual attention of the network and reveals which types of objects the model has learned to recognize after training.
A saliency map highlights those input elements that are most important for classification of the input.
input-specific class saliency map
Given an input ${I_{o}}$ (e.g. an image), a class ${c}$ and a classification convolutional neural network with the class score function ${S_{c}\left(I\right)}$, which is computed by the classification layer of the network, features of input ${I_{o}}$ can be ranked by their influence on the class score ${S_{c}\left(I_{o}\right)}$ in order to create a saliency map. In this case the saliency map specific to a specific input case. One interpretation of computing this image-specific class saliency is that the magnitude of the class score derivative indicates which input features require the least change to affect the class score the most.
End of explanation
"""
# get indices in test dataset of instances of the class 0
y_test_non_categorical = np.argmax(y_test, axis=1)
indices = [i for i, j in enumerate(y_test_non_categorical) if j == 0]
# get the instances
x_test_0 = [x_test[i] for i in indices]
sample_size = 100
saliencies = []
for i in range(sample_size):
saliencies.append(visualize_saliency(model, layer_idx, filter_indices=filter_idx, seed_input=x_test_0[i], backprop_modifier='guided'))
im = plt.imshow(np.mean(saliencies, axis=0), cmap='jet')
plt.colorbar(im, fraction=0.0458, pad=0.04);
"""
Explanation: class saliency map extraction
In the original definition of saliency, a class saliency map is computed essentially by taking the mean of ten input-specific saliency maps with the ten inputs selected randomly. This approach is used here but using a sample of 100 images for the mean.
End of explanation
"""
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.sem.html
im = plt.imshow(stats.sem(saliencies, axis=0), cmap='jet');
plt.colorbar(im, fraction=0.0458, pad=0.04);
"""
Explanation: class saliency map statistical uncertainties
End of explanation
"""
mean = np.mean(saliencies, axis=0)
statistical_uncertainty = stats.sem(saliencies, axis=0)
x = list(range(len(mean.flatten())))  # one x position per flattened pixel
y = mean.flatten()
y_su = statistical_uncertainty.flatten()
plt.plot(x, y, 'k-')
plt.fill_between(x, y-10*y_su/2, y+10*y_su/2)
plt.show();
"""
Explanation: reshaped to 1D class-saliency map with statistical uncertainties scaled by a factor of 10
End of explanation
"""
plt.rcParams["figure.figsize"] = [10, 10]
xi = 0.2; yi = 0.2; wi = 0.7; hi = 0.7 # image
xc = 0.91; yc = 0.2; wc = 0.05; hc = 0.7 # colorbar
xh = 0.2; yh = 0.0; wh = 0.7; hh = 0.2 # horizontal plot
xv = 0.0; yv = 0.2; wv = 0.2; hv = 0.7 # vertical plot
ax_i = plt.axes((xi, yi, wi, hi))
ax_h = plt.axes((xh, yh, wh, hh))
ax_v = plt.axes((xv, yv, wv, hv))
ax_c = plt.axes((xc, yc, wc, hc))
ax_i.xaxis.set_major_formatter(NullFormatter())
ax_i.yaxis.set_major_formatter(NullFormatter())
ax_h.yaxis.set_major_formatter(NullFormatter())
ax_v.xaxis.set_major_formatter(NullFormatter())
plt.axes(ax_i)
plt.imshow(mean, aspect='auto', cmap="jet")
ax_h.plot(list(range(0, 28)), mean.sum(axis=0), '-k', drawstyle='steps')
ax_h.plot(list(range(0, 28)), mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='red')
ax_h.plot(list(range(0, 28)), mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='blue')
ax_v.plot(mean.sum(axis=1), list(range(0, 28)), '-k', drawstyle='steps')
ax_v.plot(mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='red')
ax_v.plot(mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='blue')
cb = plt.colorbar(cax=ax_c)
#cb.set_label('intensity')
#ax_i.set_title('input')
#ax_h.set_xlabel('${x}$')
#ax_h.set_ylabel('intensity')
#ax_h.yaxis.set_label_position('right')
#ax_v.set_ylabel('${y}$')
#ax_v.set_xlabel('intensity')
#ax_v.xaxis.set_label_position('top')
plt.show();
plt.rcParams["figure.figsize"] = [10, 10]
xi = 0.2; yi = 0.2; wi = 0.7; hi = 0.7 # image
xc = 0.91; yc = 0.2; wc = 0.05; hc = 0.7 # colorbar
xh = 0.2; yh = 0.0; wh = 0.7; hh = 0.2 # horizontal plot
xv = 0.0; yv = 0.2; wv = 0.2; hv = 0.7 # vertical plot
ax_i = plt.axes((xi, yi, wi, hi))
ax_h = plt.axes((xh, yh, wh, hh))
ax_v = plt.axes((xv, yv, wv, hv))
ax_c = plt.axes((xc, yc, wc, hc))
ax_i.xaxis.set_major_formatter(NullFormatter())
ax_i.yaxis.set_major_formatter(NullFormatter())
ax_h.yaxis.set_major_formatter(NullFormatter())
ax_v.xaxis.set_major_formatter(NullFormatter())
plt.axes(ax_i)
plt.imshow(mean, aspect='auto', cmap="jet")
ax_h.plot(list(range(0, 28)), mean.sum(axis=0), '-k', drawstyle='steps')
#ax_h.plot(list(range(0, 28)), mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='red')
#ax_h.plot(list(range(0, 28)), mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2, '-k', drawstyle='steps', color='blue')
ax_h.fill_between(
list(range(0, 28)),
mean.sum(axis=0) + np.sum(statistical_uncertainty, axis=0)/2,
mean.sum(axis=0) - np.sum(statistical_uncertainty, axis=0)/2,
step = 'pre',
facecolor = 'red',
alpha = 0.5
)
ax_h.set_xlim(-1, 27)
ax_v.plot(mean.sum(axis=1), list(range(0, 28)), '-k', drawstyle='steps')
#ax_v.plot(mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='red')
#ax_v.plot(mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2, list(range(0, 28)), '-k', drawstyle='steps', color='blue')
ax_v.fill_betweenx(
    list(range(0, 28)),
mean.sum(axis=1) + np.sum(statistical_uncertainty, axis=1)/2,
mean.sum(axis=1) - np.sum(statistical_uncertainty, axis=1)/2,
step = 'pre',
facecolor = 'red',
alpha = 0.5
)
ax_v.set_ylim(0, 28)
cb = plt.colorbar(cax=ax_c)
#cb.set_label('intensity')
#ax_i.set_title('input')
#ax_h.set_xlabel('${x}$')
#ax_h.set_ylabel('intensity')
#ax_h.yaxis.set_label_position('right')
#ax_v.set_ylabel('${y}$')
#ax_v.set_xlabel('intensity')
#ax_v.xaxis.set_label_position('top')
plt.show();
"""
Explanation: 2D class saliency map with uncertainties shown in marginal plots
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/7 Neural Networks/Reverse-Mode-AD.ipynb | gpl-2.0 | def f(x1, x2):
    return math.sin(x1 + x2) * math.cos(x1 - x2) + (x1 + x2) * (x1 - x2)
"""
Explanation: Reverse Mode Automatic Differentiation
We demonstrate reverse mode AD with the function
$$ f(x_1, x_2) = \sin(x_1 + x_2) \cdot \cos(x_1 - x_2) + (x_1 + x_2) \cdot (x_1 - x_2) $$
To compute the function step by step, we introduce the following auxiliary variables:
* $v_1 := x_1 + x_2$,
* $v_2 := x_1 - x_2$,
* $v_3 := \sin(v_1)$,
* $v_4 := \cos(v_2)$,
* $v_5 := v_3 \cdot v_4$,
* $v_6 := v_1 \cdot v_2$,
* $y := v_5 + v_6$.
End of explanation
"""
CG = [ ('x1', ),
('x2', ),
('v1', '+', 'x1', 'x2'),
('v2', '-', 'x1', 'x2'),
('v3', 'sin', 'v1'),
('v4', 'cos', 'v2'),
('v5', '*', 'v3', 'v4'),
('v6', '*', 'v1', 'v2'),
('y', '+', 'v5', 'v6')
]
import graphviz as gv
"""
Explanation: The computational graph CG defined below implements the function f that is defined above.
End of explanation
"""
def render(CG):
cg = gv.Digraph()
cg.attr(rankdir='LR')
for node in CG:
shape = 'rectangle'
match node:
case (v, ):
label = f'{v}'
shape = 'circle'
case (v, r):
label = f'{v} := {r}'
case (v, op, a1, a2):
label = f'{v} := {a1} {op} {a2}'
case (v, f, a):
label = f'{v} := {f}({a})'
cg.node(v, label=label, shape=shape)
for node in CG:
match node:
case (v, _, a1, a2):
cg.edge(a1, v)
cg.edge(a2, v)
case (v, _, a):
cg.edge(a, v)
return cg
dot = render(CG)
dot
dot.save(filename='cg.dot')
!open cg.dot
import math
"""
Explanation: The function render(CG) takes a computational graph CG as input and renders it graphically via graphviz.
End of explanation
"""
def eval_graph(CG, Values):
for node in CG:
match node:
case (v, ):
pass
case (v, r):
Values[v] = r
case (v, '+', a1, a2):
Values[v] = Values[a1] + Values[a2]
case (v, '-', a1, a2):
Values[v] = Values[a1] - Values[a2]
case (v, '*', a1, a2):
Values[v] = Values[a1] * Values[a2]
case (v, '/', a1, a2):
Values[v] = Values[a1] / Values[a2]
case (v, 'sqrt', a):
Values[v] = math.sqrt(Values[a])
case (v, 'exp', a):
Values[v] = math.exp(Values[a])
case (v, 'log', a):
Values[v] = math.log(Values[a])
case (v, 'sin', a):
Values[v] = math.sin(Values[a])
case (v, 'cos', a):
Values[v] = math.cos(Values[a])
case (v, 'atan', a):
Values[v] = math.atan(Values[a])
return Values['y']
eval_graph(CG, { 'x1': math.pi/4, 'x2': math.pi/4 })
def add_to_dictionary(D, key, value):
if key in D:
D[key] |= { value }
else:
D[key] = { value }
"""
Explanation: The function eval_graph takes two arguments:
* CG is a computational graph,
* Values is a dictionary assigning values to variable names.
End of explanation
"""
def parents(CG):
Parents = {}
for node in CG:
match node:
case (p, _, a):
add_to_dictionary(Parents, a, p)
case (p, _, a1, a2):
add_to_dictionary(Parents, a1, p)
add_to_dictionary(Parents, a2, p)
return Parents
parents(CG)
def node_dictionary(CG):
D = {}
for node in CG:
name = node[0]
D[name] = node
return D
node_dictionary(CG)
"""
Explanation: Given a computational graph CG, the function parents returns a dictionary Parents such that
for every node name n occurring in CG, Parents[n] is the set of names of the parent nodes
of the node labeled with n.
End of explanation
"""
def partial_derivative(Node, arg, Values):
match Node:
case n, '+', a1, a2:
if arg == a1 == a2:
return 2
if arg == a1 or arg == a2:
return 1
else:
assert False, f'partial_derivative({Node}, {arg})'
case n, '-', a1, a2:
if arg == a1 == a2:
return 0
if arg == a1:
return 1
if arg == a2:
return -1
else:
assert False, f'partial_derivative({Node}, {arg})'
case n, '*', a1, a2:
if arg == a1 == a2:
return 2 * Values[a1]
if arg == a1:
return Values[a2]
if arg == a2:
return Values[a1]
else:
assert False, f'partial_derivative({Node}, {arg})'
case n, '/', a1, a2:
if arg == a1 == a2:
return 0
if arg == a1:
return 1 / Values[a2]
if arg == a2:
return -Values[a1] / Values[a2] ** 2
else:
assert False, f'partial_derivative({Node}, {arg})'
case n, 'sqrt', a:
return 0.5 / math.sqrt(Values[a])
case n, 'exp', a:
return math.exp(Values[a])
case n, 'log', a:
return 1 / Values[a]
case n, 'sin', a:
return math.cos(Values[a])
case n, 'cos', a:
return -math.sin(Values[a])
case n, 'atan', a:
return 1 / (1 + Values[a]**2)
def adjoints(CG, Values):
eval_graph(CG, Values)
NodeDict = node_dictionary(CG)
Parents = parents(CG)
Adjoints = {}
Adjoints['y'] = 1
for Node in reversed(CG[:-1]):
name = Node[0]
result = 0
for parent_name in Parents[name]:
parent_node = NodeDict[parent_name]
result += Adjoints[parent_name] * partial_derivative(parent_node, name, Values)
Adjoints[name] = result
return Adjoints
adjoints(CG, { 'x1': math.pi/4, 'x2': math.pi/4 })
CG
"""
Explanation: The function partial_derivative takes three arguments:
* Node is a computational node,
* arg is the name of a node occurring as argument in Node,
* Values is a dictionary that stores a value for every node name.
The function computes the partial derivative of Node w.r.t. arg.
End of explanation
"""
|
abulbasar/machine-learning | Scikit - 21 Kaggle House price prediction (regression).ipynb | apache-2.0 | lasso = Lasso(random_state=1, max_iter=10000)
lasso.fit(X_train_std, y_train)
rmse(y_test, lasso.predict(X_test_std))
"""
Explanation: It seems that the linear regression model performed very poorly, most likely because it finds a lot of collinearity in the data due to the categorical columns.
Test lasso, which is more robust against multicollinearity.
End of explanation
"""
scores = cross_val_score(cv=10, estimator = lasso, scoring="neg_mean_squared_error", X=X_train_std, y = y_train)
scores = np.sqrt(-scores)
scores
from sklearn import linear_model
from sklearn import metrics
from sklearn import tree
from sklearn import ensemble
from sklearn import neighbors
import xgboost as xgb
rs = 1
estimatores = {
#'Linear': linear_model.LinearRegression(),
'Ridge': linear_model.Ridge(random_state=rs, max_iter=10000),
'Lasso': linear_model.Lasso(random_state=rs, max_iter=10000),
'ElasticNet': linear_model.ElasticNet(random_state=rs, max_iter=10000),
'BayesRidge': linear_model.BayesianRidge(),
'OMP': linear_model.OrthogonalMatchingPursuit(),
'DecisionTree': tree.DecisionTreeRegressor(max_depth=10, random_state=rs),
'RandomForest': ensemble.RandomForestRegressor(random_state=rs),
'KNN': neighbors.KNeighborsRegressor(n_neighbors=5),
'GradientBoostingRegressor': ensemble.GradientBoostingRegressor(n_estimators=300, max_depth=4, learning_rate=0.01, loss="ls", random_state=rs),
'xgboost': xgb.XGBRegressor(max_depth=10)
}
errvals = {}
for k in estimatores:
e = estimatores[k]
e.fit(X_train_std, y_train)
err = np.sqrt(metrics.mean_squared_error(y_test, e.predict(X_test_std)))
errvals[k] = err
result = pd.Series(errvals).sort_values()
result.plot.barh(width = 0.8)
for y, error in enumerate(result):
plt.text(x = 0.01, y = y - 0.1, s = "%.3f" % error, fontweight='bold', color = "white")
plt.title("Performance comparison of algorithms")
"""
Explanation: This rmse score seems reasonable. Find cross validation scores.
End of explanation
"""
|
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/01-Python-Crash-Course/Python Crash Course Exercises - Solved .ipynb | apache-2.0 | price = 300
import math
math.sqrt(price)
"""
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python basics. The questions tend to have a financial theme to them, but don't look too deeply into the tasks themselves; many of them don't hold any significance and are meaningless. If you find this extremely challenging, then you are probably not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course geared more towards complete beginners, such as Complete Python Bootcamp
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
Task #1
Given price = 300 , use python to figure out the square root of the price.
End of explanation
"""
stock_index = "SP500"
stock_index[2:]
"""
Explanation: Task #2
Given the string:
stock_index = "SP500"
Grab '500' from the string using indexing.
End of explanation
"""
stock_index = "SP500"
price = 300
print('The {x} is at {y} today.'.format(x = stock_index, y = price))
"""
Explanation: Task #3
Given the variables:
stock_index = "SP500"
price = 300
Use .format() to print the following string:
The SP500 is at 300 today.
End of explanation
"""
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
print(stock_info['sp500']['yesterday'])
print(stock_info['info'][1][2])
"""
Explanation: Task #4
Given the variable of a nested dictionary with nested lists:
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
Use indexing and key calls to grab the following items:
Yesterday's SP500 price (250)
The number 365 nested inside a list nested inside the 'info' key.
End of explanation
"""
def source_finder(inp):
return (inp.split(sep = '--'))[-1]
source_finder("PRICE:345.324:SOURCE--QUANDL")
"""
Explanation: Task #5
Given strings with this form where the last source value is always separated by two dashes --
"PRICE:345.324:SOURCE--QUANDL"
Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL"
End of explanation
"""
def price_finder(inp):
return 'price' in inp.lower()
price_finder(inp = "What is the price?")
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
"""
Explanation: Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
End of explanation
"""
def count_price(inp):
    return inp.lower().count('price')
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
"""
Explanation: Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
End of explanation
"""
def avg_price(inp):
return (sum(inp) / len(inp))
avg_price([3,4,5])
"""
Explanation: Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float.
End of explanation
"""
|
zhmz90/CS231N | assign/assignment1/.ipynb_checkpoints/knn-checkpoint.ipynb | mit | %matplotlib
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
#print idx
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
"""
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
"""
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
"""
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
"""
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer:
- A distinctly bright row means the corresponding test image has a high distance to every training image, i.e. it resembles nothing in the training set.
- A distinctly bright column means the corresponding training image has a high distance to all of the test images.
End of explanation
"""
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
"""
Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:
End of explanation
"""
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
#two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
#print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
#no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
#print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
"""
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
"""
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
# np.array_split's second argument is the number of sections, so split into num_folds folds
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
def run_knn(X_train, y_train, X_test, y_test, k):
    classifier = KNearestNeighbor()
    classifier.train(X_train, y_train)
    dists = classifier.compute_distances_no_loops(X_test)
    y_test_pred = classifier.predict_labels(dists, k=k)
    num_correct = np.sum(y_test_pred == y_test)
    accuracy = float(num_correct) / y_test.shape[0]
    return accuracy
for k in k_choices:
    k_to_accuracies[k] = []
    for fold in range(num_folds):
        # Hold out fold `fold` as the validation set; train on the rest.
        X_val, y_val = X_train_folds[fold], y_train_folds[fold]
        X_tr = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
        y_tr = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
        k_to_accuracies[k].append(run_knn(X_tr, y_tr, X_val, y_val, k))
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
"""
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation
"""
|
ProfessorKazarinoff/staticsite | content/code/periodic_table/seaborn_violin_plot.ipynb | gpl-3.0 | import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Title: Violin plot with python, matplotlib and seaborn
Date: 2017-10-19 10:42
Modified: 2017-10-19 10:42
Slug: violin-plot-with-python-matplotlib-seaborn
Import the necessary packages
End of explanation
"""
df = pd.read_csv('LAB_3_large_data_set_cleaned.csv')
"""
Explanation: Read in the data
We will use pandas pd.read_csv() function to read in the data file. We'll bring in the .csv file and save it in a pandas dataframe called df.
End of explanation
"""
ax = sns.violinplot(data=df, palette="pastel")
plt.show()
"""
Explanation: Make the violin plot
We will make the violin plot using seaborns sns.violinplot() function. Two arguments are added to the function call data=df which specifies the data included in the plot and palette="pastel" which specifies a nice looking color pallet.
End of explanation
"""
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
"""
Explanation: Save the figure
To save the violin plot, we will use matplotlibs ax.get_figure() method. This creates a figure object that we'll call fig. We then use matplotlibs fig.savefig() method to save the figure.
End of explanation
"""
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('LAB_3_large_data_set_cleaned.csv')
ax = sns.violinplot(data=df, palette="pastel")
plt.show()
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
"""
Explanation: The whole script
End of explanation
"""
|