repo_name stringlengths 6 77 | path stringlengths 8 215 | license stringclasses 15 values | content stringlengths 335 154k |
|---|---|---|---|
theamazingfedex/ml-project-4 | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll train a sequence-to-sequence model on a dataset of English and French sentence pairs so that it can translate new sentences from English to French.
Get the Data
Since training a model to translate the entire English language to French would take a long time, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement text_to_ids Function
if len(source_text) == 0 or len(target_text) == 0:
return [], []
source_sentences = source_text.split('\n')
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_ids = list(map(lambda x: [source_vocab_to_int[word] for word in x.split()], source_sentences))
target_ids = list(map(lambda x: [target_vocab_to_int[word] for word in x.split()], target_sentences))
return source_ids, target_ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
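As a quick sanity check, the same conversion can be exercised with toy vocabularies (made up here for illustration; the real ones come from the helper module):

```python
# Toy vocabularies -- hypothetical ids, not the real ones built by helper.
source_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'wet': 7}
target_vocab = {'<EOS>': 1, 'new': 4, 'jersey': 5, 'est': 6, 'humide': 7}

def toy_text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Same logic as text_to_ids above: split on newlines, append <EOS> to each target sentence.
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in (line + ' <EOS>').split()]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src, tgt = toy_text_to_ids('new jersey is wet', 'new jersey est humide',
                           source_vocab, target_vocab)
print(src)  # [[4, 5, 6, 7]]
print(tgt)  # [[4, 5, 6, 7, 1]] -- note the trailing <EOS> id
```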
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
go_id = target_vocab_to_int['<GO>']
ends = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
finals = tf.concat([tf.fill([batch_size, 1], go_id), ends], 1)
return finals
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
End of explanation
"""
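The strided-slice-and-concat trick is easier to see in plain NumPy. A sketch with toy ids, where 9 stands in for target_vocab_to_int['&lt;GO&gt;']:

```python
import numpy as np

go_id = 9  # stand-in for target_vocab_to_int['<GO>']
target_data = np.array([[4, 5, 1],
                        [6, 7, 1]])

ends = target_data[:, :-1]  # drop the last word id of each row (the tf.strided_slice step)
finals = np.concatenate([np.full((target_data.shape[0], 1), go_id), ends], axis=1)
print(finals)  # [[9 4 5], [9 6 7]]
```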
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
# TODO: Implement Function
# Build one fresh LSTM cell per layer; reusing a single cell object shares weights across layers.
cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size), output_keep_prob=keep_prob) for _ in range(num_layers)]
cell = tf.contrib.rnn.MultiRNNCell(cells)
_, state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state, name='decoder_fn_train')
decoder_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
decoder_outputs, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
decoder_drop,
decoder_fn,
inputs=dec_embed_input,
sequence_length=sequence_length,
scope=decoding_scope
)
return output_fn(decoder_outputs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of the decoded sequence
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
# TODO: Implement Function
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn=output_fn,
encoder_state=encoder_state,
embeddings=dec_embeddings,
start_of_sequence_id=start_of_sequence_id,
end_of_sequence_id=end_of_sequence_id,
maximum_length=maximum_length,
num_decoder_symbols=vocab_size,
name='inference_decoder_fn'
)
outputs, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
cell=dec_cell,
decoder_fn=decoder_fn,
scope=decoding_scope,
name='inference_decoder_rnn'
)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
# Build one fresh LSTM cell per layer; reusing a single cell object shares weights across layers.
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=dec_scope)
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
maximum_length = sequence_length
with tf.variable_scope('training') as dec_scope:
train_logits = decoding_layer_train(
encoder_state,
dec_cell,
dec_embed_input,
sequence_length,
dec_scope,
output_fn,
keep_prob
)
dec_scope.reuse_variables()
inf_logits = decoding_layer_infer(
encoder_state,
dec_cell,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size,
dec_scope,
output_fn,
keep_prob
)
return train_logits, inf_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
# TODO: Implement Function
enc_input = tf.contrib.layers.embed_sequence(
input_data,
source_vocab_size,
enc_embedding_size
)
enc_layer = encoding_layer(
enc_input,
rnn_size,
num_layers,
keep_prob
)
dec_input = process_decoding_input(
target_data,
target_vocab_to_int,
batch_size
)
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], minval=0))
target_embed = tf.nn.embedding_lookup(dec_embed, dec_input)
train_logits, inf_logits = decoding_layer(
target_embed,
dec_embed,
enc_layer,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, inf_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 12
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
%matplotlib inline
import matplotlib.pyplot as plt
demo_acc_data = [0.01, 0.05, 0.06, 0.09, 0.11, 0.12, 0.19, 0.25, 0.33, 0.54, .33]
demo_epochs = 3
#eps, rews = np.array(train_acc).T
smoothed_rews = running_mean(demo_acc_data, demo_epochs)
plt.plot(smoothed_rews, demo_acc_data[-len(smoothed_rews):])
#plt.plot(demo_epochs, train_acc_data[-1:], color='grey', alpha=0.3)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
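The cumulative-sum trick in running_mean returns len(x) - N + 1 window averages; a self-contained check:

```python
import numpy as np

def running_mean_demo(x, N):
    # Same cumulative-sum trick as running_mean above.
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / N

out = running_mean_demo([1, 2, 3, 4, 5], 2)
print(out)  # [1.5 2.5 3.5 4.5]
```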
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
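The padding-and-compare logic can be spot-checked on toy arrays; a self-contained sketch, redefined here so it can run outside the training cell:

```python
import numpy as np

def accuracy_demo(target, logits):
    # Pad the shorter batch along the time axis, then compare
    # targets against the argmax of the logits.
    max_seq = max(target.shape[1], logits.shape[1])
    if max_seq - target.shape[1]:
        target = np.pad(target, [(0, 0), (0, max_seq - target.shape[1])], 'constant')
    if max_seq - logits.shape[1]:
        logits = np.pad(logits, [(0, 0), (0, max_seq - logits.shape[1]), (0, 0)], 'constant')
    return np.mean(np.equal(target, np.argmax(logits, 2)))

target = np.array([[1]])                       # shorter than the logits: gets zero-padded
logits = np.array([[[0.1, 0.9], [0.8, 0.2]]])  # argmax along axis 2 -> [[1, 0]]
acc = accuracy_demo(target, logits)
print(acc)  # 1.0
```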
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
train_acc_data = []
valid_acc_data = []
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
train_acc_data.append(train_acc)
valid_acc_data.append(valid_acc)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
#print(train_acc_data)
smoothed_train = running_mean(train_acc_data, epochs)
smoothed_valid = running_mean(valid_acc_data, epochs)
plt.plot(smoothed_valid, valid_acc_data[-len(smoothed_valid):])
plt.plot(smoothed_train, train_acc_data[-len(smoothed_train):], color="grey", alpha=0.3)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
#lowered = sentence.lower()
word_ids = []
for word in sentence.lower().split():
try:
word_ids.append(vocab_to_int[word])
except KeyError:
word_ids.append(vocab_to_int['<UNK>'])
return word_ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
"""
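A quick check with a toy vocabulary (hypothetical ids; the real source_vocab_to_int comes from helper.load_preprocess()):

```python
# Toy vocabulary with made-up ids.
toy_vocab = {'<UNK>': 2, 'he': 3, 'saw': 4, 'a': 5, 'truck': 6}

def toy_sentence_to_seq(sentence, vocab_to_int):
    # Same behaviour as sentence_to_seq: lowercase, map words, fall back to <UNK>.
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

ids = toy_sentence_to_seq('He saw a YELLOW truck', toy_vocab)
print(ids)  # [3, 4, 5, 2, 6] -- 'yellow' is out of vocabulary, so it maps to <UNK>
```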
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
bpgc-cte/python2017 | Week 3/Lecture_5_Introdution_to_Functions.ipynb | mit | #Example_1: return keyword
def straight_line(slope,intercept,x):
"Computes straight line y value"
y = slope*x + intercept
return y
print("y =",straight_line(1,0,5)) #Actual Parameters
print("y =",straight_line(0,3,10))
#By default, arguments have a positional behaviour
#Each of the parameters here is called a formal parameter
#Example_2
def straight_line(slope,intercept,x):
y = slope*x + intercept
print(y)
straight_line(1,0,5)
straight_line(0,3,10)
#By default, arguments have a positional behaviour
#Functions can have no inputs or return.
"""
Explanation: Introduction to Programming : Lecture 5
Agenda for the class
Introduction to functions
Practice Questions
Functions in Python
Syntax
def function_name(input_1,input_2,...):
'''
Process input to get output
'''
return [output1,output2,..]
End of explanation
"""
straight_line(x=2,intercept=7,slope=3)
"""
Explanation: Question: Is it necessary to know the order of parameters to send values to a function?
End of explanation
"""
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case1(list1):
list1[1]=1
print(list1)
case1(list_zeroes)
print(list_zeroes)
#Passing variables to a function
list_zeroes=[0 for x in range(0,5)]
print(list_zeroes)
def case2(list1):
list1=[2,3,4,5,6]
print(list1)
case2(list_zeroes)
print(list_zeroes)
"""
Explanation: Passing values to functions
End of explanation
"""
def calculator(num1,num2,operator='+'):
if (operator == '+'):
result = num1 + num2
elif(operator == '-'):
result = num1 - num2
return result
n1=int(input("Enter value 1: "))
n2=int(input("Enter value 2: "))
v_1 = calculator(n1,n2)
print(v_1)
v_2 = calculator(n1,n2,'-')
print(v_2)
# Here, the function main is termed as the caller function, and the function
# calculator is termed as the called function
# The operator parameter here is called a keyword-argument
"""
Explanation: Conclusion:
If the input is a mutable datatype and we make changes to it, then the changes are reflected back on the original variable. (Case 1)
If the input is a mutable datatype and we assign a new value to it, then the changes are not reflected back on the original variable. (Case 2)
Default Parameters
End of explanation
"""
def f(a, L=[]):
L.append(a)
return L
print(f(1))
print(f(2))
print(f(3))
# Caution! The list L was initialized only once.
# The parameter initialization to the default value happens at function definition, not at function call.
"""
Explanation: Initialization of variables within function definition
End of explanation
"""
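The standard way around this pitfall is a None sentinel, so a fresh list is created on every call:

```python
def f_fixed(a, L=None):
    # None is evaluated at call time, so each call gets its own list.
    if L is None:
        L = []
    L.append(a)
    return L

print(f_fixed(1))  # [1]
print(f_fixed(2))  # [2] -- no state carried over between calls
```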
def sum(*values):
s = 0
for v in values:
s = s + v
return s
s = sum(1, 2, 3, 4, 5)
print(s)
def get_a(**values):
return values['a']
s = get_a(a=1, b=2) # returns 1
print(s)
def sum(*values, **options):
s = 0
for i in values:
s = s + i
if "neg" in options:
if options["neg"]:
s = -s
return s
s = sum(1, 2, 3, 4, 5) # returns 15
print(s)
s = sum(1, 2, 3, 4, 5, neg=True) # returns -15
print(s)
s = sum(1, 2, 3, 4, 5, neg=False) # returns 15
print(s)
"""
Explanation: * operator
1. Unpacks a list or tuple into positional arguments
** operator
2. Unpacks a dictionary into keyword arguments
Types of parameters
Formal parameters (Done above, repeat)
Keyword Arguments (Done above, repeat)
*variable_name : interprets the arguments as a tuple
**variable_name : interprets the arguments as a dictionary
End of explanation
"""
|
cirla/tulipy | README.ipynb | lgpl-3.0 | import numpy as np
import tulipy as ti
ti.TI_VERSION
DATA = np.array([81.59, 81.06, 82.87, 83, 83.61,
83.15, 82.84, 83.99, 84.55, 84.36,
85.53, 86.54, 86.89, 87.77, 87.29])
"""
Explanation: tulipy
Python bindings for Tulip Indicators
Tulipy requires numpy as all inputs and outputs are numpy arrays (dtype=np.float64).
Installation
You can install via pip install tulipy.
If a wheel is not available for your system, you will need to pip install Cython numpy to build from the source distribution.
Usage
End of explanation
"""
def print_info(indicator):
print("Type:", indicator.type)
print("Full Name:", indicator.full_name)
print("Inputs:", indicator.inputs)
print("Options:", indicator.options)
print("Outputs:", indicator.outputs)
print_info(ti.sqrt)
"""
Explanation: Information about indicators is exposed as properties:
End of explanation
"""
ti.sqrt(DATA)
print_info(ti.sma)
ti.sma(DATA, period=5)
"""
Explanation: Single outputs are returned directly. Indicators returning multiple outputs use
a tuple in the order indicated by the outputs property.
End of explanation
"""
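For intuition, the simple moving average that ti.sma computes can be sketched in plain NumPy (an illustration only, not tulipy's implementation, which lives in the underlying C library):

```python
import numpy as np

def sma_demo(data, period):
    # Output is shorter than the input by period - 1 samples,
    # matching how Tulip indicators trim the warm-up region.
    return np.convolve(data, np.ones(period) / period, mode='valid')

out = sma_demo(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 2)
print(out)  # [1.5 2.5 3.5 4.5]
```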
try:
ti.sma(DATA, period=-5)
except ti.InvalidOptionError:
print("Invalid Option!")
print_info(ti.bbands)
ti.bbands(DATA, period=5, stddev=2)
"""
Explanation: Invalid options will throw an InvalidOptionError:
End of explanation
"""
DATA2 = np.array([83.15, 82.84, 83.99, 84.55, 84.36])
# 'high' trimmed to DATA[-5:] == array([ 85.53, 86.54, 86.89, 87.77, 87.29])
ti.aroonosc(high=DATA, low=DATA2, period=2)
"""
Explanation: If inputs of differing sizes are provided, they are right-aligned and trimmed from the left:
End of explanation
"""
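The alignment rule itself is simple to sketch (hypothetical helper, not a tulipy function):

```python
import numpy as np

def right_align(a, b):
    # Keep the trailing min(len(a), len(b)) elements of each array,
    # so both inputs end on the same bar.
    n = min(len(a), len(b))
    return a[-n:], b[-n:]

a = np.arange(10.0)  # 10 bars
b = np.arange(4.0)   # 4 bars
a2, b2 = right_align(a, b)
print(a2)  # [6. 7. 8. 9.] -- trimmed from the left
```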
|
hunterowens/machine-learning-in-edu | EduMLvsRuleBased.ipynb | cc0-1.0 | ## Imports
import pandas as pd
import seaborn as sns
sns.set(color_codes=True)
import matplotlib.pyplot as plt
"""
Explanation: Comparison of Machine Learning Methods vs. Rule-Based
Traditionally, educational institutions use rule-based models to generate risk scores, which then inform resource allocation. For example, Hiller et al., 1999.
Instead, we'll build a simple model using basic ML techniques and demonstrate why the risk scores it generates are better.
End of explanation
"""
# Gen Data
%run sim.py
stud_df.gpa = pd.to_numeric(stud_df.gpa)
stud_df.honors = pd.to_numeric(stud_df.honors)
stud_df.psat = pd.to_numeric(stud_df.psat)
"""
Explanation: Setup
First, we need to generate simulated data and read it into a data frame
End of explanation
"""
avg_gpas = stud_df.groupby('college').gpa.mean()
def isUndermatched(student):
if student.gpa >= (avg_gpas[student.college] + .50):
return True
else:
return False
stud_df['undermatch_status'] = stud_df.apply(isUndermatched, axis =1 )
#stud_df.groupby('race').undermatch_status.value_counts()
"""
Explanation: Determine whether the student undermatched or was properly matched
End of explanation
"""
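The rule in isUndermatched can be restated standalone (toy average GPAs, made up for illustration):

```python
from collections import namedtuple

toy_avg_gpas = {'State U': 3.0, 'Elite U': 3.8}  # hypothetical college averages
Student = namedtuple('Student', ['gpa', 'college'])

def is_undermatched(student, avg_gpas):
    # A student is flagged as undermatched when their GPA is at least
    # half a point above the average at the college they attend.
    return student.gpa >= avg_gpas[student.college] + 0.50

flag1 = is_undermatched(Student(gpa=3.6, college='State U'), toy_avg_gpas)
flag2 = is_undermatched(Student(gpa=3.6, college='Elite U'), toy_avg_gpas)
print(flag1, flag2)  # True False
```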
msk = np.random.rand(len(stud_df)) < 0.8
train = stud_df[msk]
test = stud_df[~msk]
print("Training Set Length: ", len(train))
print("Testing Set Length: ", len(test))
"""
Explanation: Rule Based Model
Simple GPA and PSAT rule
End of explanation
"""
stud_df.psat.hist()
def rule_based_model(student_r):
"""returns a risk score for each student passed"""
risk_score = 0
if student_r.race == 'aa':
risk_score += 1
if student_r.race == 'latino':
risk_score += .5
if student_r.psat >= 170 and student_r.honors <= 3:
risk_score += 1
return risk_score
test['risk_score'] = test.apply(rule_based_model, axis = 1)
"""
Explanation: The Rules
* We have 3 observed variables: GPA, PSAT, race
* Predict a risk score based on those observed variables.
* Rules based on Hoxby et al., 2013
End of explanation
"""
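The scoring rules above can be spot-checked on a couple of synthetic students:

```python
from collections import namedtuple

Student = namedtuple('Student', ['race', 'psat', 'honors'])

def rule_score(s):
    # Same scoring as rule_based_model above.
    score = 0
    if s.race == 'aa':
        score += 1
    if s.race == 'latino':
        score += .5
    if s.psat >= 170 and s.honors <= 3:
        score += 1
    return score

print(rule_score(Student('aa', 180, 2)))     # 2 -- both rules fire
print(rule_score(Student('white', 150, 5)))  # 0 -- neither fires
```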
from sklearn import linear_model
feature_cols = ['psat', 'gpa', 'honors']
X = train[feature_cols]
y = train['undermatch_status']
# instantiate, fit
lm = linear_model.LogisticRegression()
lm.fit(X, y)
# The coefficients
print('Coefficients: \n', lm.coef_)
# The mean square error
print("Mean squared error: %.2f"
% np.mean((lm.predict(test[feature_cols]) - test['undermatch_status']) ** 2))
# Explained variance score: 1 is perfect prediction
lm.predict(train[feature_cols])
sns.lmplot(x='psat', y='undermatch_status', data=test, logistic=True)
"""
Explanation: Machine Learning Model
Simple Logistic Regression
End of explanation
"""
|
albertfxwang/grizli | examples/Fit-with-Photometry.ipynb | mit | %matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import astropy.io.fits as pyfits
import drizzlepac
import grizli
from grizli.pipeline import photoz
from grizli import utils, prep, multifit, fitting
utils.set_warnings()
print('\n Grizli version: ', grizli.__version__)
# Requires eazy-py: https://github.com/gbrammer/eazy-py
import eazy
# Run in the directory where you ran the Grizli-Pipeline notebook and
# extracted spectra of two objects
root = 'j0332m2743'
os.chdir('{0}/Extractions/'.format(root))
# Fetch 3D-HST catalogs
if not os.path.exists('goodss_3dhst.v4.1.cats.tar.gz'):
os.system('wget https://archive.stsci.edu/missions/hlsp/3d-hst/RELEASE_V4.0/Photometry/GOODS-S/goodss_3dhst.v4.1.cats.tar.gz')
os.system('tar xzvf goodss_3dhst.v4.1.cats.tar.gz')
# Preparation for eazy-py
eazy.symlink_eazy_inputs(path=os.path.dirname(eazy.__file__)+'/data',
path_is_env=False)
### Initialize **eazy.photoz** object
field = 'goodss'
version = 'v4.1'
params = {}
params['CATALOG_FILE'] = '{0}_3dhst.{1}.cats/Catalog/{0}_3dhst.{1}.cat'.format(field, version)
params['Z_STEP'] = 0.002
params['Z_MAX'] = 10
params['MAIN_OUTPUT_FILE'] = '{0}_3dhst.{1}.eazypy'.format(field, version)
params['PRIOR_FILTER'] = 205
# Galactic extinction
params['MW_EBV'] = {'aegis':0.0066, 'cosmos':0.0148, 'goodss':0.0069,
'uds':0.0195, 'goodsn':0.0103}[field]
params['TEMPLATES_FILE'] = 'templates/fsps_full/tweak_fsps_QSF_12_v3.param'
translate_file = '{0}_3dhst.{1}.cats/Eazy/{0}_3dhst.{1}.translate'.format(field, version)
ez = eazy.photoz.PhotoZ(param_file=None, translate_file=translate_file,
zeropoint_file=None, params=params,
load_prior=True, load_products=False)
## Grism fitting arguments created in Grizli-Pipeline
args = np.load('fit_args.npy', allow_pickle=True)[0]  # allow_pickle is needed for dict-valued .npy files on newer numpy
## First-pass redshift templates, similar to the eazy templates but
## with separate emission lines
t0 = args['t0']
#############
## Make a helper object for generating photometry in a format that grizli
## understands.
## Passing the parameters precomputes a function to quickly interpolate
## the templates through the broad-band filters. It's not required,
## but makes the fitting much faster.
##
## `zgrid` defaults to ez.zgrid, be explicit here to show you can
## change it.
phot_obj = photoz.EazyPhot(ez, grizli_templates=t0, zgrid=ez.zgrid)
### Find IDs of specific objects to extract, same ones from the notebook
import astropy.units as u
tab = utils.GTable()
tab['ra'] = [53.0657456, 53.0624459]
tab['dec'] = [-27.720518, -27.707018]
# Internal grizli catalog
gcat = utils.read_catalog('{0}_phot.fits'.format(root))
idx, dr = gcat.match_to_catalog_sky(tab)
source_ids = gcat['number'][idx]
tab['id'] = source_ids
tab['dr'] = dr.to(u.mas)
tab['dr'].format='.1f'
tab.show_in_notebook()
## Find indices in the 3D-HST photometric catalog
idx3, dr3 = ez.cat.match_to_catalog_sky(tab)
## Run the photozs just for comparison. Not needed for the grism fitting
## but the photozs and SEDs give you a check that the photometry looks
## reasonable
ez.param['VERBOSITY'] = 1.
ez.fit_parallel(idx=idx3, verbose=False)
# or could run on the whole catalog by not specifying `idx`
# Show SEDs with best-fit templates and p(z)
for ix in idx3:
ez.show_fit(ix, id_is_idx=True)
### Spline templates for dummy grism continuum fits
wspline = np.arange(4200, 2.5e4)
Rspline = 50
df_spl = len(utils.log_zgrid(zr=[wspline[0], wspline[-1]], dz=1./Rspline))
tspline = utils.bspline_templates(wspline, df=df_spl+2, log=True, clip=0.0001)
i=1 # red galaxy
id=tab['id'][i]
ix = idx3[i]
## This isn't necessary for general fitting, but
## load the grism spectrum here for demonstrating the grism/photometry scaling
beams_file = '{0}_{1:05d}.beams.fits'.format(args['group_name'], id)
mb = multifit.MultiBeam(beams_file, MW_EBV=args['MW_EBV'],
fcontam=args['fcontam'], sys_err=args['sys_err'],
group_name=args['group_name'])
"""
Explanation: Include photometry with Grizli grism fits
grizli has the ability to include constraints from broad-band photometry in the redshift fitting. Using this capability requires some of the tools provided by the eazy-py module (https://github.com/gbrammer/eazy-py).
The example below is intended to be run in the same directory where you've already run the Grizli-Pipeline notebook to process the GOODS-South ERS grism data.
Here we take photometry from the 3D-HST photometric catalog (Skelton et al.) of the GOODS-South field. The 3D-HST products are available at https://archive.stsci.edu/prepds/3d-hst/.
End of explanation
"""
# Generate the `phot` dictionary
phot, ii, dd = phot_obj.get_phot_dict(mb.ra, mb.dec)
label = "3DHST Catalog ID: {0}, dr={1:.2f}, zphot={2:.3f}"
print(label.format(ez.cat['id'][ii], dd, ez.zbest[ii]))
print('\n`phot` keys:', list(phot.keys()))
for k in phot:
print('\n'+k+':\n', phot[k])
# Initialize photometry for the MultiBeam object
mb.set_photometry(**phot)
# parametric template fit to get reasonable background
sfit = mb.template_at_z(templates=tspline, fit_background=True,
include_photometry=False)
fig = mb.oned_figure(tfit=sfit)
ax = fig.axes[0]
ax.errorbar(mb.photom_pivot/1.e4, mb.photom_flam/1.e-19,
mb.photom_eflam/1.e-19,
marker='s', color='k', alpha=0.4, linestyle='None',
label='3D-HST photometry')
ax.legend(loc='upper left', fontsize=8)
"""
Explanation: Pull out the photometry of the nearest source matched in the photometric catalog.
End of explanation
"""
## First example: no rescaling
z_phot = ez.zbest[ix]
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
t1 = args['t1']
tfit = mb.template_at_z(z=z_phot)
print('No rescaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
"""
Explanation: Scaling the spectrum to the photometry
For the grism spectra, the total flux of an object is the flux integrated over the segmentation polygon, which corresponds to some isophotal level set by the SExtractor/SEP threshold parameter. While this should often be similar to the total flux definition in the photometric catalog (e.g., FLUX_AUTO), they won't necessarily be the same and there could be small offsets in the normalization between photometry and spectra.
Below we demonstrate a quick way to scale the spectrum to the photometry for more reliable combined fits.
End of explanation
"""
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Template rescaling, simple multiplicative factor
scl = mb.scale_to_photometry(order=0)
# has funny units of polynomial coefficients times 10**power,
# see `grizli.fitting.GroupFitter.compute_scale_array`
# Scale value is the inverse, so, e.g.,
# scl.x = [8.89] means scale the grism spectrum by 10/8.89=1.12
print(scl.x)
mb.pscale = scl.x
# Redo template fit
tfit = mb.template_at_z(z=z_phot)
print('Simple scaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
"""
Explanation: An offset is clearly visible between the spectra and photometry. In the next step, we use an internal function to compute a correction factor to bring the spectrum in line with the photometry.
The scale_to_photometry function integrates the observed 1D spectrum (combining all available grisms) through the photometry filter bandpasses and derives a polynomial correction to make the two agree.
NB Because we use the observed spectrum directly, scale_to_photometry requires that at least one observed filter be entirely covered by the grism spectrum. And fitting for a higher order correction requires multiple filters across the grism bandpass (generally at least one filter per polynomial order).
End of explanation
"""
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Template rescaling, linear fit
scl = mb.scale_to_photometry(order=1)
# has funny units of polynomial coefficients times 10**power,
# see `grizli.fitting.GroupFitter.compute_scale_array`
# Scale value is the inverse, so, e.g.,
# scl.x = [8.89] means scale the grism spectrum by 10/8.89=1.12
print(scl.x)
mb.pscale = scl.x
# Redo template fit
tfit = mb.template_at_z(z=z_phot)
print('Linear scaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
"""
Explanation: The function computed a multiplicative factor 10/8.9=1.1 to apply to the spectrum, which now falls nicely on top of the photometry. Next we fit for a first-order linear scaling (normalization and slope).
End of explanation
"""
# Now run the full redshift fit script with the photometry, which will also do the scaling
order=1
fitting.run_all_parallel(id, phot=phot, verbose=False,
scale_photometry=order+1, zr=[1.5, 2.4])
"""
Explanation: A bit of an improvement. But be careful with corrections with order>0 if there is limited photometry available in bands that overlap the spectrum.
End of explanation
"""
zfit = pyfits.open('{0}_{1:05d}.full.fits'.format(root, id))
z_grism = zfit['ZFIT_STACK'].header['Z_MAP']
print('Best redshift: {0:.4f}'.format(z_grism))
# Compare PDFs
pztab = utils.GTable.gread(zfit['ZFIT_STACK'])
plt.plot(pztab['zgrid'], pztab['pdf'], label='grism+3D-HST')
plt.plot(ez.zgrid, ez.pz[ix,:], label='photo-z')
plt.semilogy()
plt.xlim(z_grism-0.05, z_grism+0.05); plt.ylim(1.e-2, 1000)
plt.xlabel(r'$z$'); plt.ylabel(r'$p(z)$')
plt.grid()
plt.legend()
"""
Explanation: Redshift PDF
Now compare the redshift PDF for the grism+photometry and photometry-only (photo-z) fits.
End of explanation
"""
import grizli.pipeline.photoz
# The catalog is automatically generated with a number of aperture sizes. A total
# correction is computed in the detection band, usually a weighted sum of all available
# WFC3/IR filters, with the correction as the ratio between the aperture flux and the
# flux over the isophotal segment region, the 'flux' column in the SEP catalog.
aper_ix = 1
total_flux_column = 'flux'
# Get external photometry from Vizier
get_external_photometry = True
# Set `object_only=True` to generate the `eazy.photoz.Photoz` object from the
# internal photometric catalog without actually running the photo-zs on the catalog
# with few bands.
int_ez = grizli.pipeline.photoz.eazy_photoz(root, force=False, object_only=True,
apply_background=True, aper_ix=aper_ix,
apply_prior=True, beta_prior=True,
get_external_photometry=get_external_photometry,
external_limits=3, external_sys_err=0.3, external_timeout=300,
sys_err=0.05, z_step=0.01, z_min=0.01, z_max=12,
total_flux=total_flux_column)
# Available apertures
for k in int_ez.cat.meta:
if k.startswith('APER'):
print('Aperture {0}: R={1:>4.1f} pix = {2:>4.2f}"'.format(k, int_ez.cat.meta[k],
int_ez.cat.meta[k]*0.06))
#
k = 'APER_{0}'.format(aper_ix)
print('\nAperture used {0}: R={1:>4.1f} pix = {2:>4.2f}"'.format(k, int_ez.cat.meta[k],
int_ez.cat.meta[k]*0.06))
# Integrate the grism templates through the filters on the redshift grid.
int_phot_obj = photoz.EazyPhot(int_ez, grizli_templates=t0, zgrid=int_ez.zgrid)
# Reset scale parameter
if hasattr(mb,'pscale'):
delattr(mb, 'pscale')
# Show the SED
int_phot, ii, dd = int_phot_obj.get_phot_dict(mb.ra, mb.dec)
mb.unset_photometry()
mb.set_photometry(**int_phot)
tfit = mb.template_at_z(z=z_phot)
print('No rescaling, chi-squared={0:.1f}'.format(tfit['chi2']))
fig = fitting.full_sed_plot(mb, tfit, zfit=None, bin=4)
fig.axes[0].set_ylim(-5,80)
# Run the grism fit with the direct image photometry
# Note that here we show that you can just pass the full photometry object
# and the script will match the nearest photometric entry to the grism object.
order=0
fitting.run_all_parallel(id, phot_obj=int_phot_obj, verbose=False,
scale_photometry=order+1, zr=[1.5, 2.4])
"""
Explanation: Grizli internal photometry
The archive query functions shown in the Grizli-Pipeline notebook can optionally query for any HST imaging that overlaps with the grism exposures. The automatic preparation script will process and align these along with the grism+direct data and produce a simple forced-aperture photometric catalog. Currently this catalog just has matched aperture photometry and does not make any effort at PSF matching.
For cases where no external photometric catalog is available, this photometry can be used to help constrain the redshift fits. The catalog and eazy-py products are automatically generated by the preparation scripts.
The grizli.pipeline.photoz scripts can also try to fetch additional photometry from published Vizier catalogs, e.g., GALEX, SDSS, PanSTARRS, WISE, CFHT-LS.
End of explanation
"""
zfit2 = pyfits.open('{0}_{1:05d}.full.fits'.format(root, id))
pztab2 = utils.GTable.gread(zfit2['ZFIT_STACK'])
plt.plot(ez.zgrid, ez.fit_chi2[ix,:] - ez.fit_chi2[ix,:].min(),
label='3D-HST photo-z')
plt.plot(pztab['zgrid'], pztab['chi2'] - pztab['chi2'].min(),
label='grism + 3D-HST photometry')
plt.plot(pztab2['zgrid'], pztab2['chi2'] - pztab2['chi2'].min(),
label='grism + direct image photometry')
plt.legend()
plt.xlabel('z'); plt.ylabel(r'$\chi^2$')
plt.xlim(1.5, 2.4); plt.ylim(-200, 4000); plt.grid()
"""
Explanation: Compare chi-squared
In this case of this bright spectrum with strong features, the grism dominates the fit even with the extensive 3D-HST catalog
End of explanation
"""
|
rsterbentz/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Optimization Exercise 1
Imports
End of explanation
"""
def hat(x,a,b):
potential = -a*x**2 + b*x**4
return potential
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
"""
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
"""
a = 5.0
b = 1.0
x = np.linspace(-3,3,61)
plt.plot(x, hat(x,a,b));
hat(x,a,b)
assert True # leave this to grade the plot
"""
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
"""
x0 = [-1.5,1.5]
a = 5.0
b = 1.0
x1 = opt.minimize(hat, x0[0], args=(a,b)).x
x2 = opt.minimize(hat, x0[1], args=(a,b)).x
print(x1,x2)
plt.plot(x, hat(x,a,b))
plt.plot(x1, hat(x1,a,b), 'ro')
plt.plot(x2, hat(x2,a,b), 'ro');
assert True # leave this for grading the plot
"""
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation
"""
|
tkphd/pycalphad | examples/EquilibriumWithOrdering.ipynb | mit | # Only needed in a Jupyter Notebook
%matplotlib inline
# Optional plot styling
import matplotlib
matplotlib.style.use('bmh')
import matplotlib.pyplot as plt
from pycalphad import equilibrium
from pycalphad import Database, Model
import pycalphad.variables as v
import numpy as np
"""
Explanation: Equilibrium Properties and Partial Ordering (Al-Fe and Al-Ni)
End of explanation
"""
db = Database('alfe_sei.TDB')
my_phases = ['LIQUID', 'B2_BCC']
eq = equilibrium(db, ['AL', 'FE', 'VA'], my_phases, {v.X('AL'): 0.25, v.T: (300, 2000, 50), v.P: 101325},
output=['heat_capacity', 'degree_of_ordering'])
print(eq)
"""
Explanation: Al-Fe (Heat Capacity and Degree of Ordering)
Here we compute equilibrium thermodynamic properties in the Al-Fe system. We know that only B2 and liquid are stable in the temperature range of interest, but we just as easily could have included all the phases in the calculation using my_phases = list(db.phases.keys()). Notice that the syntax for specifying a range is (min, max, step). We can also directly specify a list of temperatures using the list syntax, e.g., [300, 400, 500, 1400].
We explicitly indicate that we want to compute equilibrium values of the heat_capacity and degree_of_ordering properties. These are both defined in the default Model class. For a complete list, see the documentation. equilibrium will always return the Gibbs energy, chemical potentials, phase fractions and site fractions, regardless of the value of output.
End of explanation
"""
eq2 = equilibrium(db, ['AL', 'FE', 'VA'], 'B2_BCC', {v.X('AL'): (0,1,0.01), v.T: 700, v.P: 101325},
output='degree_of_ordering')
print(eq2)
"""
Explanation: We also compute degree of ordering at fixed temperature as a function of composition.
End of explanation
"""
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where B2 is stable
phase_indices = np.nonzero(eq.Phase.values == 'B2_BCC')
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq['T'].values, phase_indices[2]), eq['degree_of_ordering'].values[phase_indices])
plt.show()
"""
Explanation: Plots
Next we plot the degree of ordering versus temperature. We can see that the decrease in the degree of ordering is relatively steady and continuous. This is indicative of a second-order transition from partially ordered B2 to disordered bcc (A2).
End of explanation
"""
plt.gca().set_title('Al-Fe: Heat capacity vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Heat Capacity (J/mol-atom-K)')
# np.squeeze is used to remove all dimensions of size 1
# For a 1-D/"step" calculation, this aligns the temperature and heat capacity arrays
# In 2-D/"map" calculations, we'd have to explicitly select the composition of interest
plt.plot(eq['T'].values, np.squeeze(eq['heat_capacity'].values))
plt.show()
"""
Explanation: For the heat capacity curve shown below we notice a sharp increase in the heat capacity around 750 K. This is indicative of a magnetic phase transition and, indeed, the temperature at the peak of the curve coincides with 75% of 1043 K, the Curie temperature of pure Fe. (Pure bcc Al is paramagnetic so it has an effective Curie temperature of 0 K.)
We also observe a sharp jump in the heat capacity near 1800 K, corresponding to the melting of the bcc phase.
End of explanation
"""
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs X(AL) [T=700 K]')
plt.gca().set_xlabel('X(AL)')
plt.gca().set_ylabel('Degree of ordering')
# Select all points in the datasets where B2_BCC is stable, dropping the others
eq2_b2_bcc = eq2.where(eq2.Phase == 'B2_BCC', drop=True)
plt.plot(eq2_b2_bcc['X_AL'].values, eq2_b2_bcc['degree_of_ordering'].values.squeeze())
plt.show()
"""
Explanation: To understand more about what's happening around 700 K, we plot the degree of ordering versus composition. Note that this plot excludes all other phases except B2_BCC. We observe the presence of disordered bcc (A2) until around 13% Al or Fe, when the phase begins to order.
End of explanation
"""
db_alni = Database('NI_AL_DUPIN_2001.TDB')
phases = ['LIQUID', 'FCC_L12']
eq_alni = equilibrium(db_alni, ['AL', 'NI', 'VA'], phases, {v.X('AL'): 0.10, v.T: (300, 2500, 20), v.P: 101325},
output='degree_of_ordering')
print(eq_alni)
"""
Explanation: Al-Ni (Degree of Ordering)
End of explanation
"""
from pycalphad.plot.utils import phase_legend
phase_handles, phasemap = phase_legend(phases)
plt.gca().set_title('Al-Ni: Phase fractions vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Phase Fraction')
plt.gca().set_ylim((0,1.1))
plt.gca().set_xlim((300, 2000))
for name in phases:
phase_indices = np.nonzero(eq_alni.Phase.values == name)
plt.scatter(np.take(eq_alni['T'].values, phase_indices[2]), eq_alni.NP.values[phase_indices], color=phasemap[name])
plt.gca().legend(phase_handles, phases, loc='lower right')
"""
Explanation: Plots
In the plot below we observe two phases designated FCC_L12. This is indicative of a miscibility gap. The ordered gamma-prime phase steadily decreases in amount with increasing temperature until it completely disappears around 750 K, leaving only the disordered gamma phase.
End of explanation
"""
plt.gca().set_title('Al-Ni: Degree of fcc ordering vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where FCC_L12 is stable and ordered
L12_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values > 0.01)))
# Generate a list of all indices where FCC_L12 is stable and disordered
fcc_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values <= 0.01)))
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq_alni['T'].values, L12_phase_indices[2]), eq_alni['degree_of_ordering'].values[L12_phase_indices],
         label=r'$\gamma\prime$ (ordered fcc)', color='red')
plt.plot(np.take(eq_alni['T'].values, fcc_phase_indices[2]), eq_alni['degree_of_ordering'].values[fcc_phase_indices],
         label=r'$\gamma$ (disordered fcc)', color='blue')
plt.legend()
plt.show()
"""
Explanation: In the plot below we see that the degree of ordering does not change at all in each phase. There is a very abrupt disappearance of the completely ordered gamma-prime phase, leaving the completely disordered gamma phase. This is a first-order phase transition.
End of explanation
"""
|
nholtz/structural-analysis | matrix-methods/frame2d/AA-3203-2019-hw6.ipynb | cc0-1.0 | from Frame2D import Frame2D
theframe = Frame2D('3203/2019/hw-6')
"""
Explanation: CIVE 3203 2019 HW-6
Compare the results here with those given be the slope-deflection method
in the solution of HW-6, CIVE3203, Fall 2019.
End of explanation
"""
%%Table nodes
NODEID,X,Y,Z
a,0,5000
b,8000,5000
c,14000,5000
d,8000,0
"""
Explanation: Input Data
Nodes
Table nodes (file nodes.csv) provides the $x$-$y$ coordinates of each node. Other columns, such
as the $z$- coordinate are optional, and ignored if given.
End of explanation
"""
%%Table supports
NODEID,C0,C1,C2
a,FX,FY,MZ
c,FX,FY,MZ
d,FX,FY,MZ
"""
Explanation: Supports
Table supports (file supports.csv) specifies the support fixity, by indicating the constrained
direction for each node. There can be 1, 2 or 3 constraints, selected from the set 'FX', 'FY' or 'MZ',
in any order for each constrained node. Directions not mentioned are 'free' or unconstrained.
End of explanation
"""
%%Table members
MEMBERID,NODEJ,NODEK
ab,a,b
bc,b,c
bd,b,d
"""
Explanation: Members
Table members (file members.csv) specifies the member incidences. For each member, specify
the id of the nodes at the 'j-' and 'k-' ends. These ends are used to interpret the signs of various values.
End of explanation
"""
%%Table releases
MEMBERID,RELEASE
"""
Explanation: Releases
Table releases (file releases.csv) is optional and specifies internal force releases in some members.
Currently only moment releases at the 'j-' end ('MZJ') and 'k-' end ('MZK') are supported. These specify
that the internal bending moment at those locations are zero. You can only specify one release per line,
but you can have more than one line for a member.
End of explanation
"""
%%Table properties
MEMBERID,SIZE,IX,A
ab,,240E6,7000
bc,,160E6,
bd,,80E6
"""
Explanation: Properties
Table properties (file properties.csv) specifies the member properties for each member.
If the 'SST' library is available, you may specify the size of the member by using the
designation of a shape in the CISC Structural Section Tables. If either IX or A is missing,
it is retreived using the sst library. If the values on any line are missing, they
are copied from the line above.
Note: a value of $A=7000$ is reasonable for steel W shapes of the range of $I$ values we have given.
If we give $A$ values 1000 times this, we get results that match the slope-deflection method very
closely (that effectively causes axial effects to be ignored).
End of explanation
"""
%%Table node_loads
LOAD,NODEID,DIRN,F
"""
Explanation: Node Loads
Table node_loads (file node_loads.csv) specifies the forces applied directly to the nodes.
DIRN (direction) may be one of 'FX,FY,MZ'. 'LOAD' is an identifier of the kind of load
being applied and F is the value of the load, normally given as a service or specified load.
A later input table will specify load combinations and factors.
End of explanation
"""
%%Table support_displacements
LOAD,NODEID,DIRN,DELTA
"""
Explanation: Support Displacements
Table support_displacements (file support_displacements.csv) is optional and specifies imposed displacements
of the supports. DIRN (direction) is one of 'DX, DY, RZ'. LOAD is as for Node Loads, above.
Of course, in this example the frame is statically determinate and so the support displacement
will have no effect on the reactions or member end forces.
End of explanation
"""
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
live,ab,UDL,-24
live,bc,PL,-64000,,4000
"""
Explanation: Member Loads
Table member_loads (file member_loads.csv) specifies loads acting on members. Current
types are PL (concentrated transverse, ie point load), CM (concentrated moment), UDL (uniformly
distributed load over entire span), LVL (linearly varying load over a portion of the span) and PLA (point load applied parallel to member coincident with centroidal axis). Values W1 and W2 are loads or
load intensities and A, B, and C are dimensions appropriate to the kind of load.
End of explanation
"""
%%Table load_combinations
CASE,LOAD,FACTOR
one,live,1.0
"""
Explanation: Load Combinations
Table load_combinations (file load_combinations.csv) is optional and specifies
factored combinations of loads. By default, there is always a load combination
called all that includes all loads with a factor of 1.0. A frame solution (see below)
indicates which CASE to use.
End of explanation
"""
theframe.input_all()
theframe.print_input()
RS = theframe.solve('one')
theframe.print_results(rs=RS)
"""
Explanation: Solution
The following outputs all tables, prints a description of the input data,
produces a solution for load case 'one' (all load and case names are case-insensitive)
and finally prints the results.
End of explanation
"""
|
Featuretools/featuretools | docs/source/getting_started/woodwork_types.ipynb | bsd-3-clause | import featuretools as ft
ft.list_logical_types()
"""
Explanation: Woodwork Typing in Featuretools
Featuretools relies on having consistent typing across the creation of EntitySets, Primitives, Features, and feature matrices. Previously, Featuretools used its own type system that contained objects called Variables. Now and moving forward, Featuretools will use an external data typing library for its typing: Woodwork.
Understanding the Woodwork types that exist and how Featuretools uses Woodwork's type system will allow users to:
- build EntitySets that best represent their data
- understand the possible input and return types for Featuretools' Primitives
- understand what features will get generated from a given set of data and primitives.
Read the Understanding Woodwork Logical Types and Semantic Tags guide for an in-depth walkthrough of the available Woodwork types that are outlined below.
For users that are familiar with the old Variable objects, the Transitioning to Featuretools Version 1.0 guide will be useful for converting Variable types to Woodwork types.
Physical Types
Physical types define how the data in a Woodwork DataFrame is stored on disk or in memory. You might also see the physical type for a column referred to as the column’s dtype.
Knowing a Woodwork DataFrame's physical types is important because Pandas, Dask, and Spark rely on these types when performing DataFrame operations. Each Woodwork LogicalType class has a single physical type associated with it.
Logical Types
Logical types add additional information about how data should be interpreted or parsed beyond what can be contained in a physical type. In fact, multiple logical types have the same physical type, each imparting a different meaning that's not contained in the physical type alone.
In Featuretools, a column's logical type informs how data is read into an EntitySet and how it gets used down the line in Deep Feature Synthesis.
Woodwork provides many different logical types, which can be seen with the list_logical_types function.
End of explanation
"""
ft.list_semantic_tags()
"""
Explanation: Featuretools will perform type inference to assign logical types to the data in EntitySets if none are provided, but it is also possible to specify which logical types should be set for any column (provided that the data in that column is compatible with the logical type).
To learn more about how logical types are used in EntitySets, see the Creating EntitySets guide.
To learn more about setting logical types directly on a DataFrame, see the Woodwork guide on working with Logical Types.
Semantic Tags
Semantic tags provide additional information to columns about the meaning or potential uses of data. Columns can have many or no semantic tags. Some tags are added by Woodwork, some are added by Featuretools, and users can add additional tags as they see fit.
To learn more about setting semantic tags directly on a DataFrame, see the Woodwork guide on working with Semantic Tags.
Woodwork-defined Semantic Tags
Woodwork will add certain semantic tags to columns at initialization. These can be standard tags that may be associated with different sets of logical types or index tags. There are also tags that users can add to confer a suggested meaning to columns in Woodwork.
To get a list of these tags, you can use the list_semantic_tags function.
End of explanation
"""
es = ft.demo.load_retail()
es
"""
Explanation: Above we see the semantic tags that are defined within Woodwork. These tags inform how Featuretools is able to interpret data, an example of which can be seen in the Age primitive, which requires that the date_of_birth semantic tag be present on a column.
The date_of_birth tag will not get automatically added by Woodwork, so in order for Featuretools to be able to use the Age primitive, the date_of_birth tag must be manually added to any columns to which it applies.
Featuretools-defined Semantic Tags
Just like Woodwork specifies semantic tags internally, Featuretools also defines a few tags of its own that allow the full set of Features to be generated. These tags have specific meanings when they are present on a column.
'last_time_index' - added by Featuretools to the last time index column of a DataFrame. Indicates that this column has been created by Featuretools.
'foreign_key' - used to indicate that this column is the child column of a relationship, meaning that this column is related to a corresponding index column of another dataframe in the EntitySet.
Woodwork Throughout Featuretools
Now that we've described the elements that make up Woodwork's type system, let's see them in action in Featuretools.
Woodwork in EntitySets
For more information on building EntitySets using Woodwork, see the EntitySet guide.
Let's look at the Woodwork typing information as it's stored in a demo EntitySet of retail data:
End of explanation
"""
df = es['products']
df.head()
df.ww
"""
Explanation: Woodwork typing information is not stored in the EntitySet object, but rather is stored in the individual DataFrames that make up the EntitySet. To look at the Woodwork typing information, we first select a single DataFrame from the EntitySet, and then access the Woodwork information via the ww namespace:
End of explanation
"""
products_df = es['products']
product_ids_series = products_df.ww['product_id']
column_schema = product_ids_series.ww.schema
column_schema
"""
Explanation: Notice how the three columns showing this DataFrame's typing information are the three elements of typing information outlined at the beginning of this guide. To reiterate: By defining physical types, logical types, and semantic tags for each column in a DataFrame, we've defined a DataFrame's Woodwork schema, and with it, we can gain an understanding of the contents of each column.
This column-specific typing information that exists for every column in every DataFrame in an EntitySet is an integral part of Deep Feature Synthesis' ability to generate features for an EntitySet.
Woodwork in DFS
As the units of computation in Featuretools, Primitives need to be able to specify the input types that they allow as well as have a predictable return type. For an in-depth explanation of Primitives in Featuretools, see the Feature Primitives guide. Here, we'll look at how the Woodwork types come together into a ColumnSchema object to describe Primitive input and return types.
Below is a Woodwork ColumnSchema that we've obtained from the 'product_id' column in the products DataFrame in the retail EntitySet.
End of explanation
"""
order_products_df = es['order_products']
order_products_df.head()
quantity_series = order_products_df.ww['quantity']
column_schema = quantity_series.ww.schema
column_schema
"""
Explanation: This combination of logical type and semantic tag typing information is a ColumnSchema. In the case above, the ColumnSchema describes the type definition for a single column of data.
Notice that there is no physical type in a ColumnSchema. This is because a ColumnSchema is a collection of Woodwork types that doesn't have any data tied to it and therefore has no physical representation. Because a ColumnSchema object is not tied to any data, it can also be used to describe a type space into which other columns may or may not fall.
This flexibility of the ColumnSchema class allows ColumnSchema objects to be used both as type definitions for every column in an EntitySet as well as input and return type spaces for every Primitive in Featuretools.
Let's look at a different column in a different DataFrame to see how this works:
End of explanation
"""
es['order_products'].ww
"""
Explanation: The ColumnSchema above has been pulled from the 'quantity' column in the order_products DataFrame in the retail EntitySet. This is a type definition.
If we look at the Woodwork typing information for the order_products DataFrame, we can see that there are several columns that will have similar ColumnSchema type definitions. If we wanted to describe subsets of those columns, we could define several ColumnSchema type spaces.
End of explanation
"""
from woodwork.column_schema import ColumnSchema
ColumnSchema()
"""
Explanation: Below are several ColumnSchemas that all would include our quantity column, but each of them describes a different type space. These ColumnSchemas get more restrictive as we go down:
Entire DataFrame
No restrictions have been placed; any column falls into this definition. This would include the whole DataFrame.
End of explanation
"""
ColumnSchema(semantic_tags={'numeric'})
df = es['order_products'].ww.select(include='numeric')
df.ww
"""
Explanation: An example of a Primitive with this ColumnSchema as its input type is the IsNull transform primitive.
By Semantic Tag
Only columns with the numeric tag apply. This can include Double, Integer, and Age logical type columns as well. It will not include the index column which, despite containing integers, has had its standard tags replaced by the 'index' tag.
End of explanation
"""
from woodwork.logical_types import Integer
ColumnSchema(logical_type=Integer)
df = es['order_products'].ww.select(include='Integer')
df.ww
"""
Explanation: An example of a Primitive with this ColumnSchema as its input type is the Mean aggregation primitive.
By Logical Type
Only columns with logical type of Integer are included in this definition. This definition does not require the numeric tag, so an index column (which has its standard tags removed) would still apply.
End of explanation
"""
ColumnSchema(logical_type=Integer, semantic_tags={'numeric'})
df = es['order_products'].ww.select(include='numeric')
df = df.ww.select(include='Integer')
df.ww
"""
Explanation: By Logical Type and Semantic Tag
The column must have logical type Integer and have the numeric semantic tag, excluding index columns.
End of explanation
"""
|
IBMStreams/streamsx.topology | samples/python/topology/notebooks/ViewDemo/ViewDemo.ipynb | apache-2.0 | from streamsx.topology.topology import Topology
from streamsx.topology import context
from some_module import jsonRandomWalk
#from streamsx import rest
import json
import logging
# Define topology & submit
rw = jsonRandomWalk()
top = Topology("myTop")
stock_data = top.source(rw)
# The view object can be used to retrieve data remotely
view = stock_data.view()
stock_data.print()
"""
Explanation: Randomly Generate A Stock Price & View The Data
Here, we create an application which is submitted to a remote host, yet we retrieve its data remotely via views. This way, we can graph remote data inside of Jupyter without needing to run the application on the local host.
First, we create an application which generates a random stock price by using the jsonRandomWalk class. After we create the stream, we create a view object. This can later be used to retrieve the remote data.
End of explanation
"""
context.submit("DISTRIBUTED", top.graph, username = "streamsadmin", password = "passw0rd")
"""
Explanation: Submit To Remote Streams Install
Then, we submit the application to the default domain.
End of explanation
"""
from streamsx import rest
queue = view.start_data_fetch()
"""
Explanation: Begin Retrieving The Data In A Blocking Queue
Using the view object, we can call the start_data_fetch method. This kicks off a background thread which, once per second, queries the remote view REST endpoint and inserts the data into a queue. The queue is returned from start_data_fetch.
End of explanation
"""
# iter(callable, sentinel) calls queue.get() once per iteration and stops only
# when it returns a value equal to the sentinel (60 here), so in practice this
# loop runs until the kernel is interrupted
for i in iter(queue.get, 60):
    print(i)
"""
Explanation: Print Data to Screen
The queue is a blocking queue, so every time queue.get() is invoked, it will wait until there is more data on the stream. The following is one way of iterating over the queue.
End of explanation
"""
view.stop_data_fetch()
"""
Explanation: Stop Fetching The Data, Cancelling The Background Thread
To stop the background thread from fetching data, invoke the stop_data_fetch method on the view.
End of explanation
"""
%matplotlib inline
%matplotlib notebook
from streamsx import rest
rest.graph_every(view, 'val', 1.0)
"""
Explanation: Graph The Live Feed Using Matplotlib
One of Jupyter's strengths is its capacity for data visualization. Here, we can use Matplotlib to interactively update the graph when new data is (or is not) available.
End of explanation
"""
|
jarrison/trEFM-learn | Examples/.ipynb_checkpoints/demo-checkpoint.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from trEFMlearn import data_sim
%matplotlib inline
"""
Explanation: Welcome!
Let's start by assuming you have downloaded the code and run setup.py. This demonstration will show you how to predict the time constant of your trEFM data using the methods of statistical learning. We begin by importing the data simulation module from the trEFMlearn package. This package contains methods to numerically simulate some experimental data.
End of explanation
"""
tau_array = np.logspace(-8, -5, 100)
fit_object, fit_tau = data_sim.sim_fit(tau_array)
"""
Explanation: Simulation
You can create an array of time constants that you would like to simulate the data for. This array can then be input into the simulation function, which simulates the data and fits it using support vector regression. This function can take a few minutes depending on the number of time constants you provide. Run this cell and wait for the function to complete. A warning or error message may appear; don't fret, as it has no effect on the results.
End of explanation
"""
plt.figure()
plt.title('Fit Time Constant vs. Actual')
plt.plot(fit_tau, 'bo')
plt.plot(tau_array,'g')
plt.ylabel('Tau (s)')
plt.yscale('log')
plt.show()
# Calculate the error at each measurement.
error = (tau_array - fit_tau) / tau_array
plt.figure()
plt.title('Error Signal')
plt.plot(tau_array, error)
plt.ylabel('Error (%)')
plt.xlabel('Time Constant (s)')
plt.xscale('log')
plt.show()
"""
Explanation: Neato!
Looks like that function is all done. We now have an SVR object called "fit_object" as well as the result of the fit, called "fit_tau". Let's take a look at the result of the fit by comparing it to the actual input tau.
End of explanation
"""
from trEFMlearn import process_image
"""
Explanation: Clearly the SVR method is quite capable of reproducing the time constants of the simulated data using features that are very simple to calculate. We observe some lower limit to the model's ability to calculate time constants, which is quite interesting. However, this lower limit appears below 100 nanoseconds, a time scale that is seldom seen in the real world. This could be quite useful for extracting time constant data!
Analyzing a Real Image
The Data
In order to assess the ability of the model to apply to real images, I have taken a trEFM image of an MDMO photovoltaic material. There are large aggregates of acceptor material that should show a nice contrast in the way that they generate and hold charge. Each pixel of this image has been pre-averaged before being saved with this demo program. Each pixel is a measurement of the AFM cantilever position as a function of time.
The Process
Our mission is to extract the time constant out of this signal using the SVR fit of our simulated data. We accomplish this by importing and calling the "process_image" function.
End of explanation
"""
tau_img, real_sum_img, fft_sum_img, amp_diff_img = process_image.analyze_image('.\\image data\\', fit_object)
"""
Explanation: The image processing function needs two inputs. First we show the function the path to the provided image data. We then provide the function with the SVR object that was previously generated using the simulated cantilever data. Processing this image should only take 15 to 30 seconds.
End of explanation
"""
# Something went wrong in the data on the first line. Let's skip it.
tau_img = tau_img[1:]
real_sum_img = real_sum_img[1:]
fft_sum_img = fft_sum_img[1:]
amp_diff_img = amp_diff_img[1:]
plt.figure()
upper_lim = (tau_img.mean() + 2*tau_img.std())
lower_lim = (tau_img.mean() - 2*tau_img.std())
plt.imshow(tau_img,vmin=lower_lim, vmax=upper_lim,cmap = 'cubehelix')
plt.show()
"""
Explanation: Awesome. That was pretty quick huh? Without this machine learning method, the exact same image we just analyzed takes over 8 minutes to run. Yes! Now let's take a look at what we get.
End of explanation
"""
fig, axs = plt.subplots(nrows=3)
axs[0].imshow(real_sum_img ,'hot')
axs[0].set_title('Total Signal Sum')
axs[1].imshow(fft_sum_img, cmap='hot')
axs[1].set_title('Sum of the FFT Power Spectrum')
axs[2].imshow(amp_diff_img, cmap='hot')
axs[2].set_title('Difference in Amplitude After Trigger')
plt.tight_layout()
plt.show()
"""
Explanation: You can definitely begin to make out some of the structure that is occurring in the photovoltaic performance of this device. This image looks great, but there are still many areas for improvement. For example, I will need to extensively prove that this image is not purely a result of topographical cross-talk. If this image is correct, this is a significant improvement on our current imaging technique.
The Features
In the next cell we show an image of the various features that were calculated from the raw deflection signal. Some features more clearly matter than others and indicate that the search for better and more representative features is desirable. However, I think this is a great start to a project I hope to continue developing in the future.
End of explanation
"""
|
bird-house/birdy | birdy/ipyleafletwfs/examples/ipyleafletwfs_guide.ipynb | apache-2.0 | from birdy import IpyleafletWFS
from ipyleaflet import Map
url = 'http://boreas.ouranos.ca/geoserver/wfs'
version = '2.0.0'
wfs_connection = IpyleafletWFS(url, version)
demo_map = Map(center=(46.42, -64.14), zoom=8)
demo_map
"""
Explanation: How to use the WFSGeojsonLayer class
This class provides WFS layers for ipyleaflet from services that have GeoJSON output capabilities.
We first have to create the WFS connection and instantiate the map:
End of explanation
"""
wfs_connection.layer_list
"""
Explanation: We can then retrieve the available layers. We will use these to create our WFS layer.
End of explanation
"""
wfs_connection.build_layer(layer_typename='public:HydroLAKES_poly', source_map=demo_map)
"""
Explanation: Next we create our WFS layer from one of the layers listed above. It is filtered by the extent of the map, seen above. This next function is a builder and will create, add and configure the map with its two default widgets.
End of explanation
"""
wfs_connection.property_list
"""
Explanation: The layer created above will have a refresh button, which can be pressed to refresh the WFS layer.
It will also have a property widget in the lower right corner of the map, and will show the feature ID of a feature after you click on it.
It's also possible to add a new property widget. We first need to retrieve the properties of a feature. The following code returns the properties of the first feature, which should be shared by all features.
End of explanation
"""
wfs_connection.create_feature_property_widget(widget_name='Wshd_area', feature_property='Wshd_area', widget_position='bottomleft')
demo_map
"""
Explanation: We can create a new widget from any of the above properties
The widget_name parameter needs to be unique, else it will overwrite the existing one.
End of explanation
"""
wfs_connection.create_feature_property_widget(widget_name='main_widget', feature_property='Lake_area')
"""
Explanation: To replace the default property widget, the same function can be used with the 'main_widget' name.
This can be useful when there is no need for the feature ID, or on the off chance that the first property attribute does not contain the feature ID.
End of explanation
"""
gjson = wfs_connection.geojson
gjson['features'][0].keys()
gjson['totalFeatures']
"""
Explanation: The geojson data is available. The results are also filtered by what is visible on the map.
End of explanation
"""
wfs_connection.create_feature_property_widget(widget_name='main_widget')
demo_map
"""
Explanation: A search by ID for features is also available. Let's set the main widget back to its default so we can have access to feature IDs again.
End of explanation
"""
wfs_connection.feature_properties_by_id(748)
"""
Explanation: Now click on a feature and replace '748' in the cell below with a new ID number to get the full properties of that feature.
End of explanation
"""
wfs_connection.clear_property_widgets()
demo_map
### And finally, to remove the layer from the map
wfs_connection.remove_layer()
"""
Explanation: To get rid of all the property widgets:
End of explanation
"""
|
ctralie/TUMTopoTimeSeries2016 | SlidingWindow2-PersistentHomology.ipynb | apache-2.0 | # Do all of the imports and setup inline plotting
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib import gridspec
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
from scipy.interpolate import InterpolatedUnivariateSpline
import ipywidgets as widgets
from IPython.display import display
import warnings
warnings.filterwarnings('ignore')
from IPython.display import clear_output
from ripser import ripser
from persim import plot_diagrams
def getSlidingWindow(x, dim, Tau, dT):
"""
Return a sliding window of a time series,
using arbitrary sampling. Use linear interpolation
to fill in values in windows not on the original grid
Parameters
----------
x: ndarray(N)
The original time series
dim: int
Dimension of sliding window (number of lags+1)
Tau: float
Length between lags, in units of time series
dT: float
Length between windows, in units of time series
Returns
-------
X: ndarray(N, dim)
All sliding windows stacked up
"""
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT))
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim))
spl = InterpolatedUnivariateSpline(np.arange(N), x)
for i in range(NWindows):
idxx = dT*i + Tau*np.arange(dim)
start = int(np.floor(idxx[0]))
end = int(np.ceil(idxx[-1]))+2
# Only take windows that are within range
if end >= len(x):
X = X[0:i, :]
break
X[i, :] = spl(idxx)
return X
"""
Explanation: Persistent Homology of Sliding Windows
Now that we have heuristically explored the geometry of sliding window embeddings of 1D signals, we will apply tools from persistent homology to quantify the geometry. As before, we first need to import all necessary libraries and set up the code to compute sliding window embeddings.
End of explanation
"""
def on_value_change(change):
execute_computation1()
dimslider = widgets.IntSlider(min=1,max=100,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')
Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
noiseampslider = widgets.FloatSlider(min=0,max=2,step=0.1,value=0,description='Noise Amplitude',continuous_update=False)
noiseampslider.observe(on_value_change, names='value')
display(widgets.HBox(( dimslider,Tauslider, noiseampslider)))
noise = np.random.randn(10000)
fig = plt.figure(figsize=(9.5, 4))
def execute_computation1():
plt.clf()
# Step 1: Setup the signal
T = 40 # The period in number of samples
NPeriods = 4 # How many periods to go through
N = T*NPeriods # The total number of samples
t = np.linspace(0, 2*np.pi*NPeriods, N+1)[0:N] # Sampling indices in time
x = np.cos(t) # The final signal
x += noiseampslider.value * noise[:len(x)]
# Step 2: Do a sliding window embedding
dim = dimslider.value
Tau = Tauslider.value
dT = 0.5
X = getSlidingWindow(x, dim, Tau, dT)
extent = Tau*dim
# Step 3: Do Rips Filtration
PDs = ripser(X, maxdim=1)['dgms']
I = PDs[1]
# Step 4: Perform PCA down to 2D for visualization
pca = PCA(n_components = 2)
Y = pca.fit_transform(X)
eigs = pca.explained_variance_
# Step 5: Plot original signal, 2-D projection, and the persistence diagram
gs = gridspec.GridSpec(2, 2)
ax = plt.subplot(gs[0,0])
ax.plot(x)
ax.set_ylim((2*min(x), 2*max(x)))
ax.set_title("Original Signal")
ax.set_xlabel("Sample Number")
yr = np.max(x)-np.min(x)
yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
ax.plot([extent, extent], yr, 'r')
ax.plot([0, 0], yr, 'r')
ax.plot([0, extent], [yr[0]]*2, 'r')
ax.plot([0, extent], [yr[1]]*2, 'r')
ax2 = plt.subplot(gs[1,0])
plot_diagrams(PDs)
plt.title("Max Persistence = %.3g"%np.max(I[:, 1] - I[:, 0]))
ax3 = plt.subplot(gs[:,1])
ax3.scatter(Y[:, 0], Y[:, 1])
plt.axis('equal')
plt.title("2-D PCA, Eigenvalues: %.3g, %.3g "%(eigs[0],eigs[1]))
plt.tight_layout()
execute_computation1()
"""
Explanation: Single Sine: Maximum Persistence vs Window Size
First, let's examine the 1D persistent homology of the sliding window embedding of a single perfect sinusoid. Choose dim and $\tau$ to change the extent (window size) to different values, and examine how the maximum persistence changes. How does this support what you saw in the first module?
End of explanation
"""
# Step 1: Setup the signal
T1 = 10 # The period of the first sine in number of samples
T2 = T1*3 # The period of the second sine in number of samples
NPeriods = 10 # How many periods to go through, relative to the second sinusoid
N = T2*NPeriods # The total number of samples
t = np.arange(N) # Time indices
x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
x += np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
plt.figure();
plt.plot(x);
"""
Explanation: Questions
Examining the effect of window extent on maximal persistence (with no noise):
Describe the effect of dimension and $\tau$ on maximal persistence. What does increasing one of these factors while keeping the other constant to do the window extent? What does it do to the maximal persistence? Explain your observations.
Is the maximal persistence a function of the window extent? Justify your answer and explain it geometrically (you may want to refer to the PCA projection plots).
Describe the relation between the eigenvalues and the maximal persistence. (Hint: How do the eigenvalues affect roundness? How does roundness affect persistence?)
Write code to plot scatter plots of maximal persistence vs dimension for fixed $\tau$ and vs $\tau$ for fixed dimension, and maximal persistence vs. dimension for fixed extent (say, extent = 40). Comment on your results.
<br><br>
Now add some noise to your plots. Notice that the maximal persistence point on the persistence diagram is colored in red.
What do you observe regarding the persistence diagram? Explain your observations in terms of your understanding of persistence.
At what noise amplitude does the point with maximal persistence appear to get 'swallowed up' by the noise in the diagram? How does this correspond with the 2-D projection?
Note that the original signal has amplitude 1. As you increase noise, is it clear by looking at the signal that there is a periodic function underlying it? How does persistence allow detection of periodicity? Explain.
For fixed noise amplitude (say 1), increase dimension. What effect does this have on detection of periodicity using your method?
Does varying $\tau$ for the same fixed amplitude have the same effect? Explain.
Two Sines
Now let's examine the persistent homology of a signal consisting of the sum of two sinusoids. First, setup and examine the signal. We will use a slightly coarser sampling rate than we did in the first module to keep the persistent homology code running quickly.
End of explanation
"""
def on_value_change(change):
execute_computation3()
secondfreq = widgets.Dropdown(options=[ 2, 3, np.pi],value=3,description='Second Frequency:',disabled=False)
secondfreq.observe(on_value_change,names='value')
noiseampslider = widgets.FloatSlider(min=0,max=2,step=0.1,value=0,description='Noise Amplitude',continuous_update=False)
noiseampslider.observe(on_value_change, names='value')
dimslider = widgets.IntSlider(min=1,max=100,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')
Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\(\tau :\)' ,continuous_update=False)
Tauslider.observe(on_value_change, names='value')
display(widgets.HBox(( dimslider,Tauslider)))
display(widgets.HBox(( secondfreq,noiseampslider)))
noise = np.random.randn(10000)
fig = plt.figure(figsize=(9.5, 5))
def execute_computation3():
# Step 1: Setup the signal
T1 = 10 # The period of the first sine in number of samples
T2 = T1*secondfreq.value # The period of the second sine in number of samples
NPeriods = 5 # How many periods to go through, relative to the second sinusoid
N = T2*NPeriods # The total number of samples
t = np.arange(N) # Time indices
x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
x += np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
x += noiseampslider.value * noise[:len(x)]
#Step 2: Do a sliding window embedding
dim = dimslider.value
Tau = Tauslider.value
dT = 0.35
X = getSlidingWindow(x, dim, Tau, dT)
extent = Tau*dim
#Step 3: Do Rips Filtration
PDs = ripser(X, maxdim=1)['dgms']
#Step 4: Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X)
eigs = pca.explained_variance_
#Step 5: Plot original signal and the persistence diagram
gs = gridspec.GridSpec(3, 2,width_ratios=[1, 2],height_ratios=[2,2,1])
ax = plt.subplot(gs[0,1])
ax.plot(x)
ax.set_ylim((1.25*min(x), 1.25*max(x)))
ax.set_title("Original Signal")
ax.set_xlabel("Sample Number")
yr = np.max(x)-np.min(x)
yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
ax.plot([extent, extent], yr, 'r')
ax.plot([0, 0], yr, 'r')
ax.plot([0, extent], [yr[0]]*2, 'r')
ax.plot([0, extent], [yr[1]]*2, 'r')
ax2 = plt.subplot(gs[0:2,0])
plot_diagrams(PDs)
maxind = np.argpartition(PDs[1][:,1]-PDs[1][:,0], -2)[-2:]
max1 = PDs[1][maxind[1],1] - PDs[1][maxind[1],0]
max2 = PDs[1][maxind[0],1] - PDs[1][maxind[0],0]
ax2.set_title("Persistence Diagram\n Max Pers: %.3g 2nd Pers: %.3g"%(max1,max2) )
ax3 = plt.subplot(gs[2,0])
eigs = eigs[0:min(len(eigs), 10)]
ax3.bar(np.arange(len(eigs)), eigs)
ax3.set_xlabel("Eigenvalue Number")
ax3.set_ylabel("Eigenvalue")
ax3.set_title("PCA Eigenvalues")
c = plt.get_cmap('jet')
C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
C = C[:, 0:3]
ax4 = fig.add_subplot(gs[1:,1], projection = '3d')
ax4.set_title("PCA of Sliding Window Embedding")
ax4.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax4.set_aspect('equal', 'datalim')
plt.tight_layout()
execute_computation3()
"""
Explanation: Persistent Homology
Now, we will compute the persistent homology of the above signal following a sliding window embedding. Run this code. Then examine the outputs with both harmonic sinusoids and noncommensurate sinusoids, and note the difference in the persistence diagrams. Note that the two points with highest persistence are highlighted in red on the diagram.
End of explanation
"""
# Step 1: Setup the signal
T1 = 100 # The period of the first sine in number of samples
T2 = 50
NPeriods = 5 # How many periods to go through, relative to the first sinusoid
N = T1*NPeriods # The total number of samples
t = np.arange(N) # Time indices
coeff1 = 0.6
coeff2 = 0.8
g1 = coeff1*np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
g1 += coeff2*np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
g2 = coeff2*np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
g2 += coeff1*np.cos(2*np.pi*(1.0/T2)*t) # The second sinusoid
fig = plt.figure(figsize=(9.5, 4))
plot1, = plt.plot(g1,label="g1 = %.2gcos(t) + %.2gcos(2t)"%(coeff1, coeff2))
plot2, = plt.plot(g2,color='r',label="g2 = %.2gcos(t) + %.2gcos(2t)"%(coeff2, coeff1));
plt.legend(handles=[plot1,plot2])
plt.legend(bbox_to_anchor=(0., 1.02, 0.69, .102), ncol=2);
"""
Explanation: Questions
Describe a key difference in the persistence diagrams between the harmonic and non-commensurate cases. Explain this difference in terms of the 3-D projection of the PCA embedding. (Hint: consider the shape and the intrinsic dimension of the projection.)
<br><br>
Explain how the persistence diagram allows the detection of non-commensurate sinusoids.
<br><br>
Can the persistence diagram distinguish between a single sinusoid and the sum of two harmonic sinusoids?
<br><br>
Looking back at the 2-D projection of the PCA in the harmonic case from the first lab, explain why the persistence diagram might be surprising if you had only seen that projection. How does looking at the 3-D projection make the persistence diagram less of a surprise?
<h1>Field of Coefficients</h1>
<BR>
Now we will examine a surprising geometric property that is able to tell apart two signals which look quite similar. First, we generate and plot the two signals below:
$$g_1 = 0.6\cos(t) + 0.8\cos(2t)$$
$$g_2 = 0.8\cos(t) + 0.6\cos(2t)$$
End of explanation
"""
####g1
#Step 2: Do a sliding window embedding
dim = 20
Tau = 5
dT = 2
X1 = getSlidingWindow(g1, dim, Tau, dT)
#Step 3: Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X1)
eigs = pca.explained_variance_
c = plt.get_cmap('jet')
C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
C = C[:, 0:3]
#Step 4: Plot original signal and PCA of the embedding
fig = plt.figure(figsize=(9.5,6))
ax = fig.add_subplot(221)
ax.plot(g1)
ax.set_title("Original Signal")
ax.set_xlabel("Sample Index")
ax2 = fig.add_subplot(222, projection = '3d')
ax2.set_title("g1 = %.2gcos(t) + %.2gcos(2t)"%(coeff1, coeff2))
ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax2.set_aspect('equal', 'datalim')
#####g2
X2 = getSlidingWindow(g2, dim, Tau, dT)
#Perform PCA down to 2D for visualization
pca = PCA()
Y = pca.fit_transform(X2)
eigs = pca.explained_variance_
ax = fig.add_subplot(223)
ax.plot(g2)
ax.set_title("Original Signal")
ax.set_xlabel("Sample Index")
ax2 = fig.add_subplot(224, projection = '3d')
ax2.set_title("g2 = %.2gcos(t) + %.2gcos(2t)"%(coeff2, coeff1))
ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
ax2.set_aspect('equal', 'datalim')
plt.tight_layout();
"""
Explanation: Now, we will look at PCA of the sliding window embeddings of the two signals
End of explanation
"""
#Step 1: Do rips filtrations with different field coefficients
print("Computing persistence diagrams for g1...")
PDs1_2 = ripser(X1, maxdim=1, coeff=2)['dgms'] #Z2 Coefficients
PDs1_3 = ripser(X1, maxdim=1, coeff=3)['dgms'] #Z3 Coefficients
print("Computing persistence diagrams for g2...")
PDs2_2 = ripser(X2, maxdim=1, coeff=2)['dgms']
PDs2_3 = ripser(X2, maxdim=1, coeff=3)['dgms']
fig = plt.figure(figsize=(8, 6))
plt.subplot(231)
plt.plot(g1)
plt.subplot(232);
plot_diagrams(PDs1_2[1], labels=['H1'])
plt.title("$g_1$ Persistence Diagram $\mathbb{Z}/2\mathbb{Z}$")
plt.subplot(233);
plot_diagrams(PDs1_3[1], labels=['H1'])
plt.title("$g_1$ Persistence Diagram $\mathbb{Z}/3\mathbb{Z}$")
plt.subplot(234)
plt.plot(g2)
plt.subplot(235);
plot_diagrams(PDs2_2[1], labels=['H1'])
plt.title("$g_2$ Persistence Diagram $\mathbb{Z}/2\mathbb{Z}$")
plt.subplot(236);
plot_diagrams(PDs2_3[1])
plt.title("$g_2$ Persistence Diagram $\mathbb{Z}/3\mathbb{Z}$")
plt.tight_layout();
"""
Explanation: Notice how one looks more "twisted" than the other. To finish this off, let's compute the persistent homology of each embedding with different field coefficients
End of explanation
"""
|
machinelearningdeveloper/lc101-kc | November 28, 2016/Covered in class 11-28-2016.ipynb | unlicense | """Assignment between variables creates aliases."""
animal = "giraffe"
creature = animal
print("Is creature an alias of animal?", creature is animal)
"""Assignment of the same value to different variables does not necessarily create aliases."""
weather_next_5_days = ["Sunny", "Partly sunny", "Cloudy", "Sunny", "Sunny"]
weather_subsequent_5_days = ["Sunny", "Partly sunny", "Cloudy", "Sunny", "Sunny"]
if weather_subsequent_5_days is weather_next_5_days:
is_weather_subsequent_alias_of_weather_next = "Yes."
else:
is_weather_subsequent_alias_of_weather_next = "No."
if weather_subsequent_5_days == weather_next_5_days:
same_forecast = "Yes."
else:
same_forecast = "No."
print("Is weather_subsequent_5_days an alias of weather_next_5_days?",
is_weather_subsequent_alias_of_weather_next, "\n")
print("Is the forecast for the next 5 days the same as the forecast for the subsequent 5 days?",
same_forecast, "\n")
print("id(weather_next_5_days): ", id(weather_next_5_days))
print("id(weather_subsequent_5_days):", id(weather_subsequent_5_days))
"""
Explanation: Glossary
Taken from https://runestone.launchcode.org/runestone/static/thinkcspy/ListsContinued/Glossary.html
aliases
Multiple variables that contain references to the same object.
End of explanation
"""
"""Clone a list to get a new object with the same values."""
lst = [1, 2, 3]
alias = lst # create an alias to lst
clone = lst[:] # clone lst
print("Is alias an alias of lst?", alias is lst)
print("Is clone an alias of lst?", clone is lst)
print("Do lst, alias, and clone all have the same values?", lst == alias == clone)
"""
Explanation: clone
To create a new object that has the same value as an existing object. Copying a reference to an object creates an alias but doesn’t clone the object.
End of explanation
"""
"""List slicing DOES NOT deeply copy nested objects."""
import time
delay = 2
hundreds = [100, 200, 300]
numbers = [1, hundreds] # hundreds is nested
shallow_clone = numbers[:]
print("hundreds:", hundreds)
print("numbers:", numbers)
print("shallow_clone:", shallow_clone, "\n")
time.sleep(delay)
# verify that shallow_clone[0] == numbers[0]
print("shallow_clone[0] == numbers[0]:", shallow_clone[0] == numbers[0], "\n")
time.sleep(delay)
# modify the first element in numbers from 1 -> 10
numbers[0] = 10
print("numbers[0] = 10\n")
time.sleep(delay)
# test whether shallow_clone[0] also was modified
print("Was shallow_clone[0] also modified?",
shallow_clone[0] == numbers[0], "\n")
time.sleep(delay)
# change the first element in the list of hundreds from 100 -> 500
numbers[1][0] = 500
print("numbers[1][0] = 500\n")
time.sleep(delay)
# test whether the list of hundreds in shallow_clone also has been modified
print("Was shallow_clone[1][0] also modified?",
numbers[1][0] == shallow_clone[1][0], "\n")
time.sleep(delay)
# look at all of the variables
print("hundreds:", hundreds)
print("numbers:", numbers)
print("shallow_clone:", shallow_clone)
"""
Explanation: <h4 style="color: #ff0000">Warning: list slicing will create a shallow copy ("clone") of nested lists.</h4>
End of explanation
"""
"""By default, strings are split on whitespace."""
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
words = quote.split()
print(words)
"""Specify the delimiter to change how strings are split."""
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
delimiter = ';'
phrases = quote.split(delimiter)
print(phrases)
"""A delimiter can also be specified to join."""
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
delimiter = '-'
parts = quote.split(delimiter)
print(parts, "\n")
improved_quote = '\nby '.join(parts)
print(improved_quote)
"""
Explanation: delimiter
A character or string used to indicate where a string should be split.
End of explanation
"""
"""Access an element of a list by using square brackets with an index."""
lst = [1, 2, 3]
print("The third element of lst:", lst[2])
"""
Explanation: element
One of the values in a list (or other sequence). The bracket operator selects elements of a list.
End of explanation
"""
"""By varying the index, you can access different elements of a list."""
lst = [1, 2, 3]
i = 0 # i holds the index value
print("The element of lst at index 0 is", lst[i])
i = 1
print("The element of lst at index 1 is", lst[i])
"""
Explanation: index
An integer variable or value that indicates an element of a list.
End of explanation
"""
"""Create a list from a list literal."""
lst = [1, 2, 3]
print(lst)
"""Create a list using list()."""
lst = list(range(1, 4))
print(lst, type(lst))
"""
Explanation: list
A collection of objects, where each object is identified by an index. Like other types str, int, float, etc. there is also a list type-converter function that tries to turn its argument into a list.
End of explanation
"""
"""Create a list from a list comprehension."""
lst = [i for i in range(1, 11) if i <= 3]
print(lst)
"""
Explanation: List comprehensions
List comprehensions take the form:
[expression for variable in sequence if condition]
End of explanation
"""
"""Create a list from a list comprehension without an if clause."""
lst = [i for i in range(1, 11)]
print(lst)
"""
Explanation: In the above example the general pattern:
[expression for variable in sequence if condition]
Becomes:
[i for i in range(1, 11) if i <= 3]
You can omit the condition when you want to take all of the values from the sequence.
A list comprehension without the optional if clause takes the form:
[expression for variable in sequence]
End of explanation
"""
"""Create a list by accumulating values."""
lst = []
for i in range(1, 10):
if i <= 3:
lst.append(i)
print(lst)
"""Create a list by accumulating values without if condition."""
lst = []
for i in range(1, 11):
lst.append(i)
print(lst)
"""
Explanation: Notice especially the "punctuation" separating the various parts of the comprehension:
[... for ... in ... if ...]
Without the if clause:
[... for ... in ...]
These patterns are functionally equivalent to the more familiar list accumulation pattern.
End of explanation
"""
"""List traversal using a for loop."""
lst = [1, 2, 3]
for i in lst:
print(i)
"""List traversal using a for loop and an index."""
lst = [1, 2, 3]
for i in range(len(lst)):
print(lst[i])
"""
Explanation: You can translate many for loops into list comprehensions:
<pre>
lst = []
for i in range(1, 10): becomes lst = [i for i in range(1, 10) if i <= 3]
if i <= 3:
lst.append(i)
</pre>
<pre>
lst = []
for i in range(1, 10): becomes lst = [i for i in range(1, 10)]
lst.append(i)
</pre>
<pre>
doubled = []
for i in range(1, 10): becomes doubled = [i * 2 for i in range(1, 10)]
doubled.append(i * 2)
</pre>
Generally:
<pre>
lst = []
for variable in sequence: becomes lst = [expression for variable in sequence if condition]
if condition:
lst.append(expression)
</pre>
list traversal
The sequential accessing of each element in a list.
End of explanation
"""
import time
delay = 3
def title_modifier(lst):
"""This function will change the list passed to it."""
for i in range(len(lst)):
lst[i] = ' '.join([word[0].upper() + word[1:].lower() for word in lst[i].split()])
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places before being modified:", places, "\n")
time.sleep(delay)
print("Calling title_modifier(places)...", "\n")
return_value = title_modifier(places)
time.sleep(delay)
print("The return value of title_modifier(places) is", return_value, "\n")
time.sleep(delay)
print("Places after being modified:", places, "\n")
import time
delay = 3
def title_modifier(lst):
"""This function will change the list passed to it."""
for i in range(len(lst)):
place = lst[i]
words = place.split()
title_cased_words = []
for word in words:
first_letter = word[0]
rest_of_word = word[1:]
title_cased_words.append(first_letter.upper() + rest_of_word.lower())
lst[i] = ' '.join(title_cased_words)
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places before being modified:", places, "\n")
time.sleep(delay)
print("Calling title_modifier(places)...", "\n")
return_value = title_modifier(places)
time.sleep(delay)
print("The return value of title_modifier(places) is", return_value, "\n")
time.sleep(delay)
print("Places after being modified:", places, "\n")
"""
Explanation: modifier
A function which changes its arguments inside the function body. Only mutable types can be changed by modifiers.
End of explanation
"""
"""Lists are mutable."""
cities = ["Albuquerque", "Chicago", "Paris"]
print("Cities before changing cities[1]:", cities)
cities[1] = "Tokyo"
print("Cities after changing cities[1]:", cities)
"""Strings are not mutable"""
cities = "Albuquerque Chicago Paris"
print("Cities as a string:", cities)
cities[1] = "Tokyo"
print("This will never print because it is an error to try to change part of a string")
"""
Explanation: mutable data type
A data type in which the elements can be modified. All mutable types are compound types. Lists are mutable data types; strings are not.
End of explanation
"""
"""Lists of lists (nested lists).
The format of hourly_forecasts is a list of days:
hourly_forecasts = [day1, day2, ..., dayN]
Each day has a name and a list of conditions associated with that day for the hours of 1-3 pm:
day = [name, [conditions_at_1pm, conditions_at_2pm, conditions_at_3pm]]
Therefore, hourly_forecasts is, in part, a list of lists of lists.
"""
hourly_forecasts = [["Monday", ["Sunny", "Sunny", "Partly Cloudy"]],
["Tuesday", ["Cloudy", "Cloudy", "Cloudy"]],
["Wednesday", ["Partly Cloudy", "Sunny", "Sunny"]]]
for day in hourly_forecasts:
name = day[0]
hourly_conditions = day[1]
i = 1
for conditions in hourly_conditions:
print("The conditions at " + str(i) + "pm on " + name + " are forecast to be " + conditions)
i += 1
"""
Explanation: nested list
A list that is an element of another list.
End of explanation
"""
"""Direct iteration."""
months = ["January", "February", "March",
"April", "May", "June", "July",
"August", "September", "October",
"November", "December"]
for month in months:
print(month)
"""
Explanation: object
A thing to which a variable can refer.
In an assignment such as:
python
lst = [1, 2, 3]
lst is the variable and [1, 2, 3] is the object to which lst refers.
pattern
A sequence of statements, or a style of coding something that has general applicability in a number of different situations. Part of becoming a mature Computer Scientist is to learn and establish the patterns and algorithms that form your toolkit. Patterns often correspond to your “mental chunking”.
End of explanation
"""
"""Indexed iteration."""
months = ["January", "February", "March",
"April", "May", "June", "July",
"August", "September", "October",
"November", "December"]
for i in range(len(months)):
print(months[i])
"""
Explanation: Boiled down to its essential characteristics, direct iteration takes the form:
for element in sequence:
expression(element)
Where expression typically uses the value in element, for example, to do arithmetic, print, etc.
End of explanation
"""
"""List accumulation."""
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
squares = []
for i in numbers:
squares.append(i ** 2)
print(squares)
"""
Explanation: Indexed iteration often takes the form:
for index in range(len(sequence)):
expression(sequence[index])
The example uses range(len(sequence)) to get each index, and sequence[index] to get each element.
End of explanation
"""
import time
delay = 3
def title_pure_function(lst):
"""This function will create a new list and return it
instead of changing the list passed into it.
"""
places_titled = []
for i in range(len(lst)):
place = lst[i]
words = place.split()
title_cased_words = []
for word in words:
first_letter = word[0]
rest_of_word = word[1:]
title_cased_words.append(first_letter.upper() + rest_of_word.lower())
titled = ' '.join(title_cased_words)
places_titled.append(titled)
return places_titled
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places:", places, "\n")
time.sleep(delay)
print("Calling title_pure_function(places)...", "\n")
return_value = title_pure_function(places)
time.sleep(delay)
print("The return value of title_pure_function(places) is", return_value, "\n")
time.sleep(delay)
print("Places:", places, "\n")
"""
Explanation: List accumulation is a convenient pattern for creating a new list from an old list by applying a transformation to each element in the old list and appending the result to the new list. It takes the form:
result = []
for element in sequence:
result.append(expression(element))
Using list accumulation in a function and returning the accumulated value can help make a function pure.
pure function
A function which has no side effects. Pure functions only make changes to the calling program through their return values.
End of explanation
"""
"""Lists are sequences."""
lst = [1, 2, 3]
print("lst: ", lst)
print("lst[0]: ", lst[0])
"""Tuples are sequences."""
t = (1, 2, 3)
print("t: ", t)
print("t[0]: ", t[0])
"""
Explanation: sequence
Any of the data types that consist of an ordered collection of elements, with each element identified by an index.
End of explanation
"""
def change_forecast(forecast):
"""This modifier function changes the value of the
forecast without returning anything.
"""
for i in range(len(forecast)):
forecast[i] = "Cloudy"
forecast = ["Sunny", "Sunny", "Sunny"]
print("The forecast is", forecast)
return_value = change_forecast(forecast)
print("The return_value from calling change_forecast(forecast) is", return_value)
print("The forecast is", forecast)
"""
Explanation: side effect
A change in the state of a program made by calling a function that is not a result of reading the return value from the function. Side effects can only be produced by modifiers.
End of explanation
"""
"""Create a tuple from a comma-separated list of values."""
vehicle = (2000, "Ford", "Ranger")
print(vehicle)
"""Parentheses are not required."""
vehicle = 2000, "Ford", "Ranger"
print(vehicle)
"""You can create a tuple from a list."""
vehicle = tuple([2000, "Ford", "Ranger"])
print(vehicle)
"""Tuples are immutable."""
vehicle = (2000, "Ford", "Ranger")
vehicle[2] = "F-150"
"""Values in tuples can be "unpacked" into multiple variables."""
vehicle = (2000, "Ford", "Ranger")
year, make, model = vehicle # tuple unpacking
print("Year:", year)
print("Make:", make)
print("Model:", model)
print(vehicle)
"""
Explanation: tuple
A sequential collection of items, similar to a list. Any python object can be an element of a tuple. However, unlike a list, tuples are immutable.
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb | apache-2.0 | import json
import math
import os
from pprint import pprint
import numpy as np
import tensorflow as tf
print(tf.version.VERSION)
"""
Explanation: TensorFlow Dataset API
Learning Objectives
1. Learn how to use tf.data to read data from memory
1. Learn how to use tf.data in a training loop
1. Learn how to use tf.data to read data from disk
1. Learn how to write production input pipelines with feature engineering (batching, shuffling, etc.)
In this notebook, we will start by refactoring the linear regression we implemented in the previous lab so that it takes its data from a tf.data.Dataset, and we will learn how to implement stochastic gradient descent with it. In this case, the original dataset will be synthetic and read by the tf.data API directly from memory.
In a second part, we will learn how to load a dataset with the tf.data API when the dataset resides on disk.
End of explanation
"""
N_POINTS = 10
X = tf.constant(range(N_POINTS), dtype=tf.float32)
Y = 2 * X + 10
"""
Explanation: Loading data from memory
Creating the dataset
Let's consider the synthetic dataset of the previous section:
End of explanation
"""
def create_dataset(X, Y, epochs, batch_size):
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)
return dataset
"""
Explanation: We begin with implementing a function that takes as input
our $X$ and $Y$ vectors of synthetic data generated by the linear function $y= 2x + 10$
the number of passes over the dataset we want to train on (epochs)
the size of the batches of the dataset (batch_size)
and returns a tf.data.Dataset:
Remark: The last batch may not contain the exact number of elements you specified, because the dataset may be exhausted before a full batch is assembled.
If you want every batch to have exactly the same number of elements, you will have to discard the last, partial batch by
setting:
python
dataset = dataset.batch(batch_size, drop_remainder=True)
We will do that here.
End of explanation
"""
BATCH_SIZE = 3
EPOCH = 2
dataset = create_dataset(X, Y, epochs=EPOCH, batch_size=BATCH_SIZE)
for i, (x, y) in enumerate(dataset):
print("x:", x.numpy(), "y:", y.numpy())
assert len(x) == BATCH_SIZE
assert len(y) == BATCH_SIZE
assert EPOCH
"""
Explanation: Let's test our function by iterating twice over our dataset in batches of 3 datapoints:
End of explanation
"""
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
"""
Explanation: Loss function and gradients
The loss function and the function that computes the gradients are the same as before:
End of explanation
"""
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = 0.02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dataset = create_dataset(X, Y, epochs=EPOCHS, batch_size=BATCH_SIZE)
for step, (X_batch, Y_batch) in enumerate(dataset):
dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X_batch, Y_batch, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
"""
Explanation: Training loop
The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function.
We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
End of explanation
"""
!ls -l ../data/taxi*.csv
"""
Explanation: Loading data from disk
Locating the CSV files
We will start with the taxifare dataset CSV files that we wrote out in a previous lab.
The taxifare dataset files have been saved into ../data.
Check that this is the case in the cell below and, if not, regenerate the taxifare
dataset by running the previous lab notebook:
End of explanation
"""
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
"""
Explanation: Use tf.data to read the CSV files
The tf.data API can easily read csv files using the helper function
tf.data.experimental.make_csv_dataset
If you have TFRecords (which is recommended), you may use
tf.data.experimental.make_batched_features_dataset
The first step is to define
the feature names into a list CSV_COLUMNS
their default values into a list DEFAULTS
End of explanation
"""
def create_dataset(pattern):
return tf.data.experimental.make_csv_dataset(
pattern, 1, CSV_COLUMNS, DEFAULTS
)
tempds = create_dataset("../data/taxi-train*")
print(tempds)
"""
Explanation: Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located:
End of explanation
"""
for data in tempds.take(2):
pprint({k: v.numpy() for k, v in data.items()})
print("\n")
"""
Explanation: Note that this is a prefetched dataset, where each element is an OrderedDict whose keys are the feature names and whose values are tensors of shape (1,) (i.e. vectors).
Let's iterate over the first two elements of this dataset using dataset.take(2) and convert them to ordinary Python dictionaries with numpy arrays as values for more readability:
End of explanation
"""
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
"""
Explanation: Transforming the features
What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary:
Remove the unwanted column "key"
Keep the label separate from the features
Let's first implement a function that takes as input a row (represented as an OrderedDict in our tf.data.Dataset as above) and then returns a tuple with two elements:
The first element being the same OrderedDict with the label dropped
The second element being the label itself (fare_amount)
Note that we will need to also remove the key and pickup_datetime column, which we won't use.
End of explanation
"""
for row_data in tempds.take(2):
features, label = features_and_labels(row_data)
pprint(features)
print(label, "\n")
assert UNWANTED_COLS[0] not in features.keys()
assert UNWANTED_COLS[1] not in features.keys()
assert label.shape == [1]
"""
Explanation: Let's iterate over 2 examples from our tempds dataset and apply our feature_and_labels
function to each of the examples to make sure it's working:
End of explanation
"""
def create_dataset(pattern, batch_size):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
return dataset.map(features_and_labels)
"""
Explanation: Batching
Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batch the data correspondingly. We will also use the features_and_labels function we implemented in order for our dataset to produce tuples of features and labels.
End of explanation
"""
BATCH_SIZE = 2
tempds = create_dataset("../data/taxi-train*", batch_size=2)
for X_batch, Y_batch in tempds.take(2):
pprint({k: v.numpy() for k, v in X_batch.items()})
print(Y_batch.numpy(), "\n")
assert len(Y_batch) == BATCH_SIZE
"""
Explanation: Let's test that our batches are of the right size:
End of explanation
"""
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels).cache()
if mode == "train":
dataset = dataset.shuffle(1000).repeat()
    # prefetch 1 batch ahead to overlap the input pipeline with training
    # (tf.data.experimental.AUTOTUNE, rather than 1, lets TF pick the buffer size)
dataset = dataset.prefetch(1)
return dataset
"""
Explanation: Shuffling
When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely.
Let's refactor our create_dataset function so that it shuffles the data, when the dataset is used for training.
We will introduce an additional argument mode to our function to allow the function body to distinguish the case
when it needs to shuffle the data (mode == "train") from when it shouldn't (mode == "eval").
Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speed up training:
End of explanation
"""
tempds = create_dataset("../data/taxi-train*", 2, "train")
print(list(tempds.take(1)))
tempds = create_dataset("../data/taxi-valid*", 2, "eval")
print(list(tempds.take(1)))
"""
Explanation: Let's check that our function works well in both modes:
End of explanation
"""
|
juancarlosqr/datascience | python/playground/jupyter/keyboard-shortcuts.ipynb | mit | # mode practice
"""
Explanation: Keyboard shortcuts
In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.
First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.
By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.
Exercise: Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times.
End of explanation
"""
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
"""
Explanation: Help with commands
If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.
Creating new cells
One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.
Exercise: Create a cell above this cell using the keyboard command.
Exercise: Create a cell below this cell using the keyboard command.
Switching between Markdown and code
With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press Y. To switch from code to Markdown, press M.
Exercise: Switch the cell below between Markdown and code cells.
End of explanation
"""
# DELETE ME
"""
Explanation: Line numbers
A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.
Exercise: Turn line numbers on and off in the above code cell.
Deleting cells
Deleting cells is done by pressing D twice in a row so D, D. This is to prevent accidental deletions: you have to press the button twice!
Exercise: Delete the cell below.
End of explanation
"""
# Move this cell down
# below this cell
"""
Explanation: Saving the notebook
Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy!
The Command Palette
You can easily access the command palette by pressing Shift + Control/Command + P.
Note: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.
This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands.
Exercise: Use the command palette to move the cell below down one position.
End of explanation
"""
|
ivergara/science_notebooks | Two levels, two electrons.ipynb | gpl-3.0 | import numpy as np
import itertools
from operator import add
from functools import reduce
#from itertools import combinations, permutations
"""
Explanation: Transitions for a two level system with an electron
End of explanation
"""
initial = [1, 0, 0, 0, 1, 0, 0, 0]
final = [0, 0, 0, 0, 1, 0, 1, 0]
"""
Explanation: We have two sites $(i,j)$ and two levels $\alpha, \beta$. The initial states are $d_i^1d_j^1$ configurations where one electron sits on the $\alpha$ level on each site ($4$ in total). The final states are $d_i^0d_j^2$ configurations ($5$ in total).
The problem now is how to represent the states. We take an array where the first 4 elements correspond to site $i$; within each site, the first two entries correspond to the $+$ and $-$ spins of the $\alpha$ level. Thus the state with a spin-up electron on both sites in the $\alpha$ level is [1 0 0 0 1 0 0 0].
As an example, we take the final configuration where in there is on spin up in each level at site $j$ [0 0 0 0 1 0 1 0]
End of explanation
"""
jump = np.logical_xor(initial,final)
print(jump.astype(int))
"""
Explanation: The question is, is there a transition matrix element between those states? The answer is yes, since the receiving "state" is empty and no spin flip is involved. We can codify this by taking the XOR operation between the initial and final states and checking the positions where we get True. If both are odd or both are even the hop is allowed, whereas if one is in an odd and the other in an even position it is not allowed (implying a spin flip)
End of explanation
"""
np.nonzero(jump)
"""
Explanation: We can see that we get a $1$ in positions $0$ and $6$, thus the jump is allowed since both are even. Let's codify that computationally.
End of explanation
"""
def allowed(jump):
if np.unique(np.remainder(np.nonzero(jump),2)).size == 1:
return 1
return 0
"""
Explanation: Now we apply modulo 2, which will allow us to check validity. If both are even or odd, there will be just one unique element.
End of explanation
"""
allowed(jump)
"""
Explanation: Testing it out
End of explanation
"""
final = [0, 0, 0, 0, 0, 1, 1, 0]
jump = np.logical_xor(initial,final)
allowed(jump)
"""
Explanation: What about another final state like [0 0 0 0 0 1 1 0]. Between the initial and this final state there is a spin flip which should not be allowed.
End of explanation
"""
single_initial_states = [[1, 0, 0, 0], [0, 1, 0, 0]]
single_initial_states *= 2
single_initial_states
initial_states = set()
for combination in itertools.combinations(single_initial_states,2):
initial_states.add(tuple(reduce(add, combination)))
list(initial_states)
initial_states = set([tuple(reduce(add, combination))
for combination
in itertools.combinations(single_initial_states,2)])
initial_states
"""
Explanation: As we expected, it is not allowed.
How to generate the states
The problem now is how to generate the initial and final states. First we start with the initial states. Here we can proceed by isolating one site and writing both possible initial states. Then we duplicate them, form all possible combinations, and filter out repeated sequences.
End of explanation
"""
for initial_state in initial_states:
jump = np.logical_xor(initial_state,final)
print(f"From initial state {initial_state} to final state {final} allowed? {allowed(jump)}")
"""
Explanation: Now that we have all possible initial states, we can test what happens with our test final state.
End of explanation
"""
single_states_j = set(itertools.permutations([1, 1, 0, 0]))
final_states = [tuple(reduce(add, [(0, 0, 0, 0), j_state]))
for j_state
in list(set(itertools.permutations((1, 1, 0, 0))))]
final_states
"""
Explanation: To generate the final states, we proceed similarly to the initial ones. We have to create all the combinations of two electrons on site $j$. There are $6$ states in total, of which the one with two electrons in the $\beta$ level is not accessible optically.
End of explanation
"""
for initial_state in initial_states:
for final_state in final_states:
jump = np.logical_xor(initial_state,final_state)
print(f"From initial state {initial_state} to final state {final_state} allowed? {allowed(jump)}")
"""
Explanation: Having the final states created, we can again test every initial state against every final state.
End of explanation
"""
|
voyageth/udacity-Deep_Learning_Foundations_Nanodegree | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
source_id_lists = []
for sentence in source_text.split('\n'):
source_id_list = []
for word in sentence.split():
source_id_list.append(source_vocab_to_int[word])
source_id_lists.append(source_id_list)
target_id_lists = []
for sentence in target_text.split('\n'):
target_id_list = []
for word in sentence.split():
target_id_list.append(target_vocab_to_int[word])
target_id_list.append(target_vocab_to_int['<EOS>'])
target_id_lists.append(target_id_list)
return source_id_lists, target_id_lists
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
    inputs = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="target")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    keep_probability = tf.placeholder(tf.float32, name="keep_prob")
    target_sequence_length = tf.placeholder(tf.int32, [None], name="target_sequence_length")
    max_target_len = tf.reduce_max(target_sequence_length, name="max_target_len")
    source_sequence_length = tf.placeholder(tf.int32, [None], name="source_sequence_length")
    return (inputs, targets, learning_rate, keep_probability,
            target_sequence_length, max_target_len, source_sequence_length)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
return tf.concat([tf.constant(target_vocab_to_int['<GO>'], shape=[batch_size, 1]), target_data[:,:-1]],1)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
"""
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
    enc_embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    def make_cell():
        # Build a fresh cell per layer; reusing one cell object would share weights across layers.
        return tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
    enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
    enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed,
                                              sequence_length=source_sequence_length,
                                              dtype=tf.float32)
    return enc_output, enc_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
    # Apply dropout to the decoder cell during training.
    drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
    outputs = tf.contrib.seq2seq.dynamic_decode(decoder,
                                                impute_finished=True,
                                                maximum_iterations=max_summary_length)[0]
    return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
    drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
                           [batch_size], name='start_tokens')
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
    outputs = tf.contrib.seq2seq.dynamic_decode(decoder,
                                                impute_finished=True,
                                                maximum_iterations=max_target_sequence_length)[0]
    return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
    # Embed the target sequences.
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    # Stacked decoder cell with a fresh cell per layer.
    dec_cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)])
    # Output layer mapping decoder outputs to the target vocabulary.
    output_layer = Dense(target_vocab_size)
    with tf.variable_scope("decoding"):
        train_output = decoding_layer_train(
            encoder_state, dec_cell, dec_embed_input, target_sequence_length,
            max_target_sequence_length, output_layer, keep_prob)
    with tf.variable_scope("decoding", reuse=True):
        infer_output = decoding_layer_infer(
            encoder_state, dec_cell, dec_embeddings,
            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
            max_target_sequence_length, target_vocab_size, output_layer,
            batch_size, keep_prob)
    return train_output, infer_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
    # 1 & 2. Embed the input (inside encoding_layer) and encode it.
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size,
                                  enc_embedding_size)
    # 3. Prepend <GO> and drop the last token from each target batch.
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    # 4 & 5. Embed the targets and decode (embedding happens inside decoding_layer).
    return decoding_layer(dec_input, enc_state, target_sequence_length,
                          max_target_sentence_length, rnn_size, num_layers,
                          target_vocab_to_int, target_vocab_size, batch_size,
                          keep_prob, dec_embedding_size)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
# Show stats for every n number of batches
display_step = 20
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: """
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
result = []
for word in sentence.split():
if word in vocab_to_int:
result.append(vocab_to_int[word])
else:
result.append(vocab_to_int['<UNK>'])
return result
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
Milad7m/motion | DM_05_04.ipynb | mit | %matplotlib inline
import pylab
import numpy as np
import pandas as pd
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
pylab.rcParams.update({'font.size': 14})
"""
Explanation: DM_05_04
Import Libraries
End of explanation
"""
df = pd.read_csv("AnomalyData.csv")
df.head()
"""
Explanation: Read CSV
End of explanation
"""
state_code = df["state_code"]
data = df.loc[:, "data science": "Openness"]
"""
Explanation: Save state_code to label outliers. "data" contains just quantitative variables.
End of explanation
"""
param = "modern dance"
"""
Explanation: Univariate Outliers
Create a box plot to display univariate outliers on "modern dance."
End of explanation
"""
qv1 = data[param].quantile(0.25)
qv2 = data[param].quantile(0.5)
qv3 = data[param].quantile(0.75)
qv_limit = 1.5 * (qv3 - qv1)
"""
Explanation: Get quantile values and IQR for outlier limits.
End of explanation
"""
un_outliers_mask = (data[param] > qv3 + qv_limit) | (data[param] < qv1 - qv_limit)
un_outliers_data = data[param][un_outliers_mask]
un_outliers_name = state_code[un_outliers_mask]
"""
Explanation: Get positions of outliers and use state_code for labels.
End of explanation
"""
fig = pylab.figure(figsize=(4,6))
ax = fig.add_subplot(1, 1, 1)
for name, y in zip(un_outliers_name, un_outliers_data):
ax.text(1, y, name)
ax.boxplot(data[param])
ax.set_ylabel(param)
"""
Explanation: Create box plot for "modern dance."
End of explanation
"""
params = ["data science", "ceo"]
params_data = np.array([df[params[0]], df[params[1]]]).T
"""
Explanation: Bivariate Outliers
Create a scatterplot with an ellipse as a boundary for outliers.
Use the Google search terms "data science" and "ceo" for this example.
End of explanation
"""
ee = EllipticEnvelope()
ee.fit(params_data)
"""
Explanation: Compute the "elliptical envelope."
End of explanation
"""
biv_outliers_mask = ee.predict(params_data) == -1
biv_outliers_data = params_data[biv_outliers_mask]
biv_outliers_name = state_code[biv_outliers_mask]
"""
Explanation: Get the names and positions of outliers.
End of explanation
"""
xx, yy = np.meshgrid(np.linspace(params_data[:, 0].min(), params_data[:, 0].max(), 100),
np.linspace(params_data[:, 1].min(), params_data[:, 1].max(), 100))
zz = ee.decision_function(np.c_[xx.ravel(), yy.ravel()])
zz = zz.reshape(xx.shape)
"""
Explanation: Calculate the decision boundary for the scatterplot.
End of explanation
"""
fig = pylab.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
for name, xy in zip(biv_outliers_name, biv_outliers_data):
ax.text(xy[0], xy[1], name)
ax.contour(xx, yy, zz, levels=[0], linewidths=2)
ax.scatter(params_data[:, 0], params_data[:, 1], color='black')
ax.set_xlabel(params[0])
ax.set_ylabel(params[1])
"""
Explanation: Draw the scatterplot with the elliptical envelope and label the outliers.
End of explanation
"""
ocsvm = OneClassSVM(nu=0.25, gamma=0.05)
ocsvm.fit(data)
"""
Explanation: Multivariate Outliers
Use the one-class support vector machine (SVM) algorithm to classify unusual cases.
End of explanation
"""
state_code[ocsvm.predict(data) == -1]
"""
Explanation: List the names of the outlying states based on the one-class SVM.
End of explanation
"""
|
Kaggle/learntools | notebooks/data_cleaning/raw/ex4.ipynb | apache-2.0 | from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex4 import *
print("Setup Complete")
"""
Explanation: In this exercise, you'll apply what you learned in the Character encodings tutorial.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
"""
# modules we'll use
import pandas as pd
import numpy as np
# helpful character encoding module
import chardet
# set seed for reproducibility
np.random.seed(0)
"""
Explanation: Get our environment set up
The first thing we'll need to do is load in the libraries we'll be using.
End of explanation
"""
sample_entry = b'\xa7A\xa6n'
print(sample_entry)
print('data type:', type(sample_entry))
"""
Explanation: 1) What are encodings?
You're working with a dataset composed of bytes. Run the code cell below to print a sample entry.
End of explanation
"""
new_entry = ____
# Check your answer
q1.check()
#%%RM_IF(PROD)%%
before = sample_entry.decode("big5-tw")
new_entry = before.encode()
q1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
"""
Explanation: You notice that it doesn't use the standard UTF-8 encoding.
Use the next code cell to create a variable new_entry that changes the encoding from "big5-tw" to "utf-8". new_entry should have the bytes datatype.
End of explanation
"""
# TODO: Load in the DataFrame correctly.
police_killings = ____
# Check your answer
q2.check()
"""
Explanation: 2) Reading in files with encoding problems
Use the code cell below to read in this file at path "../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv".
Figure out what the correct encoding should be and read in the file to a DataFrame police_killings.
End of explanation
"""
# (Optional) Use this code cell for any additional work.
#%%RM_IF(PROD)%%
# look at the first ten thousand bytes to guess the character encoding
with open("../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv", 'rb') as rawdata:
result = chardet.detect(rawdata.read(100000))
# check what the character encoding might be
print(result)
#%%RM_IF(PROD)%%
# look at the first ten thousand bytes to guess the character encoding
with open("../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv", 'rb') as rawdata:
result = chardet.detect(rawdata.read(10000))
# check what the character encoding might be
print(result)
#%%RM_IF(PROD)%%
police_killings = pd.read_csv("../input/fatal-police-shootings-in-the-us/PoliceKillingsUS.csv", encoding='Windows-1252')
q2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
"""
Explanation: Feel free to use any additional code cells for supplemental work. To get credit for finishing this question, you'll need to run q2.check() and get a result of Correct.
End of explanation
"""
# TODO: Save the police killings dataset to CSV
____
# Check your answer
q3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
"""
Explanation: 3) Saving your files with UTF-8 encoding
Save a version of the police killings dataset to CSV with UTF-8 encoding. Your answer will be marked correct after saving this file.
Note: When using the to_csv() method, supply only the name of the file (e.g., "my_file.csv"). This saves the file at the filepath "/kaggle/working/my_file.csv".
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | apache-2.0 | # Install Sklearn
!python3 -m pip install --user sklearn
# Ensure the right version of Tensorflow is installed.
!pip3 freeze | grep 'tensorflow==2\|tensorflow-gpu==2' || \
!python3 -m pip install --user tensorflow==2
import os
import tensorflow.keras
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from keras.utils import plot_model
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: LAB 03: Basic Feature Engineering in Keras
Learning Objectives
Create an input pipeline using tf.data
Engineer features to create categorical, crossed, and numerical feature columns
Introduction
In this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Start by importing the necessary libraries for this lab.
End of explanation
"""
if not os.path.isdir("../data"):
os.makedirs("../data")
!gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data
!ls -l ../data/
"""
Explanation: Many of the Google Machine Learning Courses Programming Exercises use the California Housing Dataset, which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.
First, let's download the raw .csv data by copying the data from a cloud storage bucket.
End of explanation
"""
housing_df = pd.read_csv('../data/housing_pre-proc.csv', error_bad_lines=False)
housing_df.head()
"""
Explanation: Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.
End of explanation
"""
housing_df.describe()
"""
Explanation: We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 20433.000000 for all feature columns. Thus, there are no missing values.
End of explanation
"""
train, test = train_test_split(housing_df, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
"""
Explanation: Split the dataset for ML
The dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.
End of explanation
"""
train.to_csv('../data/housing-train.csv', encoding='utf-8', index=False)
val.to_csv('../data/housing-val.csv', encoding='utf-8', index=False)
test.to_csv('../data/housing-test.csv', encoding='utf-8', index=False)
!head ../data/housing*.csv
"""
Explanation: Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the ../data directory.
End of explanation
"""
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# TODO 1 -- Your code here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
"""
Explanation: Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model.
Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell.
End of explanation
"""
batch_size = 32
train_ds = df_to_dataset(train)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
"""
Explanation: Next we initialize the training and validation datasets.
End of explanation
"""
# TODO 1
"""
Explanation: Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
End of explanation
"""
# TODO 1
"""
Explanation: We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Numeric columns
The output of a feature column becomes the input to the model. A numeric column is the simplest type of column; it is used to represent real-valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called numeric_cols to hold only the numerical feature columns.
End of explanation
"""
# Scaler function: get_scal(feature)
# TODO 1
# TODO 1
"""
Explanation: Scaler function
It is very important to scale numerical variables before they are fed into the neural network. Here we use min-max scaling: we create a function named get_scal that takes a numerical feature name and returns a minmax function, which is passed as the normalizer_fn parameter of tf.feature_column.numeric_column(). The minmax function itself takes a raw value of that feature and returns its scaled value.
Next, we scale the numerical feature columns that we assigned to the variable numeric_cols.
End of explanation
"""
print('Total number of feature columns: ', len(feature_columns))
"""
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
"""
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
"""
Explanation: Using the Keras Sequential Model
Next, we will run this cell to compile and fit the Keras Sequential model.
End of explanation
"""
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
"""
Explanation: Next, we show the loss as Mean Squared Error (MSE). Remember that MSE is the most commonly used regression loss function; it is the average of the squared differences between the target variable (here, median house value) and the predicted values.
End of explanation
"""
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and validation (orange) sets.
End of explanation
"""
test_data = pd.read_csv('../data/housing-test.csv')
test_data.describe()
"""
Explanation: Load test data
Next, we read in the test.csv file and validate that there are no null values.
Again, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 4087.000000 for all feature columns. Thus, there are no missing values.
End of explanation
"""
# TODO 1
test_predict = test_input_fn(dict(test_data))
"""
Explanation: Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and initialize the test_predict variable.
End of explanation
"""
predicted_median_house_value = model.predict(test_predict)
"""
Explanation: Prediction: Linear Regression
Before we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the prediction after feature engineering.
To predict with Keras, you simply call model.predict() and pass in the housing features you want to predict the median_house_value for. Note: We are running the prediction locally.
End of explanation
"""
# Ocean_proximity is INLAND
model.predict({
'longitude': tf.convert_to_tensor([-121.86]),
'latitude': tf.convert_to_tensor([39.78]),
'housing_median_age': tf.convert_to_tensor([12.0]),
'total_rooms': tf.convert_to_tensor([7653.0]),
'total_bedrooms': tf.convert_to_tensor([1578.0]),
'population': tf.convert_to_tensor([3628.0]),
'households': tf.convert_to_tensor([1494.0]),
'median_income': tf.convert_to_tensor([3.0905]),
'ocean_proximity': tf.convert_to_tensor(['INLAND'])
}, steps=1)
# Ocean_proximity is NEAR OCEAN
model.predict({
'longitude': tf.convert_to_tensor([-122.43]),
'latitude': tf.convert_to_tensor([37.63]),
'housing_median_age': tf.convert_to_tensor([34.0]),
'total_rooms': tf.convert_to_tensor([4135.0]),
'total_bedrooms': tf.convert_to_tensor([687.0]),
'population': tf.convert_to_tensor([2154.0]),
'households': tf.convert_to_tensor([742.0]),
'median_income': tf.convert_to_tensor([4.9732]),
'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])
}, steps=1)
"""
Explanation: Next, we run two predictions in separate cells - one where ocean_proximity=INLAND and one where ocean_proximity= NEAR OCEAN.
End of explanation
"""
# TODO 2
"""
Explanation: The arrays returns a predicted value. What do these numbers mean? Let's compare this value to the test set.
Go to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it a solid model performance? Let's see if we can improve this a bit with feature engineering!
Engineer features to create categorical and numerical features
Now we create a cell that indicates which features will be used in the model.
Note: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. And, don't forget your numeric values!
End of explanation
"""
# Scaler function
def get_scal(feature):
def minmax(x):
mini = train[feature].min()
maxi = train[feature].max()
return (x - mini)/(maxi-mini)
return(minmax)
# All numerical features - scaling
feature_columns = []
for header in numeric_cols:
scal_input_fn = get_scal(header)
feature_columns.append(fc.numeric_column(header,
normalizer_fn=scal_input_fn))
"""
Explanation: Next, we scale the numerical, bucketized, and categorical feature columns that we assigned to the variables in the preceding cell.
End of explanation
"""
# TODO 2
"""
Explanation: Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'.
End of explanation
"""
# TODO 2
"""
Explanation: Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider the raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split it into several buckets using a bucketized column. Notice that the one-hot values below describe which age range each row matches.
Next, we create a bucketized column using 'housing_median_age'.
End of explanation
"""
# TODO 2
"""
Explanation: Feature Cross
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.
End of explanation
"""
print('Total number of feature columns: ', len(feature_columns))
"""
Explanation: Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
End of explanation
"""
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns,
dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
"""
Explanation: Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.
End of explanation
"""
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
plot_curves(history, ['loss', 'mse'])
"""
Explanation: Next, we show loss and mean squared error then plot the model.
End of explanation
"""
# TODO 2
"""
Explanation: Next, we run a prediction with the new model. Note: You may use the same values from the previous prediction.
End of explanation
"""
|
mcleonard/sampyl | examples/Abalone Model.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import sampyl as smp
from sampyl import np
import pandas as pd
plt.style.use('seaborn')
plt.rcParams['font.size'] = 14.
plt.rcParams['legend.fontsize'] = 14.0
plt.rcParams['axes.titlesize'] = 16.0
plt.rcParams['axes.labelsize'] = 14.0
plt.rcParams['xtick.labelsize'] = 13.0
plt.rcParams['ytick.labelsize'] = 13.0
"""
Explanation: Hierarchical Model for Abalone Length
Abalone were collected from various sites on the coast of California north of San Francisco. Here I'm going to develop a model to predict abalone lengths based on sites and harvest method - diving or rock-picking. I'm interested in how abalone lengths vary between sites and harvesting methods. This should be a hierarchical model as the abalone at the different sites are from the same population and should exhibit similar effects based on harvesting method. The hierarchical model will be beneficial since some of the sites are missing a harvesting method.
End of explanation
"""
data = pd.read_csv('Clean2017length.csv')
data.head()
"""
Explanation: Load our data here. This is just data collected in 2017.
End of explanation
"""
# Convert sites from codes into sequential integers starting at 0
unique_sites = data['site_code'].unique()
site_map = dict(zip(unique_sites, np.arange(len(unique_sites))))
data = data.assign(site=data['site_code'].map(site_map))
# Convert modes into integers as well
# Filter out 'R/D' modes, bad data collection
data = data[(data['Mode'] != 'R/D')]
mode_map = {'R':0, 'D':1}
data = data.assign(mode=data['Mode'].map(mode_map))
"""
Explanation: Important columns here are:
full lengths: length of abalone
mode: Harvesting method, R: rock-picking, D: diving
site_code: codes for 15 different sites
First some data preprocessing to get it into the correct format for our model.
End of explanation
"""
class HLM(smp.Model):
def __init__(self, data=None):
super().__init__()
self.data = data
# Now define the model (log-probability proportional to the posterior)
def logp_(self, μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ):
# Population priors - normals for population means and half-Cauchy for population stds
self.add(smp.normal(μ_α, sig=500),
smp.normal(μ_β, sig=500),
smp.half_cauchy(σ_α, beta=5),
smp.half_cauchy(σ_β, beta=0.5))
# Priors for site coefficients, sampled from population distributions
self.add(smp.normal(site_α, mu=μ_α, sig=σ_α),
smp.normal(site_β, mu=μ_β, sig=σ_β))
# Prior for likelihood uncertainty
self.add(smp.half_normal(ϵ))
# Our estimate for abalone length, α + βx
length_est = site_α[self.data['site'].values] + site_β[self.data['site'].values]*self.data['mode']
# Add the log-likelihood
self.add(smp.normal(self.data['full lengths'], mu=length_est, sig=ϵ))
return self()
sites = data['site'].values
modes = data['mode'].values
lengths = data['full lengths'].values
# Now define the model (log-probability proportional to the posterior)
def logp(μ_α, μ_β, σ_α, σ_β, site_α, site_β, ϵ):
model = smp.Model()
# Population priors - normals for population means and half-Cauchy for population stds
model.add(smp.normal(μ_α, sig=500),
smp.normal(μ_β, sig=500),
smp.half_cauchy(σ_α, beta=5),
smp.half_cauchy(σ_β, beta=0.5))
# Priors for site coefficients, sampled from population distributions
model.add(smp.normal(site_α, mu=μ_α, sig=σ_α),
smp.normal(site_β, mu=μ_β, sig=σ_β))
# Prior for likelihood uncertainty
model.add(smp.half_normal(ϵ))
# Our estimate for abalone length, α + βx
length_est = site_α[sites] + site_β[sites]*modes
# Add the log-likelihood
model.add(smp.normal(lengths, mu=length_est, sig=ϵ))
return model()
model = HLM(data=data)
start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1.,
'site_α': np.ones(len(site_map))*201,
'site_β': np.zeros(len(site_map)),
'ϵ': 1.}
model.logp_(*start.values())
start = {'μ_α': 201., 'μ_β': 5., 'σ_α': 1., 'σ_β': 1.,
'site_α': np.ones(len(site_map))*201,
'site_β': np.zeros(len(site_map)),
'ϵ': 1.}
# Using NUTS is slower per sample, but more likely to give good samples (and converge)
sampler = smp.NUTS(logp, start)
chain = sampler(1100, burn=100, thin=2)
"""
Explanation: A Hierarchical Linear Model
Here we'll define our model. We want to make a linear model for each site in the data where we predict the abalone length given the mode of catching and the site.
$$ y_s = \alpha_s + \beta_s * x_s + \epsilon $$
where $y_s$ is the predicted abalone length, $x_s$ denotes the mode of harvesting, $\alpha_s$ and $\beta_s$ are coefficients for each site $s$, and $\epsilon$ is the model error. We'll use this prediction for our likelihood with data $D_s$, using a normal distribution with mean $y_s$ and variance $\epsilon^2$:
$$ \prod_s P(D_s \mid \alpha_s, \beta_s, \epsilon) = \prod_s \mathcal{N}\left(D_s \mid y_s, \epsilon^2\right) $$
The abalone come from the same population just in different locations. We can take these similarities between sites into account by creating a hierarchical model where the coefficients are drawn from a higher-level distribution common to all sites.
$$
\begin{align}
\alpha_s & \sim \mathcal{N}\left(\mu_{\alpha}, \sigma_{\alpha}^2\right) \\
\beta_s & \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right)
\end{align}
$$
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(chain.site_α);
fig.savefig('/Users/mat/Desktop/chains.png', dpi=150)
chain.site_α.T.shape
fig, ax = plt.subplots(figsize=(16,9))
for each in chain.site_α.T:
ax.hist(each, range=(185, 210), bins=60, alpha=0.5)
ax.set_xticklabels('')
ax.set_yticklabels('');
fig.savefig('/Users/mat/Desktop/posteriors.png', dpi=300)
"""
Explanation: There are some checks for convergence you can do, but they aren't implemented yet. Instead, we can visually inspect the chain. In general, the samples should be stable: the first half should vary around the same point as the second half.
End of explanation
"""
def coeff_plot(coeff, ax=None):
if ax is None:
fig, ax = plt.subplots(figsize=(3,5))
CRs = np.percentile(coeff, [2.5, 97.5], axis=0)
means = coeff.mean(axis=0)
ax.errorbar(means, np.arange(len(means)), xerr=np.abs(means - CRs), fmt='o')
ax.set_yticks(np.arange(len(site_map)))
ax.set_yticklabels(site_map.keys())
ax.set_ylabel('Site')
ax.grid(True, axis='x', color="#CCCCCC")
ax.tick_params(axis='both', length=0)
for each in ['top', 'right', 'left', 'bottom']:
ax.spines[each].set_visible(False)
return ax
"""
Explanation: With the posterior distribution, we can look at many different results. Here I'll make a function that plots the means and 95% credible regions (range that contains central 95% of the probability) for the coefficients $\alpha_s$ and $\beta_s$.
End of explanation
"""
ax = coeff_plot(chain.site_α)
ax.set_xlim(175, 225)
ax.set_xlabel('Abalone Length (mm)');
"""
Explanation: Now we can look at how abalone lengths vary between sites for the rock-picking method ($\alpha_s$).
End of explanation
"""
ax = coeff_plot(chain.site_β)
#ax.set_xticks([-5, 0, 5, 10, 15])
ax.set_xlabel('Mode effect (mm)');
"""
Explanation: Here I'm plotting the mean and 95% credible regions (CR) of $\alpha$ for each site. This coefficient measures the average length of rock-picked abalones. We can see that the average abalone length varies quite a bit between sites. The CRs give a measure of the uncertainty in $\alpha$, wider CRs tend to result from less data at those sites.
Now, let's see how the abalone lengths vary between harvesting methods (the difference for diving is given by $\beta_s$).
End of explanation
"""
def model_plot(data, chain, site, ax=None, n_samples=20):
if ax is None:
fig, ax = plt.subplots(figsize=(4,6))
site = site_map[site]
xs = np.linspace(-1, 3)
for ii, (mode, m_data) in enumerate(data[data['site'] == site].groupby('mode')):
a = chain.site_α[:, site]
b = chain.site_β[:, site]
# now sample from the posterior...
idxs = np.random.choice(np.arange(len(a)), size=n_samples, replace=False)
# Draw light lines sampled from the posterior
for idx in idxs:
ax.plot(xs, a[idx] + b[idx]*xs, color='#E74C3C', alpha=0.05)
# Draw the line from the posterior means
ax.plot(xs, a.mean() + b.mean()*xs, color='#E74C3C')
# Plot actual data points with a bit of noise for visibility
mode_label = {0: 'Rock-picking', 1: 'Diving'}
ax.scatter(ii + np.random.randn(len(m_data))*0.04,
m_data['full lengths'], edgecolors='none',
alpha=0.8, marker='.', label=mode_label[mode])
ax.set_xlim(-0.5, 1.5)
ax.set_xticks([0, 1])
ax.set_xticklabels('')
ax.set_ylim(150, 250)
ax.grid(True, axis='y', color="#CCCCCC")
ax.tick_params(axis='both', length=0)
for each in ['top', 'right', 'left', 'bottom']:
ax.spines[each].set_visible(False)
return ax
fig, axes = plt.subplots(figsize=(10, 5), ncols=4, sharey=True)
for ax, site in zip(axes, [5, 52, 72, 162]):
ax = model_plot(data, chain, site, ax=ax, n_samples=30)
ax.set_title(site)
first_ax = axes[0]
first_ax.legend(framealpha=1, edgecolor='none')
first_ax.set_ylabel('Abalone length (mm)');
"""
Explanation: Here I'm plotting the mean and 95% credible regions (CR) of $\beta$ for each site. This coefficient measures the difference in length of dive-harvested abalones compared to rock-picked abalones. Most of the $\beta$ coefficients are above zero, which indicates that abalones harvested via diving are larger than ones picked from the shore. For most of the sites, diving results in roughly 5 mm longer abalone, while at site 72 the difference is around 12 mm. Again, wider CRs mean there is less data, leading to greater uncertainty.
Next, I'll overlay the model on top of the data and make sure it looks right. We'll also see that some sites don't have data for both harvesting modes but our model still works because it's hierarchical. That is, we can get a posterior distribution for the coefficient from the population distribution even though the actual data is missing.
End of explanation
"""
fig, ax = plt.subplots()
ax.hist(chain.μ_β, bins=30);
b_mean = chain.μ_β.mean()
b_CRs = np.percentile(chain.μ_β, [2.5, 97.5])
p_gt_0 = (chain.μ_β > 0).mean()
print(
"""Mean: {:.3f}
95% CR: [{:.3f}, {:.3f}]
P(mu_b) > 0: {:.3f}
""".format(b_mean, b_CRs[0], b_CRs[1], p_gt_0))
"""
Explanation: For site 5, there are few data points for the diving method so there is a lot of uncertainty in the prediction. The prediction is also pulled lower than the data by the population distribution. Similarly, for site 52 there is no diving data, but we still get a (very uncertain) prediction because it's using the population information.
Finally, we can look at the harvesting mode effect for the population. Here I'm going to print out a few statistics for $\mu_{\beta}$.
End of explanation
"""
import scipy.stats as stats
samples = stats.norm.rvs(loc=chain.μ_β, scale=chain.σ_β)
plt.hist(samples, bins=30);
plt.xlabel('Dive harvesting effect (mm)')
"""
Explanation: We can also look at the population distribution for $\beta_s$ by sampling from a normal distribution with mean and standard deviation drawn from the posterior samples of $\mu_\beta$ and $\sigma_\beta$.
$$
\beta_s \sim \mathcal{N}\left(\mu_{\beta}, \sigma_{\beta}^2\right)
$$
End of explanation
"""
|
prashanti/similarity-experiment | src/Notebooks/MetricComparison.ipynb | mit | import matplotlib.lines as mlines
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
import json
%matplotlib inline
"""
Explanation: Parameters used
Query profile size: 10
Number of query profiles: 5
Information Content: Annotation IC
Profile aggregation: Best Pairs
Directionality of similarity: Symmetric
1. Import required modules
End of explanation
"""
def load_results(infile,quartile,scores,metric,granularity):
next(infile)
for line in infile:
queryid,numreplaced,match,score=line.strip().split()
numreplaced=int(numreplaced)
if metric not in scores:
scores[metric]=dict()
if quartile not in scores[metric]:
scores[metric][quartile]=dict()
if granularity not in scores[metric][quartile]:
scores[metric][quartile][granularity]=dict()
if numreplaced not in scores[metric][quartile][granularity]:
scores[metric][quartile][granularity][numreplaced]=[]
scores[metric][quartile][granularity][numreplaced].append(float(score))
infile.close()
return scores
def error(scorelist):
return 2*(np.std(scorelist)/math.sqrt(len(scorelist)))
"""
Explanation: 2. Helper methods
End of explanation
"""
scores=dict()
quartile=50
granularity='E'
f, axarr = plt.subplots(3, 3)
i=j=0
titledict={'BPSym__Jaccard':'Jaccard','BPSym_AIC_Resnik':'Resnik','BPSym_AIC_Lin':'Lin'
,'BPSym_AIC_Jiang':'Jiang','_AIC_simGIC':'simGIC','BPAsym_AIC_HRSS':'HRSS','Groupwise_Jaccard':'Groupwise_Jaccard'}
lines=[]
legend=[]
for profilesize in [10]:
for metric in ['BPSym_AIC_Resnik','BPSym_AIC_Lin','BPSym_AIC_Jiang','_AIC_simGIC','BPSym__Jaccard',
'Groupwise_Jaccard','BPAsym_AIC_HRSS']:
# plotting annotation replacement
infile=open("../../results/FullDistribution/AnnotationReplacement/E_Decay_Quartile50_ProfileSize"+str(profilesize)+"_"+ metric+"_Results.tsv")
scores=load_results(infile,quartile,scores,metric,granularity)
infile.close()
signallist=[]
errorlist=[]
numreplacedlist=sorted(scores[metric][quartile][granularity].keys())
for numreplaced in numreplacedlist :
signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced]))
errorlist.append(error(scores[metric][quartile][granularity][numreplaced]))
line=axarr[i][j].errorbar(numreplacedlist,signallist,yerr=errorlist,color='blue',linewidth=3)
if len(lines)==0:
lines.append(line)
legend.append("Annotation Replacement")
axarr[i][j].set_title(titledict[metric])
axarr[i][j].set_ylim(0,1)
# plotting Ancestral Replacement
ancestralreplacementfile="../../results/FullDistribution/AncestralReplacement/E_Decay_Quartile50_ProfileSize"+str(profilesize)+"_"+ metric+"_Results.tsv"
if os.path.isfile(ancestralreplacementfile):
infile=open(ancestralreplacementfile)
scores=load_results(infile,quartile,scores,metric,granularity)
infile.close()
signallist=[]
errorlist=[]
numreplacedlist=sorted(scores[metric][quartile][granularity].keys())
for numreplaced in numreplacedlist :
signallist.append(np.mean(scores[metric][quartile][granularity][numreplaced]))
errorlist.append(error(scores[metric][quartile][granularity][numreplaced]))
line=axarr[i][j].errorbar(numreplacedlist,signallist,yerr=errorlist,color='green',linewidth=3)
if len(lines)==1:
lines.append(line)
legend.append("Ancestral Replacement")
# plotting noise
decaytype="AnnotationReplacement"
if "simGIC" in metric or "Groupwise_Jaccard" in metric:
noisefile="../../results/FullDistribution/"+decaytype+"/Noise/Distributions/"+granularity+"_Noise_Quartile"+str(quartile)+"_ProfileSize"+str(profilesize)+"_"+metric+"_Results.tsv"
else:
noisefile="../../results/FullDistribution/"+decaytype+"/Noise/Distributions/"+granularity+"_NoiseDecay_Quartile"+str(quartile)+"_ProfileSize"+str(profilesize)+"_"+metric+"_Results.tsv"
if os.path.isfile(noisefile):
noisedist= json.load(open(noisefile))
line=axarr[i][j].axhline(y=np.percentile(noisedist,99.9),linestyle='--',color='black',label='_nolegend_')
if len(lines)==2:
lines.append(line)
legend.append("99.9 percentile noise")
if j==2:
j=0
i+=1
else:
j+=1
"""
Explanation: 3. Plot decay and noise
End of explanation
"""
|
SlipknotTN/udacity-deeplearning-nanodegree | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts whether the data it has received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out: tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(inputs=z, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(tf.scalar_mul(alpha, h1), h1)
# Logits and tanh output
logits = tf.layers.dense(inputs=h1, units=out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
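A quick numerical check of this piecewise function, using NumPy rather than TensorFlow (illustrative only; the network code above implements the same thing with tf.maximum):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # max(alpha * x, x): negative inputs are scaled by alpha, positive ones pass through
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
out = leaky_relu(x)  # [-0.02, -0.005, 0.0, 0.5, 2.0]
```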
Tanh Output
The generator has been found to perform best with a $\tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
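Since $\tanh$ outputs lie in $(-1, 1)$, the pixel rescaling and its inverse are just linear maps — a tiny NumPy sketch:

```python
import numpy as np

images = np.random.rand(4, 784)   # MNIST-style batch with pixels in [0, 1]
scaled = images * 2 - 1           # rescale to [-1, 1] to match the tanh generator output
restored = (scaled + 1) / 2       # inverse map, e.g. for displaying generated samples
```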
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(inputs=x, units=n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(tf.scalar_mul(alpha, h1), h1)
# Logits and tanh output
logits = tf.layers.dense(inputs=h1, units=1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size, z_dim=z_size)
# Generator network here
g_model = generator(z=input_z, out_dim=input_size, alpha=alpha, n_units=g_hidden_size)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(x=input_real, alpha=alpha, n_units=d_hidden_size, reuse=False)
d_model_fake, d_logits_fake = discriminator(x=g_model, alpha=alpha, n_units=d_hidden_size, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [generatorVar for generatorVar in t_vars if generatorVar.name.startswith('generator')]
d_vars = [discriminatorVar for discriminatorVar in t_vars if discriminatorVar.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
#batch[1] is the MNIST label, but here we don't need it
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
# Scale from [0,1] to [-1,1]
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = sess.run(g_loss, {input_z: batch_z})
#Alternative way
#train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
sbenthall/bigbang | examples/experimental_notebooks/Show Interaction Graph.ipynb | agpl-3.0 | %matplotlib inline
"""
Explanation: This notebook shows how BigBang can be used to display a graph of interactions in the mailing list over some period of time.
First we'll make the IPython notebook display computed visualizations inline.
End of explanation
"""
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
"""
Explanation: Next we'll import dependencies.
End of explanation
"""
urls = ["ipython-dev","hot",
"scipy-user",
"https://lists.wikimedia.org/pipermail/gendergap/",
"ipython-user"]
archives= [Archive(url,archive_dir="../archives") for url in urls]
"""
Explanation: Now we will use BigBang to process mailing list archives we've already downloaded.
Note that you can load an Archive that you have stored locally in a .csv file just by using its shortened name. You can also still include full URL's if you haven't downloaded the data yet, or you aren't sure.
End of explanation
"""
date_from = pd.datetime(2011,11,1,tzinfo=pytz.utc)
date_to = pd.datetime(2011,12,1,tzinfo=pytz.utc)
"""
Explanation: Here we will set the window for analysis. By default, November 2011.
End of explanation
"""
def filter_by_date(df,d_from,d_to):
return df[(df['Date'] > d_from) & (df['Date'] < d_to)]
"""
Explanation: This is a helper function to select messages from a dataframe that fall within a certain range of dates.
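A hypothetical usage example on a toy frame with the same tz-aware 'Date' column as the Archive data (the names here are made up for illustration):

```python
import pandas as pd

def filter_by_date(df, d_from, d_to):
    return df[(df['Date'] > d_from) & (df['Date'] < d_to)]

df = pd.DataFrame({
    'Date': pd.to_datetime(['2011-10-15', '2011-11-15', '2011-12-15']).tz_localize('UTC'),
    'From': ['alice', 'bob', 'carol'],
})
window = filter_by_date(df,
                        pd.Timestamp('2011-11-01', tz='UTC'),
                        pd.Timestamp('2011-12-01', tz='UTC'))
# only bob's 2011-11-15 message falls inside the November window
```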
End of explanation
"""
def draw_interaction_graph(ig):
#pdig = nx.to_pydot(ig)
#pdig.set_overlap('False')
pos = nx.graphviz_layout(ig,prog='neato')
node_size = [data['sent'] * 40 for name,data in ig.nodes(data=True)]
nx.draw(ig,
pos,
node_size = node_size,
node_color = 'w',
alpha = 0.4,
font_size=18,
font_weight='bold'
)
# edge width is proportional to replies sent
edgewidth=[d['weight'] for (u,v,d) in ig.edges(data=True)]
#overlay edges with width based on weight
nx.draw_networkx_edges(ig,pos,alpha=0.5,width=edgewidth,edge_color='r')
"""
Explanation: A function for drawing interaction graphs.
TODO: Move this into the library code
End of explanation
"""
plt.figure(230,figsize=(12.5, 7.5))
for i,arx in enumerate(archives):
plt.subplot(231 + i) # subplot indices are 1-based; key to the index of this mailing list
df = arx.data
dff = filter_by_date(df,date_from,date_to)
ig = graph.messages_to_interaction_graph(dff)
print(urls[i])
print(nx.degree_assortativity_coefficient(ig))
draw_interaction_graph(ig)
plt.show()
dfs = [filter_by_date(arx.data,
date_from,
date_to) for arx in archives]
bdf = pd.concat(dfs)
#RG = graph.messages_to_reply_graph(messages)
IG = graph.messages_to_interaction_graph(bdf)
pdig = nx.to_pydot(IG)
pdig.set_overlap('False')
"""
Explanation: Now we'll use BigBang's graph processing methods to turn the processed messages into a graph of interactions.
End of explanation
"""
plt.figure(figsize=(12.5,7.5))
draw_interaction_graph(IG)
plt.show()
nx.write_edgelist(IG, "ig-edges.txt",delimiter="\t")
nx.write_gexf(IG,"all.gexf")
arx = archives[0]
arx.data
"""
Explanation: Lastly, we use NetworkX's built in compatibility with Matplotlib to visualize the graph.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/efd09079125b2bd222e2dd62aaaccfa4/source_space_snr.ipynb | bsd-3-clause | # Author: Padma Sundaram <tottochan@gmail.com>
# Kaisu Lankinen <klankinen@mgh.harvard.edu>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR
maxidx = np.argmax(ave)
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
"""
Explanation: Computing source space SNR
This example shows how to compute and plot source space SNR as in
:footcite:`GoldenholzEtAl2009`.
End of explanation
"""
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
"""
Explanation: EEG
Next we do the same for EEG and plot the result on the cortex:
End of explanation
"""
|
franzpl/StableGrid | jupyter_notebooks/clock_frequency_accuracy.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: On the influence of temperature in 16 MHz clock frequency measurement
This notebook discusses the accuracy and stability of two types of clock generators (ceramic resonator & quartz) under the influence of temperature for high-precision frequency measurements.
End of explanation
"""
data_ceramic = np.genfromtxt('temp_and_freq_error_data_ceramic')
freq_error_ceramic = data_ceramic[:3600, 0]
temp_ceramic = data_ceramic[:, 1]
t = np.arange(0, len(freq_error_ceramic))
fig, ax1 = plt.subplots()
plt.title("Clock frequency accuracy at 26 °C (ceramic resonator)")
plt.grid()
ax1.plot(t / 60, freq_error_ceramic, color='b')
ax1.set_ylabel("Frequency Error / Hz", color='b')
ax1.set_xlabel("t / min")
ax1.tick_params('y', colors='b')
plt.ylim([7000,9000])
ax2 = ax1.twinx()
ax2.plot(t / 60, (freq_error_ceramic / 16000000) * 10**6, color='r')
ax2.set_ylabel('PPM', color='r')
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.show()
"""
Explanation: Ceramic Resonator
End of explanation
"""
std_dev_ceramic = np.std(freq_error_ceramic)
"""
Explanation: Standard Deviation
End of explanation
"""
average_ceramic = np.average(freq_error_ceramic)
ppm_average_ceramic = (average_ceramic / 16000000) * 10**6
ppm_average_ceramic
"""
Explanation: Average
End of explanation
"""
(50 * 500 * 10**-6) * 1000 # (mains frequency * ppm tolerance) * 1000, in mHz
np.max(freq_error_ceramic)
np.min(freq_error_ceramic)
"""
Explanation: The frequency tolerance of the ceramic resonator used on the Arduino UNO is approx. 500 ppm. For mains frequency measurements, this means:
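The same arithmetic as a small helper (a sketch; 50 Hz mains and the 500 ppm tolerance are the values used in this notebook):

```python
def mains_error_mhz(f_nominal_hz, tolerance_ppm):
    # absolute frequency-measurement error in mHz for a given clock tolerance
    return f_nominal_hz * tolerance_ppm * 1e-6 * 1000

error = mains_error_mhz(50, 500)  # about 25 mHz for the ceramic resonator
```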
End of explanation
"""
data_temp_ceramic = np.genfromtxt('ceramic_behaviour_increasing_temperature')
freq_error_temp_ceramic = data_temp_ceramic[:, 0]
increasing_temp_ceramic = data_temp_ceramic[:, 1]
fig, ax3 = plt.subplots()
plt.title("Clock frequency stability (ceramic resonator)")
plt.grid()
ax3.plot(increasing_temp_ceramic, freq_error_temp_ceramic, color='b')
ax3.set_ylabel("Frequency Error / Hz", color='b')
ax3.set_xlabel("T / °C")
ax3.tick_params('y', colors='b')
ax4 = ax3.twinx()
ax4.plot(increasing_temp_ceramic, (freq_error_temp_ceramic / 16000000) * 10**6, color='r')
ax4.set_ylabel('PPM', color='r')
ax4.tick_params('y', colors='r')
fig.tight_layout()
plt.show()
"""
Explanation: The error for mains frequency measurements is 25 mHz.
Ceramic resonator behaviour under the influence of increasing temperature
End of explanation
"""
data_quartz = np.genfromtxt('temp_and_freq_error_data_quartz')
freq_error_quartz = data_quartz[:3600, 0]
temp_quartz = data_quartz[:, 1]
t = np.arange(0, len(freq_error_quartz))
fig, ax5 = plt.subplots()
plt.title("Clock frequency accuracy at 26 °C (quartz)")
plt.grid()
ax5.plot(t / 60, freq_error_quartz, color='b')
ax5.set_ylabel("Frequency Error / Hz", color='b')
ax5.tick_params('y', colors='b')
ax5.set_xlabel("t / min")
ax6 = ax5.twinx()
ax6.plot(t / 60, (freq_error_quartz / 16000000) * 10**6, color='r')
ax6.set_ylabel('PPM', color='r')
ax6.tick_params('y', colors='r')
fig.tight_layout()
plt.show()
"""
Explanation: Quartz
End of explanation
"""
std_dev_quartz = np.std(freq_error_quartz)
"""
Explanation: Standard Deviation
End of explanation
"""
average_quartz = np.average(freq_error_quartz)
ppm_average_quartz = (average_quartz / 16000000) * 10**6
ppm_average_quartz
average_quartz
np.max(freq_error_quartz)
np.min(freq_error_quartz)
"""
Explanation: Average
End of explanation
"""
data_temp_quartz = np.genfromtxt('quartz_behaviour_increasing_temperature.txt')
freq_error_temp_quartz = data_temp_quartz[:, 0]
increasing_temp_quartz = data_temp_quartz[:, 1]
fig, ax7 = plt.subplots()
plt.title("Clock frequency stability (quartz)")
plt.grid()
ax7.plot(increasing_temp_quartz, freq_error_temp_quartz, color='b')
ax7.set_ylabel("Frequency Error / Hz", color='b')
ax7.tick_params('y', colors='b')
ax7.set_xlabel("T / °C")
ax8 = ax7.twinx()
ax8.plot(increasing_temp_quartz, (freq_error_temp_quartz / 16000000) * 10**6, color='r')
ax8.set_ylabel('PPM', color='r')
ax8.tick_params('y', colors='r')
fig.tight_layout()
plt.show()
"""
Explanation: Quartz behaviour under the influence of increasing temperature
End of explanation
"""
|
hpparvi/ldtk | notebooks/A2_redifining_stellar_edge.ipynb | gpl-2.0 | %pylab inline
from scipy.interpolate import interp1d
from os.path import join
import seaborn as sb
sb.set_style('white')
from ldtk.core import SIS, ldtk_root
sis = SIS(join(ldtk_root,'cache_lowres','Z-0.0','lte05800-5.50-0.0.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits'))
mu_o, z_o, ip_o = sis.mu, sis.z, sis.intensity_profile()
"""
Explanation: Appendix 2: Redefining the stellar edge and resampling
Author: Hannu Parviainen<br>
Last modified: 20.8.2020
The spherical models extend quite a bit over what we'd consider the 'edge' of the star (Wittkowski et al. A&A 413, 2004; Espinoza & Jordan, MNRAS 450, 2015). Thus, the edge needs to be (re)defined, and the z and $\mu$ need to be recomputed using the new edge distance. Also, while the used $\mu$ sampling works to capture the detail close to the stellar limb, it is not optimal for LD model fitting, as it gives way too much weight to the edge.
LDTk redefines the limb automatically at the LDPSet initialisation, and this can be done manually using the LDPSet.set_limb_z and LDPSet.set_limb_mu methods. The $\mu$ sampling can be changed using the LDPSet.resample_linear_z, LDPSet.resample_linear_mu, and LDPSet.resample methods.
End of explanation
"""
def plot_ld(mu, z, ip, mu_range=(-0.01,1.01), ylim=(-0.01,1.01)):
fig,ax = subplots(1,2,figsize=(13,4), sharey=True)
ax[0].plot(mu, ip)
ax[0].scatter(mu, ip, marker='o')
ax[1].plot(z, ip)
ax[1].scatter(z, ip, marker='o')
setp(ax, xlim=(0,1.01), ylim=ylim)
setp(ax[0], xlim=mu_range, xlabel='$\mu$')
setp(ax[1], xlim=sqrt(1- clip(array(mu_range)**2,-5,1))[::-1], xlabel='z')
fig.tight_layout()
return fig,ax
"""
Explanation: Redefining the stellar edge
End of explanation
"""
plot_ld(mu_o, z_o, ip_o);
plot_ld(mu_o, z_o, ip_o, mu_range=(0.02,0.07), ylim=(-0.01,0.4));
"""
Explanation: Let's first plot the stellar intensity profile as a function of $\mu$ and $z$, and take a closer look at what happens near the edge.
End of explanation
"""
ipm = argmax(abs(diff(sis.intensity_profile())/diff(sis.z)))
fig, axs = subplots(2, 2, figsize=(13,4), sharex='col')
axs[0,0].plot(sis.mu, sis.intensity_profile())
axs[1,0].plot(sis.mu[1:], abs(diff(sis.intensity_profile())/diff(sis.mu)))
[ax.axvline(sis.mu[ipm+1]) for ax in axs[:,0]]
axs[0,1].plot(sis.z, sis.intensity_profile())
axs[1,1].plot(sis.z[1:], abs(diff(sis.intensity_profile())/diff(sis.z)))
axs[1,1].axvline(sis.z[ipm+1])
[ax.axvline(sis.z[ipm+1]) for ax in axs[:,1]]
setp(axs[0,0], xlim=(0.02,0.07), ylabel='I', yticks=[], ylim=(0,0.45))
setp(axs[0,1], xlim=sqrt(1-array((0.02,0.07))**2)[::-1], ylabel='I', yticks=[], ylim=(0,0.45))
setp(axs[1,0], xlabel='$\mu$', xlim=(0.02,0.07), ylabel='dI / d$\mu$', yticks=[])
setp(axs[1,1], xlabel='z', xlim=sqrt(1-array((0.02,0.07))**2)[::-1], ylabel='dI / dz', yticks=[])
#setp(ax[1], xlim=(0.9,1.001))
fig.tight_layout()
i = argmax(abs(diff(ip_o)/diff(z_o)))
z_new = z_o[i:]/z_o[i]
mu_new = sqrt(1-z_new**2)
ip_new = ip_o[i:]
plot_ld(mu_new, z_new, ip_new);
"""
Explanation: It's clear that the sampling continues over the stellar edge. We need to fix this by redefining the edge and recalculating the $z$ and $\mu$ values. I choose the same approach as Espinoza & Jordan (MNRAS 450, 2015): we define the edge to correspond to the $z$ value where the derivative of the intensity profile is maximum.
End of explanation
"""
z_rs = linspace(0,1,100)
mu_rs = sqrt(1-z_rs**2)
ip_rs = interp1d(z_new[::-1], ip_new[::-1], kind='linear', assume_sorted=True)(z_rs)
ip_f1 = poly1d(polyfit(mu_new, ip_new, 2))(mu_rs)
ip_f2 = poly1d(polyfit(mu_rs, ip_rs, 2))(mu_rs)
fig,ax = plot_ld(mu_rs, z_rs, ip_rs);
ax[0].plot(mu_rs, ip_f1)
ax[1].plot(z_rs, ip_f1);
fig,ax = plot_ld(mu_rs, z_rs, ip_rs);
ax[0].plot(mu_rs, ip_f2)
ax[1].plot(z_rs, ip_f2);
fig,ax = plot_ld(mu_new, z_new, ip_new);
ax[0].plot(mu_rs, ip_f2)
ax[1].plot(z_rs, ip_f2);
"""
Explanation: Resampling
End of explanation
"""
|
mikelseverson/Udacity-Deep_Learning-Nanodegree | tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation-checkpoint.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 100)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_to_int = {}
int_to_vocab = {}
dict_index = 0
for word in text:
if word not in vocab_to_int:
vocab_to_int[word] = dict_index
int_to_vocab[dict_index] = word
dict_index += 1
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.' : "||Period||",
',' : "||Comma||",
'"' : "||Quotation_Mark||",
';' : "||Semicolon||",
'!' : "||Exclamation_Mark||",
'?' : "||Question_Mark||",
'(' : "||Left_Parentheses||",
')' : "||Right_Parentheses||",
'--' : "||Dash||",
'\n' : "||Return||"
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
input = tf.placeholder(tf.int32, shape=(None, None), name='input')
targets = tf.placeholder(tf.int32, shape=(None, None), name='targets')
learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
return input, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell])
initialized = rnn_cell.zero_state(batch_size, tf.float32)
initialized = tf.identity(initialized, name="initial_state")
return rnn_cell, initialized
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
output, finalState = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
finalState = tf.identity(finalState, "final_state")
return output, finalState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedded = get_embed(input_data, vocab_size, rnn_size)
outputs, state = build_rnn(cell, embedded)
outputs = tf.reshape(outputs, [-1, rnn_size])
w = tf.Variable(tf.truncated_normal((rnn_size, vocab_size), stddev=0.01))
b = tf.Variable(tf.zeros(vocab_size))
logits = tf.matmul(outputs, w) + b
# reshape the flat logits back to (batch size, sequence length, vocab_size)
in_shape = tf.shape(input_data)
logits = tf.reshape(logits, [in_shape[0], in_shape[1], vocab_size])
return logits, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batches = (len(int_text) - 1) // (batch_size * seq_length)
# keep one extra word so every input word has a target; drop any leftover words
words = int_text[:n_batches * batch_size * seq_length + 1]
xdata = np.array(words[:-1]).reshape(batch_size, -1)
ydata = np.array(words[1:]).reshape(batch_size, -1)
batches = [[xdata[:, i * seq_length:(i + 1) * seq_length], ydata[:, i * seq_length:(i + 1) * seq_length]] for i in range(n_batches)]
return np.array(batches)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# sample the next word id from the predicted probability distribution
word_id = np.random.choice(len(probabilities), p=probabilities)
return int_to_vocab[word_id]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
usantamaria/ipynb_para_docencia | 05_python_errores/errores.ipynb | mit | """
IPython Notebook v4.0 for Python 3.0
Additional libraries: IPython, pdb
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
"""
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Errors in Python
Objectives
Learn to diagnose and fix common errors in Python.
Learn common debugging techniques.
0.1 Instructions
Instructions for installing and using an IPython notebook can be found at the following link.
After downloading and opening this notebook, remember to:
* Work through the problems sequentially.
* Save frequently with Ctrl-S to avoid surprises.
* Replace FIX_ME in the code cells with the corresponding code.
* Run each code cell using Ctrl-Enter.
0.2 Licensing and Configuration
Run the following cell using Ctrl-Enter.
End of explanation
"""
import numpy as np
def promedio_positivo(a):
pos_mean = a[a>0].mean()
return pos_mean
N = 100
x = np.linspace(-1,1,N)
y = 0.5 - x**2 # Do not change this line
print(promedio_positivo(y))
# Error 1:
# Error 2:
# Error 3:
# Error 4:
# Error 5:
"""
Explanation: Contents
Introduction
Debugging techniques.
About this Notebook
There are 4 challenges:
* In every case, document what you find. Write the errors you detect as a #comment or a """comment""".
* In challenge 1: Run the cell, read the output, and fix the code. Comment the 5 errors in the same cell.
* In challenge 2: Run the cell and find the errors using print. Comment the 3 errors in the same cell.
* In challenge 3: Run the file ./mat281_code/desafio_3.py and find the 3 errors using pdb.set_trace()
* In challenge 4: Run the file ./mat281_code/desafio_4.py and find the 3 errors using IPython.embed()
1. Introduction
Debugging: removing errors from a computer program.
* Easily 40-60% of the time spent creating a program.
* No program is free of bugs/errors.
* It is impossible to guarantee 100% safe usage by the end user.
* Computer programs contain inconsistencies/implementation errors.
* Even hardware can have errors!
1. Introduction
Why are they called bugs?
There are records in Thomas Edison's correspondence, from 1878, using "bugs" to refer to errors in his inventions. The term was used occasionally in the computing field. In 1947, the Mark II computer exhibited an error. While looking for its source, the technicians found a moth that had gotten inside the machine.
<img src="images/bug.jpg" alt="" width="600px" align="middle"/>
The full story is at the following Wikipedia link (English).
2. Debugging Techniques
Read the output produced by Python for possible errors
Using print
Using pdb: the Python debugger
Conditional launch of IPython embed
2.1 Debugging: Reading error output
When a program fails and produces an error message, it is usually easy to fix.
The error message gives the line where the error was detected and the type of error.
PROS:
* Self-explanatory
* Easy to detect and fix
CONS:
* Not all errors produce an error message, particularly conceptual errors.
2.1.1 List of common errors
The most common errors in a program are the following:
* SyntaxError:
* Parentheses do not close properly.
* Missing quotes in a string.
* Missing colon when defining an if-elif-else block, a function, or a loop.
* NameError:
* A variable is used that does not exist (name misspelled, or defined after the point where it is used).
* The function or variable has not been defined yet.
* The required module has not been imported.
* IOError: The file to open does not exist.
* KeyError: The key does not exist in the dictionary.
* TypeError: The function cannot be applied to the object.
* IndentationError: Code blocks are not well defined. Check the indentation.
A classic error that is hard to detect is unintended assignment: writing a=b when you actually want to test the equality a==b.
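For instance, a minimal illustration of the difference:

```python
a = 1
b = 2

are_equal = (a == b)  # comparison: tests whether a equals b -> False
a = b                 # assignment: silently overwrites a; a is now 2, no error is raised
```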
Challenge 1
Fix the following Python program so it works. It contains 5 errors. Note the errors as comments in the code.
Once it runs without errors, it should return the value 0.333384348536.
End of explanation
"""
def fibonacci(n):
"""
Must return the list of the first n Fibonacci numbers.
For n<1, return [].
For n=1, return [1].
For n=2, return [1,1].
For n=3, return [1,1,2].
For n=4, return [1,1,2,3].
And so on
"""
a = 1
b = 1
fib = [a,b]
count = 2
if n<1:
return []
if n=1:
return [1]
while count <= n:
aux = a
a = b
b = aux + b
count += 1
fib.append(aux)
return fib
print("fibonacci(-1):", fibonacci(-1)) # Should be []
print("fibonacci(0):", fibonacci(0)) # Should be []
print("fibonacci(1):", fibonacci(1)) # Should be [1]
print("fibonacci(2):", fibonacci(2)) # Should be [1,1]
print("fibonacci(3):", fibonacci(3)) # Should be [1,1,2]
print("fibonacci(5):", fibonacci(5)) # Should be [1,1,2,3,5]
print("fibonacci(10):", fibonacci(10)) # Should be ...
"""
ERRORES DETECTADOS:
1)
2)
3)
"""
"""
Explanation: 2.2 Debugging: Using print
Using print is the simplest and most common technique, appropriate when the errors are simple.
PROS:
* Easy and quick to implement.
* Lets you inspect variable values throughout an entire program.
CONS:
* Requires writing more complicated expressions to examine more than one variable at a time.
* Printing does not help when studying multidimensional data (arrays, matrices, large dictionaries).
* Removing many print statements can be hard in a large program.
* Inappropriate if the program takes too long to run (for example if it has to read a file from disk), since prints are usually inserted over several runs while "chasing" a variable's value.
Tip
If you want to inspect the variable mi_variable_con_error, use
print("!!!" + str(mi_variable_con_error))
or
print(mi_variable_con_error) #!!!
That way it is easier to spot the printed variable in the output, and after fixing the bug it is also easier to remove the print expressions that were inserted for debugging (they will not be confused with the prints that genuinely belong to the program).
Challenge 2
Find out why the program misbehaves, using print wherever it seems appropriate.
Do not delete the prints you introduce; just comment them out with #.
Fix the defect and indicate with a comment in the code where the error was.
End of explanation
"""
# Challenge 3 - Errors found in ./mat281_code/desafio_3.py
"""
The following errors were detected:
1- FIX ME - COMMENT HERE
2- FIX ME - COMMENT HERE
3- FIX ME - COMMENT HERE
"""
"""
Explanation: 2.3 Debugging: Using pdb
Python ships with a default debugger: pdb (the Python debugger), which is similar to gdb (the C debugger).
PROS:
* Lets you inspect the actual state of the program at a given moment.
* Lets you execute the instructions that follow.
CONS:
* Requires knowing its commands.
* Has no tab completion like IPython.
pdb works much like breakpoints in MATLAB.
You need to do the following:
Import the library
import pdb
Request inspection at the lines that potentially contain the error. To do so, insert on a new line, with the proper indentation:
pdb.set_trace()
Run the program as you normally would:
$ python mi_programa_con_error.py
After the steps above, pdb executes every instruction up to the first pdb.set_trace() and hands the terminal back to the user to inspect variables and review the code. The main commands to memorize are:
n + Enter: executes the next instruction (line).
c + Enter: continues program execution until the next pdb.set_trace() or the end of the program.
l + Enter: shows which line is currently being executed.
p mi_variable + Enter: prints the variable mi_variable.
Enter: repeats the last action performed in pdb.
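As a sketch, a hypothetical script where such a breakpoint could be placed (commented out here so the cell can run unattended):

```python
import pdb  # imported so the commented-out breakpoint below can simply be enabled

def buggy_mean(values):
    total = 0
    # pdb.set_trace()  # uncomment to pause here; then try: p values, p total, n, c
    for v in values:
        total += v
    return total / len(values)

print(buggy_mean([2.0, 4.0, 6.0]))  # prints 4.0
```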
2.3.1 Example
Run the file ./mat281_code/ejemplo_pdb.py and follow the instructions you get:
$ python ./mat281_code/ejemplo_pdb.py
Challenge 3
Use pdb to debug the file ./mat281_code/desafio_3.py.
Challenge 3 consists of finding 3 errors in a defective implementation of the secant method:
Wikipedia link
Instructions:
* After using pdb.set_trace(), do not delete the inserted line; just comment it out with # so its usage can be reviewed.
* Note in the cell below the errors you found in the file ./mat281_code/desafio_3.py
End of explanation
"""
# Challenge 4 - Errors found in ./mat281_code/desafio_4.py
"""
The following errors were detected:
1- FIX ME - COMMENT HERE
2- FIX ME - COMMENT HERE
3- FIX ME - COMMENT HERE
"""
"""
Explanation: 2.4 Debugging: Using IPython
PROS:
* Lets you inspect the actual state of the program at a given moment.
* Lets you evaluate any expression in a simple way.
* Lets you plot, print matrices, etc.
* Has IPython tab completion.
* Has the full power of IPython (%who, %whos, etc.)
CONS:
* Does not allow stepping to the next instruction like n+Enter in pdb.
IPython works as follows:
Import the library
import IPython
Request inspection at the lines that potentially contain the error. To do so, insert on a new line, with the proper indentation:
IPython.embed()
Run the program as you normally would:
$ python mi_programa_con_error.py
After the steps above, Python executes every instruction up to the first IPython.embed() and hands the interactive IPython terminal to the user at the selected point, to inspect variables and review the code.
To exit IPython, use Ctrl+d.
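A sketch of where such a call could sit (hypothetical function; the embed line is commented out so the cell can run unattended):

```python
def solve(x):
    y = x ** 2 - 1
    # import IPython; IPython.embed()  # uncomment: opens an IPython shell here,
    # with x and y in scope; exit the shell with Ctrl+d to resume execution
    return y

print(solve(3))  # prints 8
```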
2.4.1 Example
Run the file ./mat281_code/ejemplo_ipython.py and follow the instructions you get:
$ python ./mat281_code/ejemplo_ipython.py
Challenge 4
Use IPython to debug the file ./mat281_code/desafio_4.py.
Challenge 4 consists of repairing a defective implementation of the bisection method:
Wikipedia link
Instructions:
* After using IPython.embed(), do not delete the line; just comment it out with # so its usage can be reviewed.
* Note in the cell below the errors you found in the file ./mat281_code/desafio_4.py
End of explanation
"""
|
mliu49/RMG-stuff | Thermo/sdata134k_small_polycyclic.ipynb | mit | from rmgpy.data.rmg import RMGDatabase
from rmgpy import settings
from rmgpy.species import Species
from rmgpy.molecule import Group
from rmgpy.rmg.main import RMG
from IPython.display import display
import numpy as np
import os
import pandas as pd
from pymongo import MongoClient
import logging
logging.disable(logging.CRITICAL)
from bokeh.charts import Histogram
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
host = 'mongodb://user:user@rmg.mit.edu/admin'
port = 27018
client = MongoClient(host, port)
db = getattr(client, 'sdata134k')
db.collection_names()
def get_data(db, collection_name):
collection = getattr(db, collection_name)
db_cursor = collection.find()
# collect data
print('reading data...')
db_mols = []
for db_mol in db_cursor:
db_mols.append(db_mol)
print('done')
return db_mols
database = RMGDatabase()
database.load(
settings['database.directory'],
thermoLibraries=[],
kineticsFamilies='none',
kineticsDepositories='none',
reactionLibraries = []
)
thermoDatabase = database.thermo
# fetch testing dataset
collection_name = 'small_cyclic_table'
db_mols = get_data(db, collection_name)
print len(db_mols)
"""
Explanation: Thermochemistry Validation Test
Han, Kehang (hkh12@mit.edu)
This notebook is designed to use a big set of tricyclics for testing the performance of new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed isomorphic check.
Set up
End of explanation
"""
filterList = [
Group().fromAdjacencyList("""1 R u0 p0 c0 {2,[S,D,T]} {9,[S,D,T]}
2 R u0 p0 c0 {1,[S,D,T]} {3,[S,D,T]}
3 R u0 p0 c0 {2,[S,D,T]} {4,[S,D,T]}
4 R u0 p0 c0 {3,[S,D,T]} {5,[S,D,T]}
5 R u0 p0 c0 {4,[S,D,T]} {6,[S,D,T]}
6 R u0 p0 c0 {5,[S,D,T]} {7,[S,D,T]}
7 R u0 p0 c0 {6,[S,D,T]} {8,[S,D,T]}
8 R u0 p0 c0 {7,[S,D,T]} {9,[S,D,T]}
9 R u0 p0 c0 {1,[S,D,T]} {8,[S,D,T]}
"""),
Group().fromAdjacencyList("""1 R u0 p0 c0 {2,S} {5,S}
2 R u0 p0 c0 {1,S} {3,D}
3 R u0 p0 c0 {2,D} {4,S}
4 R u0 p0 c0 {3,S} {5,S}
5 R u0 p0 c0 {1,S} {4,S} {6,S} {9,S}
6 R u0 p0 c0 {5,S} {7,S}
7 R u0 p0 c0 {6,S} {8,D}
8 R u0 p0 c0 {7,D} {9,S}
9 R u0 p0 c0 {5,S} {8,S}
"""),
]
test_size = 0
R = 1.987 # unit: cal/mol/K
validation_test_dict = {} # key: spec.label, value: (thermo_heuristic, thermo_qm)
spec_labels = []
spec_dict = {}
H298s_qm = []
Cp298s_qm = []
H298s_gav = []
Cp298s_gav = []
for db_mol in db_mols:
smiles_in = str(db_mol["SMILES_input"])
spec_in = Species().fromSMILES(smiles_in)
for grp in filterList:
if spec_in.molecule[0].isSubgraphIsomorphic(grp):
break
else:
spec_in.generateResonanceIsomers()
spec_labels.append(smiles_in)
# qm: just free energy but not free energy of formation
G298_qm = float(db_mol["G298"])*627.51 # unit: kcal/mol
H298_qm = float(db_mol["Hf298(kcal/mol)"]) # unit: kcal/mol
Cv298_qm = float(db_mol["Cv298"]) # unit: cal/mol/K
Cp298_qm = Cv298_qm + R # unit: cal/mol/K
H298s_qm.append(H298_qm)
Cp298s_qm.append(Cp298_qm)
# gav
thermo_gav = thermoDatabase.getThermoDataFromGroups(spec_in)
H298_gav = thermo_gav.H298.value_si/4184.0 # unit: kcal/mol
Cp298_gav = thermo_gav.getHeatCapacity(298)/4.184 # unit: cal/mol
H298s_gav.append(H298_gav)
Cp298s_gav.append(Cp298_gav)
spec_dict[smiles_in] = spec_in
"""
Explanation: Validation Test
Collect data from heuristic algorithm and qm library
End of explanation
"""
# create pandas dataframe
validation_test_df = pd.DataFrame(index=spec_labels)
validation_test_df['H298_heuristic(kcal/mol/K)'] = pd.Series(H298s_gav, index=validation_test_df.index)
validation_test_df['H298_qm(kcal/mol/K)'] = pd.Series(H298s_qm, index=validation_test_df.index)
heuristic_qm_diff = abs(validation_test_df['H298_heuristic(kcal/mol/K)']-validation_test_df['H298_qm(kcal/mol/K)'])
validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] = pd.Series(heuristic_qm_diff, index=validation_test_df.index)
display(validation_test_df.head())
print "Validation test dataframe has {0} tricyclics.".format(len(spec_labels))
validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'].describe()
"""
Explanation: Create pandas dataframe for easy data validation
End of explanation
"""
diff20_df = validation_test_df[(validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] > 15)
& (validation_test_df['H298_heuristic_qm_diff(kcal/mol/K)'] <= 500)]
len(diff20_df)
print len(diff20_df)
for smiles in diff20_df.index:
print "***********heur = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_heuristic(kcal/mol/K)'])
print "***********qm = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_qm(kcal/mol/K)'])
spe = spec_dict[smiles]
display(spe)
"""
Explanation: categorize error sources
End of explanation
"""
p = figure(plot_width=500, plot_height=400)
# plot_df = validation_test_df[validation_test_df['H298_heuristic_qm_diff(kcal/mol)'] < 10]
plot_df = validation_test_df
# add a square renderer with a size, color, and alpha
p.circle(plot_df['H298_heuristic(kcal/mol/K)'], plot_df['H298_qm(kcal/mol/K)'],
size=5, color="green", alpha=0.5)
x = np.array([-50, 200])
y = x
p.line(x=x, y=y, line_width=2, color='#636363')
p.line(x=x, y=y+10, line_width=2,line_dash="dashed", color='#bdbdbd')
p.line(x=x, y=y-10, line_width=2, line_dash="dashed", color='#bdbdbd')
p.xaxis.axis_label = "H298 GAV (kcal/mol)"
p.yaxis.axis_label = "H298 Quantum (kcal/mol)"
p.xaxis.axis_label_text_font_style = "normal"
p.yaxis.axis_label_text_font_style = "normal"
p.xaxis.axis_label_text_font_size = "16pt"
p.yaxis.axis_label_text_font_size = "16pt"
p.xaxis.major_label_text_font_size = "12pt"
p.yaxis.major_label_text_font_size = "12pt"
show(p)
len(plot_df.index)
"""
Explanation: Parity Plot: heuristic vs. qm
End of explanation
"""
from bokeh.models import Range1d
hist = Histogram(validation_test_df,
values='H298_heuristic_qm_diff(kcal/mol/K)', xlabel='H298 Prediction Error (kcal/mol)',
ylabel='Number of Testing Molecules',
bins=50,\
plot_width=500, plot_height=300)
# hist.y_range = Range1d(0, 1640)
hist.x_range = Range1d(0, 20)
show(hist)
with open('validation_test_sdata134k_2903_pyPoly_dbPoly.csv', 'w') as fout:
validation_test_df.to_csv(fout)
"""
Explanation: Histogram of abs(heuristic-qm)
End of explanation
"""
|
kmunve/APS | aps/notebooks/aps_terrain_analysis.ipynb | mit | # -*- coding: utf-8 -*-
%matplotlib inline
from __future__ import print_function
import pylab as plt
import datetime
import netCDF4
import numpy as np
import numpy.ma as ma
from linecache import getline
plt.rcParams['figure.figsize'] = (14, 6)
"""
Explanation: APS terrain analysis
Imports
End of explanation
"""
### From thredds.met.no with 2.5 km resolution
nc_thredds = netCDF4.Dataset("http://thredds.met.no/thredds/dodsC/meps25epsarchive/2017/11/12/meps_mbr0_pp_2_5km_20171112T00Z.nc")
thredds_altitude = nc_thredds.variables["altitude"]
### From hdata\grid with 1 km resolution
#nc_dem = netCDF4.Dataset(r"Y:\metdata\prognosis\meps\det\archive\2017\mepsDet00_PTW_1km_20171111.nc", "r")
nc_alpdtm = netCDF4.Dataset(r"../data/terrain_parameters/AlpDtm.nc", "r")
alpdtm = nc_alpdtm.variables["AlpDtm"]
nc_meandtm = netCDF4.Dataset(r"../data/terrain_parameters/MEANHeight.nc", "r")
meandtm = nc_meandtm.variables["MEANHeight"]
f, (ax1, ax2, ax3) = plt.subplots(1, 3)#, sharex=True, sharey=True)
#plt_thredds = ax1.imshow(np.flipud(thredds_altitude[:]), cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt_meandem = ax1.imshow(meandtm[:], cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt.colorbar(ax=ax1, mappable=plt_meandem)
plt_alpdem = ax2.imshow(alpdtm[:], cmap=plt.cm.hsv,vmin=0, vmax=2500)
plt.colorbar(ax=ax2, mappable=plt_alpdem)
plt_diffdem = ax3.imshow(alpdtm[:]-meandtm[:], cmap=plt.cm.seismic)
plt.colorbar(ax=ax3, mappable=plt_diffdem)
#cbar_dir.set_ticks([-180, -135, -90, -45, 0, 45, 90, 135, 180])
#cbar_dir.set_ticklabels(['S', 'SV', 'V', 'NV', 'N', 'NØ', 'Ø', 'SØ', 'S'])
#plt.title(ts)
plt.show()
# Load region mask
vr = netCDF4.Dataset(r"../data/terrain_parameters/VarslingsOmr_2017.nc", "r")
regions = vr.variables["VarslingsOmr_2017"][:]
ID = 3014 # Lofoten & Vesterålen
#ID = 3029 # Indre Sogn
region_mask = np.where(regions==ID)
# get the lower left and upper right corner of a rectangle around the region
y_min, y_max, x_min, x_max = min(region_mask[0].flatten()), max(region_mask[0].flatten()), min(region_mask[1].flatten()), max(region_mask[1].flatten())
plt.imshow(z)
plt.colorbar(label="M.A.S.L.")
hist, bin_edges = np.histogram(z, bins=[0, 300, 600, 900, 1200, 3000])
hist_perc = hist / (z.shape[1]*z.shape[0] )*100.0
plt.bar(bin_edges[:-1], hist_perc, width=300, color='lightgrey')
"""
Explanation: |Id |Name|
|---|---|
|3003 |Nordenskiöld Land|
|3007 |Vest-Finnmark|
|3009 |Nord-Troms|
|3010 |Lyngen|
|3011 |Tromsø|
|3012 |Sør-Troms|
|3013 |Indre Troms|
|3014 |Lofoten og Vesterålen|
|3015 |Ofoten|
|3016 |Salten|
|3017 |Svartisen|
|3022 |Trollheimen|
|3023 |Romsdal|
|3024 |Sunnmøre|
|3027 |Indre Fjordane|
|3028 |Jotunheimen|
|3029 |Indre Sogn|
|3031 |Voss|
|3032 |Hallingdal|
|3034 |Hardanger|
|3035 |Vest-Telemark|
End of explanation
"""
from netCDF4 import Dataset
fr = Dataset(r"../data/terrain_parameters/VarslingsOmr_2017.nc", "r")
print(fr)
regions = fr.variables["VarslingsOmr_2017"][:]
plt.imshow(regions)
plt.colorbar(label="Region ID")
stroms_mask = np.where(regions==3012)
print(stroms_mask)
# get the lower left and upper right corner of a rectangle around the region
y_min, y_max, x_min, x_max = min(stroms_mask[0].flatten()), max(stroms_mask[0].flatten()), min(stroms_mask[1].flatten()), max(stroms_mask[1].flatten())
region_3012 = regions.copy()
region_3012[y_min:y_max, x_min:x_max] = 32000 # rectangle around the region
region_3012[stroms_mask] = 48000 # exact pixels within the region
plt.imshow(region_3012)
plt.colorbar(label="Region ID")
rr_data = Dataset("../data/met_obs_grid/rr_2016_12_12.nc")
rr = rr_data.variables["precipitation_amount"][:, y_min:y_max, x_min:x_max].squeeze()
#rr = rr_data.variables["precipitation_amount"][:, stroms_mask[0], stroms_mask[1]].squeeze() #crashes the script
print(np.shape(rr))
plt.imshow(rr)
plt.colorbar(label="Precipitation - Sør Troms")
"""
Explanation: We can calculate area above tree-line by combining elevations with a treeline mask.
Extract from region
End of explanation
"""
|
NifTK/NiftyNet | demos/Learning_Rate_Decay/Demo_for_learning_rate_decay_application.ipynb | apache-2.0 | import os,sys
import matplotlib.pyplot as plt
import numpy as np
niftynet_path='your/niftynet/path' # Set your NiftyNet root path here
os.environ['niftynet_config_home'] = niftynet_path
os.chdir(niftynet_path)
"""
Explanation: Demo for Learning Rate Decay Application
This demo will address how to make use of the learning rate scheduler in the context of a segmentation task. The scheduler allows for the learning rate to evolve predictably with training iterations (this will typically be a decay over time). This demo assumes you have a working installation of NiftyNet.
Preparation:
1) Ensure you are in the NiftyNet root, and set this root as an environment variable:
End of explanation
"""
f = open("config.ini", "w+")
f.write("[global]\n")
f.write("home = {}".format(niftynet_path))
f.close()
"""
Explanation: 2) Acquire the data. We will make use of a publicly available hippocampus segmentation dataset in this demo. Prior to this you will need to create a file named 'config.ini' in your NiftyNet root folder with the following format:
ini
[global]
home = /your/niftynet/path
The following code snippet will create the file for you:
End of explanation
"""
%run net_download.py decathlon_hippocampus -r
"""
Explanation: Now you are ready to download the data which you can do by running the following cell:
End of explanation
"""
import os
import sys
import niftynet
sys.argv=['','train','-a','demos.Learning_Rate_Decay.Demo_applications.decay_lr_comparison_application.DecayLearningRateApplication','-c',os.path.join('demos','Learning_Rate_Decay','learning_rate_demo_train_config.ini'),'--max_iter','500','--lr','25.0','--model_dir','./models/decay']
niftynet.main()
sys.argv=['','train','-a','demos.Learning_Rate_Decay.Demo_applications.no_decay_lr_comparison_application.DecayLearningRateApplication','-c',os.path.join('demos','Learning_Rate_Decay','learning_rate_demo_train_config.ini'),'--max_iter','500','--lr','25.0','--model_dir','./models/no_decay']
niftynet.main()
decay_model_dir = niftynet_path + '/models/decay'
no_decay_model_dir = niftynet_path + '/models/no_decay'
def parse_ce_losses(model_dir):
    # Parse iteration numbers and cross-entropy losses from the latest
    # training run recorded in the NiftyNet training log.
    with open(os.path.join(model_dir, 'training_niftynet_log'), 'r') as f:
        data = ' '.join(f.readlines())
    last_run = data.rpartition('Parameters from random initialisations')[-1]
    raw_lines = [l.split(',')[1:] for l in last_run.split('\n') if 'loss' in l]
    iterations = [int(l[0].split(':')[1].split(' ')[-1]) for l in raw_lines]
    losses = [float(l[1].split('=')[1]) for l in raw_lines]
    return iterations, losses

iterations, decay_CE_losses = parse_ce_losses(decay_model_dir)
iterations, no_decay_CE_losses = parse_ce_losses(no_decay_model_dir)
fig = plt.figure(figsize=(10, 4.5), dpi=80)
plt.plot([np.mean(no_decay_CE_losses[l:l+10]) for l in range(0,len(no_decay_CE_losses), 10)],
color='red', label='Constant lr')
plt.plot([np.mean(decay_CE_losses[l:l+10]) for l in range(0,len(decay_CE_losses), 10)],
color='green', label='Decaying lr')
plt.title("Smoothed loss curves", fontsize=20)
plt.legend(fontsize=16)
plt.show()
"""
Explanation: The training data, labels, and test data will be downloaded into the /data/decathlon_hippocampus/ folder in your NiftyNet root directory.
The configuration file:
The configuration file in NiftyNet specifies all the parameters pertinent to the training/inference task at hand, including but not limited to: training data location, choice of network, learning rate, etc. <br>
In this instance a configuration file has been provided (learning_rate_demo_train_config.ini) with default settings for a segmentation training task. Note that these settings can be overridden on the command line or by simply editing the configuration file. For further information regarding the configuration file and the individual variables refer to the relevant documentation here.
Training a network from the command line:
To run this application from the command line using the variables specified in the configuration file:
python net_run.py train -a niftynet.contrib.learning_rate_schedule.decay_lr_application.DecayLearningRateApplication
-c /demos/Learning_Rate_Decay/learning_rate_demo_train_config.ini --max_iter 90
With the current setup, the learning rate will halve every three iterations. NiftyNet will create logs and models folders to store training logs and models, respectively. <br>
The following cells exemplify the potential benefit of having a decaying learning rate: Two networks are trained, one lacks any learning rate scheduling, the other has a learning rate schedule that decays the learning rate by 2% every 5 iterations. By looking at the loss curves we can see that the latter converges towards a lower cross entropy despite the exact same initialisation:
End of explanation
"""
def set_iteration_update(self, iteration_message):
"""
This function will be called by the application engine at each
iteration.
"""
current_iter = iteration_message.current_iter
if iteration_message.is_training:
if current_iter > 0 and current_iter % 3 == 0:
self.current_lr = self.current_lr / 2.0
iteration_message.data_feed_dict[self.is_validation] = False
elif iteration_message.is_validation:
iteration_message.data_feed_dict[self.is_validation] = True
iteration_message.data_feed_dict[self.learning_rate] = self.current_lr
"""
Explanation: Customising the learning rate scheduler:
Currently the application is set up such that the learning rate is halved every third training iteration. This is an arbitrary default setup and it is likely you'll want to alter this to suit your purposes. Let's look at the set_iteration_update method of the DecayLearningRateApplication class in the application (decay_lr_application.py):
End of explanation
"""
if current_iter > 0 and current_iter % 3 == 0:
self.current_lr = self.current_lr / 2.0
"""
Explanation: The relevant subsection we will want to focus on is contained in two lines:
End of explanation
"""
if current_iter > 0 and current_iter % 5 == 0:
self.current_lr = self.current_lr * 0.99
"""
Explanation: The second line contains the logic that changes the learning rate, whereas the first line stipulates the condition under which this occurs. As such, only these two lines need to be changed if the scheduling is to be altered. For example, if we'd like to reduce the learning rate by 1% every 5 iterations, this snippet of code would look like:
End of explanation
"""
if current_iter > 0:
self.current_lr = self.current_lr * np.exp(-k * current_iter)
"""
Explanation: Similarly, if we'd like for the learning rate to decay exponentially every iteration modulated by some factor k:
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb | apache-2.0 | # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
%%capture
#@title Setup Environment
# Install the latest Tensorflow version.
!pip install -q tensorflow_text
!pip install -q simpleneighbors[annoy]
!pip install -q nltk
!pip install -q tqdm
#@title Setup common imports and functions
import json
import nltk
import os
import pprint
import random
import simpleneighbors
import urllib
from IPython.display import HTML, display
from tqdm.notebook import tqdm
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer
nltk.download('punkt')
def download_squad(url):
return json.load(urllib.request.urlopen(url))
def extract_sentences_from_squad_json(squad):
all_sentences = []
for data in squad['data']:
for paragraph in data['paragraphs']:
sentences = nltk.tokenize.sent_tokenize(paragraph['context'])
all_sentences.extend(zip(sentences, [paragraph['context']] * len(sentences)))
return list(set(all_sentences)) # remove duplicates
def extract_questions_from_squad_json(squad):
questions = []
for data in squad['data']:
for paragraph in data['paragraphs']:
for qas in paragraph['qas']:
if qas['answers']:
questions.append((qas['question'], qas['answers'][0]['text']))
return list(set(questions))
def output_with_highlight(text, highlight):
output = "<li> "
i = text.find(highlight)
while True:
if i == -1:
output += text
break
output += text[0:i]
output += '<b>'+text[i:i+len(highlight)]+'</b>'
text = text[i+len(highlight):]
i = text.find(highlight)
return output + "</li>\n"
def display_nearest_neighbors(query_text, answer_text=None):
query_embedding = model.signatures['question_encoder'](tf.constant([query_text]))['outputs'][0]
search_results = index.nearest(query_embedding, n=num_results)
if answer_text:
result_md = '''
<p>Random Question from SQuAD:</p>
<p> <b>%s</b></p>
<p>Answer:</p>
<p> <b>%s</b></p>
''' % (query_text , answer_text)
else:
result_md = '''
<p>Question:</p>
<p> <b>%s</b></p>
''' % query_text
result_md += '''
<p>Retrieved sentences :
<ol>
'''
if answer_text:
for s in search_results:
result_md += output_with_highlight(s, answer_text)
else:
for s in search_results:
result_md += '<li>' + s + '</li>\n'
result_md += "</ol>"
display(HTML(result_md))
"""
Explanation: Multilingual Universal Sentence Encoder Q&A Retrieval
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
<td data-parent-segment-id="12900598"><a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder-multilingual-qa%2F3%20OR%20google%2Funiversal-sentence-encoder-qa%2F3"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
This is a demo of text question-answer retrieval with the Universal Sentence Encoder Multilingual Q&A model, illustrating the use of the model's question_encoder and response_encoder. We use sentences from SQuAD paragraphs as the demo dataset: each sentence and its context (the text surrounding the sentence) is encoded into a high-dimensional embedding with the response_encoder. These embeddings are stored in an index built with the simpleneighbors library and used for question-answer retrieval.
At retrieval time, a question is randomly selected from the SQuAD dataset and encoded into a high-dimensional embedding with the question_encoder; querying the simpleneighbors index then returns the list of nearest neighbors in the semantic space.
More models
You can find all currently hosted text embedding models here, and all models trained on SQuAD here.
Setup
End of explanation
"""
#@title Download and extract SQuAD data
squad_url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json' #@param ["https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"]
squad_json = download_squad(squad_url)
sentences = extract_sentences_from_squad_json(squad_json)
questions = extract_questions_from_squad_json(squad_json)
print("%s sentences, %s questions extracted from SQuAD %s" % (len(sentences), len(questions), squad_url))
print("\nExample sentence and context:\n")
sentence = random.choice(sentences)
print("sentence:\n")
pprint.pprint(sentence[0])
print("\ncontext:\n")
pprint.pprint(sentence[1])
print()
"""
Explanation: Run the following code block to download and extract the SQuAD dataset into:
sentences, a list of (text, context) tuples: each paragraph of the SQuAD dataset is split into sentences with the NLTK library, and the sentence and paragraph text form the (text, context) tuple.
questions, a list of (question, answer) tuples.
Note: you can choose the squad_url below to build the index from the SQuAD training set or the smaller dev set (1.1 or 2.0).
End of explanation
"""
#@title Load model from tensorflow hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3", "https://tfhub.dev/google/universal-sentence-encoder-qa/3"]
model = hub.load(module_url)
"""
Explanation: The following code block loads the <a>Universal Sentence Encoder Multilingual Q&A model</a> from TensorFlow Hub; its question_encoder and <strong>response_encoder</strong> signatures are used in the cells below.
End of explanation
"""
#@title Compute embeddings and build simpleneighbors index
batch_size = 100
encodings = model.signatures['response_encoder'](
input=tf.constant([sentences[0][0]]),
context=tf.constant([sentences[0][1]]))
index = simpleneighbors.SimpleNeighbors(
len(encodings['outputs'][0]), metric='angular')
print('Computing embeddings for %s sentences' % len(sentences))
slices = zip(*(iter(sentences),) * batch_size)
num_batches = int(len(sentences) / batch_size)
for s in tqdm(slices, total=num_batches):
response_batch = list([r for r, c in s])
context_batch = list([c for r, c in s])
encodings = model.signatures['response_encoder'](
input=tf.constant(response_batch),
context=tf.constant(context_batch)
)
for batch_index, batch in enumerate(response_batch):
index.add_one(batch, encodings['outputs'][batch_index])
index.build()
print('simpleneighbors index for %s sentences built.' % len(sentences))
"""
Explanation: The following code block computes the embeddings for all the (text, context) tuples with the response_encoder and stores them in a simpleneighbors index.
End of explanation
"""
#@title Retrieve nearest neighbors for a random question from SQuAD
num_results = 25 #@param {type:"slider", min:5, max:40, step:1}
query = random.choice(questions)
display_nearest_neighbors(query[0], query[1])
"""
Explanation: At retrieval time, the question is encoded with the question_encoder, and the question embedding is used to query the simpleneighbors index.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/2e7ef25ccf0fd2af7902f12debe11fc1/plot_stats_cluster_1samp_test_time_frequency.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Non-parametric 1 sample cluster statistic on single trial power
This script shows how to estimate significant clusters
in time-frequency power estimates. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs
compute single trial power estimates
baseline-correct the power estimates (power ratios)
compute stats to see if ratio deviates from 1.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# just use right temporal sensors for speed
epochs.pick_channels(mne.read_selection('Right-temporal'))
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(-0.1, 0.4)
tfr_epochs.crop(-0.1, 0.4)
epochs_power = tfr_epochs.data
"""
Explanation: Set parameters
End of explanation
"""
sensor_adjacency, ch_names = mne.channels.find_ch_adjacency(
tfr_epochs.info, 'grad')
# Subselect the channels we are actually using
use_idx = [ch_names.index(ch_name.replace(' ', ''))
for ch_name in tfr_epochs.ch_names]
sensor_adjacency = sensor_adjacency[use_idx][:, use_idx]
assert sensor_adjacency.shape == \
(len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))
assert epochs_power.data.shape == (
len(epochs), len(tfr_epochs.ch_names),
len(tfr_epochs.freqs), len(tfr_epochs.times))
adjacency = mne.stats.combine_adjacency(
sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))
# our adjacency is square with each dim matching the data size
assert adjacency.shape[0] == adjacency.shape[1] == \
len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
"""
Explanation: Define adjacency for statistics
To compute a cluster-corrected p-value, we need a suitable definition
of adjacency for our values. So we first compute the
sensor adjacency, then combine that with a grid/lattice adjacency
assumption for the time-frequency plane:
End of explanation
"""
threshold = 3.
n_permutations = 50 # Warning: 50 is way too small for real-world analysis.
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,
threshold=threshold, tail=0,
adjacency=adjacency,
out_type='mask', verbose=True)
"""
Explanation: Compute statistic
End of explanation
"""
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
# Just plot one channel's data
ch_idx, f_idx, t_idx = np.unravel_index(
np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])
# ch_idx = tfr_epochs.ch_names.index('MEG 1332') # to show a specific one
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
"""
Explanation: View time-frequency plots
End of explanation
"""
|
amueller/scipy-2017-sklearn | notebooks/04.Training_and_Testing_Data.ipynb | cc0-1.0 | from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
import numpy as np               # used below for np.bincount / np.where
import matplotlib.pyplot as plt  # used below for the scatter plots
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
"""
Explanation: Training and Testing Data
To evaluate how well our supervised models generalize, we can split our data into a training and a test set:
<img src="figures/train_test_split_matrix.svg" width="100%">
End of explanation
"""
y
"""
Explanation: Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally new data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Specifically for iris, the 150 labels are sorted, which means that splitting the data proportionally without shuffling results in fundamentally altered class distributions. For instance, if we performed a common 2/3 training data and 1/3 test data split, our training dataset would only consist of flower classes 0 and 1 (Setosa and Versicolor), and our test set would only contain samples with class label 2 (Virginica flowers).
Under the assumption that all samples are independent of each other (in contrast to time-series data), we want to randomly shuffle the dataset before we split it, as illustrated above.
End of explanation
"""
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123)
print("Labels for training and testing data")
print(train_y)
print(test_y)
"""
Explanation: Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it has not seen during training!
End of explanation
"""
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
"""
Explanation: Tip: Stratified Split
Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportion of the dataset in the test and training sets. For example, after we randomly split the dataset as shown in the previous code example, we have the following class proportions in percent:
End of explanation
"""
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123,
stratify=y)
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
"""
Explanation: So, in order to stratify the split, we can pass the label array as an additional option to the train_test_split function:
End of explanation
"""
classifier.fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("Fraction Correct [Accuracy]:")
print(np.sum(pred_y == test_y) / float(len(test_y)))
"""
Explanation: By evaluating our classifier's performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, it may simply memorize the training samples but completely fail to classify new, similar samples -- we really don't want to put such a system into production!
Instead of using the same dataset for training and testing (this is called "resubstitution evaluation"), it is much better to use a train/test split in order to estimate how well your trained model performs on new data.
End of explanation
"""
print('Samples correctly classified:')
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
print('\nSamples incorrectly classified:')
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot two dimensions
colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
idx = np.where(test_y == n)[0]
plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color="darkred")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.title("Iris Classification results")
plt.show()
"""
Explanation: We can also visualize the correct and failed predictions
End of explanation
"""
# %load solutions/04_wrong-predictions.py
"""
Explanation: We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Print the true labels of 3 wrong predictions and modify the scatterplot code, which we used above, to visualize and distinguish these three samples with different markers in the 2D scatterplot. Can you explain why our classifier made these wrong predictions?
</li>
</ul>
</div>
End of explanation
"""
|
RainFool/Udacity_Anwser_RainFool | Project1/boston_housing.ipynb | mit | # Load the libraries required for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
# Check your Python version
from sys import version_info
if version_info.major != 2 or version_info.minor != 7:
    raise Exception('Please use Python 2.7 to complete this project')
# Display results inline in the notebook
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Done
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
"""
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation and Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this file, some example code has been provided for you, but you will need to implement additional functionality to make the project run successfully. Unless explicitly required, you do not need to modify any of the provided code. Headings that begin with "Programming Exercise" indicate that the following content contains functionality you must implement. Each part comes with detailed guidance, and the parts to implement are marked with TODO in the comments. Please read all the hints carefully!
Besides implementing code, you must also answer some questions related to the project and your implementation. Each question you need to answer is titled 'Question X'. Please read each question carefully and write a complete answer in the 'Answer' text box following the question. Your project will be graded on your answers to the questions and the functionality your code implements.
Hint: Code and Markdown cells can be run with the Shift + Enter shortcut, and Markdown cells can be edited by double-clicking them.
Step 1. Importing the Data
In this project, you will train and test a model on housing data from suburbs of Boston, Massachusetts, and evaluate the model's performance and predictive power. A model trained well on this data can be used to make specific predictions about houses---in particular, about their value. Such a predictive model proves very valuable in the daily work of people such as real estate agents.
The dataset for this project comes from the UCI Machine Learning Repository (the dataset has since been taken offline). The Boston housing data was collected starting in 1978; its 506 data points cover 14 features of houses in different suburbs of Boston. For this project, the original dataset was processed as follows:
- 16 data points with a 'MEDV' value of 50.0 were removed. They most likely contain missing or censored values.
- 1 data point with an 'RM' value of 8.78 was removed as an outlier.
- For this project, the features 'RM', 'LSTAT', 'PTRATIO' and 'MEDV' are essential; the remaining, irrelevant features have been removed.
- The values of the 'MEDV' feature have been scaled to reflect 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with some Python libraries required for this project. If the size of the dataset is returned, the dataset loaded successfully.
End of explanation
"""
#TODO 1
# Goal: compute the minimum price
minimum_price = np.amin(data['MEDV'])
# Goal: compute the maximum price
maximum_price = np.amax(data['MEDV'])
# Goal: compute the mean price
mean_price = np.mean(data['MEDV'])
# Goal: compute the median price
median_price = np.median(data['MEDV'])
# Goal: compute the standard deviation of the prices
std_price = np.std(data['MEDV'])
# Goal: print the results
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
"""
Explanation: Step 2. Analyzing the Data
In this first part of the project, you will make initial observations about the Boston real estate data and present your analysis. Exploring the data to become familiar with it will help you better understand and explain your results.
Since the ultimate goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable:
- Features: 'RM', 'LSTAT' and 'PTRATIO' give us quantitative information about each data point.
- Target variable: 'MEDV' is the variable we want to predict.
They are stored in the variables features and prices, respectively.
Programming Exercise 1: Basic Statistics
Your first programming exercise is to compute descriptive statistics about the Boston housing prices. We have already imported numpy for you; use it to perform the necessary calculations. These statistics are very important for analyzing the model's predictions later on.
In the code below, you need to:
- compute the minimum, maximum, mean, median and standard deviation of 'MEDV' in prices;
- store the results in the corresponding variables.
End of explanation
"""
# TODO 2
# Hint: import train_test_split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features,prices,test_size=0.2,random_state=42)
"""
Explanation: Question 1 - Feature Observation
As mentioned above, we focus on three of the values in this project: 'RM', 'LSTAT' and 'PTRATIO'. For each data point:
- 'RM' is the average number of rooms per dwelling in the area;
- 'LSTAT' is the percentage of homeowners in the area considered lower class (working but on a low income);
- 'PTRATIO' is the ratio of students to teachers in the area's primary and secondary schools.
Intuitively, for each of the three features above, do you think increasing its value would increase or decrease the value of 'MEDV'? Justify each answer.
Hint: Would you expect a house with an 'RM' value of 6 to be worth more or less than one with an 'RM' value of 7?
Question 1 - Answer:
Intuitively:
- the larger the number of rooms 'RM', the higher the house value;
- the higher the percentage 'LSTAT' of low-income homeowners, the more likely the area belongs to a lower-income community overall, so the price would be lower;
- the student-to-teacher ratio 'PTRATIO' reflects the number of students in the area and may indicate that a house lies in a school district, so a higher ratio would mean a higher price.
Programming Exercise 2: Shuffling and Splitting the Data
Next, you need to split the Boston housing dataset into training and test subsets. The data is usually also shuffled during this process, to remove any bias arising from the ordering of the dataset.
In the code below, you need to
use train_test_split from sklearn.model_selection to split both features and prices into training and test subsets:
- split ratio: 80% of the data for training, 20% for testing;
- choose a value for random_state in train_test_split, which ensures reproducible results;
- 选定一个数值以设定 train_test_split 中的 random_state ,这会确保结果的一致性;
End of explanation
"""
# TODO 3
# Hint: import r2_score
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
"""计算并返回预测值相比于预测值的分数"""
score = r2_score(y_true, y_predict)
return score
# TODO 3 (optional)
# Do not import any library that computes the coefficient of determination
# R_square = 1 - SSE / SST
def performance_metric2(y_true, y_predict):
"""计算并返回预测值相比于预测值的分数"""
sum = 0
SSE = 0
SST = 0
for i in range(len(y_true)):
SSE = SSE + (y_true[i] - y_predict[i])**2
sum = sum + y_true[i]
average = sum/len(y_true)
for i in range(len(y_true)):
SST = SST + (y_true[i] - average)**2
score = 1 - (SSE/SST)
return score
"""
Explanation: Question 2 - Training and Testing
What benefit does splitting a dataset into training and test sets in some ratio bring to a learning algorithm?
What is the downside of testing a model on data it has already seen, for example part of the training set?
Hint: What problem would arise if there were no data to test the model on?
Question 2 - Answer:
Separating the training and test sets gives us timely feedback during training: we train the algorithm on the training set and then score it on the test set. The score is generally used to judge how well an optimization of the algorithm performs and to observe how well the model fits.
If we tested on the training data, we could not tell how the algorithm would perform on unseen data, because the model was produced from that very data.
Step 3. Model Performance Metrics
In the third step of the project, you will learn the tools and techniques needed to make predictions with your model. Measuring each model's performance accurately with these tools and techniques greatly strengthens your confidence in the predictions.
Programming Exercise 3: Defining a Performance Metric
It is hard to judge the quality of a model without quantitatively evaluating its performance on training and testing. We usually define metrics computed from some measure of error or goodness of fit. In this project, you will quantify model performance by computing the coefficient of determination R<sup>2</sup>. The coefficient of determination is a very common statistic in regression analysis and is often taken as a measure of a model's predictive power.
R<sup>2</sup> ranges from 0 to 1 and captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than always predicting the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 predicts the target variable perfectly. Values between 0 and 1 indicate what percentage of the target variable's variation can be explained by the features. A model can also have a negative R<sup>2</sup>, in which case its predictions are sometimes much worse than simply using the mean of the target variable.
In the performance_metric function below, you need to:
- use r2_score from sklearn.metrics to compute the R<sup>2</sup> between y_true and y_predict as the performance score;
- store the score in the variable score.
or
(Optional) compute the coefficient of determination from its definition without any external library; this also helps you better understand when the coefficient equals 0 or 1.
End of explanation
"""
# Compute the coefficient of determination of this model's predictions
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
"""
Explanation: Question 3 - Goodness of Fit
Assume a dataset contains five data points and a model makes the following predictions for the target variable:
| True Value | Predicted Value |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Do you think this model has successfully captured the variation of the target variable? If so, explain why; if not, explain why not.
Hint: run the code below and use the performance_metric function to compute the model's coefficient of determination.
End of explanation
"""
# Generate learning curves for different training set sizes and maximum depths
vs.ModelLearning(X_train, y_train)
"""
Explanation: Question 3 - Answer:
Quite successful: the R^2 score is already high, and judging from the data, every prediction is close to the actual value.
Step 4: Analyzing Model Performance
In the fourth step of the project, we look at a model's performance on the training and validation sets under different parameters. Here we focus on one specific algorithm (a decision tree with pruning, though that is not the focus of this project) and one of its parameters, 'max_depth'. We train on the full training set with different 'max_depth' values and observe how this parameter affects model performance. Plotting the model's performance is very helpful for the analysis: it lets us see behavior that is not apparent from the numbers alone.
Learning Curves
The code region below outputs four figures showing a decision tree model's performance at different maximum depths. Each curve visualizes how the training score and validation score change as the amount of training data increases, with scores measured by the coefficient of determination R<sup>2</sup>. The shaded region around a curve represents its uncertainty (measured by the standard deviation).
Run the code below and use the resulting figures to answer the question that follows.
End of explanation
"""
# Generate complexity curves for different maximum-depth parameters
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Question 4 - Learning Curves
Choose one of the figures above and state its maximum depth. As the amount of training data increases, how does the training-set curve's score change? What about the validation-set curve? Would more training data effectively improve the model's performance?
Hint: do the learning-curve scores eventually converge to a particular value?
Question 4 - Answer:
As the amount of data increases, the training score declines gently while the validation score rises gently.
The two curves tend to converge and flatten out.
More training data would not improve the model's performance much.
Complexity Curves
The code region below outputs a figure showing the performance of a trained and validated decision tree model at different maximum depths. The figure contains two curves: one for the training set and one for the validation set. As with the learning curves, the shaded regions represent uncertainty, and both the training and testing scores are computed with the performance_metric function.
Run the code below and use the resulting figure to answer the two questions that follow.
End of explanation
"""
# TODO 4
# Hint: import 'KFold', 'DecisionTreeRegressor', 'make_scorer', 'GridSearchCV'
def fit_model(X, y):
    """Use grid search on the input data [X, y] to find the optimal decision tree model"""
    from sklearn.model_selection import KFold
    from sklearn import tree
    from sklearn.metrics import make_scorer
    from sklearn.model_selection import GridSearchCV
    cross_validator = KFold(n_splits=10)
    regressor = tree.DecisionTreeRegressor()
    params = {'max_depth': [1,2,3,4,5,6,7,8,9,10]}
    scoring_fnc = make_scorer(performance_metric)
    grid = GridSearchCV(regressor, params, scoring_fnc, cv=cross_validator)
    # Run the grid search on the input data [X, y]
    grid = grid.fit(X, y)
    print pd.DataFrame(grid.cv_results_)
    # Return the optimal model found by the grid search
    return grid.best_estimator_
"""
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does it suffer from high bias or high variance? What about a maximum depth of 10? Which features of the figure support your conclusions?
Hint: how can you tell whether a model has a high-bias or a high-variance problem?
Question 5 - Answer:
A depth of 1 is classic underfitting, which yields high bias.
A depth of 10 is overfitting, which yields high variance.
At depth 1 the R^2 score is very low, indicating underfitting.
At depth 10 the training score is very high while the test score actually drops, which indicates overfitting.
Question 6 - Guessing the Optimal Model
Based on the figure in Question 5, what maximum depth do you think lets the model best predict unseen data? What is your reasoning?
Question 6 - Answer:
3, because the R^2 score is highest there and the gap between the training and test scores is small, meaning the fit is appropriate and the model generalizes well.
Step 5: Selecting the Optimal Parameters
Question 7 - Grid Search
What is grid search? How can it be used to optimize a model?
Question 7 - Answer:
Grid search is a tuning method built on cross-validation; it optimizes a model by exhaustively trying parameter combinations.
You construct a GridSearchCV with the candidate values of each parameter, enumerate all possible combinations to form a grid, train with each combination, evaluate each one, and obtain the best result.
Question 8 - Cross-Validation
What is k-fold cross-validation?
How does GridSearchCV combine cross-validation to choose the best parameter combination?
What does the 'cv_results_' attribute of GridSearchCV tell us?
What problem arises if grid search is done without cross-validation? How does cross-validation solve it?
Hint: adding print pd.DataFrame(grid.cv_results_) at the end of the fit_model function below can help you inspect more information.
Question 8 - Answer:
K-fold cross-validation is one form of cross-validation: when splitting the training and test data, it divides the training data into K folds, each time using one fold as the validation set and the rest for training. Training and validation are thus repeated K times and the results are averaged. By default the data is split in order, so if the data is sorted or clustered (for example, videos by the same author grouped together, as in the lecture), the trained algorithm may only fit part of the data, biasing the score.
An important role of GridSearchCV is automatic parameter tuning. In a grid search with m parameters, each taking n candidate values, there are n^m combinations in total; cross-validation is then run for each combination, with K rounds of training and validation, and the K scores are averaged into a final score. The combination with the highest final score is chosen as the best parameter combination.
'cv_results_' outputs a dictionary containing, for each parameter combination, the train and test scores as well as mean_fit_time, mean_score_time, mean_test_score, mean_train_score and std_fit_time, std_score_time, std_test_score, std_train_score, which can serve as the basis for choosing the best combination.
Without cross-validation, grid search could produce inaccurate training results due to an unlucky split into a particular test set.
Cross-validation uses multiple trials to find the optimal value, reducing error and making better use of the data.
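The ordering issue mentioned above can be sketched as follows; the toy data here is purely illustrative, and shuffle=True (with a fixed random_state) is the standard remedy:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(12).reshape(6, 2)  # toy, sorted data

# Default KFold splits the data in order, so sorted/clustered data leaks into the folds
for train_idx, val_idx in KFold(n_splits=3).split(X):
    print(val_idx)  # [0 1], then [2 3], then [4 5]

# shuffle=True (with a fixed random_state) removes the ordering bias
kf = KFold(n_splits=3, shuffle=True, random_state=0)
print(kf.get_n_splits(X))  # 3
```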
Programming Exercise 4: Training the Optimal Model
In this exercise, you will put everything you have learned together and train a model with the decision tree algorithm. To obtain an optimal model, you will train it using grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a supervised learning algorithm.
In the fit_model function below, you need to:
1. Define the 'cross_validator' variable: use KFold from sklearn.model_selection to create a cross-validation generator object;
2. Define the 'regressor' variable: use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;
3. Define the 'params' variable: create a dictionary for the 'max_depth' parameter whose value is an array from 1 to 10;
4. Define the 'scoring_fnc' variable: use make_scorer from sklearn.metrics to create a scoring function;
pass 'performance_metric' as an argument to this function;
5. Define the 'grid' variable: use GridSearchCV from sklearn.model_selection to create a grid search object; pass the variables 'regressor', 'params', 'scoring_fnc' and 'cross_validator' to its constructor;
If you are unfamiliar with defining and passing default arguments to Python functions, you can refer to this MIT course video.
End of explanation
"""
# TODO 4 (optional)
'''
Do not use any sklearn library other than DecisionTreeRegressor
Hint: you may need to implement the cross_val_score function below
def cross_val_score(estimator, X, y, scoring = performance_metric, cv=3):
    """ Return an array of model scores for each cross-validation fold """
    scores = [0,0,0]
    return scores
'''
def cross_val_score(estimator, X, y, scoring=performance_metric, cv=3):
    """ Return an array of model scores for each cross-validation fold """
    scores = [0, 0, 0]
    fold = len(X) // cv  # integer division keeps the slice indices valid in Python 2 and 3
    for i in range(cv):
        X_train = np.concatenate([X[:i * fold], X[(i + 1) * fold:]])
        X_test = X[i * fold:(i + 1) * fold]
        y_train = np.concatenate([y[:i * fold], y[(i + 1) * fold:]])
        y_test = y[i * fold:(i + 1) * fold]
        estimator = estimator.fit(X_train, y_train)
        scores[i] = scoring(y_test, estimator.predict(X_test))
    return scores
def fit_model2(X, y):
    """ Use a manual grid search on the input data [X, y] to find the optimal decision tree model """
    # the optimal model, i.e. the one with the best cross-validation score
    best_estimator = None
    best = 1
    best_score = 0
    for i in range(1, 11):
        regressor = DecisionTreeRegressor(max_depth=i)
        score = np.mean(cross_val_score(regressor, X, y))
        (best_score, best) = (score, i) if score > best_score else (best_score, best)
    best_estimator = DecisionTreeRegressor(max_depth=best).fit(X, y)  # refit the best depth on all data
    return best_estimator
"""
Explanation: Programming Exercise 4: Training the Optimal Model (optional)
In this exercise, you will put everything you have learned together and train a model with the decision tree algorithm. To obtain an optimal model, you will train it using grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a supervised learning algorithm.
In the fit_model function below, you need to:
Loop over the candidate values 1-10 of the 'max_depth' parameter and build the corresponding models
Compute the cross-validation score of the current model
Return the model with the best cross-validation score
End of explanation
"""
# Obtain the optimal model from the training data
optimal_reg = fit_model(X_train, y_train)
# Print the 'max_depth' parameter of the optimal model
print "Parameter 'max_depth' is {} for the optimal model.".format(optimal_reg.get_params()['max_depth'])
"""
Explanation: Question 9 - Optimal Model
What is the maximum depth of the optimal model? Is this answer the same as your guess in Question 6?
Run the code region below to fit the decision tree regressor to the training data and obtain the optimized model.
End of explanation
"""
# Generate data for three clients
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3
# Make predictions
predicted_price = optimal_reg.predict(client_data)
for i, price in enumerate(predicted_price):
    print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
"""
Explanation: Question 9 - Answer:
4, somewhat different from my guess
Step 6: Making Predictions
Once we have trained a model on data, it can be used to make predictions on new data. With the decision tree regressor, the model has learned to ask questions about new input data and return a prediction of the target variable. You can use this prediction to obtain information about data whose target variable is unknown; this data must not be part of the training data.
Question 10 - Predicting Selling Prices
Imagine you are a real-estate agent in the Boston area hoping to use this model to help your clients appraise the homes they want to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (% considered lower class) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
What price would you recommend each client sell their home at? Do these prices seem reasonable given the values of the respective features? Why?
Hint: use the statistics you calculated in the data-exploration section to help justify your answer.
Run the code region below to use your optimized model to make predictions for each client's home value.
End of explanation
"""
# TODO 5
# Hint: you may need X_test, y_test, optimal_reg and performance_metric
# Hint: you may refer to the code in Question 10 for making predictions
# Hint: you may refer to the code in Question 3 for computing the R^2 score
pred = optimal_reg.predict(X_test)
r2 = performance_metric(y_test, pred)
print "Optimal model has R^2 score {:,.2f} on test data".format(r2)
"""
Explanation: Question 10 - Answer:
I would give my recommendations based on the model's predictions.
From the results: Client 3 > Client 1 > Client 2.
From the input data, Client 3 has the most rooms in a non-poor neighborhood, so it should obviously be priced highest.
Client 2 has the fewest rooms and most likely lies in a poor area, so it is naturally priced lowest.
The ordering of the predicted values is therefore correct.
Minimum price: $105,000.00
Maximum price: $1,024,800.00
Mean price: $454,342.94
Median price $438,900.00
Standard deviation of prices: $165,171.13
Client 2's quote is not below the minimum, Client 3's quote is not above the maximum, and the average of the three is around $500,000, close to the statistics, so the predictions look reasonable.
Programming Exercise 5
You just predicted the selling prices of three clients' homes. In this exercise, you will use your optimal model to make predictions on the whole test set and compute the coefficient of determination R<sup>2</sup> with respect to the target variable.
End of explanation
"""
# Please comment out all print statements in the fit_model function first
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Question 11 - Analyzing the Coefficient of Determination
You just computed the optimal model's coefficient of determination on the test set. How would you evaluate this result?
Question 11 - Answer
0.84 is not a low score, but there may still be room for improvement.
Model Robustness
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize to new data; sometimes the learning algorithm is not suited to the structure of the data; sometimes the data itself is too noisy or too scarce for the model to predict the target variable accurately. In these cases we say the model is underfit.
Question 12 - Model Robustness
Is the model robust enough to guarantee consistent predictions?
Hint: run the code region below, which executes the fit_model function 10 times with different training and testing sets. Observe how the prediction for one particular client changes with the training data.
End of explanation
"""
# TODO 6
# Your code
"""
Explanation: Question 12 - Answer:
Essentially consistent; the predicted trends largely match the training data.
Question 13 - Practical Applicability
Briefly discuss whether the model you built can be used in the real world.
Hint: answer the following questions and justify your conclusions:
- Is data collected in 1978 still applicable today, even after accounting for inflation?
- Are the features present in the data sufficient to describe a home?
- Can data collected in a big city like Boston be applied to other towns?
- Do you think it is reasonable to judge a home's value solely by the characteristics of its neighborhood?
Question 13 - Answer:
If inflation has been accounted for and the data transformed to the present day, it still applies.
Not sufficient to describe a home: in China, for example, different developers, school-district housing, and so on would all interfere with the data.
Not applicable: each region's training data differs, so the conclusions differ; using other towns as a test set would probably yield a low score.
Not reasonable: judging by a single dimension is very limited, and prices vary even within the same neighborhood.
Optional Question - Predicting Beijing Housing Prices
(This question does not affect whether the project passes.) Through the practice above, you have hopefully gained a good grasp of some common machine learning concepts. But modeling with 1970s Boston housing data is admittedly not very meaningful to us. You can now apply what you learned above to the Beijing housing dataset bj_housing.csv.
Disclaimer: since Beijing housing prices are directly affected by many factors such as the macro economy and policy adjustments, the prediction results are for reference only.
The features of this dataset are:
- Area: floor area of the home, in square meters
- Room: number of bedrooms
- Living: number of living rooms
- School: whether it is school-district housing, 0 or 1
- Year: year the home was built
- Floor: floor the home is on
Target variable:
- Value: selling price of the home, in 10,000 RMB
Following what you learned above, you can use this dataset to practice splitting and shuffling the data, defining an evaluation metric, training a model, evaluating model performance, tuning parameters with grid search combined with cross-validation to pick the best ones, comparing the differences, and finally obtaining the best model's prediction score on the validation set.
End of explanation
"""
|
oscarmore2/deep-learning-study | tflearn-digit-recognition/TFLearn_Digit_Recognition.ipynb | mit | # Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
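A quick illustrative sketch of both ideas, separate from the notebook's own pipeline:

```python
import numpy as np

# One-hot encode the label 4 over 10 classes
label = 4
one_hot = np.eye(10)[label]
print(one_hot)  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]

# Flatten a 28x28 image into a 784-element vector
image = np.zeros((28, 28))
flat = image.reshape(-1)
print(flat.shape)  # (784,)
```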
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28, 28])
    plt.title('Training data, index: %d, Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()
# Display a sample training image (here index 11)
show_digit(11)
print(trainX.shape)
print(trainY.shape)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()
    #### Your code ####
    # Include the input layer, hidden layer(s), and set how you want to train the model
    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, 500, activation='ReLU')
    net = tflearn.fully_connected(net, 100, activation='ReLU')
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.015, loss='categorical_crossentropy')
    # This model assumes that your network is named "net"
    model = tflearn.DNN(net)
    return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
|
junhwanjang/DataSchool | Lecture/09. 기초 확률론 3 - 확률모형/3) 가우시안 정규 분포.ipynb | mit | mu = 0
std = 1
rv = sp.stats.norm(mu, std)
rv
"""
Explanation: Gaussian Normal Distribution
The Gaussian normal distribution, or simply the normal distribution, is the probability model most commonly used when modeling numbers that appear in natural phenomena.
The normal distribution is defined by just two parameters, the mean $\mu$ and the variance $\sigma^2$, and its probability density function (pdf) is given by:
$$ \mathcal{N}(x; \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
Among normal distributions, the one with mean 0 and variance 1 ($\mu=0$, $\sigma^2=1$) is called the standard normal distribution.
Simulating the Normal Distribution with SciPy
The norm class in SciPy's stats subpackage implements the normal distribution. The loc argument sets the mean and the scale argument sets the standard deviation.
End of explanation
"""
xx = np.linspace(-5, 5, 100)
plt.plot(xx, rv.pdf(xx))
plt.ylabel("p(x)")
plt.title("pdf of normal distribution")
plt.show()
"""
Explanation: The pdf method can be used to compute the probability density function (pdf).
End of explanation
"""
np.random.seed(0)
x = rv.rvs(100)
x
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
"""
Explanation: To obtain samples by simulation, use the rvs method.
End of explanation
"""
np.random.seed(0)
x = np.random.randn(100)
plt.figure(figsize=(7,7))
sp.stats.probplot(x, plot=plt)
plt.axis("equal")
plt.show()
"""
Explanation: Q-Q Plot
The normal distribution is the most useful and widely used of the continuous probability distributions. Determining whether a random variable's distribution is normal or not, the normality test, is therefore one of the most important statistical analyses. Before applying a formal normality test, however, we can check normality visually and simply with a Q-Q plot.
A Q-Q (quantile-quantile) plot is a visual tool that compares the shape of a sample's distribution to that of the normal distribution. It pairs each value of the given distribution with the value of the normal distribution at the same quantile and draws these pairs as a scatter plot. Concretely, a Q-Q plot is constructed as follows:
1. Sort the sample values by size.
2. Compute the quantile of each sample value.
3. Find the normal-distribution value with the same quantile as each sample value.
4. Treat each sample value and its normal-distribution value as a pair and draw it as a point in 2D space.
5. Repeat steps 2 through 4 for every sample to complete a plot similar in form to a scatter plot.
6. Draw a 45-degree line for comparison.
SciPy's stats subpackage provides the probplot command for computing and drawing Q-Q plots.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html
By default, probplot only returns the Q-Q information of the data sample passed as an argument and does not draw a chart. To draw the chart, you must pass the matplotlib.pylab module object or an Axes class object to the plot argument.
If a data sample following a normal distribution is drawn as a Q-Q plot, it appears as a straight line, as shown below.
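Steps 2-3 can be sketched by hand with the normal quantile function norm.ppf; the plotting positions below are a common approximation, not necessarily the exact ones probplot uses:

```python
import numpy as np
from scipy.stats import norm

sample = np.array([-1.2, -0.3, 0.1, 0.8, 1.5])  # toy sample, already sorted
n = len(sample)

# approximate quantile level of each order statistic
levels = (np.arange(1, n + 1) - 0.5) / n

# matching theoretical normal quantiles to pair with the sample values
theoretical = norm.ppf(levels)
print(theoretical)  # symmetric around 0; the middle value is exactly 0.0
```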
End of explanation
"""
np.random.seed(0)
x = np.random.rand(100)
plt.figure(figsize=(7,7))
sp.stats.probplot(x, plot=plt)
plt.ylim(-0.5, 1.5)
plt.show()
"""
Explanation: If a data sample that does not follow a normal distribution is drawn as a Q-Q plot, it appears as a bent curve rather than a straight line, as shown below.
End of explanation
"""
xx = np.linspace(-2, 2, 100)
plt.figure(figsize=(6, 9))
for i, N in enumerate([1, 2, 10]):
    X = np.random.rand(1000, N) - 0.5
    S = X.sum(axis=1) / np.sqrt(N)
    plt.subplot(3, 2, 2*i+1)
    sns.distplot(S, bins=10, kde=False, norm_hist=True)
    plt.xlim(-2, 2)
    plt.yticks([])
    plt.subplot(3, 2, 2*i+2)
    sp.stats.probplot(S, plot=plt)
plt.tight_layout()
plt.show()
"""
Explanation: Central Limit Theorem
Many phenomena in the real world can be modeled with a normal distribution. One reason is the following Central Limit Theorem. The central limit theorem describes the phenomenon that, whatever distribution a random variable follows, when there are many such variables their (suitably normalized) sum has a distribution close to the normal distribution.
In more mathematical terms:
Let $X_1, X_2, \ldots, X_n$ be mutually independent random variables with the same distribution, each with expectation $\mu$ and variance $\sigma^2$.
Their sum
$$ S_n = X_1+\cdots+X_n $$
is likewise a random variable. As $n$ increases, the distribution of the centered and scaled sum converges to the following normal distribution:
$$ \dfrac{S_n - n\mu}{\sqrt{n}} \xrightarrow{d}\ N(0,\;\sigma^2) $$
Let's use a simulation to check whether the central limit theorem holds.
End of explanation
"""
|
wgong/open_source_learning | projects/Open_Food/open-food-100k.ipynb | apache-2.0 | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: Data Incubator Fellowship Semifinalist Challenge
<a href="mailto:wen.g.gong@gmail.com">Wen Gong</a>
Motivation
<br>
<font color=red size=+3>Know what you eat, </font>
<font color=green size=+3> Gain insight into food.</font>
<a href=https://world.openfoodfacts.org/>
<img src=https://static.openfoodfacts.org/images/misc/openfoodfacts-logo-en-178x150.png width=300 height=200>
</a>
<br>
<font color=blue size=+2>What can be learned from the Open Food Facts dataset? </font>
End of explanation
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
"""
Explanation: Import libraries
End of explanation
"""
def remove_na_rows(df, cols=None):
    """
    remove rows with NaN in any column
    """
    if cols is None:
        cols = df.columns
    return df[np.logical_not(np.any(df[cols].isnull().values, axis=1))]

def trans_country_name(x):
    """
    translate country name to code (2-char)
    """
    try:
        country_name = x.split(',')[0]
        if country_name in dictCountryName2Code:
            return dictCountryName2Code[country_name]
    except:
        return None

def parse_additives(x):
    """
    parse additives column values into a list
    """
    try:
        dict = {}
        for item in x.split(']'):
            token = item.split('->')[0].replace("[", "").strip()
            if token: dict[token] = 1
        return [len(dict.keys()), sorted(dict.keys())]
    except:
        return None

def trans_serving_size(x):
    """
    pick up gram value from serving_size column
    """
    try:
        serving_g = float((x.split('(')[0]).replace("g", "").strip())
        return serving_g
    except:
        return 0.0

def distplot2x2(cols):
    """
    make dist. plot on 2x2 grid for up to 4 features
    """
    sb.set(style="white", palette="muted")
    f, axes = plt.subplots(2, 2, figsize=(7, 7), sharex=False)
    b, g, r, p = sb.color_palette("muted", 4)
    colors = [b, g, r, p]
    axis = [axes[0, 0], axes[0, 1], axes[1, 0], axes[1, 1]]
    for n, col in enumerate(cols):
        sb.distplot(food[col].dropna(), hist=True, rug=False, color=colors[n], ax=axis[n])
Explanation: User-defined functions
End of explanation
"""
food = pd.read_excel("data/openfoodfacts_100k.xlsx")
food.shape
# (99999, 162)
food.columns
food.head()
"""
Explanation: Load dataset
https://www.kaggle.com/openfoodfacts/world-food-facts
This dataset contains Food Nutrition Fact for 100 000+ food products from 150 countries.
<img src=https://static.openfoodfacts.org/images/products/00419796/front_en.3.full.jpg>
End of explanation
"""
# columns_to_keep = ['code','product_name','created_datetime','brands','categories','origins','manufacturing_places','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition_grade_uk','nutrition-score-fr_100g','nutrition-score-uk_100g','url','image_url','image_small_url']
columns_to_keep = ['code','product_name','created_datetime','brands','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','vitamin-a_100g','vitamin-c_100g','calcium_100g','iron_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition-score-fr_100g','url']
food = food[columns_to_keep]
"""
Explanation: Pre-processing data
Drop less useful columns
End of explanation
"""
columns_numeric_all = ['energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','nutrition-score-fr_100g','nutrition-score-uk_100g']
columns_numeric = set(columns_numeric_all) & set(columns_to_keep)
columns_categoric = set(columns_to_keep) - set(columns_numeric)
# turn off
if False:
for col in columns_numeric:
if not col in ['nutrition-score-fr_100g', 'nutrition-score-uk_100g']:
food[col] = food[col].fillna(0)
for col in columns_categoric:
if col in ['nutrition_grade_fr', 'nutrition_grade_uk']:
food[col] = food[col].fillna('-')
else:
food[col] = food[col].fillna('')
# list column names: categoric vs numeric
columns_categoric, columns_numeric
food.head(3)
"""
Explanation: Fix missing value
End of explanation
"""
# standardize country
country_lov = pd.read_excel("../../0.0-Datasets/country_cd.xlsx")
# country_lov.shape
# country_lov.head()
# country_lov[country_lov['GEOGRAPHY_NAME'].str.startswith('United')].head()
# country_lov['GEOGRAPHY_CODE'].tolist()
# country_lov.ix[0,'GEOGRAPHY_CODE'], country_lov.ix[0,'GEOGRAPHY_NAME']
# create 2 dictionaries
dictCountryCode2Name = {}
dictCountryName2Code = {}
for i in country_lov.index:
    dictCountryCode2Name[country_lov.ix[i,'GEOGRAPHY_CODE']] = country_lov.ix[i,'GEOGRAPHY_NAME']
    dictCountryName2Code[country_lov.ix[i,'GEOGRAPHY_NAME']] = country_lov.ix[i,'GEOGRAPHY_CODE']
# add Country_Code column - pick 1st country from list
food['countries_en'] = food['countries_en'].fillna('')
food['country_code'] = food['countries_en'].apply(str).apply(lambda x: trans_country_name(x))
# add country_code to columns_categoric set
columns_categoric.add('country_code')
# verify bad country
food[food['country_code'] != food['countries']][['country_code', 'countries']].head(20)
food['ingredients_text'].head() # leave as is
"""
Explanation: Standardize country code
End of explanation
"""
# add serving_size in gram column
food['serving_size'].head(10)
food['serving_size'] = food['serving_size'].fillna('')
food['serving_size_gram'] = food['serving_size'].apply(lambda x: trans_serving_size(x))
# add serving_size_gram
columns_numeric.add('serving_size_gram')
food[['serving_size_gram', 'serving_size']].head()
"""
Explanation: Extract serving_size into gram value
End of explanation
"""
food['additives'].head(10)
food['additives'] = food['additives'].fillna('')
food['additive_list'] = food['additives'].apply(lambda x: parse_additives(x))
# add additive_list
columns_categoric.add('additive_list')
food[['additive_list', 'additives']].head()
"""
Explanation: Parse additives
End of explanation
"""
food["creation_date"] = food["created_datetime"].apply(str).apply(lambda x: x[:x.find("T")])
def extract_year(x):
    try:
        return int(x[:x.find("-")])
    except:
        return None
food["year_added"] = food["created_datetime"].dropna().apply(str).apply(extract_year)
# add creation_date
columns_categoric.add('creation_date')
columns_numeric.add('year_added')
food[['created_datetime', 'creation_date', 'year_added']].head()
# food['product_name']
food.head(3)
columns_numeric
"""
Explanation: Organic or Not
[TODO]
pick up word 'Organic' from product_name column
pick up word 'Organic','org' from ingredients_text column
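A minimal sketch of the TODO above might look like this; the helper name `is_organic` and the keyword list are assumptions for illustration, not part of the original notebook:

```python
import numpy as np

def is_organic(text):
    """Return 1 if the text mentions an organic keyword, else 0 (hypothetical helper)."""
    try:
        tokens = str(text).lower().replace(',', ' ').split()
        return int(any(t in ('organic', 'org') for t in tokens))
    except Exception:
        return 0

# hypothetical usage against the notebook's columns:
# food['organic'] = food['product_name'].apply(is_organic) | food['ingredients_text'].apply(is_organic)
print("{} {}".format(is_organic("Organic Whole Milk"), is_organic("sugar, salt")))  # 1 0
```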
Add creation_date
End of explanation
"""
year_added = food['year_added'].value_counts().sort_index()
#year_added
year_i = [int(x) for x in year_added.index]
x_pos = np.arange(len(year_i))
year_added.plot.bar()
plt.xticks(x_pos, year_i)
plt.title("Food labels added per year")
"""
Explanation: Visualize Food features
Food labels yearly trend
End of explanation
"""
TOP_N = 10
dist_country = food['country_code'].value_counts()
top_country = dist_country[:TOP_N][::-1]
country_s = [dictCountryCode2Name[x] for x in top_country.index]
y_pos = np.arange(len(country_s))
top_country.plot.barh()
plt.yticks(y_pos, country_s)
plt.title("Top {} Country Distribution".format(TOP_N))
"""
Explanation: Top countries
End of explanation
"""
# dist_nutri_grade = food['nutrition_grade_uk'].value_counts()
# no value
dist_nutri_grade = food[food["nutrition_grade_fr"].isin(['a','b','c','d','e'])]
dist_nutri_grade = dist_nutri_grade['nutrition_grade_fr'].value_counts()
dist_nutri_grade.sort_index(ascending=False).plot.barh()
plt.title("Nutrition Grade Dist")
"""
Explanation: Nutrition grade
End of explanation
"""
food['nutrition-score-fr_100g'].dropna().plot.hist()
plt.title("{} Dist.".format("Nutri-Score"))
"""
Explanation: Nutrition score
End of explanation
"""
food['serving_size_gram'].dropna().plot.hist()
plt.title("{} Dist.".format("Serving Size (g)"))
"""
Explanation: Serving size
End of explanation
"""
distplot2x2([ 'energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g'])
"""
Explanation: Energy, fat, ...
Energy
Fat
Saturated-Fat
Trans-Fat
End of explanation
"""
distplot2x2(['carbohydrates_100g', 'cholesterol_100g', 'proteins_100g', 'fiber_100g'])
"""
Explanation: Carbohydrates, protein, fiber
Carbohydrates
Cholesterol
Proteins
Fiber
End of explanation
"""
distplot2x2([ 'sugars_100g', 'salt_100g', 'vitamin-a_100g', 'vitamin-c_100g'])
"""
Explanation: Sugar, Vitamins
Sugars
Salt
Vitamin-A
Vitamin-C
End of explanation
"""
distplot2x2(['calcium_100g', 'iron_100g', 'sodium_100g'])
"""
Explanation: Minerals
Calcium
Iron
Sodium
End of explanation
"""
df = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'carbohydrates_100g', 'sugars_100g','country_code']]
df = remove_na_rows(df)
df.head()
sb.pairplot(df, hue="country_code", size=2.5)
"""
Explanation: Explore food label
Are Amercan and French food different?
End of explanation
"""
# prepare a small dataframe for ['US', 'FR']
df2 = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'sugars_100g','country_code','nutrition_grade_fr']]
df2 = df2[df2["nutrition_grade_fr"].isin(['a','b','c','d','e'])]
df2 = df2.sort_values(by="nutrition_grade_fr")
# df2.head()
# create a grid of scatter plot
g = sb.FacetGrid(df2, row="nutrition_grade_fr", col="country_code", margin_titles=True)
g.map(plt.scatter, "sugars_100g", "energy_100g", color="steelblue")
g.set(xlim=(0, 100), ylim=(0, 8000))
"""
Explanation: Who eats less sweet food?
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II | 07_SH_waves_in_moons_and_planets/4_2D_SHaxi_FD_modelling_ganymede.ipynb | gpl-3.0 | # Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
"""
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
from scipy import interpolate
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: 2D axisymmetric spherical SH finite difference modelling - Ganymede
Does a habitable liquid ocean exist beneath the ice crust of Jupiter's moon Ganymede? Using global axisymmetric SH modelling, we will investigate whether a single geophone on Ganymede's surface would be able to detect a possible ocean.
The moons of Jupiter
In the year 1610 Galileo Galilei pointed his simple telescope at the planet Jupiter and discovered 4 nearby "stars" that changed their position with respect to Jupiter during the following nights. These largest Galilean moons of Jupiter
were later named Io, Europa, Ganymede and Callisto. Starting with the Voyager probes in the 1970s, subsequent missions revealed dynamic worlds:
<img src="images/Galilean_moons.jpg" width="100%">
with active volcanism on Io, and the ice moons Europa, Ganymede and Callisto showing complex surface features.
A liquid ocean on Ganymede?
The strong tidal forces of Jupiter acting on the moon Ganymede are a possible heat source leading to liquid ocean layers below the ice crust, as shown in this artistic sketch by NASA/JPL:
<img src="images/InsideGanymede.jpg" width="100%">
Ganymede could consist of a "sandwich" of multiple layers of ice and liquid oceans with variable salinity. Liquid water is an important ingredient of life, making the Jupiter moon Ganymede an interesting target for future space missions. Recent numerical studies focussed on the computation of the seismicity of the icy moons (Panning et al. 2017), noise sources related to ice quakes, volcanism, water currents and possible vital signs (Vance et al. 2018), and the influence of potential liquid oceans on seismic wave propagation (Stähler et al. 2017). In this exercise we investigate whether we can distinguish between a solid ice crust and a liquid ocean with a single SH seismometer placed on the surface of Ganymede. Let's do some global axisymmetric SH modelling to answer this question ...
End of explanation
"""
# Definition of modelling parameters
# ----------------------------------
rcore = 693.0 * 1000.0 # Ganymede fluid core radius [m]
rplanet = 2634.1 * 1000.0 # Ganymede radius [m]
nr = 400 # number of spatial gridpoints in r-direction
dr = (rplanet - rcore) / nr # spatial gridpoint distance in r-direction
# calculate dtheta based on dr
dtheta = dr / rcore
ntheta = (int) (np.pi / dtheta) # number of spatial gridpoints in theta-direction
ntheta += 2
# Define Ganymede model filename
name_model = "ganymede_model/ganymede_model.dat"
# Acquisition geometry
tmax = 2000.0 # maximum recording time of the seismogram (s)
isnap = 5 # snapshot interval (timesteps)
"""
Explanation: As usual, we first define the modelling parameters for an axisymmetric Ganymede model. Unfortunately, Ganymede seems to have a solid core, which would require a special treatment of the centre to avoid the singularity at $r = 0\; m$. For simplicity, we assume that Ganymede's inner core with a radius of roughly 700 km is liquid, so we can treat the CMB as a free-surface boundary.
End of explanation
"""
# Read and plot moon model
# ------------------------
model = np.loadtxt(name_model, delimiter=' ', skiprows=1)
radius = model[:,0] # radius [m]
rho1D = model[:,1] # density [kg/m^3]
vs1D = model[:,3] # S-wave velocity model [m/s]
# Define figure size
rcParams['figure.figsize'] = 10, 7
depth = radius[0] - radius
plt.plot(rho1D,depth/1000, label=r"Density $\rho$ [$kg/m^3$]")
plt.plot(vs1D,depth/1000, label=r"$Vs$ [$m/s$]")
# Annotate model
# --------------
# these are matplotlib.patch.Patch properties
props = dict(boxstyle='round', facecolor='wheat')
# place a text box in upper left in axes coords
plt.text(2100, 0, "Ice crust", fontsize=10, verticalalignment='top', bbox=props)
plt.text(1900, 140, "Liquid ocean", fontsize=10, verticalalignment='top', bbox=props)
plt.text(5000, 900, "Mantle", fontsize=10, verticalalignment='top', bbox=props)
plt.text(1000, 2200, "Iron Core", fontsize=10, verticalalignment='top', bbox=props)
plt.title("Ganymede model (Vance etal. 2017)")
plt.ylabel("Depth [km]")
plt.xlabel(r"Density $\rho$ [$kg/m^3$], $Vs$ [$m/s$]")
plt.gca().invert_yaxis()
plt.legend()
plt.show()
"""
Explanation: ... next we read the 1D Ganymede model based on Vance et al. (2017). Matlab codes by Steven Vance to create 1D Ganymede models describing basic physical properties (sound speeds, attenuation, and electrical conductivities), incorporating self-consistent thermodynamics for the fluid, rock, and mineral phases, are available here.
End of explanation
"""
# Calculate dominant frequency of the source wavelet
# --------------------------------------------------
Nlam = 12 # number of grid points per dominant wavelength
vsmin = vs1D[0] # minimum S-wave velocity [m/s]
f0 = vsmin / (Nlam * dr) # centre frequency of the source wavelet [Hz]
print('f0 = ', f0, ' Hz')
print('Period T = ', 1/f0, ' s')
t0 = 4. / f0 # source time shift (s)
"""
Explanation: The model consists of an ice crust, a liquid ocean, a mantle and an iron core. For simplicity, we will assume that the iron core is liquid, so our model will only be defined above the core-mantle boundary (CMB).
Based on the spatial model discretization defined above, we calculate the centre frequency of the source wavelet, assuming $N_\lambda=12$ gridpoints per dominant wavelength.
End of explanation
"""
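The rule of thumb $f_0 = v_{s,min} / (N_\lambda \cdot dr)$ used above can be wrapped in a small helper; the numbers below are purely illustrative assumptions, not the actual Ganymede grid values:

```python
def centre_frequency(vs_min, dr, n_lam=12):
    """Largest usable centre frequency for a grid spacing dr while
    keeping n_lam grid points per dominant wavelength."""
    return vs_min / (n_lam * dr)

# illustrative (hypothetical) numbers: vs_min = 1800 m/s, dr = 4850 m
f0 = centre_frequency(1800.0, 4850.0)
period = 1.0 / f0
```

Choosing a larger $N_\lambda$ reduces numerical dispersion at the cost of a lower centre frequency.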
# Particle velocity vphi update
# -----------------------------
@jit(nopython=True) # use JIT for C-performance
def update_vel(vphi, srp, stp, dr, dtheta, dt, nr, ntheta, rho, r, theta):
# 2nd order FD operator
for i in range(1, ntheta - 1):
for j in range(1, nr - 1):
# Calculate spatial derivatives (2nd order operator)
srp_r = (srp[i,j] - srp[i,j-1]) / dr
stp_t = (stp[i,j] - stp[i-1,j]) / (r[j] * dtheta)
# Average stress wavefields at point (i,j)
vphi_avg = (((srp[i,j] + srp[i,j-1]) / 2) * 3 +
(stp[i,j] + stp[i-1,j]) / np.tan(theta[i])
) / r[j]
# Update particle velocities vphi
vphi[i,j] = vphi[i,j] + (dt/rho[i,j]) * (srp_r + stp_t + vphi_avg)
return vphi
"""
Explanation: For the axisymmetric SH modelling, we use the same FD code as for the earth PREM and moon modelling. First, we define the particle velocity $v_\phi$ update ...
End of explanation
"""
# Shear stress srp, stp updates
# -----------------------------
@jit(nopython=True) # use JIT for C-performance
def update_stress(vphi, srp, stp, dr, dtheta, dt, nr, ntheta, mur, mutheta, thetah, r, rh):
# 2nd order FD operator
for i in range(1, ntheta - 1):
for j in range(1, nr - 1):
# Calculate spatial derivatives (2nd order operator)
vphi_r = (vphi[i,j + 1] - vphi[i,j]) / dr
vphi_theta = (vphi[i + 1,j] - vphi[i,j]) / (dtheta * r[j])
# calculate vphi at (i,j+1/2)
vphih = (vphi[i,j + 1] + vphi[i,j]) / (2 * rh[j])
# calculate vphi at (i+1/2,j)
vphithetah = (vphi[i + 1,j] + vphi[i,j]) / (2 * r[j] * np.tan(thetah[i]))
# Update shear stresses
srp[i,j] = srp[i,j] + dt * mur[i,j] * (vphi_r - vphih)
stp[i,j] = stp[i,j] + dt * mutheta[i,j] * (vphi_theta - vphithetah)
return srp, stp
"""
Explanation: ... then update the shear stress components $\sigma_{r\phi}$ and $\sigma_{\theta\phi}$ ...
End of explanation
"""
# Harmonic averages of shear modulus
# ----------------------------------
@jit(nopython=True) # use JIT for C-performance
def shear_avg(mu, nr, ntheta, mur, mutheta):
for i in range(1, ntheta - 1):
for j in range(1, nr - 1):
# Calculate harmonic averages of shear moduli
# Check if mu=0 on the FD grid
if(mu[i+1,j]<1e-20 or mu[i,j]<1e-20):
mutheta[i,j] = 0.0
else:
mutheta[i,j] = 2 / (1 / mu[i + 1,j] + 1 / mu[i,j])
if(mu[i,j+1]<1e-20 or mu[i,j]<1e-20):
mur[i,j] = 0.0
else:
mur[i,j] = 2 / (1 / mu[i,j + 1] + 1 / mu[i,j])
return mur, mutheta
"""
Explanation: ... and averaging the shear modulus, where I additionally check that the shear modulus is not equal to 0 to avoid further trouble during the FD modelling ...
End of explanation
"""
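The averaging rule with its zero guard can be illustrated in isolation; this is a minimal sketch mirroring the logic of shear_avg for a single staggered grid point:

```python
def harmonic_avg(mu1, mu2, eps=1e-20):
    """Harmonic mean of two shear moduli on a staggered grid.
    Returns 0 if either modulus is (numerically) zero, as at a
    solid/fluid interface, mirroring the guard in shear_avg."""
    if mu1 < eps or mu2 < eps:
        return 0.0
    return 2.0 / (1.0 / mu1 + 1.0 / mu2)

print(harmonic_avg(4e9, 12e9))  # ~6e9: dominated by the softer medium
print(harmonic_avg(4e9, 0.0))   # 0.0: the interface point behaves like fluid
```

The harmonic (rather than arithmetic) mean keeps sharp solid/fluid contrasts stable on the staggered grid.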
# 2D SH axisymmetric spherical wave propagation (Finite Difference Solution)
# --------------------------------------------------------------------------
def FD_2D_SH_JIT(dt,dr,dtheta,f0,vs,rho,nr,ntheta,clip,zsrc):
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Initialize coordinates
# ----------------------
r = np.arange(nr)
r = rcore + r * dr # coordinates in r-direction (m)
rh = r + (dr/2) # r-direction coordinates shifted by half a gridpoint (m)
theta = np.arange(ntheta)
theta = theta * dtheta # coordinates in theta-direction (rad)
thetah = theta + (dtheta/2) # theta-direction shifted by half a gridpoint (rad)
thetamax = np.max(theta)
rmax = np.max(r)
# rectangular plot of polar data
r1, theta1 = np.meshgrid(r, np.hstack((theta,theta + np.pi)))
# Define source position
isrc = 2 # source location in theta-direction [gridpoints]
jsrc = (int)((r[-1] - rcore - zsrc)/dr) # source location in r-direction [gridpoints]
# Place receivers two grid points below the earth surface
# and at all grid points in theta direction
jr = nr - 2 # receiver location in r-direction [gridpoints]
# Initialize empty wavefield arrays
# ---------------------------------
vphi = np.zeros((ntheta,nr)) # particle velocity vphi
srp = np.zeros((ntheta,nr)) # shear stress srp
stp = np.zeros((ntheta,nr)) # shear stress stp
vphi1 = np.vstack((vphi,vphi)) # vphi mirrored at the symmetry axis
# Define S-wave velocity model for visualization
vs1 = np.vstack((vs,np.flipud(vs)))
# harmonic average of shear moduli
# --------------------------------
mu = np.zeros((ntheta,nr))
mu = rho * vs ** 2 # calculate shear modulus
    mur = mu.copy()                    # initialize harmonic average mur (copy to avoid aliasing mu)
    mutheta = mu.copy()                # initialize harmonic average mutheta
mur, mutheta = shear_avg(mu, nr, ntheta, mur, mutheta)
# Initialize empty seismogram
# ---------------------------
seis = np.zeros((ntheta,nt))
# Initalize animation of vy wavefield
# -----------------------------------
fig, ax = plt.subplots(subplot_kw=dict(projection='polar'))
# Plot vphi wavefield
image1 = ax.pcolormesh(theta1, r1/1000, vphi1, vmin=-clip, vmax=clip,
cmap="RdBu", shading = "flat")
plt.title(r'$V_{\phi}$ wavefield')
plt.xlabel(r'$\theta$ [rad]')
plt.ion()
plt.show(block=False)
# Time looping
# ------------
for it in range(nt):
# Apply symmetry boundary condition to stress before vel. update
# --------------------------------------------------------------
srp[0,:] = srp[1,:]
srp[-1,:] = srp[-2,:]
stp[0,:] = -stp[1,:]
stp[-1,:] = -stp[-2,:]
# Update particle velocity vphi
# -----------------------------
vphi = update_vel(vphi, srp, stp, dr, dtheta, dt, nr, ntheta, rho, r, theta)
# Add Source Term at (isrc,jsrc)
# ------------------------------
# Absolute particle velocity w.r.t analytical solution
vphi[isrc,jsrc] = vphi[isrc,jsrc] + (dt * src[it] / (dr * dtheta * rho[isrc,jsrc]))
# Apply symmetry boundary condition to vphi before stress update
# --------------------------------------------------------------
vphi[0,:] = -vphi[1,:]
vphi[-1,:] = -vphi[-2,:]
# Update shear stress srp, stp
# ----------------------------
srp, stp = update_stress(vphi, srp, stp, dr, dtheta, dt, nr, ntheta, mur, mutheta, thetah, r, rh)
# Output of Seismogram
# -----------------
seis[:,it] = vphi[:,jr]
# display vy snapshots
if (it % isnap) == 0:
vphi1 = np.vstack((vphi,np.flipud(vphi)))
image1.set_array(vphi1[:-1, :-1].ravel())
title = '$V_{\phi}$ wavefield, t = ' + str((int)(time[it])) + ' s'
plt.title(title)
fig.canvas.draw()
return time, seis
"""
Explanation: Finally, we assemble all parts into the 2D SH axisymmetric FD code:
End of explanation
"""
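The source time function built inside FD_2D_SH_JIT is the first derivative of a Gaussian; as a standalone sketch (the f0 and t0 values below are illustrative):

```python
import math

def gauss_deriv(t, f0, t0):
    """First derivative of a Gaussian -- the source wavelet."""
    return -2.0 * (t - t0) * f0**2 * math.exp(-(f0**2) * (t - t0)**2)

f0, t0 = 0.05, 80.0  # illustrative values
assert gauss_deriv(t0, f0, t0) == 0.0       # zero crossing at t = t0
assert gauss_deriv(t0 - 1.0, f0, t0) > 0.0  # positive lobe before t0
assert gauss_deriv(t0 + 1.0, f0, t0) < 0.0  # negative lobe after t0
```

The time shift t0 delays the wavelet so that it starts smoothly from (almost) zero amplitude.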
# Run SH FD modelling for modified Vance etal. (2017) Ganymede model
# ------------------------------------------------------------------
%matplotlib notebook
# Define source depth (m)
zsrc = 10.0 * 1000.0
# Interpolate 1D PREM model on FD grid
r = np.arange(nr) # r-coordinates on FD grid
r = rcore + r * dr # add core radius
# define interpolation function
rshift = 0 # shift 1D Ganymede model with respect to FD model
fvs = interpolate.interp1d(np.flipud(radius-rshift),np.flipud(vs1D),kind="nearest")
vs1D_fd = fvs(r)
frho = interpolate.interp1d(np.flipud(radius-rshift),np.flipud(rho1D),kind="nearest")
rho1D_fd = frho(r)
# build 2D vs and density Ganymede models
vs1D_ganymede_liq = np.tile(vs1D_fd, (ntheta,1))
rho1D_ganymede_liq = np.tile(rho1D_fd, (ntheta,1))
# define wavefield clip value
clip = 3e-10
# calculate time step according to CFL criterion
dt = dr / (np.sqrt(2) * np.max(vs1D_ganymede_liq))
%time time_ganymede_liq, seis_ganymede_liq = FD_2D_SH_JIT(dt,dr,dtheta,f0,vs1D_ganymede_liq,rho1D_ganymede_liq,nr,ntheta,clip,zsrc)
"""
Explanation: Modelling of an icequake on Ganymede with liquid ocean
Now, we only have to assemble a 2D axisymmetric Ganymede model based on the modified 1D model by Vance et al. (2017) with a liquid instead of a solid iron core, calculate the time step according to the CFL criterion and finally run the FD modelling code. As a source, we will assume a shallow icequake at a depth of 10 km.
End of explanation
"""
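The CFL step dt = dr / (sqrt(2) * vs_max) used in the modelling cell can be sketched as a helper; the dr and vmax values here are illustrative assumptions, not the actual model values:

```python
import math

def cfl_timestep(dr, vmax):
    """Largest stable time step of the 2D staggered-grid FD scheme,
    dt = dr / (sqrt(2) * vmax), i.e. a Courant number C = 1/sqrt(2)."""
    return dr / (math.sqrt(2.0) * vmax)

dt = cfl_timestep(dr=4850.0, vmax=4000.0)  # illustrative values
# the resulting Courant number never exceeds 1/sqrt(2)
assert dt * 4000.0 / 4850.0 <= 1.0 / math.sqrt(2.0) + 1e-12
```

A coarser grid or a slower maximum velocity both allow a larger stable time step.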
# Plot FD seismograms for Ganymede model with liquid ocean at polar
# angles between 0 and 180°
# -----------------------------------------------------------------
%matplotlib notebook
# Define figure size
rcParams['figure.figsize'] = 7, 5
extent = [0,180.0,tmax,0]
clip = 1e-9
plt.imshow(seis_ganymede_liq.T, extent=extent, cmap="RdBu",vmin=-clip,vmax=clip, aspect=7e-2)
plt.title(r'Seismic section (Ganymede model with liquid ocean)')
plt.xlabel(r'Polar angle $\theta [^o]$')
plt.ylabel('Time (s)')
plt.gca().invert_yaxis()
plt.grid()
plt.show()
"""
Explanation: Let's be very optimistic and assume we have an armada of space probes to place geophones at each $\theta$-gridpoint on Ganymede's surface, which would record the following seismic section:
End of explanation
"""
# Plot FD seismogram at polar angle 22.5°
# ---------------------------------------
%matplotlib notebook
# Define figure size
rcParams['figure.figsize'] = 7, 5
plt.plot(time_ganymede_liq, seis_ganymede_liq[(int)(ntheta/8),:], 'b-',lw=3,label="FD solution") # plot FD seismogram
plt.xlim(time_ganymede_liq[0], time_ganymede_liq[-1])
plt.title(r'Seismogram @ $\theta = 22.5^o$ (Ganymede model with liquid ocean)')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.grid()
plt.show()
"""
Explanation: Compared to the 1D PREM earth modelling results and the 1D moon modelling results, you only notice the direct $S$ wave, a very strong coda generated by reflections between the Ganymede surface and the top of the liquid ocean, and a Love wave.
If we are more realistic, we can rely on only one receiver placed at a polar angle $\theta = 22.5^o$:
End of explanation
"""
|
hamnonlineng/hamnonlineng | examples/Example_7th_order_Hamiltonian-no_solutions.ipynb | bsd-3-clause | import hamnonlineng as hnle
letters = 'abcde'
resonant = [hnle.Monomial(1, 'aabbEEC'), hnle.Monomial(1,'abddEEC')]
op_sum = hnle.operator_sum(letters)
sine_exp = (
hnle.sin_terms(op_sum, 3)
+hnle.sin_terms(op_sum, 5)
+hnle.sin_terms(op_sum, 7)
)
off_resonant = hnle.drop_single_mode(
hnle.drop_definitely_offresonant(
hnle.drop_matching(sine_exp.m, resonant)))
off_resonant = list(off_resonant)
print('Number of off-resonant constraints: %d'%len(off_resonant))
"""
Explanation: Find frequencies that make only $(\hat{a}^2\hat{b}^2\hat{e}^{\dagger2}+\hat{a}\hat{b}\hat{d}^2\hat{e}^{\dagger2})\hat{c}^\dagger +h.c.$ resonant in the 7th order expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+\hat{e}+h.c.)$
Proceed as in the 5th order example.
End of explanation
"""
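The three calls to hnle.sin_terms correspond to the odd orders of the sine expansion of the summed mode operator; the first-order term would contribute only single-mode monomials, which are dropped anyway:

```latex
\sin\hat{O} \approx \hat{O} - \frac{\hat{O}^3}{3!} + \frac{\hat{O}^5}{5!} - \frac{\hat{O}^7}{7!},
\qquad
\hat{O} = \hat{a}+\hat{b}+\hat{c}+\hat{d}+\hat{e}+\text{h.c.}
```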
res = hnle.head_and_count(
hnle.solve_constraints_gecode(resonant, off_resonant, letters, maxfreq=50))
"""
Explanation: Try to solve (takes around a minute):
End of explanation
"""
# Drop all of the form aabb...
starts_with_aabb = [_ for _ in off_resonant if _.s[4:5].lower() not in 'ab' and _.s.startswith('AABB')]
# Drop all of the form ab...
starts_with_ab = [_ for _ in off_resonant if _.s[2:3].lower() not in 'ab' and _.s.startswith('AB')]
# Drop all that do not contain any a or b
no_a_or_b = [_ for _ in off_resonant if 'a' not in _.s.lower() or 'b' not in _.s.lower()]
to_be_removed = starts_with_aabb + starts_with_ab + no_a_or_b
print('Number of constraints starting with ab or aabb or containing no a or b: %d'%len(to_be_removed))
off_resonant = hnle.drop_matching(off_resonant, to_be_removed)
off_resonant = list(off_resonant)
print('Number of new off-resonant constraints: %d'%len(off_resonant))
"""
Explanation: Remove constraints on terms of the form $\hat{a}^2\hat{b}^2\dots$ or $\hat{a}\hat{b}\dots$ or those that do not contain any $\hat{a}$s or $\hat{b}$s.
End of explanation
"""
res = hnle.head_and_count(
hnle.solve_constraints_gecode(resonant, off_resonant, letters, maxfreq=33))
"""
Explanation: Try to solve again:
End of explanation
"""
res[0]
hnle.filter_resonant(res[0], to_be_removed, letters)
"""
Explanation: See which of the removed constraints fail:
For the first solution:
End of explanation
"""
res[1]
hnle.filter_resonant(res[1], to_be_removed, letters)
"""
Explanation: For the second solution:
End of explanation
"""
|
sympy/scipy-2017-codegen-tutorial | notebooks/23-lambdify-Tc99m.ipynb | bsd-3-clause | import sympy as sym
sym.init_printing()
symbs = t, l1, l2, x0, y0, z0 = sym.symbols('t lambda_1 lambda_2 x0 y0 z0', real=True, nonnegative=True)
funcs = x, y, z = [sym.Function(s)(t) for s in 'xyz']
inits = [f.subs(t, 0) for f in funcs]
diffs = [f.diff(t) for f in funcs]
exprs = -l1*x, l1*x - l2*y, l2*y
eqs = [sym.Eq(diff, expr) for diff, expr in zip(diffs, exprs)]
eqs
solutions = sym.dsolve(eqs)
solutions
"""
Explanation: Using lambdify for plotting expressions
The synthetic isotope Technetium-99m is used in medical diagnostics (scintigraphy):
$$
^{99m}Tc \overset{\lambda_1}{\longrightarrow} \,^{99}Tc \overset{\lambda_2}{\longrightarrow} \,^{99}Ru \\
\lambda_1 = 3.2\cdot 10^{-5}\,s^{-1} \\
\lambda_2 = 1.04 \cdot 10^{-13}\,s^{-1}
$$
SymPy can solve the differential equations describing the amounts versus time analytically.
Let's denote the concentrations of each isotope $x(t),\ y(t)\ \&\ z(t)$ respectively.
End of explanation
"""
integration_constants = set.union(*[sol.free_symbols for sol in solutions]) - set(symbs)
integration_constants
initial_values = [sol.subs(t, 0) for sol in solutions]
initial_values
const_exprs = sym.solve(initial_values, integration_constants)
const_exprs
analytic = [sol.subs(const_exprs) for sol in solutions]
analytic
"""
Explanation: Now we need to determine the integration constants from the initial conditions:
End of explanation
"""
from math import log10
import numpy as np
year_s = 365.25*24*3600
tout = np.logspace(0, log10(3e6*year_s), 500) # 1 s to 3 million years
%load_ext scipy2017codegen.exercise
"""
Explanation: Exercise: Create a function from a symbolic expression
We want to plot the time evolution of x, y & z from the above analytic expression (called analytic above):
End of explanation
"""
# %exercise exercise_Tc99.py
xyz_num = sym.lambdify([t, l1, l2, *inits], [eq.rhs for eq in analytic])
yout = xyz_num(tout, 3.2e-5, 1.04e-13, 1, 0, 0)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
ax.loglog(tout.reshape((tout.size, 1)), np.array(yout).T)
ax.legend(['$^{99m}Tc$', '$^{99}Tc$', '$^{99}Ru$'])
ax.set_xlabel('Time / s')
ax.set_ylabel('Concentration / a.u.')
_ = ax.set_ylim([1e-11, 2])
"""
Explanation: Use either the %exercise or %load magic to get the exercise / solution respecitvely:
Replace ??? so that f(t) evaluates $x(t),\ y(t)\ \&\ z(t)$. Hint: use the right hand side of the equations in analytic (use the attribute rhs of the items in anayltic):
End of explanation
"""
|
owenjhwilliams/ASIIT | FindSwirlLocs.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import h5py
from importlib import reload
import PIVutils
#PIVutils = reload(PIVutils)
X, Y, Swirl, Cond, Prof = PIVutils.importMatlabPIVdata2D('/Users/Owen/Dropbox/Data/ABL/SBL PIV data/RNV45-RI2.mat',['X','Y','Swirl'],['Cond','Prof'])
NanLocs = np.isnan(Swirl)
uSize = Swirl.shape
scale = (X[1,-1]-X[1,1])/(uSize[1]-1)
PIVutils = reload(PIVutils)
PIVutils.plotScalarField(X,Y,Swirl[:,:,1],50)
"""
Explanation: Test script to find all locations with large swirl
The aim is to take a velocity field, find all locations with large swirl, and then identify distinct blobs of swirl.
This script makes use of the Source Extraction and Photometry (SEP) library
End of explanation
"""
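findBlobs is imported from PIVutils, so its internals are not shown here; the underlying idea — connected-component labelling of the thresholded field plus a centroid per component — can be sketched in pure Python. The implementation below is illustrative, not the one PIVutils actually uses:

```python
from collections import deque

def label_blobs(mask):
    """4-connected component labelling of a boolean 2D mask.
    Returns (number of blobs, list of (row, col) centroids)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0][c0] and not seen[r0][c0]:
                # BFS flood fill over this blob
                queue, cells = deque([(r0, c0)]), []
                seen[r0][c0] = True
                while queue:
                    r, c = queue.popleft()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr][cc] and not seen[rr][cc]:
                            seen[rr][cc] = True
                            queue.append((rr, cc))
                centroids.append((sum(r for r, _ in cells) / len(cells),
                                  sum(c for _, c in cells) / len(cells)))
    return len(centroids), centroids

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
n, cent = label_blobs(mask)  # two separate blobs
```

Libraries typically implement the same idea far more efficiently (e.g. scipy.ndimage.label), but the BFS version makes the connectivity rule explicit.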
#Find profile of swirl std
SwirlStd = np.std(np.nanmean(Swirl,axis=2),axis = 1)
plt.plot(SwirlStd,Y[:,1])
plt.ylabel('y(m)')
plt.xlabel('Swirl rms')
#Normalize field by the std of Swirl
SwirlNorm = Swirl/SwirlStd.reshape(uSize[0],1,1) #match the SwirlStd length (123) with the correct index in Swirl (also 123)
"""
Explanation: Normalize the swirl for distance from the wall (if wanted)
End of explanation
"""
PIVutils.plotScalarField(X,Y,Swirl[:,:,1],30)
Swirl[NanLocs] = 0 #Get rid of nans for now
"""
Explanation: Plot after normalization
End of explanation
"""
import copy
SwirlFilt = copy.copy(Swirl) #think this should completely copy the list, allowing me to try things
#Swirl must be above a certain background value or it is zeroed
SwirlFilt[np.absolute(SwirlFilt)<7] = 0
#Then only keep those locations where swirls is greater than Thresh*SwirlStd
Thresh = 60
SwirlFilt[np.absolute(SwirlNorm)<Thresh] = 0
"""
Explanation: Create thresholded field
End of explanation
"""
PIVutils = reload(PIVutils)
[num_features,features_per_frame, labeled_array, cent] = PIVutils.findBlobs(SwirlFilt)
[f, ax] = PIVutils.plotScalarField(X,Y,SwirlFilt[:,:,1],10)
for i in range(features_per_frame[1]):
plt.plot(cent[1][i][1]*scale+X[1,1],cent[1][i][0]*scale+Y[1,1],'oy',markersize=4,markeredgecolor=None)
"""
Explanation: Find all blobs that agree with the thresholds
End of explanation
"""
PIVutils = reload(PIVutils)
Thresh = 50
[num_features,features_per_frame, labeled_array, cent] = PIVutils.findBlobs(SwirlFilt,50)
[f, ax] = PIVutils.plotScalarField(X,Y,SwirlFilt[:,:,1],10)
for i in range(features_per_frame[1]):
plt.plot(cent[1][i][1]*scale+X[1,1],cent[1][i][0]*scale+Y[1,1],'oy',markersize=4,markeredgecolor=None)
"""
Explanation: Find all blobs and filter for size
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cnrm-cerfacs/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
diging/methods | 2.1. Co-citation networks/2.1. Co-citation analysis.ipynb | gpl-3.0 | metadata = wos.read('../data/Baldwin/PlantPhysiology',
streaming=True, index_fields=['date', 'abstract'], index_features=['citations'])
len(metadata)
"""
Explanation: 2.1. Co-citation Analysis
In this workbook we will conduct a co-citation analysis using the approach outlined in Chen (2009). If you have used the Java-based desktop application CiteSpace II, this should be familiar: this is the same methodology that is implemented in that application.
Co-citation analysis gained popularity in the 1970s as a technique for “mapping” scientific literatures, and for finding latent semantic relationships among technical publications.
Two papers are co-cited if they are both cited by the same, third, paper. The standard approach to co-citation analysis is to generate a sample of bibliographic records from a particular field by using certain keywords or journal names, and then build a co-citation graph describing relationships among their cited references. Thus the majority of papers that are represented as nodes in the co-citation graph are not papers that responded to the selection criteria used to build the dataset.
Our objective in this tutorial is to identify papers that bridge the gap between otherwise disparate areas of knowledge in the scientific literature. In this tutorial, we rely on the theoretical framework described in Chen (2006) and Chen et al. (2009).
According to Chen, we can detect potentially transformative changes in scientific knowledge by looking for cited references that both (a) rapidly accrue citations, and (b) have high betweenness-centrality in a co-citation network. It helps if we think of each scientific paper as representing a “concept” (its core knowledge claim, perhaps), and a co-citation event as representing a proposition connecting two concepts in the knowledge-base of a scientific field. If a new paper emerges that is highly co-cited with two otherwise-distinct clusters of concepts, then that might mean that the field is adopting new concepts and propositions in a way that is structurally radical for their conceptual framework.
Chen (2009) introduces sigma ($\Sigma$) as a metric for potentially transformative cited references:
$$
\Sigma(v) = (g(v) + 1)^{burstness(v)}
$$
...where the betweenness centrality of each node v is:
$$
g(v) = \sum\limits_{i\neq j\neq v} \frac{\sigma_{ij} (v)}{\sigma_{ij}}
$$
...where $\sigma_{ij}$ is the number of shortest paths from node i to node j and $\sigma_{ij}(v)$ is the number of those paths that pass through v. Burstness (normalized to the range 0 to 1) is estimated using Kleinberg's (2002) automaton model, and is designed to detect rate spikes around features in a stream of documents.
Note: In this notebook we will not use burstness, but rather the relative increase/decrease in citations from one year to the next. Burstness is helpful when we are dealing with higher-resolution time-frames, and/or we want to monitor a long stream of citation data. Since we will smooth our data with a multi-year time-window, burstness becomes a bit less informative, and the year-over-year change in citations (we'll call this Delta $\Delta$) is an intuitive alternative.
Load data
Here we have some field-tagged data from the Web of Science. We set streaming=True so that we don't load everything into memory all at once.
Note: When we stream the corpus, it is important to set index_fields and index_features ahead of time, so that we don't have to iterate over the whole corpus later on.
End of explanation
"""
from tethne import cocitation
"""
Explanation: Co-citation graph
Tethne provides a function called cocitation that creates co-citation graphs.
End of explanation
"""
graph = cocitation(metadata, min_weight=6., edge_attrs=['date'])
graph.order(), graph.size(), nx.number_connected_components(graph)
"""
Explanation: Co-citation graphs can get enormous quickly, and so it is important to set a threshold number of times that a paper must be cited to be included in the graph (min_weight). It's better to start high, and bring the threshold down as needed.
Note that edge_attrs should be set to whatever was the value of index_fields when we used read() (above).
End of explanation
"""
nx.write_graphml(graph, 'cocitation.graphml')
"""
Explanation: Serialize
We can visualize our graph in Cytoscape to get a sense of its structure.
End of explanation
"""
from tethne import GraphCollection
G = GraphCollection(metadata, cocitation,
slice_kwargs={'feature_name': 'citations'},
method_kwargs={'min_weight': 3, 'edge_attrs': []})
for year, graph in G.iteritems():
print graph.order(), graph.size(), nx.number_connected_components(graph)
nx.write_graphml(graph, 'cocitation_%i.graphml' % year)
"""
Explanation: Sigma, $\Sigma$
Chen (2009) proposed sigma ($\Sigma$) as a metric for potentially transformations in a scientific literature.
$$
\Sigma(v) = (g(v) + 1)^{burstness(v)}
$$
Note: In this notebook we will not use burstness, but rather the relative increase/decrease in citations from one year to the next. Burstness is helpful when we are dealing with higher-resolution time-frames, and/or we want to monitor a long stream of citation data. Since we will smooth our data with a multi-year time-window, burstness becomes a bit less informative, and the year-over-year change in citations (we'll call this Delta $\Delta(v)$) is an intuitive alternative. So:
$$
\Sigma(v) = (g(v) + 1)^{\Delta(v)}
$$
$$
\Delta(v) = \frac{N_t(v) - N_{t-1}(v)}{\max(1, N_{t-1}(v))}
$$
GraphCollection
Since we are interested in the evolution of the co-citation graph over time, we need to create a series of sequential graphs. Tethne provides a class called GraphCollection that will do this for us.
We pass metadata (or a Corpus object), the cocitation function, and then some configuration information:
* slice_kwargs controls how the sequential time-slices are generated. The default is to use 1-year slices, and advance 1 year per slice. We are stating here that we want to extract only the citations feature from each slice (for performance).
* method_kwargs controls the graph-building function. Here we pass our min_weight, and we also say that we don't want any attributes on the edges (for performance).
End of explanation
"""
# 'betweenness_centrality' is the name of the algorithm
# in NetworkX that we want to use. ``invert=True`` means
# that we want to organize the g(v) values by node, rather
# than by time-period (the default).
g_v = G.analyze('betweenness_centrality', invert=True)
g_v.items()[89]
"""
Explanation: Betweenness centrality
Recall that the betweenness centrality of each node v is:
$$
g(v) = \sum\limits_{i\neq j\neq v} \frac{\sigma_{ij} (v)}{\sigma_{ij}}
$$
We can analyze all of the nodes in all of the graphs in our GraphCollection using its analyze() method.
End of explanation
"""
node_data = pd.DataFrame(columns=['ID', 'Node', 'Year', 'Citations', 'Centrality', 'Delta'])
i = 0
for n in G.node_index.keys():
if n < 0:
continue
# node_history() gets the values of a node attribute over
# all of the graphs in the GraphCollection.
g_n = G.node_history(n, 'betweenness_centrality')
N_n = G.node_history(n, 'count')
# Skip nodes whose g(v) never gets above 0.
if max(g_n.values()) == 0:
continue
years = sorted(G.keys()) # Graphs are keyed by year.
for year in years:
g_nt = g_n.get(year, 0.0) # Centrality for this year.
N_nt = float(N_n.get(year, 0.0)) # Citations for this year.
# For the second year and beyond, calculate Delta.
if year > years[0]:
N_nlast = N_n.get(year-1, 0.0)
delta = (N_nt - N_nlast)/max(N_nlast, 1.)
else:
delta = 0.0
# We will add one row per node per year.
node_data.loc[i] = [n, G.node_index[n], year, N_nt, g_nt, delta]
i += 1
"""
Explanation: Organize our data
In order to calculate $\Sigma$ more efficiently, we'll organize our data about the graph in a DataFrame. Our DataFrame will have the following columns:
ID - The GraphCollection gives each node an integer ID. We'll need this to identify nodes in Cytoscape, later.
Node - This is the Author Year Journal label from the WoS data for a cited reference.
Year - The year (time-slice) of the GraphCollection that the row describes.
Citations - The number of citations in that year.
Centrality - Betweenness centrality.
Delta - The year-over-year increase/decrease in citations.
This may take a bit, depending on the size of the graphs.
End of explanation
"""
node_data.to_csv('node_data.csv', encoding='utf-8')
"""
Explanation: That was fairly computationally expensive. We should save the results so that we don't have to do that again.
End of explanation
"""
# Note the ``.copy()`` at the end -- this means that the new DataFrame will be a
# stand-alone copy, and not just a "view" of the existing ``node_data`` DataFrame.
# The practical effect is that we can add new data to the new ``candidates``
# DataFrame without creating problems in the larger ``node_data`` DataFrame.
candidates = node_data[node_data.Centrality*node_data.Delta > 0.].copy()
"""
Explanation: Before calculating $\Sigma$ we will select a subset of the rows in our dataframe, to reduce the computational burden. Here we create a smaller DataFrame with only those rows in which both Centrality and Delta are not zero -- it should be obvious that these will have negligible $\Sigma$.
End of explanation
"""
candidates['Sigma'] = (1.+candidates.Centrality)**candidates.Delta
"""
Explanation: Now we calculate $\Sigma$. Vector math is great!
End of explanation
"""
candidates.sort_values('Sigma', ascending=False)
"""
Explanation: The nodes (in a given year) with the highest $\Sigma$ are our candidates for potential "transformations". Note that in Chen's model, the cited reference and its co-cited references are an expression of the "knowledge" of the field. In other words, the nodes and edges in our graph primarily say something about the records that are doing the citing, rather than the records that are cited. I.e. a node with high $\Sigma$ is indicative of a transformation, but that is a description of the papers that cite it and not necessarily of the paper that the node represents.
End of explanation
"""
clusters = pd.read_csv('clusters.dat', sep='\t', skiprows=9)
clusters
clusters['Node IDs'][0].split(', ')
citations = metadata.features['citations']
citing = Counter()
for reference in clusters['Node IDs'][0].split(', '):
for idx in citations.papers_containing(reference):
citing[idx] += 1.
chunk = [idx for idx, value in citing.items() if value > 2.]
"""
Explanation: Cluster labeling
End of explanation
"""
abstracts = {}
for abstract, wosid in metadata.indices['abstract'].iteritems():
print '\r', wosid[0],
abstracts[wosid[0]] = abstract
abstracts.items()[5]
document_token_counts = nltk.ConditionalFreqDist([
(wosid, normalize_token(token))
for wosid, abstract in abstracts.items()
for token in nltk.word_tokenize(abstract)
if filter_token(token)
])
extract_keywords(document_token_counts, lambda k: k in chunk)
cluster_keywords = {}
for i, row in clusters.iterrows():
citing = Counter()
for reference in row['Node IDs'].split(', '):
for idx in citations.papers_containing(reference):
citing[idx] += 1.
chunk = [idx for idx, value in citing.items() if value > 2.]
cluster_keywords[row.Cluster] = extract_keywords(document_token_counts, lambda k: k in chunk)
cluster_keywords
"""
Explanation: This step can take a few minutes.
End of explanation
"""
|
ecervera/mindstorms-nb | task/light_teacher.ipynb | mit | from functions import connect, light, forward, stop
connect()
"""
Explanation: Light Sensor
<img src="http://www.nxtprograms.com/line_follower/DCP_2945.JPG" align="right">
This sensor has two parts:
a red light-emitting diode
a light-detecting transistor, which returns a value proportional to the intensity of the light reflected by the surface
We will use the sensor to detect a black line, and control the robot so that it follows the trajectory.
First of all, we connect the robot.
End of explanation
"""
forward()
data = []
for i in range(25):
data.append(light())
stop()
"""
Explanation: Sensor Plot
To analyze the sensor values, we will make a plot, but this time while the robot is moving.
Place the robot perpendicular to the black line: the following code will drive the robot forward for a few seconds so that it crosses the line, reading the values before the line, over it, and after it.
End of explanation
"""
from functions import plot
plot(data)
"""
Explanation: Next, plot the data. You should get a kind of "V" shape, because the high values correspond to the white surface (which reflects more light) and the low ones to the black line (which reflects less).
End of explanation
"""
from functions import left, right
try:
while True:
if light()>40:
forward()
else:
right()
except KeyboardInterrupt:
stop()
"""
Explanation: You must choose an intermediate value to distinguish between white and black. This will be the threshold you use:
if the reading is above it, the sensor is detecting white
if it is below, black.
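As a minimal sketch (assuming a `data` list of readings like the one collected above), a reasonable threshold is the midpoint between the brightest and darkest readings:

```python
# Example readings crossing the line (white -> black -> white); use your own data
data = [52, 48, 45, 20, 12, 15, 44, 50]

# Midpoint between the white (max) and black (min) readings
threshold = (max(data) + min(data)) / 2
print(threshold)  # 32.0
```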
Line following, version 1.0
<img src="http://nxtprograms.com/NXT2/line_follower/DSC_3213.JPG" width=240 align="right">
To follow the line, the idea is simple:
the robot drives along the side of the line, on the inside of the closed circuit
if the sensor detects white, the robot goes straight
if it detects black, the robot turns to correct its trajectory
Depending on the direction in which it travels around the circuit, it will have to correct by turning either left or right.
End of explanation
"""
from functions import left_sharp, right_sharp
try:
while True:
if light()>40:
forward()
else:
right_sharp()
except KeyboardInterrupt:
stop()
"""
Explanation: Version 2.0
If the robot drifts off on a curve, a sharper turn is needed. The functions used so far turn one wheel while the other one is stopped. The robot can turn more sharply if, instead of being stopped, the other wheel spins in the opposite direction. To do that, use the following functions:
End of explanation
"""
try:
while True:
if light()>85:
left_sharp(speed=10)
else:
if light()<15:
right_sharp(speed=10)
else:
forward(speed=50)
except KeyboardInterrupt:
stop()
"""
Explanation: Version 3.0
In general, there can be curves to both the left and the right, so the previous method is not enough. One solution would be to use two sensors, one on each side of the line, but we only have one per robot.
Another solution is to make the sensor travel along the edge of the line, so that the surface underneath is half black and half white. If we define two thresholds, then:
if the sensor reading is below both thresholds, it is black
if it is above both, white
if it is in between, it is the edge of the line
The robot will go straight while it follows the edge of the line, and it can correct to the left in one case and to the right in the other.
End of explanation
"""
try:
while True:
if light()>40:
left_sharp(speed=50)
else:
if light()<15:
right_sharp(speed=50)
else:
forward(speed=50)
except KeyboardInterrupt:
stop()
"""
Explanation: Version 4.0
If the robot oscillates too much, it may even cross the line and lose track of it completely.
The solution is to reduce the speed, passing it to the movement functions, for example:
python
forward(speed=65)
The default value is 100 (the maximum); the minimum is 0.
End of explanation
"""
from functions import disconnect
disconnect()
"""
Explanation: Let's recap
Before going on, disconnect the robot:
End of explanation
"""
|
tensorflow/probability | discussion/examples/cross_gpu_logprob.ipynb | apache-2.0 | %tensorflow_version 2.x
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfb, tfd = tfp.bijectors, tfp.distributions
physical_gpus = tf.config.experimental.list_physical_devices('GPU')
print(physical_gpus)
tf.config.experimental.set_virtual_device_configuration(
physical_gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2000)] * 4)
gpus = tf.config.list_logical_devices('GPU')
print(gpus)
st = tf.distribute.MirroredStrategy(devices=tf.config.list_logical_devices('GPU'))
print(st.extended.worker_devices)
# Draw samples from an MVN, then sort them. This way we can easily visually
# verify the correct partition ends up on the correct GPUs.
ndim = 3
def model():
Root = tfd.JointDistributionCoroutine.Root
loc = yield Root(tfb.Shift(.5)(tfd.MultivariateNormalDiag(loc=tf.zeros([ndim]))))
scale_tril = yield Root(tfb.FillScaleTriL()(tfd.MultivariateNormalDiag(loc=tf.zeros([ndim * (ndim + 1) // 2]))))
yield tfd.MultivariateNormalTriL(loc=loc, scale_tril=scale_tril)
dist = tfd.JointDistributionCoroutine(model)
tf.random.set_seed(1)
loc, scale_tril, _ = dist.sample(seed=2)
samples = dist.sample(value=([loc] * 1024, scale_tril, None), seed=3)[2]
samples = tf.round(samples * 1000) / 1000
for dim in reversed(range(ndim)):
samples = tf.gather(samples, tf.argsort(samples[:,dim]))
print(samples)
print(loc)
print(scale_tril)
print(tf.reduce_mean(samples, 0))
"""
Explanation: The objective of this notebook is to demonstrate splitting a log_prob and gradient computation across a number of GPU devices. For development purposes, this was prototyped in colab with a single GPU partitioned into multiple logical GPUs.
Note: Since it runs on a single GPU, performance is not representative of what can be achieved with multiple GPUs. Usage of tf.data can likely benefit from some tuning when deployed to multiple GPUs.
Needs a GPU: Edit > Notebook Settings: Hardware Accelerator => GPU
End of explanation
"""
%%time
def dataset_fn(ctx):
batch_size = ctx.get_per_replica_batch_size(len(samples))
d = tf.data.Dataset.from_tensor_slices(samples).batch(batch_size)
return d.shard(ctx.num_input_pipelines, ctx.input_pipeline_id)
ds = st.experimental_distribute_datasets_from_function(dataset_fn)
observations = next(iter(ds))
# print(observations)
@tf.function(autograph=False)
def log_prob_and_grad(loc, scale_tril, observations):
ctx = tf.distribute.get_replica_context()
with tf.GradientTape() as tape:
tape.watch((loc, scale_tril))
lp = tf.reduce_sum(dist.log_prob(loc, scale_tril, observations)) / len(samples)
grad = tape.gradient(lp, (loc, scale_tril))
return ctx.all_reduce('sum', lp), [ctx.all_reduce('sum', g) for g in grad]
@tf.function(autograph=False)
@tf.custom_gradient
def target_log_prob(loc, scale_tril):
lp, grads = st.run(log_prob_and_grad, (loc, scale_tril, observations))
return lp.values[0], lambda grad_lp: [grad_lp * g.values[0] for g in grads]
singleton_vals = tfp.math.value_and_gradient(target_log_prob, (loc, scale_tril))
kernel = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob, step_size=.35, num_leapfrog_steps=2)
kernel = tfp.mcmc.TransformedTransitionKernel(kernel, bijector=[tfb.Identity(), tfb.FillScaleTriL()])
@tf.function(autograph=False)
def sample_chain():
return tfp.mcmc.sample_chain(
num_results=200, num_burnin_steps=100,
current_state=[tf.ones_like(loc), tf.linalg.eye(scale_tril.shape[-1])],
kernel=kernel, trace_fn=lambda _, kr: kr.inner_results.is_accepted)
samps, is_accepted = sample_chain()
print(f'accept rate: {np.mean(is_accepted)}')
print(f'ess: {tfp.mcmc.effective_sample_size(samps)}')
print(tf.reduce_mean(samps[0], axis=0))
# print(tf.reduce_mean(samps[1], axis=0))
import matplotlib.pyplot as plt
for dim in range(ndim):
plt.figure(figsize=(10,1))
plt.hist(samps[0][:,dim], bins=50)
plt.title(f'loc[{dim}]: prior mean = 0.5, observation = {loc[dim]}')
plt.show()
"""
Explanation: Single batch of data resident on GPU.
End of explanation
"""
%%time
batches_per_eval = 2
def dataset_fn(ctx):
batch_size = ctx.get_per_replica_batch_size(len(samples))
d = tf.data.Dataset.from_tensor_slices(samples).batch(batch_size // batches_per_eval)
return d.shard(ctx.num_input_pipelines, ctx.input_pipeline_id).prefetch(2)
ds = st.experimental_distribute_datasets_from_function(dataset_fn)
@tf.function(autograph=False)
def log_prob_and_grad(loc, scale_tril, observations, prev_sum_lp, prev_sum_grads):
with tf.GradientTape() as tape:
tape.watch((loc, scale_tril))
lp = tf.reduce_sum(dist.log_prob(loc, scale_tril, observations)) / len(samples)
grad = tape.gradient(lp, (loc, scale_tril))
return lp + prev_sum_lp, [g + pg for (g, pg) in zip(grad, prev_sum_grads)]
@tf.function(autograph=False)
@tf.custom_gradient
def target_log_prob(loc, scale_tril):
sum_lp = tf.zeros([])
sum_grads = [tf.zeros_like(x) for x in (loc, scale_tril)]
sum_lp, sum_grads = st.run(
lambda *x: tf.nest.map_structure(tf.identity, x), (sum_lp, sum_grads))
def reduce_fn(state, observations):
sum_lp, sum_grads = state
return st.run(
log_prob_and_grad, (loc, scale_tril, observations, sum_lp, sum_grads))
sum_lp, sum_grads = ds.reduce((sum_lp, sum_grads), reduce_fn)
sum_lp = st.reduce('sum', sum_lp, None)
sum_grads = [st.reduce('sum', sg, None) for sg in sum_grads]
return sum_lp, lambda grad_lp: [grad_lp * sg for sg in sum_grads]
multibatch_vals = tfp.math.value_and_gradient(target_log_prob, (loc, scale_tril))
kernel = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob, step_size=.35, num_leapfrog_steps=2)
kernel = tfp.mcmc.TransformedTransitionKernel(kernel, bijector=[tfb.Identity(), tfb.FillScaleTriL()])
@tf.function(autograph=False)
def sample_chain():
return tfp.mcmc.sample_chain(
num_results=200, num_burnin_steps=100,
current_state=[tf.ones_like(loc), tf.linalg.eye(scale_tril.shape[-1])],
kernel=kernel, trace_fn=lambda _, kr: kr.inner_results.is_accepted)
samps, is_accepted = sample_chain()
print(f'accept rate: {np.mean(is_accepted)}')
print(f'ess: {tfp.mcmc.effective_sample_size(samps)}')
print(tf.reduce_mean(samps[0], axis=0))
# print(tf.reduce_mean(samps[1], axis=0))
import matplotlib.pyplot as plt
for dim in range(ndim):
plt.figure(figsize=(10,1))
plt.hist(samps[0][:,dim], bins=50)
plt.title(f'loc[{dim}]: prior mean = 0.5, observation = {loc[dim]}')
plt.show()
"""
Explanation: Two batches of data per log-prob eval (2x slower).
End of explanation
"""
for i, (sv, mv) in enumerate(zip(tf.nest.flatten(singleton_vals),
tf.nest.flatten(multibatch_vals))):
np.testing.assert_allclose(sv, mv, err_msg=i, rtol=1e-5)
"""
Explanation: Sanity check logprob and gradients.
End of explanation
"""
|
mjones01/NEON-Data-Skills | code/Python/remote-sensing/hyperspectral-data/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb | agpl-3.0 | import matplotlib.pyplot as plt
import numpy as np
import h5py, os, osr, copy
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: syncID: e046a83d83f2042d8b40dea1b20fd6779
title: "Band Stacking, RGB & False Color Images, and Interactive Widgets in Python - Tiled Data"
description: "Learn to efficiently work with tiled NEON AOP spectral data using functions."
dateCreated: 2017-06-19
authors: Bridget Hass
contributors:
estimatedTime:
packagesLibraries: numpy, matplotlib, h5py, os, osr, copy
topics: hyperspectral-remote-sensing, HDF5, remote-sensing
languagesTool: python
dataProduct: NEON.DP3.30006, NEON.DP3.30008
code1: Python/remote-sensing/hyperspectral-data/NEON_AOP_Hyperspectral_Functions_Tiles_py.ipynb
tutorialSeries: intro-hsi-tiles-py-series
urlTitle: neon-hsi-aop-functions-tiles-python
In this tutorial, we learn how to efficiently read in hdf5 data using h5py,
apply the no-data value and scale factor, and plot a single band of a
reflectance data tile using functions built for NEON AOP data. We will
introduce the function aop_h5refl2array, plot different combinations of
bands, and demonstrate how to create IPython widgets for more interactive
data visualization.
<div id="ds-ojectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Upload a Python module
* Efficiently work with NEON hyperspectral data using functions, including:
+ Read in tiled NEON AOP reflectance hdf5 data and associated metadata
    + Stack and plot 3-band combinations (e.g., RGB, Color Infrared, False Color Images)
* Use IPython widgets to explore RGB band combinations interactively
* Understand how to write and use functions and loops to automate repeated processes
### Install Python Packages
* **numpy**
* **pandas**
* **gdal**
* **matplotlib**
* **h5py**
### Download Data
{% include/dataSubsets/_data_DI18.html %}
[[nid:7489]]
</div>
We can combine any three bands from the NEON reflectance data to make an RGB
image that will depict different information about the Earth's surface.
A natural color image, made with bands from the red, green, and blue
wavelengths looks close to what we would see with the naked eye. We can also
choose band combinations from other wavelengths, and map them to the red, blue,
and green colors to highlight different features. A false color image is
made with one or more bands from a non-visible portion of the electromagnetic
spectrum that are mapped to red, green, and blue colors. These images can
display other information about the landscape that is not easily seen with a
natural color image.
The NASA Goddard Media Studio video "Peeling Back Landsat's Layers of Data"
gives a good quick overview of natural and false color band combinations. Note
that Landsat collects information from 11 wavelength bands, while NEON AOP
hyperspectral data collects information from 426 bands!
Peeling Back Landsat's Layers of Data Video
<iframe width="560" height="315" src="https://www.youtube.com/embed/YP0et8l_bvY" frameborder="0" allowfullscreen></iframe>
Further Reading
Check out the NASA Earth Observatory article
<a href="https://earthobservatory.nasa.gov/Features/FalseColor/" target="_blank">How to Interpret a False-Color Satellite Image</a>.
Read the supporting article for the video above,
<a href="https://svs.gsfc.nasa.gov//vis/a010000/a011400/a011491/index.html" target="_blank"> Landsat 8 Onion Skin</a>.
Load Function Module
Before we get started, let's set up our plot and warning preferences:
End of explanation
"""
def aop_h5refl2array(refl_filename):
"""aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns
1. reflectance array (with the no data value and reflectance scale factor applied)
2. dictionary of metadata including spatial information, and wavelengths of the bands
--------
Parameters
refl_filename -- full or relative path and name of reflectance hdf5 file
--------
Returns
--------
reflArray:
array of reflectance values
metadata:
dictionary containing the following metadata:
bad_band_window1 (tuple)
bad_band_window2 (tuple)
bands: # of bands (float)
data ignore value: value corresponding to no data (float)
epsg: coordinate system code (float)
map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)
reflectance scale factor: factor by which reflectance is scaled (float)
wavelength: wavelength values (float)
wavelength unit: 'm' (string)
--------
NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should be used for
data acquired 2016 and after. Data in earlier NEON hdf5 format (collected prior to 2016) is
expected to be re-processed after the 2018 flight season.
--------
Example Execution:
--------
sercRefl, sercRefl_metadata = h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') """
import h5py
#Read in reflectance hdf5 file
hdf5_file = h5py.File(refl_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
#Extract the reflectance & wavelength datasets
refl = hdf5_file[sitename]['Reflectance']
reflData = refl['Reflectance_Data']
reflRaw = refl['Reflectance_Data'].value
#Create dictionary containing relevant metadata information
metadata = {}
metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value
metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value
#Extract no data value & scale factor
metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value'])
metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor'])
#metadata['interleave'] = reflData.attrs['Interleave']
#Apply no data value
reflClean = reflRaw.astype(float)
arr_size = reflClean.shape
if metadata['data ignore value'] in reflRaw:
print('% No Data: ',np.round(np.count_nonzero(reflClean==metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]),1))
nodata_ind = np.where(reflClean==metadata['data ignore value'])
reflClean[nodata_ind]=np.nan
#Apply scale factor
reflArray = reflClean/metadata['reflectance scale factor']
#Extract spatial extent from attributes
metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters']
#Extract bad band windows
metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers'])
#Extract projection information
#metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['epsg'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
#Extract map information: spatial extent & resolution (pixel size)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
hdf5_file.close()
return reflArray, metadata
"""
Explanation: The first function we will use is aop_h5refl2array. This function is loaded into the cell below, we encourage you to look through the code to understand what it is doing -- most of these steps should look familiar to you from the first lesson. This function can be thought of as a wrapper to automate the steps required to read AOP hdf5 reflectance tiles into a Python format. This function also cleans the data: it sets any no data values within the reflectance tile to nan (not a number) and applies the reflectance scale factor so the final array that is returned represents unitless scaled reflectance, with values ranging between 0 and 1 (0-100%).
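As a toy sketch of the cleaning steps inside the function (made-up values, not real NEON data), applying the no-data value and scale factor looks like:

```python
import numpy as np

# Toy "raw reflectance" with a no-data value of -9999 and a scale factor of 10000
raw = np.array([[5000.0, -9999.0],
                [10000.0, 2500.0]])
no_data, scale_factor = -9999.0, 10000.0

clean = raw.astype(float)
clean[clean == no_data] = np.nan     # mask no-data pixels
refl = clean / scale_factor          # unitless scaled reflectance in [0, 1]
print(refl)
```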
End of explanation
"""
help(aop_h5refl2array)
aop_h5refl2array?
"""
Explanation: If you forget what this function does, or don't want to scroll up to read the docstrings, remember you can use help or ? to display the associated docstrings.
End of explanation
"""
serc_h5_tile = ('../data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
"""
Explanation: Now that we have an idea of how this function works, let's try it out. First, define the path to the reflectance data file. Note that if you want to run this notebook later on a different reflectance tile, you just have to change this variable.
End of explanation
"""
sercRefl,sercMetadata = aop_h5refl2array(serc_h5_tile)
"""
Explanation: Now that we've specified our reflectance tile, we can call aop_h5refl2array to read in the reflectance tile as a python array called sercRefl , and the associated metadata into a dictionary sercMetadata
End of explanation
"""
sercRefl.shape
"""
Explanation: We can use the shape method to see the dimensions of the array we read in. NEON tiles are (1000 x 1000 x # of bands), the number of bands may vary depending on the hyperspectral sensor used, but should be in the vicinity of 426.
End of explanation
"""
def plot_aop_refl(band_array,refl_extent,colorlimit=(0,1),ax=plt.gca(),title='',cbar ='on',cmap_title='',colormap='Greys'):
'''plot_refl_data reads in and plots a single band or 3 stacked bands of a reflectance array
--------
Parameters
--------
band_array: array of reflectance values, created from aop_h5refl2array
refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax)
use metadata['spatial extent'] from aop_h5refl2array function
colorlimit: optional, range of values to plot (min,max).
- helpful to look at the histogram of reflectance values before plotting to determine colorlimit.
ax: optional, default = current axis
title: optional; plot title (string)
cmap_title: optional; colorbar title
colormap: optional (string, see https://matplotlib.org/examples/color/colormaps_reference.html) for list of colormaps
--------
Returns
--------
plots flightline array of single band of reflectance data
--------
Examples:
--------
plot_aop_refl(sercb56,
sercMetadata['spatial extent'],
colorlimit=(0,0.3),
title='SERC Band 56 Reflectance',
cmap_title='Reflectance',
colormap='Greys_r') '''
import matplotlib.pyplot as plt
plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
if cbar == 'on':
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation for ticklabels
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
"""
Explanation: plot_aop_refl: plot a single band
Next we'll use the function plot_aop_refl to plot a single band of reflectance data. Read the Parameters section of the docstring to understand the required inputs & data type for each of these; only the band and spatial extent are required inputs, the rest are optional inputs that, if specified, allow you to set the range color values, specify the axis, add a title, colorbar, colorbar title, and change the colormap (default is to plot in greyscale).
End of explanation
"""
sercb56 = sercRefl[:,:,55]
plot_aop_refl(sercb56,
sercMetadata['spatial extent'],
colorlimit=(0,0.3),
title='SERC Band 56 Reflectance',
cmap_title='Reflectance',
colormap='Greys_r')
"""
Explanation: Now that we have loaded this function, let's extract a single band from the SERC reflectance array and plot it:
End of explanation
"""
def stack_rgb(reflArray,bands):
import numpy as np
red = reflArray[:,:,bands[0]-1]
green = reflArray[:,:,bands[1]-1]
blue = reflArray[:,:,bands[2]-1]
stackedRGB = np.stack((red,green,blue),axis=2)
return stackedRGB
"""
Explanation: RGB Plots - Band Stacking
It is often useful to look at several bands together. We can extract and stack three reflectance bands in the red, green, and blue (RGB) portions of the spectrum to produce a color image that looks like what we see with our eyes; this is your typical camera image. In the next part of this tutorial, we will learn to stack multiple bands and make a GeoTIFF raster from the compilation of these bands. Different combinations of bands allow for different visualizations of the remotely sensed objects and also convey useful information about the chemical makeup of the Earth's surface.
We will select bands that fall within the visible range of the electromagnetic
spectrum (400-700 nm) and at specific points that correspond to what we see
as red, green, and blue.
<figure>
<a href="{{ site.baseurl }}/images/hyperspectral/spectrum_RGBcombined.jpg">
<img src="{{ site.baseurl }}/images/hyperspectral/spectrum_RGBcombined.jpg"></a>
<figcaption> NEON Imaging Spectrometer bands and their respective nanometers. Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
For this exercise, we'll first use the neon_aop_module function stack_rgb to extract the bands we want to stack. This function uses splicing to extract the nth band from the reflectance array, and then uses the numpy function stack to create a new 3D array (1000 x 1000 x 3) consisting of only the three bands we want.
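A toy illustration of that splicing-and-stacking step (a small stand-in cube, not the SERC tile):

```python
import numpy as np

cube = np.arange(24).reshape(2, 2, 6)          # toy "reflectance" cube with 6 bands
bands = (4, 2, 1)                              # 1-based band numbers, as in stack_rgb
rgb = np.stack((cube[:, :, bands[0] - 1],      # splice out each band, then stack
                cube[:, :, bands[1] - 1],
                cube[:, :, bands[2] - 1]), axis=2)
print(rgb.shape)  # (2, 2, 3)
```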
End of explanation
"""
rgb_bands = (58,34,19)
print('Band 58 Center Wavelength = %.2f' %(sercMetadata['wavelength'][57]),'nm')
print('Band 34 Center Wavelength = %.2f' %(sercMetadata['wavelength'][33]),'nm')
print('Band 19 Center Wavelength = %.2f' %(sercMetadata['wavelength'][18]),'nm')
"""
Explanation: First, we will look at red, green, and blue bands, whose indices are defined below. To confirm that these band indices correspond to wavelengths in the expected portion of the spectrum, we can print out the wavelength values stored in metadata['wavelength']:
End of explanation
"""
SERCrgb = stack_rgb(sercRefl,rgb_bands)
SERCrgb.shape
"""
Explanation: Below we use stack_rgb to create an RGB array. Check that the dimensions of this array are as expected.
Data Tip: Checking the shape of arrays with .shape is a good habit to get into when creating your own workflows, and can be a handy tool for troubleshooting.
End of explanation
"""
plot_aop_refl(SERCrgb,
sercMetadata['spatial extent'],
title='SERC RGB Image',
cbar='off')
"""
Explanation: plot_aop_refl: plot an RGB band combination
Next, we can use the function plot_aop_refl, even though we have more than one band. This function only works for single-band or 3-band arrays, so ensure the array you use has the proper dimensions before using it. You do not need to specify the colorlimits, as matplotlib.pyplot automatically scales 3-band arrays to 8-bit color (256).
End of explanation
"""
from skimage import exposure
def plot_aop_rgb(rgbArray,ext,ls_pct=5,plot_title=''):
from skimage import exposure
pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct,100-ls_pct))
img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=ext)
plt.title(plot_title + '\n Linear ' + str(ls_pct) + '% Contrast Stretch');
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
plot_aop_rgb(SERCrgb,
sercMetadata['spatial extent'],
plot_title = 'SERC RGB')
"""
Explanation: You'll notice that this image is very dark; it is possible to make out some of the features (roads, buildings), but it is not ideal. Since colorlimits don't apply to 3-band images, we have to use some other image processing tools to enhance the visibility of this image.
Image Processing -- Contrast Stretch & Histogram Equalization
We can also try out some image processing routines to better visualize the reflectance data using the scikit-image package.
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image by eliminating very high or low reflectance values that skew the display of the image.
<figure>
<a href="{{ site.baseurl }}/images/hyperspectral/histogram_equalization.png">
<img src="{{ site.baseurl }}/images/hyperspectral/histogram_equalization.png"></a>
<figcaption> Histogram equalization is a method in image processing of contrast adjustment
using the image's histogram. Stretching the histogram can improve the contrast
of a displayed image, as we will show how to do below.
Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a>
</figcaption>
</figure>
The following tutorial section is adapted from scikit-image's tutorial
<a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank"> Histogram Equalization</a>.
Let's see what the image looks like with a 5% linear contrast stretch, using the skimage exposure module.
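The percentile clip behind a linear stretch can be sketched with plain numpy (toy values, not the SERC tile):

```python
import numpy as np

band = np.array([0.01, 0.05, 0.2, 0.4, 0.6, 0.95])
p_low, p_high = np.percentile(band, (5, 95))            # drop the extreme 5% tails
stretched = np.clip((band - p_low) / (p_high - p_low), 0, 1)
print(stretched)
```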
End of explanation
"""
CIRbands = (90,34,19)
print('Band 90 Center Wavelength = %.2f' %(sercMetadata['wavelength'][89]),'nm')
print('Band 34 Center Wavelength = %.2f' %(sercMetadata['wavelength'][33]),'nm')
print('Band 19 Center Wavelength = %.2f' %(sercMetadata['wavelength'][18]),'nm')
SERCcir = stack_rgb(sercRefl,CIRbands)
plot_aop_rgb(SERCcir,
sercMetadata['spatial extent'],
ls_pct=2,
plot_title='SERC CIR')
"""
Explanation: False Color Image - Color Infrared (CIR)
We can also create an image from bands outside of the visible spectrum. An image containing one or more bands outside of the visible range is called a false-color image. Here we'll use the green and blue bands as before, but we replace the red band with a near-infrared (NIR) band.
For more information about non-visible wavelengths, false color images, and some frequently used false-color band combinations, refer to <a href="https://earthobservatory.nasa.gov/Features/FalseColor/" target="_blank">NASA's Earth Observatory page</a>.
End of explanation
"""
from IPython.html.widgets import *
array = copy.copy(sercRefl)
metadata = copy.copy(sercMetadata)
def RGBplot_widget(R,G,B):
#Pre-allocate array size
rgbArray = np.zeros((array.shape[0],array.shape[1],3), 'uint8')
Rband = array[:,:,R-1].astype(np.float)
#Rband_clean = clean_band(Rband,Refl_md)
Gband = array[:,:,G-1].astype(np.float)
#Gband_clean = clean_band(Gband,Refl_md)
Bband = array[:,:,B-1].astype(np.float)
#Bband_clean = clean_band(Bband,Refl_md)
rgbArray[..., 0] = Rband*256
rgbArray[..., 1] = Gband*256
rgbArray[..., 2] = Bband*256
# Apply Adaptive Histogram Equalization to Improve Contrast:
img_nonan = np.ma.masked_invalid(rgbArray) #first mask the image
img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=0.10)
plot = plt.imshow(img_adapteq,extent=metadata['spatial extent']);
plt.title('Bands: \nR:' + str(R) + ' (' + str(round(metadata['wavelength'][R-1])) +'nm)'
+ '\n G:' + str(G) + ' (' + str(round(metadata['wavelength'][G-1])) + 'nm)'
+ '\n B:' + str(B) + ' (' + str(round(metadata['wavelength'][B-1])) + 'nm)');
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
interact(RGBplot_widget, R=(1,426,1), G=(1,426,1), B=(1,426,1))
"""
Explanation: Demo: Exploring Band Combinations Interactively
Now that we have made a couple different band combinations, we can demo a Python widget to explore different combinations of bands in the visible and non-visible portions of the spectrum.
End of explanation
"""
rgbArray = copy.copy(SERCrgb)
def linearStretch(percent):
pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (percent,100-percent))
img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=sercMetadata['spatial extent'])
plt.title('SERC RGB \n Linear ' + str(percent) + '% Contrast Stretch');
ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
interact(linearStretch,percent=(0,20,1))
"""
Explanation: Demo: Interactive Linear Stretch & Equalization
Here is another widget to play around with, demonstrating how to interactively visualize linear contrast stretches with a variable percent.
End of explanation
"""
|
Bowenislandsong/Distributivecom | Archive/Hyperparameter Optimization.ipynb | gpl-3.0 | import numpy as np
import ray
@ray.remote
def train_cnn_and_compute_accuracy(hyperparameters,
train_images,
train_labels,
validation_images,
validation_labels):
# Construct a deep network, train it, and return the accuracy on the
# validation data.
return np.random.uniform(0, 1)
"""
Explanation: Machine learning algorithms often have a number of hyperparameters whose values must be chosen by the practitioner. For example, an optimization algorithm may have a step size, a decay rate, and a regularization coefficient. In a deep network, the network parameterization itself (e.g., the number of layers and the number of units per layer) can be considered a hyperparameter.
Choosing these parameters can be challenging, and so a common practice is to search over the space of hyperparameters. One approach that works surprisingly well is to randomly sample different options.
Problem Setup
Suppose that we want to train a convolutional network, but we aren't sure how to choose the following hyperparameters:
the learning rate
the batch size
the dropout probability
the standard deviation of the distribution from which to initialize the network weights
Suppose that we've defined a remote function train_cnn_and_compute_accuracy, which takes values for these hyperparameters as its input (along with the dataset), trains a convolutional network using those hyperparameters, and returns the accuracy of the trained model on a validation set.
End of explanation
"""
def generate_hyperparameters():
# Randomly choose values for the hyperparameters.
return {"learning_rate": 10 ** np.random.uniform(-5, 5),
"batch_size": np.random.randint(1, 100),
"dropout": np.random.uniform(0, 1),
"stddev": 10 ** np.random.uniform(-5, 5)}
"""
Explanation: Basic random search
Something that works surprisingly well is to try random values for the hyperparameters. For example, we can write a function that randomly generates hyperparameter configurations.
End of explanation
"""
import ray
ray.init()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
train_images = ray.put(mnist.train.images)
train_labels = ray.put(mnist.train.labels)
validation_images = ray.put(mnist.validation.images)
validation_labels = ray.put(mnist.validation.labels)
"""
Explanation: In addition, let's assume that we've started Ray and loaded some data.
End of explanation
"""
# Generate a bunch of hyperparameter configurations.
hyperparameter_configurations = [generate_hyperparameters() for _ in
range(20)]
# Launch some experiments.
results = []
for hyperparameters in hyperparameter_configurations:
results.append(train_cnn_and_compute_accuracy.remote(hyperparameters,
train_images,
train_labels,
validation_images,
validation_labels))
# Get the results.
accuracies = ray.get(results)
print(accuracies)
"""
Explanation: Then basic random hyperparameter search looks something like this. We launch a bunch of experiments, and we get the results.
End of explanation
"""
# Launch some experiments.
remaining_ids = []
for hyperparameters in hyperparameter_configurations:
remaining_ids.append(train_cnn_and_compute_accuracy.remote(hyperparameters,
train_images,
train_labels,
validation_images,
validation_labels))
# Whenever a new experiment finishes, print the value and start a new
# experiment
for i in range(10):
ready_ids, remaining_ids = ray.wait(remaining_ids, num_returns=1)
accuracy = ray.get(ready_ids[0])
print("Accuracy is {}".format(accuracy))
# Start a new experiment.
new_hyperparameters = generate_hyperparameters()
remaining_ids.append(train_cnn_and_compute_accuracy.remote(new_hyperparameters,
train_images,
train_labels,
validation_images,
validation_labels))
"""
Explanation: Then we can inspect the contents of accuracies and see which set of hyperparameters worked the best. Note that in the above example, the for loop will run instantaneously and the program will block in the call to ray.get, which will wait until all of the experiments have finished.
Processing results as they become available
One problem with the above approach is that you have to wait for all of the experiments to finish before you can process the results. Instead, you may want to process the results as they become available, perhaps in order to adaptively choose new experiments to run, or perhaps simply so you know how well the experiments are doing. To process the results as they become available, we can use the ray.wait primitive.
The simplest usage is the following. This example is implemented in more detail in driver.py.
End of explanation
"""
@ray.remote
def train_cnn_and_compute_accuracy(hyperparameters, model=None):
# Construct a deep network, train it, and return the accuracy on the
# validation data as well as the latest version of the model. If the
# model argument is not None, this will continue training an existing
# model.
validation_accuracy = np.random.uniform(0, 1)
new_model = model
return validation_accuracy, new_model
"""
Explanation: More sophisticated hyperparameter search
Hyperparameter search algorithms can get much more sophisticated. So far, we've been treating the function train_cnn_and_compute_accuracy as a black box: we can choose its inputs and inspect its outputs, but once we decide to run it, we have to let it run until it finishes.
However, there is often more structure to be exploited. For example, if the training procedure is going poorly, we can end the session early and invest more resources in the more promising hyperparameter experiments. And if we’ve saved the state of the training procedure, we can always restart it again later.
This is one of the ideas of the Hyperband algorithm. Start with a huge number of hyperparameter configurations, aggressively stop the bad ones, and invest more resources in the promising experiments.
To implement this, we can first adapt our training method to optionally take a model and to return the updated model.
End of explanation
"""
import numpy as np
def is_promising(model):
# Return true if the model is doing well and false otherwise. In
# practice, this function will want more information than just the
# model.
return np.random.choice([True, False])
# Start 10 experiments, remembering which hyperparameters each one uses.
remaining_ids = []
hyperparameters_mapping = {}
for _ in range(10):
    hyperparameters = generate_hyperparameters()
    experiment_id = train_cnn_and_compute_accuracy.remote(hyperparameters,
                                                          model=None)
    remaining_ids.append(experiment_id)
    hyperparameters_mapping[experiment_id] = hyperparameters
accuracies = []
for i in range(100):
    # Whenever a segment of an experiment finishes, decide if it looks
    # promising or not.
    ready_ids, remaining_ids = ray.wait(remaining_ids, num_returns=1)
    experiment_id = ready_ids[0]
    current_accuracy, current_model = ray.get(experiment_id)
    accuracies.append(current_accuracy)
    hyperparameters = hyperparameters_mapping.pop(experiment_id)
    if is_promising(current_model):
        # Continue running the experiment with the same hyperparameters.
        experiment_id = train_cnn_and_compute_accuracy.remote(hyperparameters,
                                                              model=current_model)
    else:
        # Terminate it and start a new experiment.
        hyperparameters = generate_hyperparameters()
        experiment_id = train_cnn_and_compute_accuracy.remote(hyperparameters)
    remaining_ids.append(experiment_id)
    hyperparameters_mapping[experiment_id] = hyperparameters
print(accuracies)
"""
Explanation: Here’s a different variant that uses the same principles. Divide each training session into a series of shorter training sessions. Whenever a short session finishes, if it still looks promising, then continue running it. If it isn’t doing well, then terminate it and start a new experiment.
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Marks/Pyplot/Bins.ipynb | apache-2.0 | # Create a sample of Gaussian draws
np.random.seed(0)
x_data = np.random.randn(1000)
"""
Explanation: Bins Mark
This Mark is essentially the same as the Hist Mark from a user point of view, but is actually a Bars instance that bins sample data.
The difference with Hist is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend.
End of explanation
"""
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
"""
Explanation: Give the Bins mark the data you want to bin as the sample argument, and also give it 'x' and 'y' scales.
End of explanation
"""
hist.x, hist.y
"""
Explanation: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y:
End of explanation
"""
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
# Changing the number of bins
hist.bins = 'sqrt'
# Changing the range
hist.min = 0
"""
Explanation: Tuning the bins
Under the hood, the Bins mark is really a Bars mark, with some additional magic to control the binning. The data in sample is binned into equal-width bins. The parameters controlling the binning are the following traits:
bins sets the number of bins. It is either a fixed integer (10 by default), or the name of a method to determine the number of bins in a smart way ('auto', 'fd', 'doane', 'scott', 'rice', 'sturges' or 'sqrt').
min and max set the range of the data (sample) to be binned
density, if set to True, normalizes the heights of the bars.
For more information, see the documentation of numpy's histogram
End of explanation
"""
# Normalizing the count
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, density=True)
fig
# changing the color
hist.colors = ['orangered']
# stroke and opacity update
hist.stroke = 'orange'
hist.opacities = [0.5] * len(hist.x)
# Laying the histogram on its side
hist.orientation = 'horizontal'
fig.axes[0].orientation = 'vertical'
fig.axes[1].orientation = 'horizontal'
"""
Explanation: Histogram Styling
The styling of Hist is identical to the one of Bars
End of explanation
"""
|
cathywu/flow | tutorials/tutorial04_rllab.ipynb | mit | # ring road scenario class
from flow.scenarios.loop import LoopScenario
# input parameter classes to the scenario class
from flow.core.params import NetParams, InitialConfig
# name of the scenario
name = "training_example"
# network-specific parameters
from flow.scenarios.loop import ADDITIONAL_NET_PARAMS
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
# initial configuration to vehicles
initial_config = InitialConfig(spacing="uniform", perturbation=1)
# traffic lights (empty)
from flow.core.params import TrafficLightParams
traffic_lights = TrafficLightParams()
"""
Explanation: Tutorial 04: Running rllab Experiments
This tutorial walks you through the process of running traffic simulations in Flow with trainable rllab-powered agents. Autonomous agents will learn to maximize a certain reward over the rollouts, using the rllab library [1]. Simulations of this form will depict the propensity of RL agents to influence the traffic of a human fleet in order to make the whole fleet more efficient (for some given metrics).
In this exercise, we simulate an initially perturbed single lane ring road, where we introduce a single autonomous vehicle. We witness that, after some training, the autonomous vehicle learns to dissipate the formation and propagation of "phantom jams" which form when only human driver dynamics are involved.
1. Components of a Simulation
All simulations, both in the presence and absence of RL, require two components: a scenario, and an environment. Scenarios describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc... in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the scenario. Moreover, custom environments may be used to modify the dynamical features of a scenario. Finally, in the RL case, it is in the environment that the state/action spaces and the reward function are defined.
2. Setting up a Scenario
Flow contains a plethora of pre-designed scenarios used to replicate highways, intersections, and merges in both closed and open settings. All these scenarios are located in flow/scenarios. For this exercise, which involves a single lane ring road, we will use the scenario LoopScenario.
2.1 Setting up Scenario Parameters
The scenario mentioned at the start of this section, as well as all other scenarios in Flow, are parameterized by the following arguments:
* name
* vehicles
* net_params
* initial_config
* traffic_lights
These parameters are explained in detail in exercise 1. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous exercise. Accordingly, we specify them as we have before, and leave further explanations of the parameters to exercise 1.
End of explanation
"""
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=21)
"""
Explanation: 2.2 Adding Trainable Autonomous Vehicles
The VehicleParams class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various get methods within this class.
The dynamics of vehicles in the VehicleParams class can either be depicted by sumo or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous Gaussian acceleration noise with std 0.2 m/s2 to induce perturbations that produce stop-and-go behavior. In addition, we use the ContinuousRouter routing controller so that the vehicles may maintain their routes within closed networks.
As we have done in exercise 1, human-driven vehicles are defined in the VehicleParams class as follows:
End of explanation
"""
from flow.controllers import RLController
"""
Explanation: The above addition to the Vehicles class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an RLController as the acceleration controller to the vehicle.
End of explanation
"""
vehicles.add(veh_id="rl",
acceleration_controller=(RLController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=1)
"""
Explanation: Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent, meaning that lane changing and routing actions can also be specified by the RL agent to this vehicle.
We finally add the vehicle as follows, while again using the ContinuousRouter to perpetually maintain the vehicle within the network.
End of explanation
"""
scenario = LoopScenario(name="ring_example",
vehicles=vehicles,
net_params=net_params,
initial_config=initial_config,
traffic_lights=traffic_lights)
"""
Explanation: 2.3 Scenario Object
We are finally ready to create the scenario object, as we had done in exercise 1.
End of explanation
"""
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=False)
"""
Explanation: 3. Setting up an Environment
Several environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward that simulation rollouts receive, and to specify the state/action spaces.
Envrionments in Flow are parametrized by three components:
* env_params
* sumo_params
* scenario
3.1 SumoParams
SumoParams specifies simulation-specific variables. These variables include the length of any simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and deactivate the GUI.
Note For training purposes, it is highly recommended to deactivate the GUI in order to avoid a global slowdown. In that case, one just needs to specify the following: render=False
End of explanation
"""
from flow.core.params import EnvParams
env_params = EnvParams(
# length of one rollout
horizon=100,
additional_params={
# maximum acceleration of autonomous vehicles
"max_accel": 1,
# maximum deceleration of autonomous vehicles
"max_decel": 1,
# bounds on the ranges of ring road lengths the autonomous vehicle
# is trained on
"ring_length": [220, 270],
},
)
"""
Explanation: 3.2 EnvParams
EnvParams specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the scenario. For the environment "WaveAttenuationPOEnv", these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on.
Finally, it is important to specify here the horizon of the experiment, which is the duration of one episode (during which the RL agent acquires data).
End of explanation
"""
import flow.envs as flowenvs
print(flowenvs.__all__)
"""
Explanation: 3.3 Initializing a Gym Environments
Now, we have to specify our Gym Environment and the algorithm that our RL agents will use. To specify the environment, one has to use the environment's name (a simple string). A list of all environment names is located in flow/envs/__init__.py. The names of available environments can be seen below.
End of explanation
"""
env_name = "WaveAttenuationPOEnv"
pass_params = (env_name, sumo_params, vehicles, env_params, net_params,
initial_config, scenario)
"""
Explanation: We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:
End of explanation
"""
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy
from rllab.envs.normalized_env import normalize
from rllab.envs.gym_env import GymEnv
def run_task(*_):
env = GymEnv(
env_name,
record_video=False,
register_params=pass_params
)
horizon = env.horizon
env = normalize(env)
policy = GaussianMLPPolicy(
env_spec=env.spec,
hidden_sizes=(32, 32)
)
baseline = LinearFeatureBaseline(env_spec=env.spec)
algo = TRPO(
env=env,
policy=policy,
baseline=baseline,
batch_size=1000,
max_path_length=horizon,
discount=0.999,
n_itr=1,
)
    algo.train()
"""
Explanation: 4. Setting up and Running an RL Experiment
4.1 run_task
We begin by creating a run_task method, which defines various components of the RL algorithm within rllab, such as the environment, the type of policy, the policy training method, etc.
We create the gym environment defined in section 3 using the GymEnv function.
In this experiment, we use a Gaussian MLP policy: we just need to specify its dimensions (32,32) and the environment name. We'll use linear baselines and the Trust Region Policy Optimization (TRPO) algorithm (see https://arxiv.org/abs/1502.05477):
- The batch_size parameter specifies the size of the batch during one step of the gradient descent.
- The max_path_length parameter indicates the maximum possible rollout length in the experiment.
- The n_itr parameter gives the number of iterations used in training the agent.
In the following, we regroup all the previous commands in one single cell
End of explanation
"""
from rllab.misc.instrument import run_experiment_lite
for seed in [5]: # , 20, 68]:
run_experiment_lite(
run_task,
# Number of parallel workers for sampling
n_parallel=1,
# Keeps the snapshot parameters for all iterations
snapshot_mode="all",
# Specifies the seed for the experiment. If this is not provided, a
# random seed will be used
seed=seed,
mode="local",
exp_prefix="training_example",
)
"""
Explanation: 4.2 run_experiment_lite
Using the above run_task method, we will execute the training process using rllab's run_experiment_lite methods. In this method, we are able to specify:
- The n_parallel cores you want to use for your experiment. If you set n_parallel>1, multiple processors will execute your code in parallel, which results in a roughly linear speed-up.
- The snapshot_mode, which specifies for which iterations the experiment's parameters are saved (e.g. "all" keeps a snapshot for every iteration).
- The mode, which can be set to local if you want to run the experiment locally, or to ec2 for launching the experiment on an Amazon Web Services instance.
- The seed parameter which calibrates the randomness in the experiment.
- The exp_prefix, which gives a tag, or name, to your experiment.
Finally, we are ready to begin the training process.
End of explanation
"""
|
rsterbentz/phys202-2015-work | assignments/assignment06/DisplayEx01.ipynb | mit | from IPython.display import Image, HTML, display
assert True # leave this to grade the import statements
"""
Explanation: Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell:
End of explanation
"""
Image(url='http://www.elevationnetworks.org/wp-content/uploads/2013/05/physics.jpeg', embed=True, width=600, height=600)
assert True # leave this to grade the image display
"""
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
"""
q = """<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass (MeV/c$^2$)</th>
</tr>
<tr>
<td>down</td>
<td>d</td>
<td>$\\bar{\\textrm{d}}$</td>
<td>$-\\frac{1}{3}$</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\\bar{\\textrm{b}}$</td>
<td>$-\\frac{1}{3}$</td>
<td>4,130-4,370</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$\\bar{\\textrm{s}}$</td>
<td>$-\\frac{1}{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>charm</td>
<td>c</td>
<td>$\\bar{\\textrm{c}}$</td>
<td>$+\\frac{2}{3}$</td>
<td>1,160-1,340</td>
</tr>
<tr>
<td>up</td>
<td>u</td>
<td>$\\bar{\\textrm{u}}$</td>
<td>$+\\frac{2}{3}$</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$\\bar{\\textrm{t}}$</td>
<td>$+\\frac{2}{3}$</td>
<td>169,100-173,300</td>
</tr>"""
display(HTML(q))
assert True # leave this here to grade the quark table
"""
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation
"""
|
dacr26/CompPhys | 00_02_numbers_in_python.ipynb | mit | import sys
sys.float_info
"""
Explanation: Manipulating numbers in Python
Disclaimer: Much of this section has been transcribed from <a href="https://pymotw.com/2/math/">https://pymotw.com/2/math/</a>
Every computer represents numbers using the <a href="https://en.wikipedia.org/wiki/IEEE_floating_point">IEEE floating point standard</a>. The math module implements many of the IEEE functions that would normally be found in the native platform C libraries for complex mathematical operations using floating point values, including logarithms and trigonometric operations.
The fundamental information about number representation is contained in the module sys
End of explanation
"""
sys.float_info.max
"""
Explanation: From here we can learn, for instance:
End of explanation
"""
infinity = float("inf")
infinity
infinity/10000
"""
Explanation: Similarly, we can learn the limits of the IEEE 754 standard
Largest Real = 1.79769e+308, 7fefffffffffffff // -Largest Real = -1.79769e+308, ffefffffffffffff
Smallest Real = 2.22507e-308, 0010000000000000 // -Smallest Real = -2.22507e-308, 8010000000000000
Zero = 0, 0000000000000000 // -Zero = -0, 8000000000000000
eps = 2.22045e-16, 3cb0000000000000 // -eps = -2.22045e-16, bcb0000000000000
Interestingly, one could define an even larger constant (more about this below)
End of explanation
"""
import math
print 'π: %.30f' % math.pi
print 'e: %.30f' % math.e
"""
Explanation: Special constants
Many math operations depend on special constants. math includes values for $\pi$ and $e$.
End of explanation
"""
float("inf")-float("inf")
import math
print '{:^3} {:6} {:6} {:6}'.format('e', 'x', 'x**2', 'isinf')
print '{:-^3} {:-^6} {:-^6} {:-^6}'.format('', '', '', '')
for e in range(0, 201, 20):
x = 10.0 ** e
y = x*x
print '{:3d} {!s:6} {!s:6} {!s:6}'.format(e, x, y, math.isinf(y))
"""
Explanation: Both values are limited in precision only by the platform’s floating point C library.
Testing for exceptional values
Floating point calculations can result in two types of exceptional values. INF (“infinity”) appears when the double used to hold a floating point value overflows from a value with a large absolute value.
There are several reserved bit patterns, mostly those with all ones in the exponent field. These allow for tagging special cases as Not A Number—NaN. If there are all ones and the fraction is zero, the number is Infinite.
The IEEE standard specifies:
Inf = Inf, 7ff0000000000000 // -Inf = -Inf, fff0000000000000
NaN = NaN, fff8000000000000 // -NaN = NaN, 7ff8000000000000
End of explanation
"""
x = 10.0 ** 200
print 'x =', x
print 'x*x =', x*x
try:
print 'x**2 =', x**2
except OverflowError, err:
print err
"""
Explanation: When the exponent in this example grows large enough, the square of x no longer fits inside a double, and the value is recorded as infinite. Not all floating point overflows result in INF values, however. Calculating an exponent with floating point values, in particular, raises OverflowError instead of preserving the INF result.
End of explanation
"""
import math
x = (10.0 ** 200) * (10.0 ** 200)
y = x/x
print 'x =', x
print 'isnan(x) =', math.isnan(x)
print 'y = x / x =', x/x
print 'y == nan =', y == float('nan')
print 'isnan(y) =', math.isnan(y)
"""
Explanation: This discrepancy is caused by an implementation difference in the library used by C Python.
Division operations using infinite values are undefined. The result of dividing a number by infinity is NaN (“not a number”).
End of explanation
"""
import math
print '{:^5} {:^5} {:^5} {:^5} {:^5}'.format('i', 'int', 'trunk', 'floor', 'ceil')
print '{:-^5} {:-^5} {:-^5} {:-^5} {:-^5}'.format('', '', '', '', '')
fmt = ' '.join(['{:5.1f}'] * 5)
for i in [ -1.5, -0.8, -0.5, -0.2, 0, 0.2, 0.5, 0.8, 1 ]:
print fmt.format(i, int(i), math.trunc(i), math.floor(i), math.ceil(i))
"""
Explanation: Converting to Integers
The math module includes three functions for converting floating point values to whole numbers. Each takes a different approach, and will be useful in different circumstances.
The simplest is trunc(), which truncates the digits following the decimal, leaving only the significant digits making up the whole number portion of the value. floor() rounds its input down to the largest preceding integer, and ceil() (ceiling) rounds up to the smallest integer greater than or equal to the input value.
End of explanation
"""
import math
for i in range(6):
print '{}/2 = {}'.format(i, math.modf(i/2.0))
"""
Explanation: Alternate Representations
modf() takes a single floating point number and returns a tuple containing the fractional and whole number parts of the input value.
End of explanation
"""
import math
print '{:^7} {:^7} {:^7}'.format('x', 'm', 'e')
print '{:-^7} {:-^7} {:-^7}'.format('', '', '')
for x in [ 0.1, 0.5, 4.0 ]:
m, e = math.frexp(x)
print '{:7.2f} {:7.2f} {:7d}'.format(x, m, e)
"""
Explanation: frexp() returns the mantissa and exponent of a floating point number, and can be used to create a more portable representation of the value. It uses the formula x = m * 2 ** e, and returns the values m and e.
End of explanation
"""
import math
print '{:^7} {:^7} {:^7}'.format('m', 'e', 'x')
print '{:-^7} {:-^7} {:-^7}'.format('', '', '')
for m, e in [ (0.8, -3),
(0.5, 0),
(0.5, 3),
]:
x = math.ldexp(m, e)
print '{:7.2f} {:7d} {:7.2f}'.format(m, e, x)
"""
Explanation: ldexp() is the inverse of frexp(). Using the same formula as frexp(), ldexp() takes the mantissa and exponent values as arguments and returns a floating point number.
End of explanation
"""
import math
print math.fabs(-1.1)
print math.fabs(-0.0)
print math.fabs(0.0)
print math.fabs(1.1)
"""
Explanation: Positive and Negative Signs
The absolute value of number is its value without a sign. Use fabs() to calculate the absolute value of a floating point number.
End of explanation
"""
import math
print
print '{:^5} {:^5} {:^5} {:^5} {:^5}'.format('f', 's', '< 0', '> 0', '= 0')
print '{:-^5} {:-^5} {:-^5} {:-^5} {:-^5}'.format('', '', '', '', '')
for f in [ -1.0,
0.0,
1.0,
float('-inf'),
float('inf'),
float('-nan'),
float('nan'),
]:
s = int(math.copysign(1, f))
print '{:5.1f} {:5d} {!s:5} {!s:5} {!s:5}'.format(f, s, f < 0, f > 0, f==0)
"""
Explanation: To determine the sign of a value, either to give a set of values the same sign or simply for comparison, use copysign() to set the sign of a known good value. An extra function like copysign() is needed because comparing NaN and -NaN directly with other values does not work.
End of explanation
"""
import math
values = [ 0.1 ] * 10
print 'Input values:', values
print 'sum() : {:.20f}'.format(sum(values))
s = 0.0
for i in values:
s += i
print 'for-loop : {:.20f}'.format(s)
print 'math.fsum() : {:.20f}'.format(math.fsum(values))
"""
Explanation: Commonly Used Calculations
Representing precise values in binary floating point memory is challenging. Some values cannot be represented exactly, and the more often a value is manipulated through repeated calculations, the more likely a representation error will be introduced. math includes a function for computing the sum of a series of floating point numbers using an efficient algorithm that minimizes such errors.
End of explanation
"""
import math
for i in [ 0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.1 ]:
try:
print '{:2.0f} {:6.0f}'.format(i, math.factorial(i))
except ValueError, err:
print 'Error computing factorial(%s):' % i, err
"""
Explanation: Given a sequence of ten values each equal to 0.1, the expected value for the sum of the sequence is 1.0. Since 0.1 cannot be represented exactly as a floating point value, however, errors are introduced into the sum unless it is calculated with fsum().
factorial() is commonly used to calculate the number of permutations and combinations of a series of objects. The factorial of a positive integer n, expressed n!, is defined recursively as (n-1)! * n and stops with 0! == 1. factorial() only works with whole numbers, but does accept float arguments as long as they can be converted to an integer without losing value.
End of explanation
"""
import math
print '{:^4} {:^4} {:^5} {:^5}'.format('x', 'y', '%', 'fmod')
print '---- ---- ----- -----'
for x, y in [ (5, 2),
(5, -2),
(-5, 2),
]:
print '{:4.1f} {:4.1f} {:5.2f} {:5.2f}'.format(x, y, x % y, math.fmod(x, y))
"""
Explanation: The modulo operator (%) computes the remainder of a division expression (i.e., 5 % 2 = 1). The operator built into the language works well with integers but, as with so many other floating point operations, intermediate calculations cause representational issues that result in a loss of data. fmod() provides a more accurate implementation for floating point values.
End of explanation
"""
import math
for x, y in [
# Typical uses
(2, 3),
(2.1, 3.2),
# Always 1
(1.0, 5),
(2.0, 0),
# Not-a-number
(2, float('nan')),
# Roots
(9.0, 0.5),
(27.0, 1.0/3),
]:
print '{:5.1f} ** {:5.3f} = {:6.3f}'.format(x, y, math.pow(x, y))
"""
Explanation: A potentially more frequent source of confusion is the fact that the algorithm used by fmod for computing modulo is also different from that used by %, so the sign of the result can differ for mixed-sign inputs.
Exponents and Logarithms
Exponential growth curves appear in economics, physics, and other sciences. Python has a built-in exponentiation operator (“**”), but pow() can be useful when you need to pass a callable function as an argument.
End of explanation
"""
import math
print math.sqrt(9.0)
print math.sqrt(3)
try:
print math.sqrt(-1)
except ValueError, err:
print 'Cannot compute sqrt(-1):', err
"""
Explanation: Raising 1 to any power always returns 1.0, as does raising any value to a power of 0.0. Most operations on the not-a-number value nan return nan. If the exponent is less than 1, pow() computes a root.
Since square roots (exponent of 1/2) are used so frequently, there is a separate function for computing them.
End of explanation
"""
import math
print '{:2} {:^12} {:^20} {:^20} {:8}'.format('i', 'x', 'accurate', 'inaccurate', 'mismatch')
print '{:-^2} {:-^12} {:-^20} {:-^20} {:-^8}'.format('', '', '', '', '')
for i in range(0, 10):
x = math.pow(10, i)
accurate = math.log10(x)
inaccurate = math.log(x, 10)
match = '' if int(inaccurate) == i else '*'
print '{:2d} {:12.1f} {:20.18f} {:20.18f} {:^5}'.format(i, x, accurate, inaccurate, match)
"""
Explanation: Computing the square roots of negative numbers requires complex numbers, which are not handled by math. Any attempt to calculate a square root of a negative value results in a ValueError.
There are two variations of log(). Given floating point representation and rounding errors the computed value produced by log(x, b) has limited accuracy, especially for some bases. log10() computes log(x, 10), using a more accurate algorithm than log().
End of explanation
"""
import math
x = 2
fmt = '%.20f'
print fmt % (math.e ** 2)
print fmt % math.pow(math.e, 2)
print fmt % math.exp(2)
"""
Explanation: The lines in the output with trailing * highlight the inaccurate values.
As with other special-case functions, the function exp() uses an algorithm that produces more accurate results than the general-purpose equivalent math.pow(math.e, x).
End of explanation
"""
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-east1' #'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
"""
Explanation: <h1> Create Keras DNN model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using Keras. This requires TensorFlow 2.0
</ol>
End of explanation
"""
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
    dataset = dataset.prefetch(1) # overlap the input pipeline with training; tf.data.experimental.AUTOTUNE lets TF pick the buffer size
return dataset
"""
Explanation: Create Keras model
<p>
First, write an input_fn to read the data.
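The pop-based feature/label split performed by features_and_labels above can be exercised on a plain dict, independent of TensorFlow (sample values below are made up for illustration):

```python
def split_features_and_label(row_data, label_column='weight_pounds', unwanted_cols=('key',)):
    """Mirror of features_and_labels above, operating on an ordinary dict."""
    row = dict(row_data)           # copy, so the caller's dict is untouched
    for col in unwanted_cols:
        row.pop(col)               # drop columns that must not reach the model
    label = row.pop(label_column)  # the label is removed from the features
    return row, label

row = {'weight_pounds': 7.5, 'is_male': 'True', 'mother_age': 26.0,
       'plurality': 'Single(1)', 'gestation_weeks': 39.0, 'key': 'b1'}
features, label = split_features_and_label(row)
print(label)             # 7.5
print(sorted(features))  # ['gestation_weeks', 'is_male', 'mother_age', 'plurality']
```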
End of explanation
"""
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(name, values))
def build_dnn_model():
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
})
# feature columns from inputs
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
if False:
        # Disabled until TF Serving supports TF 2.0 feature columns, so that the exported model stays servable
feature_columns['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
feature_columns['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [64, 32], just like in the BQML DNN
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='babyweight')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
# note how to use strategy to do distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = build_dnn_model()
print(model.summary())
"""
Explanation: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
The others (is_male, plurality) should be categorical.
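The indicator column built by categorical_fc above is essentially one-hot encoding over a fixed vocabulary. A plain-Python sketch of what it produces for a single value (an illustration, not the tf.feature_column implementation itself):

```python
def one_hot(value, vocab):
    # out-of-vocabulary values map to all zeros, like the default indicator column behavior
    return [1.0 if value == v else 0.0 for v in vocab]

print(one_hot('True', ['True', 'False', 'Unknown']))   # [1.0, 0.0, 0.0]
print(one_hot('Twins(2)', ['Single(1)', 'Twins(2)']))  # [0.0, 1.0]
print(one_hot('oov', ['True', 'False', 'Unknown']))    # [0.0, 0.0, 0.0]
```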
End of explanation
"""
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Train and evaluate
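The steps_per_epoch arithmetic above splits the total training budget into NUM_EVALS short "epochs", so evaluation happens NUM_EVALS times. It can be checked in plain Python:

```python
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5
NUM_EVALS = 5

steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
print(steps_per_epoch)  # 312
# over all NUM_EVALS epochs, nearly the full budget is consumed (integer division drops the remainder)
print(steps_per_epoch * TRAIN_BATCH_SIZE * NUM_EVALS)  # 49920
```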
End of explanation
"""
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
"""
Explanation: Visualize loss curve
End of explanation
"""
# Serving function that passes through keys
@tf.function(input_signature=[{
'is_male': tf.TensorSpec([None,], dtype=tf.string, name='is_male'),
'mother_age': tf.TensorSpec([None,], dtype=tf.float32, name='mother_age'),
'plurality': tf.TensorSpec([None,], dtype=tf.string, name='plurality'),
'gestation_weeks': tf.TensorSpec([None,], dtype=tf.float32, name='gestation_weeks'),
'key': tf.TensorSpec([None,], dtype=tf.string, name='key')
}])
def my_serve(inputs):
feats = inputs.copy()
key = feats.pop('key')
output = model(feats)
return {'key': key, 'babyweight': output}
import shutil, os, datetime
OUTPUT_DIR = './export/babyweight'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH, signatures={'serving_default': my_serve})
print("Exported trained model to {}".format(EXPORT_PATH))
os.environ['EXPORT_PATH'] = EXPORT_PATH
!find $EXPORT_PATH
"""
Explanation: Save the model
Let's wrap the model so that we can supply keyed predictions, and get the key back in our output
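The key pass-through pattern in my_serve above is independent of TensorFlow: pop the key from the features, run the model on what remains, and attach the key to the output. A minimal sketch with a stand-in predict function (predict_fn and the sample values are hypothetical):

```python
def keyed_predictions(predict_fn, instances, key_field='key'):
    results = []
    for instance in instances:
        features = dict(instance)        # copy, then strip the key
        key = features.pop(key_field)
        results.append({'key': key, 'babyweight': predict_fn(features)})
    return results

# stand-in model: predicts a constant weight
predict_fn = lambda feats: 7.5
out = keyed_predictions(predict_fn, [{'key': 'b1', 'gestation_weeks': 39.0}])
print(out)  # [{'key': 'b1', 'babyweight': 7.5}]
```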
End of explanation
"""
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
%%bash
MODEL_NAME="babyweight"
VERSION_NAME="dnn"
MODEL_LOCATION=$EXPORT_PATH
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already the existing model $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model version
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=$MODEL_LOCATION --staging-bucket=gs://$BUCKET
"""
Explanation: <h2> Monitor and experiment with training </h2>
To begin TensorBoard from within AI Platform Notebooks, click the + symbol in the top left corner and select the Tensorboard icon to create a new TensorBoard.
In TensorBoard, look at the learned embeddings. Are they getting clustered? How about the weights for the hidden layers? What if you run this longer? What happens if you change the batch size?
Deploy trained model to Cloud AI Platform
End of explanation
"""
%%writefile input.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "b2", "is_male": "True", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g2", "is_male": "False", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
!gcloud ai-platform predict --model babyweight --json-instances input.json --version dnn
"""
Explanation: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
End of explanation
"""
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)
project = PROJECT
model_name = 'babyweight'
version_name = 'dnn'
input_data = {
'instances': [
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
{
'key': 'g1',
'is_male': 'False',
'mother_age': 29.0,
'plurality': 'Single(1)',
'gestation_weeks': 38
},
{
'key': 'b2',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Triplets(3)',
'gestation_weeks': 39
},
{
'key': 'u1',
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
},
]
}
parent = 'projects/%s/models/%s/versions/%s' % (project, model_name, version_name)
prediction = api.projects().predict(body=input_data, name=parent).execute()
print(prediction)
print(prediction['predictions'][0]['babyweight'][0])
"""
Explanation: main.py
This is the code that exists in serving/application/main.py, i.e. the code in the web application that accesses the ML API.
End of explanation
"""
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
data = pd.read_csv("../../Data/2014outagesJerry.csv")
data.head()
"""
Explanation: Ridge Regression:
Ridge regression performs 'L2 regularization', i.e. it adds a penalty equal to the sum of the squares of the coefficients to the optimization objective. Thus, ridge regression minimizes:
Objective = RSS + α * (sum of squares of the coefficients)
Here, α (alpha) is the parameter that balances the emphasis placed on minimizing the RSS against minimizing the sum of squares of the coefficients.
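For a single feature with no intercept, the ridge objective has a closed form that makes the shrinkage effect of α visible. This is a toy illustration of the math, not what scikit-learn's solver does for the full problem:

```python
def ridge_1d(xs, ys, alpha):
    # minimizes sum((y - w*x)^2) + alpha * w^2  ->  w = sum(x*y) / (sum(x^2) + alpha)
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + alpha
    return num / den

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # exact fit is w = 2
print(ridge_1d(xs, ys, 0.0))   # 2.0  (ordinary least squares)
print(ridge_1d(xs, ys, 14.0))  # 1.0  (coefficient shrunk toward zero)
```

Larger α always moves the coefficient toward zero, which is exactly the behavior traced out in the "coefs vs lambda" plots below.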
End of explanation
"""
# Select input/output data
Y_tot = data['Total_outages']
X_tot = data[['Day_length_hr','Max_temp_F','Avg_Temp_F','Min_temp_F','Max_humidity_percent','Avg_humidity_percent','Min_humidity_percent','Max_visibility_mi','Avg_visibility_mi','Min_visibility_mi','Max_windspeed_mph','Avg_windspeed_mph','Max_windgust_mph','Precipitation_in','Event_fog','Event_rain','Event_snow','Event_thunderstorm','Event_Hail']]
# Initialize lists
coefs = []
trainerror = []
# Define lambda space
lambdas = np.logspace(-6,6,200)
# Define type of regressor
regr_ridge = linear_model.Ridge()
# loop over lambda (a) values (strength of regularization)
for a in lambdas:
regr_ridge.set_params(alpha=a,normalize=True,max_iter=1e6)
regr_ridge.fit(X_tot,Y_tot)
coefs.append(regr_ridge.coef_)
trainerror.append(mean_squared_error(Y_tot,regr_ridge.predict(X_tot)))
# Plot
plt.figure(figsize=(10,3))
# figure 1: Ridge coefficients vs lambda
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('coefs')
plt.title('Ridge coefs vs $\lambda$')
# figure 2: Error and lambda
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('error')
plt.legend(loc='lower right')
plt.title('error vs $\lambda$')
plt.show()
# pick the best alpha value
regr_ridge_best_tot = linear_model.Ridge()
regr_ridge_best_tot.set_params(alpha=1e-4,normalize=True,max_iter=1e6)
regr_ridge_best_tot.fit(X_tot,Y_tot)
Y_tot_predict = regr_ridge_best_tot.predict(X_tot)
#make parity plot
plt.figure(figsize=(4,4))
plt.scatter(Y_tot,Y_tot_predict)
plt.plot([0,10],[0,10],lw=4,color='black')
plt.show()
#calculate the test and train error
print("Train error",mean_squared_error(Y_tot,Y_tot_predict))
# Returns the coefficient of determination R^2 of the prediction.
print("R^2",regr_ridge_best_tot.score(X_tot,Y_tot))
"""
Explanation: Total Outages
End of explanation
"""
# Select input/output data
Y_eqp = data['Equipment']
X_eqp = data[['Day_length_hr','Max_temp_F','Avg_Temp_F','Min_temp_F','Max_humidity_percent','Avg_humidity_percent','Min_humidity_percent','Max_visibility_mi','Avg_visibility_mi','Min_visibility_mi','Max_windspeed_mph','Avg_windspeed_mph','Max_windgust_mph','Precipitation_in','Event_fog','Event_rain','Event_snow','Event_thunderstorm','Event_Hail']]
# Initialize lists
coefs = []
trainerror = []
# Define lambda space
lambdas = np.logspace(-6,6,200)
# Define type of regressor
regr_ridge = linear_model.Ridge()
# loop over lambda (a) values (strength of regularization)
for a in lambdas:
regr_ridge.set_params(alpha=a,normalize=True,max_iter=1e6)
regr_ridge.fit(X_eqp,Y_eqp)
coefs.append(regr_ridge.coef_)
trainerror.append(mean_squared_error(Y_eqp,regr_ridge.predict(X_eqp)))
# Plot
plt.figure(figsize=(10,3))
# figure 1: Ridge coefficients vs lambda
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('coefs')
plt.title('Ridge coefs vs $\lambda$')
# figure 2: Error and lambda
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('error')
plt.legend(loc='lower right')
plt.title('error vs $\lambda$')
plt.show()
# pick the best alpha value
regr_ridge_best_eqp = linear_model.Ridge()
regr_ridge_best_eqp.set_params(alpha=1e-3,normalize=True,max_iter=1e6)
regr_ridge_best_eqp.fit(X_eqp,Y_eqp)
Y_eqp_predict = regr_ridge_best_eqp.predict(X_eqp)
#make parity plot
plt.figure(figsize=(4,4))
plt.scatter(Y_eqp,Y_eqp_predict)
plt.plot([0,10],[0,10],lw=4,color='black')
plt.show()
#calculate the test and train error
print("Train error",mean_squared_error(Y_eqp,Y_eqp_predict))
# Returns the coefficient of determination R^2 of the prediction.
print("R^2",regr_ridge_best_eqp.score(X_eqp,Y_eqp))
"""
Explanation: Equipment-caused Outages
End of explanation
"""
# Select input/output data
Y_tree = data['Trees']
#X_tree = data[['Max_temp_F','Max_humidity_percent','Min_visibility_mi','Max_windspeed_mph','Precipitation_in','Event_Hail']]
X_tree = data[['Day_length_hr','Max_temp_F','Avg_Temp_F','Min_temp_F','Max_humidity_percent','Avg_humidity_percent','Min_humidity_percent','Max_visibility_mi','Avg_visibility_mi','Min_visibility_mi','Max_windspeed_mph','Avg_windspeed_mph','Max_windgust_mph','Precipitation_in','Event_fog','Event_rain','Event_snow','Event_thunderstorm','Event_Hail']]
# Initialize lists
coefs = []
trainerror = []
# Define lambda space
lambdas = np.logspace(-6,6,200)
# Define type of regressor
regr_ridge = linear_model.Ridge()
# loop over lambda (a) values (strength of regularization)
for a in lambdas:
regr_ridge.set_params(alpha=a,normalize=True,max_iter=1e6)
regr_ridge.fit(X_tree,Y_tree)
coefs.append(regr_ridge.coef_)
trainerror.append(mean_squared_error(Y_tree,regr_ridge.predict(X_tree)))
# Plot
plt.figure(figsize=(10,3))
# figure 1: Ridge coefficients vs lambda
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('coefs')
plt.title('Ridge coefs vs $\lambda$')
# figure 2: Error and lambda
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('error')
plt.legend(loc='lower right')
plt.title('error vs $\lambda$')
plt.show()
# pick the best alpha value
regr_ridge_best_tree = linear_model.Ridge()
regr_ridge_best_tree.set_params(alpha=1e-4,normalize=True,max_iter=1e6)
regr_ridge_best_tree.fit(X_tree,Y_tree)
Y_tree_predict = regr_ridge_best_tree.predict(X_tree)
#make parity plot
plt.figure(figsize=(4,4))
plt.scatter(Y_tree,Y_tree_predict)
plt.plot([0,10],[0,10],lw=4,color='black')
plt.show()
#calculate the test and train error
print("Train error",mean_squared_error(Y_tree,Y_tree_predict))
# Returns the coefficient of determination R^2 of the prediction.
print("R^2",regr_ridge_best_tree.score(X_tree,Y_tree))
"""
Explanation: Trees-caused Outages
End of explanation
"""
# Select input/output data
Y_ani = data['Animals']
X_ani = data[['Day_length_hr','Max_temp_F','Avg_Temp_F','Min_temp_F','Max_humidity_percent','Avg_humidity_percent','Min_humidity_percent','Max_visibility_mi','Avg_visibility_mi','Min_visibility_mi','Max_windspeed_mph','Avg_windspeed_mph','Max_windgust_mph','Precipitation_in','Event_fog','Event_rain','Event_snow','Event_thunderstorm','Event_Hail']]
# Initialize lists
coefs = []
trainerror = []
# Define lambda space
lambdas = np.logspace(-6,6,200)
# Define type of regressor
regr_ridge = linear_model.Ridge()
# loop over lambda (a) values (strength of regularization)
for a in lambdas:
regr_ridge.set_params(alpha=a,normalize=True,max_iter=1e6)
regr_ridge.fit(X_ani,Y_ani)
coefs.append(regr_ridge.coef_)
trainerror.append(mean_squared_error(Y_ani,regr_ridge.predict(X_ani)))
# Plot
plt.figure(figsize=(10,3))
# figure 1: Ridge coefficients vs lambda
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('coefs')
plt.title('Ridge coefs vs $\lambda$')
# figure 2: Error and lambda
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('error')
plt.legend(loc='lower right')
plt.title('error vs $\lambda$')
plt.show()
# pick the best alpha value
regr_ridge_best_ani = linear_model.Ridge()
regr_ridge_best_ani.set_params(alpha=1e-2,normalize=True,max_iter=1e6)
regr_ridge_best_ani.fit(X_ani,Y_ani)
Y_ani_predict = regr_ridge_best_ani.predict(X_ani)
#make parity plot
plt.figure(figsize=(4,4))
plt.scatter(Y_ani,Y_ani_predict)
plt.plot([0,10],[0,10],lw=4,color='black')
plt.show()
#calculate the test and train error
print("Train error",mean_squared_error(Y_ani,Y_ani_predict))
# Returns the coefficient of determination R^2 of the prediction.
print("R^2",regr_ridge_best_ani.score(X_ani,Y_ani))
"""
Explanation: Animals-caused Outages
End of explanation
"""
# Select input/output data
Y_lightening = data['Lightning']
X_lightening = data[['Day_length_hr','Max_temp_F','Avg_Temp_F','Min_temp_F','Max_humidity_percent','Avg_humidity_percent','Min_humidity_percent','Max_visibility_mi','Avg_visibility_mi','Min_visibility_mi','Max_windspeed_mph','Avg_windspeed_mph','Max_windgust_mph','Precipitation_in','Event_fog','Event_rain','Event_snow','Event_thunderstorm','Event_Hail']]
# Initialize lists
coefs = []
trainerror = []
# Define lambda space
lambdas = np.logspace(-6,6,200)
# Define type of regressor
regr_ridge = linear_model.Ridge()
# loop over lambda (a) values (strength of regularization)
for a in lambdas:
regr_ridge.set_params(alpha=a,normalize=True,max_iter=1e6)
regr_ridge.fit(X_lightening,Y_lightening)
coefs.append(regr_ridge.coef_)
trainerror.append(mean_squared_error(Y_lightening,regr_ridge.predict(X_lightening)))
# Plot
plt.figure(figsize=(10,3))
# figure 1: Ridge coefficients vs lambda
plt.subplot(121)
plt.plot(lambdas,coefs)
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('coefs')
plt.title('Ridge coefs vs $\lambda$')
# figure 2: Error and lambda
plt.subplot(122)
plt.plot(lambdas,trainerror,label='train error')
plt.xscale('log')
plt.xlabel('$\lambda$')
plt.ylabel('error')
plt.legend(loc='lower right')
plt.title('error vs $\lambda$')
plt.show()
# pick the best alpha value
regr_ridge_best_lightening = linear_model.Ridge()
regr_ridge_best_lightening.set_params(alpha=1e-3,normalize=True,max_iter=1e6)
regr_ridge_best_lightening.fit(X_lightening,Y_lightening)
Y_lightening_predict = regr_ridge_best_lightening.predict(X_lightening)
#make parity plot
plt.figure(figsize=(4,4))
plt.scatter(Y_lightening,Y_lightening_predict)
plt.plot([0,10],[0,10],lw=4,color='black')
plt.show()
#calculate the test and train error
print("Train error",mean_squared_error(Y_lightening,Y_lightening_predict))
# Returns the coefficient of determination R^2 of the prediction.
print("R^2",regr_ridge_best_lightening.score(X_lightening,Y_lightening))
"""
Explanation: Lightning-caused Outages
End of explanation
"""
#With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50
#that are playable in the USA (or the country of your choice), along with their popularity score.
count =0
for artist in Lil_artists:
count += 1
print(count,".", artist['name'],"has the popularity of", artist['popularity'])
"""
Explanation: 1. Searching and Printing a List of 50 'Lil' Musicians
With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
"""
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres
#in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
#Tip: "how to join a list Python" might be a helpful search
# if len(artist['genres']) == 0 )
# print ("no genres")
# else:
# genres = ", ".join(artist['genres'])
genre_list = []
genre_loop = Lil_data['artists']['items']
for item in genre_loop:
#print(item['genres'])
item_gen = item['genres']
for i in item_gen:
genre_list.append(i)
#print(sorted(genre_list))
#COUNTING the most
genre_counter = {}
for word in genre_list:
if word in genre_counter:
genre_counter[word] += 1
else:
genre_counter[word] = 1
popular_genre = sorted(genre_counter, key = genre_counter.get, reverse = True)
top_genre = popular_genre[:1]
print("The genre most represented is", top_genre)
#COUNTING the most with count to confirm
from collections import Counter
count = Counter(genre_list)
most_count = count.most_common(1)
print("The genre most represented and the count are", most_count)
print("-----------------------------------------------------")
for artist in Lil_artists:
num_genres = 'no genres listed'
if len(artist['genres']) > 0:
        num_genres = ', '.join(artist['genres'])
print(artist['name'],"has the popularity of", artist['popularity'], ", and has", num_genres, "under genres")
"""
Explanation: 2 Genres Most Represented in the Search Results
What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
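The joining pattern asked for above can be exercised on a toy list (the genre names here are made up):

```python
genres = ['dirty south rap', 'pop rap', 'trap music']
print(', '.join(genres))  # dirty south rap, pop rap, trap music

empty = []
print(', '.join(empty) if empty else 'No genres listed')  # No genres listed
```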
End of explanation
"""
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
"""
Explanation: More Spotify - LIL' GRAPHICS
Use Excel, Illustrator or something like https://infogr.am/ to make a graphic about the Lil's, or the Lil's vs. the Biggies.
Just a simple bar graph of their various popularities sounds good to me.
Link to the Line Graph of Lil's Popularity chart
Lil Popularity Graph
End of explanation
"""
#Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
#Is it the same artist who has the largest number of followers?
name_highest = ""
name_follow =""
second_high_pop = 0
highest_pop = 0
high_follow = 0
for artist in Lil_artists:
if (highest_pop < artist['popularity']) & (artist['name'] != "Lil Wayne"):
#second_high_pop = highest_pop
#name_second = artist['name']
highest_pop = artist['popularity']
name_highest = artist['name']
if (high_follow < artist['followers']['total']):
high_follow = artist ['followers']['total']
name_follow = artist['name']
#print(artist['followers']['total'])
print(name_highest, "has the second highest popularity, which is", highest_pop)
print(name_follow, "has the highest number of followers:", high_follow)
#print("the second highest popularity is", second_high_pop)
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
"""
Explanation: The Second Highest Popular Artist
Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
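The loop pattern used above, finding a maximum while skipping one name, can be written as a reusable helper (the artist dicts and popularity numbers below are made up):

```python
def most_popular_excluding(artists, excluded_name):
    best = None
    for artist in artists:
        if artist['name'] == excluded_name:
            continue  # skip the excluded artist
        if best is None or artist['popularity'] > best['popularity']:
            best = artist
    return best

artists = [{'name': 'Lil Wayne', 'popularity': 86},
           {'name': 'Lil Yachty', 'popularity': 80},
           {'name': 'Lil Kim', 'popularity': 62}]
print(most_popular_excluding(artists, 'Lil Wayne')['name'])  # Lil Yachty
```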
End of explanation
"""
Lil_artists = Lil_data['artists']['items']
#Print a list of Lil's that are more popular than Lil' Kim.
count = 0
for artist in Lil_artists:
if artist['popularity'] > 62:
count+=1
print(count, artist['name'],"has the popularity of", artist['popularity'])
#else:
#print(artist['name'], "is less popular with a score of", artist['popularity'])
"""
Explanation: 4. List of Lils More Popular Than Lil' Kim
End of explanation
"""
response = requests.get("https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US")
data = response.json()
for artist in Lil_artists:
#print(artist['name'],artist['id'])
if artist['name'] == "Lil Wayne":
wayne = artist['id']
print(artist['name'], "id is",wayne)
if artist['name'] == "Lil Yachty":
yachty = artist['id']
print(artist['name'], "id is", yachty)
#Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
#Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Wayne's top tracks are: ")
for track in tracks:
print("-", track['name'])
print("-----------------------------------------------")
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Yachty 's top tracks are: ")
for track in tracks:
print("-", track['name'])
"""
Explanation: 5.Two Favorite Lils and Their Top Tracks
End of explanation
"""
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
#print(tracks)
#for track in tracks:
#print(track.keys())
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = track['duration_ms']
    if track['explicit']:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
        timer_minutes_clean = ((timer_clean / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']  # use Lil Wayne's tracks, not the ones left over from the previous cell
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
#data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
    time_ms = track['duration_ms']
    if track['explicit']:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
        timer_minutes_clean = ((timer_clean / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
"""
Explanation: 6. Average Popularity of My Fav Musicians (Above) for Their explicit songs vs. their non-explicit songs
Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
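Spotify reports track durations in milliseconds. Note that the (ms / (1000*60)) % 60 pattern used in the cells above wraps around once a total passes an hour, so a plain division is safer for totals (a small sketch):

```python
def ms_to_minutes(ms):
    return ms / 60000.0

print(ms_to_minutes(180000))         # 3.0 minutes
print(ms_to_minutes(3900000))        # 65.0 minutes
print((3900000 / (1000 * 60)) % 60)  # 5.0 -- the wrapped value the % 60 form yields
```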
End of explanation
"""
#How many total "Biggie" artists are there? How many total "Lil"s?
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['total']
print("Total number of Biggie artists are", biggie_artists)
lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US')
lil_data = lil_response.json()
lil_artists = lil_data['artists']['total']
print("Total number of Lil artists are", lil_artists)
"""
Explanation: 7a. Number of Biggies and Lils
Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
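One way to estimate the download time: with up to 50 artists per request and one request every 5 seconds, the wait is ceil(total / 50) * 5 seconds, which is roughly total / 10 when the partial last page is ignored (the totals below are illustrative):

```python
import math

def download_seconds(total_artists, per_request=50, seconds_per_request=5):
    requests_needed = math.ceil(total_artists / per_request)
    return requests_needed * seconds_per_request

print(download_seconds(50))    # 5
print(download_seconds(4501))  # 455  (91 requests)
```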
End of explanation
"""
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
limit_download = 50
biggie_artists = biggie_data['artists']['total']
Lil_artist = Lil_data['artists']['total']
#1n 5 sec = 50
#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec
# for 4501 = 4501/10 sec
# for 49 49/ 10 sec
big_count = biggie_artists/10
lil_count = Lil_artist / 10
print("It would take", big_count, "seconds for Biggies, where as it would take", lil_count,"seconds for Lils" )
"""
Explanation: 7b. Time to Download All Information on Lil and Biggies
End of explanation
"""
#Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
big_count_pop = 0
for artist in biggie_artists:
#count_pop = artist['popularity']
big_count_pop = big_count_pop + artist['popularity']
print("Biggie has a total popularity of ", big_count_pop)
big_pop = big_count_pop / len(biggie_artists)
print("Biggie is on an average", big_pop,"popular")
#Lil
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
Lil_artists = Lil_data['artists']['items']
lil_count_pop = 0
for artist in Lil_artists:
count_pop_lil = artist['popularity']
lil_count_pop = lil_count_pop + count_pop_lil
lil_pop = lil_count_pop / len(Lil_artists)
print("Lil is on an average", lil_pop,"popular")
"""
Explanation: 8. Average Popularity of the Top 50 Lils vs. the Top 50 Biggies
End of explanation
"""
|
michaelaye/iuvs | notebooks/L1A_darks_mean_value_dataframe_analysis.ipynb | isc | %matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10,10)
from matplotlib.pyplot import subplots
"""
Explanation: Initial setup
End of explanation
"""
import pandas as pd
df = pd.read_hdf('/home/klay6683/l1a_dark_stats.h5','df')
"""
Explanation: Loading the summary file that I previously created by scanning through the L1A darks.
End of explanation
"""
print(df.columns.values)
"""
Explanation: These are the columns I have available in this dataset:
End of explanation
"""
def correct_mean_value(row):
return row['SPE_SIZE'] * row['SPA_SIZE']
df['pixelsum'] = df.apply(correct_mean_value, axis=1)
df['corrected_mean'] = df.dark_mean / df.pixelsum
"""
Explanation: Correct mean DN for binning
When binning was applied, each pixel in the stored array carries the sum of the pixels that have been binned. Hence I have to correct the mean DN value for the applied binning.
End of explanation
"""
df['DN_per_s'] = df.corrected_mean / (df.INT_TIME/1000)
from iuvs import io
df.index = df.FILENAME.map(lambda x: io.Filename(x).time)
df.index.name = 'Time'
df.sort_index(inplace=True)
"""
Explanation: Integration time correction
This is just to normalize for integration time to create DN/s.
I also put the time of the observation into the index and sort the data by time. This way, people who are more familiar with the timeline can correlate special events with irregularities in the data.
End of explanation
"""
df.MCP_VOLT.value_counts()
import numpy as np
df[df.MCP_VOLT > 0] = np.nan
"""
Explanation: Different dark modes
Darks have been taken in two different modes:
Using the shutter to block photons from reaching the detector
Setting the MCP voltage to approximately zero to avoid creating signal in the detector
Because of the reduced reproducibility of darks from mode 1, it was decided to use mode 2 for future darks. I therefore filter out case 1 for now and set it to NaN.
Below one can see that most of the darks have been taken in mode 2 anyway:
End of explanation
"""
from iuvs import calib
df.DET_TEMP = calib.iuvs_dn_to_temp(df.DET_TEMP, det_temp=True)
df.CASE_TEMP = calib.iuvs_dn_to_temp(df.CASE_TEMP, det_temp=False)
"""
Explanation: Calibrate temperatures
I implemented my own DN-to-degC converter using the polynomials from our L1B pipeline.
End of explanation
"""
df.DET_TEMP.describe()
"""
Explanation: Checking what the stats look like for the calibrated DET_TEMP
End of explanation
"""
cols = 'DN_per_s CASE_TEMP DET_TEMP INT_TIME'.split()
"""
Explanation: Remaining trends in the DN_per_s
Now that I have removed the influences of INT_TIME, the different dark imaging modes, and the binning, I can focus on temperature effects.
End of explanation
"""
pd.scatter_matrix(df[cols], figsize=(10,10));
"""
Explanation: This is a scatter matrix for the above chosen columns.
End of explanation
"""
df.plot(x='INT_TIME', y='DN_per_s', kind='scatter')
"""
Explanation: Note that the DN_per_s vs. DET_TEMP scatter plot still shows different families.
Because longer exposure times allow more cosmic-ray hits, it is to be expected that, despite the normalization to DN_per_s, the data still differ by the original INT_TIME.
End of explanation
"""
_, axes = subplots(nrows=2)
df[['DET_TEMP', 'DN_per_s']].plot(secondary_y='DN_per_s',ax=axes[0])
df.CASE_TEMP.plot(ax=axes[1], ylim=(4,6))
"""
Explanation: Above we see that the longest INT_TIME shows the highest inherent scatter, most likely due to the CRs.
Let's see how DN_per_s develops over observation time in general and how that compares to DET_TEMP.
End of explanation
"""
df.plot(x='DET_TEMP', y='DN_per_s',kind='scatter')
"""
Explanation: As previously known, there is a strong correlation between DET_TEMP and DN_per_s.
Looking at a scatter plot, there seem to be identifiable regimes that create different relationships between DET_TEMP and DN_per_s:
End of explanation
"""
df.NAXIS3.value_counts(dropna=False)
subdf = df[df.NAXIS3.isnull()]
subdf.plot(x='DET_TEMP', y='DN_per_s', kind='scatter')
"""
Explanation: I was worried that I might have to treat sets of dark images differently for some reason, so I filter out any data that has a valid NAXIS3 and look only at data without one (i.e. focusing on single dark images, which are the majority anyway, as seen here with the NaN entry).
End of explanation
"""
df.INT_TIME.value_counts()
fuv = subdf[subdf.XUV=='FUV']
muv = subdf[subdf.XUV=='MUV']
"""
Explanation: But the result looks the same, so this is not an issue.
Nevertheless, to be sure, I will use this subframe from now on.
INT_TIME dependencies
As mentioned before, different integration times give disturbances different chances to occur. So let's focus on particular INT_TIMEs. Here's how the dark INT_TIMEs are distributed over the L1A dataset.
One also has to separate the MUV and FUV data.
End of explanation
"""
inttimes = [14400, 10200, 6000, 4200, 4000, 1400]
fig, axes = subplots(nrows=len(inttimes), figsize=(12,13))
for ax, inttime in zip(axes, inttimes):
muv[muv.INT_TIME==inttime].plot(x='DET_TEMP', y='DN_per_s',kind='scatter', ax=ax, sharex=False)
ax.set_title('INT_TIME = {}'.format(inttime))
fig.suptitle('MUV DN_per_s vs DET_TEMP, sorted by INT_TIME', fontsize=20)
fig.tight_layout()
"""
Explanation: Looping over a chosen set of INT_TIMEs to create the following overview plot, with one panel per INT_TIME.
First, the MUV data.
MUV DN_per_s vs DET_TEMP
End of explanation
"""
inttimes = [14400, 10200, 6000, 4200, 4000, 1400]
fig, axes = subplots(nrows=len(inttimes), figsize=(12,13))
for ax, inttime in zip(axes, inttimes):
fuv[fuv.INT_TIME==inttime].plot(x='DET_TEMP', y='DN_per_s',kind='scatter', ax=ax, sharex=False)
ax.set_title('INT_TIME = {}'.format(inttime))
fig.suptitle('FUV DN_per_s vs DET_TEMP, sorted by INT_TIME', fontsize=20)
fig.tight_layout()
"""
Explanation: FUV DN_per_s vs DET_TEMP
End of explanation
"""
fig, axes = subplots(nrows=len(inttimes), figsize=(13,14))
for ax,inttime in zip(axes, inttimes):
muv[muv.INT_TIME==inttime]['DET_TEMP'].plot(ax=ax, style='*', sharex=False)
ax.set_title('MUV INT_TIME={}'.format(inttime))
fig.tight_layout()
"""
Explanation: Following up on the interesting, consistent separation into families of scatter points, let's look at what DET_TEMP does over time during these different INT_TIMEs.
MUV DET_TEMP over Time
End of explanation
"""
fig, axes = subplots(nrows=len(inttimes), figsize=(13,14))
for ax,inttime in zip(axes, inttimes):
fuv[fuv.INT_TIME==inttime]['DET_TEMP'].plot(ax=ax, style='*', sharex=False)
ax.set_title('FUV INT_TIME={}'.format(inttime))
fig.tight_layout()
"""
Explanation: FUV DET_TEMP over Time
End of explanation
"""
|
gprakhar/sifar | document_clustering.ipynb | gpl-3.0 | # Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
# Lars Buitinck
# License: BSD 3 clause
from __future__ import print_function
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn import metrics
from sklearn.cluster import KMeans, MiniBatchKMeans
import logging
from optparse import OptionParser
import sys
from time import time
import numpy as np
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# parse commandline arguments
op = OptionParser()
op.add_option("--lsa",
dest="n_components", type="int",
help="Preprocess documents with latent semantic analysis.")
op.add_option("--no-minibatch",
action="store_false", dest="minibatch", default=True,
help="Use ordinary k-means algorithm (in batch mode).")
op.add_option("--no-idf",
action="store_false", dest="use_idf", default=True,
help="Disable Inverse Document Frequency feature weighting.")
op.add_option("--use-hashing",
action="store_true", default=False,
help="Use a hashing feature vectorizer")
op.add_option("--n-features", type=int, default=10000,
help="Maximum number of features (dimensions)"
" to extract from text.")
op.add_option("--verbose",
action="store_true", dest="verbose", default=False,
help="Print progress reports inside k-means algorithm.")
print(__doc__)
op.print_help()
(opts, args) = op.parse_args()
if len(args) > 0:
op.error("this script takes no arguments.")
sys.exit(1)
"""
Explanation: Clustering text documents using k-means
This is an example showing how scikit-learn can be used to cluster
documents by topics using a bag-of-words approach. This example uses
a scipy.sparse matrix to store the features instead of standard numpy arrays.
Two feature extraction methods can be used in this example:
TfidfVectorizer uses an in-memory vocabulary (a python dict) to map the most
frequent words to features indices and hence compute a word occurrence
frequency (sparse) matrix. The word frequencies are then reweighted using
the Inverse Document Frequency (IDF) vector collected feature-wise over
the corpus.
HashingVectorizer hashes word occurrences to a fixed dimensional space,
possibly with collisions. The word count vectors are then normalized to
each have l2-norm equal to one (projected to the euclidean unit-ball) which
seems to be important for k-means to work in high dimensional space.
HashingVectorizer does not provide IDF weighting as this is a stateless
model (the fit method does nothing). When IDF weighting is needed it can
be added by pipelining its output to a TfidfTransformer instance.
Two algorithms are demoed: ordinary k-means and its more scalable cousin
minibatch k-means.
Additionally, latent semantic analysis can also be used to reduce dimensionality
and discover latent patterns in the data.
It can be noted that k-means (and minibatch k-means) are very sensitive to
feature scaling and that in this case the IDF weighting helps improve the
quality of the clustering by quite a lot as measured against the "ground truth"
provided by the class label assignments of the 20 newsgroups dataset.
This improvement is not visible in the Silhouette Coefficient, which is small
for both, as this measure seems to suffer from the phenomenon called
"Concentration of Measure" or "Curse of Dimensionality" for high-dimensional
datasets such as text data. Other measures, such as V-measure and Adjusted Rand
Index, are information-theoretic evaluation scores: since they are based only
on cluster assignments rather than distances, they are not affected by the curse
of dimensionality.
Note: as k-means is optimizing a non-convex objective function, it will likely
end up in a local optimum. Several runs with independent random init might be
necessary to get a good convergence.
End of explanation
"""
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
# Uncomment the following to do the analysis on all the categories
#categories = None
print("Loading 20 newsgroups dataset for categories:")
print(categories)
dataset = fetch_20newsgroups(subset='all', categories=categories,
shuffle=True, random_state=42)
#print("dateset infomation:")
#print(type(dataset))
#print(dataset)
#print("-------")
print("%d documents" % len(dataset.data))
print("%d categories" % len(dataset.target_names))
print()
labels = dataset.target
true_k = np.unique(labels).shape[0]
print("Labels info")
print(type(labels))
print(true_k)
print("Extracting features from the training dataset using a sparse vectorizer")
t0 = time()
if opts.use_hashing:
if opts.use_idf:
# Perform an IDF normalization on the output of HashingVectorizer
hasher = HashingVectorizer(n_features=opts.n_features,
stop_words='english', non_negative=True,
norm=None, binary=False)
vectorizer = make_pipeline(hasher, TfidfTransformer())
else:
vectorizer = HashingVectorizer(n_features=opts.n_features,
stop_words='english',
non_negative=False, norm='l2',
binary=False)
else:
vectorizer = TfidfVectorizer(max_df=0.5, max_features=opts.n_features,
min_df=2, stop_words='english',
use_idf=opts.use_idf)
X = vectorizer.fit_transform(dataset.data)
print("done in %fs" % (time() - t0))
print("n_samples: %d, n_features: %d" % X.shape)
print()
if opts.n_components:
print("Performing dimensionality reduction using LSA")
t0 = time()
# Vectorizer results are normalized, which makes KMeans behave as
# spherical k-means for better results. Since LSA/SVD results are
# not normalized, we have to redo the normalization.
svd = TruncatedSVD(opts.n_components)
normalizer = Normalizer(copy=False)
lsa = make_pipeline(svd, normalizer)
X = lsa.fit_transform(X)
print("done in %fs" % (time() - t0))
explained_variance = svd.explained_variance_ratio_.sum()
print("Explained variance of the SVD step: {}%".format(
int(explained_variance * 100)))
print()
"""
Explanation: Load some categories from the training set
End of explanation
"""
if opts.minibatch:
km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1,
init_size=1000, batch_size=1000, verbose=opts.verbose)
else:
km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1,
verbose=opts.verbose)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(X)
print("done in %0.3fs" % (time() - t0))
print()
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels, km.labels_))
print("Completeness: %0.3f" % metrics.completeness_score(labels, km.labels_))
print("V-measure: %0.3f" % metrics.v_measure_score(labels, km.labels_))
print("Adjusted Rand-Index: %.3f"
% metrics.adjusted_rand_score(labels, km.labels_))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, km.labels_, sample_size=1000))
print()
if not opts.use_hashing:
print("Top terms per cluster:")
if opts.n_components:
original_space_centroids = svd.inverse_transform(km.cluster_centers_)
order_centroids = original_space_centroids.argsort()[:, ::-1]
else:
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(true_k):
print("Cluster %d:" % i, end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
"""
Explanation: Do the actual clustering
End of explanation
"""
|
softEcon/talks | intro_scientific_python/talk.ipynb | mit | import this
"""
Explanation: Python for Scientific Computing in Economics
<font size="3"> ... background material available at <a href="https://github.com/softecon/talks">https://github.com/softecon/talks</a> </font>
Why Python?
<style>
table,td,tr,th {border:none!important}
</style>
<table style="width:90%">
<tbody>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li>general-purpose</li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li>widely used</li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li>high-level</li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li>readability </li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li>extensibility</li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li>active community</li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li>numerous interfaces</li>
</td>
</tr>
</tbody>
</table>
End of explanation
"""
print("Hello, World!")
"""
Explanation: Why Python for Scientific Computing?
Python is used by computer programmers and scientists alike. Thus, tools from software engineering are readily available.
Python is an open-source project. This ensures that all implementation details can be critically examined. There are no licence costs and low barriers to recomputability.
Python can be easily linked to high-performance languages such as C and Fortran. Python is ideal for prototyping with a focus on readability, design patterns, and ease of testing.
Python has numerous high-quality libraries for scientific computing under active development.
<img src="images/codeeval2015.png">
What do you need to get started?
<style>
table,td,tr,th {border:none!important}
</style>
<table style="width:90%"><p>
<tbody>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li>SciPy Stack</li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li>Basic Example</li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li> Integrated Development Environment</li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li> Additional Resources</li>
</td>
</tr>
</tbody>
</table>
First things first, here is the "Hello, World!" program in Python.
End of explanation
"""
# Import relevant libraries from the SciPy Stack
import numpy as np
# Specify parametrization
num_agents = 1000
num_covars = 3
betas_true = np.array([0.22, 0.30, -0.1]).T
# Set a seed to ensure recomputability in light of randomness
np.random.seed(4292367295)
# Sample exogenous agent characteristics from a uniform distribution in
# a given shape
X = np.random.rand(num_agents, num_covars)
# Sample random disturbances from a standard normal distribution and rescale
eps = np.random.normal(scale=0.1, size=num_agents)
# Construct endogenous agent characteristic
Y = np.dot(X, betas_true) + eps
"""
Explanation: SciPy Stack<br>
Most of our required tools are part of the SciPy Stack, a collection of open source software for scientific computing in Python.<br><br>
<style>
table,td,tr,th {border:none!important}
</style>
<table style="width:90%">
<tbody>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://www.scipy.org/scipylib/index.html">SciPy Library</a></li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://numpy.org">NumPy</a></li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://matplotlib.org">Matplotlib</a></li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://pandas.pydata.org">pandas</a></li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://www.sympy.org">SymPy</a></li>
</td>
<td style="vertical-align:top; padding-left:30px;">
<li><a href="http://ipython.org">IPython</a></li>
</td>
</tr>
<tr height="40">
<td style="vertical-align:top; padding-left:30px;">
<li><a href="https://nose.readthedocs.org">nose</a></li>
</td>
</tbody>
</table>
Depending on your particular specialization, additional packages might be of interest to you, e.g. statsmodels.
Basic Example
To get a feel for the language, let us work with a basic example. We will set up a simple Ordinary Least Squares (OLS) model.
$$Y=Xβ+ϵ$$
We start by simulating a synthetic dataset. Then we fit a basic OLS regression and assess the quality of its prediction.
Alternatives
Terminal
Jupyter Notebook
Pseudorandom Number Generation
End of explanation
"""
# Import relevant libraries from the SciPy Stack
import statsmodels.api as sm
# Specify and fit the model
rslt = sm.OLS(Y, X).fit()
# Provide some summary information
print(rslt.summary())
"""
Explanation: Statistical Analysis
End of explanation
"""
# Import relevant libraries from the SciPy Stack
import matplotlib.pyplot as plt
# Initialize canvas
ax = plt.figure(figsize=(12, 8)).add_subplot(111, axisbg='white')
# Plot actual and fitted values
ax.plot(np.dot(X, rslt.params), Y, 'o', label='True')
ax.plot(np.dot(X, rslt.params), rslt.fittedvalues, 'r--.', label="Predicted")
# Set axis labels and ranges
ax.set_xlabel(r'$X\hat{\beta}$', fontsize=20)
ax.set_ylabel(r'$Y$', fontsize=20)
# Remove first element on y-axis
ax.yaxis.get_major_ticks()[0].set_visible(False)
# Add legend
plt.legend(loc='upper center', bbox_to_anchor=(0.50, -0.10),
fancybox=False, frameon=False, shadow=False, ncol=2, fontsize=20)
# Add title
plt.suptitle('Synthetic Sample', fontsize=20)
# Save figure
plt.savefig('images/scatterplot.png', bbox_inches='tight', format='png')
from IPython.display import Image
Image(filename='images/scatterplot.png', width=700, height=700)
"""
Explanation: Data Visualization
End of explanation
"""
import urllib.request; from IPython.core.display import HTML
HTML(urllib.request.urlopen('http://bit.ly/1K5apRH').read().decode('utf-8'))
"""
Explanation: Integrated Development Environment
PyCharm
PyCharm is developed by the Czech company JetBrains. It is free to use for educational purposes. However, it is a commercial product and thus very well documented. Numerous resources are available to get you started.
Quick Start Guide
Video Lectures
If you would like to check out some alternatives: (1) Spyder, (2) PyDev.
Potential Benefits
Unit Testing Integration
Graphical Debugger
Version Control Integration
Coding Assistance
Code Completion
Syntax and Error Highlighting
...
Let us check it all out for our Basic Example.
Graphical User Interface
<img src='images/pycharm.png' width="650" height="650">
Conclusion
Next Steps
Set up your machine for scientific computing with Python
Visit Continuum Analytics and download Anaconda for your own computer. Anaconda is a free Python distribution with all the required packages to get you started.
Install PyCharm. Make sure to hook it up to your Anaconda distribution (instructions).
Check out the additional resources to dive more into the details.
Additional Resources
Gaël Varoquaux, Emmanuelle Gouillart, Olaf Vahtras (eds.). SciPy Lecture Notes, available at http://www.scipy-lectures.org.
Hans Petter Langtangen. A Primer on Scientific Programming with Python, Springer, New York, NY.
Thomas J. Sargent, John Stachurski (2016). Quantitative Economics. Online Lecture Notes.
Software Engineering for Economists Initiative, Online Resources.
Numerous additional lecture notes, tutorials, online courses, and books are available online.
<style>
li { margin: 1em 3em; padding: 0.2em; }
</style>
<h2>Contact</h2>
<br><br>
<b>Philipp Eisenhauer</b>
<ul>
<li> Mail <a href="mailto:eisenhauer@policy-lab.org">eisenhauer@policy-lab.org</a></li><br>
<li>Web <a href="http://eisenhauer.io">http://eisenhauer.io</a></li><br>
<li>Repository <a href="https://github.com/peisenha">https://github.com/peisenha</a></li>
</ul>
<br><br>
<b>Software Engineering for Economists Initiative</b>
<ul>
<li>Overview <a href="http://softecon.github.io">http://softecon.github.io</a></li><br>
<li>Repository <a href="https://github.com/softEcon">https://github.com/softEcon</a></li><br>
<br>
</ul>
End of explanation
"""
|
jinzishuai/learn2deeplearn | deeplearning.ai/C1.NN_DL/week4/Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb | gpl-3.0 | import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v3 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
    - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
"""
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
"""
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
"""
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
"""
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
"""
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed into the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed into the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
"""
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters["W"+str(l)], parameters["b"+str(l)], activation = "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters["W"+str(L)], parameters["b"+str(L)], activation = "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
"""
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
"""
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1/m*np.sum(np.multiply(np.log(AL),Y)+np.multiply(np.log(1-AL),1-Y))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
"""
Explanation: <table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
End of explanation
"""
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1/m*np.dot(dZ, A_prev.T)
db = 1/m*np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
"""
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward (dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward (dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
"""
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
"""
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward( grads["dA" + str(l+2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
"""
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
"""
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(1, L+1):
parameters['W' + str(l)] = parameters['W' + str(l)] - learning_rate*grads["dW"+str(l)]
parameters['b' + str(l)] = parameters['b' + str(l)] - learning_rate*grads["db"+str(l)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
"""
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation
"""
|
Erhil/PythonNpCourse | materials/week 2/numpy.ipynb | mit | %pylab inline
import this
"""
Explanation: Programming in Python
The Zen of Python
End of explanation
"""
import numpy as np
np.array([1,2,3])
a = np.array([[1,2,3], [4,5,6]])
a = np.array([1,2,3])
b = np.array([4,5,6])
a+b
a*b
a/b
a**b
"""
Explanation: Beautiful is better than ugly.<br>
Explicit is better than implicit.<br>
Simple is better than complex.<br>
Complex is better than complicated.<br>
Flat is better than nested.<br>
Sparse is better than dense.<br>
Readability counts.<br>
Special cases aren't special enough to break the rules.<br>
Although practicality beats purity.<br>
Errors should never pass silently.<br>
Unless explicitly silenced.<br>
In the face of ambiguity, refuse the temptation to guess.<br>
There should be one -- and preferably only one -- obvious way to do it.<br>
Although that way may not be obvious at first unless you're Dutch.<br>
Now is better than never.<br>
Although never is often better than *right* now.<br>
If the implementation is hard to explain, it's a bad idea.<br>
If the implementation is easy to explain, it may be a good idea.<br>
Namespaces are one honking great idea -- let's do more of those!
End of explanation
"""
np.array([1, 2, 4], dtype=np.float32)
a = np.array([1,2,3])
print(a.dtype)
print(a.astype(np.float64).dtype)
"""
Explanation: Data types in np.array
Signed integers
* np.int8
* np.int16
* np.int32
* np.int64
Unsigned integers
* np.uint8
* np.uint16
* np.uint32
* np.uint64
Floating point
* np.float16
* np.float32
* np.float64
End of explanation
"""
np.arange(2, 10, 3, dtype=np.float32)
np.linspace(1,10,10000)
"""
Explanation: Creating arrays in NumPy
Range of values
End of explanation
"""
np.zeros((3,1),dtype=np.float16)
np.ones((5,3),dtype=np.float16)
"""
Explanation: Filling arrays
End of explanation
"""
np.random.random((4,2,3))
np.random.randint(1,10,(5,3))
np.random.normal(5, 6, (4,2))
np.random.seed(42)
a = np.zeros((3,2))
b = np.ones((3,2))
np.hstack([a,b])
np.vstack([a, b])
a
a.shape
b = np.array([[1,2],[3,4],[5,6]])
b.T
a.dot(b)
X = np.arange(1,11).reshape((-1,1))
y = np.arange(2,12)+np.random.normal(size=(10))
y = y.reshape((-1,1))
W = np.random.random((2,1))
"""
Explanation: Random values
End of explanation
"""
X = np.hstack([X, np.ones((10,1))])
f(X)
"""
Explanation: $$f(x) = kx+b$$
$$f(x) = X*W$$
End of explanation
"""
def f(X, W):
return X.dot(W)
def MSE(X, W, y):
return (X.dot(W)-y).T.dot(X.dot(W)-y)/X.shape[0]
def dMSE(X, W, y):
return 2/X.shape[0]*X.T.dot((X.dot(W)-y))
def optimize(W,X,y,a):
for i in range(1000):
W = W - a*dMSE(X,W,y)
MSE(X, W, y)
dMSE(X,W,y)
def optimize(W,X,y,a):
global coef, mses
coef = []
mses = []
for i in range(1000):
coef.append(W)
mses.append(MSE(X,W,y)[0,0])
W = W - a*dMSE(X,W,y)
# print(MSE(X,W,y))
return W
W = np.random.random((2,1))
P = optimize(W, X, y, 0.02)
coef = np.array(coef)
ylabel("k")
xlabel("b")
plot(coef[:,0,0], coef[:,1,0]);
ylabel("MSE")
xlabel("iteration")
plot(mses);
scatter(X[:,0],y.reshape(-1))
plot(X[:,0], f(X, W))
plot(X[:,0], f(X, P))
"""
Explanation: $$MSE(X,\omega, y) = \frac{1}{N} \sum_i (f(x_i, \omega) - y_i)^2$$
$$\frac{dMSE}{dk} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)*x_i$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)$$
$$MSE(X,\omega, y) = \frac{1}{N}(XW - y)^T(XW - y)$$
$$\frac{dMSE}{dk} = \frac{2}{N}(XW - y)x$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum (XW - y)$$
$$dMSE = \frac{2}{N} X^T(XW - y)$$
$$W_{i+1} = W_{i}-\alpha*dMSE(X, W_i, y)$$
End of explanation
"""
|
3DGenomes/tadbit | doc/notebooks/tutorial_1-Retrieve_published_HiC_datasets.ipynb | gpl-3.0 | %%bash
mkdir -p FASTQs
fastq-dump SRR5344921 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344921_1.fastq FASTQs/mouse_B_rep1_1.fastq
mv FASTQs/SRR5344921_2.fastq FASTQs/mouse_B_rep1_2.fastq
fastq-dump SRR5344925 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344925_1.fastq FASTQs/mouse_B_rep2_1.fastq
mv FASTQs/SRR5344925_2.fastq FASTQs/mouse_B_rep2_2.fastq
fastq-dump SRR5344969 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs
mv FASTQs/SRR5344969_1.fastq FASTQs/mouse_PSC_rep1_1.fastq
mv FASTQs/SRR5344969_2.fastq FASTQs/mouse_PSC_rep1_2.fastq
fastq-dump SRR5344973 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344973_1.fastq FASTQs/mouse_PSC_rep2_1.fastq
mv FASTQs/SRR5344973_2.fastq FASTQs/mouse_PSC_rep2_2.fastq
"""
Explanation: Retrieve HiC dataset from NCBI
We will use data from <a name="ref-1"/>(Stadhouders R, Vidal E, Serra F, Di Stefano B et al. 2018), generated from mouse cells in which Hi-C experiments were conducted at different states during highly efficient somatic cell reprogramming.
The data can be downloaded from:
https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE53463
Once downloaded the files can be converted to the FASTQ format in order for TADbit to read them.
The easiest way to download the data might be through the fastq-dump program from the SRA Toolkit (http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?cmd=show&f=software&m=software&s=software).
We download 100M reads for each of 4 replicates (2 replicates from B cells and 2 from Pluripotent Stem Cells),and organize each in two files, one per read-end (this step is long and can take up to 6 hours):
End of explanation
"""
%%bash
dsrc c -t8 FASTQs/mouse_B_rep1_1.fastq FASTQs/mouse_B_rep1_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep1_2.fastq FASTQs/mouse_B_rep1_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep2_1.fastq FASTQs/mouse_B_rep2_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep2_2.fastq FASTQs/mouse_B_rep2_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep1_1.fastq FASTQs/mouse_PSC_rep1_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep1_2.fastq FASTQs/mouse_PSC_rep1_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep2_1.fastq FASTQs/mouse_PSC_rep2_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep2_2.fastq FASTQs/mouse_PSC_rep2_2.fastq.dsrc
"""
Explanation: Files are renamed for convenience.
Note: the parameters used here for fastq-dump generate simple FASTQ files: --defline-seq '@$ac.$si' reduces the information in the headers to the accession number and the read id, --split-files separates the two read-ends into different files, and -X 100000000 downloads only the first 100 million reads of each replicate
Note: alternatively you can also directly download the FASTQ from http://www.ebi.ac.uk/
Compression
Each of these 8 files, contains 100M reads of 75 nucleotides each, and occupies ~17 Gb (total 130 Gb).
Internally we use DSRC <a name="ref-4"/>(Roguski and Deorowicz, 2014), which achieves a better compression ratio and, more importantly, faster decompression:
End of explanation
"""
%%bash
rm -f FASTQs/mouse_B_rep1_1.fastq
rm -f FASTQs/mouse_B_rep1_2.fastq
rm -f FASTQs/mouse_B_rep2_1.fastq
rm -f FASTQs/mouse_B_rep2_2.fastq
rm -f FASTQs/mouse_PSC_rep1_1.fastq
rm -f FASTQs/mouse_PSC_rep1_2.fastq
rm -f FASTQs/mouse_PSC_rep2_1.fastq
rm -f FASTQs/mouse_PSC_rep2_2.fastq
"""
Explanation: After compression, the total size is reduced to ~27 Gb (about 20% of the original size), and DSRC ensures fast reading of the compressed data.
Note:
- using gzip instead reduces size to ~38 Gb (occupies ~40% more than dsrc compressed files)
- using bzip2 instead reduces size to ~31 Gb (occupies ~15% more than dsrc compressed files)
Both are much slower to generate and read
Cleanup
End of explanation
"""
|
Xilinx/PYNQ | boards/Pynq-Z1/logictools/notebooks/fsm_generator.ipynb | bsd-3-clause | from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
"""
Explanation: Finite State Machine Generator
This notebook will show how to use the Finite State Machine (FSM) Generator to generate a state machine. The FSM we will build is a Gray code counter. The counter has three state bits and can count up or down through eight states. The counter outputs are Gray coded, meaning that there is only a single-bit transition between the output vector of any state and its next states.
Step 1: Download the logictools overlay
End of explanation
"""
fsm_spec = {'inputs': [('reset','D0'), ('direction','D1')],
'outputs': [('bit2','D3'), ('bit1','D4'), ('bit0','D5')],
'states': ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7'],
'transitions': [['01', 'S0', 'S1', '000'],
['00', 'S0', 'S7', '000'],
['01', 'S1', 'S2', '001'],
['00', 'S1', 'S0', '001'],
['01', 'S2', 'S3', '011'],
['00', 'S2', 'S1', '011'],
['01', 'S3', 'S4', '010'],
['00', 'S3', 'S2', '010'],
['01', 'S4', 'S5', '110'],
['00', 'S4', 'S3', '110'],
['01', 'S5', 'S6', '111'],
['00', 'S5', 'S4', '111'],
['01', 'S6', 'S7', '101'],
['00', 'S6', 'S5', '101'],
['01', 'S7', 'S0', '100'],
['00', 'S7', 'S6', '100'],
['1-', '*', 'S0', '']]}
"""
Explanation: Step 2: Specify the FSM
End of explanation
"""
fsm_generator = logictools_olay.fsm_generator
"""
Explanation: Notes on the FSM specification format
Step 3: Instantiate the FSM generator object
End of explanation
"""
fsm_generator.trace()
"""
Explanation: Step 4: Setup to use trace analyzer
In this notebook, the trace analyzer is used to check the inputs and outputs of the FSM.
Users can choose whether to use the trace analyzer by calling the trace() method.
End of explanation
"""
fsm_generator.setup(fsm_spec)
"""
Explanation: Step 5: Setup the FSM generator
The FSM generator will work at the default frequency of 10MHz. This can be modified using a frequency argument in the setup() method.
End of explanation
"""
fsm_generator.show_state_diagram()
"""
Explanation: Display the FSM state diagram
This method should only be called after the generator has been properly set up.
End of explanation
"""
fsm_generator.run()
fsm_generator.show_waveform()
"""
Explanation: Set up the FSM inputs on the PYNQ board
* Check that the reset and direction inputs are correctly wired on the PYNQ board, as shown below:
* Connect D0 to GND
* Connect D1 to 3.3V
Notes:
The 3-bit Gray code counter is an up-down, wrap-around counter that will count from states 000 to 100 in either ascending or descending order
The reset input is connected to pin D0 of the Arduino connector
Connect the reset input to GND for normal operation
When the reset input is set to logic 1 (3.3V), the counter resets to state 000
The direction input is connected to pin D1 of the Arduino connector
When the direction is set to logic 0, the counter counts down
Conversely, when the direction input is set to logic 1, the counter counts up
Step 6: Run and display waveform
The run() method executes all the samples; the show_waveform() method is used to display the waveforms
End of explanation
"""
fsm_generator.stop()
"""
Explanation: Verify the trace output against the expected Gray code count sequence
| State | FSM output bits: bit2, bit1, bit0 |
|:-----:|:----------------------------------------:|
| s0 | 000 |
| s1 | 001 |
| s2 | 011 |
| s3 | 010 |
| s4 | 110 |
| s5 | 111 |
| s6 | 101 |
| s7 | 100 |
Step 7: Stop the FSM generator
Calling stop() will clear the logic values on output pins; however, the waveform will be recorded locally in the FSM instance.
End of explanation
"""
|
ernestyalumni/servetheloop | servetheloop.ipynb | mit | import sympy
from sympy import Eq, solve, Symbol, symbols, pi
from sympy import Rational as Rat
from sympy.abc import tau,l,F
"""
Explanation: Start with the Final Design Report - SpaceX Hyperloop Competition II for a high-level view.
SpaceX Hyperloop Track Specification
End of explanation
"""
eta_1 = Symbol('eta_1',positive='True')
BallScrewThrustfromTorque = Eq(F,Rat(2)*pi*eta_1*tau/l)
"""
Explanation: Ball Screws, for the (Eddy current) Brake Mechanism
cf. THK Ball Screw, General Catalog
cf. NSK Ball Screws: 6mm ⌀ thru 15mm ⌀
cf. FUNdaMENTALS of Design, Topic 6 Power Transmission Elements II
The THK Ball Screw, General Catalog yields the following general relationship for the thrust generated when torque is applied:
$ Fa = \frac{ 2\pi \cdot \eta_1 \cdot T }{ Ph }$ (THK's notation)
$ \boxed{ F = \frac{2\pi \eta_1 \tau}{l} }$ (EY notation)
where $\eta_1$ is the efficiency of converting rotational motion to linear motion (i.e. linear output$/$rotational input), $l$ is the thread lead (i.e. the distance either the nut or the screw moves under 1 full rotation (revolution)), and $\tau$ is the applied input torque. Indeed, I had double-checked the kinematics and thus, using energy conservation, verified this relation (cf. servetheloop_dump)
End of explanation
"""
solve( BallScrewThrustfromTorque.subs(eta_1,0.95).subs(tau,3.*1000).subs(l,4.), F)
"""
Explanation: From NSK Ball Screws: 6mm ⌀ thru 15mm ⌀, given the stated product dimensions for the actual product we used (Product ID: W1003WF-24P-C3Z4 for the ball screw), $l=4 \, mm$
Supposing the forward or backward efficiency $\eta$ is $ \sim 0.95$ and torque $\tau$ is $3 \, N \cdot m$,
End of explanation
"""
|
molpopgen/fwdpy | docs/examples/BGS.ipynb | gpl-3.0 | #Use Python 3's print a a function.
#This future-proofs the code in the notebook
from __future__ import print_function
#Import fwdpy. Give it a shorter name
import fwdpy as fp
##Other libs we need
import numpy as np
import pandas as pd
import math
"""
Explanation: Example: background selection
Setting up the simulation
Neutral mutations will occur on the interval $[0,1)$.
Strongly-deleterious mutations will occur on the intervals $[-1,0)$ and $[1,2)$.
Recombination will be uniform throughout the region.
End of explanation
"""
# Where neutral mutations occur:
nregions = [fp.Region(beg=0,end=1,weight=1)]
# Where selected mutations occur:
sregions = [fp.ConstantS(beg=-1,end=0,weight=1,s=-0.05,h=1),
fp.ConstantS(beg=1,end=2,weight=1,s=-0.05,h=1)]
# Recombination:
recregions = [fp.Region(beg=-1,end=2,weight=1)]
"""
Explanation: Establishing 'regions' for mutation and recombination
End of explanation
"""
#Population size
N=1000
#We'll evolve for 10N generations.
#nlist is a list of population sizes over time.
#len(nlist) is the length of the simulation
#We use numpy arrays for speed and optimised RAM
#use. Note the dtype=np.uint32, which means 32-bit
#unsigned integer. Failure to use this type will
#cause a run-time error.
nlist = np.array([N]*10*N,dtype=np.uint32)
#Initalize a random number generator with seed value of 101
rng = fp.GSLrng(101)
#Simulate 40 replicate populations. This uses C++11 threads behind the scenes:
pops = fp.evolve_regions(rng, #The random number generator
40, #The number of pops to simulate = number of threads to use.
N, #Initial population size for each of the 40 demes
nlist[0:], #List of population sizes over time.
0.005, #Neutral mutation rate (per gamete, per generation)
0.01, #Deleterious mutation rate (per gamete, per generation)
0.005, #Recombination rate (per diploid, per generation)
nregions, #Defined above
sregions, #Defined above
recregions)#Defined above
#Now, pops is a Python list with len(pops) = 40
#Each element's type is fwdpy.singlepop
print(len(pops))
print(type(pops[0]))
"""
Explanation: Population size and simulation length
End of explanation
"""
#Use a list comprehension to get a random sample of size
#n = 20 from each replicate
samples = [fp.get_samples(rng,i,20) for i in pops]
#Samples is now a list of tuples of two lists.
#Each list contains tuples of mutation positions and genotypes.
#The first list represents neutral variants.
#The second list represents variants affecting fitness ('selected' variants)
#We will manipulate/analyze these genotypes, etc.,
#in a later example
for i in samples[:4]:
print ("A sample from a population is a ",type(i))
print(len(samples))
"""
Explanation: Taking samples from simulated populations
End of explanation
"""
#Again, use list comprehension to get the 'details' of each sample
#Given that each object in samples is a tuple, and that the second
#item in each tuple represents selected mutations, i[1] in the line
#below means that we are getting the mutation information only for
#selected variants
details = [pd.DataFrame(fp.get_sample_details(i[1],j)) for i,j in zip(samples,pops)]
#details is now a list of pandas DataFrame objects
#Each DataFrame has the following columns:
# a: mutation age (in generations)
# h: dominance of the mutation
# p: frequency of the mutation in the population
# s: selection coefficient of the mutation
# label: A label applied for mutations for each region. Here, I use 0 for all regions
for i in details[:4]:
print(i)
#The order of the rows in each DataFrame is the
#same as the order as the objects in 'samples':
for i in range(4):
print("Number of sites in samples[",i,"] = ",
len(samples[i][1]),". Number of rows in DataFrame ",i,
" = ",len(details[i].index),sep="")
#Pandas DataFrames are cool.
#Let's add a column to each DataFrame
#specifying the mutation position,
#count of derived state,
#and a "replicate ID"
for i in range(len(details)):
##samples[i][1] again is the selected mutations in the sample taken
##from the i-th replicate
details[i]['pos']=[x[0] for x in samples[i][1]] #Mutation position
details[i]['count']=[ x[1].count('1') for x in samples[i][1]] #No. occurrences of derived state in sample
details[i]['id']=[i]*len(details[i].index) #Replicate id
##Merge into 1 big DataFrame:
BigTable = pd.concat(details)
print("This is the merged table:")
print(BigTable)
"""
Explanation: Getting additional information about samples
End of explanation
"""
import libsequence.polytable as polyt
import libsequence.summstats as sstats
#Convert neutral mutations into libsequence "SimData" objects,
#which are intended to handle binary (0/1) data like
#what comes out of these simulations
n = [polyt.SimData(i[0]) for i in samples]
#Create "factories" for calculating the summary stats
an = [sstats.PolySIM(i) for i in n]
##Collect a bunch of summary stats into a pandas.DataFrame:
NeutralMutStats = pd.DataFrame([ {'thetapi':i.thetapi(),'npoly':i.numpoly(),'thetaw':i.thetaw()} for i in an ])
NeutralMutStats
"""
Explanation: Summary statistics from samples
We will use the pylibseq package to calculate summary statistics. pylibseq is a Python wrapper around libsequence.
End of explanation
"""
print(20*math.exp(-0.02/(0.1+0.005)))
"""
Explanation: The average $\pi$ under the model
Under the BGS model, the expectation of $\pi$ is $E[\pi]=\pi_0e^{-\frac{U}{2sh+r}}$, where $U$ is the mutation rate to strongly-deleterious variants, $\pi_0$ is the value expected in the absence of BGS (i.e. $\pi_0 = \theta = 4N_e\mu$), $s$ and $h$ are the selection and dominance coefficients, and $r$ is the recombination rate.
Note that the definition of $U$ is per diploid, meaning twice the per gamete rate. (See Hudson and Kaplan (1995) PMC1206891 for details).
For our parameters, we have $E[\pi] = 20e^{-\frac{0.02}{0.1+0.005}},$ which equals:
End of explanation
"""
for i in range(0,24,1):
pops = fp.evolve_regions(rng,
40,
N,
nlist[0:],
0.005,
0.01,
0.005,
nregions,
sregions,
recregions)
samples = [fp.get_samples(rng,i,20) for i in pops]
simdatasNeut = [polyt.SimData(i[0]) for i in samples]
polySIMn = [sstats.PolySIM(i) for i in simdatasNeut]
##Append stats into our growing DataFrame:
NeutralMutStats=pd.concat([NeutralMutStats,
pd.DataFrame([ {'thetapi':i.thetapi(),
'npoly':i.numpoly(),
'thetaw':i.thetaw()} for i in polySIMn ])])
"""
Explanation: Now, let's get the average $\pi$ from 1000 simulated replicates. We already have the 40 replicates from above, so we'll run another 24 sets of 40 populations.
We will use standard Python to grow our collection of summary statistics.
End of explanation
"""
#Get means for each column:
NeutralMutStats.mean(0)
"""
Explanation: Getting the mean diversity
We've collected everything into a big pandas DataFrame. We can easily get the mean using the built-in groupby and mean functions.
For users happier in R, you could write this DataFrame to a text file and process it using R's dplyr package, which is a really excellent tool for this sort of thing.
End of explanation
"""
|
pligor/predicting-future-product-prices | 04_time_series_prediction/20_price_history_seq2seq-L2reg.ipynb | agpl-3.0 | from __future__ import division
import tensorflow as tf
from os import path, remove
import numpy as np
import pandas as pd
import csv
from sklearn.model_selection import StratifiedShuffleSplit
from time import time
from matplotlib import pyplot as plt
import seaborn as sns
from mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \
renderStatsListWithLabels, renderStatsCollectionOfCrossValids
from tensorflow.contrib import rnn
from tensorflow.contrib import learn
import shutil
from tensorflow.contrib.learn.python.learn import learn_runner
from mylibs.tf_helper import getDefaultGPUconfig
from sklearn.metrics import r2_score
from mylibs.py_helper import factors
from fastdtw import fastdtw
from collections import OrderedDict
from scipy.spatial.distance import euclidean
from statsmodels.tsa.stattools import coint
from common import get_or_run_nn
from data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider
from data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator
from skopt.space.space import Integer, Real
from skopt import gp_minimize
from skopt.plots import plot_convergence
import pickle
import inspect
import dill
import sys
from models.price_history_20_seq2seq_raw_L2reg import PriceHistorySeq2SeqRawL2reg
dtype = tf.float32
seed = 16011984
random_state = np.random.RandomState(seed=seed)
config = getDefaultGPUconfig()
n_jobs = 1
%matplotlib inline
"""
Explanation: https://www.youtube.com/watch?v=ElmBrKyMXxs
https://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb
https://github.com/ematvey/tensorflow-seq2seq-tutorials
End of explanation
"""
epochs = 10
num_features = 1
num_units = 400 #state size
input_len = 60
target_len = 30
lamda2 = 1e-2
batch_size = 50 #47
#trunc_backprop_len = ??
with_EOS = False
total_train_size = 57994
train_size = 6400
test_size = 1282
"""
Explanation: Step 0 - hyperparams
vocab_size is all the potential words you could have (classification for translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now
End of explanation
"""
data_path = '../data/price_history'
#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'
#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'
#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'
npz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'
#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'
npz_test = data_path + '/price_history_60to30_targets_normed_test.npz'
"""
Explanation: Generate the data once
End of explanation
"""
dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
dp.inputs.shape, dp.targets.shape
aa, bb = dp.next()
aa.shape, bb.shape
"""
Explanation: Step 1 - collect data
End of explanation
"""
model = PriceHistorySeq2SeqRawL2reg(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
graph = model.getGraph(batch_size=batch_size,
num_units=num_units,
input_len=input_len,
target_len=target_len,
lamda2=lamda2)
#show_graph(graph)
"""
Explanation: Step 2 - Build model
End of explanation
"""
#rnn_cell = PriceHistorySeq2SeqCV.RNN_CELLS.GRU
#cross_val_n_splits = 5
epochs, num_units, batch_size
#set(factors(train_size)).intersection(factors(train_size/5))
best_learning_rate = 1e-3 #0.0026945952539362472
keep_prob_input = 0.7
def experiment():
return model.run(npz_path=npz_train,
epochs=epochs,
batch_size = batch_size,
num_units = num_units,
input_len=input_len,
target_len=target_len,
learning_rate = best_learning_rate,
preds_gather_enabled=True,
keep_prob_input = keep_prob_input,
lamda2=lamda2,
)
"""
Explanation: Step 3 training the network
RECALL: the baseline is around 4 for the Huber loss on the current problem; anything above 4 should be considered a major error
End of explanation
"""
%%time
dyn_stats, preds_dict = get_or_run_nn(
experiment,
filename='020_seq2seq_60to30_epochs{}_learning_rate_{:.4f}_prob_input{}_lamda2_{}'.format(
epochs, best_learning_rate, keep_prob_input, lamda2
))
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])
for ind in range(len(dp.targets))]
ind = np.argmin(r2_scores)
ind
reals = dp.targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]
for ind in range(len(dp.targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(dp.targets))
reals = dp.targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
"""
Explanation: Recall that without batch normalization, within 10 epochs with 400 units and a batch size of 64, we reached 4.940,
with the decoder inputs NOT filled from the outputs
End of explanation
"""
|
ogaway/Econometrics | MultiCollinearity.ipynb | gpl-3.0 | %matplotlib inline
# -*- coding:utf-8 -*-
from __future__ import print_function
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Load the data
data = pd.read_csv('example/k0801.csv')
data
# Set the explanatory variables
X = data[['X', 'Z']]
X = sm.add_constant(X)
X
# Set the dependent variable
Y = data['Y']
Y
# Run OLS (Ordinary Least Squares)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
"""
Explanation: Multicollinearity (MultiCollinearity)
This runs Chapter 8, "Multicollinearity," of "Econometrics with R" in Python.
For the datasets that accompany the textbook ("k0801.csv" etc.), please download them from the publisher's website.
The explanations below summarize part of the book; see the book for more detail.
Overview
We estimate the following multiple regression model:
$Y_{i} = \alpha + \beta X_{i} + \gamma Z_{i} + u_{i} (i = 1, 2, ..., n)$
The variance $s_{\hat{\beta}}^{2}$ of the regression coefficient $\hat{\beta}$ consists of the following three parts:
$s_{\hat{\beta}}^{2} = \frac{s^{2}}{\Sigma (X_{i} - \bar{X})^{2}(1 - r_{XZ}^{2})}$
$= \frac{(A)}{(B)[1-(C)]}$
When the regression coefficient $\hat{\beta}$ fails to be significant because of (C), we say there is a multicollinearity problem.
Steps for examining multicollinearity
① Check whether each variable has explanatory power when the other explanatory variable is removed.
② Compute part (A).
③ Compute part (B).
④ Compute part (C).
⑤ From the results of ①-④, judge whether ④ is the cause.
Example 8-1
Analyze "k0801.csv".
End of explanation
"""
# Set the explanatory variable
X = data['X']
X = sm.add_constant(X)
X
# Estimate with X as the only explanatory variable
# Run OLS (Ordinary Least Squares)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
# Set the explanatory variable
X = data['Z']
X = sm.add_constant(X)
X
# Estimate with Z as the only explanatory variable
# Run OLS (Ordinary Least Squares)
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
# Standard deviations
data.std()
# Means
data.mean()
# Correlation coefficients
data.corr()
"""
Explanation: From this we see that the P-value for the coefficient $\hat{\beta}$ on X is small, significant even at the 5% level, while the P-value for the coefficient $\hat{\gamma}$ on Z is large and clearly not significant.
End of explanation
"""
# Load the data
data2 = pd.read_csv('example/k0802.csv')
data2
import statsmodels.formula.api as smf
def forward_selected(data, response):
"""Linear model designed by forward selection.
Parameters:
-----------
data : pandas DataFrame with all possible predictors and response
response: string, name of response column in data
Returns:
--------
model: an "optimal" fitted statsmodels linear model
with an intercept
selected by forward selection
evaluated by adjusted R-squared
"""
remaining = set(data.columns)
remaining.remove(response)
selected = []
current_score, best_new_score = 0.0, 0.0
while remaining and current_score == best_new_score:
scores_with_candidates = []
for candidate in remaining:
formula = "{} ~ {} + 1".format(response,
' + '.join(selected + [candidate]))
score = smf.ols(formula, data).fit().rsquared_adj
scores_with_candidates.append((score, candidate))
scores_with_candidates.sort()
best_new_score, best_candidate = scores_with_candidates.pop()
if current_score < best_new_score:
remaining.remove(best_candidate)
selected.append(best_candidate)
current_score = best_new_score
formula = "{} ~ {} + 1".format(response,
' + '.join(selected))
model = smf.ols(formula, data).fit()
return model
# First argument: the data; second argument: the name of the dependent-variable column
model = forward_selected(data2, 'Y')
print(model.model.formula)
# Run OLS
X = data2[['X5', 'X6', 'X8']]
X = sm.add_constant(X)
Y = data2['Y']
model = sm.OLS(Y,X)
results = model.fit()
print(results.summary())
"""
Explanation: These results suggest that although $Z_{i}$ was not significant in the multiple regression, this was due to multicollinearity.
Akaike's Information Criterion (AIC)
Like the adjusted coefficient of determination $\overline{R}^{2}$, it expresses the substantive explanatory power of the regression coefficients.
However, since statsmodels apparently has no method that performs stepwise variable selection, we borrow forward_selected() from the page linked here and use it to select variables.
End of explanation
"""
|
zzsza/Datascience_School | 10. 기초 확률론3 - 확률 분포 모형/02. 베르누이 확률 분포 (파이썬 버전).ipynb | mit | theta = 0.6
rv = sp.stats.bernoulli(theta)
rv
"""
Explanation: Bernoulli probability distribution
Bernoulli trial
A trial whose outcome can only be one of two things, success or fail, is called a Bernoulli trial. For example, tossing a coin once so that it comes up heads (H: Head) or tails (T: Tail) is a kind of Bernoulli trial.
When the outcome of a Bernoulli trial is expressed as a random variable $X$, success is usually encoded as the integer 1 ($X=1$) and failure as the integer 0 ($X=0$). Occasionally failure is encoded as -1 ($X=-1$) instead of 0.
Bernoulli distribution
Since a Bernoulli random variable can take only one of the two values 0 and 1, it is a discrete random variable. It can therefore be defined by a probability mass function (pmf) and a cumulative distribution function (cdf).
A Bernoulli random variable has a single parameter $\theta$, the probability that 1 occurs. The probability that 0 occurs is defined as $1 - \theta$.
The probability mass function of the Bernoulli distribution is:
$$
\text{Bern}(x;\theta) =
\begin{cases}
\theta & \text{if }x=1, \
1-\theta & \text{if }x=0
\end{cases}
$$
Without the case split, this can also be written as a single expression:
$$
\text{Bern}(x;\theta) = \theta^x(1-\theta)^{(1-x)}
$$
If the Bernoulli random variable takes the values 1 and -1, the formula has to be written as:
$$ \text{Bern}(x; \theta) = \theta^{(1+x)/2} (1-\theta)^{(1-x)/2} $$
If a random variable $X$ is generated by a Bernoulli distribution, we say "the random variable $X$ follows a Bernoulli distribution" and write:
$$ X \sim \text{Bern}(x;\theta) $$
Simulating the Bernoulli distribution with SciPy
The bernoulli class in SciPy's stats subpackage is the class for the Bernoulli probability distribution. The parameter $\theta$ of the distribution is set with the p argument.
In the following example, we set p = 0.6.
End of explanation
"""
xx = [0, 1]
plt.bar(xx, rv.pmf(xx), align="center")
plt.xlim(-1, 2)
plt.ylim(0, 1)
plt.xticks([0, 1], ["X=0", "X=1"])
plt.ylabel("P(x)")
plt.title("pmf of Bernoulli distribution")
plt.show()
"""
Explanation: The pmf method computes the probability mass function (pmf: probability mass function).
End of explanation
"""
x = rv.rvs(100, random_state=0)
x
"""
Explanation: To run a simulation, use the rvs method.
End of explanation
"""
sns.countplot(x)
plt.show()
"""
Explanation: Visualize the result with seaborn's countplot command.
End of explanation
"""
y = np.bincount(x, minlength=2)/float(len(x))
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df.pivot("value", "type", "ratio")
"""
Explanation: To display the theoretical probability distribution and the sample distribution at the same time, use code like the following.
NumPy's bincount command counts the number of data points with value 0 and value 1, and the results are arranged in a DataFrame.
End of explanation
"""
sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
"""
Explanation: Visualized with seaborn's barplot command, it looks as follows.
End of explanation
"""
np.mean(x)
np.var(x, ddof=1)
"""
Explanation: Moments of the Bernoulli distribution
The moments of the Bernoulli distribution are as follows.
Expected value
$$\text{E}[X] = \theta$$
(Proof)
$$\text{E}[X] = 1 \cdot \theta + 0 \cdot (1 - \theta) = \theta$$
Variance
$$\text{Var}[X] = \theta(1-\theta)$$
(Proof)
$$\text{Var}[X] = (1 - \theta)^2 \cdot \theta + (0 - \theta)^2 \cdot (1 - \theta) = \theta(1-\theta)$$
In the earlier example $\theta = 0.6$, so the theoretical expected value and variance are:
$$ \text{E}[X] = 0.6 $$
$$ \text{Var}[X] = 0.6 \cdot (1 - 0.6) = 0.24 $$
The sample mean and sample variance computed from the data are calculated as follows.
End of explanation
"""
s = sp.stats.describe(x)
s[2], s[3]
"""
Explanation: Using SciPy's describe command, they can be computed as follows.
End of explanation
"""
pd.Series(x).describe()
"""
Explanation: Alternatively, convert to a Pandas Series object and compute them with the describe method as follows.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Non-parametric 1 sample cluster statistic on single trial power
This script shows how to estimate significant clusters
in time-frequency power estimates. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs
computing single-trial power estimates
baseline-correcting the power estimates (power ratios)
computing stats to see if the ratio deviates from 1.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# just use right temporal sensors for speed
epochs.pick_channels(mne.read_vectorview_selection('Right-temporal'))
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(-0.1, 0.4)
tfr_epochs.crop(-0.1, 0.4)
epochs_power = tfr_epochs.data
"""
Explanation: Set parameters
End of explanation
"""
sensor_adjacency, ch_names = mne.channels.find_ch_adjacency(
tfr_epochs.info, 'grad')
# Subselect the channels we are actually using
use_idx = [ch_names.index(ch_name.replace(' ', ''))
for ch_name in tfr_epochs.ch_names]
sensor_adjacency = sensor_adjacency[use_idx][:, use_idx]
assert sensor_adjacency.shape == \
(len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))
assert epochs_power.data.shape == (
len(epochs), len(tfr_epochs.ch_names),
len(tfr_epochs.freqs), len(tfr_epochs.times))
adjacency = mne.stats.combine_adjacency(
sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))
# our adjacency is square with each dim matching the data size
assert adjacency.shape[0] == adjacency.shape[1] == \
len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
"""
Explanation: Define adjacency for statistics
To compute a cluster-corrected value, we need a suitable definition
of adjacency for our values. So we first compute the
sensor adjacency, then combine that with a grid/lattice adjacency
assumption for the time-frequency plane:
End of explanation
"""
threshold = 3.
n_permutations = 50 # Warning: 50 is way too small for real-world analysis.
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,
threshold=threshold, tail=0,
adjacency=adjacency,
out_type='mask', verbose=True)
"""
Explanation: Compute statistic
End of explanation
"""
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
# Just plot one channel's data
ch_idx, f_idx, t_idx = np.unravel_index(
np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])
# ch_idx = tfr_epochs.ch_names.index('MEG 1332') # to show a specific one
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
"""
Explanation: View time-frequency plots
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/c569084177bc9cce4e0419ab10cfd45d/plot_dipole_fit.ipynb | bsd-3-clause | from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
from nilearn.plotting import plot_anat
from nilearn.datasets import load_mni152_template
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
"""
Explanation: ============================================================
Source localization with equivalent current dipole (ECD) fit
============================================================
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see
this gist <https://gist.github.com/larsoner/ca55f791200fe1dc3dd2>__.
End of explanation
"""
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain with the MRI image.
dip.plot_locations(fname_trans, 'sample', subjects_dir, mode='orthoview')
# Plot the result in 3D brain with the MRI image using Nilearn
# In MRI coordinates and in MNI coordinates (template brain)
trans = mne.read_trans(fname_trans)
subject = 'sample'
mni_pos = mne.head_to_mni(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
mri_pos = mne.head_to_mri(dip.pos, mri_head_t=trans,
subject=subject, subjects_dir=subjects_dir)
t1_fname = op.join(subjects_dir, subject, 'mri', 'T1.mgz')
fig_T1 = plot_anat(t1_fname, cut_coords=mri_pos[0], title='Dipole loc.')
template = load_mni152_template()
fig_template = plot_anat(template, cut_coords=mni_pos[0],
title='Dipole loc. (MNI Space)')
"""
Explanation: Let's localize the N100m (using MEG only)
End of explanation
"""
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, cov=None, nave=np.inf)
# find time point with highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
print('Highest GOF %0.1f%% at t=%0.1f ms with confidence volume %0.1f cm^3'
% (dip.gof[best_idx], best_time * 1000,
dip.conf['vol'][best_idx] * 100 ** 3))
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4])
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False, time_unit='s')
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, -pred_evoked], weights='equal')
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2], **plot_params)
plt.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
"""
Explanation: Calculate and visualise magnetic field predicted by dipole with maximum GOF
and compare to the measured data, highlighting the ipsilateral (right) source
End of explanation
"""
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot(time_unit='s')
"""
Explanation: Estimate the time course of a single dipole with fixed position and
orientation (the one that maximized GOF) over the entire interval
End of explanation
"""
|
rubensfernando/mba-analytics-big-data | Python/2016-08-08/aula7-parte2-web-scraping.ipynb | mit | import requests
req = requests.get("http://pythonscraping.com/pages/page1.html")
print(req.text)
from bs4 import BeautifulSoup
bs = BeautifulSoup(req.text, "html.parser")
print(bs)
type(bs.h1)
bs.h1
bs.h1.string
req = requests.get("http://pythonscraping.com/pages/page1.html")
req.status_code
"""
Explanation: Web Scraping
End of explanation
"""
import requests
from requests.exceptions import ConnectionError
from bs4 import BeautifulSoup
def recuperarTitulo(url):
    try:
        req = requests.get(url)
        req.raise_for_status()  # raises HTTPError for 4xx/5xx responses
        return BeautifulSoup(req.text, "html.parser").title.string
    except (ConnectionError, requests.exceptions.HTTPError, AttributeError):
        # connection failure, bad status code, or page without a <title>
        return None
"""
Explanation: Exercise - Create a function named recuperarTitulo(url) that returns the title of the page at the URL passed as a parameter. Remember to handle the necessary errors.
End of explanation
"""
recuperarTitulo("http://pythonscrapingxxx.com/pages/page1.html")
recuperarTitulo("http://pythonscraping.com/pages/page12.html")
recuperarTitulo("http://pythonscraping.com/pages/page1.html")
"""
Explanation: Test the function with the following URLs:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-1/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of atmospheric chemistry code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Gas phase atmospheric chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
"""
Explanation: Multigroup Cross Section Generation Part II: Advanced Features
This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
End of explanation
"""
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create box to surround the geometry
box = openmc.model.rectangular_prism(1.26, 1.26, boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and a reflective bounding box.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & box
pin_cell_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(pin_cell_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
"""
Explanation: We now must create a geometry with the pin cell universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 10,000 particles, and allow the tally triggers to extend the run to at most four times the requested number of batches.
End of explanation
"""
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
"""
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
"""
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
"""
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
"""
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1e-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
"""
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
"""
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows, and our MGXS objects will compute the cross sections for us under the hood.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
"""
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
"""
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
"""
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
"""
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
"""
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
End of explanation
"""
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
"""
Explanation: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
End of explanation
"""
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
"""
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
"""
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
End of explanation
"""
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
"""
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
"""
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
"""
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.
The MGXS data can also be plotted using the openmc.plot_xs command; however, we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.
End of explanation
"""
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
"""
Explanation: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
"""
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
"""
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation
"""
from ase.calculators.eam import EAM
"""
Explanation: NEB using ASE
1. Setting up an EAM calculator.
Suppose we want to calculate the minimum energy path of adatom diffusion on a (100) surface. We first need to choose an energy model, and in ASE, this is done by defining a "calculator". Let's choose our calculator to be Zhou's aluminum EAM potential, which we've used in previous labs.
We first import ASE's built-in EAM calculator class:
End of explanation
"""
import os
pot_file = os.environ.get('LAMMPS_POTENTIALS') + '/Al_zhou.eam.alloy'
print(pot_file)
"""
Explanation: Then set the potential file:
End of explanation
"""
zhou = EAM(potential=pot_file)
"""
Explanation: We just have to point ASE to the potential file. It automatically parses the file and constructs the corresponding EAM calculator for us:
End of explanation
"""
from ase.build import fcc100, add_adsorbate
slab = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
"""
Explanation: Next, we define our surface. Let's put the atom in the "hollow" site on the (100) surface. You can find out the adatom sites that are available for different types of surfaces in the ASE documentation: https://wiki.fysik.dtu.dk/ase/ase/build/surface.html
End of explanation
"""
from ase.visualize import view
view(slab, viewer='x3d')
"""
Explanation: Let's see what the slab looks like:
End of explanation
"""
slab.set_calculator(zhou)
"""
Explanation: Let's set the calculator of the slab to the EAM calculator we defined above:
End of explanation
"""
slab.get_potential_energy() # energy in eV
slab.get_forces() # forces in eV/A
"""
Explanation: This lets us calculate the potential energy and forces on the atoms:
End of explanation
"""
from ase.optimize import BFGS
dyn = BFGS(slab)
dyn.run(fmax=0.0001)
"""
Explanation: Notice the nonzero forces, and in particular the strong force in the z-direction acting on the adatom. That's a signal that we're not relaxed.
2. Structure relaxation in ASE.
We can use one of ASE's built-in structure optimizers to relax the structure and find the local energy minimum predicted by the EAM potential. The optimization terminates when the maximum force on an atom falls below fmax, which we set to 0.1 meV/A.
End of explanation
"""
slab_2 = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab_2, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab_2.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
slab_2.set_calculator(EAM(potential=pot_file))
slab_2.positions
slab_2.positions[-1][0:2] = slab_2.positions[10][0:2] # move the adatom so it sits directly above the atom at index 10
view(slab_2, viewer='x3d')
"""
Explanation: To compute the activation barrier for adatom diffusion, we first need to know the endpoint of the transition path, which we can determine by looking at the atomic coordinates.
End of explanation
"""
dyn = BFGS(slab_2)
dyn.run(fmax=0.0001)
"""
Explanation: We again relax the structure:
End of explanation
"""
from ase.neb import NEB
import numpy as np
# make band
no_images = 15
images = [slab]
images += [slab.copy() for i in range(no_images-2)]
images += [slab_2]
neb = NEB(images)
# interpolate middle images
neb.interpolate()
# set calculators of middle images
for image in images[1:no_images-1]:
image.set_calculator(EAM(potential=pot_file))
# optimize the NEB trajectory
optimizer = BFGS(neb)
optimizer.run(fmax=0.01)
# calculate the potential energy of each image
pes = np.zeros(no_images)
pos = np.zeros((no_images, len(images[0]), 3))
for n, image in enumerate(images):
pes[n] = image.get_potential_energy()
pos[n] = image.positions
"""
Explanation: Nudged elastic band calculations.
We've succeeded in relaxing the endpoints. We can proceed to search for the saddle point using NEB.
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot(pes-pes[0], 'k.', markersize=10) # plot energy difference in eV w.r.t. first image
plt.plot(pes-pes[0], 'k--', markersize=10)
plt.xlabel('image #')
plt.ylabel('energy difference (eV)')
plt.show()
"""
Explanation: Let's plot the results.
End of explanation
"""
import numpy as np
from ase import Atoms
positions = np.array([[0, 0, 0], [1, 0, 0]])
cell = np.eye(3) * 10
two_atoms = Atoms(positions=positions, cell=cell)
from ase.visualize import view
view(two_atoms, viewer='x3d')
"""
Explanation: FLARE code
1. Setup
In this lab, we'll use the Bayesian ML code FLARE ("fast learning of atomistic rare events") that has recently been made open source. To set it up on Google cloud, vbox, or your personal machine, you'll need to pull it from github. I'll give the commands for setting everything up on an AP275 Google cloud instance, but the steps will be pretty much the same on any machine.
git clone https://github.com/mir-group/flare.git
The code is written in Python, but inner loops are accelerated with Numba, which you'll need to install with pip.
sudo apt install python3-pip
pip3 install numba
You'll see warnings from Numba that a more recent version of "colorama" needs to be installed. You can install it with pip:
pip3 install colorama
You may find it helpful to add the FLARE directory to your Python path and bash environment, which makes it easier to directly import files.
nano .profile
export FLARE=$HOME/Software/flare
export PYTHONPATH=$PYTHONPATH:$FLARE:$FLARE/otf_engine:$FLARE/modules
source .profile
2. A toy example
Let's look at a simple example to get a feel for how the code works. Let's put two atoms in a box along the x axis:
End of explanation
"""
forces = np.array([[-1, 0, 0], [1, 0, 0]])
"""
Explanation: Let's put equal and opposite forces on our atoms.
End of explanation
"""
from kernels import two_body, two_body_grad, two_body_force_en
from gp import GaussianProcess
"""
Explanation: The FLARE code uses Gaussian process regression (GPR) to construct a covariant force field based on atomic forces, which are the training labels. GPR is a kernel based machine learning method, which means that it makes predictions by comparing test points to structures in the training set. For this simple system, we choose a two-body kernel, which compares pairwise distances in two structures.
End of explanation
"""
hyps = np.array([1, 1, 1e-3]) # signal std, length scale, noise std
cutoffs = np.array([2])
gp_model = GaussianProcess(kernel=two_body, kernel_grad=two_body_grad, hyps=hyps,
cutoffs=cutoffs, energy_force_kernel=two_body_force_en)
"""
Explanation: The GP models are local, and require you to choose a cutoff; here we pick 2 A, matching the cutoffs array in the code. The kernel has a few hyperparameters which control uncertainty estimates and length scales. They can be optimized in a rigorous way by maximizing the likelihood of the training data, but for this lab we'll just set the hyperparameters to reasonable values.
End of explanation
"""
import struc
training_struc = struc.Structure(cell=cell, species=['A']*2, positions=positions)
"""
Explanation: The GP models take structure objects as input, which contain information about the cell and atomic coordinates (much like the Atoms class in ASE).
End of explanation
"""
gp_model.update_db(training_struc, forces)
gp_model.set_L_alpha()
"""
Explanation: We train the GP model by giving it training structures and corresponding forces:
End of explanation
"""
gp_model.predict(gp_model.training_data[0], 2) # second argument is the force component (x=1, y=2, z=3)
"""
Explanation: As a quick check, let's make sure we get reasonable results on the training structure:
End of explanation
"""
from gp_calculator import GPCalculator
gp_calc = GPCalculator(gp_model)
"""
Explanation: To make it easier to get force and energy estimates from the GP model, we can wrap it in an ASE calculator:
End of explanation
"""
# print positions, energy, and forces before rotation
two_atoms.set_calculator(gp_calc)
print(two_atoms.positions)
print(two_atoms.get_potential_energy())
print(two_atoms.get_forces())
two_atoms.rotate(90, 'z') # rotate the atoms 90 degrees about the z axis
two_atoms.set_calculator(gp_calc) # set calculator to gp model
# print positions, energy, and forces after rotation
print(two_atoms.positions)
print(two_atoms.get_potential_energy())
print(two_atoms.get_forces())
"""
Explanation: Now let's test the covariance property of the model. Let's rotate the structure by 90 degrees, and see what forces we get.
End of explanation
"""
# initialize gp model
import kernels
import gp
import numpy as np
kernel = kernels.two_plus_three_body
kernel_grad = kernels.two_plus_three_body_grad
hyps = np.array([1, 1, 0.1, 1, 1e-3]) # sig2, ls2, sig3, ls3, noise std
cutoffs = np.array([4.96, 4.96])
energy_force_kernel = kernels.two_plus_three_force_en
gp_model = gp.GaussianProcess(kernel, kernel_grad, hyps, cutoffs,
energy_force_kernel=energy_force_kernel)
# make slab structure in ASE
from ase.build import fcc100, add_adsorbate
import os
from ase.calculators.eam import EAM
slab = fcc100('Al', size=(3, 3, 3))
add_adsorbate(slab, 'Al', 2, 'hollow') # put adatom 2 A above the slab
slab.center(vacuum=5.0, axis=2) # 5 A of vacuum on either side
pot_file = os.environ.get('LAMMPS_POTENTIALS') + '/Al_zhou.eam.alloy'
zhou = EAM(potential=pot_file)
slab.set_calculator(zhou)
# make training structure
import struc
training_struc = struc.Structure(cell=slab.cell,
species=['Al']*len(slab),
positions=slab.positions)
training_forces = slab.get_forces()
# add atoms to training database
gp_model.update_db(training_struc, training_forces)
gp_model.set_L_alpha()
# wrap in ASE calculator
from gp_calculator import GPCalculator
gp_calc = GPCalculator(gp_model)
# test on training structure
slab.set_calculator(gp_calc)
GP_forces = slab.get_forces()
# check accuracy by making a parity plot
import matplotlib.pyplot as plt
plt.plot(training_forces.reshape(-1), GP_forces.reshape(-1), '.')
plt.show()
"""
Explanation: 3. Two plus three body model
In the lab, we'll add a three-body term to the potential, which makes the model significantly more accurate for certain systems. Let's see how this would work for our (100) slab.
End of explanation
"""
|
oditorium/blog | iPython/Error-Estimation-for-Survey-Data.ipynb | agpl-3.0 | N_people = 500
ratio_female = 0.30
proba = 0.40
"""
Explanation: Error Estimation for Survey Data
the issue we have is the following: we are drawing independent random numbers from a binary distribution with probability $p$ (think: the probability of a certain person liking the color blue) and we have two groups (think: male and female). Those two groups don't necessarily have the same size.
The question we ask is what difference we can expect in the spread of the ex-post estimates of $p$ between the two groups.
We first define our population parameters
End of explanation
"""
def the_sd(N, p, r):
N = float(N)
p = float(p)
r = float(r)
return sqrt(1.0/N*(p*(1.0-p))/(r*(1.0-r)))
def sd_func_factory(N,r):
def func(p):
return the_sd(N,p,r)
return func
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
"""
Explanation: Closed Form Approximation
Of course we could have done this analytically using the Normal approximation: we have two independent Normal random variables, both with expectation $p$. The variance of the male variable is $p(1-p)/N_{male}$, and that of the female variable is $p(1-p)/N_{female}$. The overall variance of the difference (or sum; it does not matter here because they are uncorrelated) is
$$
var = p(1-p)\times \left(\frac{1}{N_{male}} + \frac{1}{N_{female}}\right)
$$
Using the female/male ratio $r$ instead we can write for the standard deviation
$$
sd = \sqrt{var} = \sqrt{\frac{1}{N}\frac{p(1-p)}{r(1-r)}}
$$
meaning that we expect the difference in estimators for male and female of the order of $sd$
End of explanation
"""
p = linspace(0,0.25,5)
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
sd = list(map(f, p))
sd2 = list(map(f2, p))
pd.DataFrame(data= {'p':p, 'sd':sd, 'sd2':sd2})
"""
Explanation: That's the one-standard-deviation range around the estimator. For example: if the underlying probability is $0.25=25\%$, then the expected difference between the estimators for the male and the female group is of the order of $4.2\%$ for the full group (sd), or $5.9\%$ if only half of the people replied (sd2)
End of explanation
"""
p = linspace(0,0.25,50)
sd = list(map(f, p))
sd2 = list(map(f2, p))
plot (p,p, 'k')
plot (p,p-sd, 'g--')
plot (p,p+sd, 'g--')
plot (p,p-sd2, 'r--')
plot (p,p+sd2, 'r--')
grid(b=True, which='major', color='k', linestyle='--')
"""
Explanation: That's the same relationship, shown as a plot
End of explanation
"""
z=linspace(1.,3,100)
plot(z,1. - (norm.cdf(z)-norm.cdf(-z)))
grid(b=True, which='major', color='k', linestyle='--')
plt.title("Probability of being beyond Z (2-sided) vs Z")
"""
Explanation: For reference, the two-sided tail probabilities as a function of $z$ (read it as follows: the probability of a Normal random variable being 2 standard deviations away from its mean, to either side, is about 0.05, or 5%). Said the other way round, a two-standard-deviation difference corresponds to about 95% confidence
End of explanation
"""
number_of_tries = 1000
"""
Explanation: Using Monte Carlo
we also need some additional parameters for our Monte Carlo
End of explanation
"""
N_female = int (N_people * ratio_female)
N_male = N_people - N_female
"""
Explanation: We do some intermediate calculations...
End of explanation
"""
data_male = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_male))
data_female = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_female))
"""
Explanation: ...and then generate our random numbers...
End of explanation
"""
proba_male = [np.mean(row) for row in data_male]
proba_female = [np.mean(row) for row in data_female]
proba_diff = list((pm-pf) for pm,pf in zip(proba_male, proba_female))
np.mean(proba_diff), np.std(proba_diff)
"""
Explanation: ...that we then reduce along one dimension (i.e., over the people in each sample) to obtain our estimators of the probability for males and females, as well as their difference. On the differences, finally, we look at the mean (which should be zero-ish) and the standard deviation (which should be consistent with the numbers above)
End of explanation
"""
|
karlnapf/shogun | doc/ipython-notebooks/clustering/GMM.ipynb | bsd-3-clause | %pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all Shogun classes
from shogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""
Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance
"""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
"""
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - heiko.strathmann@gmail.com - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the GMM framework of the Google summer of code 2011 project of Alesis Novik - https://github.com/alesis
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some Python magic in places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ as being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(\boldsymbol{\mu}_i,\Sigma_i)
$$
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent models and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with with some tricks as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood, and that its stationary points (where neither E-step nor M-step produces changes) correspond to local maxima in the model's likelihood. See the references for more details on the procedure and on how the lower bound on the log-likelihood is obtained. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as a step size, which is both good and bad, since convergence can be slow.
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference of knowing the latent variable indicating the component or not.
End of explanation
"""
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[1.3, 3]])  # covariance matrices must be symmetric
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.put('m_coefficients', weights)
"""
Explanation: Set up the model in Shogun
End of explanation
"""
# now sample from each component seperately first, the from the joint model
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.put('m_coefficients', w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component
gmm.put('m_coefficients', weights)
"""
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
"""
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
"""
Explanation: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> interface, including the mixture.
End of explanation
"""
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
"""
Explanation: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: Someone gives you a bunch of data with no labels attached to it all all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
"""
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
feat=features(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(feat)
# learn GMM
gmm_est.train_em()
return gmm_est
"""
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
"""
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
"""
Explanation: So far so good, now lets plot the density of this GMM using the code from above
End of explanation
"""
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
    visualise_gmm(gmm, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
"""
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen that the upper two Gaussians are grouped together (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">KMeans</a> is used to initialise the cluster centres if not done by hand, using a random cluster initialisation); re-run a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same or different results; no guarantees.
Note that it is easily possible to initialise EM by specifying the parameters of the mixture components, as we did to create the original model above.
One way to decide which of multiple converged EM instances to use is simply to compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING Do not select the number of components like this, as the model will overfit.
End of explanation
"""
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm.get_num_components()]) for x in X])
# visualise points by cluster
for i in range(gmm.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
"""
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
"""
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
title("Data coloured by likelihood for component %d" % comp_idx)
"""
Explanation: These are clusterings obtained via the true mixture model and via the one learned with EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen, which does not allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into account "draws" in the decision, i.e. cases where the probabilities for two different clusters are equally large.
Below we plot all points, coloured by their likelihood under each component.
End of explanation
"""
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
pcolor(Xs,Ys,D_est)
"""
Explanation: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation
"""
|
clazaro/elastica | 2DRodFF_1.ipynb | gpl-3.0 | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
"""
Explanation: © C. Lázaro, Universidad Politécnica de Valencia, 2015
Form finding of planar flexible rods (1)
1 Motivation
Schek, 1973 & Linkwitz - Force denisty method
1. Define net topology and restrained nodes
2. Prescribe force densities
3. Solve for nodal coordinates
A similar procedure can be devised for flexible rods.
2 Problem definition
Given fixed conditions for the endpoints of a flexible and inextensible (Kirchhoff) rod, find the rod geometry for prescribed static data
3 Tentative algorithm
Define rod topology (nodes) and restrained nodes
Prescribe a minimal set of static data to compute bending moments and torques
Compute curvatures and twist
Solve for nodal coordinates
4 Static invariants
Consider a rod with forces/moments applied only at end sections.
The equilibrium equations are
$$\mathbf{N}^{'} = \mathbf{N} \times \mathbf{C}_{K}^{-1} \mathbf{M}$$
$$\mathbf{M}^{'} = \mathbf{N} \times \mathbf{C}_{\Gamma}^{-1} \mathbf{N} + \mathbf{M} \times \mathbf{C}_{K}^{-1} \mathbf{M}$$
The following quantities are invariants for a given configuration (Lázaro, 2005):
- $\frac{1}{2}\,{\mathbf{M} \cdot \mathbf{C}_{K}^{-1} \mathbf{M}} + \mathbf{N} \cdot \mathbf{A_1}$ (Linear density of complementary deformation energy plus axial force)
- $\mathbf{N} \cdot \mathbf{N}$ (Constant modulus of the end forces $\equiv$ Equilibrium of forces)
- $\mathbf{M} \cdot \mathbf{N}$ (Constant modulus of the moment at the origin $\equiv$ Equilibrium of moments)
4.1 Planar case
In the planar case the equilibrium equations are
$$N^{'} = \frac{1}{EI}Q M$$
$$Q^{'} = -\frac{1}{EI}N M$$
$$M^{'} = -Q$$
and the two first invariants reduce to
- $\frac{1}{2} \frac{1}{EI} M^2 + N = constant$
- $N^2 + Q^2 = constant$
The third invariant is null.
Solutions of equilibrium lie at the orbits given by the intersections of both surfaces in the $(N, Q, M)$ space.
End of explanation
"""
EI = 5000. #kN m^2
H = 3600. #kN m/m
F1 = -2600. #kN
F2 = -3600. #kN
F3 = -4600. #kN
phi = np.linspace(np.pi, -np.pi, 501)
theta0 = np.arccos(H/F3)
phi3 = np.linspace(theta0, - theta0, 501)
N1 = F1*np.cos(phi)
Q1 = -F1*np.sin(phi)
M1 = -np.sqrt(2*EI*(H - N1)) # sign is consistent with the choice of sense for theta
N2 = F2*np.cos(phi)
Q2 = -F2*np.sin(phi)
M2 = -np.sqrt(2*EI*(H - N2))
N3 = F3*np.cos(phi3)
Q3 = -F3*np.sin(phi3)
M3 = -np.sqrt(2*EI*(H - N3))
fig = plt.figure(figsize=(9., 9.))
ax = fig.gca(projection='3d')
ax.plot(N1, Q1, M1, color='r')
ax.plot(N1, Q1, -M1, color='r')
ax.plot(N2, Q2, M2, color='g')
ax.plot(N2, Q2, -M2, color='g')
ax.plot(N3, Q3, M3, color='b')
ax.plot(N3, Q3, -M3, color='b')
ax.set_xlabel('$N$')
ax.set_ylabel('$Q$')
ax.set_zlabel('$M$')
"""
Explanation: 4.1.1 Visualizing the section force orbits
We may choose a parameter $\theta$ to traverse the orbit. Then, denoting the first constant as $\mathcal{H}$ and the second as $F^2$, invariant equations are equivalent to
$$N = F \cos\theta$$
$$Q = -F \sin\theta$$
$$M = \pm\sqrt{2EI (\mathcal{H} - F \cos\theta)}$$
The expression for the moment fails when the radicand is negative. Therefore, we may distinguish the following situations:
1. $\mathcal{H} > |F|$. The orbit will be a closed one with moments of equal sign all along it.
2. $\mathcal{H} = |F|$.
3. $\mathcal{H} < |F|$. The values of $\theta$ are in the interval $(\arccos\frac{\mathcal{H}}{F}, -\arccos\frac{\mathcal{H}}{F})$. We are traversing the orbit in the sense of decreasing $\theta$; this choice determines the side of the (buckled) structure.
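These cases can be made explicit in code; a small helper (the function name is mine) returns the admissible $(\theta_0, -\theta_0)$ interval, which can then be checked against the sign of the radicand:

```python
import numpy as np

def theta_range(H, F):
    # admissible interval so that the radicand 2*EI*(H - F*cos(theta)) >= 0;
    # written for the compressive case F < 0 used in this notebook
    if H >= abs(F):                  # cases 1-2: the whole orbit is admissible
        return np.pi, -np.pi
    theta0 = np.arccos(H / F)        # case 3: open arc between +/- theta0
    return theta0, -theta0

EI = 5000.0
for H, F in [(3600.0, -2600.0), (3600.0, -4600.0)]:
    t0, t1 = theta_range(H, F)
    theta = np.linspace(t0, t1, 201)
    radicand = 2*EI*(H - F*np.cos(theta))
    assert radicand.min() >= -1e-5   # M is real everywhere on the interval
```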
End of explanation
"""
print('F= {0:.0f} kN, H = {1:.0f} kN m/m, theta_0 = {2:.5f} rad'.format(F3, H, theta0))
sectionForces = []
N0 = F3*np.cos(theta0)
Q0 = -F3*np.sin(theta0)
M0 = 0.
sectionForces.append([N0, Q0, M0])
nEdges = 50
h = 0.1
Nn = N0
Qn = Q0
Mn = M0
for n1 in range(1, nEdges+1):
Nn1 = Nn*np.cos(h*(Mn - h*Qn/4)/EI) + Qn*np.sin(h*(Mn - h*Qn/4)/EI)
Qn1 = Qn*np.cos(h*(Mn - h*Qn/4)/EI) - Nn*np.sin(h*(Mn - h*Qn/4)/EI)
Mn1 = Mn - 0.5*h*(Qn + Qn1)
sectionForces.append([Nn1, Qn1, Mn1])
Nn = Nn1
Qn = Qn1
Mn = Mn1
axialForce = np.array([force[0] for force in sectionForces])
shearForce = np.array([force[1] for force in sectionForces])
bendingMoment = np.array([force[2] for force in sectionForces])
H = bendingMoment**2/2/EI + axialForce
fig = plt.figure(figsize=(9., 9.))
ax = fig.gca(projection='3d')
ax.plot(N3, Q3, M3, color='b')
ax.plot(N3, Q3, -M3, color='b')
ax.scatter(axialForce, shearForce, bendingMoment, color='r')
ax.set_xlabel('$N$')
ax.set_ylabel('$Q$')
ax.set_zlabel('$M$')
fig = plt.figure(figsize=(7., 7.))
ax = fig.gca()
ax.plot(np.linspace(0, h*nEdges, nEdges+1), H)
ax.set_xlabel('$s$')
ax.set_ylabel('$\mathcal{H}$')
"""
Explanation: 4.1.2 Numerical solution of the equilibrium equations
According to Hairer et al. (2006, p. 247), Lie-Poisson systems may be solved numerically using splitting methods when the Hamiltonian can be split into independent functions (which is indeed the case for the Kirchhoff rod). The resulting method is:
$$N_{n+1} = N_n \cos(h(M_n - h Q_n/4)/EI) + Q_n \sin(h(M_n - h Q_n/4)/EI)$$
$$Q_{n+1} = Q_n \cos(h(M_n - h Q_n/4)/EI) - N_n \sin(h(M_n - h Q_n/4)/EI)$$
$$M_{n+1} = M_n - \frac{h}{2} (Q_n + Q_{n+1})$$
Pinned rod
Let's consider the case of a pinned rod subject to end forces of modulus $F$.
At the start of the rod $M = 0$ and $N^2 + Q^2 = F^2$. Thus, we have the freedom to prescribe the value of the first invariant. Once this is done, the orbit (and with it the start orientation of the rod, $\theta_0$) is defined, and therefore $(N, Q, M)$ triplets on the orbit may be computed by the numerical method introduced above.
We will experiment with the corresponding values for the third case, taking $h = 0.1$
End of explanation
"""
import scipy.optimize
def implicitMidpoint(Xn1, Xn, h, EI):
f = np.empty(3)
f[0] = Xn1[0] - Xn[0] - h*(Xn[1] + Xn1[1])*(Xn[2] + Xn1[2])/4./EI
f[1] = Xn1[1] - Xn[1] + h*(Xn[0] + Xn1[0])*(Xn[2] + Xn1[2])/4./EI
f[2] = Xn1[2] - Xn[2] + h*(Xn[1] + Xn1[1])/2.
return f
sectionForces = []
N0 = F3*np.cos(theta0)
Q0 = -F3*np.sin(theta0)
M0 = 0.
sectionForces.append([N0, Q0, M0])
nEdges = 50
h = 0.1
Xn = np.array([N0, Q0, M0])
for n1 in range(1, nEdges+1):
eqSystem = lambda Xn1: implicitMidpoint(Xn1, Xn, h, EI)
solution = scipy.optimize.root(eqSystem, Xn)
Xn1 = solution.x
sectionForces.append(Xn1)
Xn = Xn1
axialForce = np.array([force[0] for force in sectionForces])
shearForce = np.array([force[1] for force in sectionForces])
bendingMoment = np.array([force[2] for force in sectionForces])
H = bendingMoment**2/2/EI + axialForce
fig = plt.figure(figsize=(9., 9.))
ax = fig.gca(projection='3d')
ax.plot(N3, Q3, M3, color='b')
ax.plot(N3, Q3, -M3, color='b')
ax.scatter(axialForce, shearForce, bendingMoment, color='r')
ax.set_xlabel('$N$')
ax.set_ylabel('$Q$')
ax.set_zlabel('$M$')
fig = plt.figure(figsize=(7., 7.))
ax = fig.gca()
ax.plot(np.linspace(0, h*nEdges, nEdges+1), H)
ax.set_xlabel('$s$')
ax.set_ylabel('$\mathcal{H}$')
"""
Explanation: The result shows that this numerical method does not preserve the Hamiltonian. Therefore, we now try the implicit midpoint rule, which, according to Hairer et al. (2006, p. 247), preserves quadratic invariants:
$$N_{n+1} = N_n + \frac{h}{4EI} (Q_n + Q_{n+1})(M_n + M_{n+1})$$
$$Q_{n+1} = Q_n - \frac{h}{4EI} (N_n + N_{n+1})(M_n + M_{n+1})$$
$$M_{n+1} = M_n - \frac{h}{2} (Q_n + Q_{n+1})$$
Multiplying the first equation by $(N_n + N_{n+1})$, the second by $(Q_n + Q_{n+1})$, and adding them yields $N_{n+1}^2 + Q_{n+1}^2 = N_n^2 + Q_n^2$ (preservation of the Casimir). Substituting the third equation into the first shows the preservation of the Hamiltonian.
The drawback is that we end up with an implicit scheme in which the $n+1$ variables can't be isolated. In the next experiment we will advance each step using the scipy.optimize.root non-linear solver.
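Before calling the non-linear solver, the conservation claim can be verified on a single step; here the implicit system is solved by plain fixed-point iteration instead of scipy.optimize.root (a sketch using the same data as the pinned-rod experiment above):

```python
import numpy as np

def midpoint_step(N, Q, M, h, EI, iters=60):
    # fixed-point iteration on the implicit midpoint equations;
    # it converges quickly here because h*|M|/(2*EI) is small
    N1, Q1, M1 = N, Q, M
    for _ in range(iters):
        N1 = N + h*(Q + Q1)*(M + M1)/(4.0*EI)
        Q1 = Q - h*(N + N1)*(M + M1)/(4.0*EI)
        M1 = M - 0.5*h*(Q + Q1)
    return N1, Q1, M1

EI, H, F = 5000.0, 3600.0, -4600.0
theta0 = np.arccos(H/F)
N0, Q0, M0 = F*np.cos(theta0), -F*np.sin(theta0), 0.0
N1, Q1, M1 = midpoint_step(N0, Q0, M0, 0.1, EI)
casimir_drift = (N1**2 + Q1**2) - (N0**2 + Q0**2)
hamiltonian_drift = (M1**2/(2*EI) + N1) - (M0**2/(2*EI) + N0)
```

Both drifts come out at round-off level, in agreement with the algebraic argument above.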
End of explanation
"""
EI = 5000. #kN m^2
H = 3600. #kN m/m
F = -4600. #kN
theta0 = np.arccos(H/F)
theta = np.linspace(theta0, -theta0, nEdges+1)
N = F*np.cos(theta)
Q = -F*np.sin(theta)
M = -np.sqrt(2*EI*(H - N))
"""
Explanation: This method yields an excellent result. However, if we are thinking of a form-finding problem for a pinned rod, we cannot guarantee that the endpoint of our solution falls on the endpoint of the rod. Therefore we will try another experiment based on the recurrence step of the implicit midpoint rule.
The basic idea is to let the interval size vary, calculating every value by the following procedure (exemplified for a pinned rod):
1. Prescribe the end force $F$ and the Hamiltonian $\mathcal{H}$ (this fixes the orbit).
2. Parametrize the orbit through $\theta$. Using the condition $M = 0$ at the endpoints, calculate the start and end values of the parameter.
3. Calculate the triplets $(N, Q, M)$ at fixed intervals of the parameter (up to this point this is the same procedure we used to draw the orbit).
4. (Key step) Calculate the values of $h$ between triplet values. They will be used later on to draw the solution.
End of explanation
"""
h = []
for n in range(nEdges):
hNn = 4*EI*(N[n+1] - N[n])/(Q[n] + Q[n+1])/(M[n] + M[n+1])
hQn = -4*EI*(Q[n+1] - Q[n])/(N[n] + N[n+1])/(M[n] + M[n+1])
hMn = -2*(M[n+1] - M[n])/(Q[n] + Q[n+1])
h.append([hNn, hQn, hMn])
# h
"""
Explanation: We now test whether the interval values computed from each formula are consistent:
$$h_N = 4EI \frac{N_{n+1} - N_n}{(Q_n + Q_{n+1})(M_n + M_{n+1})}$$
$$h_Q = -4EI \frac{Q_{n+1} - Q_n}{(N_n + N_{n+1})(M_n + M_{n+1})}$$
$$h_M = -2 \frac{M_{n+1} - M_n}{Q_n + Q_{n+1}}$$
End of explanation
"""
h = []
for n in range(nEdges):
hn = -2*(M[n+1] - M[n])/(Q[n] + Q[n+1])
h.append(hn)
print('The length of the rod is {:.3f} m'.format(np.sum(h)))
"""
Explanation: The experiment succeeds; if we examine the result for $h$, we see that every formula produces exactly the same value in each step. We have a straightforward method to compute section forces and rod length for a prescribed traction and Hamiltonian.
End of explanation
"""
kappa = M / EI
phi = np.zeros(nEdges)
rotor = np.zeros(nEdges) + 1j*np.zeros(nEdges)
for n in range(1, nEdges):
phi[n] = 2.*np.arctan(kappa[n]*(h[n-1] + h[n])/4.)
rotor[n] = (4./(h[n-1] + h[n]) + 1j * kappa[n])/(4./(h[n-1] + h[n]) - 1j * kappa[n])
np.sum(phi)/2
phi[0] = theta0
rotor[0] = np.exp(1j*phi[0])
gamma = np.zeros(len(kappa)) + 1j*np.zeros(len(kappa))
gamma[0] = 0.+0j
gamma[1] = gamma[0] + h[0]*rotor[0]
for n in range(1, len(kappa)-1):
gamma[n+1] = gamma[n] + h[n]/h[n-1] * (gamma[n] - gamma[n-1]) * rotor[n]
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.plot(gamma.real, gamma.imag, color='b')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
print('Total rotation at the orbit: {:.5f}*pi rad'.format((theta[-1] - theta[0])/np.pi))
print('Total rotation computed with DDG: {:.5f}*pi rad'.format((np.sum(phi) - phi[0])/np.pi))
"""
Explanation: 4.2 Spatial case
The above equations may be expressed in terms of the components of the section forces and moments
- $\frac{1}{2}\Bigl( \frac{1}{GJ}(T)^2 + \frac{1}{EI_2}(M_2)^2 + \frac{1}{EI_3} (M_3)^2\Bigr) + N = constant$
- $N^2 + (Q_2)^2 + (Q_3)^2 = constant$
- $N\,T + Q_2\,M_2 + Q_3\,M_3 = constant$
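A direct transcription of these three spatial invariants (the function and argument names are mine; the stiffness arguments follow the notation above):

```python
def spatial_invariants(N, T, Q2, Q3, M2, M3, GJ, EI2, EI3):
    # complementary energy density plus axial force
    hamiltonian = 0.5*(T**2/GJ + M2**2/EI2 + M3**2/EI3) + N
    # squared modulus of the section forces
    force_sq = N**2 + Q2**2 + Q3**2
    # scalar product of section forces and moments
    moment_invariant = N*T + Q2*M2 + Q3*M3
    return hamiltonian, force_sq, moment_invariant
```

Evaluating these along any equilibrium solution should return three constant values.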
5 Kinematic relationships
We refer to the field of Discrete Differential Geometry. Hoffmann (2008) introduces the following concepts and definitions:
A discrete curve in $\mathbb{R}^n$ is a map $\gamma: I \in \mathbb{Z} \rightarrow \mathbb{R}^n$
The edge tangent vector is defined as the forward difference $\Delta\gamma_k := \gamma_{k+1} - \gamma_k$
A discrete curve is called parametrized by arc-length if $\|\Delta\gamma_k\| \equiv h = const. \neq 0$
5.1 Planar case
For planar curves we may work in the complex plane, i.e. $\gamma: I \in \mathbb{N} \rightarrow \mathbb{R}^2 \cong \mathbb{C}$.
The vertex tangent vector of a planar discrete curve is given by...
The curvature of an arc-length parametrized (planar) discrete curve $\gamma$ is given by $\kappa = \frac{2}{h} \tan\frac{\phi}{2}$
The curvature of a general planar discrete curve $\gamma$ is defined at every vertex as $\kappa_n = \frac{4}{h_{n-1}+h_n} \tan\frac{\phi_n}{2}$
The key result for us is the following:
The discrete curvature function $\kappa$ determines a parametrized discrete curve up to an Euclidean motion. The rotation at every vertex relates the normalized edge segments
$$\frac{\Delta\gamma_n}{h_n} = \frac{\Delta\gamma_{n-1}}{h_{n-1}} e^{i \phi_n}$$
The curve is then determined by the recurrent relation
$$\gamma_{n+1} = \gamma_n + \frac{h_n}{h_{n-1}} \Delta\gamma_{n-1} \, e^{i \phi_n}$$
Note that the rotation operator transforming the direction of the discrete curve at every vertex can be expressed in terms of the curvature at the vertex through the following (exact) quotient
$$e^{i \phi_n} = \frac{1 + i \tan(\phi_n/2)}{1 - i \tan(\phi_n/2)} = \frac{\frac{4}{h_{n-1}+h_n} + i \kappa_n}{\frac{4}{h_{n-1}+h_n} - i \kappa_n}$$
With this expression we may compute the elastica in a straightforward manner once we have prescribed key values. Let's return to the pinned rod...
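As a sanity check of the rotor formula before applying it to the elastica: a discrete curve with constant edge length and constant vertex curvature should close up into a regular polygon. A sketch (function name and test data are mine):

```python
import numpy as np

def reconstruct(kappa, h, t0=1.0+0.0j):
    # rebuild a planar discrete curve from vertex curvatures and edge lengths
    n = len(h)
    gamma = np.zeros(n + 1, dtype=complex)
    t = t0                                   # unit direction of the first edge
    gamma[1] = gamma[0] + h[0]*t
    for k in range(1, n):
        hs = h[k-1] + h[k]
        # the exact rotor quotient from the text
        t = t*(4.0/hs + 1j*kappa[k])/(4.0/hs - 1j*kappa[k])
        gamma[k+1] = gamma[k] + h[k]*t
    return gamma

# 12 unit edges turning pi/6 at each vertex -> a closed regular 12-gon
h = np.ones(12)
kappa = np.full(12, 2.0*np.tan(np.pi/12))    # kappa = (2/h)*tan(phi/2)
gamma = reconstruct(kappa, h)
```

The reconstructed curve closes to machine precision, and every edge keeps unit length.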
End of explanation
"""
EI = 5000. #kN m^2
H = 3600. #kN m/m
F = -4600. #kN
nEdges = 200
nVertex = nEdges + 1
theta0 = np.arccos(H/F)
theta = np.linspace(theta0, -theta0, nVertex)
N = F*np.cos(theta)
Q = -F*np.sin(theta)
M = -np.sqrt(2*EI*(H - N))
h = np.zeros(nEdges)
h[:] = -2*(M[1:] - M[0:-1])/(Q[0:-1] + Q[1:])  # vectorized over all edges
print('The length of the rod is {:.3f} m'.format(np.sum(h)))
kappa = M / EI
phi = np.zeros(nVertex-1)
rotor = np.zeros(nVertex-1) + 1j*np.zeros(nVertex-1)
phi[0] = theta0
phi[1:] = 2.*np.arctan(kappa[1:-1]*(h[0:-1] + h[1:])/4.)
rotor[0] = np.exp(1j*phi[0])
rotor[1:] = (4./(h[0:-1] + h[1:]) + 1j * kappa[1:-1])/(4./(h[0:-1] + h[1:]) - 1j * kappa[1:-1])
gamma = np.zeros(nVertex) + 1j*np.zeros(nVertex)
gamma[0] = 0.+0j
gamma[1] = gamma[0] + h[0]*rotor[0]
for n in range(1, nVertex-1):
gamma[n+1] = gamma[n] + h[n]/h[n-1] * (gamma[n] - gamma[n-1]) * rotor[n]
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.plot(gamma.real, gamma.imag, color='b')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
print('Total rotation at the orbit: {:.5f}*pi rad'.format((theta[-1] - theta[0])/np.pi))
print('Total rotation computed with DDG: {:.5f}*pi rad'.format((np.sum(phi) - phi[0])/np.pi))
"""
Explanation: Let's pack the code into a single tool and do a new experiment:
End of explanation
"""
EI = 5000. #kN m^2
H = 3600. #kN m/m
F = -3500. #kN
nEdges = 50
nVertex = nEdges + 1
if abs(F) > H:  # arccos(H/F) is only defined when |H/F| <= 1
theta0 = np.arccos(H/F)
else:
theta0 = np.pi
theta = np.linspace(theta0, -theta0, nVertex)
N = F*np.cos(theta)
Q = -F*np.sin(theta)
M = -np.sqrt(2*EI*(H - N))
h = np.zeros(nEdges)
h[:] = -2*(M[1:] - M[0:-1])/(Q[0:-1] + Q[1:])
kappa = M / EI
phi = np.zeros(nVertex-1)
rotor = np.zeros(nVertex-1) + 1j*np.zeros(nVertex-1)
phi[0] = theta0
phi[1:] = 2.*np.arctan(kappa[1:-1]*(h[0:-1] + h[1:])/4.)
rotor[0] = np.exp(1j*phi[0])
rotor[1:] = (4./(h[0:-1] + h[1:]) + 1j * kappa[1:-1])/(4./(h[0:-1] + h[1:]) - 1j * kappa[1:-1])
gamma = np.zeros(nVertex) + 1j*np.zeros(nVertex)
gamma[0] = 0.+0j
gamma[1] = gamma[0] + h[0]*rotor[0]
for n in range(1, nVertex-1):
gamma[n+1] = gamma[n] + h[n]/h[n-1] * (gamma[n] - gamma[n-1]) * rotor[n]
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.plot(gamma.real, gamma.imag, color='b')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
print('Total rotation at the orbit: {:.5f}*pi rad'.format((theta[-1] - theta[0])/np.pi))
print('Total rotation computed with DDG: {:.5f}*pi rad'.format((np.sum(phi) - phi[0])/np.pi))
"""
Explanation: We make a new experiment, this time with $|F|<\mathcal{H}$, which means that there are non-zero moments at the end sections
End of explanation
"""
EI = 5000. #kN m^2
H = 3600. #kN m/m
F = -3500. #kN
nEdges = 50
nVertex = nEdges + 1
if abs(F) > H:  # arccos(H/F) is only defined when |H/F| <= 1
theta0 = np.arccos(H/F)
else:
theta0 = np.pi
theta = np.linspace(theta0, -theta0, nVertex)
N = F*np.cos(theta)
Q = -F*np.sin(theta)
M = -np.sqrt(2*EI*(H - N))
h = np.zeros(nEdges)
h[:] = -2*(M[1:] - M[0:-1])/(Q[0:-1] + Q[1:])
kappa = M / EI
phi = np.zeros(nVertex-1)
rotor = np.zeros(nVertex-1) + 1j*np.zeros(nVertex-1)
phi[0] = theta0 + np.arctan(h[0]*kappa[0]/2) # modified code line
phi[1:] = 2.*np.arctan(kappa[1:-1]*(h[0:-1] + h[1:])/4.)
rotor[0] = np.exp(1j*phi[0])
rotor[1:] = (4./(h[0:-1] + h[1:]) + 1j * kappa[1:-1])/(4./(h[0:-1] + h[1:]) - 1j * kappa[1:-1])
gamma = np.zeros(nVertex) + 1j*np.zeros(nVertex)
gamma[0] = 0.+0j
gamma[1] = gamma[0] + h[0]*rotor[0]
for n in range(1, nVertex-1):
gamma[n+1] = gamma[n] + h[n]/h[n-1] * (gamma[n] - gamma[n-1]) * rotor[n]
fig = plt.figure(figsize=(9,9))
ax = fig.gca(aspect='equal')
ax.plot(gamma.real, gamma.imag, color='b')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
print('The y coordinate at the end section is now y = {:.7f} m'.format(gamma[-1].imag))
"""
Explanation: It is easy to observe that the solution is "rotated" compared to the expected one. The reason for this behavior is that no rotation has been considered at the start node, even though the non-zero curvature at the start section induces one. Let's modify the code to fix this issue. For this purpose we will add to the initial (reference) angle half of the angle corresponding to the curvature at the start section, taking $h_0$ as the averaging length for the curvature:
$$\kappa_0 = \frac{2}{h_0} \tan\biggl(\frac{\phi_0^*}{2}\biggr)$$
$$\Delta\phi_0 = \frac{\phi_0^*}{2} = \arctan\biggl(\frac{h_0}{2} \kappa_0\biggr)$$
End of explanation
"""
|
martinjrobins/hobo | examples/toy/model-simple-harmonic-oscillator.ipynb | bsd-3-clause | import pints
import pints.toy
import matplotlib.pyplot as plt
import numpy as np
model = pints.toy.SimpleHarmonicOscillatorModel()
"""
Explanation: Simple Harmonic Oscillator model
This example shows how the Simple Harmonic Oscillator model can be used.
A model for a particle undergoing Newtonian dynamics that experiences a force in proportion to its displacement from an equilibrium position, and, in addition, a friction force. The motion of the particle can be determined by solving a second order ordinary differential equation (from Newton's $F = ma$):
$$\frac{d^2y}{dt^2} = -y(t) - \theta \frac{dy(t)}{dt}.$$
where $y(t)$ is the particle's displacement and $\theta$ is the friction (damping) coefficient.
End of explanation
"""
times = np.linspace(0, 50, 1000)
parameters = model.suggested_parameters()
values = model.simulate(parameters, times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
"""
Explanation: Parameters are given in the order $(y(0), dy/dt(0), \theta)$. Here, we see that, since $\theta > 0$, the oscillations of the particle decay exponentially over time.
End of explanation
"""
parameters = model.suggested_parameters()
values = model.simulate([1, 0, 2], times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
"""
Explanation: Substituting an exponential solution of the form: $y(t) = Ae^{\lambda t}$ into the governing ODE, we obtain: $\lambda^2 + \theta \lambda + 1=0$, which has solutions:
$$\lambda = (-\theta \pm \sqrt{\theta^2 - 4})/2.$$
As we can see above, if $\theta^2 < 4$, i.e. if $-2<\theta<2$, $\lambda$ has an imaginary part, which causes the solution to oscillate sinusoidally whilst decaying to zero.
If $\theta = 2$, we get critically damped dynamics, where the displacement decays exponentially to zero rather than oscillating.
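The two roots can be computed directly for any $\theta$; a small helper (the function name is mine):

```python
import cmath

def decay_rates(theta):
    # roots of lambda**2 + theta*lambda + 1 = 0 (possibly complex)
    r = cmath.sqrt(theta*theta - 4.0)
    return (-theta + r)/2.0, (-theta - r)/2.0
```

For $\theta = 2$ both roots coincide at $-1$; for $\theta = 1$ they are complex with real part $-1/2$ and unit modulus.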
End of explanation
"""
parameters = model.suggested_parameters()
values = model.simulate([1, 0, 5], times)
plt.figure(figsize=(15,2))
plt.xlabel('t')
plt.ylabel(r'$y$ (Displacement)')
plt.plot(times, values)
plt.show()
"""
Explanation: If $\theta > 2$, we get overdamped dynamics: the motion is non-oscillatory, and the dominant mode decays to zero more slowly than in the critically damped case.
End of explanation
"""
values, sensitivities = model.simulateS1(parameters, times)
"""
Explanation: This model also provides sensitivities: derivatives $\frac{\partial y}{\partial p}$ of the output $y$ with respect to the parameters $p$.
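These sensitivities can be cross-checked against central finite differences; a generic sketch, independent of pints internals:

```python
import numpy as np

def fd_sensitivity(f, params, eps=1e-6):
    # d(output)/d(parameter) by central differences; f maps params -> 1-D array
    params = np.asarray(params, dtype=float)
    cols = []
    for i in range(params.size):
        up, dn = params.copy(), params.copy()
        up[i] += eps
        dn[i] -= eps
        cols.append((f(up) - f(dn))/(2.0*eps))
    return np.column_stack(cols)
```

Calling `fd_sensitivity(lambda p: model.simulate(p, times), parameters)` should roughly reproduce the columns returned by `simulateS1`.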
End of explanation
"""
plt.figure(figsize=(15,7))
plt.subplot(3, 1, 1)
plt.ylabel(r'$\partial y/\partial y(0)$')
plt.plot(times, sensitivities[:, 0])
plt.subplot(3, 1, 2)
plt.xlabel('t')
plt.ylabel(r'$\partial y/\partial \dot y(0)$')
plt.plot(times, sensitivities[:, 1])
plt.subplot(3, 1, 3)
plt.xlabel('t')
plt.ylabel(r'$\partial y/\partial \theta$')
plt.plot(times, sensitivities[:, 2])  # third column: sensitivity w.r.t. theta
plt.show()
"""
Explanation: We can plot these sensitivities, to see where the model is sensitive to each of the parameters:
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/prod/.ipynb_checkpoints/n09_dyna_10000_states_full_training-checkpoint.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = -1 # 252*4 + 28
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_train_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=20,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
"""
Explanation: In this notebook a simple Q-learner will be trained and evaluated. The Q-learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
"""
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 7
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
"""
Explanation: Let's show the symbols data, to see how good the recommender has to be.
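For reference, value_eval presumably reports something close to the usual definitions of these metrics; a stand-alone sketch (the annualization factor and details are assumptions, not necessarily what the course code uses):

```python
import numpy as np
import pandas as pd

def basic_metrics(prices, samples_per_year=252):
    dret = prices.pct_change().dropna()              # daily returns
    sharpe = np.sqrt(samples_per_year)*dret.mean()/dret.std()
    cum_ret = prices.iloc[-1]/prices.iloc[0] - 1.0   # cumulative return
    return sharpe, cum_ret, dret.mean(), dret.std()
```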
End of explanation
"""
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the Q-learner can learn from past samples in the test set without compromising causality).
End of explanation
"""
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
"""
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
import pickle
with open('../../data/dyna_10000_states_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
"""
Explanation: What are the metrics for "holding the position"?
End of explanation
"""
|
Featuretools/featuretools | docs/source/getting_started/afe.ipynb | bsd-3-clause | import featuretools as ft
es = ft.demo.load_mock_customer(return_entityset=True)
es
"""
Explanation: Deep Feature Synthesis
Deep Feature Synthesis (DFS) is an automated method for performing feature engineering on relational and temporal data.
Input Data
Deep Feature Synthesis requires structured datasets in order to perform feature engineering. To demonstrate the capabilities of DFS, we will use a mock customer transactions dataset.
End of explanation
"""
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="customers",
agg_primitives=["count"],
trans_primitives=["month"],
max_depth=1)
feature_matrix
"""
Explanation: Once data is prepared as an EntitySet, we are ready to automatically generate features for a target dataframe (e.g. customers).
Running DFS
Typically, without automated feature engineering, a data scientist would write code to aggregate data for a customer, and apply different statistical functions resulting in features quantifying the customer's behavior. In this example, an expert might be interested in features such as: total number of sessions or month the customer signed up.
These features can be generated by DFS when we specify the target_dataframe as customers and "count" and "month" as primitives.
End of explanation
"""
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="customers",
agg_primitives=["mean", "sum", "mode"],
trans_primitives=["month", "hour"],
max_depth=2)
feature_matrix
"""
Explanation: In the example above, "count" is an aggregation primitive because it computes a single value based on many sessions related to one customer. "month" is called a transform primitive because it takes one value for a customer and transforms it into another.
Creating "Deep Features"
The name Deep Feature Synthesis comes from the algorithm's ability to stack primitives to generate more complex features. Each time we stack a primitive we increase the "depth" of a feature. The max_depth parameter controls the maximum depth of the features returned by DFS. Let us try running DFS with max_depth=2
End of explanation
"""
feature_matrix[['MEAN(sessions.SUM(transactions.amount))']]
"""
Explanation: With a depth of 2, a number of features are generated using the supplied primitives. The algorithm to synthesize these definitions is described in this paper. In the returned feature matrix, let us understand one of the depth 2 features
End of explanation
"""
feature_matrix[['MODE(sessions.HOUR(session_start))']]
"""
Explanation: For each customer this feature
calculates the sum of all transaction amounts per session to get total amount per session,
then applies the mean to the total amounts across multiple sessions to identify the average amount spent per session
We call this feature a "deep feature" with a depth of 2.
Let's look at another depth 2 feature that calculates for every customer the most common hour of the day when they start a session
End of explanation
"""
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_dataframe_name="sessions",
agg_primitives=["mean", "sum", "mode"],
trans_primitives=["month", "hour"],
max_depth=2)
feature_matrix.head(5)
"""
Explanation: For each customer this feature calculates
The hour of the day each of his or her sessions started, then
uses the statistical function mode to identify the most common hour he or she started a session
Stacking results in features that are more expressive than individual primitives themselves. This enables the automatic creation of complex patterns for machine learning.
Changing Target DataFrame
DFS is powerful because we can create a feature matrix for any dataframe in our dataset. If we switch our target dataframe to "sessions", we can synthesize features for each session instead of each customer. Now, we can use these features to predict the outcome of a session.
End of explanation
"""
feature_matrix[['customers.MEAN(transactions.amount)']].head(5)
"""
Explanation: As we can see, DFS will also build deep features based on a parent dataframe, in this case the customer of a particular session. For example, the feature below calculates the mean transaction amount of the customer of the session.
End of explanation
"""
|
caromedellin/Python-notes | exploratory-data-analysis/state example.ipynb | mit | import pandas as pd
import numpy as np
my_data = pd.DataFrame({'thrid': [1, 2, 3]})  # the 'thrid' column is used below
"""
Explanation: Example of dimensionality reduction
End of explanation
"""
def to_binary(value):
return "{0:b}".format(value)
to_binary(5)
unique_vals = my_data.thrid.unique()
"""
Explanation: Convert an integer into a binary string
End of explanation
"""
my_dict = {}
for index,val in enumerate(unique_vals):
my_dict[val] = to_binary(index)
my_data["thrid_binary"] = my_data.apply(lambda x: my_dict[x.thrid], axis = 1)
"""
Explanation: We apply this to the data.
We use enumerate to loop over the unique values, which gives both the index position and the value.
We then build a dictionary that maps each unique value to the binary representation of its index, and use it to add an encoded column to the frame.
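Putting the pieces together, the whole encoding fits in a few lines; a self-contained sketch (the toy column values are mine):

```python
import pandas as pd

def to_binary(value):
    return "{0:b}".format(value)

df = pd.DataFrame({"thrid": ["a", "b", "c", "a"]})
# map each unique value to the binary string of its first-appearance index
mapping = {val: to_binary(i) for i, val in enumerate(df["thrid"].unique())}
df["thrid_binary"] = df["thrid"].map(mapping)
```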
End of explanation
"""
|
gfeiden/Notebook | Projects/senap/real_variables.ipynb | mit | import fileinput as fi
"""
Explanation: Identifying Hard Coded Single Precision Variables
In looking to merge MARCS with DSEP, it is immediately clear that the two codes are incompatible when it comes to passing variables. MARCS is written with single precision declarations for real variables and DSEP is written with double precision declarations. Type conversions can be carried out in several different ways, but perhaps it is worth upgrading MARCS to double precision. However, while it is simple enough to redefine real declarations to real(dp), where dp is a double precision type declaration, there are many instances of hard coded single precision variables. That is, variables defined with X.XeYYY instead of X.XdYYY. Finding such declarations is non-trivial.
End of explanation
"""
single_precision_vars = [line.rstrip('\n') for line in fi.input('marcs_single_precision.txt')]
"""
Explanation: An initial attempt to extract all instances of hard coded single precision real variables was done using grep and a regular expression,
```bash
grep -i "[0-9].e.[0-9]" *.f > marcs_single_precision.txt
```
where the -i flag makes the match case-insensitive. Let's read in the result.
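The same search can be reproduced in Python. Note the grep pattern above is deliberately loose (its dots are wildcards); a stricter sketch targeting literal constants might look like this (the pattern is my own variant, not an exact translation):

```python
import re

# digit, decimal point, optional digits, then an e/E exponent with optional sign
pattern = re.compile(r"\d\.\d*[eE][+-]?\d+")

src = [
    "      x = 1.5e-3",
    "      y = 2.0d0",          # already double precision
    "      z = 3.7E+02",
]
hits = [line for line in src if pattern.search(line)]
```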
End of explanation
"""
print single_precision_vars[:5]
"""
Explanation: Take a quick look at the first few lines,
End of explanation
"""
single_precision_vars = [line for line in single_precision_vars if line.lower().find('format') == -1]
"""
Explanation: Everything looks reasonable, but I know for a fact that scientific notation can also appear in FORMAT statements. Therefore, it's necessary to remove these from the entry list.
End of explanation
"""
print single_precision_vars[:5]
"""
Explanation: Make sure we haven't removed valid entries, as compared to the initial output.
End of explanation
"""
file_stream = open('single_precision_index.txt', 'w')
routine = ''
for line in single_precision_vars:
line = line.split(':')
if line[0] == routine:
file_stream.write('\t{:4s} :: {:50s}\n'.format(line[1].strip(), line[2].strip()))
else:
routine = line[0]
file_stream.write('\n\n{:s}\n\n'.format(line[0].rstrip('.f').upper()))
file_stream.write('\t{:4s} :: {:70s}\n'.format(line[1].strip(), line[2].strip()))
file_stream.close()
"""
Explanation: We can now organize the data by subroutine and output the lines on which a single precision real is expected to occur. There will be false positives, but those can be checked by eye.
End of explanation
"""
file_stream = open('single_precision_index.md', 'w')
routine = ''
for line in single_precision_vars:
line = line.split(':')
if line[0] == routine:
file_stream.write('\t{:4s} :: {:50s}\n'.format(line[1].strip(), line[2].strip()))
else:
routine = line[0]
file_stream.write('\n\n# {:s}\n\n'.format(line[0].rstrip('.f').upper()))
file_stream.write('\t{:4s} :: {:70s}\n'.format(line[1].strip(), line[2].strip()))
file_stream.close()
"""
Explanation: Markdown is also a helpful format.
End of explanation
"""
print "There are {:.0f} lines that need editing.".format(len(single_precision_vars))
"""
Explanation: Changing these can be done by a single person, but would perhaps benefit from multiple editors working on a subset of the declarations.
End of explanation
"""
N_lines = []
routine = ''
n = 0
for line in single_precision_vars:
line = line.split(':')
if line[0] == routine:
n += 1
else:
# output previous routine count
N_lines.append([routine, n])
n = 0
# start a new routine
routine = line[0]
n += 1
N_lines.append([routine, n])  # don't forget the final routine's count
N_lines.pop(0)  # drop the empty entry created on the first iteration
"""
Explanation: This means that, split among 3 people, it's approximately 96 lines of code per person. Some routines contain significantly more lines that require editing than others. We can create a histogram showing the number of lines per subroutine.
End of explanation
"""
N_lines = sorted(N_lines, key = lambda routine: routine[1])
print N_lines
print "\n \t There are {:.0f} subroutines that need editing.".format(len(N_lines))
"""
Explanation: Now we should aim to sort and then distribute routines to be modified.
End of explanation
"""
lines_in_routines = [line.split() for line in fi.input('marcs_routines_nlines.txt')]
lines_in_routines = [[x[1], int(x[0])] for x in lines_in_routines]
"""
Explanation: This is not too unreasonable for a single person, but additional checking for single precision declarations and functions would be beneficial. This requires that we also consider the total number of lines in each file that needs to be edited.
End of explanation
"""
total_lines = []
for routine in lines_in_routines:
n_edits = [x[1] for x in N_lines if x[0] == routine[0]]
if n_edits == []:
n_edits = 0
else:
n_edits = int(n_edits[0])
total_lines.append([routine[0], routine[1] + n_edits])
total_lines = sorted(total_lines, key = lambda routine: routine[1])
"""
Explanation: We now have a record for number of single precision statements to be changed and the number of lines in the entire routine.
End of explanation
"""
print "\t There are {:.0f} subroutines that need checking.".format(len(total_lines))
"""
Explanation: We've now combined the total number of lines in each file with the number of edits to fix single precision constants that need to be made. Of course, there are instances where function calls need to be verified, but one expects this is somewhat proportional to the number of lines in the entire file.
End of explanation
"""
person_1 = []
person_2 = []
person_3 = []
person_4 = []
for i, routine in enumerate(total_lines):
if i % 4 == 0:
person_1.append(routine)
elif i % 4 == 1:
person_2.append(routine)
elif i % 4 == 2:
person_3.append(routine)
elif i % 4 == 3:
person_4.append(routine)
else:
print "Whoops, misplaced", routine
"""
Explanation: If divided equally, that means each of 3 people will need to check 26 separate subroutines. However, these should be weighted so that each person searches roughly the same number of routines and the same number of lines.
End of explanation
"""
print len(person_1), len(person_2), len(person_3), len(person_4)
"""
Explanation: Now, let's confirm that everything looks about equal.
End of explanation
"""
for person in [person_1, person_2, person_3, person_4]:
print sum([x[1] for x in person])
"""
Explanation: and the number of lines of code
End of explanation
"""
print person_1
print
print person_2
print
print person_3
print
print person_4
person_2.append(person_4.pop(-5))
"""
Explanation: So that didn't work out quite as well as I had anticipated. Certainly person 2 will have considerably more work cut out for them, of order 1400 lines of code.
Looking at the routines in each set, it's possible to even up the 1st and 3rd sets in terms of lines of code by switching one entry with approx. 150 lines of code.
End of explanation
"""
for person in [person_1, person_2, person_3, person_4]:
print sum([x[1] for x in person])
"""
Explanation: Now, trying this again,
End of explanation
"""
for i, person in enumerate([person_1, person_2, person_3, person_4]):
text_stream = open('Person_{:02.0f}.txt'.format(i + 1), 'w')
mark_stream = open('Person_{:02.0f}.md'.format(i + 1), 'w')
for routine in person:
s = '{:20s} :: {:4.0f} lines \n'.format(routine[0], routine[1])
text_stream.write(s)
mark_stream.write('### ' + s)
text_stream.close()
mark_stream.close()
"""
Explanation: Much better!
From these lists, we can create simple markdown and text file to-do lists for each person.
End of explanation
"""
|
aaronmckinstry706/twitter-crime-prediction | notebooks/New Pipeline.ipynb | gpl-3.0 | import pyspark.sql as sql
ss = sql.SparkSession.builder.appName("TwitterTokenizing")\
.getOrCreate()
"""
Explanation: The New Pipeline
This is a rough draft of our new code. We're using PySpark's DataFrame and Pipeline API (for the most part) to re-implement what we've already done, and then move forward. It's been much more efficient (from a human-time-spent perspective; not necessarily from time/space complexity perspective) to use thus far.
Basics
End of explanation
"""
import pyspark.sql.types as types
tweets_schema = types.StructType([
types.StructField('id', types.LongType()),
types.StructField('timestamp', types.LongType()),
types.StructField('postalCode', types.StringType()),
types.StructField('lon', types.DoubleType()),
types.StructField('lat', types.DoubleType()),
types.StructField('tweet', types.StringType()),
types.StructField('user_id', types.LongType()),
types.StructField('application', types.StringType()),
types.StructField('source', types.StringType())
])
tweets_df = ss.read.csv('tweets2.csv',
escape='"',
header='true',
schema=tweets_schema,
mode='DROPMALFORMED')
tweets_df = tweets_df.drop('id') \
.drop('postalCode') \
.drop('user_id') \
.drop('application') \
.drop('source')
print('Dataframe columns:')
print(tweets_df.columns)
print('Sample row:')
print(tweets_df.take(1))
print('Number of tweets:')
print(tweets_df.count())
"""
Explanation: Importing Tweet Data
End of explanation
"""
import os
import sys
# Get the SparkContext from the SparkSession created above; addPyFile lives on it.
sc = ss.sparkContext

# From https://stackoverflow.com/a/36218558 .
def sparkImport(module_name, module_directory):
"""
Convenience function.
Tells the SparkContext sc (must already exist) to load
module module_name on every computational node before
executing an RDD.
Args:
module_name: the name of the module, without ".py".
module_directory: the path, absolute or relative, to
the directory containing module
module_Name.
Returns: none.
"""
module_path = os.path.abspath(
module_directory + "/" + module_name + ".py")
sc.addPyFile(module_path)
# Add all scripts from repository to local path.
# From https://stackoverflow.com/a/35273613 .
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import twokenize
sparkImport("twokenize", "..")
print ("Original tweet:")
example_tweet = u':( :( :( Incident on #VariousLocalExpressBuses SB from 5th Avenue:106th Street to 5th Avenue: 57th Street http://t.co/KrLOmkAqcE'
print(example_tweet)
print("Tokenized tweet:")
print(twokenize.tokenize(example_tweet))
"""
Explanation: Importing Tokenizer
End of explanation
"""
import pyspark.sql.functions as functions
sql_tokenize = functions.udf(
lambda tweet: twokenize.tokenize(tweet),
returnType=types.ArrayType(types.StringType()))
tweets_df = tweets_df \
.withColumn("tweet_tokens", sql_tokenize(tweets_df.tweet)) \
.drop('tweet')
print(tweets_df.columns)
print(tweets_df.take(1))
"""
Explanation: Tokenize the Data
End of explanation
"""
date_column = tweets_df['timestamp'].cast(types.TimestampType()) \
.cast(types.DateType())
tweets_df = tweets_df.withColumn('date', date_column) \
.drop('timestamp')
print(tweets_df.columns)
print(tweets_df.take(1))
import datetime
date_to_column = functions.lit(datetime.datetime(2016, 3, 3))
date_from_column = functions.lit(functions.date_sub(date_to_column, 31))
filtered_tweets_df = tweets_df.filter(
~(tweets_df.date < date_from_column)
& (tweets_df.date < date_to_column))
print(filtered_tweets_df.count())
print(filtered_tweets_df.take(1))
"""
Explanation: Filter by Date
End of explanation
"""
import grid
sparkImport('grid', '..')
# Southwest corner of New York:
# lat = 40.488320, lon = -74.290739
# Northeast corner of New York:
# lat = 40.957189, lon = -73.635679
latlongrid = grid.LatLonGrid(
lat_min=40.488320,
lat_max=40.957189,
lon_min=-74.290739,
lon_max=-73.635679,
lat_step=grid.get_lon_delta(1000, (40.957189 - 40.488320)/2.0),
lon_step=grid.get_lat_delta(1000))
print(latlongrid.lat_grid_dimension)
print(latlongrid.lon_grid_dimension)
# The only way to group elements and get a set of data (as far as I know) is by converting the DataFrame into an RDD.
import operator
row_to_index_tokens = lambda row: (
latlongrid.grid_square_index(lat=row['lat'], lon=row['lon']),
row['tweet_tokens'])
filtered_tweets_rdd = filtered_tweets_df.rdd
tokens_by_grid_square = filtered_tweets_rdd.map(row_to_index_tokens) \
.reduceByKey(operator.concat)
print(tokens_by_grid_square.count())
print(tokens_by_grid_square.first())
"""
Explanation: Group by Grid Square
End of explanation
"""
tokens_grid_schema = types.StructType([
types.StructField('grid_square', types.IntegerType()),
types.StructField('tokens', types.ArrayType(types.StringType()))
])
tokens_grid_df = ss.createDataFrame(tokens_by_grid_square, schema=tokens_grid_schema)
print(tokens_grid_df.count())
print(tokens_grid_df.take(1))
import pyspark.ml.feature as feature
import pyspark.ml as ml
import pyspark.ml.clustering as clustering
count_vectorizer = feature.CountVectorizer(inputCol='tokens', outputCol='token_frequencies')
lda = clustering.LDA().setFeaturesCol('token_frequencies').setK(10).setTopicDistributionCol('topic_distributions')
pipeline = ml.Pipeline(stages=[count_vectorizer, lda])
lda_model = pipeline.fit(tokens_grid_df)
topic_distributions = lda_model.transform(tokens_grid_df)
print(topic_distributions.count())
print(topic_distributions.take(1))
"""
Explanation: Perform LDA on Grid Squares
End of explanation
"""
|
simonsfoundation/CaImAn | demos/notebooks/demo_pipeline_cnmfE.ipynb | gpl-2.0 | try:
get_ipython().magic(u'load_ext autoreload')
get_ipython().magic(u'autoreload 2')
get_ipython().magic(u'matplotlib qt')
except:
pass
import logging
import matplotlib.pyplot as plt
import numpy as np
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.DEBUG)
import caiman as cm
from caiman.source_extraction import cnmf
from caiman.utils.utils import download_demo
from caiman.utils.visualization import inspect_correlation_pnr, nb_inspect_correlation_pnr
from caiman.motion_correction import MotionCorrect
from caiman.source_extraction.cnmf import params as params
from caiman.utils.visualization import plot_contours, nb_view_patches, nb_plot_contour
import cv2
try:
cv2.setNumThreads(0)
except:
pass
import bokeh.plotting as bpl
import holoviews as hv
bpl.output_notebook()
hv.notebook_extension('bokeh')
"""
Explanation: Pipeline for microendoscopic data processing in CaImAn using the CNMF-E algorithm
This demo presents a complete pipeline for processing microendoscopic data using CaImAn. It includes:
- Motion Correction using the NoRMCorre algorithm
- Source extraction using the CNMF-E algorithm
- Deconvolution using the OASIS algorithm
Some basic visualization is also included. The demo illustrates how to use the params, MotionCorrect, and cnmf objects for processing 1p microendoscopic data. For processing two-photon data consult the related demo_pipeline.ipynb demo. For more information see the companion CaImAn paper.
End of explanation
"""
fnames = ['data_endoscope.tif'] # filename to be processed
fnames = [download_demo(fnames[0])]
"""
Explanation: Select file(s) to be processed
The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fnames variable as a list. Note that the memory requirement of the CNMF-E algorithm are much higher compared to the standard CNMF algorithm. Test the limits of your system before trying to process very large amounts of data.
End of explanation
"""
#%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened)
if 'dview' in locals():
cm.stop_server(dview=dview)
c, dview, n_processes = cm.cluster.setup_cluster(
backend='local', n_processes=None, single_thread=False)
"""
Explanation: Setup a cluster
To enable parallel processing, a (local) cluster needs to be set up. This is done in the cell below. The variable backend determines the type of cluster used. The default value 'local' uses the multiprocessing package. The ipyparallel option is also available. More information on these choices can be found here. The resulting variable dview expresses the cluster option. If you use dview=dview in the downstream analysis then parallel processing will be used. If you use dview=None then no parallel processing will be employed.
End of explanation
"""
# dataset dependent parameters
frate = 10 # movie frame rate
decay_time = 0.4 # length of a typical transient in seconds
# motion correction parameters
motion_correct = True # flag for performing motion correction
pw_rigid = False # flag for performing piecewise-rigid motion correction (otherwise just rigid)
gSig_filt = (3, 3) # size of high pass spatial filtering, used in 1p data
max_shifts = (5, 5) # maximum allowed rigid shift
strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels
overlaps = (24, 24)  # overlap between patches (size of patch = strides + overlaps)
max_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts
border_nan = 'copy' # replicate values along the boundaries
mc_dict = {
'fnames': fnames,
'fr': frate,
'decay_time': decay_time,
'pw_rigid': pw_rigid,
'max_shifts': max_shifts,
'gSig_filt': gSig_filt,
'strides': strides,
'overlaps': overlaps,
'max_deviation_rigid': max_deviation_rigid,
'border_nan': border_nan
}
opts = params.CNMFParams(params_dict=mc_dict)
"""
Explanation: Setup some parameters
We first set some parameters related to the data and motion correction and create a params object. We'll modify this object with additional settings later on. You can also set all the parameters at once as demonstrated in the demo_pipeline.ipynb notebook.
End of explanation
"""
if motion_correct:
# do motion correction rigid
mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion'))
mc.motion_correct(save_movie=True)
fname_mc = mc.fname_tot_els if pw_rigid else mc.fname_tot_rig
if pw_rigid:
        bord_px = np.ceil(np.maximum(np.max(np.abs(mc.x_shifts_els)),
                                     np.max(np.abs(mc.y_shifts_els)))).astype(int)
    else:
        bord_px = np.ceil(np.max(np.abs(mc.shifts_rig))).astype(int)
plt.subplot(1, 2, 1); plt.imshow(mc.total_template_rig) # % plot template
plt.subplot(1, 2, 2); plt.plot(mc.shifts_rig) # % plot rigid shifts
plt.legend(['x shifts', 'y shifts'])
plt.xlabel('frames')
plt.ylabel('pixels')
    bord_px = 0 if border_nan == 'copy' else bord_px
fname_new = cm.save_memmap(fname_mc, base_name='memmap_', order='C',
border_to_0=bord_px)
else: # if no motion correction just memory map the file
fname_new = cm.save_memmap(fnames, base_name='memmap_',
order='C', border_to_0=0, dview=dview)
"""
Explanation: Motion Correction
The background signal in micro-endoscopic data is very strong and makes the motion correction challenging.
As a first step the algorithm performs a high pass spatial filtering with a Gaussian kernel to remove the bulk of the background and enhance spatial landmarks.
The size of the kernel is given from the parameter gSig_filt. If this is left to the default value of None then no spatial filtering is performed (default option, used in 2p data).
After spatial filtering, the NoRMCorre algorithm is used to determine the motion in each frame. The inferred motion is then applied to the original data so no information is lost.
The motion corrected files are saved in memory mapped format. If no motion correction is being performed, then the file gets directly memory mapped.
End of explanation
"""
# load memory mappable file
Yr, dims, T = cm.load_memmap(fname_new)
images = Yr.T.reshape((T,) + dims, order='F')
"""
Explanation: Load memory mapped file
End of explanation
"""
# parameters for source extraction and deconvolution
p = 1 # order of the autoregressive system
K = None # upper bound on number of components per patch, in general None
gSig = (3, 3) # gaussian width of a 2D gaussian kernel, which approximates a neuron
gSiz = (13, 13) # average diameter of a neuron, in general 4*gSig+1
Ain = None # possibility to seed with predetermined binary masks
merge_thr = .7 # merging threshold, max correlation allowed
rf = 40 # half-size of the patches in pixels. e.g., if rf=40, patches are 80x80
stride_cnmf = 20 # amount of overlap between the patches in pixels
                            # (keep it at least as large as gSiz, i.e. 4 times the neuron size gSig)
tsub = 2 # downsampling factor in time for initialization,
# increase if you have memory problems
ssub = 1 # downsampling factor in space for initialization,
# increase if you have memory problems
# you can pass them here as boolean vectors
low_rank_background = None # None leaves background of each patch intact,
# True performs global low-rank approximation if gnb>0
gnb = 0 # number of background components (rank) if positive,
# else exact ring model with following settings
# gnb= 0: Return background as b and W
# gnb=-1: Return full rank background B
# gnb<-1: Don't return background
nb_patch = 0 # number of background components (rank) per patch if gnb>0,
# else it is set automatically
min_corr = .8 # min peak value from correlation image
min_pnr = 10        # min peak to noise ratio from PNR image
ssub_B = 2 # additional downsampling factor in space for background
ring_size_factor = 1.4 # radius of ring is gSiz*ring_size_factor
opts.change_params(params_dict={'method_init': 'corr_pnr', # use this for 1 photon
'K': K,
'gSig': gSig,
'gSiz': gSiz,
'merge_thr': merge_thr,
'p': p,
'tsub': tsub,
'ssub': ssub,
'rf': rf,
'stride': stride_cnmf,
'only_init': True, # set it to True to run CNMF-E
'nb': gnb,
'nb_patch': nb_patch,
'method_deconvolution': 'oasis', # could use 'cvxpy' alternatively
'low_rank_background': low_rank_background,
'update_background_components': True, # sometimes setting to False improve the results
'min_corr': min_corr,
'min_pnr': min_pnr,
'normalize_init': False, # just leave as is
'center_psf': True, # leave as is for 1 photon
'ssub_B': ssub_B,
'ring_size_factor': ring_size_factor,
'del_duplicates': True, # whether to remove duplicates from initialization
                                'border_pix': bord_px})                # number of pixels to not consider in the borders
"""
Explanation: Parameter setting for CNMF-E
We now define some parameters for the source extraction step using the CNMF-E algorithm.
We construct a new dictionary and use this to modify the existing params object,
End of explanation
"""
# compute some summary images (correlation and peak to noise)
cn_filter, pnr = cm.summary_images.correlation_pnr(images[::1], gSig=gSig[0], swap_dim=False) # change swap_dim if output looks weird; it is a problem with tifffile
# inspect the summary images and set the parameters
nb_inspect_correlation_pnr(cn_filter, pnr)
"""
Explanation: Inspect summary images and set parameters
Check the optimal values of min_corr and min_pnr by moving slider in the figure that pops up. You can modify them in the params object.
Note that computing the correlation pnr image can be computationally and memory demanding for large datasets. In this case you can compute
only on a subset of the data (the results will not change). You can do that by changing images[::1] to images[::5] or something similar.
This will compute the correlation pnr image
End of explanation
"""
# print parameters set above, modify them if necessary based on summary images
print(min_corr) # min correlation of peak (from correlation image)
print(min_pnr) # min peak to noise ratio
"""
Explanation: You can inspect the correlation and PNR images to select the threshold values for min_corr and min_pnr. The algorithm will look for components only in places where these values are above the specified thresholds. You can adjust the dynamic range in the plots shown above by choosing the selection tool (third button from the left) and selecting the desired region in the histogram plots on the right of each panel.
End of explanation
"""
cnm = cnmf.CNMF(n_processes=n_processes, dview=dview, Ain=Ain, params=opts)
cnm.fit(images)
"""
Explanation: Run the CNMF-E algorithm
End of explanation
"""
# cnm1 = cnmf.CNMF(n_processes, params=opts, dview=dview)
# cnm1.fit_file(motion_correct=motion_correct)
"""
Explanation: Alternate way to run the pipeline at once
It is possible to run the combined steps of motion correction, memory mapping, and cnmf fitting in one step as shown below. The command is commented out since the analysis has already been performed. It is recommended that you familiarize yourself with the various steps and their results before using it.
End of explanation
"""
#%% COMPONENT EVALUATION
# the components are evaluated in three ways:
# a) the shape of each component must be correlated with the data
# b) a minimum peak SNR is required over the length of a transient
# c) each shape passes a CNN based classifier
min_SNR = 3 # adaptive way to set threshold on the transient size
r_values_min = 0.85 # threshold on space consistency (if you lower more components
# will be accepted, potentially with worst quality)
cnm.params.set('quality', {'min_SNR': min_SNR,
'rval_thr': r_values_min,
'use_cnn': False})
cnm.estimates.evaluate_components(images, cnm.params, dview=dview)
print(' ***** ')
print('Number of total components: ', len(cnm.estimates.C))
print('Number of accepted components: ', len(cnm.estimates.idx_components))
"""
Explanation: Component Evaluation
The processing in patches creates several spurious components. These are filtered out by evaluating each component using three different criteria:
the shape of each component must be correlated with the data at the corresponding location within the FOV
a minimum peak SNR is required over the length of a transient
each shape passes a CNN based classifier
<img src="../../docs/img/evaluationcomponent.png"/>
After setting some parameters we again modify the existing params object.
End of explanation
"""
#%% plot contour plots of accepted and rejected components
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components)
"""
Explanation: Do some plotting
End of explanation
"""
# accepted components
cnm.estimates.hv_view_components(img=cn_filter, idx=cnm.estimates.idx_components,
denoised_color='red', cmap='gray')
# rejected components
cnm.estimates.hv_view_components(img=cn_filter, idx=cnm.estimates.idx_components_bad,
denoised_color='red', cmap='gray')
"""
Explanation: View traces of accepted and rejected components. Note that if you get a data rate error, you can start Jupyter notebooks using:
'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10'
End of explanation
"""
cm.stop_server(dview=dview)
"""
Explanation: Stop cluster
End of explanation
"""
# with background
cnm.estimates.play_movie(images, q_max=99.5, magnification=2,
include_bck=True, gain_res=10, bpx=bord_px)
# without background
cnm.estimates.play_movie(images, q_max=99.9, magnification=2,
include_bck=False, gain_res=4, bpx=bord_px)
"""
Explanation: Some instructive movies
Play the reconstructed movie alongside the original movie and the (amplified) residual
End of explanation
"""
|
IST256/learn-python | content/lessons/09-Dictionaries/HW-Dictionaries.ipynb | mit | import requests
import json
file='US-Senators.json'
senators = requests.get('https://www.govtrack.us/api/v2/role?current=true&role_type=senator').json()['objects']
with open(file,'w') as f:
f.write(json.dumps(senators))
print(f"Saved: {file}")
"""
Explanation: Homework: US Senator Lookup
The Problem
Let's write a program similar to this unit's End-To-End Example. Instead of European countries this program will provide a drop-down of US states. When a state is selected, the program should display the US senators for that state.
What information should you display? Here is a sample of the 2 senators from the State of NY:
## Sen. Charles “Chuck” Schumer [D-NY]
Senior Senator for New York
PARTY: Democrat
PHONE: 202-224-6542
WEBSITE: https://www.schumer.senate.gov
CONTACT: https://www.schumer.senate.gov/contact/email-chuck
## Sen. Kirsten Gillibrand [D-NY]
Junior Senator for New York
PARTY: Democrat
PHONE: 202-224-4451
WEBSITE: https://www.gillibrand.senate.gov
CONTACT: https://www.gillibrand.senate.gov/contact/email-me
HINTS:
Everything you will display for a senator can be found in the dictionary for that senator. Look at the keys available for a single senator as reference.
You will need to make a list of unique states from the senators.json file, similar to how you approached the problem in last week's homework for product categories.
This Code will fetch the current US Senators from the web and save the results to a US-Senators.json file.
End of explanation
"""
# Step 2: Write code here
"""
Explanation: Part 1: Problem Analysis
Inputs:
TODO: Inputs
Outputs:
TODO: Outputs
Algorithm (Steps in Program):
```
TODO:Steps Here
```
Part 2: Code Solution
You may write your code in several cells, but place the complete, final working copy of your code solution within this single cell below. Only the code within this cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
End of explanation
"""
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
"""
Explanation: Part 3: Questions
What are the advantages of using a dictionary for this information instead of a delimited file like in the previous homework?
--== Double-Click and Write Your Answer Below This Line ==--
How easy would it be to write a similar program for World Leaders? Or College Professors? Or NBA Players? What is different about each case?
--== Double-Click and Write Your Answer Below This Line ==--
Explain your approach to figuring out which dictionary keys you needed to complete the program.
--== Double-Click and Write Your Answer Below This Line ==--
Part 4: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
--== Double-Click and Write Your Reflection Below Here ==--
End of explanation
"""
|
alexandrnikitin/algorithm-sandbox | courses/DAT256x/Module03/03-03-Matrices.ipynb | mit | import numpy as np
A = np.array([[1,2,3],
[4,5,6]])
print (A)
"""
Explanation: Introduction to Matrices
In general terms, a matrix is an array of numbers that are arranged into rows and columns.
Matrices and Matrix Notation
A matrix arranges numbers into rows and columns, like this:
\begin{equation}A = \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
\end{equation}
Note that matrices are generally named as a capital letter. We refer to the elements of the matrix using the lower case equivalent with a subscript row and column indicator, like this:
\begin{equation}A = \begin{bmatrix}
a_{1,1} & a_{1,2} & a_{1,3} \\
a_{2,1} & a_{2,2} & a_{2,3}
\end{bmatrix}
\end{equation}
In Python, you can define a matrix as a 2-dimensional numpy.array, like this:
End of explanation
"""
import numpy as np
M = np.matrix([[1,2,3],
[4,5,6]])
print (M)
"""
Explanation: You can also use the numpy.matrix type, which is a specialist subclass of array:
End of explanation
"""
import numpy as np
A = np.array([[1,2,3],
[4,5,6]])
B = np.array([[6,5,4],
[3,2,1]])
print(A + B)
"""
Explanation: There are some differences in behavior between array and matrix types - particularly with regards to multiplication (which we'll explore later). You can use either, but most experienced Python programmers who need to work with both vectors and matrices tend to prefer the array type for consistency.
Matrix Operations
Matrices support common arithmetic operations.
Adding Matrices
To add two matrices of the same size together, just add the corresponding elements in each matrix:
\begin{equation}\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6\end{bmatrix} + \begin{bmatrix}6 & 5 & 4 \\ 3 & 2 & 1\end{bmatrix} = \begin{bmatrix}7 & 7 & 7 \\ 7 & 7 & 7\end{bmatrix}\end{equation}
In this example, we're adding two matrices (let's call them A and B). Each matrix has two rows of three columns (so we describe them as 2x3 matrices). Adding these will create a new matrix of the same dimensions with the values a<sub>1,1</sub> + b<sub>1,1</sub>, a<sub>1,2</sub> + b<sub>1,2</sub>, a<sub>1,3</sub> + b<sub>1,3</sub>, a<sub>2,1</sub> + b<sub>2,1</sub>, a<sub>2,2</sub> + b<sub>2,2</sub>, and a<sub>2,3</sub> + b<sub>2,3</sub>. In this instance, each pair of corresponding elements (1 and 6, 2 and 5, 3 and 4, etc.) adds up to 7.
Let's try that with Python:
End of explanation
"""
import numpy as np
A = np.array([[1,2,3],
[4,5,6]])
B = np.array([[6,5,4],
[3,2,1]])
print (A - B)
"""
Explanation: Subtracting Matrices
Matrix subtraction works similarly to matrix addition:
\begin{equation}\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6\end{bmatrix} - \begin{bmatrix}6 & 5 & 4 \\ 3 & 2 & 1\end{bmatrix} = \begin{bmatrix}-5 & -3 & -1 \\ 1 & 3 & 5\end{bmatrix}\end{equation}
Here's the Python code to do this:
End of explanation
"""
import numpy as np
C = np.array([[-5,-3,-1],
[1,3,5]])
print (C)
print (-C)
"""
Explanation: Conformability
In the previous examples, we were able to add and subtract the matrices, because the operands (the matrices we are operating on) are conformable for the specific operation (in this case, addition or subtraction). To be conformable for addition and subtraction, the operands must have the same number of rows and columns. There are different conformability requirements for other operations, such as multiplication; which we'll explore later.
Negative Matrices
The negative of a matrix is just a matrix with the sign of each element reversed:
\begin{equation}C = \begin{bmatrix}-5 & -3 & -1 \\ 1 & 3 & 5\end{bmatrix}\end{equation}
\begin{equation}-C = \begin{bmatrix}5 & 3 & 1 \\ -1 & -3 & -5\end{bmatrix}\end{equation}
Let's see that with Python:
End of explanation
"""
import numpy as np
A = np.array([[1,2,3],
[4,5,6]])
print(A.T)
"""
Explanation: Matrix Transposition
You can transpose a matrix, that is switch the orientation of its rows and columns. You indicate this with a superscript T, like this:
\begin{equation}\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6\end{bmatrix}^{T} = \begin{bmatrix}1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}\end{equation}
In Python, both numpy.array and numpy.matrix have a T function:
End of explanation
"""
|
gunan/tensorflow | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math
"""
Explanation: Create and convert a TensorFlow model
This notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the hello_world sample for TensorFlow Lite for Microcontrollers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Import dependencies
Our first task is to import the dependencies we need. Run the following cell to do so:
End of explanation
"""
# We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook
np.random.seed(1337)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: Generate data
Deep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
In a real world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.
In the hello_world sample for TensorFlow Lite for Microcontrollers, we'll use this model to control LEDs that light up in a sequence.
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph:
End of explanation
"""
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
"""
Explanation: Add some noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph:
End of explanation
"""
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
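The np.split call used above takes explicit split indices; a small sketch (on a hypothetical 10-element array) of how two indices produce a 60/20/20 split:

```python
import numpy as np

data = np.arange(10)
# Indices 6 and 8 cut the array into 60% / 20% / 20% chunks
train, test, validate = np.split(data, [6, 8])
print(train, test, validate)
```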
"""
Explanation: Split our data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.
The following code will split our data and then plot each set as a different color:
End of explanation
"""
# We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
"""
Explanation: Design a model
We're going to build a model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression.
To achieve this, we're going to create a simple neural network. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 16 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained:
End of explanation
"""
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate))
"""
Explanation: Train the model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. In each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate, and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 1000 epochs, with 16 pieces of data in each batch. We also pass in some data to use for validation. As you will see when you run the cell, training can take a while to complete:
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Check the training metrics
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form:
End of explanation
"""
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: Look closer at the data
The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs:
End of explanation
"""
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
"""
Explanation: Further metrics
From the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers:
End of explanation
"""
# Use the model to make predictions from our training data
predictions = model_1.predict(x_train)
# Plot the predictions along with the test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
"""
Explanation: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:
End of explanation
"""
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
"""
Explanation: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. For 0 <= x <= 1.1 the line mostly fits, but for the rest of our x values it is a rough approximation at best.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Change our model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:
End of explanation
"""
history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate))
"""
Explanation: We'll now train the new model. To save time, we'll train for only 600 epochs:
End of explanation
"""
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
"""
Explanation: Evaluate our new model
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 600/600
600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
End of explanation
"""
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
"""
Explanation: Great results! From these graphs, we can see several exciting things:
Our network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
End of explanation
"""
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)
"""
Explanation: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Convert to TensorFlow Lite
We now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the TensorFlow Lite Converter. The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization. It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice: once with quantization, once without:
End of explanation
"""
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
sine_model_input = sine_model.tensor(sine_model.get_input_details()[0]["index"])
sine_model_output = sine_model.tensor(sine_model.get_output_details()[0]["index"])
sine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0]["index"])
sine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0]["index"])
# Create arrays to store the results
sine_model_predictions = np.empty(x_test.size)
sine_model_quantized_predictions = np.empty(x_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
sine_model_input().fill(x_test[i])
sine_model.invoke()
sine_model_predictions[i] = sine_model_output()[0]
sine_model_quantized_input().fill(x_test[i])
sine_model_quantized.invoke()
sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
"""
Explanation: Test the converted models
To prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
End of explanation
"""
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
"""
Explanation: We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!
We can print the difference in file size:
End of explanation
"""
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
"""
Explanation: Our quantized model is only 16 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.
More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for the most sophisticated models.
Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller!
Write to a C file
The final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in hello_world/sine_model_data.cc.
To do so, we can use a command line utility named xxd. The following cell runs xxd on our quantized model and prints the output:
End of explanation
"""
|
ml4a/ml4a-guides | examples/fundamentals/math_review_numpy.ipynb | gpl-2.0 | import numpy as np
"""
Explanation: Review of numpy and basic mathematics
written by Gene Kogan
Before learning about what regression and classification are, we will do a review of key mathematical concepts from linear algebra and calculus, as well as an introduction to the numpy package. These fundamentals will be helpful to understand some of the theoretical materials of the next few guides.
We will be working a lot with numpy, a Python library for large-scale vector and matrix operations, as well as fast and efficient computation of various mathematical functions. Additionally, most of the deep learning frameworks, including PyTorch and TensorFlow, largely follow the conventions laid out by numpy and are often called "numpy-like".
numpy is a very large library with many convenient functions. A review of them is beyond the scope of this notebook. We will introduce relevant functions in future sessions as we go, depending on when we need them. A good non-comprehensive review can also be found in Stanford CS231n. If you are unfamiliar with numpy, it will be very helpful to go through the exercises in that tutorial, which also contain a short Python review.
In the following section, we will just introduce some of the most common operations briefly, along with their corresponding mathematical concepts, focusing on the ones that will help us in the next section.
To start, we import numpy (often imported under the alias np to make calls shorter).
End of explanation
"""
v = np.array([1, -5, 3])
print(v)
"""
Explanation: Vectors
The most basic data structure in numpy is a vector, or array. So for example, to represent the vector $v = \begin{bmatrix} 1 \ -5 \ 3 \end{bmatrix}$, we would write:
End of explanation
"""
v = np.arange(0, 10)
print(v)
"""
Explanation: numpy has many convenience functions for generating vectors. For example, to create a list of all integers between 0 and 10:
End of explanation
"""
v = np.linspace(0, 10, 8) # give me 8 numbers linearly interpolated between 0 and 10
print(v)
"""
Explanation: Or the numpy.linspace function, which gives you a linear interpolation of n numbers between two endpoints.
End of explanation
"""
a = np.array([2, 3, 1])
b = np.array([0, 2, -2])
c = a + b
print(c)
"""
Explanation: Addition
When two vectors of equal length are added, the elements are added point-wise.
$$
\begin{bmatrix} 2 \ 3 \ 1 \end{bmatrix} + \begin{bmatrix} 0 \ 2 \ -2 \end{bmatrix} = \begin{bmatrix} 2 \ 5 \ -1 \end{bmatrix}
$$
End of explanation
"""
v = 3 * np.array([2,3,1])
print(v)
"""
Explanation: Multiplication
A vector can be multiplied element-wise by a number (called a "scalar"). For example:
$$
3 \begin{bmatrix} 2 \ 3 \ 1 \end{bmatrix} = \begin{bmatrix} 6 \ 9 \ 3 \end{bmatrix}
$$
End of explanation
"""
a = np.array([1,-2,2])
b = np.array([0,2,3])
c = np.dot(a, b)
print(c)
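Equivalently, the dot product is the sum of the element-wise product of the two vectors:

```python
import numpy as np

a = np.array([1, -2, 2])
b = np.array([0, 2, 3])
# np.dot(a, b) is the same as summing the element-wise products
print(np.sum(a * b))  # 2, matching np.dot(a, b)
```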
"""
Explanation: Dot product
A dot product is defined as the sum of the element-wise products of two equal-sized vectors. For two vectors $a$ and $b$, it is denoted as $a \cdot b$ or as $a b^T$ (where T refers to the transpose operation, introduced further down this notebook.
$$
\begin{bmatrix} 1 & -2 & 2 \end{bmatrix} \begin{bmatrix} 0 \ 2 \ 3 \end{bmatrix} = 2
$$
In other words, it's:
$$
(1 \cdot 0) + (-2 \cdot 2) + (2 \cdot 3) = 2
$$
This can be calculated with the numpy.dot function:
End of explanation
"""
c = a.dot(b)
print(c)
"""
Explanation: Or the shorter way:
End of explanation
"""
np.matrix([[2,3,1],[0, 4,-2]])
"""
Explanation: Matrices
A matrix is a rectangular array of numbers. For example, consider the following 2x3 matrix:
$$
\begin{bmatrix} 2 & 3 & 1 \ 0 & 4 & -2 \end{bmatrix}
$$
Note that we always denote the size of the matix as rows x columns. So a 2x3 matrix has two rows and 3 columns.
numpy can create matrices from normal Python lists using numpy.matrix. For example:
End of explanation
"""
np.zeros((3, 3))
"""
Explanation: To instantiate a matrix of all zeros:
End of explanation
"""
np.ones((2, 2))
"""
Explanation: To instantiate a matrix of all ones:
End of explanation
"""
np.eye(3)
"""
Explanation: Identity matrix
In linear algebra, a square matrix whose elements are all zeros, except the diagonals, which are ones, is called an "identity matrix."
For example:
$$
\mathbf I =
\begin{bmatrix}
1 & 0 & 0 \
0 & 1 & 0 \
0 & 0 & 1
\end{bmatrix}
$$
is a 3x3 identity matrix. The reason why it is called an identity matrix is that it is analogous to multiplying a scalar by 1. A matrix multiplied by an identity matrix is unchanged.
$$
\mathbf I v = v
$$
To instantiate an identity matrix, use numpy.eye. For example:
End of explanation
"""
M = np.matrix([[9,5,6],[-1,0,5],[-2,4,2]])
I = np.eye(3)
print("original matrix = \n", M)
M2 = I * M
print("I * M = \n", M2)
M3 = M * I
print("M * I = \n", M3)
"""
Explanation: Notice when you multiply an identity matrix by another matrix, the result is the same as the original matrix. This goes in either order. Basically, the identity matrix is like $\times 1$.
End of explanation
"""
A = np.random.random((2, 3))
print(A)
"""
Explanation: Random matrices
To instantiate a matrix of random elements (between 0 and 1), you can use numpy.random:
End of explanation
"""
A_transpose = np.transpose(A)
print(A_transpose)
"""
Explanation: Transposition
To transpose a matrix is to reverse the axes of the matrix. So the element at i,j in the transposed matrix is equal to the element at j,i in the original. The matrix $A$ transposed is denoted as $A^T$.
End of explanation
"""
A_transpose = A.T
print(A_transpose)
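Two handy checks on transposition, sketched with a hypothetical matrix: the shape's axes swap, and transposing twice restores the original:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
print(M.T.shape)            # (3, 2): rows and columns swap
print((M.T.T == M).all())   # True: transposing twice restores M
```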
"""
Explanation: It can also be done with the shorthand .T operation, as in:
End of explanation
"""
a = np.matrix([[4, 3],[3,-1],[-2,1]])
b = np.matrix([[-2, 1],[5,3],[1,0]])
c = a + b
print(c)
"""
Explanation: Matrix adddition
Like regular vectors, matrices are added point-wise (or element-wise) and must be of the same size. So for example:
$$
\begin{bmatrix} 4 & 3 \ 3 & -1 \ -2 & 1 \end{bmatrix} + \begin{bmatrix} -2 & 1 \ 5 & 3 \ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 4 \ 8 & 2 \ -1 & 1 \end{bmatrix}
$$
End of explanation
"""
a = np.matrix([[1,-2,0],[6,4,-2]])
-2 * a
"""
Explanation: Matrix multiplication
Also like vectors, matrices can be multiplied element-wise by a scalar.
$$
-2 \begin{bmatrix} 1 & -2 & 0 \ 6 & 4 & -2 \end{bmatrix} = \begin{bmatrix} -2 & 4 & 0 \ -12 & -8 & 4 \end{bmatrix}
$$
End of explanation
"""
a = np.matrix([[1,-2,0],[6,4,-2]])
b = np.matrix([[4,-1],[0,-2],[1,3]])
c = a * b
print(c)
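With np.array, the same matrix multiplication is written with the @ operator (np.matrix uses *, but for plain arrays * is element-wise):

```python
import numpy as np

a = np.array([[1, -2, 0], [6, 4, -2]])    # shape (2, 3)
b = np.array([[4, -1], [0, -2], [1, 3]])  # shape (3, 2)
c = a @ b                                 # matrix product, shape (2, 2)
print(c)
```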
"""
Explanation: To multiply two matrices together, you take the dot product of each row of the first matrix with each column of the second matrix.
So in order to multiply matrices $A$ and $B$ together, as in $C = A \cdot B$, $A$ must have the same number of columns as $B$ has rows. For example:
$$
\begin{bmatrix} 1 & -2 & 0 \ 6 & 4 & -2 \end{bmatrix} * \begin{bmatrix} 4 & -1 \ 0 & -2 \ 1 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 3 \ 22 & -20 \end{bmatrix}
$$
End of explanation
"""
a = np.array([[3,1],[0,5]])
b = np.array([[-2,4],[1,-2]])
np.multiply(a,b)
"""
Explanation: Hadamard product
The Hadamard product of two matrices differs from normal multiplication in that it is the element-wise multiplication of two matrices.
$$
\mathbf A \odot B =
\begin{bmatrix}
A_{1,1} B_{1,1} & \dots & A_{1,n} B_{1,n} \
\vdots & \dots & \vdots \
A_{m,1} B_{m,1} & \dots & A_{m,n} B_{m,n}
\end{bmatrix}
$$
So for example:
$$
\begin{bmatrix} 3 & 1 \ 0 & 5 \end{bmatrix} \odot \begin{bmatrix} -2 & 4 \ 1 & -2 \end{bmatrix} = \begin{bmatrix} -6 & 4 \ 0 & -10 \end{bmatrix}
$$
To calculate this with numpy, simply instantiate the matrices with numpy.array instead of numpy.matrix and it will use element-wise multiplication by default.
End of explanation
"""
def f(x):
return 3*(x**2)-5*x+9
f(2)
"""
Explanation: Functions
A function is an equation which shows the value of some expression which depends on one or more variables. For example:
$$
f(x) = 3x^2 - 5x + 9
$$
So for example, at $x=2$, $f(2)=11$. We will be encountering functions constantly. A neural network is one very big function.
With functions, in machine learning, we often make a distinction between "variables" and "parameters". The variable is that part of the equation which can vary, and the output depends on it. So the above function depends on $x$. The coefficients in the above function (3, -5, 9) are sometimes called parameters because they characterize the shape of the function, but are held fixed.
End of explanation
"""
def f(x):
return -2*(x**3)
def f_deriv(x):
return -6*(x**2)
print(f(2))
print(f_deriv(2))
"""
Explanation: Derivatives
The derivative of a function $f(x)$ is the instantaneous slope of the function at a given point, and is denoted as $f^\prime(x)$.
$$f^\prime(x) = \lim_{\Delta x\to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} $$
The derivative of $f$ with respect to $x$ can also be denoted as $\frac{df}{dx}$.
The derivative can be interpreted as the slope of a function at any point, as in the following video clip, which shows that the limit converges upon the true slope as $\Delta x$ approaches 0.
The derivative of a polynomial function is given below:
$$
f(x) = a x ^ b
$$
$$
\frac{df}{dx} = b a x^{b-1}
$$
For example, let:
$$
f(x) = -2 x^3
$$
then:
$$
\frac{df}{dx} = -6 x^2
$$
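One way to sanity-check an analytic derivative is to approximate the limit definition numerically with a small $\Delta x$ and compare. A quick sketch:

```python
def f(x):
    return -2 * (x ** 3)

def numerical_derivative(f, x, dx=1e-6):
    # Approximates the limit definition with a small but finite step.
    return (f(x + dx) - f(x)) / dx

# The analytic derivative at x=2 is -6 * 2**2 = -24; the numerical
# estimate should land close to that value.
print(numerical_derivative(f, 2))
```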
End of explanation
"""
def h(x):
return 4*x-5
def g(x):
return x**3
def f(x):
return g(h(x))
def h_deriv(x):
return 4
def g_deriv(x):
return 3*(x**2)
def f_deriv(x):
return g_deriv(h(x)) * h_deriv(x)
f(4)
f_deriv(2)
"""
Explanation: The derivative of any constant is 0. To see why, let:
$$
f(x) = C
$$
Then:
$$
f^\prime(x) = \lim_{\Delta x\to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} \\
f^\prime(x) = \lim_{\Delta x\to 0} \frac{C - C}{\Delta x} \\
f^\prime(x) = \lim_{\Delta x\to 0} \frac{0}{\Delta x} \\
f^\prime(x) = 0
$$
Properties of derivatives
Derivatives are additive. That is, the derivative of a sum is the sum of the derivatives. In other words:
Let $g$ and $h$ be functions. Then:
$$
\frac{d}{dx}(g + h) = \frac{dg}{dx} +\frac{dh}{dx}
$$
Similarly, constants can be factored out of derivatives, using the following property:
$$
\frac{d}{dx}(C f(x)) = C \frac{df}{dx}
$$
Chain rule
Functions can be composites of multiple functions. For example, consider the function:
$$
f(x) = (4x-5)^3
$$
This function can be broken down by letting:
$$
h(x) = 4x-5 \\
g(x) = x^3 \\
f(x) = g(h(x))
$$
The chain rule states that the derivative of a composite function $g(h(x))$ is:
$$
f^\prime(x) = g^\prime(h(x)) h^\prime(x)
$$
Another way of expressing this is:
$$
\frac{df}{dx} = \frac{dg}{dh} \frac{dh}{dx}
$$
Since $g$ and $h$ are both polynomials, we can easily calculate that:
$$
g^\prime(x) = 3x^2 \\
h^\prime(x) = 4
$$
and therefore:
$$
f^\prime(x) = g^\prime(h(x)) h^\prime(x) \\
f^\prime(x) = g^\prime(4x-5) \cdot 4 \\
f^\prime(x) = 3 \cdot (4x-5)^2 \cdot 4 \\
f^\prime(x) = 12 \cdot (4x-5)^2
$$
The chain rule is fundamental to how neural networks are trained, because it is what allows us to compute the derivative (gradient) of the network's cost function efficiently. We will see more about this in the next notebook.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
X = np.arange(-5, 5, 0.1)
Y = np.sin(X)
# make the figure
plt.figure(figsize=(6,6))
plt.plot(X, Y)
plt.xlabel('x')
plt.ylabel('y = sin(x)')
plt.title('My plot title')
"""
Explanation: Multivariate functions
A function may depend on more than one variable. For example:
$$
f(X) = w_1 x_1 + w_2 x_2 + w_3 x_3 + ... + w_n x_n + b
$$
or using sum notation:
$$
f(X) = b + \sum_i w_i x_i
$$
One useful trick to simplify this formula is to append a $1$ to the input vector $X$, so that:
$$
X = \begin{bmatrix} x_1 & x_2 & ... & x_n & 1 \end{bmatrix}
$$
and let $b$ just be an element in the weights vector, so:
$$
W = \begin{bmatrix} w_1 & w_2 & ... & w_n & b \end{bmatrix}
$$
So then we can rewrite the function as:
$$
f(X) = W X^T
$$
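A quick numpy sketch of this bias trick (the particular weights and inputs below are made-up values for illustration):

```python
import numpy as np

x = np.array([2.0, -1.0, 3.0])   # original inputs
w = np.array([0.5, 1.5, -2.0])   # original weights
b = 4.0                          # bias

# Append 1 to X and fold b into W, so that f(X) = W . X
X = np.append(x, 1.0)
W = np.append(w, b)

# The dot product of the augmented vectors equals w . x + b.
print(W @ X)
```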
Partial derivatives
A partial derivative of a multivariable function is the derivative of the function with respect to just one of the variables, holding all the others constant.
The partial derivative of $f$ with respect to $x_i$ is denoted as $\frac{\partial f}{\partial x_i}$.
Gradient
The gradient of a function is the vector containing each of its partial derivatives at point $x$.
$$
\nabla f(X) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, ..., \frac{\partial f}{\partial x_n} \right]
$$
We will look more closely at the gradient later when we get into how neural networks are trained.
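A gradient can also be approximated numerically, one partial derivative at a time, by nudging each variable while holding the others fixed. A minimal sketch, using a made-up example function:

```python
import numpy as np

def f(X):
    # Example multivariate function: f(x1, x2) = x1**2 + 3*x2
    return X[0] ** 2 + 3 * X[1]

def numerical_gradient(f, X, dx=1e-6):
    grad = np.zeros_like(X, dtype=float)
    for i in range(len(X)):
        step = np.zeros_like(X, dtype=float)
        step[i] = dx
        # Each entry is a finite-difference partial derivative.
        grad[i] = (f(X + step) - f(X)) / dx
    return grad

# The analytic gradient at (2, 5) is [2*x1, 3] = [4, 3].
print(numerical_gradient(f, np.array([2.0, 5.0])))
```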
Plotting with numpy
We introduced plotting in the previous guide. The example below recreates that plot, except using numpy.
End of explanation
"""
tensorflow/docs-l10n | site/ko/r1/tutorials/eager/custom_training.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow.compat.v1 as tf
"""
Explanation: Custom training: basics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the official English documentation.
If you have suggestions to improve this translation, please send a pull request to the
tensorflow/docs GitHub repository.
To volunteer to write or review translations, please email
docs-ko@tensorflow.org.
In the previous tutorial, we covered the TensorFlow APIs for automatic differentiation, a basic building block for machine learning. In this tutorial, we will use the TensorFlow primitives introduced in the previous tutorial to do some simple machine learning.
TensorFlow also includes tf.keras, a higher-level neural network API that provides useful abstractions to reduce boilerplate. We strongly recommend these higher-level APIs for people working with neural networks. However, in this short tutorial we will cover neural network training from first principles to establish a strong foundation.
Setup
End of explanation
"""
# Using Python state
x = tf.zeros([10, 10])
x += 2  # This is equivalent to x = x + 2; it does not change the original value of x.
print(x)
"""
Explanation: Variables
Tensors in TensorFlow are immutable, stateless objects. Machine learning models, however, need to have changing state: as your model trains, the same code to compute predictions should behave differently over time (hopefully with lower loss!). To represent the state that changes over the course of this computation, you can use Python, which is an imperative programming language.
End of explanation
"""
v = tf.Variable(1.0)
assert v.numpy() == 1.0
# Re-assign the value.
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow operation like tf.square() and reassign.
v.assign(tf.square(v))
assert v.numpy() == 9.0
"""
Explanation: TensorFlow has stateful operations built in, and these are often easier to use than low-level Python representations of state. To represent weights in a model, for example, it's often convenient and efficient to use TensorFlow variables.
A TensorFlow variable is an object that stores a value and, when used in a TensorFlow computation, will implicitly read from this stored value. There are operations (tf.assign_sub, tf.scatter_update, etc.) that manipulate the value stored in a TensorFlow variable.
End of explanation
"""
class Model(object):
def __init__(self):
        # Initialize the variables to (5.0, 0.0).
        # In practice, these should be initialized to random values.
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
"""
Explanation: Computations using variables are automatically traced when computing gradients. For variables representing embeddings, TensorFlow will do sparse updates by default, which are more computation- and memory-efficient.
Using variables is also a way to quickly let a reader of your code know that this piece of state is mutable.
Example: Fitting a linear model
Let's now put the few concepts we have so far ---Tensor, GradientTape, Variable--- together to build and train a simple model. This typically involves a few steps:
Define the model.
Define a loss function.
Obtain training data.
Run through the training data and use an "optimizer" to adjust the variables to fit the data.
In this tutorial, we'll walk through a trivial example of a simple linear model: f(x) = x * W + b, which has two variables, W and b. Furthermore, we'll synthesize data such that a well trained model would have W = 3.0 and b = 2.0.
Define the model
Let's define a simple class to encapsulate the variables and the computation.
End of explanation
"""
def loss(predicted_y, desired_y):
return tf.reduce_mean(tf.square(predicted_y - desired_y))
"""
Explanation: Define a loss function
A loss function measures how well the output of a model for a given input matches the desired output. Let's use a mean squared error loss function.
End of explanation
"""
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
"""
Explanation: Obtain training data
Let's synthesize the training data with some noise.
End of explanation
"""
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: '),
print(loss(model(inputs), outputs).numpy())
"""
Explanation: Before training the model, let's visualize where the model currently stands. We'll plot the model's predictions in red and the training data in blue.
End of explanation
"""
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
"""
Explanation: Define a training loop
We now have our network and our training data. Let's train the model: use the training data to update its variables (W and b) so that the loss goes down using gradient descent. There are many variants of gradient descent, implemented in tf.train.Optimizer. We highly recommend using those implementations, but in this tutorial we will use the basic approach ourselves.
End of explanation
"""
model = Model()
# Collect the history of W-values and b-values to plot later.
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
    print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
          (epoch, Ws[-1], bs[-1], current_loss))
# Plot the collected values.
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true_b'])
plt.show()
"""
Explanation: Finally, let's repeatedly run through the training data and see how W and b evolve.
End of explanation
"""
tensorflow/docs-l10n | site/ja/tensorboard/scalars_and_keras.ipynb | apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# Load the TensorBoard notebook extension.
%load_ext tensorboard
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
"""
Explanation: TensorBoard Scalars: Logging training metrics in Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tensorboard/scalars_and_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
</table>
Overview
Machine learning invariably involves understanding key metrics such as loss, and how they change as training progresses. These metrics can help you understand if you're overfitting, for example, or if you're unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.
TensorBoard's Scalars Dashboard allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars.
Setup
End of explanation
"""
data_size = 1000
# 80% of the data is for training.
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create some input data between -1 and 1 and randomize it.
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data.
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split into test and train pairs.
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
"""
Explanation: Set up data for a simple regression
You're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is overkill for this kind of problem, it does make for a very easy-to-understand example.)
You're going to use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
First, generate 1000 data points roughly along the line y = 0.5x + 2, and split them into training and test sets. Your goal is to have the neural network learn this relationship.
End of explanation
"""
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
    optimizer=keras.optimizers.SGD(learning_rate=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
"""
Explanation: Training the model and logging loss
You're now ready to define, train, and evaluate your model.
To log the loss scalar as you train, you'll do the following:
Create the Keras TensorBoard callback.
Specify a log directory.
Pass the TensorBoard callback to Keras' Model.fit().
TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is logs/scalars, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model.
End of explanation
"""
%tensorboard --logdir logs/scalars
"""
Explanation: Examining loss using TensorBoard
Now, start TensorBoard, specifying the root log directory you used above.
Wait a few seconds for TensorBoard's UI to spin up.
End of explanation
"""
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1"/> -->
You may see TensorBoard display the message "No dashboards are active for the current data set". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data, and TensorBoard will periodically refresh and show your scalar metrics. If you're impatient, you can tap the Refresh arrow at the top right.
As you watch the training progress, note how both training and validation loss rapidly decrease and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.
Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of the graph to view more detail.
Notice the "Runs" selector on the left. A "run" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically have many, many runs, as they experiment and develop their model over time.
Use the Runs selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.
TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then stabilized. That means the model's metrics are likely very good! Now see how the model actually behaves in real life.
Given the input data (60, 25, 2), the line y = 0.5x + 2 should yield (32, 14.5, 3). Does the model agree?
End of explanation
"""
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""
Returns a custom learning rate that decreases as epochs progress.
"""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
"""
Explanation: Not bad!
Logging custom scalars
What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how:
Create a file writer, using tf.summary.create_file_writer().
Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
Pass the LearningRateScheduler callback to Model.fit().
In general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you use tf.summary.scalar().
End of explanation
"""
%tensorboard --logdir logs/scalars
"""
Explanation: Let's look at TensorBoard again.
End of explanation
"""
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1"/> -->
Using the "Runs" selector on the left, notice that you have a <timestamp>/metrics run. Selecting this run displays a "learning rate" graph that allows you to verify the progression of the learning rate during this run.
You can also compare this run's training and validation loss curves against your earlier runs. You might also notice that the learning rate schedule returned discrete values depending on the epoch, but the learning rate plot may appear smooth. TensorBoard has a smoothing parameter that you may need to turn down to zero to see the unsmoothed values.
How does this model do?
"""
nntisapeh/intro_programming | notebooks/while_input.ipynb | mit
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
"""
Explanation: While Loops and Input
While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users.
Previous: If Statements |
Home |
Next: Basic Terminal Apps
Contents
What is a while loop?
General syntax
Example
Exercises
Accepting user input
General syntax
Example
Accepting input in Python 2.7
Exercises
Using while loops to keep your programs running
Exercises
Using while loops to make menus
Using while loops to process items in a list
Accidental Infinite loops
Exercises
Overall Challenges
What is a while loop?
A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing.
General syntax
End of explanation
"""
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
"""
Explanation: Every while loop needs an initial condition that starts out true.
The while statement includes a condition to test.
All of the code in the loop will run as long as the condition remains true.
As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.
Any code that is defined after the loop will run at this point.
Example
Here is a simple example, showing how a game will stay active as long as the player has enough power.
End of explanation
"""
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
"""
Explanation: top
<a id="Exercises-while"></a>
Exercises
Growing Strength
Make a variable called strength, and set its initial value to 5.
Print a message reporting the player's strength.
Set up a while loop that runs until the player's strength increases to a value such as 10.
Inside the while loop, print a message that reports the player's current strength.
Inside the while loop, write a statement that increases the player's strength.
Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.
Bonus: Play around with different cutoff levels for the value of strength, and play around with different ways to increase the strength value within the while loop.
top
Accepting user input
Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the input() function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable.
<a id="General-syntax-input"></a>
General syntax
The general case for accepting input looks something like this:
End of explanation
"""
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user.
<a id="Example-input"></a>
Example
In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
End of explanation
"""
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: Accepting input in Python 2.7
In Python 3, you always use input(). In Python 2.7, you need to use raw_input():
End of explanation
"""
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: The function input() will work in Python 2.7, but it's not good practice to use it. When you use the input() function in Python 2.7, Python runs the code that's entered. This is fine in controlled situations, but it's not a very safe practice overall.
If you're using Python 3, you have to use input(). If you're using Python 2.7, use raw_input().
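If a script needs to run under both versions, one common pattern (a sketch, not required for these exercises) is to choose the right function once, at startup:

```python
import sys

if sys.version_info[0] < 3:
    # In Python 2.7, raw_input() is the safe way to read a string.
    get_input = raw_input  # noqa: F821 (only defined on Python 2)
else:
    get_input = input

# The rest of the program can now call get_input() on either version.
```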
<a id="Exercises-input"></a>
Exercises
Game Preferences
Make a list that includes 3 or 4 games that you like to play.
Print a statement that tells the user what games you like.
Ask the user to tell you a game they like, and store the game in a variable such as new_game.
Add the user's game to your list.
Print a new statement that lists all of the games that we like to play (we means you and your user).
top
Using while loops to keep your programs running
Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
End of explanation
"""
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: That worked, except we ended up with the name 'quit' in our list. We can use a simple if test to eliminate this bug:
End of explanation
"""
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
"""
Explanation: This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working.
<a id="Exercises-running"></a>
Exercises
Many Games
Modify Game Preferences so your user can add as many games as they like.
top
Using while loops to make menus
You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
End of explanation
"""
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
"""
Explanation: Our programs are getting rich enough now, that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
End of explanation
"""
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
"""
Explanation: This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action.
top
Using while loops to process items in a list
In the section on Lists, you saw that we can pop() items from a list. You can use a while list to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
End of explanation
"""
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
"""
Explanation: This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
End of explanation
"""
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
1
1
1
1
1
...
"""
Explanation: This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding one character to our program!
top
Accidental Infinite loops
Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.
Take a look at the following example. Can you pick out why this loop will never stop?
End of explanation
"""
###highlight=[7]
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
"""
Explanation: I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:
On most systems, Ctrl-C will interrupt the currently running program.
If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.
The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
End of explanation
"""
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
1
0
-1
-2
-3
...
"""
Explanation: You will certainly make some loops run infintely at some point. When you do, just interrupt the loop and figure out the logical error you made.
Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on.
Here is one more example of an accidental infinite loop:
End of explanation
"""
|
ireapps/cfj-2017 | completed/08. Working with APIs (Part 1).ipynb | mit | from os import environ
slack_hook = environ.get('IRE_CFJ_2017_SLACK_HOOK', None)
"""
Explanation: Let's post a message to Slack
In this session, we're going to use Python to post a message to Slack. I set up a team for us so we can mess around with the Slack API.
We're going to use a simple incoming webhook to accomplish this.
Hello API
API stands for "Application Programming Interface." An API is a way to interact programmatically with a software application.
If you want to post a message to Slack, you could open a browser and navigate to your URL and sign in with your username and password (or open the app), click on the channel you want, and start typing.
OR ... you could post your Slack message with a Python script.
Hello environmental variables
The code for this boot camp is on the public internet. We don't want anyone on the internet to be able to post messages to our Slack channels, so we're going to use an environmental variable to store our webhook.
The environmental variable we're going to use -- IRE_CFJ_2017_SLACK_HOOK -- should already be stored on your computer.
Python has a standard library module for working with the operating system called os. The os module has a data attribute called environ, a dictionary of environmental variables stored on your computer.
(Here is a new thing: Instead of using brackets to access items in a dictionary, you can use the get() method. The advantage to doing it this way: If the item you're trying to get doesn't exist in your dictionary, it'll return None instead of throwing an exception, which is sometimes a desired behavior.)
End of explanation
"""
import json
"""
Explanation: Hello JSON
So far we've been working with tabular data -- CSVs with columns and rows. Most modern web APIs prefer to shake hands with a data structure called JSON (JavaScript Object Notation), which is more like a matryoshka doll.
Python has a standard library module for working with JSON data called json. Let's import it.
End of explanation
"""
import requests
"""
Explanation: Using requests to post data
We're also going to use the requests library again, except this time, instead of using the get() method to get something off the web, we're going to use the post() method to send data to the web.
End of explanation
"""
# build a dictionary of payload data
payload = {
'channel': '#general',
'username': 'IRE Python Bot',
'icon_emoji': ':ire:',
'text': 'helllllllo!'
}
# turn it into a string of JSON
payload_as_json = json.dumps(payload)
"""
Explanation: Formatting the data correctly
The JSON data we're going to send to the Slack webhook will start its life as a Python dictionary. Then we'll use the json module's dumps() method to turn it into a string of JSON.
End of explanation
"""
# check to see if you have the webhook URL
if slack_hook:
# send it to slack!
requests.post(slack_hook, data=payload_as_json)
else:
# if you don't have the webhook env var, print a message to the terminal
print("You don't have the IRE_CFJ_2017_SLACK_HOOK"
" environmental variable")
"""
Explanation: Send it off to Slack
End of explanation
"""
|
necromuralist/necromuralist.github.io | posts/baysian-spam-detector.ipynb | mit | # python standard library
from fractions import Fraction
import sys
# it turns out 'reduce' is no longer a built-in function in python 3
if sys.version_info.major >= 3:
from functools import reduce
spam = 'offer is secret, click secret link, secret sports link'.split(',')
print(len(spam))
ham = 'play sports today, went play sports, secret sports event, sports is today, sports costs money'.split(',')
print(len(ham))
"""
Explanation: Spam detection with Bayesian Networks
These are my notes for the Bayesian Networks section of the Udacity course on artificial intelligence.
End of explanation
"""
class MailBag(object):
"""
A place to put spam or ham
"""
def __init__(self, mail, other_mail, k=0):
"""
:param:
- `mail`: list of example mail
- `other_mail`: mail not in this class (e.g. spam if this is ham)
- `k`: Laplace smoothing constant
"""
self.mail = mail
self.other_mail = other_mail
self.k = k
self._bag = None
self._probability = None
self._vocabulary_size = None
self._sample_size = None
return
@property
def vocabulary_size(self):
"""
:return: count of unique words in all examples
"""
if self._vocabulary_size is None:
self._vocabulary_size = len(set(self.bag) | set(self.bag_boy(self.other_mail)))
return self._vocabulary_size
@property
def bag(self):
"""
:return: list of words in `mail`
"""
if self._bag is None:
self._bag = self.bag_boy(self.mail)
return self._bag
@property
def sample_size(self):
"""
:return: count of mail in both spam and not spam
"""
if self._sample_size is None:
self._sample_size = len(self.mail + self.other_mail)
return self._sample_size
@property
def probability(self):
"""
:return: count of this mail/total sample size
"""
if self._probability is None:
SPAM_AND_HAM = 2
self._probability = self.l_probability(len(self.mail),
len(self.mail) + len(self.other_mail),
SPAM_AND_HAM)
return self._probability
def bag_boy(self, lines):
"""
:param:
- `lines`: list of lines
:return: list of words taken from the lines
"""
tokenized = (line.split() for line in lines)
bag = []
for tokens in tokenized:
for token in tokens:
bag.append(token)
return bag
def l_probability(self, event_size, sample_size, classes):
"""
:param:
- `event_size`: count of events of interest
- `sample_size`: count of all events
- `classes`: count of all classes of events
:return: probability with Laplace Smoothing
"""
return Fraction(event_size + self.k,
sample_size + classes * self.k)
def p_message(self, message):
"""
:param:
- `message`: line of mail
:return: p(message|this class)
"""
probabilities = (self.p_word(word) for word in message.split())
return reduce(lambda x, y: x * y, probabilities) * self.probability
def p_word(self, word):
"""
:param:
- `word`: string to check for
        :return: fraction of word occurrence in bag
"""
return self.l_probability(self.word_count(word), len(self.bag), self.vocabulary_size)
def word_count(self, word):
"""
:param:
- `word`: string to check for
:return: number of times word appears in bag
"""
return sum((1 for token in self.bag if token == word))
"""
Explanation: The terms have to be changed to be either all plural or all singular. In this case I changed 'sport' to 'sports' where needed.
The SpamDetector classes
I originally implemented everything as functions, but decided it was too scattered and created these after the fact, which is why there's all the duplication below. I left the old code to validate these classes.
The MailBag
This class holds either spam or ham. It actually holds both but the idea is one of them is the real type of interest.
End of explanation
"""
class SpamDetector(object):
"""
A bayesian network spam detector
"""
def __init__(self, spam, ham, k=0):
"""
:param:
- `spam`: list of example spam lines
- `ham`: list of example ham_lines
- `k`: laplace smoothing constant
"""
self.spam = MailBag(mail=spam, k=k, other_mail=ham)
self.ham = MailBag(mail=ham, k=k, other_mail=spam)
return
def p_spam_given_message(self, message):
"""
:param:
- `message`: line to check if it's spam
:return: probability that it's spam
"""
p_message_given_spam = self.spam.p_message(message)
return p_message_given_spam/ (p_message_given_spam +
self.ham.p_message(message))
# leave this in the same cell so updating the class updates the instance
detector = SpamDetector(spam=spam, ham=ham)
l_detector = SpamDetector(spam=spam, ham=ham, k=1)
"""
Explanation: SpamDetector
End of explanation
"""
def bagger(mail):
"""
converts list of lines into list of tokens
:param:
- `mail`: list of space-separated lines
:return: list of words in `mail`
"""
mail_tokenized = (line.split() for line in mail)
mail_bag = []
for tokens in mail_tokenized:
for token in tokens:
mail_bag.append(token)
return mail_bag
spam_bag = bagger(spam)
ham_bag = bagger(ham)
def assert_equal(expected, actual, description):
assert expected == actual, \
"'{2}'\nExpected: {0}, Actual: {1}".format(expected, actual,
description)
vocabulary_list = set(spam_bag) | set(ham_bag)
vocabulary = len(set(spam_bag) | set(ham_bag))
assert_equal(spam_bag, detector.spam.bag, 'check spam bags')
assert_equal(ham_bag, detector.ham.bag, 'ham bags')
assert_equal(vocabulary, detector.spam.vocabulary_size, 'vocabulary size')
print(vocabulary)
"""
Explanation: What is the size of the vocabulary?
End of explanation
"""
mail_count = len(ham) + len(spam)
assert_equal(mail_count, detector.spam.sample_size, 'mail count')
p_spam = Fraction(len(spam), mail_count)
assert_equal(p_spam, Fraction(3, 8), 'p-spam known')
assert_equal(p_spam, detector.spam.probability, 'p-spam detector')
print(p_spam)
"""
Explanation: what is the probability that a piece of mail is spam?
End of explanation
"""
def word_count(bag, word):
"""
count the number of times a word is in the bag
:param:
- `bag`: collection of words
- `word`: word to count
:return: number of times word appears in bag
"""
return sum((1 for token in bag if token == word))
def p_word(bag, word, k=0, sample_space=12):
"""
fraction of times word appears in the bag
:param:
- `bag`: collection of words
- `word`: word to count in bag
- `k`: laplace smoothing constant
- `sample_space`: total number of words in vocabulary
:return: Fraction of total bag that is word
"""
return Fraction(word_count(bag, word) + k, len(bag) + k * sample_space)
p_secret_given_spam = p_word(spam_bag, 'secret')
assert p_secret_given_spam == Fraction(3, 9)
assert_equal(p_secret_given_spam, detector.spam.p_word('secret'),
'secret given spam')
print(p_secret_given_spam)
"""
Explanation: what is p('secret'| spam)?
End of explanation
"""
p_secret_given_ham = p_word(ham_bag, 'secret')
assert p_secret_given_ham == Fraction(1, 15)
assert_equal(p_secret_given_ham, detector.ham.p_word('secret'), 'p(secret|ham)')
print(p_secret_given_ham)
"""
Explanation: what is p('secret'| ham)?
End of explanation
"""
%%latex
$p(spam|`sports') = \frac{p(`sports' | spam)p(spam)}{p(`sports')}$
p_sports_given_spam = p_word(spam_bag, 'sports')
assert p_sports_given_spam == Fraction(1, 9)
assert_equal(p_sports_given_spam, detector.spam.p_word('sports'),
'p(sports|spam)')
print(p_sports_given_spam)
p_sports_given_ham = p_word(ham_bag, 'sports')
expected = Fraction(1, 3)
assert p_sports_given_ham == expected
assert_equal(p_sports_given_ham, detector.ham.p_word('sports'),
'p(sports|ham)')
p_ham = Fraction(len(ham), mail_count)
assert_equal(p_ham, detector.ham.probability, 'p(ham)')
print(p_ham)
p_sports = Fraction(word_count(spam_bag, 'sports') + word_count(ham_bag, 'sports'), vocabulary)
print(p_sports)
p_spam_given_sports = (p_sports_given_spam * p_spam)/(p_sports_given_spam * p_spam + p_sports_given_ham * p_ham)
assert p_spam_given_sports == Fraction(3, 18)
assert_equal(p_spam_given_sports, detector.p_spam_given_message('sports'),
'p(spam|sports)')
print(p_spam_given_sports)
"""
Explanation: You get a message with one word - 'sports', what is p(spam|'sports')?
End of explanation
"""
%%latex
$p(spam|message) = \frac{p(message|spam)p(spam)}{p(message|spam)p(spam) + p(message|ham)p(ham)}$
"""
Explanation: Given the message 'secret is secret', what is the probability that it is spam?
End of explanation
"""
%%latex
$p(spam|sis) = \frac{p(s|spam)p(i|spam)p(s|spam)p(spam)}{p(s|spam)p(i|spam)p(s|spam)p(spam) + p(s|ham)p(i|ham)p(s|ham)p(ham)}$
"""
Explanation: So, the question here is, how do you calculate the probabilities for the entire message instead of for a single word? The answer turns out to be to multiply the probability for each of the words together - so p('secret is secret'| spam) is the product p('secret'|spam) x p('is'|spam) x p('secret'|spam)
End of explanation
"""
p_is_given_spam = p_word(spam_bag, 'is')
assert_equal(p_is_given_spam, detector.spam.p_word('is'), 'p(is|spam)')
p_is_given_ham = p_word(ham_bag, 'is')
assert_equal(p_is_given_ham, detector.ham.p_word('is'), 'p(is|ham)')
def p_message_given_class(message, bag, class_probability, k=0, sample_space=12):
"""
:param:
- `message`: string of words
- `bag`: bag of words
- `class_probability`: probability for this class (e.g. p(spam))
- `k`: Laplace smoothing constant
- `sample_space`: Size of the vocabulary
:return: p(message|classification) * p(classification)
"""
probabilities = (p_word(bag, word, k=k, sample_space=sample_space) for word in message.split())
probability = class_probability
for p in probabilities:
probability *= p
return probability
def p_spam_given_message(message, k=0, sample_space=12):
"""
:param:
- `message`: string of words
- `k`: Laplace Smoothing constant
- `sample_space`: total count of words in spam/ham bags
:return: probability message is spam
"""
spam_probability = p_spam if k == 0 else lp_spam
ham_probability = p_ham if k == 0 else lp_ham
p_m_given_spam = p_message_given_class(message, spam_bag, spam_probability, k=k, sample_space=sample_space)
p_m_given_ham = p_message_given_class(message, ham_bag, ham_probability, k=k, sample_space=sample_space)
return p_m_given_spam/(p_m_given_spam + p_m_given_ham)
message = 'secret is secret'
expected = Fraction(25, 26)
p_sis_given_spam = (p_secret_given_spam * p_is_given_spam * p_secret_given_spam
* p_spam)
assert p_message_given_class(message, spam_bag, p_spam) == p_sis_given_spam
assert_equal(p_sis_given_spam, detector.spam.p_message(message), 'p(sis|spam)')
p_sis_given_ham = p_secret_given_ham * p_is_given_ham * p_secret_given_ham * p_ham
assert p_message_given_class(message, ham_bag, p_ham) == p_sis_given_ham
assert_equal(p_sis_given_ham, detector.ham.p_message(message), 'p(sis|ham)')
p_spam_given_sis = p_sis_given_spam / (p_sis_given_spam + p_sis_given_ham)
assert_equal(p_spam_given_sis, detector.p_spam_given_message(message), 'p(spam|sis)')
assert p_spam_given_message(message) == p_spam_given_sis
assert p_spam_given_sis == expected
print(p_spam_given_sis)
"""
Explanation: Where s = 'secret', i = 'is' and sis='secret is secret'.
End of explanation
"""
%%latex
$p(spam|tis) = \frac{p(t|spam)p(i|spam)p(s|spam)p(spam)}{p(t|spam)p(i|spam)p(s|spam)p(spam) + p(t|ham)p(i|ham)p(s|ham)p(ham)}$
tis = 'today is secret'
p_spam_given_tis = p_spam_given_message(tis)
print(p_spam_given_tis)
assert p_spam_given_tis == 0
assert_equal(p_spam_given_tis, detector.p_spam_given_message(tis),
'p(spam|tis)')
'today' in spam_bag
"""
Explanation: What is the probability that "today is secret" is spam?
End of explanation
"""
%%latex
$p(s) = \frac{s_{count} + k}{total_{count} + k * |classes|}$
"""
Explanation: Since one of the words isn't in the spam bag of words, the numerator is going to be 0 (p('today'|spam) = 0) so the probability overall is 0.
Laplace Smoothing
When a single missing word drops the probability to 0, your model is overfitting the data. To get around this, Laplace Smoothing is used.
End of explanation
"""
def l_probability(class_count, total_count, k=1, classes=2):
"""
:param:
- `class_count`: size of event space
- `total_count`: size of sample space
- `k`: constant to prevent 0 probability
- `classes`: total number of events
:return: probability of class_count with Laplace Smoothing
"""
return Fraction(class_count + k, total_count + classes * k)
k = 1
# classes = spam, ham
number_of_classes = 2
messages = 1
spam_messages = 1
actual = Fraction(spam_messages + k, messages + number_of_classes * k)
assert actual == Fraction(2, 3)
print(actual)
"""
Explanation: let k = 1.
What is the probability that a message is spam if you have 1 example message and it's spam?
End of explanation
"""
messages, spam_messages = 10, 6
actual = l_probability(spam_messages, messages, k, number_of_classes)
expected = Fraction(spam_messages + k, messages + number_of_classes * k)
assert actual == expected
print(actual)
"""
Explanation: What if you have 10 messages and 6 are spam?
End of explanation
"""
messages, spam_messages = 100, 60
print(l_probability(spam_messages, messages, k, number_of_classes))
"""
Explanation: What if you have 100 messages and 60 are spam?
End of explanation
"""
lp_spam = l_probability(total_count=mail_count, class_count=len(spam))
assert_equal(lp_spam, l_detector.spam.probability, 'p(spam)')
lp_ham = l_probability(total_count=mail_count, class_count=len(ham))
assert_equal(lp_ham, l_detector.ham.probability, 'p(ham)')
print(lp_spam)
print(lp_ham)
"""
Explanation: spam/ham with Laplace Smoothing
What are the probabilities that a message is spam or ham with k=1?
End of explanation
"""
print(p_word(spam_bag, 'today', k=1, sample_space=vocabulary))
lp_today_given_spam = l_probability(total_count=len(spam_bag),
class_count=word_count(spam_bag, 'today'),
classes=vocabulary)
assert_equal(lp_today_given_spam, l_detector.spam.p_word('today'), 'p(today|spam)')
lp_today_given_ham = l_probability(total_count=len(ham_bag),
class_count=word_count(ham_bag, 'today'),
classes=vocabulary
)
assert_equal(lp_today_given_ham, l_detector.ham.p_word('today'),
'p(today|ham)')
assert lp_today_given_spam == Fraction(1, 21)
assert lp_today_given_ham == Fraction(1, 9)
print('p(today|spam) = {0}'.format(lp_today_given_spam))
print('p(today|ham) = {0}'.format(lp_today_given_ham))
"""
Explanation: What are p('today'|spam) and p('today'|ham)?
In this case the class-count isn't 2 (for spam or ham) but 12, for the total number of words in the vocabulary.
End of explanation
"""
tis = 'today is secret'
lp_is_given_spam = p_word(spam_bag, 'is', k=1, sample_space=vocabulary)
assert_equal(lp_is_given_spam, l_detector.spam.p_word('is'), 'p(is|spam)')
lp_is_given_ham = p_word(ham_bag, 'is', k=1, sample_space=vocabulary)
assert_equal(lp_is_given_ham, l_detector.ham.p_word('is'), 'p(is|ham)')
lp_secret_given_spam = p_word(spam_bag, 'secret', k=1, sample_space=vocabulary)
assert_equal(lp_secret_given_spam, l_detector.spam.p_word('secret'), 'p(secret|spam)')
lp_secret_given_ham = p_word(ham_bag, 'secret', k=1, sample_space=vocabulary)
assert_equal(lp_secret_given_ham, l_detector.ham.p_word('secret'), 'p(secret|ham)')
lp_tis_given_spam = lp_today_given_spam * lp_is_given_spam * lp_secret_given_spam * lp_spam
lp_tis_given_ham = lp_today_given_ham * lp_is_given_ham * lp_secret_given_ham * lp_ham
lp_spam_given_tis = Fraction(lp_tis_given_spam, lp_tis_given_spam + lp_tis_given_ham)
assert_equal(lp_tis_given_spam, l_detector.spam.p_message(tis), 'p(tis|spam)')
assert_equal(lp_tis_given_ham, l_detector.ham.p_message(tis), 'p(tis|ham)')
assert_equal(lp_spam_given_tis, l_detector.p_spam_given_message(tis), 'p(spam|tis)')
print(lp_spam_given_tis)
"""
Explanation: What is p(spam|m) if m = 'today is secret' and k=1?
End of explanation
"""
actual = p_message_given_class(tis, ham_bag, lp_ham, k=1, sample_space=vocabulary)
assert lp_tis_given_ham == actual, "Expected: {0} Actual: {1}".format(lp_tis_given_ham, actual)
actual = p_spam_given_message(message=tis, k=1, sample_space=vocabulary)
assert lp_spam_given_tis == actual , "Expected: {0} Actual: {1}".format(lp_spam_given_tis, actual)
"""
Explanation: This is just more double-checking to make sure that the functions I originally wrote match the hand-calculated answers.
End of explanation
"""
spam_detector = SpamDetector(spam=spam, ham=ham, k=1)
message = 'today is secret'
answer = spam_detector.p_spam_given_message(message)
print("p(spam|'today is secret') = {0}".format(answer))
assert_equal(lp_spam_given_tis, answer, "p(spam|'today is secret')")
"""
Explanation: Re-do
Since the code ended up being so messy I'm going to re-do the last example using the class-based version only.
End of explanation
"""
|
ledrui/cat-vs-dogs-deeplearning | vgg16/lesson1.ipynb | mit | %matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as at 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using a Jupyter notebook:
End of explanation
"""
path = "data/dogscats/"
#path = "data/dogscats/sample/"
"""
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
"""
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
"""
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
"""
import utils; reload(utils)
from utils import plots
"""
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
"""
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
"""
vgg = Vgg16()
"""
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
"""
batches = vgg.get_batches(path+'train', batch_size=4)
"""
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
"""
imgs,labels = next(batches)
"""
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
"""
plots(imgs, titles=labels)
"""
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
End of explanation
"""
vgg.predict(imgs, True)
"""
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
"""
vgg.classes[:4]
"""
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
"""
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
"""
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
"""
vgg.finetune(batches)
"""
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
"""
vgg.fit(batches, val_batches, nb_epoch=1)
"""
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
"""
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
"""
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
"""
FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
"""
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
"""
classes[:5]
"""
Explanation: Here's a few examples of the categories we just imported:
End of explanation
"""
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
"""
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
"""
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
"""
Explanation: ...and here's the fully-connected definition.
End of explanation
"""
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1] # reverse axis rgb->bgr
"""
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
"""
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
"""
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
"""
model = VGG_16()
"""
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
"""
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
"""
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
"""
batch_size = 4
"""
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
"""
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
"""
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
"""
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
"""
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
"""
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
"""
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation
"""
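The argmax decoding step can be illustrated in isolation (a sketch with made-up probabilities and hypothetical class labels, not the real imagenet classes):

```python
import numpy as np

# Hypothetical predictions for 2 images over 3 classes
preds = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2]])
classes = ['cat', 'dog', 'fish']   # hypothetical label list

idxs = np.argmax(preds, axis=1)     # index of the most probable class per image
labels = [classes[i] for i in idxs]
print(labels)   # ['dog', 'cat']
```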
|
tensorflow/docs-l10n | site/ko/guide/keras/sequential_model.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: The Sequential model
<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://www.tensorflow.org/guide/keras/sequential_model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/sequential_model.ipynb" class=""><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/sequential_model.ipynb" class=""><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/sequential_model.ipynb" class=""><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Setup
End of explanation
"""
# Define Sequential model with 3 layers
model = keras.Sequential(
[
layers.Dense(2, activation="relu", name="layer1"),
layers.Dense(3, activation="relu", name="layer2"),
layers.Dense(4, name="layer3"),
]
)
# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
"""
Explanation: When to use a Sequential model
A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.
Schematically, the following Sequential model:
End of explanation
"""
# Create 3 layers
layer1 = layers.Dense(2, activation="relu", name="layer1")
layer2 = layers.Dense(3, activation="relu", name="layer2")
layer3 = layers.Dense(4, name="layer3")
# Call layers on a test input
x = tf.ones((3, 3))
y = layer3(layer2(layer1(x)))
"""
Explanation: is equivalent to this function:
End of explanation
"""
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
)
"""
Explanation: A Sequential model is not appropriate when:
Your model has multiple inputs or multiple outputs
Any of your layers has multiple inputs or multiple outputs
You need to do layer sharing
You want a non-linear topology (e.g. a residual connection, a multi-branch model)
Creating a Sequential model
You can create a Sequential model by passing a list of layers to the Sequential constructor:
End of explanation
"""
model.layers
"""
Explanation: Its layers are accessible via the layers attribute:
End of explanation
"""
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu"))
model.add(layers.Dense(3, activation="relu"))
model.add(layers.Dense(4))
"""
Explanation: You can also create a Sequential model incrementally via the add() method:
End of explanation
"""
model.pop()
print(len(model.layers)) # 2
"""
Explanation: Note that there is also a corresponding pop() method to remove layers: a Sequential model behaves very much like a list of layers.
End of explanation
"""
model = keras.Sequential(name="my_sequential")
model.add(layers.Dense(2, activation="relu", name="layer1"))
model.add(layers.Dense(3, activation="relu", name="layer2"))
model.add(layers.Dense(4, name="layer3"))
"""
Explanation: Also note that the Sequential constructor accepts a name argument, just like any layer or model in Keras. This is useful to annotate TensorBoard graphs with semantically meaningful names.
End of explanation
"""
layer = layers.Dense(3)
layer.weights # Empty
"""
Explanation: Specifying the input shape in advance
Generally, all layers in Keras need to know the shape of their inputs in order to be able to create their weights. So when you create a layer like this, initially, it has no weights:
End of explanation
"""
# Call layer on a test input
x = tf.ones((1, 4))
y = layer(x)
layer.weights # Now it has weights, of shape (4, 3) and (3,)
"""
Explanation: It creates its weights the first time it is called on an input, since the shape of the weights depends on the shape of the inputs:
End of explanation
"""
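The shapes involved can be reproduced with plain NumPy (a sketch of the computation a Dense layer performs, not Keras itself): an input of shape (1, 4) fed into a 3-unit layer implies a kernel of shape (4, 3) and a bias of shape (3,):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((1, 4))              # batch of one 4-dimensional input

units = 3
# "Built on first call": the kernel's first dimension is taken from the input
kernel = rng.normal(size=(x.shape[-1], units))   # shape (4, 3)
bias = np.zeros(units)                           # shape (3,)

y = x @ kernel + bias
print(kernel.shape, bias.shape, y.shape)   # (4, 3) (3,) (1, 3)
```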
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
) # No weights at this stage!
# At this point, you can't do this:
# model.weights
# You also can't do this:
# model.summary()
# Call the model on a test input
x = tf.ones((1, 4))
y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
"""
Explanation: Naturally, this also applies to Sequential models. When you instantiate a Sequential model without an input shape, it isn't "built": it has no weights (and calling model.weights results in an error). The weights are created when the model first sees some input data:
End of explanation
"""
model.summary()
"""
Explanation: Once a model is "built", you can call its summary() method to display its contents:
End of explanation
"""
model = keras.Sequential()
model.add(keras.Input(shape=(4,)))
model.add(layers.Dense(2, activation="relu"))
model.summary()
"""
Explanation: However, it can be very useful when building a Sequential model incrementally to be able to display the summary of the model so far, including the current output shape. In this case, you should start your model by passing an Input object to it, so that it knows its input shape from the start:
End of explanation
"""
model.layers
"""
Explanation: Note that the Input object is not displayed as part of model.layers, since it isn't a layer:
End of explanation
"""
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu", input_shape=(4,)))
model.summary()
"""
Explanation: A simple alternative is to just pass an input_shape argument to your first layer:
End of explanation
"""
model = keras.Sequential()
model.add(keras.Input(shape=(250, 250, 3))) # 250x250 RGB images
model.add(layers.Conv2D(32, 5, strides=2, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
# Can you guess what the current output shape is at this point? Probably not.
# Let's just print it:
model.summary()
# The answer was: (40, 40, 32), so we can keep downsampling...
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(2))
# And now?
model.summary()
# Now that we have 4x4 feature maps, time to apply global max pooling.
model.add(layers.GlobalMaxPooling2D())
# Finally, we add a classification layer.
model.add(layers.Dense(10))
"""
Explanation: Models built with a predefined input shape like this always have weights (even before seeing any data) and always have a defined output shape.
In general, it's a recommended best practice to always specify the input shape of a Sequential model in advance if you know what it is.
A common debugging workflow: add() + summary()
When building a new Sequential architecture, it's useful to incrementally stack layers with add() and frequently print model summaries. For instance, this enables you to monitor how a stack of Conv2D and MaxPooling2D layers is downsampling image feature maps:
End of explanation
"""
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=[layer.output for layer in initial_model.layers],
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
"""
Explanation: Very practical, right?
What to do once you have a model
Once your model architecture is ready, you will want to:
Train your model, evaluate it, and run inference. See our guide to training & evaluation with the built-in loops.
Save your model to disk and restore it. See our guide to serialization & saving.
Speed up model training by leveraging multiple GPUs. See our guide to multi-GPU and distributed training.
Feature extraction with a Sequential model
Once a Sequential model has been built, it behaves like a Functional API model. This means that every layer has an input and output attribute. These attributes can be used to do neat things, like quickly creating a model that extracts the outputs of all intermediate layers in a Sequential model:
End of explanation
"""
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=initial_model.get_layer(name="my_intermediate_layer").output,
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
"""
Explanation: Here's a similar example that only extracts features from one layer:
End of explanation
"""
|
mehmetcanbudak/JupyterWorkflow | JupyterWorkflow4.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("seaborn")
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
data.head()
data.resample("W").sum().plot()
data.groupby(data.index.time).mean().plot()
pivoted = data.pivot_table("Total", index=data.index.time, columns=data.index.date)
pivoted.iloc[:5, :5]
pivoted.plot(legend=False, alpha=0.01)
"""
Explanation: JupyterWorkflow4
From exploratory analysis to reproducible research
Mehmetcan Budak
End of explanation
"""
#get_fremont_data?
"""
Explanation: SECOND PART
Make a Python package so we and other people can use it for analysis.
Go to the project directory:
mkdir jupyterworkflow (creates the directory)
touch jupyterworkflow/__init__.py (initializes it as a Python package)
Then create a data.py in this directory.
import os
from urllib.request import urlretrieve
import pandas as pd
FREMONT_URL = "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"
# create a function to only download this data if we need to download it, on first run
def get_fremont_data(filename="Fremont.csv", url=FREMONT_URL, force_download=False):
"""Download and cache the fremont data
Parameters
----------
filename :string (optional)
location to save the data
url: string (optional)
web location of the data
force_download: bool (optional)
if True, force redownload of data
Returns
-------
data: pandas.DataFrame
The fremont bridge data
"""
if force_download or not os.path.exists(filename):
urlretrieve(url, filename)
data = pd.read_csv("Fremont.csv", index_col="Date", parse_dates=True)
data.columns = ["West", "East"]
data["Total"] = data["West"] + data["East"]
return data
End of explanation
"""
|
cranmer/look-elsewhere-2d | examples_from_paper.ipynb | mit | %pylab inline --no-import-all
from lee2d import *
"""
Explanation: Look Elsewhere Effect in 2-d
Kyle Cranmer, Nov 19, 2015
Based on
Estimating the significance of a signal in a multi-dimensional search by Ofer Vitells and Eilam Gross http://arxiv.org/pdf/1105.4355v1.pdf
This is for the special case of a likelihood function of the form
$L(\mu, \nu_1, \nu_2)$ where $\mu$ is a single parameter of interest and
$\nu_1,\nu_2$ are two nuisance parameters that are not identified under the null.
For example, $\mu$ is the signal strength of a new particle and $\nu_1,\nu_2$ are the
unknown mass and width of the new particle. Under the null hypothesis, those parameters
don't mean anything... aka they "are not identified under the null" in the statistics jargon.
This introduces a 2-d look elsewhere effect.
The LEE correction in this case is based on
\begin{equation}
E[ \phi(A_u) ] = P(\chi^2_1 > u) + e^{-u/2} (N_1 + \sqrt{u} N_2) \,
\end{equation}
where
* $A_u$ is the 'excursion set above level $u$ (eg. the set of parameter points in $(\nu_1,\nu_2)$ that have a -2 log-likelihood ratio greater than $u$ )
* $\phi(A_u)$ is the Euler characteristic of the excursion set
* $E[ \phi(A_u) ]$ is the expectation of the Euler characteristic of those excursion sets under the null
* $P(\chi^2_1 > u)$ is the standard chi-square probability
* and $N_1$ and $N_2$ are two coefficients that characterize the chi-square random field.
structure of the notebook
The notebook is broken into two parts.
* calculation of $N_1$ and $N_2$ based on $E[ \phi(A_u) ]$ at two different levels $u_1$ and $u_2$
* calculation of LEE-corrected 'global p-value' given $N_1,N_2$
End of explanation
"""
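To make the procedure concrete, the following standalone sketch solves for $N_1$ and $N_2$ from the expectation formula above and evaluates the global p-value (an illustration only, not the actual lee2d implementation; it relies on the closed form $P(\chi^2_1 > u) = \mathrm{erfc}(\sqrt{u/2})$):

```python
from math import erfc, exp, sqrt

def chi2_1_sf(u):
    # survival function of a chi-square with 1 dof: P(chi2_1 > u)
    return erfc(sqrt(u / 2.0))

def solve_n1_n2(u1, u2, exp_phi_1, exp_phi_2):
    # E[phi(A_u)] = P(chi2_1 > u) + exp(-u/2) * (N1 + sqrt(u) * N2)
    # -> two linear equations in (N1, N2)
    r1 = (exp_phi_1 - chi2_1_sf(u1)) / exp(-u1 / 2.0)   # N1 + sqrt(u1)*N2
    r2 = (exp_phi_2 - chi2_1_sf(u2)) / exp(-u2 / 2.0)   # N1 + sqrt(u2)*N2
    n2 = (r2 - r1) / (sqrt(u2) - sqrt(u1))
    n1 = r1 - sqrt(u1) * n2
    return n1, n2

def global_p(u, n1, n2):
    return chi2_1_sf(u) + exp(-u / 2.0) * (n1 + sqrt(u) * n2)

# numbers from the example below: u1=0, u2=1 with E[phi] = 33.5 and 94.6
n1, n2 = solve_n1_n2(0.0, 1.0, 33.5, 94.6)
print(n1)   # 32.5  (at u=0 the formula reduces to 1 + N1)
```

With these coefficients, global_p(35, n1, n2) comes out around 2e-5, consistent with the markers read off Fig 5 below.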
# An example from the paper
n1, n2 = get_coefficients(u1=0., u2=1., exp_phi_1=33.5, exp_phi_2=94.6)
print(n1, n2)
# reproduce Fig 5 from paper (the markers are read by eye)
u = np.linspace(5,35,100)
global_p = global_pvalue(u,n1,n2)
plt.plot(u, global_p)
plt.scatter(35,2.E-5) #from Fig5
plt.scatter(30,2.E-4) #from Fig5
plt.scatter(25,2.5E-3) #from Fig5
plt.scatter(20,2.5E-2) #from Fig5
plt.scatter(15,.3) #from Fig5
plt.xlabel('u')
plt.ylabel('P(max q > u)')
plt.semilogy()
"""
Explanation: Test numerical solution to $N_1, N_2$ from the example in the paper
Usage: calculate n1,n2 based on expected value of Euler characteristic (calculated from toy Monte Carlo) at two different levels u1, u2. For example:
* $u_1=0$ with $E[ \phi(A_{u=u_1})]=33.5 $
* $u_2=1$ with $E[ \phi(A_{u=u_2})]=94.6 $
would lead to a call like this
End of explanation
"""
#create Fig 3 of http://arxiv.org/pdf/1105.4355v1.pdf
a = np.zeros((7,7))
a[1,2]=a[1,3]=a[2,1]=a[2,2]=a[2,3]=a[2,4]=1
a[3,1]=a[3,2]=a[3,3]=a[3,4]=a[3,5]=1
a[4,1]=a[4,2]=a[4,3]=a[4,4]=1
a[5,3]=1
a[6,0]=a[6,1]=1
a=a.T
plt.imshow(a,cmap='gray',interpolation='none')
#should be 2
calculate_euler_characteristic(a)
"""
Explanation: Check Euler characteristic from Fig 3 example in the paper
End of explanation
"""
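calculate_euler_characteristic is imported from lee2d, but the idea behind such a function can be sketched independently: treat every filled pixel as a closed unit square and compute $\chi = V - E + F$ on the resulting complex (a sketch, not the lee2d implementation; conventions for diagonally touching pixels may differ):

```python
import numpy as np

def euler_char(mask):
    """Euler characteristic of a binary image, counting each filled
    pixel as a closed unit square: chi = vertices - edges + faces."""
    b = np.pad(np.asarray(mask, dtype=bool), 1)   # zero-pad the border
    faces = b.sum()
    # unit edges belonging to at least one filled pixel
    h_edges = (b[:-1, :] | b[1:, :]).sum()
    v_edges = (b[:, :-1] | b[:, 1:]).sum()
    # a lattice vertex exists if any of its 4 surrounding cells is filled
    verts = (b[:-1, :-1] | b[:-1, 1:] | b[1:, :-1] | b[1:, 1:]).sum()
    return int(verts - (h_edges + v_edges) + faces)

solid = np.ones((10, 10))                  # one blob, no holes -> 1
donut = np.ones((5, 5)); donut[2, 2] = 0   # one blob, one hole -> 0
print(euler_char(solid), euler_char(donut))   # 1 0
```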
#Fully filled, should be 1
randMatrix = np.zeros((100,100))+1
calculate_euler_characteristic(randMatrix)
# split in half vertically, should be 2
randMatrix[50,:]=0
plt.imshow(randMatrix,cmap='gray')
calculate_euler_characteristic(randMatrix)
#split in half horizontally twice, should be 6
randMatrix[:,25]=0
randMatrix[:,75]=0
plt.imshow(randMatrix,cmap='gray')
calculate_euler_characteristic(randMatrix)
#remove a hole from middle of one, should be 5
randMatrix[25:30,50:53]=0
plt.imshow(randMatrix,cmap='gray')
calculate_euler_characteristic(randMatrix)
#remove a single pixel hole
randMatrix[75,50]=0
plt.imshow(randMatrix,cmap='gray')
"""
Explanation: Try a big matrix
End of explanation
"""
|
thempel/adaptivemd | examples/tutorial/3_example_adaptive.ipynb | lgpl-2.1 | import sys, os
from adaptivemd import (
Project,
Event, FunctionalEvent,
File
)
# We need this to be part of the imports. You can only restore known objects
# Once these are imported you can load these objects.
from adaptivemd.engine.openmm import OpenMMEngine
from adaptivemd.analysis.pyemma import PyEMMAAnalysis
"""
Explanation: AdaptiveMD
Example 3 - Running an adaptive loop
0. Imports
End of explanation
"""
project = Project('tutorial')
"""
Explanation: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
End of explanation
"""
print(project.files)
print(project.generators)
print(project.models)
"""
Explanation: Open all connections to the MongoDB and Session so we can get started.
An interesting thing to note here is that, since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in the last example, go back here and check on the change of the contents of the project.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
"""
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
"""
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
"""
def strategy(loops=10, trajs_per_loop=4, length=100):
for loop in range(loops):
# submit some trajectory tasks
trajectories = project.new_ml_trajectory(length, trajs_per_loop)
tasks = map(engine.task_run_trajectory, trajectories)
project.queue(tasks)
# continue if ALL of the tasks are done (can be failed)
yield [task.is_done for task in tasks]
# submit a model job
task = modeller.execute(list(project.trajectories))
project.queue(task)
# when it is done do next loop
yield task.is_done
"""
Explanation: Run simulations
You are free to conduct your simulations from a notebook, but normally you will use a script. The main point about adaptivity is to make decisions about tasks along the way.
To make this happen we need Conditions, which are functions that evaluate to True or False; once they become True they cannot change back to False, like a one-time on switch.
These are used to describe the happening of an event. We will now deal with some types of events.
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, we write a function that should be executed. Inside it you will create tasks and submit them.
If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the yielded condition evaluates to True when called. An example:
End of explanation
"""
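The yield-a-condition pattern can be illustrated without adaptivemd at all. The sketch below is a toy stand-in for the project's event machinery (the names run_events and toy_strategy are invented for this illustration):

```python
def run_events(generators):
    """Toy event loop: advance each generator whenever all of the
    condition callables it last yielded evaluate to True."""
    pending = [(g, []) for g in generators]
    while pending:
        nxt = []
        for gen, conditions in pending:
            if all(c() for c in conditions):
                try:
                    conds = next(gen)
                    # accept a single condition or a list of conditions
                    conds = conds if isinstance(conds, list) else [conds]
                    nxt.append((gen, conds))
                except StopIteration:
                    pass   # this event is finished
            else:
                nxt.append((gen, conditions))
        pending = nxt

log = []
def toy_strategy():
    log.append('submitted tasks')
    yield lambda: True       # pretend the trajectory tasks finish immediately
    log.append('built model')
    yield [lambda: True]     # and the model task too

run_events([toy_strategy()])
print(log)   # ['submitted tasks', 'built model']
```

In adaptivemd the conditions are objects like task.is_done, and the loop is driven by project.trigger() instead of running to completion in one go.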
project.add_event(strategy(loops=2))
"""
Explanation: and add the event to the project (these cannot be stored yet!)
End of explanation
"""
import time
from IPython.display import clear_output
try:
while project._events:
clear_output(wait=True)
        print('# of files %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories)))
        print('# of models %8d : %s' % (len(project.models), '#' * len(project.models)))
sys.stdout.flush()
time.sleep(2)
project.trigger()
except KeyboardInterrupt:
pass
"""
Explanation: What is missing now? Adding the event triggered the first part of the code, but rechecking whether we should continue needs to be done manually.
RP has threads in the background and these can call the trigger whenever something changes or finishes.
Still, that is no problem; we can do it easily and watch what is happening.
Let's see how our project is growing. TODO: Add threading.Timer to auto trigger.
End of explanation
"""
project.add_event(strategy(loops=2))
"""
Explanation: Let's do another round with more loops
End of explanation
"""
# find, which frames from which trajectories have been chosen
trajs = project.trajectories
q = {}
ins = {}
for f in trajs:
source = f.frame if isinstance(f.frame, File) else f.frame.trajectory
ind = 0 if isinstance(f.frame, File) else f.frame.index
ins[source] = ins.get(source, []) + [ind]
for a, b in ins.items():
    print(a.short, ':', b)
"""
Explanation: And some analysis (there might be better functions for that)
End of explanation
"""
def strategy2():
for loop in range(10):
num = len(project.trajectories)
task = modeller.execute(list(project.trajectories))
project.queue(task)
yield task.is_done
# continue only when there are at least 2 more trajectories
yield project.on_ntraj(num + 2)
project.add_event(strategy(loops=10, trajs_per_loop=2))
project.add_event(strategy2())
"""
Explanation: Event
And do this with multiple events in parallel.
End of explanation
"""
project.wait_until(project.events_done)
"""
Explanation: And now wait until all events are finished.
End of explanation
"""
project.close()
"""
Explanation: See that we reused our strategy again.
End of explanation
"""
|
oroszl/szamprob | notebooks/Package04/3D.ipynb | gpl-3.0 | %pylab inline
from mpl_toolkits.mplot3d import * # subpackage for 3D plots
from ipywidgets import * # functions needed for interactivity
"""
Explanation: 3D ábrák
A matplotlib csomag elsősorban 2D ábrák gyártására lett kitalálva. Ennek ellenére rendelkezik néhány 3D-s ábrakészítési függvénnyel is. Vizsgáljunk meg ebből párat! Ahhoz, hogy a 3D-s ábrázolási függvényeket el tudjuk érni, be kell tölteni a matplotlib csomag mpl_toolkits.mplot3d alcsomagját.
End of explanation
"""
t=linspace(0,2*pi,100) # 100 points between 0 and 2*pi
"""
Explanation: Curves and data sets in space
To be able to display a figure in 3D, the environment has to be prepared. Displaying 3D figures and setting their properties is somewhat more involved than for 2D plots. The most conspicuous difference is that figures are organized around so-called axes objects (think roughly of the coordinate axes), and the figures themselves are created as properties of these objects, or as functions acting on them. As an example, let's plot a simple parametric curve in space! Let this curve be the following spiral function:
\begin{equation}
\mathbf{r}(t)=\left(\begin{array}{c}
\cos(3t)\\
\sin(3t)\\
t
\end{array}\right)
\end{equation}
First, let's generate the sampling points of the parameter $t$ in the interval $[0,2\pi]$:
End of explanation
"""
ax=subplot(1,1,1,projection='3d') # create a 3D axes object
ax.plot(cos(3*t),sin(3*t),t)
"""
Explanation: Two things happen in the next code cell. First, we create the axes object named ax, explicitly specifying that it should be a 3D coordinate system. Then, acting on this object with the plot function, we create the figure itself. Note that the plot function now expects three input parameters!
End of explanation
"""
ax=subplot(1,1,1,projection='3d')
ax.plot(rand(10),rand(10),rand(10),'o')
"""
Explanation: As we saw for 2D plots, we can use the plot function here as well to display irregularly sampled data.
End of explanation
"""
ax=subplot(1,1,1,projection='3d') # create a 3D axes object
ax.plot(cos(3*t),sin(3*t),t,color='green',linestyle='dashed',linewidth=3)
"""
Explanation: Style definitions are processed via keyword arguments similar to those of 2D plots! Let's see an example of this too:
End of explanation
"""
ax=subplot(1,1,1,projection='3d') # create a 3D axes object
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,0)
"""
Explanation: A recurring issue when displaying 3D figures is viewing the figure from a good direction. We can set the viewpoint of the figure with the view_init function. Its two parameters give the viewpoint in an equatorial spherical coordinate system: the declination and the azimuth angle, measured in degrees. For example, a figure viewed from the direction of the $x$-axis can be made like this:
End of explanation
"""
ax=subplot(1,1,1,projection='3d') # create a 3D axes object
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,90)
"""
Explanation: And from the direction of the $y$-axis like this:
End of explanation
"""
def forog(th,phi):
ax=subplot(1,1,1,projection='3d')
ax.plot(sin(3*t),cos(3*t),t)
ax.view_init(th,phi)
interact(forog,th=(-90,90),phi=(0,360));
"""
Explanation: If we use interactive functions, we can change the viewpoint interactively as follows:
End of explanation
"""
x,y = meshgrid(linspace(-3,3,250),linspace(-5,5,250)) # generate the sampling points
z = -(sin(x) ** 10 + cos(10 + y * x) * cos(x))*exp((-x**2-y**2)/4) # evaluate the function
"""
Explanation: Two-variable functions and surfaces
One advantage of 3D figures is that we can also display surfaces in space. The simplest case of this is the height-map-like visualization of two-variable
$$z=f(x,y)$$
functions. As usual, the first task is sampling and evaluating the function. Below, let's examine the function $$z=-[\sin(x) ^{10} + \cos(10 + y x) \cos(x)]\exp((-x^2-y^2)/4)$$!
End of explanation
"""
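meshgrid produces two 2-D coordinate arrays so that the function can be evaluated elementwise over the whole grid; a miniature version of the sampling above (a sketch with a tiny grid instead of 250x250 points):

```python
import numpy as np

# 4 samples in x, 3 samples in y -> arrays of shape (3, 4) with 'xy' indexing
x, y = np.meshgrid(np.linspace(-3, 3, 4), np.linspace(-5, 5, 3))
z = -(np.sin(x)**10 + np.cos(10 + y*x) * np.cos(x)) * np.exp((-x**2 - y**2)/4)
print(x.shape, y.shape, z.shape)   # (3, 4) (3, 4) (3, 4)
```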
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
"""
Explanation: We can display this function with the plot_surface function.
End of explanation
"""
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z,cmap='viridis')
"""
Explanation: It is often instructive to color the drawn surface according to some color map. As already familiar from 2D plots, we can do this with the cmap keyword.
End of explanation
"""
theta,phi=meshgrid(linspace(0,2*pi,250),linspace(0,2*pi,250))
x=(4 + 1*cos(theta))*cos(phi)
y=(4 + 1*cos(theta))*sin(phi)
z=1*sin(theta)
"""
Explanation: The most general way to define surfaces in space is with two-parameter, vector-valued functions. That is,
\begin{equation}
\mathbf{r}(u,v)=\left(\begin{array}{c}
f(u,v)\\
g(u,v)\\
h(u,v)
\end{array}\right)
\end{equation}
Let's look at an example where the surface to display is a torus! One possible parameterization of the torus is:
\begin{equation}
\mathbf{r}(\theta,\varphi)=\left(\begin{array}{c}
(R_1 + R_2 \cos \theta) \cos{\varphi}\\
(R_1 + R_2 \cos \theta) \sin{\varphi} \\
R_2 \sin \theta
\end{array}\right)
\end{equation}
Here $R_1$ and $R_2$ are the two radius parameters of the torus, and $\theta$ and $\varphi$ both run over the interval $[0,2\pi]$. Let $R_1=4$ and $R_2=1$. Let's draw this surface! As a first step, generate the points of the surface to be plotted:
End of explanation
"""
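The parameterization can also be sanity-checked numerically: every generated point must satisfy the implicit torus equation $(\sqrt{x^2+y^2}-R_1)^2 + z^2 = R_2^2$. A quick standalone check (independent of the plotting code, on a coarser grid):

```python
import numpy as np

R1, R2 = 4.0, 1.0
theta, phi = np.meshgrid(np.linspace(0, 2*np.pi, 50),
                         np.linspace(0, 2*np.pi, 50))
x = (R1 + R2*np.cos(theta)) * np.cos(phi)
y = (R1 + R2*np.cos(theta)) * np.sin(phi)
z = R2 * np.sin(theta)

# implicit equation of the torus: (sqrt(x^2+y^2) - R1)^2 + z^2 = R2^2
residual = (np.sqrt(x**2 + y**2) - R1)**2 + z**2 - R2**2
print(np.abs(residual).max() < 1e-9)   # True
```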
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
"""
Explanation: We can again plot with the plot_surface function:
End of explanation
"""
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.set_aspect('equal');
ax.set_xlim(-5,5);
ax.set_ylim(-5,5);
ax.set_zlim(-5,5);
"""
Explanation: We can make the figure above a bit better proportioned by adjusting the display aspect ratio of the axes and the axis limits. We can do this with the set_aspect, and the set_xlim, set_ylim and set_zlim functions:
End of explanation
"""
def forog(th,ph):
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.view_init(th,ph)
ax.set_aspect('equal');
ax.set_xlim(-5,5);
ax.set_ylim(-5,5);
ax.set_zlim(-5,5);
interact(forog,th=(-90,90),ph=(0,360));
"""
Explanation: Finally, let's make this figure interactive as well:
End of explanation
"""
phiv,thv=(2*pi*rand(100),pi*rand(100)) # These two lines pick out 100 random points
xv,yv,zv=(cos(phiv)*sin(thv),sin(phiv)*sin(thv),cos(thv)) # on the unit sphere in space
uv,vv,wv=(xv,yv,zv) # and this assigns a radial vector to each of those points
ax = subplot(111, projection='3d')
ax.quiver(xv, yv, zv, uv, vv, wv, length=0.3,color='darkcyan')
ax.set_aspect('equal')
"""
Explanation: Force fields in 3D
Vector fields in space, i.e. functions that assign a three-dimensional vector to every point of space, can be displayed with the quiver command here as well, just like for 2D plots. In the example below we draw a radially pointing vector at each of 100 points on the surface of the unit sphere:
End of explanation
"""
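A radial field on the unit sphere means the vector at each point equals the position vector itself; a quick numeric check of the construction above (a standalone sketch mirroring that cell, with a seeded generator):

```python
import numpy as np

rng = np.random.default_rng(1)
phiv, thv = 2*np.pi*rng.random(100), np.pi*rng.random(100)
xv, yv, zv = np.cos(phiv)*np.sin(thv), np.sin(phiv)*np.sin(thv), np.cos(thv)
uv, vv, wv = xv, yv, zv          # radial: the vector equals the position

r = np.sqrt(xv**2 + yv**2 + zv**2)   # all points lie on the unit sphere
print(np.allclose(r, 1.0))           # True
```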
|