##### Copyright 2019 Google LLC
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Graph regularization for sentiment classification using synthesized graphs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
This notebook classifies movie reviews as *positive* or *negative* using the
text of the review. This is an example of *binary* classification, an important
and widely applicable kind of machine learning problem.
We will demonstrate the use of graph regularization in this notebook by building
a graph from the given input. The general recipe for building a
graph-regularized model using the Neural Structured Learning (NSL) framework
when the input does not contain an explicit graph is as follows:
1. Create embeddings for each text sample in the input. This can be done using
pre-trained models such as [word2vec](https://arxiv.org/pdf/1310.4546.pdf),
[Swivel](https://arxiv.org/abs/1602.02215),
[BERT](https://arxiv.org/abs/1810.04805) etc.
2. Build a graph based on these embeddings by using a similarity metric such as
the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to
samples and edges in the graph correspond to similarity between pairs of
samples.
3. Generate training data from the above synthesized graph and sample features.
The resulting training data will contain neighbor features in addition to
the original node features.
4. Create a neural network as a base model using the Keras sequential,
functional, or subclass API.
5. Wrap the base model with the GraphRegularization wrapper class, which is
provided by the NSL framework, to create a new graph Keras model. This new
model will include a graph regularization loss as the regularization term in
its training objective.
6. Train and evaluate the graph Keras model.
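The graph-building step of this recipe (step 2) can be sketched with plain NumPy. This is a toy, brute-force illustration rather than the NSL graph-building tool used later in this notebook; the function name and sample embeddings are made up:

```python
import numpy as np

def build_similarity_graph(embeddings, threshold):
    """Return (i, j, cosine_similarity) edges for pairs above the threshold.

    A brute-force O(n^2) sketch; the NSL build_graph tool does this at scale.
    """
    # Normalize rows so that a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if sims[i, j] >= threshold:
                edges.append((i, j, float(sims[i, j])))
    return edges

# Three toy 2-D "sample embeddings": the first two point in nearly the same
# direction, the third is orthogonal to them.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(build_similarity_graph(emb, threshold=0.8))
```

Only the first pair of samples ends up connected; the orthogonal third sample produces no edge at this threshold.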
**Note**: We expect that it would take readers about 1 hour to go through this
tutorial.
## Requirements
1. Install TensorFlow 2.x to create an interactive developing environment with eager execution.
2. Install the Neural Structured Learning package.
3. Install tensorflow-hub.
```
!pip install --quiet tensorflow==2.0.0-rc0
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
```
## Dependencies and imports
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
tf.compat.v1.enable_v2_behavior()
import tensorflow_hub as hub
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
```
## IMDB dataset
The
[IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb)
contains the text of 50,000 movie reviews from the
[Internet Movie Database](https://www.imdb.com/). These are split into 25,000
reviews for training and 25,000 reviews for testing. The training and testing
sets are *balanced*, meaning they contain an equal number of positive and
negative reviews.
In this tutorial, we will use a preprocessed version of the IMDB dataset.
### Download preprocessed IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been
preprocessed such that the reviews (sequences of words) have been converted to
sequences of integers, where each integer represents a specific word in a
dictionary.
The following code downloads the IMDB dataset (or uses a cached copy if it has
already been downloaded):
```
imdb = tf.keras.datasets.imdb
(pp_train_data, pp_train_labels), (pp_test_data, pp_test_labels) = (
imdb.load_data(num_words=10000))
```
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the vocabulary manageable.
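As a rough illustration of this capping, with made-up indices: Keras' IMDB loader replaces word indices at or above `num_words` with an out-of-vocabulary index (2 by default):

```python
# Toy sketch of num_words-style capping: indices at or above the cutoff are
# replaced by the out-of-vocabulary index. All numbers here are made up.
num_words = 10
oov_index = 2
review = [1, 4, 9, 12, 7, 15]
capped = [i if i < num_words else oov_index for i in review]
print(capped)  # [1, 4, 9, 2, 7, 2]
```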
### Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
```
print('Training entries: {}, labels: {}'.format(
len(pp_train_data), len(pp_train_labels)))
training_samples_count = len(pp_train_data)
```
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
```
print(pp_train_data[0])
```
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
```
len(pp_train_data[0]), len(pp_train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert integers back to the corresponding text.
Here, we'll create a helper function to query a dictionary object that contains
the integer to string mapping:
```
def build_reverse_word_index():
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index['<PAD>'] = 0
word_index['<START>'] = 1
word_index['<UNK>'] = 2 # unknown
word_index['<UNUSED>'] = 3
return dict((value, key) for (key, value) in word_index.items())
reverse_word_index = build_reverse_word_index()
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
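To see the index-shifting logic in isolation, here is a toy version with a hypothetical two-word vocabulary:

```python
# Hypothetical two-word vocabulary, shifted by 3 to make room for the four
# reserved tokens, exactly as build_reverse_word_index does above.
word_index = {'great': 1, 'movie': 2}
word_index = {k: v + 3 for k, v in word_index.items()}
word_index.update({'<PAD>': 0, '<START>': 1, '<UNK>': 2, '<UNUSED>': 3})
reverse_word_index = {v: k for k, v in word_index.items()}

# Index 99 is not in the vocabulary and decodes to '?'.
decoded = ' '.join(reverse_word_index.get(i, '?') for i in [1, 4, 5, 99])
print(decoded)  # <START> great movie ?
```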
Now we can use the `decode_review` function to display the text for the first review:
```
decode_review(pp_train_data[0])
```
## Graph construction
Graph construction involves creating embeddings for text samples and then using
a similarity function to compare the embeddings.
Before proceeding further, we first create a directory to store artifacts
created by this tutorial.
```
!mkdir -p /tmp/imdb
```
### Create sample embeddings
We will use pretrained Swivel embeddings to create embeddings in the
`tf.train.Example` format for each sample in the input. We will store the
resulting embeddings in the `TFRecord` format along with an additional feature
that represents the ID of each sample. This is important and will allow us to
match sample embeddings with corresponding nodes in the graph later.
```
# This is necessary because hub.KerasLayer assumes tensor hashability, which
# is not supported in eager mode.
tf.compat.v1.disable_tensor_equality()
pretrained_embedding = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1'
hub_layer = hub.KerasLayer(
pretrained_embedding, input_shape=[], dtype=tf.string, trainable=True)
def _int64_feature(value):
"""Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
"""Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
"""Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(word_vector, record_id):
"""Create tf.Example containing the sample's embedding and its ID."""
text = decode_review(word_vector)
# Shape = [batch_size,].
sentence_embedding = hub_layer(tf.reshape(text, shape=[-1,]))
# Flatten the sentence embedding back to 1-D.
sentence_embedding = tf.reshape(sentence_embedding, shape=[-1])
features = {
'id': _bytes_feature(str(record_id)),
'embedding': _float_feature(sentence_embedding.numpy())
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_embeddings(word_vectors, output_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for word_vector in word_vectors:
example = create_embedding_example(word_vector, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(pp_train_data, '/tmp/imdb/embeddings.tfr', 0)
```
### Build a graph
Now that we have the sample embeddings, we will use them to build a similarity
graph, i.e., nodes in this graph will correspond to samples and edges in this
graph will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building tool that builds a graph
based on sample embeddings. It uses **cosine similarity** as the similarity
measure to compare embeddings and build edges between them. It also allows us to
specify a similarity threshold, which can be used to discard dissimilar edges
from the final graph. In this example, using 0.99 as the similarity threshold,
we end up with a graph that has 445,327 bi-directional edges.
```
!python -m neural_structured_learning.tools.build_graph \
--similarity_threshold=0.99 /tmp/imdb/embeddings.tfr /tmp/imdb/graph_99.tsv
```
**Note:** Graph quality, and by extension embedding quality, is very important
for graph regularization. While we have used Swivel embeddings in this notebook,
using BERT embeddings, for instance, will likely capture review semantics more
accurately. We encourage users to use embeddings of their choice, as appropriate
to their needs.
## Sample features
We create sample features for our problem in the `tf.train.Example` format and
persist them in the `TFRecord` format. Each sample will include the following
three features:
1. **id**: The node ID of the sample.
2. **words**: An int64 list containing word IDs.
3. **label**: A singleton int64 identifying the target class of the review.
```
def create_example(word_vector, label, record_id):
"""Create tf.Example containing the sample's word vector, label, and ID."""
features = {
'id': _bytes_feature(str(record_id)),
'words': _int64_feature(np.asarray(word_vector)),
'label': _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(word_vectors, labels, record_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for word_vector, label in zip(word_vectors, labels):
example = create_example(word_vector, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (word vectors and labels) for training and test
# data in TFRecord format.
next_record_id = create_records(pp_train_data, pp_train_labels,
'/tmp/imdb/train_data.tfr', 0)
create_records(pp_test_data, pp_test_labels, '/tmp/imdb/test_data.tfr',
next_record_id)
```
## Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the
augmented training data for Neural Structured Learning. The NSL framework
provides a tool that can combine the graph and the sample features to produce
the final training data for graph regularization. The resulting training data
will include original sample features as well as features of their corresponding
neighbors.
In this tutorial, we consider undirected edges and we use a maximum of 3
neighbors per sample to augment training data with graph neighbors.
```
!python -m neural_structured_learning.tools.pack_nbrs \
--max_nbrs=3 --add_undirected_edges=True \
/tmp/imdb/train_data.tfr '' /tmp/imdb/graph_99.tsv \
/tmp/imdb/nsl_train_data.tfr
```
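The packed output stores each sample's neighbor features under prefixed keys. The following sketch shows the naming convention that the parsing code later in this notebook relies on:

```python
# Neighbor feature keys follow a '<prefix><index>_<feature>' / '<prefix><index>_weight'
# convention. These constants match the ones defined later in this notebook.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
max_nbrs = 3

keys = []
for i in range(max_nbrs):
    # Word IDs of the i-th neighbor, and the corresponding edge weight.
    keys.append('{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words'))
    keys.append('{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX))
print(keys)
```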
## Base model
We are now ready to build a base model without graph regularization. In order to build this model, we can either use embeddings that were used in building the graph, or we can learn new embeddings jointly along with the classification task. For the purpose of this notebook, we will do the latter.
### Global variables
```
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
```
### Hyperparameters
We will use an instance of `HParams` to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below:
- **num_classes**: There are 2 classes -- *positive* and *negative*.
- **max_seq_length**: This is the maximum number of words considered from each movie review in this example.
- **vocab_size**: This is the size of the vocabulary considered for this example.
- **distance_type**: This is the distance metric used to regularize the sample
with its neighbors.
- **graph_regularization_multiplier**: This controls the relative weight of
the graph regularization term in the overall loss function.
- **num_neighbors**: The number of neighbors used for graph regularization.
- **num_fc_units**: The number of units in the fully connected layer of the neural network.
- **train_epochs**: The number of training epochs.
- **batch_size**: Batch size used for training and evaluation.
- **eval_steps**: The number of batches to process before deeming evaluation
is complete. If set to `None`, all instances in the test set are evaluated.
```
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 2
self.max_seq_length = 256
self.vocab_size = 10000
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 2
### model architecture
self.num_embedding_dims = 16
self.num_lstm_dims = 64
self.num_fc_units = 64
### training parameters
self.train_epochs = 10
self.batch_size = 128
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
```
### Prepare the data
The reviews—the arrays of integers—must be converted to tensors before being fed
into the neural network. This conversion can be done in a couple of ways:
* Convert the arrays into vectors of `0`s and `1`s indicating word occurrence,
similar to a one-hot encoding. For example, the sequence `[3, 5]` would become a `10000`-dimensional vector that is all zeros except for indices `3` and `5`, which are ones. Then, make this the first layer in our network—a `Dense` layer—that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.
* Alternatively, we can pad the arrays so they all have the same length, then
create an integer tensor of shape `max_length * num_reviews`. We can use an
embedding layer capable of handling this shape as the first layer in our
network.
In this tutorial, we will use the second approach.
Since the movie reviews must be the same length, we will use the `pad_sequence`
function defined below to standardize the lengths.
```
def pad_sequence(sequence, max_seq_length):
"""Pads the input sequence (a `tf.SparseTensor`) to `max_seq_length`."""
pad_size = tf.maximum([0], max_seq_length - tf.shape(sequence)[0])
padded = tf.concat(
[sequence.values,
tf.fill((pad_size), tf.cast(0, sequence.dtype))],
axis=0)
# The input sequence may be larger than max_seq_length. Truncate down if
# necessary.
return tf.slice(padded, [0], [max_seq_length])
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a variable length word ID vector.
feature_spec = {
'words': tf.io.VarLenFeature(tf.int64),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.VarLenFeature(tf.int64)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
# Since the 'words' feature is a variable length word vector, we pad it to a
# constant maximum length based on HPARAMS.max_seq_length
features['words'] = pad_sequence(features['words'], HPARAMS.max_seq_length)
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
features[nbr_feature_key] = pad_sequence(features[nbr_feature_key],
HPARAMS.max_seq_length)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('/tmp/imdb/nsl_train_data.tfr', True)
test_dataset = make_dataset('/tmp/imdb/test_data.tfr')
```
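The pad-or-truncate logic inside `pad_sequence` can be illustrated in plain Python (a simplified analogue that operates on lists instead of `tf.SparseTensor`s):

```python
def pad_or_truncate(seq, max_seq_length, pad_value=0):
    """Pure-Python analogue of pad_sequence above: right-pad, then truncate."""
    padded = list(seq) + [pad_value] * max(0, max_seq_length - len(seq))
    return padded[:max_seq_length]

print(pad_or_truncate([7, 8, 9], 5))           # [7, 8, 9, 0, 0]
print(pad_or_truncate([1, 2, 3, 4, 5, 6], 5))  # [1, 2, 3, 4, 5]
```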
### Build the model
A neural network is created by stacking layers—this requires two main architectural decisions:
* How many layers to use in the model?
* How many *hidden units* to use for each layer?
In this example, the input data consists of an array of word-indices. The labels to predict are either 0 or 1.
We will use a bi-directional LSTM as our base model in this tutorial.
```
# This function exists as an alternative to the bi-LSTM model used in this
# notebook.
def make_feed_forward_model():
"""Builds a simple 2 layer feed forward neural network."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(embedding_layer)
dense_layer = tf.keras.layers.Dense(16, activation='relu')(pooling_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def make_bilstm_model():
"""Builds a bi-directional LSTM model."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size,
HPARAMS.num_embedding_dims)(
inputs)
lstm_layer = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(HPARAMS.num_lstm_dims))(
embedding_layer)
dense_layer = tf.keras.layers.Dense(
HPARAMS.num_fc_units, activation='relu')(
lstm_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
# Feel free to use an architecture of your choice.
model = make_bilstm_model()
model.summary()
```
The layers are effectively stacked sequentially to build the classifier:
1. The first layer is an `Input` layer which takes the integer-encoded
vocabulary.
2. The next layer is an `Embedding` layer, which takes the integer-encoded
vocabulary and looks up the embedding vector for each word-index. These
vectors are learned as the model trains. The vectors add a dimension to the
output array. The resulting dimensions are: `(batch, sequence, embedding)`.
3. Next, a bidirectional LSTM layer returns a fixed-length output vector for
each example.
4. This fixed-length output vector is piped through a fully-connected (`Dense`)
layer with 64 hidden units.
5. The last layer is densely connected with a single output node. Using the
`sigmoid` activation function, this value is a float between 0 and 1,
representing a probability, or confidence level.
### Hidden units
The above model has two intermediate or "hidden" layers, between the input and
output, and excluding the `Embedding` layer. The number of outputs (units,
nodes, or neurons) is the dimension of the representational space for the layer.
In other words, the amount of freedom the network is allowed when learning an
internal representation.
If a model has more hidden units (a higher-dimensional representation space),
and/or more layers, then the network can learn more complex representations.
However, it makes the network more computationally expensive and may lead to
learning unwanted patterns—patterns that improve performance on training data
but not on the test data. This is called *overfitting*.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a
binary classification problem and the model outputs a probability (a single-unit
layer with a sigmoid activation), we'll use the `binary_crossentropy` loss
function.
```
model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
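To see why `binary_crossentropy` suits a sigmoid output, here is the loss computed by hand for a single prediction (a toy illustration):

```python
import math

def binary_crossentropy(y_true, y_pred):
    """Binary cross-entropy for a single (label, predicted probability) pair."""
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

# A confident correct prediction incurs a small loss...
print(round(binary_crossentropy(1, 0.9), 4))  # 0.1054
# ...while a confident wrong prediction incurs a large one.
print(round(binary_crossentropy(0, 0.9), 4))  # 2.3026
```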
### Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen
before. Create a *validation set* by setting apart a fraction of the original
training data. (Why not use the testing set now? Our goal is to develop and tune
our model using only the training data, then use the test data just once to
evaluate our accuracy).
In this tutorial, we take roughly 10% of the initial training samples (10% of 25000) as labeled data for training and the remaining as validation data. Since the initial train/test split was 50/50 (25000 samples each), the effective train/validation/test split we now have is 5/45/50.
Note that `train_dataset` has already been batched and shuffled.
```
validation_fraction = 0.9
validation_size = int(validation_fraction *
int(training_samples_count / HPARAMS.batch_size))
print(validation_size)
validation_dataset = train_dataset.take(validation_size)
train_dataset = train_dataset.skip(validation_size)
```
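Working through the arithmetic of this split (assuming the 25,000-sample IMDB training split and the batch size of 128 from `HPARAMS`):

```python
# Recompute the validation split sizes in batches and samples.
training_samples_count = 25000
batch_size = 128
validation_fraction = 0.9

total_batches = training_samples_count // batch_size        # 195 batches
validation_size = int(validation_fraction * total_batches)  # 175 batches
train_batches = total_batches - validation_size             # 20 batches
# ~20 batches * 128 = 2560 samples, roughly 10% of the original training data.
print(validation_size, train_batches, train_batches * batch_size)
```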
### Train the model
Train the model in mini-batches. While training, monitor the model's loss and accuracy on the validation set:
```
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
```
### Evaluate the model
Now, let's see how the model performs. Two values will be returned: the loss (a number representing our error; lower values are better) and the accuracy.
```
results = model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(results)
```
### Create a graph of accuracy/loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
```
Notice the training loss *decreases* with each epoch and the training accuracy
*increases* with each epoch. This is expected when using a gradient descent
optimization—it should minimize the desired quantity on every iteration.
## Graph regularization
We are now ready to try graph regularization using the base model that we built
above. We will use the `GraphRegularization` wrapper class provided by the
Neural Structured Learning framework to wrap the base (bi-LSTM) model to include
graph regularization. The rest of the steps for training and evaluating the
graph-regularized model are similar to that of the base model.
### Create graph-regularized model
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because `model` has already been trained for a few
iterations, and reusing this trained model to create a graph-regularized model
will not be a fair comparison for `model`.
```
# Build a new base LSTM model.
base_reg_model = make_bilstm_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.GraphRegConfig(
neighbor_config=nsl.configs.GraphNeighborConfig(
max_neighbors=HPARAMS.num_neighbors),
multiplier=HPARAMS.graph_regularization_multiplier,
distance_config=nsl.configs.DistanceConfig(
distance_type=HPARAMS.distance_type, sum_over_axis=-1))
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
### Train the model
```
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
```
### Evaluate the model
```
graph_reg_results = graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(graph_reg_results)
```
### Create a graph of accuracy/loss over time
```
graph_reg_history_dict = graph_reg_history.history
graph_reg_history_dict.keys()
```
There are six entries: one for each monitored metric -- loss, graph loss, and
accuracy -- during training and validation. We can use these to plot the
training, graph, and validation losses for comparison, as well as the training
and validation accuracy. Note that the graph loss is only computed during
training, so its value will be 0 during validation.
```
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['graph_loss']
val_loss = graph_reg_history_dict['val_loss']
val_graph_loss = graph_reg_history_dict['val_graph_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
# "-ms" is for solid magenta line with square markers.
plt.plot(epochs, val_graph_loss, '-ms', label='Validation graph loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
```
## The power of semi-supervised learning
Semi-supervised learning and more specifically, graph regularization in the
context of this tutorial, can be really powerful when the amount of training
data is small. The lack of training data is compensated by leveraging similarity
among the training samples, which is not possible in traditional supervised
learning.
We define ***supervision ratio*** as the ratio of training samples to the total
number of samples which includes training, validation, and test samples. In this
notebook, we have used a supervision ratio of 0.05 (i.e., 5% of the samples are used as labeled training data)
for training both the base model as well as the graph-regularized model. We
illustrate the impact of the supervision ratio on model accuracy in the cell
below.
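The supervision ratio used in this notebook can be checked with the numbers from the validation split above (assumed values: roughly 20 training batches of 128 samples remain after the split):

```python
# Supervision ratio = labeled training samples / total samples.
train_batches = 20      # batches left for training after the validation split
batch_size = 128
total_samples = 50000   # 25k train + 25k test in the IMDB dataset

supervision_ratio = (train_batches * batch_size) / total_samples
print(round(supervision_ratio, 2))  # 0.05
```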
```
# Accuracy values for both the Bi-LSTM model and the feed forward NN model have
# been precomputed for the following supervision ratios.
supervision_ratios = [0.3, 0.15, 0.05, 0.03, 0.02, 0.01, 0.005]
model_tags = ['Bi-LSTM model', 'Feed Forward NN model']
base_model_accs = [[84, 84, 83, 80, 65, 52, 50], [87, 86, 76, 74, 67, 52, 51]]
graph_reg_model_accs = [[84, 84, 83, 83, 65, 63, 50],
[87, 86, 80, 75, 67, 52, 50]]
plt.clf() # clear figure
fig, axes = plt.subplots(1, 2)
fig.set_size_inches((12, 5))
for ax, model_tag, base_model_acc, graph_reg_model_acc in zip(
axes, model_tags, base_model_accs, graph_reg_model_accs):
# "-r^" is for solid red line with triangle markers.
ax.plot(base_model_acc, '-r^', label='Base model')
# "-gD" is for solid green line with diamond markers.
ax.plot(graph_reg_model_acc, '-gD', label='Graph-regularized model')
ax.set_title(model_tag)
ax.set_xlabel('Supervision ratio')
ax.set_ylabel('Accuracy(%)')
ax.set_ylim((25, 100))
ax.set_xticks(range(len(supervision_ratios)))
ax.set_xticklabels(supervision_ratios)
ax.legend(loc='best')
plt.show()
```
It can be observed that as the supervision ratio decreases, model accuracy also
decreases. This is true for both the base model and for the graph-regularized
model, regardless of the model architecture used. However, notice that the
graph-regularized model performs better than the base model for both the
architectures. In particular, for the Bi-LSTM model, when the supervision ratio
is 0.01, the accuracy of the graph-regularized model is **~20%** higher than
that of the base model. This is primarily because of semi-supervised learning
for the graph-regularized model, where structural similarity among training
samples is used in addition to the training samples themselves.
## Conclusion
We have demonstrated the use of graph regularization using the Neural Structured
Learning (NSL) framework even when the input does not contain an explicit graph.
We considered the task of sentiment classification of IMDB movie reviews for
which we synthesized a similarity graph based on review embeddings. We encourage
users to experiment further by varying hyperparameters, the amount of
supervision, and by using different model architectures.
# Road Following - Live demo
In this notebook, we will use the model we trained to move the JetBot smoothly along a track.
### Load Trained Model
We will assume that you have already downloaded ``best_steering_model_xy.pth`` to your workstation as instructed in the "train_model.ipynb" notebook. Now, you should upload the model file to this notebook's directory on the JetBot. Once that's finished, there should be a file named ``best_steering_model_xy.pth`` in this notebook's directory.
> Please make sure the file has uploaded fully before calling the next cell
Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
```
import torchvision
import torch
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
```
Next, load the trained weights from the ``best_steering_model_xy.pth`` file that you uploaded.
```
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
```
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
```
device = torch.device('cuda')
model = model.to(device)
model = model.eval().half()
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue. The format that we trained our model on doesn't exactly match the format of the camera, so we need to do some preprocessing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
image = PIL.Image.fromarray(image)
image = transforms.functional.to_tensor(image).to(device).half()
image.sub_(mean[:, None, None]).div_(std[:, None, None])
return image[None, ...]
```
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
camera = Camera()
image_widget = ipywidgets.Image()
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot()
```
Now, we will define sliders to control JetBot.
> Note: We have initialized the slider values to the best known configurations; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment.
1. Speed Control (``speed_gain_slider``): To start your JetBot, increase ``speed_gain_slider``
2. Steering Gain Control (``steering_gain_slider``): If you see JetBot wobbling, reduce ``steering_gain_slider`` until it is smooth
3. Steering Bias Control (``steering_bias_slider``): If you see JetBot biased towards the extreme right or extreme left side of the track, adjust this slider until JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should play with the above-mentioned sliders at low speed to get smooth JetBot road-following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)
```
Next, let's display some sliders that will let us see what JetBot is thinking. The x and y sliders will display the predicted x, y values.
The steering slider will display our estimated steering value. Please remember, this value isn't the actual angle of the target, but simply a value that is
nearly proportional. When the actual angle is ``0``, this will be zero, and it will increase / decrease with the actual angle.
```
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
display(ipywidgets.HBox([y_slider, speed_slider]))
display(x_slider, steering_slider)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will perform the following steps:
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
def execute(change):
global angle, angle_last
image = change['new']
xy = model(preprocess(image)).detach().float().cpu().numpy().flatten()
x = xy[0]
y = (0.5 - xy[1]) / 2.0
x_slider.value = x
y_slider.value = y
speed_slider.value = speed_gain_slider.value
angle = np.arctan2(x, y)
pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
angle_last = angle
steering_slider.value = pid + steering_bias_slider.value
robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
execute({'new': camera.value})
```
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing.
We accomplish that with the observe function.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and that it is on the Lego track (or whichever track you collected data on). The road follower should work, but the neural network is only as good as the data it's trained on!
```
camera.observe(execute, names='value')
```
Awesome! If your robot is plugged in, it should now be generating new commands with each new camera frame.
You can now place JetBot on the Lego track (or whichever track you collected data on) and see whether it can follow it.
If you want to stop this behavior, you can detach this callback by executing the code below.
```
camera.unobserve(execute, names='value')
robot.stop()
```
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly along the track, following the road!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios, and the JetBot should get even better :)
# Deep Deterministic Policy Gradients (DDPG)
---
In this notebook, we train DDPG with OpenAI Gym's Pendulum-v0 environment.
### 1. Import the Necessary Packages
```
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
from ddpg_agent import Agent
```
### 2. Instantiate the Environment and Agent
```
env = gym.make('Pendulum-v0')
env.seed(2)
agent = Agent(state_size=3, action_size=1, random_seed=2)
```
### 3. Train the Agent with DDPG
```
def ddpg(n_episodes=1000, max_t=300, print_every=100):
scores_deque = deque(maxlen=print_every)
scores = []
for i_episode in range(1, n_episodes+1):
state = env.reset()
agent.reset()
score = 0
for t in range(max_t):
action = agent.act(state)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_deque.append(score)
scores.append(score)
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
if i_episode % print_every == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
return scores
scores = ddpg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 4. Watch a Smart Agent!
```
agent.actor_local.load_state_dict(torch.load('checkpoint_actor.pth'))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic.pth'))
state = env.reset()
for t in range(1200):
action = agent.act(state, add_noise=False)
env.render()
state, reward, done, _ = env.step(action)
if done:
print(t)
break
env.close()
```
### 5. Explore
In this exercise, we have provided a sample DDPG agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:
- Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster than this benchmark implementation. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task!
- Write your own DDPG implementation. Use this code as reference only when needed -- try as much as you can to write your own algorithm from scratch.
- You may also like to implement prioritized experience replay, to see if it speeds learning.
- The current implementation adds Ornstein-Uhlenbeck noise to the action space. However, it has [been shown](https://blog.openai.com/better-exploration-with-parameter-noise/) that adding noise to the parameters of the neural network policy can improve performance. Make this change to the code, to verify it for yourself!
- Write a blog post explaining the intuition behind the DDPG algorithm and demonstrating how to use it to solve an RL environment of your choosing.
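As a point of reference for the exploration-noise discussion above, below is a minimal NumPy sketch of the Ornstein-Uhlenbeck process, which produces temporally correlated, mean-reverting noise. The parameter values (`theta`, `sigma`) are common defaults and not necessarily those used in `ddpg_agent.py`.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: x_{t+1} = x_t + theta*(mu - x_t) + sigma*N(0, 1)."""

    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.rng = np.random.RandomState(seed)
        self.reset()

    def reset(self):
        # Restart the process at the long-run mean (called at episode start).
        self.state = self.mu.copy()

    def sample(self):
        # Drift back toward mu, plus Gaussian diffusion.
        dx = self.theta * (self.mu - self.state) \
             + self.sigma * self.rng.randn(len(self.state))
        self.state = self.state + dx
        return self.state

noise = OUNoise(size=1)
samples = np.array([noise.sample() for _ in range(1000)])
```

Because consecutive samples are correlated, this noise produces smoother exploratory trajectories than independent Gaussian noise, which is why it is a popular choice for continuous control.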
```
%matplotlib inline
import numpy as np
import sys
import os
import matplotlib.pyplot as plt
import math
import pickle
import pandas as pd
import scipy.io
import time
import h5py
import bz2
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
from numpy import linalg as LA
from scipy.spatial import Delaunay
from sklearn.neighbors import NearestNeighbors
#sys.path.insert(0, "../")
from info3d import *
from nn_matchers import *
```
# EXTRACTING the existing sample data
```
with open('point_collection/new_contiguous_point_collection.pickle','rb') as f:
new_contiguous_point_collection = pickle.load(f)
with open('descriptors/new_complete_res5_4by5_descriptors.pickle','rb') as f:
descriptors = pickle.load(f)
"""
Parameters
"""
# We originally used a radius range of 0.25 to 5.0 in increments of 0.25;
# a smaller range is used here for the demo.
radius_range = np.arange(0.5,1.6,0.5)
```
# Step 1: Results of partial spaces
```
fig=plt.figure(figsize=(9, 3))
RawNN = []
RansacGeneralizedNN = []
RawNN_intra_errors = []
RansacGeneralizedNN_intra_errors = []
for radius in radius_range:
try:
with bz2.BZ2File('testing_results/partials/radius_{}_RAW_scores.pickle.bz2'.format(radius), 'r') as bz2_f:
partial_scores = pickle.load(bz2_f)
"""
with open('testing_results/partials/radius_{}_RAW_errors.pickle'.format(radius), 'rb') as f:
partial_scores = pickle.load(f)
"""
partial_errors = NN_matcher(partial_scores)
RawNN.append([
radius,
np.mean(partial_errors[:,1]),
np.std(partial_errors[:,1]),
])
correct_interspace_labels_idxs = np.where(partial_errors[:,1]==0)[0]
intraspace_errors = partial_errors[correct_interspace_labels_idxs,2]
RawNN_intra_errors.append([
radius,
np.nanmean(intraspace_errors),
np.nanstd(intraspace_errors)
])
except:
pass
try:
with bz2.BZ2File('testing_results/partials/radius_{}_RANSAC_scores.pickle.bz2'.format(radius), 'r') as bz2_f:
partial_scores = pickle.load(bz2_f)
"""
with open('testing_results/partials/radius_{}_RANSAC_scores'.format(radius), 'rb') as f:
partial_scores = pickle.load(f)
"""
partial_errors = NN_matcher(partial_scores)
RansacGeneralizedNN.append([
radius,
np.nanmean(partial_errors[:,1]),
np.nanstd(partial_errors[:,1]),
])
correct_interspace_labels_idxs = np.where(partial_errors[:,1]==0)[0]
intraspace_errors = partial_errors[correct_interspace_labels_idxs,2]
RansacGeneralizedNN_intra_errors.append([
radius,
np.nanmean(intraspace_errors),
np.nanstd(intraspace_errors)
])
except:
pass
RansacGeneralizedNN = np.asarray(RansacGeneralizedNN)
RawNN = np.asarray(RawNN)
RawNN_intra_errors = np.asarray(RawNN_intra_errors)
RansacGeneralizedNN_intra_errors = np.asarray(RansacGeneralizedNN_intra_errors)
ax1 = fig.add_subplot(121)
ax1.grid(alpha = 0.7)
ax1.set_ylim(-0.025,1.025)
ax1.set_xlim(radius_range[0]-0.25,radius_range[-1]+0.25)
markersize = 8
ax1.set_ylabel("INTER-space Privacy")
ax1.set_xlabel("Partial Radius")
#ax1.set_yticklabels(fontsize = 16)
#ax1.set_xticklabels(fontsize = 16)
ax1.plot(
RawNN[:,0],RawNN[:,1],
"-o",
linewidth = 2,
mew = 2,markersize = markersize,
label = "Raw"
)
ax1.plot(
RansacGeneralizedNN[:,0],RansacGeneralizedNN[:,1],
"-s",
linewidth = 2,
mew = 2,markersize = markersize,
label = "RANSAC"
)
ax1.legend(loc = "lower left")
ax2 = fig.add_subplot(122)
ax2.grid(alpha = 0.7)
ax2.set_ylim(-0.25,10.25)
ax2.set_xlim(radius_range[0]-0.25,radius_range[-1]+0.25)
ax2.set_ylabel("INTRA-space Privacy")
ax2.set_xlabel("Partial Radius")
#ax2.set_yticklabels(fontsize = 16)
#ax2.set_xticklabels(fontsize = 16)
plt.minorticks_on()
ax2.plot(
RawNN_intra_errors[:,0],
RawNN_intra_errors[:,1],
linewidth = 2,
marker = 'o',fillstyle = 'none',
mew = 2,markersize = markersize,
label = "Raw"
)
ax2.plot(
RansacGeneralizedNN_intra_errors[:,0],
RansacGeneralizedNN_intra_errors[:,1],
linewidth = 2,
marker = 's',fillstyle = 'none',
mew = 2,markersize = markersize,
label = "RANSAC"
)
ax2.legend(loc = "lower left");
plt.savefig('plots/partial-spaces.png', format='png', dpi=300,bbox_inches = 'tight')
```
# Step 2: Results of the successive case
```
"""
Parameters
"""
# We used a radius range of 0.25 to 5.0 in increments of 0.25.
radius_range = radius_range
# For our work, we originally used 50 samples with a further 100 successive releases for our investigation.
# Below are lower parameters, change as desired.
samples = 25
releases = 50
# For demonstration purposes, we skip testing some successive samples but we still accumulate them.
skip = 3
succ_RawNN_errors = []
succ_RawNN_partial_errors = []
succ_RansacGeneralizedNN_errors = []
succ_RansacGeneralizedNN_partial_errors = []
t0 = time.time()
for radius in radius_range:
succ_RawNN_per_iteration_errors = []
succ_RansacGeneralizedNN_per_iteration_errors = []
try:
"""
with open('testing_results/successive/radius_{}_RAW_successive_scores.pickle'.format(radius), 'rb') as f:
successive_scores = pickle.load(f)
"""
with bz2.BZ2File('testing_results/successive/radius_{}_RAW_successive_scores.pickle.bz2'.format(radius), 'r') as bz2_f:
successive_scores = pickle.load(bz2_f)
with open('testing_results/successive/radius_{}_RAW_successive_errors.pickle'.format(radius), 'rb') as f:
successive_errors = pickle.load(f)
for obj_, iteration_errors in successive_errors:
#print(" RAW",radius,iteration_errors.shape)
if iteration_errors.shape[0] < int(releases/skip):
continue
else:
succ_RawNN_per_iteration_errors.append(iteration_errors[:int(releases/skip)])
succ_RawNN_errors.append([
radius,
np.asarray(succ_RawNN_per_iteration_errors)
])
#print("Raw",np.asarray(succ_RawNN_per_iteration_errors).shape)
except:# Exception as ex:
#print(radius,": successive RawNN\n ", ex)
pass
try:
"""
with open('testing_results/successive/radius_{}_RANSAC_successive_scores.pickle'.format(radius), 'rb') as f:
successive_scores = pickle.load(f)
"""
with bz2.BZ2File('testing_results/successive/radius_{}_RANSAC_successive_scores.pickle.bz2'.format(radius), 'r') as bz2_f:
successive_scores = pickle.load(bz2_f)
with open('testing_results/successive/radius_{}_RANSAC_successive_errors.pickle'.format(radius), 'rb') as f:
successive_errors = pickle.load(f)
for obj_, iteration_scores in successive_scores:#[:-1]:
#print(" RANSAC",radius,iteration_errors.shape)
iteration_errors = NN_matcher(iteration_scores)
if iteration_errors.shape[0] < int(releases/skip):
continue
else:
succ_RansacGeneralizedNN_per_iteration_errors.append(iteration_errors[:int(releases/skip)])
succ_RansacGeneralizedNN_errors.append([
radius,
np.asarray(succ_RansacGeneralizedNN_per_iteration_errors)
])
#print(np.asarray(succ_RansacGeneralizedNN_errors).shape)
except:# Exception as ex:
#print(radius,": successive RansacNN\n ", ex)
pass
print("Done with radius = {:.2f} in {:.3f} seconds".format(radius,time.time() - t0))
t0 = time.time()
for radius, per_iteration_errors in succ_RawNN_errors:
#print(radius,"Raw",per_iteration_errors.shape)
succ_RawNN_partial_errors_per_rel = []
for rel_i in np.arange(per_iteration_errors.shape[1]):
correct_interspace_labels_idxs = np.where(per_iteration_errors[:,rel_i,1]==0)[0]
intraspace_errors = per_iteration_errors[correct_interspace_labels_idxs,rel_i,2]
succ_RawNN_partial_errors_per_rel.append([
rel_i,
np.mean(intraspace_errors),
np.std(intraspace_errors)
])
succ_RawNN_partial_errors.append([
radius,
np.asarray(succ_RawNN_partial_errors_per_rel)
])
for radius, per_iteration_errors in succ_RansacGeneralizedNN_errors:
#print(radius,per_iteration_errors.shape)
succ_RansacGeneralizedNN_errors_per_rel = []
for rel_i in np.arange(per_iteration_errors.shape[1]):
correct_interspace_labels_idxs = np.where(per_iteration_errors[:,rel_i,1]==0)[0]
intraspace_errors = per_iteration_errors[correct_interspace_labels_idxs,rel_i,2]
succ_RansacGeneralizedNN_errors_per_rel.append([
rel_i,
np.mean(intraspace_errors),
np.std(intraspace_errors)
])
succ_RansacGeneralizedNN_partial_errors.append([
radius,
np.asarray(succ_RansacGeneralizedNN_errors_per_rel)
])
fig=plt.figure(figsize=(15, 5))
ax1 = fig.add_subplot(121)
ax1.grid(alpha = 0.7)
ax1.set_ylim(-0.025,1.025)
ax1.set_xlim(0,releases-skip)
markersize = 8
ax1.set_ylabel("INTER-space Privacy", fontsize = 16)
ax1.set_xlabel("Releases", fontsize = 16)
for radius, RawNN_per_iteration_errors in succ_RawNN_errors:
#print(RawNN_per_iteration_errors.shape)
ax1.plot(
np.arange(1,releases-skip,skip),#[:RawNN_per_iteration_errors.shape[1]],
np.mean(RawNN_per_iteration_errors[:,:,1], axis = 0),
':o',
label = "r ="+ str(radius) + " Raw"
)
for radius, RansacNN_per_iteration_errors in succ_RansacGeneralizedNN_errors:
#print(RansacNN_per_iteration_errors.shape)
ax1.plot(
np.arange(1,releases-skip,skip),
np.mean(RansacNN_per_iteration_errors[:,:,1], axis = 0),
'-s',
label = "r ="+ str(radius) + " RANSAC"
)
ax1.legend(loc = "best", ncol = 2)
ax2 = fig.add_subplot(122)
ax2.grid(alpha = 0.7)
ax2.set_ylim(-0.25,12.25)
ax2.set_xlim(0,releases-skip)
ax2.set_ylabel("INTRA-space Privacy", fontsize = 16)
ax2.set_xlabel("Releases", fontsize = 16)
for radius, errors_per_rel in succ_RansacGeneralizedNN_partial_errors:
ax2.plot(
np.arange(1,releases-skip,skip),
errors_per_rel[:,1],
#errors_per_rel[:,2],
'-s',
linewidth = 2, #capsize = 4.0,
#marker = markers[0],
#fillstyle = 'none',
mew = 2, markersize = markersize,
label = "r ="+ str(radius)+", RANSAC"
)
ax2.legend(loc = "best");
plt.savefig('plots/successive-partial-spaces.png', format='png', dpi=300,bbox_inches = 'tight')
```
# Step 3: Results with conservative plane releasing
```
"""
Parameters:
Also, we use the same successive samples from successive releasing for direct comparability of results.
"""
# We used a radius range of 0.25 to 5.0 in increments of 0.25.
radius_range = radius_range
# For our work, we originally used 50 samples with a further 100 successive releases for our investigation.
# Below are lower parameters, change as desired.
samples = 25
releases = 50
planes = np.arange(1,30,3)
# For demonstration purposes, we skip testing some successive samples but we still accumulate them.
skip = 3
conservative_RANSAC_error_results = []
t0 = time.time()
for radius in radius_range[:1]:
succ_RansacGeneralizedNN_per_iteration_errors = []
try:
"""
with open('testing_results/conservative/radius_{}_RANSAC_conservative_scores.pickle'.format(radius), 'rb') as f:
conservative_scores = pickle.load(f)
"""
with bz2.BZ2File('testing_results/conservative/radius_{}_RANSAC_conservative_scores.pickle.bz2'.format(radius), 'r') as bz2_f:
conservative_scores = pickle.load(bz2_f)
for obj_, per_plane_scores in conservative_scores:#[:-1]:
per_plane_errors = []
skipped= False
for max_plane, iteration_scores in per_plane_scores:
iteration_errors = NN_matcher(iteration_scores)
if iteration_errors.shape[0] >= int(releases/skip):
per_plane_errors.append(iteration_errors[:int(releases/skip)])
else:
skipped = True
#print("RANSAC: skipped",iteration_errors.shape)
if not skipped:
succ_RansacGeneralizedNN_per_iteration_errors.append(per_plane_errors)
conservative_RANSAC_error_results.append([
radius,
np.asarray(succ_RansacGeneralizedNN_per_iteration_errors)
])
print(np.asarray(succ_RansacGeneralizedNN_per_iteration_errors).shape)
except Exception as ex:
print(radius,": conservative RansacNN\n ", ex)
pass
print("Done with radius = {:.2f} in {:.3f} seconds".format(radius,time.time() - t0))
t0 = time.time()
"""
# Uncomment below if you want to overwrite the existing results.
"""
#with open('testing_results/conservative/conservative_RANSAC_error_results.pickle', 'wb') as f:
# pickle.dump(conservative_RANSAC_error_results,f)
"""
Preparing the results of the case with *Conservative Releasing*.
"""
releases_range = np.arange(1,releases-skip,skip)
X, Y = np.meshgrid(releases_range, planes)
test_vp_cn_05 = np.asarray(conservative_RANSAC_error_results[0][1])
mean_vp_cn_05 = np.mean(test_vp_cn_05[:,:,:,1],axis = 0)
#test_vp_cn_10 = np.asarray(conservative_RANSAC_error_results[1][1])
#mean_vp_cn_10 = np.mean(test_vp_cn_10[:,:,:,1],axis = 0)
intra_vp_cn_05 = np.zeros(test_vp_cn_05.shape[1:])
#intra_vp_cn_10 = np.zeros(test_vp_cn_10.shape[1:])
for plane_i, plane in enumerate(planes):
for rel_i, rel in enumerate(releases_range):
correct_interspace_labels_idxs_05 = np.where(test_vp_cn_05[:,plane_i,rel_i,1]==0)[0]
#correct_interspace_labels_idxs_10 = np.where(test_vp_cn_10[:,plane_i,rel_i,1]==0)[0]
intraspace_errors_05 = test_vp_cn_05[correct_interspace_labels_idxs_05,plane_i,rel_i,2]
#intraspace_errors_10 = test_vp_cn_10[correct_interspace_labels_idxs_10,plane_i,rel_i,2]
intra_vp_cn_05[plane_i,rel_i] = np.asarray([
np.mean(intraspace_errors_05),
np.std(intraspace_errors_05),
0,
np.nan
])
fig = plt.figure(figsize=(11,8))
ax = plt.axes(projection='3d')
surf = ax.plot_surface(
X, Y,
mean_vp_cn_05,
cmap=plt.cm.plasma,
)
surf.set_clim(0.0,1.0)
ax.set_title("r = 0.5", fontsize = 24)
ax.set_xlabel("Releases", labelpad=10, fontsize = 24)
ax.set_xlim(0,releases)
ax.set_xticklabels(releases_range,fontsize = 16)
ax.set_zlabel("INTER-space Privacy", labelpad=10, fontsize = 24)
ax.set_zlim(0,1)
ax.set_zticklabels([0,0.2,0.4,0.6,0.8,1.0],fontsize = 16)
ax.set_ylabel("Max number of planes", labelpad=10, fontsize = 22)#, offset = 1)
ax.set_ylim(0,30)
ax.set_yticklabels(np.arange(0,35,5),fontsize = 16)
cbar = fig.colorbar(surf, aspect=30, ticks = np.arange(0.0,1.1,0.25))
cbar.ax.set_yticklabels(np.arange(0.0,1.1,0.25),fontsize = 16)
ax.view_init(25,135);
```
<h1 align='center' style="margin-bottom: 0px"> An end to end implementation of a Machine Learning pipeline </h1>
<h4 align='center' style="margin-top: 0px"> SPANDAN MADAN</h4>
<h4 align='center' style="margin-top: 0px"> Visual Computing Group, Harvard University</h4>
<h4 align='center' style="margin-top: 0px"> Computer Science and Artificial Intelligence Laboratory, MIT</h4>
<h2 align='center' style="margin-top: 0px"><a href='https://github.com/Spandan-Madan/DeepLearningProject'>Link to Github Repo</a></h2>
# Section 1. Introduction
### Background
In the fall of 2016, I was a Teaching Fellow (Harvard's version of a TA) for the graduate class on "Advanced Topics in Data Science (CS209/109)" at Harvard University. I was in charge of designing the class project given to the students, and this tutorial has been built on top of the project I designed for the class.
### Why write yet another Tutorial on Machine Learning and Deep Learning?
As a researcher in Computer Vision, I come across new blogs and tutorials on ML (Machine Learning) every day. However, most of them just focus on introducing the syntax and the terminology relevant to the field. For example - a 15 minute tutorial on TensorFlow using the MNIST dataset, or a 10 minute intro to Deep Learning in Keras on ImageNet.
While people are able to copy-paste and run the code in these tutorials and feel that working in ML is really not that hard, it doesn't help them at all in using ML for their own purposes. For example, they never introduce you to how you can run the same algorithm on your own dataset. Or, how do you get the dataset if you want to solve a problem? Or, which algorithms do you use - conventional ML, or Deep Learning? How do you evaluate your model's performance? How do you write your own model, as opposed to choosing a ready-made architecture? All these form fundamental steps in any Machine Learning pipeline, and it is these steps that take most of our time as ML practitioners.
This tutorial breaks down the whole pipeline and leads the reader through it step by step, in the hope of empowering you to actually use ML, and not just feel that it was not too hard. Needless to say, this will take much longer than 15-30 minutes. I believe a weekend would be a good enough estimate.
### About the Author
I am <a href="http://spandanmadan.com/">Spandan Madan</a>, a graduate student at Harvard University working on Computer Vision. My research work is supervised collaboratively by Professor Hanspeter Pfister at Harvard, and Professor Aude Oliva at MIT. My current research focuses on using Computer Vision and Natural Language techniques in tandem to build systems capable of reasoning using text and visual elements simultaneously.
# Section 2. Project Outline : Multi-Modal Genre Classification for Movies
## Wow, that title sounds like a handful, right? Let's break it down step by step.
### Q.1. What do we mean by Classification?
In machine learning, the task of classification means to use the available data to learn a <i>function</i> which can assign a category to a data point. For example, assign a genre to a movie, like "Romantic Comedy", "Action", "Thriller". Another example could be automatically assigning a category to news articles, like "Sports" and "Politics".
### More Formally
#### Given:
- A data point $x_i$
- A set of categories $y_1,y_2...y_n$ that $x_i$ can belong to. <br>
#### Task :
Predict the correct category $y_k$ for a new data point $x_k$ not present in the given dataset.
#### Problem :
We don't know how the $x$ and $y$ are related mathematically.
#### Assumption :
We assume there exists a function $f$ relating $x$ and $y$ i.e. $f(x_i)=y_i$
#### Approach :
Since $f$ is not known, we learn a function $g$, which approximates $f$.
#### Important consideration :
- If $f(x_i)=g(x_i)=y_i$ for all $x_i$, then the two functions $f$ and $g$ are exactly equal. Needless to say, this won't realistically ever happen, and we'll only be able to approximate the true function $f$ using $g$. This means, sometimes the prediction $g(x_i)$ will not be correct. And essentially, our whole goal is to find a $g$ which makes a really low number of such errors. That's basically all that we're trying to do.
- For the sake of completeness, I should mention that this is a specific kind of learning problem which we call "Supervised Learning". Also, the idea that $g$ approximates $f$ well for data not present in our dataset is called "Generalization". It is absolutely paramount that our model generalizes, or else all our claims will only be true about data we already have and our predictions will not be correct.
- We will look into generalization a little bit more a little ahead in the tutorial.
- Finally, there are several other kinds of learning problems, but supervised learning is the most popular and well-studied kind.
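To make this setup concrete, here is a toy sketch of the formalism above. The "true" function `f`, the data, and the nearest-centroid rule `g` are all invented for illustration; any classifier could play the role of `g`.

```python
import numpy as np

rng = np.random.RandomState(0)

# An unknown "true" function f assigning each point one of two categories.
def f(x):
    return (x[:, 0] + x[:, 1] > 1.0).astype(int)

# Training set: data points x_i with known categories y_i = f(x_i).
X_train = rng.rand(200, 2)
y_train = f(X_train)

# Learn g: here, a nearest-centroid rule approximating f.
centroids = np.array([X_train[y_train == k].mean(axis=0) for k in (0, 1)])

def g(x):
    # Predict the category whose centroid is closest to x.
    return np.argmin(((x[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)

# Generalization: evaluate g on new points not present in the training set.
X_test = rng.rand(500, 2)
accuracy = (g(X_test) == f(X_test)).mean()
```

The point of the exercise is that `accuracy` is measured on points `g` has never seen: a high value means `g` generalizes, not merely that it memorized the training set.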
### Q.2. What's Multi-Modal Classification then?
In the machine learning community, the term Multi-Modal is used to refer to multiple <i>kinds</i> of data. For example, consider a YouTube video. It can be thought to contain 3 different modalities -
- The video frames (visual modality)
- The audio clip of what's being spoken (audio modality)
- Some videos also come with the transcription of the words spoken in the form of subtitles (textual modality)
Consider, that I'm interested in classifying a song on YouTube as pop or rock. You can use any of the above 3 modalities to predict the genre - The video, the song itself, or the lyrics. But, needless to say, you can predict it much better if you could use all three simultaneously. This is what we mean by multi-modal classification.
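A common and simple way to combine modalities is *late fusion*: compute a feature vector per modality, concatenate them into one vector per example, and feed the fused vector to any standard classifier. The feature dimensions below are made-up placeholders, not values from this project.

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical per-movie features from two modalities.
poster_features = rng.rand(100, 64)   # visual modality (e.g. a CNN embedding)
plot_features = rng.rand(100, 300)    # textual modality (e.g. averaged word vectors)

# Late fusion: concatenate modalities into a single feature vector per movie.
# The fused matrix can then be passed to any classifier.
fused = np.concatenate([poster_features, plot_features], axis=1)
```

More sophisticated fusion schemes exist (for instance, learning a joint embedding), but concatenation is a reasonable baseline and is what we will build toward in this tutorial.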
# For this project, we will be using visual and textual data to classify movie genres.
# Project Outline
- **Scraping a dataset** : The first step is to build a rich data set. We will collect textual and visual data for each movie.
- **Data pre-processing**
- **Non-deep Machine Learning models : Probabilistic and Max-Margin Classifiers.**
- **Intuitive theory behind Deep Learning**
- **Deep Models for Visual Data**
- **Deep Models for Text**
- **Potential Extensions**
- **Food for Thought**
# Section 3. Building your very own DataSet.
For any machine learning algorithm to work, it is imperative that we collect data which is "representative". Now, let's take a moment to discuss what the word representative means.
### What data is good data? OR What do you mean by data being "representative"?
Let's look at this from first principles. Mathematically, the premise of machine learning (to be precise, the strand of machine learning we'll be working with here) is that given an input variable X and an output variable y, **IF** there is a function f such that f(X)=y, then if f is unknown, we can "learn" a function g which approximates f. At its heart, it's not at all different from what you may have earlier studied as "curve fitting". For example, if you're trying to predict someone's movie preferences, then X can be information about the person's gender, age, nationality and so on, while y can be the genre they most like to watch!
Let's do a thought experiment. Consider the same example - I'm trying to predict people's movie preferences. I walk into a classroom today, and collect information about some students and their movie preferences. Now, I use that data to build a model. How well do you think I can predict my father's movie preferences? The answer is - probably not very well. Why? Intuitively, there was probably no one in the classroom who was my father's age. My model can tell me that as people go from age 18 to 30, they have a higher preference for documentaries over superhero movies. But does this trend continue at 55? Probably, they may start liking family dramas more. Probably they don't. In a nutshell, we cannot say with certainty, as our data tells us nothing about it. So, if the task was to make predictions about ANYONE's movie preferences, then the data collected from just undergraduates is NOT representative.
Now, let's see why this makes sense Mathematically. Look at the graph below.
<img src="files/contour.png">
<center>Fig.1: Plot of a function we are trying to approximate(<a href="http://www.jzy3d.org/js/slider/images/ContourPlotsDemo.png">source</a>)</center>
If we consider that the variable plotted on the vertical axis is $y$, and the values of the 2 variables on the horizontal axes make the input vector $X$, then our hope is that we are able to find a function $g$ which can approximate the function plotted here. If all the data I collect is such that $x_1$ belongs to (80,100) and $x_2$ belongs to (80,100), the learned function will only be able to learn the "yellow-green dipping below" part of the function. Our function will never be able to predict the behavior in the "red" regions of the true function. So, in order to be able to learn a good function, we need data sampled from a diverse set of values of $x_1$ and $x_2$. That would be representative data to learn this contour.
Therefore, we want to collect data which is representative of all the movies that we want to make predictions about. Or else (as is often the case), we need to be aware of the limitations of the model we have trained, and of which predictions we can make with confidence. The easiest way to do this is to only make predictions within the domain from which we collected the training data. For example, in our case, let us start by assuming that our model will predict genres only for English movies. Now, the task is to collect data about a diverse collection of such movies.
So how do we get this data? Neither Google nor any university has released such a dataset, and we want both visual and textual data about these movies. The simple answer is to scrape the internet and build our own dataset. For the purposes of this project, we will use movie posters as our visual data and movie plots as our textual data. Using these, we will build a model that can predict movie genres!
# We will be scraping data from 2 different movie sources - IMDB and TMDB
<h3>IMDB: http://www.imdb.com/</h3>
For those unaware, IMDB is the primary source of information about movies on the internet. It is immensely rich, with posters, reviews, synopses, ratings and much other information on every movie. We will use this as our primary data source.
<h3>TMDB: https://www.themoviedb.org/</h3>
TMDB, or The Movie Database, is a community-built movie database similar to IMDB, with a free-to-use API for collecting information. You do need an API key, but it can be obtained for free by simply making a request after creating a free account.
#### Note -
IMDB gives out some information for free through its API, but does not release the rest. Here, we will keep things legal and only use information that is given to us freely. That said, scraping in general sits on a moral fence, so to speak: people often scrape data from websites that isn't explicitly made available for such use.
```
import torchvision
# This notebook was originally written for Python 2; the try/except below
# keeps the urllib2 name working on Python 3 as well.
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # Python 3
import requests
import json
import imdb  # the IMDbPY package
import time
import itertools
import wget
import os
import tmdbsimple as tmdb
import numpy as np
import random
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pickle
```
# Here is a broad outline of technical steps to be done for data collection
* Sign up for TMDB (themoviedb.org), and set up an API key so we can scrape movie posters and metadata
* Set up and work with TMDb to get movie information from their database
* Do the same for IMDb
* Compare the entries of IMDb and TMDb for a movie
* Get a listing and information of a few movies
* Think and ponder over the potential challenges that may come our way, and think about interesting questions we can answer given the API's we have in our hands.
* Get data from the TMDb
Let's go over each one of these one by one.
## Signing up for TMDB and getting set up for getting movie metadata.
* Step 1. Head over to [tmdb.org](https://www.themoviedb.org/?language=en) and create a new account there by signing up.
* Step 2. Click on your account icon on the top right, then from drop down menu select "Settings".
* Step 3. On the settings page, you will see the option "API" on the left pane. Click on that.
* Step 4. Apply for a new developer key. Fill out the form as required. The fields "Application Name" and "Application URL" are not important. Fill anything there.
* Step 5. It should generate a new API key for you and you should also receive a mail.
Now that you have the API key for TMDB, you can query using TMDB. Remember, it allows only 40 queries per 10 seconds.
An easy way to respect this is to just have a call to <i>time.sleep(1)</i> after each iteration. This is also being very nice to the server.
If you want to maximize your throughput, you can wrap every TMDB request in a nested try/except block: if the first try fails, the second one first uses Python's sleep function to give the API a little rest, and then makes the request again. Something like this -
~~~~
try:
    search.movie(query=movie_name)  # first API request
except:
    try:
        time.sleep(10)  # sleep for a bit, to give the API a rest
        search.movie(query=movie_name)  # second API request
    except:
        print("Failed second attempt too, check if there's an error in the request")
~~~~
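The same retry idea can be factored into a small reusable helper. This is just a sketch - the name `call_with_retry` is mine, not part of tmdbsimple - that retries any zero-argument callable once after a pause:

```python
import time

def call_with_retry(fn, wait_seconds=10):
    """Call fn(); on any exception, sleep and try exactly once more."""
    try:
        return fn()
    except Exception:
        time.sleep(wait_seconds)  # give the API a rest before retrying
        return fn()

# With a TMDB search, usage would look like:
#   response = call_with_retry(lambda: search.movie(query=movie_name))
```

Wrapping the request in a lambda delays the call, so the helper controls when (and how often) it runs.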
## Using TMDB using the obtained API Key to get movie information
I have written a few functions that make things easy. Basically, I'm making use of a library called tmdbsimple, which makes using TMDB even easier. This library was installed at the time of setup.
However, if you want to avoid the library, it is also easy enough to load the API output directly into a dictionary like this without using tmdbsimple:
~~~
url = 'https://api.themoviedb.org/3/movie/1581?api_key=' + api_key
data = urllib2.urlopen(url).read()
# create dictionary from JSON
dataDict = json.loads(data)
~~~
```
# Set here the path where you want the scraped posters to be saved!
poster_folder = 'posters_final/'
if poster_folder.split('/')[0] in os.listdir('./'):
    print('Folder already exists')
else:
    os.mkdir('./' + poster_folder)

# For the purpose of this example, I will be working with the 1999 Sci-Fi movie - "The Matrix"!
api_key = 'a237bfff7e08d0e6902c623978183be0'  # Enter your own API key here to run the code below.
# Generate your own API key as explained above :)

tmdb.API_KEY = api_key  # This sets the API key for the tmdb object
search = tmdb.Search()  # This instantiates a tmdb "search" object which allows you to search for the movie

# These functions take in a string movie name, e.g. "The Matrix" or "Interstellar"
# What they return is clear from the name - the poster, ID, info or genres of the movie!
def grab_poster_tmdb(movie):
    response = search.movie(query=movie)
    movie_id = response['results'][0]['id']
    movie = tmdb.Movies(movie_id)
    posterp = movie.info()['poster_path']
    title = movie.info()['original_title']
    url = 'https://image.tmdb.org/t/p/original' + posterp
    title = '_'.join(title.split(' '))
    strcmd = 'wget -O ' + poster_folder + title + '.jpg ' + url
    os.system(strcmd)

def get_movie_id_tmdb(movie):
    response = search.movie(query=movie)
    movie_id = response['results'][0]['id']
    return movie_id

def get_movie_info_tmdb(movie):
    response = search.movie(query=movie)
    movie_id = response['results'][0]['id']
    movie = tmdb.Movies(movie_id)
    info = movie.info()
    return info

def get_movie_genres_tmdb(movie):
    response = search.movie(query=movie)
    movie_id = response['results'][0]['id']
    movie = tmdb.Movies(movie_id)
    genres = movie.info()['genres']
    return genres
```
While the above functions make it easy to get genres, posters and IDs, all the information that can be accessed can be seen by calling get_movie_info_tmdb() as shown below.
```
print(get_movie_genres_tmdb("The Matrix"))
info = get_movie_info_tmdb("The Matrix")
print("All the movie information from TMDB gets stored in a dictionary with the following keys for easy access -")
print(info.keys())
```
So, to get the tagline of the movie we can use the above dictionary key -
```
info = get_movie_info_tmdb("The Matrix")
print(info['tagline'])
```
## Getting movie information from IMDB
Now that we know how to get information from TMDB, here's how we can get information about the same movie from IMDB. This makes it possible for us to combine more information and build a richer dataset. I urge you to try and see what dataset you can make, and to go above and beyond the basic things I've done in this tutorial. Due to the differences between the two sources you will have to do some cleaning, but both of these datasets are quite clean, so it will be minimal.
```
# Create the IMDb object that will be used to access the IMDb database.
imdb_object = imdb.IMDb()  # by default, accesses the web

# Search for a movie (returns a list of Movie objects).
results = imdb_object.search_movie('The Matrix')

# As this returns a list of all movies matching "The Matrix", we pick the first element
movie = results[0]
imdb_object.update(movie)
print("All the information we can get about this movie from IMDb -")
print(movie.keys())
print("The genres associated with the movie are - ", movie['genres'])
```
## A small comparison of IMDB and TMDB
Now that we have both systems running, let's do a very short comparison for the same movie.
```
print("The genres for The Matrix pulled from IMDb are -", movie['genres'])
print("The genres for The Matrix pulled from TMDB are -", get_movie_genres_tmdb("The Matrix"))
```
As we can see, both sources are correct, but they package the information differently. TMDB calls the genre "Science Fiction" and assigns an ID to every genre, while IMDb calls it "Sci-Fi". It is important to keep track of these differences when using both datasets simultaneously.
Now that we know how to scrape information for one movie, let's take a bigger step and scrape multiple movies.
## Working with multiple movies : Obtaining Top 20 movies from TMDB
We first instantiate a tmdb.Movies object. Then we use its **popular()** method to get the most popular movies. To get more than one page of results, the optional page argument lets us request any specified page number.
```
all_movies=tmdb.Movies()
top_movies=all_movies.popular()
# This is a dictionary, and to access results we use the key 'results' which returns info on 20 movies
print(len(top_movies['results']))
top20_movs=top_movies['results']
```
Let's look at one of these movies. It's the same format we saw above for "The Matrix": a dictionary which can be queried for specific information on that movie.
```
first_movie = top20_movs[0]
print("Here is all the information you can get on this movie - ")
print(first_movie)
print("\n\nThe title of the first movie is - ", first_movie['title'])
```
Let's print out the titles of the top 5 movies!
```
for mov in top20_movs[:5]:
    print(mov['title'])
```
### Yes, I know. I'm a little upset too seeing Beauty and the Beast above Logan in the list!
Moving on, we can get their genres the same way.
```
for mov in top20_movs[:5]:
    print(mov['genre_ids'])
```
So, TMDB doesn't want to make your job quite as easy as you thought. Why these opaque numbers? Want to see the genre names instead? Well, there's the Genres() class for that. Let's get this done!
```
# Create a tmdb genre object!
genres=tmdb.Genres()
# the list() method of the Genres() class returns a listing of all genres in the form of a dictionary.
list_of_genres=genres.list()['genres']
```
Let's convert this list into a nice dictionary to look up genre names from genre IDs!
```
Genre_ID_to_name = {}
for i in range(len(list_of_genres)):
    genre_id = list_of_genres[i]['id']
    genre_name = list_of_genres[i]['name']
    Genre_ID_to_name[genre_id] = genre_name
```
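Equivalently, the lookup table can be built with a single dictionary comprehension. Here is the same construction on a small hand-written stand-in for the list TMDB returns (the two entries are illustrative only):

```python
# Toy stand-in for the list returned by tmdb.Genres().list()['genres'].
toy_genres = [{'id': 28, 'name': 'Action'}, {'id': 35, 'name': 'Comedy'}]

id_to_name = {g['id']: g['name'] for g in toy_genres}
print(id_to_name[28])  # Action
```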
Now, let's re-print the genres of the top movies, this time by name.
```
for mov in top20_movs[:5]:
    title = mov['title']
    genre_ids = mov['genre_ids']
    genre_names = []
    for gid in genre_ids:
        genre_names.append(Genre_ID_to_name[gid])
    print(title, genre_names)
```
# Section 4 - Building a dataset to work with : Let's take a look at the top 1000 movies from the database
Making use of the same API as before, we will pull results from the top 50 pages. As mentioned earlier, the "page" argument of all_movies.popular() can be used for this.
Please note: some of the code below stores data into Python "pickle" files, so that it can be read back directly from disk instead of being downloaded again every time. Once that is done, you should comment out any code which generated an object that has already been pickled and is no longer needed.
```
all_movies = tmdb.Movies()
top_movies = all_movies.popular()

# This is a dictionary; the key 'results' returns info on 20 movies
len(top_movies['results'])
top20_movs = top_movies['results']

# Comment out this cell once the data is saved into the pickle file.
all_movies = tmdb.Movies()
top1000_movies = []
print('Pulling movie list, please wait...')
for i in range(1, 51):
    if i % 15 == 0:
        time.sleep(7)  # stay within TMDB's rate limit
    movies_on_this_page = all_movies.popular(page=i)['results']
    top1000_movies.extend(movies_on_this_page)
len(top1000_movies)
f3 = open('movie_list.pckl', 'wb')
pickle.dump(top1000_movies, f3)
f3.close()
print('Done!')

f3 = open('movie_list.pckl', 'rb')
top1000_movies = pickle.load(f3)
f3.close()
```
# Pairwise analysis of Movie Genres
As our dataset is multi label, simply looking at the distribution of genres is not sufficient. It might be beneficial to see which genres co-occur, as it might shed some light on inherent biases in our dataset. For example, it would make sense if romance and comedy occur together more often than documentary and comedy. Such inherent biases tell us that the underlying population we are sampling from itself is skewed and not balanced. We may then take steps to account for such problems. Even if we don't take such steps, it is important to be aware that we are making the assumption that an unbalanced dataset is not hurting our performance and if need be, we can come back to address this assumption. Good old scientific method, eh?
So for the top 1000 movies let's do some pairwise analysis for genre distributions. Our main purpose is to see which genres occur together in the same movie. So, we first define a function which takes a list and makes all possible pairs from it. Then, we pull the list of genres for a movie and run this function on the list of genres to get all pairs of genres which occur together
```
# This function generates all pairs of genres from a list of genre IDs
def list2pairs(l):
    # itertools.combinations(l, 2) makes all pairs of length 2 from list l
    pairs = list(itertools.combinations(l, 2))
    # then add the self-pairs, which itertools does not produce
    for i in l:
        pairs.append([i, i])
    return pairs
```
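To see exactly what list2pairs produces, here is a quick self-contained check on a toy list of genre IDs (the function is restated so the snippet runs on its own):

```python
import itertools

def list2pairs(l):
    # all unordered pairs of distinct items, plus one self-pair per item
    pairs = list(itertools.combinations(l, 2))
    for i in l:
        pairs.append([i, i])
    return pairs

pairs = list2pairs([18, 35, 80])
print(pairs)
# [(18, 35), (18, 80), (35, 80), [18, 18], [35, 35], [80, 80]]
```

Note the mild quirk that distinct pairs come back as tuples while self-pairs are lists; the counting code below only ever indexes p[0] and p[1], so both work.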
As mentioned, we will now pull the genres for each movie and use the above function to count how often two genres occur together.
```
# Get all genre pairs from all movies
allPairs = []
for movie in top1000_movies:
    allPairs.extend(list2pairs(movie['genre_ids']))

nr_ids = np.unique(allPairs)
visGrid = np.zeros((len(nr_ids), len(nr_ids)))
for p in allPairs:
    visGrid[np.argwhere(nr_ids == p[0]), np.argwhere(nr_ids == p[1])] += 1
    if p[1] != p[0]:
        visGrid[np.argwhere(nr_ids == p[1]), np.argwhere(nr_ids == p[0])] += 1
```
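On a tiny hand-made dataset, the same counting logic yields a symmetric matrix whose diagonal holds per-genre totals - a useful sanity check before running it on all 1000 movies (the three "movies" here are made up):

```python
import itertools
import numpy as np

def list2pairs(l):
    pairs = list(itertools.combinations(l, 2))
    for i in l:
        pairs.append([i, i])
    return pairs

# Three toy "movies", each a list of genre IDs.
toy_movies = [[18, 35], [18], [35, 80]]

toy_pairs = []
for g in toy_movies:
    toy_pairs.extend(list2pairs(g))

ids = np.unique([x for pair in toy_pairs for x in pair])  # sorted: [18, 35, 80]
grid = np.zeros((len(ids), len(ids)))
for a, b in toy_pairs:
    i = np.argwhere(ids == a)[0][0]
    j = np.argwhere(ids == b)[0][0]
    grid[i, j] += 1
    if a != b:
        grid[j, i] += 1

print(grid)
```

The diagonal reads (2, 2, 1): genre 18 appears in two movies, 35 in two, 80 in one; the off-diagonal 1s mark the co-occurrences (18, 35) and (35, 80).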
Let's take a look at the structure we just made. It is a 19×19 matrix, as shown below; recall that we had 19 genres. This matrix counts the number of simultaneous occurrences of genres in the same movie.
```
print(visGrid.shape)
print(len(Genre_ID_to_name.keys()))

annot_lookup = []
for i in range(len(nr_ids)):
    annot_lookup.append(Genre_ID_to_name[nr_ids[i]])

sns.heatmap(visGrid, xticklabels=annot_lookup, yticklabels=annot_lookup)
```
The above heatmap shows how often genres occur together.
The important thing to notice is the diagonal. It corresponds to self-pairs, i.e. the number of times a genre, say Drama, occurred with itself - which is just the total count of that genre!
As we can see, there are a lot of dramas in the dataset; Drama is also a very unspecific label. There are nearly no documentaries or TV movies. Horror is a very distinct label, and Romance is also not too widely spread.
To account for this unbalanced data, there are multiple things we can try to explore what interesting relationships can be found.
## Delving Deeper into co-occurrence of genres
What we want to do now is to look for groups of genres that co-occur, and see whether they make sense to us logically. Intuitively speaking, wouldn't it be nice if we saw clean boxes in the above plot - boxes of high intensity, i.e. genres that occur together and don't occur much with other genres? In some ways, that would isolate the co-occurrence of some genres, and heighten the co-occurrence of others.
While the data may not show that directly, we can play with the numbers to see if that's possible. The technique used for that is called biclustering.
```
from sklearn.cluster import SpectralCoclustering

model = SpectralCoclustering(n_clusters=5)
model.fit(visGrid)

fit_data = visGrid[np.argsort(model.row_labels_)]
fit_data = fit_data[:, np.argsort(model.column_labels_)]

annot_lookup_sorted = []
for i in np.argsort(model.row_labels_):
    annot_lookup_sorted.append(Genre_ID_to_name[nr_ids[i]])

sns.heatmap(fit_data, xticklabels=annot_lookup_sorted, yticklabels=annot_lookup_sorted, annot=False)
plt.title("After biclustering; rearranged to show biclusters")
plt.show()
```
Looking at the above figure, "boxes" or groups of movie genres automatically emerge!
Intuitively - Crime, Sci-Fi, Mystery, Action, Horror, Drama, Thriller, etc co-occur.
AND, Romance, Fantasy, Family, Music, Adventure, etc co-occur.
That makes a lot of intuitive sense, right?
One challenge is the broad range of the Drama genre: it makes the two clusters overlap heavily. If we merged it together with Action, Thriller, etc., we would end up with nearly all movies carrying just that one label.
**Based on playing around with the stuff above, we can sort the data into the following genre categories - "Drama, Action, ScienceFiction, exciting(thriller, crime, mystery), uplifting(adventure, fantasy, animation, comedy, romance, family), Horror, History"**
Note that this categorization is subjective and by no means the only right solution. One could also keep the original labels and simply exclude the ones with too little data. Such tricks are important for balancing the dataset; they allow us to increase or decrease the strength of certain signals, making it possible to improve our inferences :)
# Interesting Questions
This really should be a place for you to get creative and hopefully come up with better questions than me.
Here are some of my thoughts:
- Which actors are bound to a genre, and which can easily hop genres?
- Is there a trend in genre popularity over the years?
- Can you use sound tracks to identify the genre of a movie?
- Are top romance actors higher paid than top action actors?
- If you look at release date vs popularity score, which movie genres have a longer shelf life?
Ideas to explore specifically for feature correlations:
- Is title length correlated with movie genre?
- Are movie posters darker for horror than for romance and comedy?
- Are some genres released more often at a certain time of year?
- Is the MPAA rating correlated with the genre?
# Based on this new category set, we will now pull posters from TMDB as our training data!
```
# Done before, reading from pickle file now to maintain consistency of data!
# We now sample 100 movies per genre. Problem is that the sorting is by popular movies, so they will overlap.
# Need to exclude movies that were already sampled.
movies = []
baseyear = 2017

print('Starting to pull movies from TMDB. If you want to debug, uncomment the print command. This will take a while, please wait...')
done_ids = []
for g_id in nr_ids:
    # print('Pulling movies for genre ID ' + str(g_id))
    baseyear -= 1
    for page in range(1, 6):
        time.sleep(0.5)  # stay within TMDB's rate limit
        url = 'https://api.themoviedb.org/3/discover/movie?api_key=' + api_key
        url += '&language=en-US&sort_by=popularity.desc&year=' + str(baseyear)
        url += '&with_genres=' + str(g_id) + '&page=' + str(page)
        data = urllib2.urlopen(url).read()
        dataDict = json.loads(data)
        movies.extend(dataDict["results"])
    done_ids.append(str(g_id))
print("Pulled movies for genres - " + ','.join(done_ids))

# f6 = open("movies_for_posters", 'wb')
# pickle.dump(movies, f6)
# f6.close()

f6 = open("movies_for_posters", 'rb')
movies = pickle.load(f6)
f6.close()
```
Let's remove any duplicates that we have in the list of movies
```
movie_ids = [m['id'] for m in movies]
print("originally we had", len(movie_ids), "movies")
movie_ids = np.unique(movie_ids)
print(len(movie_ids))

seen_before = []
no_duplicate_movies = []
for i in range(len(movies)):
    movie = movies[i]
    movie_id = movie['id']
    if movie_id in seen_before:
        # print("Seen before")
        continue
    else:
        seen_before.append(movie_id)
        no_duplicate_movies.append(movie)
print("After removing duplicates we have", len(no_duplicate_movies), "movies")
```
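As an aside, the duplicate-removal loop above performs an O(n) membership test on a list for every movie; using a set makes each test O(1). A self-contained sketch on toy records (the helper name `dedupe_by_id` is mine):

```python
def dedupe_by_id(records):
    """Keep the first record seen for each 'id', preserving order."""
    seen = set()
    unique = []
    for rec in records:
        if rec['id'] not in seen:
            seen.add(rec['id'])
            unique.append(rec)
    return unique

toy = [{'id': 1, 'title': 'A'}, {'id': 2, 'title': 'B'}, {'id': 1, 'title': 'A'}]
deduped = dedupe_by_id(toy)
print(len(deduped))  # 2
```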
Also, let's remove movies for which we have no posters!
```
poster_movies = []
counter = 0
movies_no_poster = []
print("Total movies :", len(movies))
print("Started downloading posters...")
for movie in movies:
    title = movie['title']
    if counter == 1:
        print('Downloaded first. Code is working fine. Please wait, this will take quite some time...')
    if counter % 300 == 0 and counter != 0:
        print("Done with", counter, "movies!")
        print("Trying to get poster for", title)
    try:
        grab_poster_tmdb(title)
        poster_movies.append(movie)
    except:
        try:
            time.sleep(7)  # give the API a rest, then retry once
            grab_poster_tmdb(title)
            poster_movies.append(movie)
        except:
            movies_no_poster.append(movie)
    counter += 1
print("Done with all the posters!")

print(len(movies_no_poster))
print(len(poster_movies))

# Note: pickle files must be opened in binary mode ('wb'/'rb').
# f = open('poster_movies.pckl', 'wb')
# pickle.dump(poster_movies, f)
# f.close()
f = open('poster_movies.pckl', 'rb')
poster_movies = pickle.load(f)
f.close()

# f = open('no_poster_movies.pckl', 'wb')
# pickle.dump(movies_no_poster, f)
# f.close()
f = open('no_poster_movies.pckl', 'rb')
movies_no_poster = pickle.load(f)
f.close()
```
# Congratulations, we are done scraping!
# Building a dataset out of the scraped information!
This task is simple, but **extremely** important. It basically sets the stage for the whole project. Given that you have the freedom to cast your own project within the framework I am providing, there are many decisions you must make to finalize **your own version** of the project.
As we are working on a **classification** problem, we need to make two decisions given the data at hand -
* What do we want to predict, i.e. what's our Y?
* What features to use for predicting this Y, i.e. what X should we use?
There are many different options possible, and it comes down to you to decide what's most exciting. I will be picking my own version for the example, **but it is imperative that you think this through, and come up with a version which excites you!**
As an example, here are some possible ways to frame Y, while still sticking to the problem of genre prediction -
* Assume every movie can have multiple genres, and then it becomes a multi-label classification problem. For example, a movie can be Action, Horror and Adventure simultaneously. Thus, every movie can be more than one genre.
* Make clusters of genres as we did in Milestone 1 using biclustering, and then every movie can have only 1 genre. This way, the problem becomes a simpler, multi-class problem. For example, a movie could have the class Uplifting (refer Milestone 1), or Horror, or History. No movie gets more than one class.
For the purposes of this implementation, I'm going with the first case explained above - i.e. a multi-label classification problem.
Similarly, for designing our input features, i.e. X, you may pick any features you think make sense - for example, the director of a movie may be a good predictor of genre. Or you may use features derived with algorithms like PCA. Given the richness of IMDB, TMDB and alternate sources like Wikipedia, there is a plethora of options available. **Be creative here!**
Another important thing to note is that along the way we must also make many smaller implementation decisions. For example, which genres are we going to include? Which movies? All of these are open-ended choices!
## My Implementation
Implementation decisions made -
* The problem is framed here as a multi-label problem explained above.
* We will try to predict multiple genres associated with a movie. This will be our Y.
* We will use 2 different kinds of X - text and images.
* For the text part - the input feature used to predict the genre will be a form of the movie's plot, available from TMDB under the key 'overview'. This will be our X.
* For the image part - we will use the scraped poster images as our X.
NOTE: We will first look at some conventional machine learning models, which were popular before the recent rise of neural networks and deep learning. For poster-image-to-genre prediction I have skipped conventional models entirely, because these days they are simply not used without deep learning for feature extraction (all discussed in detail ahead; don't be scared by the jargon). For the movie-overview-to-genre prediction problem, we will look at both conventional models and deep learning models.
Now, let's build our X and Y!
First, let's identify movies that have overviews. **Next few steps are going to be a good example on why data cleaning is important!**
```
movies_with_overviews = []
for i in range(len(no_duplicate_movies)):
    movie = no_duplicate_movies[i]
    overview = movie['overview']
    if len(overview) == 0:
        continue
    else:
        movies_with_overviews.append(movie)

len(movies_with_overviews)
```
Now let's store the genres for these movies in a list that we will later transform into a binarized vector.
Binarized vector representation is a very common and important way data is stored/represented in ML. Essentially, it reduces a categorical variable with n possible values to n binary indicator variables. What does that mean? For example, let [(1,3),(4)] be a list saying that sample A has the two labels 1 and 3, and sample B has the single label 4. For every sample and every possible label (here 1 through 4), the representation simply has a 1 if the sample has that label, and a 0 if it doesn't. So the binarized version of the above list will be -
~~~~~
[[1, 0, 1, 0],
 [0, 0, 0, 1]]
~~~~~
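This binarization can be written in a few lines of plain Python; scikit-learn's MultiLabelBinarizer, which we use below, does essentially the same thing:

```python
def binarize(label_lists, all_labels):
    # Row per sample; column j is 1 iff all_labels[j] appears in that sample's labels.
    return [[1 if lab in labels else 0 for lab in all_labels] for labels in label_lists]

binarized = binarize([(1, 3), (4,)], all_labels=[1, 2, 3, 4])
print(binarized)  # [[1, 0, 1, 0], [0, 0, 0, 1]]
```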
```
genres = []
all_ids = []
for i in range(len(movies_with_overviews)):
    movie = movies_with_overviews[i]
    genre_ids = movie['genre_ids']
    genres.append(genre_ids)
    all_ids.extend(genre_ids)

from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(genres)

print(genres[1])
print(Y.shape)
print(np.sum(Y, axis=0))
print(len(list_of_genres))
```
This is interesting. We started with only 19 genre labels, if you remember. But the shape of Y is (1666, 20), while it should be (1666, 19) if there are only 19 genres. Let's explore.
Let's find genre IDs that are not present in our original list of genres!
```
# Ask TMDB for its full list of genres again
# (named genre_object so we don't clobber the genres label list built above)
genre_object = tmdb.Genres()
# The list() method returns all genres as a list of {id, name} dictionaries
list_of_genres = genre_object.list()['genres']

Genre_ID_to_name = {}
for i in range(len(list_of_genres)):
    genre_id = list_of_genres[i]['id']
    genre_name = list_of_genres[i]['name']
    Genre_ID_to_name[genre_id] = genre_name

for i in set(all_ids):
    if i not in Genre_ID_to_name.keys():
        print(i)
```
Well, this genre ID wasn't given to us by TMDB when we asked for all possible genres. How do we handle it? We could drop all samples that have this genre, but if you look above you'll see there are too many of them. So I dug into the TMDB documentation and found that this ID corresponds to the genre "Foreign", and we add it to the dictionary of genre names ourselves. Such problems are ubiquitous in machine learning, and it is up to us to diagnose and correct them. We must always make decisions about what to keep, how to store data, and so on.
```
Genre_ID_to_name[10769]="Foreign" #Adding it to the dictionary
len(Genre_ID_to_name.keys())
```
Now, we turn to building the X matrix i.e. the input features! As described earlier, we will be using the overview of movies as our input vector! Let's look at a movie's overview for example!
```
sample_movie = movies_with_overviews[5]
sample_overview = sample_movie['overview']
sample_title = sample_movie['title']
print("The overview for the movie", sample_title, "is -\n")
print(sample_overview)
```
## So, how do we store this movie overview in a matrix?
#### Do we just store the whole string? We know that we need to work with numbers, but this is all text. What do we do?!
The way we will be storing the X matrix is called a "bag of words" representation. The basic idea is that we can think of all the distinct words that appear across the movie overviews as distinct objects, and then every movie overview can be thought of as a "bag" containing a subset of those objects.
For example, in the case of Zootopia above, the bag contains the words ("Determined", "to", "prove", "herself", ..., "the", "mystery"). We make such bags for all movie overviews, and finally we binarize them, just as we did above for Y. scikit-learn makes our job easy here with its CountVectorizer class, because this representation is used so often in machine learning.
What this means is that, for all the movies that we have the data on, we will first count all the unique words. Say, there's 30,000 unique words. Then we can represent every movie overview as a 30000x1 vector, where each position in the vector corresponds to the presence or absence of a particular word. If the word corresponding to that position is present in the overview, that position will have 1, otherwise it will be 0.
Ex - if our vocabulary were the 5 words "I", "am", "a", "good", "boy", then the representation for the sentence "I am a boy" would be [1 1 1 0 1], and for the sentence "I am good" it would be [1 1 0 1 0].
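A minimal bag-of-words vectorizer in plain Python makes this concrete (CountVectorizer, used below, additionally lowercases and tokenizes more carefully):

```python
def bag_of_words(sentence, vocabulary):
    # 1 if the vocabulary word appears in the sentence, else 0
    words = set(sentence.split())
    return [1 if w in words else 0 for w in vocabulary]

vocab = ["I", "am", "a", "good", "boy"]
print(bag_of_words("I am a boy", vocab))  # [1, 1, 1, 0, 1]
print(bag_of_words("I am good", vocab))   # [1, 1, 0, 1, 0]
```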
```
from sklearn.feature_extraction.text import CountVectorizer
import re

content = []
for i in range(len(movies_with_overviews)):
    movie = movies_with_overviews[i]
    overview = movie['overview']
    overview = overview.replace(',', '').replace('.', '')
    content.append(overview)

print(content[0])
print(len(content))
```
# Are all words equally important?
#### At the cost of sounding "Animal Farm" inspired, I would say not all words are equally important.
For example, let's consider the overview for the Matrix -
```
get_movie_info_tmdb('The Matrix')['overview']
```
For "The Matrix", a word like "computer" is a stronger indicator of it being a Sci-Fi movie than words like "who" or "powerful" or "vast". One way computer scientists working with natural language have tackled this problem (and it is still used very widely) is called TF-IDF, i.e. Term Frequency - Inverse Document Frequency. The basic idea is that words strongly indicative of the content of a single document (every movie overview is a document in our case) are words that occur very frequently in that document and very infrequently in all other documents. For example, "computer" occurs twice here but probably won't in most other movie overviews; hence, it is indicative. On the other hand, generic words like "a", "and", "the" occur very often in all documents, and hence are not indicative.
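A toy TF-IDF computation shows the effect (this is the textbook log-IDF variant; real libraries use slightly different weighting and smoothing, and the three "documents" are made up):

```python
import math

def tf_idf(term, doc, corpus):
    tf = doc.count(term)                      # raw term frequency in this document
    df = sum(1 for d in corpus if term in d)  # number of documents containing the term
    idf = math.log(len(corpus) / df)          # rare terms get a larger idf
    return tf * idf

docs = [
    ["a", "computer", "hacker", "computer"],
    ["a", "love", "story"],
    ["a", "war", "epic"],
]
# "computer" is specific to doc 0; "a" appears everywhere.
print(tf_idf("computer", docs[0], docs))  # 2 * log(3) > 0
print(tf_idf("a", docs[0], docs))         # 0, since log(3/3) = 0
```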
So, can we use this information to reduce our insanely high-dimensional (roughly 30,000-dimensional) vector representation to something smaller and more manageable? But first, why should we even care? The answer is probably one of the most used phrases in ML - "the curse of dimensionality".
# The Curse of Dimensionality
#### This section is strongly borrowing from one of the greatest <a href="https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf">ML papers I've ever read.</a>
This expression was coined by Bellman in 1961 to refer to the fact that many algorithms that work fine in low dimensions become intractable when the input is high-dimensional. The reason they stop working in high dimensions is strongly linked to what we discussed earlier - having a representative dataset. Consider this: you have a function $f$ dependent on only one variable $x$, and $x$ can only take integer values from 1 to 100. Since it's one-dimensional, it can be plotted on a line, and to get a reasonably representative sample you'd only need to sample something like $f(1),f(20),f(40),f(60),f(80),f(100)$.
Now, let's increase the dimensionality, i.e. the number of input variables, and see what happens. Say we have 2 variables $x_1$ and $x_2$, with the same possible values as before - integers between 1 and 100. Now, instead of a line, we have a plane with $x_1$ and $x_2$ on the two axes. The interesting bit is that instead of 100 possible inputs, we now have 100 × 100 = 10,000 possible inputs! The input space grows exponentially with the number of variables - not just figuratively, but mathematically exponentially. Needless to say, to cover 5% of the space like we did before, we'd now need to sample $f$ at 500 points.
For 3 variables, there would be 1,000,000 possible inputs, and we'd need to sample at 50,000 points - already more than the number of data points we have for many training problems we will come across.
Basically, generalizing correctly becomes exponentially harder as the dimensionality (number of features) of the examples grows, because a fixed-size training set covers a dwindling fraction of the input space. Even with a moderate dimension of 100 and a huge training set of a trillion examples, the latter covers only a fraction of about $10^{−18}$ of the input space. This is what makes machine learning both necessary and hard.
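To make the arithmetic above concrete, here is a tiny sketch (plain Python, separate from the notebook's pipeline; the function name is just for illustration) that computes the size of the input space, and the number of samples a fixed 5% coverage requires, as the number of variables grows:

```python
# Each variable takes integer values 1..100. The input space grows
# exponentially with the number of variables, and so does the number
# of samples needed to cover a fixed 5% fraction of it.
def coverage_cost(num_vars, values_per_dim=100, fraction=0.05):
    total_points = values_per_dim ** num_vars
    samples_needed = int(total_points * fraction)
    return total_points, samples_needed

for d in (1, 2, 3):
    total, needed = coverage_cost(d)
    print(d, total, needed)  # 1 100 5, then 2 10000 500, then 3 1000000 50000
```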
So, yes, if some words are unimportant, we want to get rid of them and reduce the dimensionality of our X matrix. And we will do it using the document frequencies underlying TF-IDF to identify unimportant words. Python lets us do this with just one line of code (and this is why you should spend more time reading maths than coding!)
```
from sklearn.feature_extraction.text import CountVectorizer

# min_df excludes words that occur very rarely (in fewer than 0.5% of documents)
# max_df excludes words that occur in more than 95% of the movie descriptions
vectorize=CountVectorizer(max_df=0.95, min_df=0.005)
X=vectorize.fit_transform(content)
```
We are excluding all words that occur in too many or too few documents, as these are very unlikely to be discriminative. Words that occur in only one document are most probably names, and words that occur in nearly all documents are probably stop words. Note that these values were not tuned using a validation set; they are just guesses. That is okay here because we didn't evaluate the performance of these parameters. In a stricter setting, for example for a publication, it would be better to tune these as well.
```
X.shape
```
So, each movie's overview gets represented by a 1x1365 dimensional vector.
Now, we are ready for the kill. Our data is cleaned, hypothesis is set (Overview can predict movie genre), and the feature/output vectors are prepped. Let's train some models!
```
import pickle
f4=open('X.pckl','wb')
f5=open('Y.pckl','wb')
pickle.dump(X,f4)
pickle.dump(Y,f5)
f6=open('Genredict.pckl','wb')
pickle.dump(Genre_ID_to_name,f6)
f4.close()
f5.close()
f6.close()
```
# Congratulations, we have our data set ready!
A note: as we are building our own dataset, and I didn't want you to spend all your time waiting for poster image downloads to finish, I am working with an EXTREMELY small dataset. That is why the results we will see for the deep learning portion will not be as spectacular as those of the conventional machine learning methods. If you want to see the real power, you should spend some more time scraping something of the order of 100,000 images, as opposed to the 1,000-odd I am using here. Quoting the paper I mentioned above - MORE DATA BEATS A CLEVERER ALGORITHM.
#### As the TA, I saw that most teams working on the project had data of the order of 100,000 movies. So, if you want to extract the full power of these models, consider scraping a larger dataset than mine.
# Section 5 - Non-deep, Conventional ML models with above data
Here is a layout of what we will be doing -
- We will implement two different models
- We will decide on a performance metric, i.e. a quantitative way to measure how well different models are doing
- We will discuss the differences between the models - their strengths, weaknesses, etc.
As discussed earlier, there are a LOT of implementation decisions to be made. Between feature engineering, hyper-parameter tuning, model selection and how interpretable you want your model to be (read: Bayesian vs non-Bayesian approaches), a lot has to be decided. For example, some of these models could be:
- Generalized Linear Models
- SVM
- Shallow (1 Layer, i.e. not deep) Neural Network
- Random Forest
- Boosting
- Decision Tree
Or, going more Bayesian:
- Naive Bayes
- Linear or Quadratic Discriminant Analysis
- Bayesian Hierarchical models
The list is endless, and not all models will make sense for the kind of problem you have framed for yourself. **Think about which model best fits your purpose.**
For our purposes here, I will show the example of 2 very simple models, one picked from each category above -
1. SVM
2. Multinomial Naive Bayes
A quick overview of the whole pipeline coming below:
- A little bit of feature engineering
- 2 different Models
- Evaluation Metrics chosen
- Model comparisons
### Let's start with some feature engineering.
Engineering the right features depends on 2 key ideas. First, what is it that you are trying to solve? For example, if you want to guess my music preferences and you train a super awesome model while giving it my height as an input feature, you're going to have no luck. On the other hand, giving it my Spotify playlist will solve the problem with almost any model. So, the CONTEXT of the problem plays a role.
Second, you can only build a representation from the data at hand. Say you didn't have access to my Spotify playlist, but to my Facebook statuses instead - my statuses about Harvard may not be useful, but representing me by the statuses that are YouTube links would also solve the problem. So, the AVAILABILITY OF DATA at hand is the second factor.
#### A nice way to think of it is to think that you start with the problem at hand, but design features constrained by the data you have available. If you have many independent features that each correlate well with the class, learning is easy. On the other hand, if the class is a very complex function of the features, you may not be able to learn it.
In the context of this problem, we would like to predict the genre of a movie. What we have access to is movie overviews - text descriptions of the movie plot. The hypothesis makes sense: the overview is a short description of the story, and the story is clearly important in assigning genres to movies.
So, let's improve our features by playing with the words in the overviews in our data. One interesting way is to go back to what we discussed earlier - TF-IDF. We originally used it to filter words, but we can also use the tf-idf values as "importance" weights for words, as opposed to treating them all equally. Tf-idf simply tries to assign a weightage to each word in the bag of words.
Once again, the way it works is this: most movie descriptions have the word "the" in them, and obviously it doesn't tell you anything special about any particular one. So a word's weightage should be inversely proportional to how many movies have it in their description. This is the IDF part.
On the other hand, for the movie Interstellar, if the description has the word "space" 5 times and "wormhole" 2 times, then it's probably more about space than about wormholes. Thus, "space" should have a high weightage. This is the TF part.
We simply use TF-IDF to assign a weightage to every word in the bag of words. Makes sense, right? :)
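Before handing this over to sklearn, the TF and IDF parts above can be computed by hand on a toy corpus (the sentences and the helper function below are made up for illustration):

```python
import math

# Toy corpus: every overview contains "the"; only one contains "space".
docs = [
    "the hero saves a city",
    "the couple falls in love",
    "the astronauts travel through a wormhole in space",
]

def tfidf(term, doc, corpus):
    tf = doc.split().count(term)                 # term frequency in this document
    df = sum(term in d.split() for d in corpus)  # number of documents containing the term
    idf = math.log(len(corpus) / df)             # inverse document frequency
    return tf * idf

print(tfidf("the", docs[2], docs))    # 0.0   -- occurs everywhere, carries no signal
print(tfidf("space", docs[2], docs))  # ~1.10 -- rare, hence indicative
```

The ubiquitous word gets weight zero while the rare, content-bearing word gets a high weight, which is exactly the intuition described above.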
```
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X)
X_tfidf.shape
```
Let's divide our X and Y matrices into train and test splits. We train the model on the train split and report its performance on the test split. Think of this like problem-set questions vs the exam: both are (assumed to be) drawn from the same population of questions, and doing well on problem sets is a good indicator that you'll do well in the exam, but you really must be tested before you can make any claims about knowing the subject.
```
msk = np.random.rand(X_tfidf.shape[0]) < 0.8
X_train_tfidf=X_tfidf[msk]
X_test_tfidf=X_tfidf[~msk]
Y_train=Y[msk]
Y_test=Y[~msk]
positions=range(len(movies_with_overviews))
# print positions
test_movies=np.asarray(positions)[~msk]
# test_movies
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import f1_score
from sklearn.metrics import make_scorer
from sklearn.metrics import classification_report
parameters = {'kernel':['linear'], 'C':[0.01, 0.1, 1.0]}
gridCV = GridSearchCV(SVC(class_weight='balanced'), parameters, scoring=make_scorer(f1_score, average='micro'))
classif = OneVsRestClassifier(gridCV)
classif.fit(X_train_tfidf, Y_train)
predstfidf=classif.predict(X_test_tfidf)
print classification_report(Y_test, predstfidf)
```
As you can see, the performance is by and large poorer for under-represented genres like War and Animation, and better for well-represented categories like Drama.
Numbers aside, let's look at our model's predictions for a small sample of movies from our test set.
```
genre_list=sorted(list(Genre_ID_to_name.keys()))
predictions=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predstfidf[i]
# print movie_label_scores
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictions.append(pred_genres)
import pickle
f=open('classifer_svc','wb')
pickle.dump(classif,f)
f.close()
for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
print 'MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictions[i])
```
Let's try our second model - the Naive Bayes model.
```
from sklearn.naive_bayes import MultinomialNB
classifnb = OneVsRestClassifier(MultinomialNB())
classifnb.fit(X[msk].toarray(), Y_train)
predsnb=classifnb.predict(X[~msk].toarray())
import pickle
f2=open('classifer_nb','wb')
pickle.dump(classifnb,f2)
f2.close()
predictionsnb=[]
for i in range(X_test_tfidf.shape[0]):
pred_genres=[]
movie_label_scores=predsnb[i]
for j in range(19):
#print j
if movie_label_scores[j]!=0:
genre=Genre_ID_to_name[genre_list[j]]
pred_genres.append(genre)
predictionsnb.append(pred_genres)
for i in range(X_test_tfidf.shape[0]):
if i%50==0 and i!=0:
print 'MOVIE: ',movies_with_overviews[i]['title'],'\tPREDICTION: ',','.join(predictionsnb[i])
```
As can be seen above, the results seem promising, but how do we really compare the two models? We need to quantify our performance so that we can say which one is better. This takes us back to what we discussed right at the beginning - we're learning a function $g$ which approximates the original unknown function $f$. For some values of $x_i$ the predictions will inevitably be wrong, and we want to minimize that error.
For multi label systems, we often keep track of performance using "Precision" and "Recall". These are standard metrics, and you can google to read up more about them if you're new to these terms.
# Evaluation Metrics
We will use the standard precision recall metrics for evaluating our system.
```
def precision_recall(gt,preds):
TP=0
FP=0
FN=0
for t in gt:
if t in preds:
TP+=1
else:
FN+=1
for p in preds:
if p not in gt:
FP+=1
if TP+FP==0:
precision=0
else:
precision=TP/float(TP+FP)
if TP+FN==0:
recall=0
else:
recall=TP/float(TP+FN)
return precision,recall
precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictions[i])
precs.append(a)
recs.append(b)
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
precs=[]
recs=[]
for i in range(len(test_movies)):
if i%1==0:
pos=test_movies[i]
test_movie=movies_with_overviews[pos]
gtids=test_movie['genre_ids']
gt=[]
for g in gtids:
g_name=Genre_ID_to_name[g]
gt.append(g_name)
# print predictions[i],movies_with_overviews[i]['title'],gt
a,b=precision_recall(gt,predictionsnb[i])
precs.append(a)
recs.append(b)
print np.mean(np.asarray(precs)),np.mean(np.asarray(recs))
```
The average precision and recall scores for our samples are pretty good! The models seem to be working! Also, we can see that Naive Bayes outperforms the SVM. **I strongly suggest you go read about Multinomial Naive Bayes and think about why it works so well for "document classification", which is very similar to our case, as every movie overview can be thought of as a document we are assigning labels to.**
# Section 6 - Deep Learning : an intuitive overview
The above results were good, but it's time to bring out the big guns. So first and foremost, let's get a very short idea of what deep learning is. This is for people who don't have a background in it - it's high level and gives just the intuition.
As described above, the two most important ingredients of good classification (or regression) are 1) using the right representation, one which captures the information in the data relevant to the problem at hand, and 2) using the right model, one capable of making sense of the representation fed to it.
While for the second part we have complicated and powerful models that have been studied at length, we don't seem to have a principled, mathematical way of doing the first part - representation. What we did above was to see "what makes sense" and go from there. That is not a good approach for complex data or complex problems. Is there some way to automate this? Deep learning does just that.
To emphasize the importance of representation in the complex tasks we usually attempt with deep learning, let me talk about the original problem which made it famous. The paper is often referred to as the "ImageNet Challenge paper", and it was basically working on object recognition in images. Let's try to think about an algorithm that tries to detect a chair.
## If I ask you to "Define" a chair, how would you? - Something with 4 legs?
<img src="files/chair1.png" height="400" width="400">
<h3><center>All are chairs, none with 4 legs. (Pic Credit: Zoya Bylinskii)</center></h3>
## How about some surface that we sit on then?
<img src="files/chair2.png" height="400" width="400">
<h3><center>All are surfaces we sit on, none are chairs. (Pic Credit: Zoya Bylinskii)</center></h3>
Clearly, these definitions won't work, and we need something more complicated. Sadly, we can't come up with a simple text rule that our computer can search for! So we take a more principled approach.
The "deep" in deep learning comes from the fact that it was conventionally applied to neural networks. Neural networks, as we all know, are structures organized in layers - layers of computations. Why do we need layers? Because these layers can be seen as sub-tasks of the complicated task of identifying a chair. It can be thought of as a hierarchical breakdown of a complicated job into smaller sub-tasks.
Mathematically, each layer acts like a space transformation which takes the pixel values to a high-dimensional space. When we start out, every pixel in the image is given equal importance in our matrix. With each layer, convolution operations give some parts more importance and some less. In doing so, we transform our images to a space in which similar-looking objects/object parts are closer. (We are basically learning this space transformation in deep learning, nothing else.)
What exactly is learnt by these neural networks is hard to know, and an active area of research. But one very crude way to visualize it is this: the network starts by learning very generic features in the first layer - something as simple as vertical and horizontal lines. The next layer learns that combining the vectors representing vertical and horizontal lines in different ratios gives all possible slanted lines. The layer after that learns to combine lines into curves - say, something like the outline of a face. These curves come together to form 3D objects, and so on. The network builds sub-modules and combines them in the right way to arrive at semantics.
**So, in a nutshell, the first few layers of a "Deep" network learn the right representation of the data, given the problem (which is mathematically described by your objective function trying to minimize difference between ground truth and predicted labels). The last layer simply looks how close or far apart things are in this high dimensional space.**
Hence, we can give any kind of data a high dimensional representation using neural networks. Below we will see high dimensional representations of both words in overviews (text) and posters (image). Let's get started with the posters i.e. extracting visual features from posters using deep learning.
# Section 7 - Deep Learning for predicting genre from poster
Once again, we must make an implementation decision. This time, it has more to do with how much time we are willing to spend in return for added accuracy. We are going to use a technique that is commonly referred to as pre-training in the machine learning literature.
Instead of me trying to re-invent the wheel here, I am going to borrow this short section on pre-training from Stanford University's lecture on <a href='http://cs231n.github.io/transfer-learning/'> CNN's</a>. To quote -
''In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest. ''
There are three broad ways in which transfer learning or pre-training can be done. (The 2 concepts are different; to understand the difference clearly, I suggest you read the linked lecture thoroughly.) The way we are going to go about it is by using a pre-trained, released ConvNet as a feature extractor: take a ConvNet pretrained on ImageNet (a popular object recognition dataset) and remove the last fully-connected layer. After removing the last layer, what we have is just another neural network, i.e. a stack of space transformations. Originally, the output of this stack was pumped into a single layer which classified the image into categories like Car, Dog, Cat and so on.
What this means, is that in the space this stack transforms the images to, all images which contain a "dog" are closer to each other, and all images containing a "cat" are closer. Thus, it is a meaningful space where images with similar objects are closer.
Think about it: if we now pump our posters through this stack, it will embed them in a space where posters which contain similar objects are closer. This is a very meaningful feature engineering method! While it may not be ideal for genre prediction, it should still be quite informative. For example, all posters with a gun or a car probably point to action, while a smiling couple points to romance or drama. The alternative would be to train the CNN from scratch, which is fairly computationally intensive and involves a lot of tricks to get the training to converge to the optimal space transformation.
This way, we can start off with something strong, and then build on top. We pump our images through the pre-trained network to extract the visual features from the posters. Then, using these features as descriptors for the image, and genres as the labels, we train a simpler neural network from scratch which learns to do simply classification on this dataset. These 2 steps are exactly what we are going to do for predicting genres from movie posters.
## Deep Learning to extract visual features from posters
The basic question we are answering here is: can we use the posters to predict genre? First check - does this hypothesis make sense? Yes, because that's what graphic designers do for a living: they leave visual cues to semantics. They make sure that when we look at the poster of a horror movie, we know it's not a happy image. Can our deep learning system infer such subtleties? Let's find out!
For visual features, we can either train a deep neural network ourselves from scratch, or use a pre-trained one made available by the Visual Geometry Group at Oxford University - one of the most popular choices, called VGG-net. As people say, we will extract the VGG features of an image. Mathematically, as mentioned, it's just a space transformation in the form of layers, so we simply need to perform this chain of transformations on our image. Keras is a library that makes it very easy for us to do this. Other common ones are TensorFlow and PyTorch. While the latter two are very powerful and customizable and used more often in practice, Keras makes it easy to prototype by keeping the syntax simple.
We will be working with Keras to keep things simple in code, so that we can spend more time understanding and less time coding. Some common ways people refer to this step are "getting the VGG features of an image", or "forward propagating the image through VGG and chopping off the last layer". In Keras, this is as easy as writing 4 lines.
```
# Loading the list of movies we had downloaded posters for earlier -
f=open('poster_movies.pckl','rb')
poster_movies=pickle.load(f)
f.close()
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
import pickle
import os
model = VGG16(weights='imagenet', include_top=False)
allnames=os.listdir(poster_folder)
imnames=[j for j in allnames if j.endswith('.jpg')]
feature_list=[]
genre_list=[]
file_order=[]
print "Starting extracting VGG features for scraped images. This will take time, Please be patient..."
print "Total images = ",len(imnames)
failed_files=[]
succesful_files=[]
i=0
for mov in poster_movies:
i+=1
mov_name=mov['original_title']
mov_name1=mov_name.replace(':','/')
poster_name=mov_name.replace(' ','_')+'.jpg'
if poster_name in imnames:
img_path=poster_folder+poster_name
#try:
img = image.load_img(img_path, target_size=(224, 224))
succesful_files.append(poster_name)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
features = model.predict(x)
file_order.append(img_path)
feature_list.append(features)
genre_list.append(mov['genre_ids'])
        if np.max(features)==0.0:
            print 'problematic',i
if i%250==0 or i==1:
print "Working on Image : ",i
# except:
# failed_files.append(poster_name)
# continue
else:
continue
print "Done with all features, please pickle for future use!"
len(genre_list)
len(feature_list)
print type(feature_list[0])
feature_list[0].shape
# Reading from pickle below, this code is not to be run.
list_pickled=(feature_list,file_order,failed_files,succesful_files,genre_list)
f=open('posters_new_features.pckl','wb')
pickle.dump(list_pickled,f)
f.close()
print("Features dumped to pickle file")
f7=open('posters_new_features.pckl','rb')
list_pickled=pickle.load(f7)
f7.close()
# (feature_list2,file_order2)=list_pickled
```
### Training a simple neural network model using these VGG features.
```
(feature_list,files,failed,succesful,genre_list)=list_pickled
```
Let's first get the labels for our 1342 samples! As the image download fails in a few instances, the best way to line everything up correctly is to read the names of the posters that were actually downloaded, and work from there. These posters cannot be uploaded to GitHub as they are too large, so they are being read from my local computer. If you re-do this, you might have to check and edit the paths in the code to make sure it runs.
```
(a,b,c,d)=feature_list[0].shape
feature_size=a*b*c*d
feature_size
```
This looks odd - why are we looping over the features again below, so similarly to the loop above? The reason is simple: one of the most important things to know about numpy is that repeatedly using vstack() and hstack() is highly sub-optimal. When a numpy array is created, a fixed size is allocated in memory; when we stack, a new array is created and everything is copied to a new location. This makes the code really, really slow. The best way (and the same holds for MATLAB matrices, if you work with them) is to create a numpy array of zeros and over-write it row by row. The above code was just to see what size of numpy array we will need!
The final movie poster set for which we have all the information we need, is 1265 movies. In the above code we are making an X numpy array containing the visual features of one image per row. So, the VGG features are reshaped to be in the shape (1,25088) and we finally obtain a matrix of shape (1265,25088)
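The pattern described above - allocate once, overwrite row by row - can be sketched in isolation like this (the toy feature shapes below are made up; the real ones are (1,7,7,512)):

```python
import numpy as np

# Stand-in for the list of per-image VGG feature arrays:
feature_list = [np.full((1, 2, 2, 2), i, dtype=float) for i in range(5)]
feature_size = 2 * 2 * 2

# Allocate the full matrix once...
np_features = np.zeros((len(feature_list), feature_size))
# ...then overwrite it row by row, instead of repeated np.vstack calls,
# each of which would copy the whole array accumulated so far.
for i, feat in enumerate(feature_list):
    np_features[i] = feat.reshape(1, -1)

# Same result as stacking the reshaped features in one go:
assert np.array_equal(np_features, np.vstack([f.reshape(1, -1) for f in feature_list]))
```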
```
np_features=np.zeros((len(feature_list),feature_size))
for i in range(len(feature_list)):
feat=feature_list[i]
reshaped_feat=feat.reshape(1,-1)
np_features[i]=reshaped_feat
# np_features[-1]
X=np_features
from sklearn.preprocessing import MultiLabelBinarizer
mlb=MultiLabelBinarizer()
Y=mlb.fit_transform(genre_list)
Y.shape
```
Our Y numpy array contains the binarized labels corresponding to the genre IDs of the 1277 movies
```
visual_problem_data=(X,Y)
f8=open('visual_problem_data_clean.pckl','wb')
pickle.dump(visual_problem_data,f8)
f8.close()
f8=open('visual_problem_data_clean.pckl','rb')
visual_features=pickle.load(f8)
f8.close()
(X,Y)=visual_features
X.shape
mask = np.random.rand(len(X)) < 0.8
X_train=X[mask]
X_test=X[~mask]
Y_train=Y[mask]
Y_test=Y[~mask]
X_test.shape
Y_test.shape
```
Now, we create our own keras neural network to use the VGG features and then classify movie genres. Keras makes this super easy.
Neural network architectures have become complex over the years, but the simplest ones contain very standard computations organized in layers, as described above. Given the popularity of some of these, Keras makes it as easy as writing out the names of these operations in sequential order. This way you can build a network while completely avoiding the mathematics (though I HIGHLY RECOMMEND spending more time on the math).
Sequential() allows us to make models that follow this sequential order of layers. Different kinds of layers like Dense, Conv2D etc. can be used, and many activation functions like ReLU, Linear etc. are also available.
# Important Question : Why do we need activation functions?
#### Copy pasting the answer I wrote for this question on <a href='https://www.quora.com/Why-do-neural-networks-need-an-activation-function/answer/Spandan-Madan?srid=5ydm'>Quora</a> Feel free to leave comments there.
"Sometimes, we tend to get lost in the jargon and confuse things easily, so the best way to go about this is getting back to our basics.
Don’t forget what the original premise of machine learning (and thus deep learning) is - IF the input and output are related by a function y=f(x), then if we have x, there is no way to exactly know f unless we know the process itself. However, machine learning gives you the ability to approximate f with a function g, and the process of trying out multiple candidates to identify the function g best approximating f is called machine learning.
Ok, that was machine learning, and how is deep learning different? Deep learning simply tries to expand the possible kind of functions that can be approximated using the above mentioned machine learning paradigm. Roughly speaking, if the previous model could learn say 10,000 kinds of functions, now it will be able to learn say 100,000 kinds (in actuality both are infinite spaces but one is larger than the other, because maths is cool that ways.)
If you want to know the mathematics of it, go read about VC dimension and how more layers in a network affect it. But I will avoid the mathematics here and rely on your intuition to believe me when I say that not all data can be classified correctly into categories using a linear function. So, we need our deep learning model to be able to approximate more complex functions than just a linear function.
Now, let’s come to your non linearity bit. Imagine a linear function y=2x+3, and another one y=4x+7. What happens if I pool them and take an average? I get another linear function y= 3x+5. So instead of doing those two computations separately and then averaging it out, I could have just used the single linear function y=3x+5. Obviously, this logic holds good if I have more than 2 such linear functions. This is exactly what will happen if you don’t have have non-linearities in your nodes, and also what others have written in their answers.
It simply follows from the definition of a linear function -
(i) If you take two linear functions, AND
(ii)Take a linear combination of them (which is how we combine the outputs of multiple nodes of a network)
You are BOUND to get a linear function because f(x)+g(x)=mx+b+nx+c=(m+n)x+(b+c)= say h(x).
And you could in essence replace your whole network by a simple matrix transformation which accounts for all linear combinations and up/downsamplings.
In a nutshell, you’ll only be trying to learn a linear approximation for original function f relating the input and the output. Which as we discussed above, is not always the best approximation. Adding non-linearities ensures that you can learn more complex functions by approximating every non-linear function as a LINEAR combination of a large number of non-linear functions.
Still new to the field, so if there's something wrong here please comment below! Hope it helps!"
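The argument in the quoted answer can be verified numerically: stack two linear layers with no activation in between, and the result is exactly one linear layer (a small sketch with random weights, independent of the notebook's model):

```python
import numpy as np

# Two "layers" without activation functions: y = W2 @ (W1 @ x + b1) + b2.
# This collapses into a single linear map y = W @ x + b, which is why
# non-linearities are needed between layers.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)
two_layers = W2 @ (W1 @ x + b1) + b2

# The equivalent single layer:
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

assert np.allclose(two_layers, one_layer)
```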
#### Let's train our model then, using the features we extracted from VGG net
The model we will use has two small hidden layers between the VGG features and the final output layer - about the simplest neural network you can get. An image goes into this network with dimensions (1,25088); the first hidden layer's output is 1024-dimensional and the second's is 256-dimensional, each followed by a pointwise RELU activation. This gets transformed into the output layer of 19 dimensions, which goes through a sigmoid.
The sigmoid, or the squashing function as it is often called, squashes numbers to between 0 and 1. What are you reminded of when you think of numbers between 0 and 1? Right, probability.
By squashing the score of each of the 19 output labels between 0 and 1, the sigmoid lets us interpret the scores as probabilities. Then, we can just pick the classes with the top 3 or 5 probability scores as the predicted genres for the movie poster! Simple!
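As a small sketch of that last step (the scores below are made-up placeholders, not real model outputs):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical raw scores for the 19 genre outputs:
scores = np.linspace(-3, 3, 19)
probs = sigmoid(scores)             # each value squashed into (0, 1)
top_3 = np.argsort(probs)[-3:]      # indices of the 3 most probable genres
print(top_3)  # → [16 17 18]
```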
```
# Y_train[115]
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers
model_visual = Sequential([
Dense(1024, input_shape=(25088,)),
Activation('relu'),
Dense(256),
Activation('relu'),
Dense(19),
Activation('sigmoid'),
])
opt = optimizers.rmsprop(lr=0.0001, decay=1e-6)
#sgd = optimizers.SGD(lr=0.05, decay=1e-6, momentum=0.4, nesterov=False)
model_visual.compile(optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy'])
```
We train the model using the fit() function. The parameters it takes are - training features and training labels, epochs, batch_size and verbose.
Simplest one - verbose. 0 = "don't print anything as you work", 1 = "inform me as you go".
Often the dataset is too large to be loaded into RAM, so we load the data in batches. With batch_size=32 and epochs=10, the model loads rows from X in batches of 32; every time a batch is loaded, it calculates the loss and updates the model. It keeps going until it has covered all the samples 10 times.
So, the no. of times model is updated = (Total Samples/Batch Size) * (Epochs)
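Plugging in some illustrative numbers (the sample count of 1,000 is an assumption for the example, not the notebook's actual dataset size):

```python
# Number of parameter updates = batches per epoch * epochs.
samples, batch_size, epochs = 1000, 32, 10
updates_per_epoch = -(-samples // batch_size)  # ceiling division: 32 (the last batch is partial)
total_updates = updates_per_epoch * epochs
print(total_updates)  # → 320
```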
```
model_visual.fit(X_train, Y_train, epochs=10, batch_size=64,verbose=1)
model_visual.fit(X_train, Y_train, epochs=50, batch_size=64,verbose=0)
```
For the first 10 epochs I trained the model in a verbose fashion to show you what's happening. After that, in the below cell you can see I turned off the verbosity to keep the code cleaner.
```
Y_preds=model_visual.predict(X_test)
sum(sum(Y_preds))
```
### Let's look at some of our predictions?
```
f6=open('Genredict.pckl','rb')
Genre_ID_to_name=pickle.load(f6)
f6.close()
sum(Y_preds[1])
sum(Y_preds[2])
genre_list=sorted(list(Genre_ID_to_name.keys()))
precs=[]
recs=[]
for i in range(len(Y_preds)):
    row = Y_preds[i]
    gt_genres = Y_test[i]
    gt_genre_names = []
    for j in range(19):
        if gt_genres[j] == 1:
            gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
    top_3 = np.argsort(row)[-3:]
    predicted_genres = []
    for genre in top_3:
        predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
    (precision, recall) = precision_recall(gt_genre_names, predicted_genres)
    precs.append(precision)
    recs.append(recall)
    if i % 50 == 0:
        print("Predicted: ", ','.join(predicted_genres), " Actual: ", ','.join(gt_genre_names))
print(np.mean(np.asarray(precs)), np.mean(np.asarray(recs)))
```
So, even with just the poster, i.e. visual features, we are able to make good predictions! Sure, text outperforms the visual features, but the important thing is that the visual model still works. In more complicated models, we can combine the two to make even better predictions. That is precisely what I work on in my research.
These models were trained on CPUs, and a simple one-layer model was used to show that there is a lot of information in this data that the models can extract. With a larger dataset and more training, I was able to bring these numbers as high as 70%, which is similar to the textual features. Some teams in my class outperformed even this. More data is the first thing to try if you want better results. Then you can start playing with training on GPUs, learning-rate schedules, and other hyperparameters. Finally, you can consider using ResNet, a much more powerful neural network model than VGG. All of these can be tried once you have a working knowledge of machine learning.
# Section 8 - Deep Learning to get Textual Features
Let's now do the same thing as above with text.
We will use an off-the-shelf representation for words: the Word2Vec model. Just like VGGnet before, this is a pre-trained model made available to give us meaningful representations. Because the vocabulary is fixed, we don't even need to forward propagate our samples through a network; that has already been done for us, and the result is stored in the form of a dictionary. We can simply look a word up in the dictionary to get its Word2Vec features.
You can download the dictionary from here - https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit <br>
Download it to the directory of this tutorial i.e. in the same folder as this ipython notebook.
```
from gensim import models
# model2 = models.Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model2 = models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
```
Now, we can simply look up a word in the loaded model. For example, to get the Word2Vec representation of the word "king" we just write `model2['king']`.
```
print(model2['king'].shape)
print(model2['dog'].shape)
```
This way, we can represent the words in our overviews using this word2vec model, and then use that as our X representation. So, instead of counts of words, we are using a representation based on the semantics of each word. Mathematically, each word goes from a short string of characters to a 300-dimensional vector!
For the same set of movies above, let's try and predict the genres from the deep representation of their overviews!
```
final_movies_set = movies_with_overviews
len(final_movies_set)
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')
movie_mean_wordvec=np.zeros((len(final_movies_set),300))
movie_mean_wordvec.shape
```
Text needs some preprocessing before we can train the model. The only preprocessing we do here is deleting commonly occurring words that we know are not informative about the genre; think of them as clutter. These words are referred to as "stop words", and you can look them up online. They include simple words like "a", "and", "but", "how", "or", and so on. They can be easily removed using ready-made lists, such as those in the NLTK or `stop_words` Python packages.
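A stop-word filter is just a membership test. The tiny list below is for illustration only; real lists from the packages above are much longer:

```python
# Toy stop-word list; real lists come from packages such as stop_words or NLTK.
en_stop = {'a', 'and', 'but', 'how', 'or', 'the'}

tokens = ['the', 'dog', 'and', 'the', 'cat']
content_tokens = [t for t in tokens if t not in en_stop]
print(content_tokens)  # ['dog', 'cat']
```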
From the above dataset, movies whose overviews contain only stop words, or contain no words with a word2vec representation, are discarded. The rest are used to build our mean word2vec representation. Simply put, for every movie overview:
* Take the movie overview
* Throw out stop words
* For the remaining words:
 - If the word is in word2vec, take its word2vec representation, which is 300-dimensional
 - If not, throw the word out
* For each movie, calculate the arithmetic mean of the 300-dimensional vector representations of all words in the overview that weren't thrown out
This mean becomes the 300-dimensional representation of the movie. For all movies, these are stored in a numpy array, so the X matrix becomes (1263, 300), and Y is (1263, 20), i.e. the 20 binarized genres, as before.
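The recipe above can be sketched with a toy dictionary standing in for the real word2vec lookup (the vocabulary and the 4-dimensional vectors here are made up; the real vectors are 300-dimensional):

```python
import numpy as np

# Toy stand-in for the word2vec dictionary.
word_vecs = {
    'dog': np.array([1.0, 0.0, 0.0, 0.0]),
    'barks': np.array([0.0, 1.0, 0.0, 0.0]),
}
en_stop = {'the'}

def mean_wordvec(tokens):
    # Average the vectors of in-vocabulary, non-stop-word tokens.
    kept = [word_vecs[t] for t in tokens if t not in en_stop and t in word_vecs]
    return np.mean(kept, axis=0) if kept else None

vec = mean_wordvec(['the', 'dog', 'barks', 'loudly'])  # 'loudly' is out of vocabulary
print(vec)  # [0.5 0.5 0.  0. ]
```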
**Why do we take the arithmetic mean?**
If you feel that we should have kept all the words separately, you're thinking along the right lines, but sadly we're limited by the way current-day neural networks work. I won't dwell on this for fear of stressing an otherwise irrelevant detail too much. But if you're interested, read this awesome paper -
https://jiajunwu.com/papers/dmil_cvpr.pdf
```
genres=[]
rows_to_delete=[]
for i in range(len(final_movies_set)):
    mov = final_movies_set[i]
    movie_genres = mov['genre_ids']
    genres.append(movie_genres)
    overview = mov['overview']
    tokens = tokenizer.tokenize(overview)
    stopped_tokens = [k for k in tokens if k not in en_stop]
    count_in_vocab = 0
    s = 0
    if len(stopped_tokens) == 0:
        rows_to_delete.append(i)
        genres.pop(-1)
        # print(overview)
        # print("sample ", i, "had no nonstops")
    else:
        for tok in stopped_tokens:
            if tok.lower() in model2.vocab:
                count_in_vocab += 1
                s += model2[tok.lower()]
        if count_in_vocab != 0:
            movie_mean_wordvec[i] = s / float(count_in_vocab)
        else:
            rows_to_delete.append(i)
            genres.pop(-1)
            # print(overview)
            # print("sample ", i, "had no word2vec")
len(genres)
mask2 = []
for row in range(len(movie_mean_wordvec)):
    if row in rows_to_delete:
        mask2.append(False)
    else:
        mask2.append(True)
X=movie_mean_wordvec[mask2]
X.shape
Y=mlb.fit_transform(genres)
Y.shape
textual_features=(X,Y)
f9=open('textual_features.pckl','wb')
pickle.dump(textual_features,f9)
f9.close()
# textual_features=(X,Y)
f9=open('textual_features.pckl','rb')
textual_features=pickle.load(f9)
f9.close()
(X,Y)=textual_features
X.shape
Y.shape
mask_text=np.random.rand(len(X))<0.8
X_train=X[mask_text]
Y_train=Y[mask_text]
X_test=X[~mask_text]
Y_test=Y[~mask_text]
```
Once again, we use a very similar, super simple architecture as before.
```
from keras.models import Sequential
from keras.layers import Dense, Activation
model_textual = Sequential([
    Dense(300, input_shape=(300,)),
    Activation('relu'),
    Dense(19),
    Activation('softmax'),  # note: sigmoid, as in the visual model, is the more common choice for multi-label outputs
])
model_textual.compile(optimizer='rmsprop',
                      loss='binary_crossentropy',
                      metrics=['accuracy'])
model_textual.fit(X_train, Y_train, epochs=10, batch_size=500)
model_textual.fit(X_train, Y_train, epochs=10000, batch_size=500,verbose=0)
score = model_textual.evaluate(X_test, Y_test, batch_size=249)
print("%s: %.2f%%" % (model_textual.metrics_names[1], score[1]*100))
Y_preds=model_textual.predict(X_test)
genre_list.append(10769)
print("Our predictions for the movies are - \n")
precs=[]
recs=[]
for i in range(len(Y_preds)):
    row = Y_preds[i]
    gt_genres = Y_test[i]
    gt_genre_names = []
    for j in range(19):
        if gt_genres[j] == 1:
            gt_genre_names.append(Genre_ID_to_name[genre_list[j]])
    top_3 = np.argsort(row)[-3:]
    predicted_genres = []
    for genre in top_3:
        predicted_genres.append(Genre_ID_to_name[genre_list[genre]])
    (precision, recall) = precision_recall(gt_genre_names, predicted_genres)
    precs.append(precision)
    recs.append(recall)
    if i % 50 == 0:
        print("Predicted: ", predicted_genres, " Actual: ", gt_genre_names)
print(np.mean(np.asarray(precs)), np.mean(np.asarray(recs)))
```
Even without much tuning, the above model is able to beat our previous results.
Note - I got accuracies as high as 78% when doing classification using plots scraped from Wikipedia. That larger amount of text was very well suited to movie genre classification with a deep model. I strongly suggest you try playing around with architectures.
# Section 9 - Upcoming Tutorials and Acknowledgements
Congrats! This is the end of our pilot project! Needless to say, a lot of the above content may be new to you, or may be things you know very well. If it's the former, I hope this tutorial has helped you. If it's the latter and you think I wrote something incorrect, or that my understanding could be improved, feel free to create a GitHub issue so that I can correct it!
Writing tutorials can take a lot of time, but it is a great learning experience. I am currently working on a tutorial focusing on word embeddings, which will explore word2vec and other word embeddings in detail. While it will take some time to be up, I will post a link to its repository in the README for this project so that interested readers can find it.
I would like to thank a few of my friends who had an indispensable role in the making of this tutorial. Firstly, Professor Hanspeter Pfister and Verena Kaynig at Harvard, who helped guide and scope this tutorial/project. Secondly, my friends Sahil Loomba and Matthew Tancik for their suggestions and for editing the material and the presentation of the storyline. Thirdly, Zoya Bylinskii at MIT for constantly motivating me to put effort into this tutorial. Finally, all others who helped me feel confident enough to take up this task and see it through to the end. Thank you all!
## <span style="color:purple">ArcGIS API for Python: Traffic and Pedestrian Activity Detection</span>

## Integrating ArcGIS with TensorFlow Deep Learning using the ArcGIS API for Python
## Jackson Hole, Wyoming Traffic Intersection Detection
This notebook provides an example of integration between ArcGIS and deep learning frameworks like TensorFlow using the ArcGIS API for Python.
<img src="../img/ArcGIS_ML_Integration.png" style="width: 75%"></img>
We will leverage a model to detect objects on a live video feed from YouTube and use the detections to update a feature service on a web GIS in real time. As people, cars, trucks, and buses are detected on the feed, the feature will be updated to reflect the detections.
This concept works with a convolutional neural network built using the TensorFlow Object Detection API.
<img src="../img/dogneuralnetwork.png"></img>
# Imports
```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import getpass
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from PIL import ImageGrab
import time
import pandas as pd
import cv2
```
## Env setup
```
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
```
## Object detection imports
Here are the imports from the object detection module.
```
from utils import label_map_util
from utils import visualization_utils as vis_util
```
# Model preparation
## Variables
Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
```
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
```
## Load a (frozen) Tensorflow model into memory.
```
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
```
## Loading label map
Label maps map indices to category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
```
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
category_index
```
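For instance, a hand-built dictionary with the same shape as `category_index` would work just as well (only four of the 90 COCO classes are shown here):

```python
# Minimal stand-in for label_map_util's category index: COCO class ids -> names.
category_index_example = {
    1: {'id': 1, 'name': 'person'},
    2: {'id': 2, 'name': 'bicycle'},
    3: {'id': 3, 'name': 'car'},
    5: {'id': 5, 'name': 'airplane'},
}
print(category_index_example[5]['name'])  # airplane
```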
## Helper code
```
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
```
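To see what the helper produces, here is the same reshape applied to fabricated flat pixel data for a tiny 2x2 image (the shape `image.getdata()` would return):

```python
import numpy as np

im_width, im_height = 2, 2
flat_pixels = np.array([[255, 0, 0]] * (im_width * im_height))  # four red RGB pixels
arr = flat_pixels.reshape((im_height, im_width, 3)).astype(np.uint8)
print(arr.shape)  # (2, 2, 3)
```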
This helper function takes the detection graph's output tensors (numpy arrays), stacks the classes and scores, and counts how many detections of each class (e.g. person, class 1) have a score above a given threshold.
```
def object_counter(classes_arr, scores_arr, score_thresh=0.3):
    # Process the numpy array of classes from the model
    stacked_arr = np.stack((classes_arr, scores_arr), axis=-1)
    # Convert to pandas dataframe for easier querying
    detection_df = pd.DataFrame(stacked_arr)
    # Retrieve per-class counts with score above the threshold value
    detected_cars = detection_df[(detection_df[0] == 3.0) & (detection_df[1] > score_thresh)]
    detected_people = detection_df[(detection_df[0] == 1.0) & (detection_df[1] > score_thresh)]
    detected_bicycles = detection_df[(detection_df[0] == 2.0) & (detection_df[1] > score_thresh)]
    detected_motorcycles = detection_df[(detection_df[0] == 4.0) & (detection_df[1] > score_thresh)]
    detected_buses = detection_df[(detection_df[0] == 6.0) & (detection_df[1] > score_thresh)]
    detected_trucks = detection_df[(detection_df[0] == 8.0) & (detection_df[1] > score_thresh)]
    car_count = len(detected_cars)
    people_count = len(detected_people)
    bicycle_count = len(detected_bicycles)
    motorcycle_count = len(detected_motorcycles)
    bus_count = len(detected_buses)
    truck_count = len(detected_trucks)
    return car_count, people_count, bicycle_count, motorcycle_count, bus_count, truck_count
```
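The same per-class counting can be sketched with plain NumPy, without pandas (the detections below are hypothetical; COCO ids 1 = person, 3 = car, 8 = truck):

```python
import numpy as np

def count_class(classes_arr, scores_arr, class_id, score_thresh=0.3):
    # Count detections of one class id with confidence above the threshold.
    classes_arr = np.asarray(classes_arr)
    scores_arr = np.asarray(scores_arr)
    return int(np.sum((classes_arr == class_id) & (scores_arr > score_thresh)))

classes = [3.0, 1.0, 3.0, 8.0]  # car, person, car, truck
scores = [0.9, 0.8, 0.2, 0.5]
print(count_class(classes, scores, 3.0))  # 1: the second car is below threshold
print(count_class(classes, scores, 1.0))  # 1
```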
# Establish Connection to ArcGIS Online via ArcGIS API for Python
#### Authenticate
```
import arcgis
gis_url = "" # Replace with gis URL
username = "" # Replace with username
gis = arcgis.gis.GIS(gis_url, username)
```
### Retrieve the Object Detection Point Layer
```
object_point_srvc = gis.content.search("JHWY_ML_Detection_02")[1]
object_point_srvc
# Convert our existing service into a pandas dataframe
object_point_lyr = object_point_srvc.layers[0]
obj_fset = object_point_lyr.query() #querying without any conditions returns all the features
obj_df = obj_fset.df
obj_df.head()
all_features = obj_fset.features
all_features
from copy import deepcopy
original_feature = all_features[0]
feature_to_be_updated = deepcopy(original_feature)
feature_to_be_updated
```
# Detection
```
# logging = "verbose" # Options: verbose | simple | cars
# logging = "simple"
# logging = "cars"
logging = "none"
```
# ArcGIS API for Python and TensorFlow Deep Learning Model
## Start Model Detection
#### 1366 x 768
```
# Top Left: YouTube Live Feed
# Bottom left: Detection
# Right Half: Operations Dashboard
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Define input and output Tensors for detection_graph
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')
        while True:
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np = np.array(ImageGrab.grab(bbox=(0, 0, 683, 444)))
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # Actual detection.
            (boxes, scores, classes, num) = sess.run(
                [detection_boxes, detection_scores, detection_classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # print(np.squeeze(classes))
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8, min_score_thresh=0.3)
            cv2.imshow('object detection', cv2.resize(image_np, (683, 444)))
            # cv2.imshow('object detection', cv2.resize(image_np, (683, 444), interpolation=cv2.INTER_CUBIC))
            if cv2.waitKey(25) & 0xFF == ord('q'):
                cv2.destroyAllWindows()
                break
            car_count, people_count, bicycle_count, motorcycle_count, bus_count, truck_count = object_counter(np.squeeze(classes).astype(np.int32), np.squeeze(scores))
            vehicle_count = car_count + motorcycle_count + bus_count + truck_count
            total_count = vehicle_count + bicycle_count + people_count
            if logging == "verbose":
                print("\n")
                print("Detected {0} total objects...".format(total_count))
                print("Detected {0} total vehicles...".format(vehicle_count))
                print("Detected {0} cars...".format(car_count))
                print("Detected {0} motorcycles...".format(motorcycle_count))
                print("Detected {0} buses...".format(bus_count))
                print("Detected {0} trucks...".format(truck_count))
                print("Detected {0} pedestrians...".format(people_count))
                print("Detected {0} bicycles...".format(bicycle_count))
            elif logging == "simple":
                print("\n")
                print("Detected {0} total objects...".format(total_count))
                print("Detected {0} total vehicles...".format(vehicle_count))
                print("Detected {0} pedestrians...".format(people_count))
                print("Detected {0} bicycles...".format(bicycle_count))
            elif logging == "cars":
                print("\n")
                print("Detected {0} cars...".format(car_count))
            elif logging == "none":
                pass
            features_for_update = []
            feature_to_be_updated.attributes['RT_Object_Count'] = total_count
            feature_to_be_updated.attributes['RT_Vehicle_Count'] = vehicle_count
            feature_to_be_updated.attributes['RT_Car_Count'] = car_count
            feature_to_be_updated.attributes['RT_Bus_Count'] = bus_count
            feature_to_be_updated.attributes['RT_Truck_Count'] = truck_count
            feature_to_be_updated.attributes['RT_Motorcycle_Count'] = motorcycle_count
            feature_to_be_updated.attributes['RT_Pedestrian_Count'] = people_count
            feature_to_be_updated.attributes['RT_Bicycle_Count'] = bicycle_count
            features_for_update.append(feature_to_be_updated)
            object_point_lyr.edit_features(updates=features_for_update)
```
# Resources
### Framework: ArcGIS API for Python; TensorFlow
### Object Detection Model: SSD MobileNet
### Source Labeled Data: Common Objects in Context (cocodataset.org)
<a href="https://colab.research.google.com/github/mbohling/spiking-neuron-model/blob/main/Hodgkin-Huxley/SpikingNeuronModel_HH.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# The Spiking Neuron Model - Coding Challenge Problems (Part 3)
# Hodgkin-Huxley Spiking Neuron Model
This interactive document is meant to be followed as the reader makes their way through chapter: *The Spiking Neuron Model*. Each model presented in the chapter will have a section consisting of a step-by-step walkthrough of a simple Python implementation. This is followed by an interface to run simulations with different parameter values to answer the Coding Challenge Problems.
For each model covered in the chapter, there is a section called **Coding Challenge Problems.** This is where you will find user-interface components such as value sliders for various parameters. Use these controls to answer the questions from the text.
**Content Creator**: Maxwell E. Bohling
**Content Reviewer**: Lawrence C. Udeigwe
## How It Works
Google Colab Notebooks have both *Content* cells and *Code* cells. As you progress through the notebook, you MUST make sure to run each code cell as you come to it; otherwise, you may run into errors when executing a later code cell. Each code cell has a Play button next to it which will execute the code. (Some code may be hidden by default. This is generally because the code is more complex and is not necessary to understand in order to complete the model implementations or to answer the chapter Coding Challenge Problems.)
**IMPORTANT**: You have been provided a link to view a **copy** of the original notebooks. You will find that you can edit the content of any cell. If you accidentally change a cell, such as a line of code, and/or run into errors as you try to run subsequent blocks, simply refresh the page, OR go to the *Runtime* menu and select *Restart runtime*. It is also suggested that you go to the *Edit* menu and select *Clear all outputs*. This will always allow you to revert the notebook to the original version (though you will have to run each code block again).
Execute the **Initialize Setup** code block below.
```
#@title Initialize Setup
#@markdown **(No need to understand this code, simply make sure you run this first).**
import sys
import functools as ft
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import ipywidgets as widgets
import scipy as sc
# [BLOCK TAG: INIT]
try:
    blockSet = []
except:
    print('Something went wrong! Try Refreshing the page.')

blockTags = ['INIT', 'VP1', 'NP1', 'SS1', 'SS2', 'SS3', 'CS1', 'CS2', 'CS3', 'VR1']

def pushBlockStack(tag):
    if tag in blockSet:
        return 1
    indx = blockTags.index(tag)
    if len(blockSet) != indx:
        print('ERROR: BLOCK TAG:', tag, 'executed out of sequence. Missing BLOCK TAG:', blockTags[indx - 1])
        return 0
    else:
        blockSet.append(tag)
        return 1

def printError():
    message = 'Something went wrong!\n\n'
    message += 'Check for the following:\n\n'
    message += '\t1. All previous code blocks have been run in the order they appear and output a success message.\n'
    message += '\t2. No other code has been altered.\n\n'
    message += 'and then try running the code block again.'
    message += ' If there is still an error when executing the code block, try the following:\n\n'
    message += '\t1. Go to the \'Runtime\' menu and select \'Restart Runtime\', then in the \'Edit\' menu, select \'Clear all outputs\'.\n'
    message += '\t2. Refresh the page.\n\n'
    message += 'and be sure to run each of the previous code blocks again beginning with \'Initialize Setup\'.\n'
    print(message)
    return 0

def printSuccess(block):
    success = 0
    if len(block) == 0 or pushBlockStack(block) != 0:
        print('Success! Move on to the next section.')
        success = 1
    return success

def checkVoltageParameters(Vrest):
    print('Checking Voltage Parameters... ')
    vals = [Vrest]
    correct_vals = [-65]
    # Fail (return 0) if any value differs from its expected value.
    if any(v != c for v, c in zip(vals, correct_vals)):
        return 0
    return 1

def checkNeuronProperties(A, Ie, GL, GK, GNa, EL, EK, ENa):
    print('Checking Neuron Properties... ')
    vals = [A, Ie, GL, GK, GNa, EL, EK, ENa]
    correct_vals = [0.1, 1.75, 0.03, 3.6, 12, -54.4, -77, 50]
    if any(v != c for v, c in zip(vals, correct_vals)):
        return 0
    return 1

def checkSimulationSetup(Vrest, Vinitial, t0, dt, t_final, time, n_initial, m_initial, h_initial, start_current, end_current):
    print('Checking Simulation Setup... ')
    vals = [Vrest, Vinitial, t0, dt, t_final, n_initial, m_initial, h_initial, start_current, end_current]
    correct_vals = [-65, -65, 0, 0.01, 20, 0.1399, 0.0498, 0.6225, 5, 10]
    if any(v != c for v, c in zip(vals, correct_vals)):
        return 0
    # The time array is checked separately from the scalar values.
    if len(time) != 2000 or time[0] != 0 or time[-1] != 20:
        return 0
    return 1

def checkValues():
    chk = 3
    if checkVoltageParameters(Vrest) < 1:
        print('FAIL\n')
        chk = chk - 1
    else:
        print('PASS\n')
    if checkNeuronProperties(A, Ie, GL, GK, GNa, EL, EK, ENa) < 1:
        print('FAIL\n')
        chk = chk - 1
    else:
        print('PASS\n')
    if checkSimulationSetup(Vrest, Vinitial, t0, dt, t_final, time, n_initial, m_initial, h_initial, start_current, end_current) < 1:
        print('FAIL\n')
        chk = chk - 1
    else:
        print('PASS\n')
    return chk

try:
    check_sys = sys
except:
    printError()
else:
    if 'functools' not in sys.modules:
        printError()
    else:
        printSuccess('INIT')
```
## Walkthrough
The goal of this section is to write a Python implementation of the Hodgkin-Huxley model. Recall from the chapter text that we need to account for both activation and inactivation gating variables in order to simulate the persistent and transient conductances involved in the membrane current equation.
### Membrane Current
The Hodgkin-Huxley model is expressed as membrane current equation given as:
> $ \displaystyle i_{m} = \overline{g}_{L}(V-E_{L}) + \overline{g}_{K}n^4(V-E_{K}) + \overline{g}_{Na}m^3h(V-E_{Na})$
with maximal conductances $\overline{g}_{L},\;$ $\overline{g}_{K}\;$ $\overline{g}_{Na}\;$ and reversal potentials $E_{L},\;$ $E_{K},\;$ $E_{Na}$.
As with the previous models, Euler's method is used to compute the time evolution of the membrane potential $V$. For this model, we use the same numerical integration method to compute the evolution of the gating variables $n$, $m$, and $h$.
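As a minimal illustration of Euler's method itself, here it is applied to the toy equation $dy/dt = -y$ (not the full membrane equation), whose exact solution is $e^{-t}$:

```python
import math

dt = 0.01
y = 1.0                      # y(0)
for _ in range(100):         # integrate from t = 0 to t = 1
    y = y + dt * (-y)        # one Euler step: y(t + dt) ~ y(t) + dt * dy/dt
print(round(y, 4), round(math.exp(-1), 4))  # Euler estimate vs exact value
```

Shrinking `dt` brings the Euler estimate closer to the exact value, at the cost of more iterations.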
### Membrane Equation
Recall that the membrane equation is expressed as follows:
> $ \displaystyle \frac{dV}{dt} = -i_m+ \frac{I_{e}}{A} $
### Voltage Parameters
As opposed to the integrate-and-fire model, the Hodgkin-Huxley model does not utilize a spiking mechanism. Therefore, we only need to define the *voltage parameter* that determines the *resting* membrane potential value.
* $ V_{rest} = -65\;$*mV*
```
# [BLOCK TAG: VP1]
try:
    check_BlockSet = blockSet
except:
    print('ERROR: BLOCK TAG: VP1 executed out of sequence. Missing BLOCK TAG: INIT')
else:
    try:
        ##################################################################################
        # Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
        Vrest = -65
        ##################################################################################
    except:
        printError()
    else:
        printSuccess('VP1')
```
### Neuron Properties
The membrane equation is described by a total membrane current $i_{m}$ as a sum of:
1. A *leakage current*: $ \displaystyle\; \overline{g}_{L}(V-E_{L}) $
2. A *persistent current*: $\displaystyle\; \overline{g}_{K}n^4(V-E_{K}) $
3. A *transient current*: $\displaystyle\; \overline{g}_{Na}m^3h(V-E_{Na})$
Thus, the persistent conductance is modeled as a K$^+$ conductance and the transient conductance is modeled as a Na$^+$ conductance. For each current, we define the maximal conductances:
* $ \displaystyle\; \overline{g}_{L} = 0.03\;$nS / mm$^2$
* $ \displaystyle\; \overline{g}_{K} = 3.6\;$nS / mm$^2$
* $ \displaystyle\; \overline{g}_{Na} = 12\;$nS / mm$^2$
and reversal potentials:
* $ \displaystyle\; E_{L} = -54.4\;$mV
* $ \displaystyle\; E_{K} = -77\;$mV
* $ \displaystyle\; E_{Na} = 50\;$mV
Lastly, as seen in the membrane equation for the model, we must define the value of the injected current, and the neuronal surface area:
* $ \displaystyle\; I_{e} = 1.75\;$nA
* $ \displaystyle\; A = 0.1\;$mm$^2$
```
# [BLOCK TAG: NP1]
try:
    ##################################################################################
    # Maximal Conductances - Units nS/mm^2
    GL = 0.03
    GK = 3.6
    GNa = 12
    # Reversal Potentials - Units mV
    EL = -54.4
    EK = -77
    ENa = 50
    # Input current: Ie - Units nA (1 nA = 1e-9 Amperes)
    Ie = 1.75
    # Neuron Surface Area - Units mm^2
    A = 0.1
    ##################################################################################
except:
    printError()
else:
    printSuccess('NP1')
```
### Simulation Setup
To setup our simulation, we need initial values of each variable: $V$, $n$, $m$, and $h$ as well as a list to hold the values over time.
Set initial values as:
* $V_{initial}= V_{rest} = -65\;$*mV*
* $n_{initial} = 0.1399$
* $m_{initial} = 0.0498$
* $h_{initial} = 0.6225$
With each value defined at time $t = 0$, let $V_0 = V_{initial}, n_0 = n_{initial}, m_0 = m_{initial}, h_0 = h_{initial} $.
The initial membrane current is then:
* $\displaystyle i_{initial} = \overline{g}_{L}(V_0-E_{L}) + \overline{g}_{K}n_0^4(V_0-E_{K}) + \overline{g}_{Na}m_0^3h_0(V_0-E_{Na})$
Here we make use of the **numpy** library (to learn more about how to use this library, go to https://numpy.org/doc/stable/).
```
# [BLOCK TAG: SS1]
try:
    ##################################################################################
    # Initial voltage
    Vinitial = Vrest
    # Initial gating variable values (Probability [0, 1])
    n_initial = 0.1399
    m_initial = 0.0498
    h_initial = 0.6225
    # Initial membrane current
    im_initial = GL*(Vinitial-EL) + GK*np.power(n_initial, 4)*(Vinitial-EK) + GNa*np.power(m_initial, 3)*h_initial*(Vinitial-ENa)
    ##################################################################################
except:
    printError()
else:
    printSuccess('SS1')
```
We will be running a 20 ms simulation. The following lines of code setup a time span for the simulation. This is simply a matter of defining the start time $t_{0} = 0$ and the total length (in ms) of the simulation: $t_{final} = 20$.
Throughout the simulation, we calculate the membrane potential $V$ at each *time-step*. The time-step is the change in time for each iteration of the simulation, for example if $t_{0} = 0$, the next computation of $V$ is performed at $t_{0} + dt$.
Thus, by setting $dt = 0.01$ (in ms), the simulation will compute $V$, $n$, $m$, and $h$ at times $t = dt, 2dt, \ldots, t_{final}$.
```
# [BLOCK TAG: SS2]
try:
    ##################################################################################
    # Simulation Time Span (0 to 20ms, dt = 0.01ms)
    t0 = 0
    dt = 0.01
    t_final = 20
    # What does the linspace() function do?
    time = np.linspace(t0, t_final, 2000)
    ##################################################################################
except:
    printError()
else:
    printSuccess('SS2')
```
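To answer the question in the comment above: `np.linspace(start, stop, num)` returns `num` evenly spaced values from `start` to `stop`, with both endpoints included:

```python
import numpy as np

t = np.linspace(0, 20, 5)
print(t)                      # [ 0.  5. 10. 15. 20.]
print(len(t), t[0], t[-1])    # 5 0.0 20.0
```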
Next, we define the time $t$ at which the injected current $I_{e}$ is *switched on* and applied to the neuron, and the time $t$ at which the injected current is *switched off*.
For the Hodgkin-Huxley model, we run a shorter simulation and we apply the current from $t = 5\;$ms to $t = 10\;$ms.
```
# [BLOCK TAG: SS3]
try:
    ##################################################################################
    # Time at which the current is applied - Units ms
    start_current = 5
    # Time at which the current is switched off - Units ms
    end_current = 10
    ##################################################################################
except:
    printError()
else:
    printSuccess('SS3')
```
### Computing and Storing $\frac{dV}{dt}$, $\frac{dn}{dt}$, $\frac{dm}{dt}$, $\frac{dh}{dt}$
We are about ready to finish the code implementation for simulating a Hodgkin-Huxley model neuron.
We need some way to store the values of the membrane potential $V, n, m, h$ at each time step. To do this, we simply create empty lists $V[t], n[t], m[t], h[t]$ with a length equal to the number of time-steps of our simulation.
```
# [BLOCK TAG: CS1]
try:
##################################################################################
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m = [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
##################################################################################
except:
printError()
else:
printSuccess('CS1')
```
### Opening and Closing Rate Functions for Gating Variables
The gating variables $n$, $m$, and $h$ represent **probabilities** that a gating mechanism in the persistent and transient ion-conducting channels is open, or *activated*.
For any gating variable $z$, the open probability of a channel at any time $t$ is computed using an *opening* rate function $\alpha_{z}(V)$ and a *closing* rate function $\beta_{z}(V)$, both of which are functions of the membrane potential $V$.
Each gating variable is numerically integrated using Euler's method throughout the simulation, where for any gating variable $z$ the dynamics are given as follows:
> $ \displaystyle \tau_{z}(V)\frac{dz}{dt} = z_{\infty}(V) - z $
where
> $ \displaystyle \tau_{z}(V) = \frac{1}{\alpha_{z}(V) + \beta_{z}(V)} $
and
> $ \displaystyle z_{\infty}(V) = \frac{\alpha_{z}(V) }{\alpha_{z}(V) + \beta_{z}(V)} $
#### Fitted Rate Functions
Hodgkin and Huxley fit the opening and closing rate functions to experimental data. The fitted functions are as follows:
---
For activation variable $n$
> $ \displaystyle \alpha_{n}(V) = \frac{0.01(V+55)}{ 1 - \exp(-0.1(V+55))} $
> $ \displaystyle \beta_{n}(V) = 0.125\exp(-0.0125(V+65)) $
---
For activation variable $m$
> $ \displaystyle \alpha_{m}(V) = \frac{0.1(V+40)}{1 - \exp(-0.1(V+40))}$
> $ \displaystyle \beta_{m}(V) = 4\exp(-0.0556(V+65)) $
---
For inactivation variable $h$
> $ \displaystyle \alpha_{h}(V) = 0.07\exp(-0.05(V+65))$
> $ \displaystyle \beta_{h}(V) = \frac{1}{1 + \exp(-0.1(V+35))} $
We define separate functions for each gating variable. These take the membrane potential $V$ as input, and output $dz$ where $z = n, m, h$.
Using the functional forms and fitted rate functions above, these functions compute the changes $dn$, $dm$, and $dh$ over each time-step $dt$, which depend on the membrane potential $V$ at time $t$.
Execute the code block **Initialize Helper Functions**.
```
#@title Initialize Helper Functions
#@markdown **(Double-Click the cell to show the code)**
# [BLOCK TAG: CS2]
##################################################################################
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
##################################################################################
x = printSuccess('CS2')
```
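As a quick, self-contained sanity check (not part of the original notebook), we can evaluate the steady-state values $z_{\infty}(V) = \alpha_{z}/(\alpha_{z}+\beta_{z})$ at the resting potential $V = -65\;$mV, using the same fitted rate functions:

```python
import numpy as np

def steady_state(v):
    # n: potassium activation
    alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v + 55)))
    beta_n = 0.125*np.exp(-0.0125*(v + 65))
    # m: sodium activation
    alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v + 40)))
    beta_m = 4*np.exp(-0.0556*(v + 65))
    # h: sodium inactivation
    alpha_h = 0.07*np.exp(-0.05*(v + 65))
    beta_h = 1/(1 + np.exp(-0.1*(v + 35)))
    # z_inf = alpha / (alpha + beta) for each gating variable
    return (alpha_n/(alpha_n + beta_n),
            alpha_m/(alpha_m + beta_m),
            alpha_h/(alpha_h + beta_h))

n_inf, m_inf, h_inf = steady_state(-65)
print(round(n_inf, 3), round(m_inf, 3), round(h_inf, 3))  # → 0.318 0.053 0.596
```

The notebook's chosen initial values ($n = 0.1399$, $m = 0.0498$, $h = 0.6225$) differ somewhat from these steady-state values, so the gating variables relax toward steady state during the first few milliseconds of the simulation.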
Finally, we run our simulation according to the updated *pseudocode*:
---
*for each time-step from $t = t_{0}$ to $t = t_{final}$*
> *If the current time $t \geq start_{current}\ $ and $\ t \leq end_{current}$*
>> $I_{e} = 1.75\;$nA
> *otherwise*
>> $I_{e} = 0\;$nA
> *First compute the open probabilities for each gating variable*
> $ \displaystyle dn = $ **compute_dn**$(V[t], n[t])$
> *Update* $ n[t+1] = n[t] + dn $
> $ \displaystyle dm = $ **compute_dm**$(V[t], m[t])$
> *Update* $ m[t+1] = m[t] + dm $
> $ \displaystyle dh = $ **compute_dh**$(V[t], h[t])$
> *Update* $ h[t+1] = h[t] + dh $
> $ \displaystyle i_{m}[t+1] = \overline{g}_{L}(V[t]-E_{L}) + \overline{g}_{K}n[t+1]^4(V[t]-E_{K}) + \overline{g}_{Na}m[t+1]^3h[t+1](V[t]-E_{Na})$
> *Use Euler's Method of Numerical Integration*
> $ \displaystyle dV= dt\left(-i_m[t+1]+ \frac{I_{e}}{A}\right) $
> *Update* $V[t+1] = V[t] + dV$
*end*
---
This translates to the following Python code.
```
# [BLOCK TAG: CS3]
try:
chk = checkValues()
except:
printError()
else:
try:
##################################################################################
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# If time t >= 5 ms and t <= 10 ms, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-1*im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
##################################################################################
except:
printError()
else:
if chk == 3:
printSuccess('CS3')
else:
printError()
```
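Once the simulation has run, a common follow-up is to count the action potentials in the voltage trace. A minimal sketch of one way to do this, counting upward threshold crossings, is shown below; the `count_spikes` helper and the synthetic trace are illustrative additions, not part of the notebook:

```python
def count_spikes(V, threshold=0.0):
    # Count upward crossings of the threshold (one count per spike)
    spikes = 0
    for prev, curr in zip(V, V[1:]):
        if prev < threshold <= curr:
            spikes += 1
    return spikes

# Synthetic trace (mV) with two spike-like excursions above 0 mV
trace = [-65, -50, 10, 30, -20, -65, -40, 5, 20, -60]
print(count_spikes(trace))  # → 2
```

Applied to the simulated `V` list, this would report how many spikes the injected current elicited.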
### Visualizing Results
Now that we have values of $V$, $i_m$, $n$, $m$, and $h$ for each time-step of the simulation, we can visualize the results by using Python to plot the data. This makes use of another widely used library, **plotly** (to learn more about plotting data with this library, see https://plotly.com/python/reference/index/).
```
# [BLOCK TAG: VR1]
try:
if 'CS2' not in blockSet:
print('ERROR: BLOCK TAG: VR1 executed out of sequence. Missing BLOCK TAG: CS2')
else:
try:
##################################################################################
# Data
x = list(time)
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()
##################################################################################
printSuccess('VR1')
except:
printError()
except:
printError()
```
## Hodgkin-Huxley Spiking Neuron Model - Full Code
```
import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
# Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
Vrest = -65
#Maximal Conductances - Units nS/mm^2
GL = 0.03
GK = 3.6
GNa = 12
# Reversal Potentials - Units mV
EL = -54.4
EK = -77
ENa = 50
# Input current: Ie - Units nA (1 nA = 1e-9 Amperes)
Ie = 1.75
# Neuron Surface Area - Units mm^2
A = 0.1
# Simulation Time Span (0 to 20ms, dt = 0.01ms)
t0 = 0
dt = 0.01
t_final = 20
time = np.linspace(t0, t_final, 2000)
# Time at which the current is applied - Units ms
start_current = 5
# Time at which the current is switched off - Units ms
end_current = 10
# Initial voltage
Vinitial = Vrest
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Initial gating variable values (Probability [0, 1])
n_initial = 0.1399
m_initial = 0.0498
h_initial = 0.6225
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m = [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Initial membrane current
im_initial = GL*(V[0]-EL)+GK*np.power(n[0],4)*(V[0]-EK)+GNa*np.power(m[0],3)*h[0]*(V[0]-ENa)
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
# If time t >= 5 ms and t <= 10 ms, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
# Data
x = list(time)
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()
```
## Coding Challenge Problems
```
#@title Run Simulation
#@markdown Execute the code block and use the sliders to set values in order to answer the Coding Challenge Problems in the chapter text.
#@markdown (Tip: Select a slider and use the left and right arrow keys to slide to the desired value.)
import numpy as np
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import ipywidgets as widgets
# Voltage Parameters - Units mV (1 mV = 1e-3 Volts)
Vrest = -65
#Maximal Conductances - Units nS/mm^2
GL = 0.03
GK = 3.6
GNa = 12
# Reversal Potentials - Units mV
EL = -54.4
EK = -77
ENa = 50
# Input current: Ie - Units nA (1 nA = 1e-9 Amperes)
Ie = 1.75
# Neuron Surface Area - Units mm^2
A = 0.1
# Time at which the current is applied - Units ms
start_current = 5
# Time at which the current is switched off - Units ms
end_current = 10
# Initial voltage
Vinitial = Vrest
# Simulation Time Span (0 to 20ms, dt = 0.01ms)
t0 = 0
dt = 0.01
t_final = 20
time = np.linspace(t0, t_final, int(t_final/dt))
# Create a list V(t) to store the value of V at each time-step dt
V = [0] * len(time)
# Set the initial value at time t = t0 to the initial value Vinitial
V[0] = Vinitial
# Initial gating variable values (Probability [0, 1])
n_initial = 0.1399
m_initial = 0.0498
h_initial = 0.6225
# Create lists to store the value of each gating variable at each time-step dt
n = [0] * len(time)
m = [0] * len(time)
h = [0] * len(time)
# Set the initial value at time t = t0 to the initial values
n[0] = n_initial
m[0] = m_initial
h[0] = h_initial
# Initial membrane current
im_initial = GL*(V[0]-EL)+GK*np.power(n[0],4)*(V[0]-EK)+GNa*np.power(m[0],3)*h[0]*(V[0]-ENa)
# Create list to store value of membrane current at each time-step dt
im = [0] * len(time)
# Set the initial value at time t = t0 to the initial value im_initial
im[0] = im_initial
# Function: compute_dn
def compute_dn(v, n):
alpha_n = (0.01*(v + 55))/(1 - np.exp(-0.1*(v+55)))
beta_n = 0.125*np.exp(-0.0125*(v+65))
n_inf = alpha_n/(alpha_n + beta_n)
tau_n = 1/(alpha_n + beta_n)
dn = (dt/tau_n)*(n_inf - n)
return dn
# Function: compute_dm
def compute_dm(v, m):
alpha_m = (0.1*(v + 40))/(1 - np.exp(-0.1*(v+40)))
beta_m = 4*np.exp(-0.0556*(v+65))
m_inf = alpha_m/(alpha_m + beta_m)
tau_m = 1/(alpha_m + beta_m)
dm = (dt/tau_m)*(m_inf - m)
return dm
# Function: compute_dh
def compute_dh(v, h):
alpha_h = 0.07*np.exp(-0.05*(v+65))
beta_h = 1/(1 + np.exp(-0.1*(v+35)))
h_inf = alpha_h/(alpha_h + beta_h)
tau_h = 1/(alpha_h + beta_h)
dh = (dt/tau_h)*(h_inf - h)
return dh
def simulate_iaf_neuron(Ie, c):
# Time at which the current is applied - Units ms
start_current = c[0]
# Time at which the current is switched off - Units ms
end_current = c[1]
# For each timestep we compute V and store the value
for t in range(len(time)-1):
# For each timestep we compute n, m and h and store the value
dn = compute_dn(V[t], n[t])
n[t+1] = n[t] + dn
dm = compute_dm(V[t], m[t])
m[t+1] = m[t] + dm
dh = compute_dh(V[t], h[t])
h[t+1] = h[t] + dh
# If time t is between start_current and end_current, switch Injected Current ON
if time[t] >= start_current and time[t] <= end_current:
ie = Ie
# Otherwise, switch Injected Current OFF
else:
ie = 0
# Use these values to compute the updated membrane current
im[t+1] = GL*(V[t]-EL)+GK*np.power(n[t+1],4)*(V[t]-EK)+GNa*np.power(m[t+1],3)*h[t+1]*(V[t]-ENa)
# Using Euler's Method for Numerical Integration (See Chapter Text)
# we compute the change in voltage dV as follows (using the model equation)
dV = dt*(-im[t+1] + ie/A)
# Store this new value into our list
V[t+1] = V[t] + dV
return [V, im, n, m, h, time]
def plot_iaf_neuron(V, im, n, m, h, time):
# Data
x = list(time)
im = [x / 100 for x in im]
# Plot data
fig = make_subplots(
rows=3, cols=1, shared_xaxes = True, vertical_spacing=0.1,
subplot_titles=('V over Time', 'i_m over Time', 'n, m, h over Time')
)
# Add traces
fig.add_trace(go.Scatter(name='V', x=x, y=V), row=1, col=1)
fig.add_trace(go.Scatter(name='i_m', x=x, y=im), row=2, col=1)
fig.add_trace(go.Scatter(name='n', x=x, y=n), row=3, col=1)
fig.add_trace(go.Scatter(name='m', x=x, y=m), row=3, col=1)
fig.add_trace(go.Scatter(name='h', x=x, y=h), row=3, col=1)
# Update xaxis properties
fig.update_xaxes(title_text="Time t (ms)", row=3, col=1)
# Update yaxis properties
fig.update_yaxes(title_text="Membrane Potential V (mV)", row=1, col=1)
fig.update_yaxes(title_text="Current i_m (microA / mm^2)", row=2, col=1)
fig.update_yaxes(title_text="n, m, h (Probability)",range=[0,1], row=3, col=1)
# Update title and size
fig.update_layout(height=800, width=700,
title_text='Hodgkin-Huxley Model Neuron',
showlegend = True)
# Update theme
fig.layout.template = 'plotly_dark'
# Show figure
fig.show()
style = {'description_width':'auto'}
@widgets.interact(
Ie = widgets.FloatSlider(
value=1.75,
min=0.00,
max=5.00,
step=0.05,
description='Ie',
style = style,
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='1.2f'
),
c = widgets.FloatRangeSlider(
value=[5.00, 10.00],
min=1.00,
max=15.00,
step=0.10,
description='Ie: On/Off',
style = style,
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='1.2f'
)
)
def compute_iaf_neuron(Ie =1.75, c = [5.00, 10.00]):
[V, im, n, m, h, time] = simulate_iaf_neuron(Ie, c)
plot_iaf_neuron(V, im, n, m, h, time)
```
# Optical Character Recognition
![A robot reading a newspaper](./images/ocr.jpg)
A common computer vision challenge is to detect and interpret text in an image. This kind of processing is often referred to as *optical character recognition* (OCR).
## Use the Computer Vision Service to Read Text in an Image
The **Computer Vision** cognitive service provides support for OCR tasks, including:
- An **OCR** API that you can use to read text in multiple languages. This API can be used synchronously, and works well when you need to detect and read a small amount of text in an image.
- A **Read** API that is optimized for larger documents. This API is used asynchronously, and can be used for both printed and handwritten text.
You can use this service by creating either a **Computer Vision** resource or a **Cognitive Services** resource.
If you haven't already done so, create a **Cognitive Services** resource in your Azure subscription.
> **Note**: If you already have a Cognitive Services resource, just open its **Quick start** page in the Azure portal and copy its key and endpoint to the cell below. Otherwise, follow the steps below to create one.
1. In another browser tab, open the Azure portal (https://portal.azure.com), signing in with your Microsoft account.
2. Click the **+Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
    - **Subscription**: *Your Azure subscription*.
    - **Resource group**: *Select or create a resource group with a unique name*.
    - **Region**: *Choose any available region*.
    - **Name**: *Enter a unique name*.
    - **Pricing tier**: S0
    - **I confirm I have read and understood the notices**: Selected.
3. Wait for deployment to complete. Then go to your Cognitive Services resource, and on the **Overview** page, click the link to manage the keys for the service. You will need the endpoint and keys to connect to your Cognitive Services resource from client applications.
### Get the Key and Endpoint for your Cognitive Services resource
To use your Cognitive Services resource, client applications need its endpoint and authentication key:
1. In the Azure portal, on the **Keys and Endpoint** page for your Cognitive Services resource, copy the **Key1** for your resource and paste it in the code below, replacing **YOUR_COG_KEY**.
2. Copy the **endpoint** for your resource and paste it in the code below, replacing **YOUR_COG_ENDPOINT**.
3. Run the code in the cell below by clicking its **Run cell** (▷) button (to the left of the cell).
```
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'
print('Ready to use cognitive services at {} using key {}'.format(cog_endpoint, cog_key))
```
Now that you've set up the key and endpoint, you can use your Computer Vision service resource to extract text from an image.
Let's start with the **OCR** API, which enables you to synchronously analyze an image and read any text it contains. In this case, you have an advertising image for the fictional Northwind Traders retail company, which includes some text. Run the cell below to read it.
```
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import os
%matplotlib inline
# Get a client for the computer vision service
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Read the image file
image_path = os.path.join('data', 'ocr', 'advert.jpg')
image_stream = open(image_path, "rb")
# Use the Computer Vision service to find text in the image
read_results = computervision_client.recognize_printed_text_in_stream(image_stream)
# Process the text line by line
for region in read_results.regions:
for line in region.lines:
# Read the words in the line of text
line_text = ''
for word in line.words:
line_text += word.text + ' '
print(line_text.rstrip())
# Open image to display it.
fig = plt.figure(figsize=(7, 7))
img = Image.open(image_path)
draw = ImageDraw.Draw(img)
plt.axis('off')
plt.imshow(img)
```
The text found in the image is organized into a hierarchical structure of regions, lines, and words, and the code reads these to retrieve the results.
In the results, view the text that was read from the image.
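To make the regions → lines → words traversal concrete without calling the service, here is a standalone sketch over a mocked result object. The mock only imitates the attribute shape used by the code above; the text content is made up:

```python
from types import SimpleNamespace as NS

# A simplified mock of the OCR result shape: regions -> lines -> words
mock_results = NS(regions=[
    NS(lines=[
        NS(words=[NS(text='NORTHWIND'), NS(text='TRADERS')]),
        NS(words=[NS(text='EVERYTHING'), NS(text='YOU'), NS(text='NEED')]),
    ])
])

# Same traversal pattern as the cell above: join the words of each line
for region in mock_results.regions:
    for line in region.lines:
        line_text = ' '.join(word.text for word in line.words)
        print(line_text)
```

The real `read_results` object returned by the service follows this same nesting, with bounding-box data attached at each level.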
## Show Bounding Boxes
The results also include *bounding box* coordinates for the lines of text and for the individual words found in the image. Run the cell below to see the bounding boxes for the lines of text in the advertising image you retrieved above.
```
# Open image to display it.
fig = plt.figure(figsize=(7, 7))
img = Image.open(image_path)
draw = ImageDraw.Draw(img)
# Process the text line by line
for region in read_results.regions:
for line in region.lines:
# Show the position of the line of text
l,t,w,h = list(map(int, line.bounding_box.split(',')))
draw.rectangle(((l,t), (l+w, t+h)), outline='magenta', width=5)
# Read the words in the line of text
line_text = ''
for word in line.words:
line_text += word.text + ' '
print(line_text.rstrip())
# Show the image with the text locations highlighted
plt.axis('off')
plt.imshow(img)
```
In the results, the bounding box for each line of text is shown as a rectangle on the image.
## Use the Read API
The OCR API you used previously works well for images with a small amount of text. When you need to read larger bodies of text, such as scanned documents, you can use the **Read** API. This requires a multi-step process:
1. Submit an image to the Computer Vision service to be read and analyzed asynchronously.
2. Wait for the analysis operation to complete.
3. Retrieve the results of the analysis.
Run the following cell to use this process to read the text in a scanned letter to the manager of a Northwind Traders store.
```
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image
import time
import os
%matplotlib inline
# Read the image file
image_path = os.path.join('data', 'ocr', 'letter.jpg')
image_stream = open(image_path, "rb")
# Get a client for the computer vision service
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Submit a request to read printed text in the image and get the operation ID
read_operation = computervision_client.read_in_stream(image_stream,
raw=True)
operation_location = read_operation.headers["Operation-Location"]
operation_id = operation_location.split("/")[-1]
# Wait for the asynchronous operation to complete
while True:
read_results = computervision_client.get_read_result(operation_id)
if read_results.status not in [OperationStatusCodes.running]:
break
time.sleep(1)
# If the operation was successful, process the text line by line
if read_results.status == OperationStatusCodes.succeeded:
for result in read_results.analyze_result.read_results:
for line in result.lines:
print(line.text)
# Open image and display it.
print('\n')
fig = plt.figure(figsize=(12,12))
img = Image.open(image_path)
plt.axis('off')
plt.imshow(img)
```
Review the results. There is a full transcription of the letter, which consists mostly of printed text with a handwritten signature. The original image of the letter is shown below the OCR results (you may need to scroll down to see it).
## Reading Handwritten Text
In the previous example, the request to analyze the image specified a text recognition mode that optimized the operation for *printed* text. Notice that despite this, the handwritten signature was still read.
This ability to read handwritten text is extremely useful. For example, suppose you've written a note containing a shopping list, and you want to use an app on your phone to read the note and transcribe the text it contains.
Run the cell below to see an example of a read operation for a handwritten shopping list.
```
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials
import matplotlib.pyplot as plt
from PIL import Image
import time
import os
%matplotlib inline
# Read the image file
image_path = os.path.join('data', 'ocr', 'note.jpg')
image_stream = open(image_path, "rb")
# Get a client for the computer vision service
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))
# Submit a request to read the text in the image and get the operation ID
read_operation = computervision_client.read_in_stream(image_stream,
raw=True)
operation_location = read_operation.headers["Operation-Location"]
operation_id = operation_location.split("/")[-1]
# Wait for the asynchronous operation to complete
while True:
read_results = computervision_client.get_read_result(operation_id)
if read_results.status not in [OperationStatusCodes.running]:
break
time.sleep(1)
# If the operation was successful, process the text line by line
if read_results.status == OperationStatusCodes.succeeded:
for result in read_results.analyze_result.read_results:
for line in result.lines:
print(line.text)
# Open image and display it.
print('\n')
fig = plt.figure(figsize=(12,12))
img = Image.open(image_path)
plt.axis('off')
plt.imshow(img)
```
## More Information
For more information about using the Computer Vision service for OCR, see the [Computer Vision documentation](https://docs.microsoft.com/ko-kr/azure/cognitive-services/computer-vision/concept-recognizing-text).
```
#!/usr/bin/python3
#
# This program and the accompanying materials
# are made available under the terms of the Apache License, Version 2.0
# which accompanies this distribution, and is available at
#
# http://www.apache.org/licenses/LICENSE-2.0
"""
This Jupyter notebook is used to display the intersections
between various circles.
Developed during Covid-19 lockdown N°3 for the 10 km travel limitation.
Requires the following packages:
- json: to open/read JSON files
- ipyleaflet: to display on a map
- ipywidgets: for map configuration
- shapely: for polygon operations
- geog: to compute a polygon from a center and radius
- numpy: required by geog
Input file: points.geojson
- GeoJSON file storing a list of points (the center of each zone)
Output file: zone.geojson
- GeoJSON file storing the intersection zone as a Polygon
Hard-coded parameters:
- Radius: 10000 m
- Number of points created for the circle: 32 points
- Opacity values used to display on the map
import json
from ipyleaflet import GeoJSON, Map
from ipywidgets import Layout
from shapely.geometry import Polygon
import numpy as np
import geog
import shapely
def create_circle(lat, lon, radius, nb_points):
"""
Create a circle from a point in WGS84 coordinate with
lat, lon: coordinates for the center
radius: radius in m
nb_points: number of points for the circle
"""
center = shapely.geometry.Point([lon, lat])
angles = np.linspace(0, 360, nb_points)
polygon = geog.propagate(center, angles, radius)
return polygon
def display_circle(the_map, the_circle, name, options):
"""
Display a circle on the map
the_map: Ipleaflet Map
the_circle: circle as a shapely Polygon
name: name associated with the circle
options: options to display circle
"""
geo_circle = {
"type": "Feature",
"properties": {"name": name},
"geometry": shapely.geometry.mapping(shapely.geometry.Polygon(the_circle))}
layer = GeoJSON(data=geo_circle, style={'opacity': options["opacity"],
'fillOpacity': options["fill_opacity"],
'weight': options["weight"]})
the_map.add_layer(layer)
def create_polygons(centers):
"""
Create a list of shapely Polygon
centers: list of points in a GeoJSON structure
"""
polygon_circles = []
for center in centers["features"]:
lat = center["geometry"]["coordinates"][1]
lon = center["geometry"]["coordinates"][0]
polygon_circle = create_circle(lat, lon, 10000, 32)
polygon_circles.append(Polygon(polygon_circle))
return polygon_circles
def create_common_polygon(polygon_circles):
"""
Create a Polygon that is the intersection of all circles
polygon_circles: list of shapely Polygon
"""
common_zone = polygon_circles[0]
for circle in polygon_circles[1:]:
common_zone = circle.intersection(common_zone)
return common_zone
def generate_geojson_file(polygon, precision):
"""
Generate a GeoJSON file from a shapely Polygon
polygon: shapely Polygon
precision: number of digits for coordinates precision
"""
geometry = shapely.geometry.mapping(shapely.geometry.Polygon(polygon))
float_format = "{0:." + str(precision) + "f}"
points = []
for point in geometry["coordinates"][0]:
lon = float(float_format.format(float(point[0])))
lat = float(float_format.format(float(point[1])))
points.append([lon, lat])
polygon_coords = []
polygon_coords.append(points)
geo = {
"type": "FeatureCollection",
"properties": {"name": "Zone commune"},
"features": [{
"type": "Feature",
"properties": {"name": "Cercle commun"},
"geometry": {"type": "Polygon",
"coordinates": polygon_coords}}]
}
with open("zone.geojson", "w") as geojson_file:
geojson_file.write(json.dumps(geo))
geo_centers = []
with open("points.geojson", "r") as geo_file:
geo_centers = json.load(geo_file)
polygon_circles = create_polygons(geo_centers)
common_polygon = create_common_polygon(polygon_circles)
centroid = common_polygon.centroid
generate_geojson_file(common_polygon, 5)
# Create map centered on centroid
my_map = Map(center=(centroid.coords[0][1], centroid.coords[0][0]),
zoom=11,
layout=Layout(width='1200px', height='800px'))
# Display circles on the map
for circle in polygon_circles:
display_circle(my_map, circle, "", {"opacity": 0.1, "fill_opacity": 0.1, "weight": 2})
# Display common zone on the map
display_circle(my_map, common_polygon, "Zone commune",
{"opacity": 1.0, "fill_opacity": 0.5, "weight": 5})
# Display centers on the map
my_map.add_layer(GeoJSON(data=geo_centers))
# Display map
my_map
```
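For readers without the `geog` package, the circle construction can be approximated with a small standalone sketch. It uses an equirectangular approximation (valid when the radius is much smaller than Earth's radius); the `approx_circle` helper below is illustrative, not the notebook's implementation:

```python
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius in meters

def approx_circle(lat, lon, radius, nb_points):
    """Approximate a geodesic circle as nb_points [lon, lat] pairs."""
    points = []
    for k in range(nb_points):
        angle = 2 * math.pi * k / nb_points
        # meters -> degrees; longitude is scaled by cos(latitude)
        dlat = (radius * math.cos(angle)) / EARTH_RADIUS * 180 / math.pi
        dlon = (radius * math.sin(angle)) / (
            EARTH_RADIUS * math.cos(math.radians(lat))) * 180 / math.pi
        points.append([lon + dlon, lat + dlat])
    return points

# Example: a 10 km circle around an arbitrary point
circle = approx_circle(48.85, 2.35, 10000, 32)
print(len(circle))  # → 32
```

The resulting list of `[lon, lat]` pairs can be passed to `shapely.geometry.Polygon` just like the output of `geog.propagate`.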
### Recap of Lesson 3 (theory)
- definition of a function
- boolean expressions
- if elif else
#### Comparing floating-point numbers
The number of bits devoted to real numbers is finite, so there is an approximation,
and `==` is often not appropriate.
```
from math import *
sqrt(2)**2==2
sqrt(2)
sqrt(2)**2
epsilon = 1e-15
abs(sqrt(2)**2-2) < epsilon
import sys
sys.float_info
1.8e+308
```
## Iterations
Repetition of instructions. The program is no longer purely linear. There are two iterative constructs: the **for** (number of iterations known in advance) and the **while** (number of iterations not known in advance). Each can simulate the other, but both exist so you can use whichever best suits what you need to do.
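A small side-by-side sketch (an addition, not from the lesson) of the same computation, the sum 1+2+3+4+5, written with both constructs:

```python
# Sum 1..5 with a for loop (number of iterations known in advance)
total_for = 0
for i in range(1, 6):
    total_for = total_for + i

# The same sum with a while loop (we manage the counter ourselves)
total_while = 0
i = 1
while i <= 5:
    total_while = total_while + i
    i = i + 1

print(total_for, total_while)  # → 15 15
```

The for version is shorter because the loop itself advances `i`; in the while version, forgetting `i = i + 1` would produce an infinite loop.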
### Reassignment
```
x = 5
x = 7
```
### Updating
```
x = 0
x = x + 1
```
### While
**while** control_condition:
executed statements
```
x=11
while x !=10: # control condition (note: starting from x=11 it never becomes false!)
x=x+1 # statement that modifies the while condition
print(x)
import random
x=0
while x !=2: # control condition
x=random.randint(1,10) # statement that modifies the while condition
print(x,end="")
```
The order of the statements matters!
```
import random
x=0
while x !=2:
print(x,end="") # with this order the last number drawn is no longer printed
x=random.randint(1,10)
```
```
import random
x=0
i=0
while x !=2:
x=random.randint(1,4)
print(x,' ', end="")
i=i+1
print ("\n")
print ('i =',i)
```
Watch out for infinite loops. See the example above: with x starting at 11, the condition x != 10 never becomes false.
Beware of how `and` short-circuits:
```
s=''
False and s[0]
```
It only evaluates the first operand and returns False, so s[0] is never evaluated (which would fail); indeed, if we swap the operands...
```
s=''
s[0] and False
```
We can terminate the while loop with input from the user
```
somma = 0
ninput = int(input('Enter a number (0 to stop): '))
while ninput != 0:
somma = somma + ninput
print('sum:', somma)
ninput = int(input('Enter a number (0 to stop): '))
# program that sums however many numbers the user enters
somma
s=0
s=s+4 # zero is the identity element for addition
s=s+5
s=s+8
```
Stare attenti a fare condizioni robuste e controllare che ci sia dentro al ciclo l'istruzione che rende prima o poi falsa la condizione.
OSSERVAZIONE: i booleani non si valutano.
```
x = True
if x: # the variable x is already a boolean in itself; there is no need to compare it
print(x)
```
##### Do NOT write: if x == True
```
somma = 0
nnumeri = 0 # counts how many times the loop runs
ninput = int(input('Enter a number (0 to stop): '))
while ninput != 0:
somma = somma + ninput
nnumeri = nnumeri + 1
print('sum:', somma)
ninput = int(input('Enter a number (0 to stop): '))
print('sum = ', somma)
print('you summed', nnumeri, 'numbers')
# program that sums the numbers entered by the user
# here we also count how many numbers were entered
```
We can give the accumulator variable an initial value that is consistent with the situation right from the start. This can be done only if we are sure the first input is valid.
```
ninput = int(input('Enter a number (0 to stop): '))
somma = ninput
while ninput != 0:
    print('sum:', somma)
    ninput = int(input('Enter a number (0 to stop): '))
    somma = somma + ninput
```
NOTE: we swapped the order of the print and the addition, because the first addition is already done outside the loop!
Think about the difference!
```
x = False
while not x:
    print('ciao')
    x = input('Done? (yes/no)').lower() == 'yes'
# a loop that keeps running until the user enters the word "yes", i.e. until
# the variable finito becomes True
finito = False
while not finito: ### see how clear the meaning is?
    print('ciao')
    finito = input('Done? (yes/no)').lower() == 'yes'
```
#### Scanning a string
```
stringa = 'disegno'
i = 0
while i < len(stringa):
    print(stringa[i])
    i = i+1
```
## For
**for** i **in** sequence_of_elements:
    statements to execute
### in
The word `in` is an operator that compares two strings and returns True if the first is a substring of the second.
```
'4' in '+748'
s = '+748'
s[0] == '+' or s[0] == '-'
s[0] in '+-'
for i in 'disegno':
    print(i)
for i in [1,45,78]:
    print(i)
```
We can iterate over any kind of collection of objects, any sequence of elements (list, string, file, more complex objects); this is impossible in Java!
```
for i in [1,'ciao',4/5]:
    print(i)
```
It is not clean, since Python is not a statically typed language, but it is extremely convenient.
```
# find the "largest" character of the alphabet in a string
s = 'ciao'
cmax = s[0]
for c in s[1:]:
    if c >= cmax:
        cmax = c
print(cmax)
# find the "largest" character of the alphabet in a string
# iterating over the characters (the elements) of the string
# find the position of the "largest" character of the alphabet in a string
# iterating over the positions (the indices) of the string
s = 'ciao'
imax = 0
for i in [1,2,3]:
    if s[i] >= s[imax]:
        imax = i
print(imax)
for x in range(4):
    print(x)
type(range(4))
list(range(4))
list(range(3,5))
s = 'ciaoo'
for i in range(len(s)):
    print(s[i])
# Exercise: countdown with a for loop
for i in range(10):
    print(10-i)
```
## Recursion
We have seen that it is perfectly normal for a function to call another one, but a function is also allowed to call itself.
**def** Ricorsione():
    Ricorsione()
```
def contoallarovesciaRic(n):
    if n <= 0:
        print('Via!')
    else:
        print(n)
        contoallarovesciaRic(n-1)
contoallarovesciaRic(10)
def contoallarovesciaWhile(n):
    while n > 0:
        print(n)
        n = n-1
    print('Via!')
contoallarovesciaWhile(4)
def contoallarovesciaFor(n):
    for i in range(n):
        print(n - i)
    print('Via!')
contoallarovesciaFor(4)
# Exercise: Fibonacci
# F(1)=1,
# F(2)=1,
# F(n)=F(n-1)+F(n-2)
#h = int(input("enter the height of the triangle: "))
# prints a right triangle
h = 5
for i in range(h):
    print('-'*(i+1), end='')
    print(' '*(h-i-1))
# prints a square
#l = int(input('enter the side of the square: '))
l = 5
for i in range(l):
    if i == 0 or i == l-1:
        print('* '*l)
    else:
        print('* '+' '*(l-2)+'*')
# prints a snake
l = int(input('path height: '))
for i in range(l):
    print('-'*i+'**'+'-'*(l+1-2-i))
```
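The Fibonacci exercise above asks for F(n) = F(n-1) + F(n-2) with F(1) = F(2) = 1; one possible iterative solution (our sketch, not part of the original lesson):

```python
def fib(n):
    # F(1) = F(2) = 1; each later term is the sum of the previous two
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```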
# The Atoms of Computation
Programming a quantum computer is now something that anyone can do in the comfort of their own home.
But what to create? What is a quantum program anyway? In fact, what is a quantum computer?
These questions can be answered by making comparisons to standard digital computers. Unfortunately, most people don’t actually understand how digital computers work either. In this article, we’ll look at the basic principles behind these devices. To help us transition over to quantum computing later on, we’ll do it using the same tools as we'll use for quantum.
## Contents
1. [Splitting information into bits](#bits)
2. [Computation as a Diagram](#diagram)
3. [Your First Quantum Circuit](#first-circuit)
4. [Example: Adder Circuit](#adder)
4.1 [Encoding an Input](#encoding)
4.2 [Remembering how to Add](#remembering-add)
4.3 [Adding with Qiskit](#adding-qiskit)
Below is some Python code we'll need to run if we want to use the code in this page:
```
from qiskit import QuantumCircuit, assemble, Aer
from qiskit.visualization import plot_histogram
```
## 1. Splitting information into bits <a id="bits"></a>
The first thing we need to know about is the idea of bits. These are designed to be the world’s simplest alphabet. With only two characters, 0 and 1, we can represent any piece of information.
One example is numbers. You are probably used to representing a number through a string of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this string of digits, each digit represents how many times the number contains a certain power of ten. For example, when we write 9213, we mean
$$ 9000 + 200 + 10 + 3 $$
or, expressed in a way that emphasizes the powers of ten
$$ (9\times10^3) + (2\times10^2) + (1\times10^1) + (3\times10^0) $$
Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 9213 becomes 10001111111101, since
$$ 9213 = (1 \times 2^{13}) + (0 \times 2^{12}) + (0 \times 2^{11})+ (0 \times 2^{10}) +(1 \times 2^9) + (1 \times 2^8) + (1 \times 2^7) \\\\ \,\,\, + (1 \times 2^6) + (1 \times 2^5) + (1 \times 2^4) + (1 \times 2^3) + (1 \times 2^2) + (0 \times 2^1) + (1 \times 2^0) $$
In this we are expressing numbers as multiples of 2, 4, 8, 16, 32, etc. instead of 10, 100, 1000, etc.
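Python's built-in `bin` and `int` can confirm this expansion (a quick check we added; note that `bin` prefixes its result with '0b'):

```python
n = 9213
print(bin(n))                     # '0b10001111111101'
print(int('10001111111101', 2))   # back to 9213
```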
<a id="binary_widget"></a>
```
from qiskit_textbook.widgets import binary_widget
binary_widget(nbits=5)
```
These strings of bits, known as binary strings, can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number, or punctuation mark you want to use, you can find a corresponding string of at most eight bits using [this table](https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/com.ibm.aix.networkcomm/conversion_table.htm). Though these are quite arbitrary, this is a widely agreed-upon standard. In fact, it's what was used to transmit this article to you through the internet.
This is how all information is represented in computers. Whether numbers, letters, images, or sound, it all exists in the form of binary strings.
Like our standard digital computers, quantum computers are based on this same basic idea. The main difference is that they use *qubits*, an extension of the bit to quantum mechanics. In the rest of this textbook, we will explore what qubits are, what they can do, and how they do it. In this section, however, we are not talking about quantum at all. So, we just use qubits as if they were bits.
### Quick Exercises
1. Think of a number and try to write it down in binary.
2. If you have $n$ bits, how many different states can they be in?
## 2. Computation as a diagram <a id="diagram"></a>
Whether we are using qubits or bits, we need to manipulate them in order to turn the inputs we have into the outputs we need. For the simplest programs with very few bits, it is useful to represent this process in a diagram known as a *circuit diagram*. These have inputs on the left, outputs on the right, and operations represented by arcane symbols in between. These operations are called 'gates', mostly for historical reasons.
Here's an example of what a circuit looks like for standard, bit-based computers. You aren't expected to understand what it does. It should simply give you an idea of what these circuits look like.

For quantum computers, we use the same basic idea but have different conventions for how to represent inputs, outputs, and the symbols used for operations. Here is the quantum circuit that represents the same process as above.

In the rest of this section, we will explain how to build circuits. At the end, you'll know how to create the circuit above, what it does, and why it is useful.
## 3. Your first quantum circuit <a id="first-circuit"></a>
In a circuit, we typically need to do three jobs: First, encode the input, then do some actual computation, and finally extract an output. For your first quantum circuit, we'll focus on the last of these jobs. We start by creating a circuit with eight qubits and eight outputs.
```
n = 8
n_q = n
n_b = n
qc_output = QuantumCircuit(n_q,n_b)
```
This circuit, which we have called `qc_output`, is created by Qiskit using `QuantumCircuit`. The number `n_q` defines the number of qubits in the circuit. With `n_b` we define the number of output bits we will extract from the circuit at the end.
The extraction of outputs in a quantum circuit is done using an operation called `measure`. Each measurement tells a specific qubit to give an output to a specific output bit. The following code adds a `measure` operation to each of our eight qubits. The qubits and bits are both labelled by the numbers from 0 to 7 (because that’s how programmers like to do things). The command `qc_output.measure(j,j)` adds a measurement to our circuit `qc_output` that tells qubit `j` to write an output to bit `j`.
```
for j in range(n):
qc_output.measure(j,j)
```
Now that our circuit has something in it, let's take a look at it.
```
qc_output.draw()
```
Qubits are always initialized to give the output ```0```. Since we don't do anything to our qubits in the circuit above, this is exactly the result we'll get when we measure them. We can see this by running the circuit many times and plotting the results in a histogram. We will find that the result is always ```00000000```: a ```0``` from each qubit.
```
sim = Aer.get_backend('aer_simulator') # this is the simulator we'll use
qobj = assemble(qc_output) # this turns the circuit into an object our backend can run
result = sim.run(qobj).result() # we run the experiment and get the result from that experiment
# from the results, we get a dictionary containing the number of times (counts)
# each result appeared
counts = result.get_counts()
# and display it on a histogram
plot_histogram(counts)
```
The reason for running many times and showing the result as a histogram is because quantum computers may have some randomness in their results. In this case, since we aren’t doing anything quantum, we get just the ```00000000``` result with certainty.
Note that this result comes from a quantum simulator, which is a standard computer calculating what an ideal quantum computer would do. Simulations are only possible for small numbers of qubits (~30 qubits), but they are nevertheless a very useful tool when designing your first quantum circuits. To run on a real device you simply need to replace ```Aer.get_backend('aer_simulator')``` with the backend object of the device you want to use.
## 4. Example: Creating an Adder Circuit <a id="adder"></a>
### 4.1 Encoding an input <a id="encoding"></a>
Now let's look at how to encode a different binary string as an input. For this, we need what is known as a NOT gate. This is the most basic operation that you can do in a computer. It simply flips the bit value: ```0``` becomes ```1``` and ```1``` becomes ```0```. For qubits, it is an operation called ```x``` that does the job of the NOT.
Below we create a new circuit dedicated to the job of encoding and call it `qc_encode`. For now, we only specify the number of qubits.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(7)
qc_encode.draw()
```
Extracting results can be done using the circuit we have from before: `qc_output`. Adding the two circuits using `qc_encode + qc_output` creates a new circuit with everything needed to extract an output added at the end.
```
qc = qc_encode + qc_output
qc.draw()
```
Now we can run the combined circuit and look at the results.
```
qobj = assemble(qc)
counts = sim.run(qobj).result().get_counts()
plot_histogram(counts)
```
Now our computer outputs the string ```10000000``` instead.
The bit we flipped, which comes from qubit 7, lives on the far left of the string. This is because Qiskit numbers the bits in a string from right to left. Some prefer to number their bits the other way around, but Qiskit's system certainly has its advantages when we are using the bits to represent numbers. Specifically, it means that qubit 7 is telling us about how many $2^7$s we have in our number. So by flipping this bit, we’ve now written the number 128 in our simple 8-bit computer.
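You can verify the right-to-left convention with plain Python (an illustrative check, not part of the notebook):

```python
bitstring = '10000000'     # the leftmost character reports qubit 7
print(int(bitstring, 2))   # 128, i.e. 2**7
```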
Now try out writing another number for yourself. You could do your age, for example. Just use a search engine to find out what the number looks like in binary (if it includes a ‘0b’, just ignore it), and then add some 0s to the left side if you are younger than 64.
```
qc_encode = QuantumCircuit(n)
qc_encode.x(1)
qc_encode.x(5)
qc_encode.draw()
```
Now we know how to encode information in a computer. The next step is to process it: To take an input that we have encoded, and turn it into an output that we need.
### 4.2 Remembering how to add <a id="remembering-add"></a>
To look at turning inputs into outputs, we need a problem to solve. Let’s do some basic maths. In primary school, you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving the following?
```
9213
+ 1854
= ????
```
One way is to do it digit by digit, from right to left. So we start with 3+4
```
9213
+ 1854
= ???7
```
And then 1+5
```
9213
+ 1854
= ??67
```
Then we have 2+8=10. Since this is a two digit answer, we need to carry the one over to the next column.
```
9213
+ 1854
= ?067
¹
```
Finally we have 9+1+1=11, and get our answer
```
9213
+ 1854
= 11067
¹
```
This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always break big tasks down into small and simple steps.
To run on a computer, algorithms need to be compiled down to the smallest and simplest steps possible. To see what these look like, let’s do the above addition problem again but in binary.
```
10001111111101
+ 00011100111110
= ??????????????
```
Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length.
Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1. We get the same result for the 0+1 of the second column.
```
10001111111101
+ 00011100111110
= ????????????11
```
Next, we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written ```10```, and so requires two bits. This means that we need to carry the 1, just as we would for the number 10 in decimal.
```
10001111111101
+ 00011100111110
= ???????????011
¹
```
The next column now requires us to calculate ```1+1+1```. This means adding three numbers together, so things are getting complicated for our computer. But we can still compile it down to simpler operations, and do it in a way that only ever requires us to add two bits together. For this, we can start with just the first two 1s.
```
1
+ 1
= 10
```
Now we need to add this ```10``` to the final ```1``` , which can be done using our usual method of going through the columns.
```
10
+ 01
= 11
```
The final answer is ```11``` (also known as 3).
Now we can get back to the rest of the problem. With the answer of ```11```, we have another carry bit.
```
10001111111101
+ 00011100111110
= ??????????1011
¹¹
```
So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal.
In fact, everything left so far is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent).
```
0+0 = 00 (in decimal, this is 0+0=0)
0+1 = 01 (in decimal, this is 0+1=1)
1+0 = 01 (in decimal, this is 1+0=1)
1+1 = 10 (in decimal, this is 1+1=2)
```
This is called a *half adder*. If our computer can implement this, and if it can chain many of them together, it can add anything.
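In classical terms, these four sums say that the rightmost (sum) bit is the XOR of the inputs and the leftmost (carry) bit is their AND; a minimal Python sketch of the half adder (our illustration, mirroring the quantum circuit built below):

```python
def half_adder(a, b):
    """Return (carry, sum) for two input bits."""
    return a & b, a ^ b

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a}+{b} = {carry}{s}")
```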
### 4.3 Adding with Qiskit <a id="adding-qiskit"></a>
Let's make our own half adder using Qiskit. This will include a part of the circuit that encodes the input, a part that executes the algorithm, and a part that extracts the result. The first part will need to be changed whenever we want to use a new input, but the rest will always remain the same.

The two bits we want to add are encoded in the qubits 0 and 1. The above example encodes a ```1``` in both these qubits, and so it seeks to find the solution of ```1+1```. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle.
The dashed lines in the image are just to distinguish the different parts of the circuit (although they can have more interesting uses too). They are made by using the `barrier` command.
The basic operations of computing are known as logic gates. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. Since we want the computer to do the actual computing for us, we’ll need some more powerful gates.
To see what we need, let’s take another look at what our half adder needs to do.
```
0+0 = 00
0+1 = 01
1+0 = 01
1+1 = 10
```
The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for ```0+0``` and ```1+1```, where the two bits are equal, the rightmost bit of the answer comes out ```0```. For ```0+1``` and ```1+0```, where we are adding different bit values, the rightmost bit is ```1```.
To get this part of our solution correct, we need something that can figure out whether two bits are different or not. Traditionally, in the study of digital computation, this is called an XOR gate.
| Input 1 | Input 2 | XOR Output |
|:-------:|:-------:|:------:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In quantum computers, the job of the XOR gate is done by the controlled-NOT gate. Since that's quite a long name, we usually just call it the CNOT. In Qiskit its name is ```cx```, which is even shorter. In circuit diagrams, it is drawn as in the image below.
```
qc_cnot = QuantumCircuit(2)
qc_cnot.cx(0,1)
qc_cnot.draw()
```
This is applied to a pair of qubits. One acts as the *control qubit* (this is the one with the little dot). The other acts as the *target qubit* (with the big circle).
There are multiple ways to explain the effect of the CNOT. One is to say that it looks at its two input bits to see whether they are the same or different. Next, it overwrites the target qubit with the answer. The target becomes ```0``` if they are the same, and ```1``` if they are different.
<img src="images/cnot_xor.svg">
Another way of explaining the CNOT is to say that it does a NOT on the target if the control is ```1```, and does nothing otherwise. This explanation is just as valid as the previous one (in fact, it’s the one that gives the gate its name).
Try the CNOT out for yourself by trying each of the possible inputs. For example, here's a circuit that tests the CNOT with the input ```01```.
```
qc = QuantumCircuit(2,2)
qc.x(0)
qc.cx(0,1)
qc.measure(0,0)
qc.measure(1,1)
qc.draw()
```
If you execute this circuit, you’ll find that the output is ```11```. We can think of this happening because of either of the following reasons.
- The CNOT calculates whether the input values are different and finds that they are, which means that it wants to output ```1```. It does this by writing over the state of qubit 1 (which, remember, is on the left of the bit string), turning ```01``` into ```11```.
- The CNOT sees that qubit 0 is in state ```1```, and so applies a NOT to qubit 1. This flips the ```0``` of qubit 1 into a ```1```, and so turns ```01``` into ```11```.
Here is a table showing all the possible inputs and corresponding outputs of the CNOT gate:
| Input (q1 q0) | Output (q1 q0) |
|:-------------:|:--------------:|
| 00 | 00 |
| 01 | 11 |
| 10 | 10 |
| 11 | 01 |
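This table can be emulated classically: the control passes through unchanged and is XORed into the target (a plain-Python illustration we added; the function takes the control and target bits as arguments):

```python
def cnot(control, target):
    # the control is unchanged; the target flips when the control is 1
    return control, target ^ control

# Qiskit-style 'q1 q0' strings: the control is q0 (right), the target is q1 (left)
for q1, q0 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    c, t = cnot(q0, q1)
    print(f"{q1}{q0} -> {t}{c}")
```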
For our half adder, we don’t want to overwrite one of our inputs. Instead, we want to write the result on a different pair of qubits. For this, we can use two CNOTs.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1)
qc_ha.draw()
```
We are now halfway to a fully working half adder. We just have the other bit of the output left to do: the one that will live on qubit 3.
If you look again at the four possible sums, you’ll notice that there is only one case for which this is ```1``` instead of ```0```: ```1+1```=```10```. It happens only when both the bits we are adding are ```1```.
To calculate this part of the output, we could just get our computer to look at whether both of the inputs are ```1```. If they are — and only if they are — we need to do a NOT gate on qubit 3. That will flip it to the required value of ```1``` for this case only, giving us the output we need.
For this, we need a new gate: like a CNOT but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when both controls are in state ```1```. This new gate is called the *Toffoli*. For those of you who are familiar with Boolean logic gates, it is basically an AND gate.
In Qiskit, the Toffoli is represented with the `ccx` command.
```
qc_ha = QuantumCircuit(4,2)
# encode inputs in qubits 0 and 1
qc_ha.x(0) # For a=0, remove this line. For a=1, leave it.
qc_ha.x(1) # For b=0, remove this line. For b=1, leave it.
qc_ha.barrier()
# use cnots to write the XOR of the inputs on qubit 2
qc_ha.cx(0,2)
qc_ha.cx(1,2)
# use ccx to write the AND of the inputs on qubit 3
qc_ha.ccx(0,1,3)
qc_ha.barrier()
# extract outputs
qc_ha.measure(2,0) # extract XOR value
qc_ha.measure(3,1) # extract AND value
qc_ha.draw()
```
In this example, we are calculating ```1+1```, because the two input bits are both ```1```. Let's see what we get.
```
qobj = assemble(qc_ha)
counts = sim.run(qobj).result().get_counts()
plot_histogram(counts)
```
The result is ```10```, which is the binary representation of the number 2. We have built a computer that can solve the famous mathematical problem of 1+1!
Now you can try it out with the other three possible inputs, and show that our algorithm gives the right results for those too.
The half adder contains everything you need for addition. With the NOT, CNOT, and Toffoli gates, we can create programs that add any set of numbers of any size.
These three gates are enough to do everything else in computing too. In fact, we can even do without the CNOT. Additionally, the NOT gate is only really needed to create bits with value ```1```. The Toffoli gate is essentially the atom of mathematics. It is the simplest element, from which every other problem-solving technique can be compiled.
As we'll see, in quantum computing we split the atom.
```
import qiskit.tools.jupyter
%qiskit_version_table
```
# Units in Python
```
import numpy as np
```
### Find the position (x) of a rocket moving at a constant velocity (v) after a time (t)
<img src="./images/rocket.png" width="400"/>
```
def find_position(velocity, time):
result = velocity * time
return result
```
### If v = 10 m/s and t = 10 s
```
my_velocity = 10
my_time = 10
find_position(my_velocity, my_time)
```
### No problem, x = 100 m
---
### Now v = 10 mph and t = 10 minutes
```
my_other_velocity = 10
my_other_time = 10
find_position(my_other_velocity, my_other_time)
```
### x = 100 miles minutes / hour ??
---
# The Astropy Units package to the rescue
```
from astropy import units as u
from astropy import constants as const
from astropy.units import imperial
imperial.enable()
```
#### *Note: because we imported the `units` package as `u`, you cannot use **u** as a variable name.*
---
### Add units to values using `u.UNIT` where UNIT is an [Astropy Unit](http://docs.astropy.org/en/stable/units/index.html#module-astropy.units.si)
* To add a UNIT to a VALUE you multiply (*) the VALUE by the UNIT
* You can make compound units like: `u.m / u.s`
```
my_velocity = 10 * (u.m / u.s)
my_time = 10 * u.s
def find_position(velocity, time):
result = velocity * time
return result
find_position(my_velocity, my_time)
```
#### Notice the difference when using imperial units - (`imperial.UNIT`)
```
my_other_velocity = 10.0 * (imperial.mi / u.h)
my_other_time = 10 * u.min
find_position(my_other_velocity, my_other_time)
```
### Notice that the units are a bit strange. We can simplify this using `.decompose()`
* Default to SI units
```
find_position(my_other_velocity, my_other_time).decompose()
```
### I like to put the `.decompose()` in the return of the function:
```
def find_position(velocity, time):
result = velocity * time
return result.decompose()
find_position(my_other_velocity, my_other_time)
```
### Unit conversion is really easy!
```
rocket_position = find_position(my_other_velocity, my_other_time)
rocket_position
rocket_position.to(u.km)
rocket_position.to(imperial.mi)
rocket_position.si # quick conversion to SI units
rocket_position.cgs # quick conversion to CGS units
```
## It is always better to do unit conversions **outside** of functions
### Be careful adding units to something that already has units!
* `velocity` and `time` have units.
* By doing `result * u.km` you are adding another unit
```
def find_position_wrong(velocity, time):
result = velocity * time
return (result * u.km).decompose()
find_position_wrong(my_other_velocity, my_other_time)
```
---
### You do not have to worry about working in different units (**as long as they are the same type**)!
* No conversions needed
* Just make sure you assign units
```
my_velocity, my_other_velocity
my_velocity + my_other_velocity
```
#### Units default to SI units
```
my_time, my_other_time
my_time + my_other_time
```
### You can find the units in `Astropy` that are of the same type with `.find_equivalent_units()`
```
(u.m).find_equivalent_units()
```
---
### Be careful combining quantities with different units!
```
my_velocity + my_time
2 + my_time
```
---
## Dimensionless Units
```
dimless_y = 10 * u.dimensionless_unscaled
dimless_y
dimless_y.unit
dimless_y.decompose() # returns the scale of the dimensionless quantity
```
### Some math functions only make sense with dimensionless quantities
```
np.log(2 * u.m)
np.log(2 * u.dimensionless_unscaled)
```
### Or they expect the correct type of unit!
```
np.sin(2 * u.m)
np.sin(2 * u.deg)
```
## Using units can save you headaches.
* All of the trig functions expect all angles to be in radians.
* If you forget this, it can lead to problems that are hard to debug
$$ \large
\sin(90^{\circ}) + \sin(45^{\circ}) = 1 + \frac{\sqrt{2}}{2} \approx 1.7071
$$
```
np.sin(90) + np.sin(45)
np.sin(90 * u.deg) + np.sin(45 * u.deg)
```
---
## You can define your own units
```
ringo = u.def_unit('Ringos', 3.712 * imperial.yd)
rocket_position.to(ringo)
my_velocity.to(ringo / u.s)
```
#### ...Since `ringo` is self-defined it does not have a `u.` in front of it
### You can access the number and unit part of the Quantity separately:
```
my_velocity.value
my_velocity.unit
```
### This is useful in formatting output:
```
f"The velocity of the first particle is {my_velocity.value:.1f} in the units of {my_velocity.unit:s}."
```
---
# Constants
The `Astropy` package also includes a whole bunch of built-in constants to make your life easier.
* The package is usually imported as `const`
### [Astropy Constants](http://docs.astropy.org/en/stable/constants/index.html#reference-api)
```
const.G
const.M_sun
```
---
### An Example: The velocity of an object in circular orbit around the Sun is
$$\large
v=\sqrt{GM_{\odot}\over d}
$$
### What is the velocity of an object at 1 AU from the Sun?
```
def find_orbit_v(distance):
result = np.sqrt(const.G * const.M_sun / distance)
return result.decompose()
my_distance = 1 * u.AU
orbit_v = find_orbit_v(my_distance)
orbit_v
orbit_v.to(u.km/u.s)
orbit_v.to(ringo/u.ms)
```
### Be careful about the difference between a unit and a constant
```
my_star = 1 * u.solMass
my_star
my_star.unit
const.M_sun
const.M_sun.unit
```
## Last week's homework
$$\large
\textrm{Diameter}\ (\textrm{in km}) = \frac{1329\ \textrm{km}}{\sqrt{\textrm{geometric albedo}}}\ 10^{-0.2\ (\textrm{absolute magnitude})}
$$
```
def find_diameter(ab_mag, albedo):
result = ( (1329 * u.km) / np.sqrt(albedo) ) * (10 ** (-0.2 * ab_mag))
return result.decompose()
my_ab_mag = 3.34
my_albedo = 0.09
asteroid_diameter = find_diameter(my_ab_mag, my_albedo)
asteroid_diameter
```
$$\Large
\mathrm{Mass}\ = \ \rho \cdot \frac{1}{6} \pi D^3
$$
```
def find_mass(diameter, density):
result = density * (1/6) * np.pi * diameter ** 3
return result.decompose()
my_density = 3000 * (u.kg / u.m**3)
find_mass(asteroid_diameter, my_density)
```
#### Notice - as long as `density` has units of mass/length$^3$, and `diameter` has units of length, you do not need to do any conversions.
```
my_other_density = 187 * (imperial.lb / imperial.ft **3)
find_mass(asteroid_diameter, my_other_density)
```
---
# Real world example - [Mars Climate Orbiter](https://en.wikipedia.org/wiki/Mars_Climate_Orbiter)
Aerobraking is a spaceflight maneuver that uses the drag of flying a spacecraft through the (upper) atmosphere of a world to slow the spacecraft and lower its orbit. Aerobraking requires far less fuel than using propulsion to slow down.
On September 8, 1999, Trajectory Correction Maneuver-4 (TCM-4) was computed to place the Mars Climate Orbiter spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km. At this altitude the orbiter would skim through Mars' upper atmosphere, gradually aerobraking for weeks.
The calculation of TCM-4 was done in United States imperial units. The software that calculated the total impulse needed from the thruster firing produced results in pound-force seconds.
### Mars Climate Orbiter
* Mass = 338 kg (745 lbs)
* ΔV needed for TCM-4 = 9.2 m/s (30.2 fps)
* Need to calculate Impulse
### Impulse is a change in momentum
$$ \Large
I = \Delta p = m \, \Delta v
$$
#### Impulse calculated in imperial units:
```
imperial_impulse = (745 * (imperial.lb)) * (30.2 * (imperial.ft / u.s))
imperial_impulse.to(imperial.lbf * u.s)
```
The computed impulse value was then sent to the spacecraft and used to fire the thruster on September 15, 1999. The computer that fired the thruster expected the impulse to be in SI units (newton-seconds), as required by NASA's Software Interface Specification (SIS).
#### $\Delta$v that would result from an impulse of 669.3 (N · s) for M = 338 kg:
```
my_deltav = (669.3 * (u.N * u.s)) / (338 * (u.kg))
my_deltav.decompose()
```
This $\Delta$v was way too small! At this speed the spacecraft's trajectory would have taken it within 57 km (35 miles) of the surface. At this altitude, the spacecraft would likely have skipped violently off the denser-than-expected atmosphere, and it either was destroyed in the atmosphere or re-entered heliocentric space.
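The mix-up can be reproduced without astropy. Using the 669.3 figure from above and the standard conversion 1 lbf·s = 4.4482216 N·s (this is a sketch we added, not the notebook's own code):

```python
LBF_S_TO_N_S = 4.4482216     # standard conversion: 1 lbf*s in N*s
mass_kg = 338.0
impulse_value = 669.3        # number produced by the ground software, in lbf*s

dv_intended = impulse_value * LBF_S_TO_N_S / mass_kg  # ~8.8 m/s, near the target
dv_actual = impulse_value / mass_kg                   # ~2.0 m/s: same number read as N*s
print(round(dv_intended, 1), round(dv_actual, 1))
```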
<img src="./images/MCO_Orbit.png" width="700"/>
# **Discriminative Feature Selection**
# FEATURE SELECTION
Feature selection is the process of automatically or manually selecting the features that contribute most to the prediction variable or output you are interested in. Irrelevant features in your data can decrease the accuracy of your models and cause them to learn from irrelevant signals.
We are going to understand it with a practice example. Steps are as follows :
- Import important libraries
- Importing data
- Data Preprocessing
- Price
- Size
- Installs
- Discriminative Feature Check
- Reviews
- Price
**1. Import Important Libraries**
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
```
**2. Importing Data**
Today we will be working on a Google Play Store apps dataset with ratings. Link to the dataset --> https://www.kaggle.com/lava18/google-play-store-apps/data
```
df = pd.read_csv('googleplaystore.csv',encoding='unicode_escape')
df.head()
```
**3. Data Preprocessing**
Let us have a look at all the datatypes first :
```
df.dtypes
```
We see that all the columns except 'Rating' have the object datatype. We want to convert those columns to numeric types, since they don't make sense in object form. Let us start with the 'Price' column.
**i) Price**
When we looked at the head of the dataset, we only saw 0 values in the 'Price' column. Let us look at the rows with non-zero prices. Since the 'Price' column is of object type, we compare it with the string '0' instead of the number 0.
```
df[df['Price']!='0'].head()
```
We see that the 'Price' column has a dollar sign at the beginning for the apps which are not free. Hence we cannot directly convert it to a numeric type; we first have to remove the $ sign so that all values are uniform and can be converted.

We use the replace function here to replace the dollar sign with an empty string. Notice that we go through the `.str` accessor, since the column is of object type and `replace` must operate on string values.
```
df['Price'] = df['Price'].str.replace('$','', regex=False)
df[df['Price']!='0'].head()
```
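As a quick standalone illustration (toy values, not part of the tutorial's code) of why `regex=False` is passed above: pandas treats the pattern as a regular expression by default, and `'$'` is the end-of-string anchor in a regex, so a regex replace would leave the dollar sign untouched.

```python
import pandas as pd

prices = pd.Series(['$4.99', '0'])
literal = prices.str.replace('$', '', regex=False).tolist()  # literal character
anchored = prices.str.replace('$', '', regex=True).tolist()  # regex anchor, no-op here
print(literal, anchored)
```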
**ii) Size**
Looking at the 'Size' column, we see that the values end with the letter 'M' (for megabytes). We want to convert the sizes to numeric values for use in the dataset, so we need to remove the trailing 'M'.
For this, we convert the column to string and omit the last letter of the string and save the data in 'Size' column.
Notice from the head shown earlier that the 'Size' for row 427 is given as 'Varies with device'. We obviously cannot convert such data to numeric; we will see how to deal with it later.
```
df['Size'] = df['Size'].str[:-1]
df.head()
```
**iii) Installs**
If we see the 'Installs' column, there are 2 major changes that we need to make to convert it to numeric. We have to remove the '+' sign from the end of the data as well as remove the commas before converting to numeric.
To remove the last letter, we apply the same procedure as for the 'Size' column :
```
df['Installs'] = df['Installs'].str[:-1]
df.head()
```
For the removal of commas, we will use the replace function to replace commas with blank.
The replace function only works on strings, hence we access the values of the series as strings (via the `.str` accessor) before applying it:
```
df['Installs'] = df['Installs'].str.replace(',','')
df.head()
```
Now, we will finally convert all the data to numeric types using the `to_numeric` function. Notice the `errors='coerce'` parameter: it converts any value that cannot be parsed as a number into NaN. For example, the 'Size' in row 427 cannot be converted, so it becomes NaN. After that we take a look at the datatypes of the columns again.
```
df['Reviews'] = pd.to_numeric(df['Reviews'],errors='coerce')
df['Size'] = pd.to_numeric(df['Size'],errors='coerce')
df['Installs'] = pd.to_numeric(df['Installs'],errors='coerce')
df['Price'] = pd.to_numeric(df['Price'],errors='coerce')
df.dtypes
```
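A tiny standalone demo (toy data, our own) of what `errors='coerce'` does to an unparsable value like 'Varies with device':

```python
import pandas as pd

s = pd.Series(['19', '3.5', 'Varies with device'])
converted = pd.to_numeric(s, errors='coerce')
print(converted.tolist())  # [19.0, 3.5, nan]
```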
Now we will see and work with all the NaN values. Let us first have a look at all the NaN values in the dataset :
```
df.isna().sum()
```
Since Rating is the target of our dataset, we cannot allow it to be NaN. Hence we remove all rows where 'Rating' is NaN:
```
df = df[df['Rating'].isna()==False]
df.isna().sum()
```
This is the final preprocessed dataset that we obtained :
```
df.head()
```
**4. Discriminative Feature Check**
Now we will move on to checking which features are discriminative, i.e., which features are good predictors and which are not. We will start with the 'Reviews' column. For our purposes, we treat Rating > 4.3 as a good rating; as the statistics below show, the ratings split roughly 50:50 at that value.
Before we do that, let us have a look at the statistics of the whole table :
```
df.describe()
```
**i) Reviews**
We have to try multiple threshold values to see which one gives the best rating distinction. We will start by comparing against the mean of the 'Reviews' column, which is 514098.

We use a new function here known as crosstab. Crosstab allows us to have a frequency count across 2 columns or conditions.

We can also normalize the column results to obtain the conditional probability P(Rating = HIGH | condition).

We have also turned on the margins to see the total frequency under each condition.
```
pd.crosstab(df['Rating']>4.3,df['Reviews']>514098,rownames=['Ratings>4.3'],colnames=['Reviews>514098'],margins= True)
```
We see that the number of samples with Reviews > 514098 is quite small (close to 10% of the data).

Hence it is preferable to split at the 50th percentile rather than the mean. The 50th percentile is 5930 reviews in this case, so let us take a look at that:
```
pd.crosstab(df['Rating']>4.3,df['Reviews']>5930,rownames=['Ratings>4.3'],colnames=['Reviews>5930'],margins= True)
```
Now the number of samples is equal on both sides of the split, so we will use the 50th percentile as our starting point from now on. Let us now look at the conditional probabilities:
```
pd.crosstab(df['Rating']>4.3,df['Reviews']>5930,rownames=['Ratings>4.3'],colnames=['Reviews>5930'],margins= True,normalize='columns')
```
There is not much difference between P(Rating=HIGH | Reviews<5930) and P(Rating=HIGH | Reviews>5930), so this split makes for a bad feature.
Let us increase the pivot for Reviews to 80000 and check again. We don't need to worry about the sample count being too low, since 80000 is close to the 75th percentile mark.
```
pd.crosstab(df['Rating']>4.3,df['Reviews']>80000,rownames=['Ratings>4.3'],colnames=['Reviews>80000'],margins= True,normalize='columns')
```
Now we see a good difference between the probabilities, and hence Reviews > 80000 is a good feature.
**ii) Price**
We will do the same for the 'Price' column to find the most distinctive split. In this case even the 75th percentile is 0 (most apps are free), so instead of a percentile split we classify apps as free or paid:
```
pd.crosstab(df['Rating']>4.3,df['Price']==0,rownames=['Ratings>4.3'],colnames=['Price=$0'],margins= True)
```
This shows that it is very difficult to use Price as a feature; it is doubtful at best. If we still want to force it as a feature, let us look at the conditional probability:
```
pd.crosstab(df['Rating']>4.3,df['Price']==0,rownames=['Ratings>4.3'],colnames=['Price=$0'],margins= True,normalize='columns')
```
We see that there is not much difference in probability either; hence Price would serve as a bad feature in any case.
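The "difference in conditional probability" we have been eyeballing can be turned into a simple numeric score. This is our own sketch (the helper name and toy data are ours, not the tutorial's): a feature is more discriminative the larger the gap between P(high | feature) and P(high | not feature).

```python
import pandas as pd

def probability_gap(df, target, feature):
    # P(target=True | feature=True) vs P(target=True | feature=False)
    p = df.groupby(feature)[target].mean()
    return abs(p.get(True, 0.0) - p.get(False, 0.0))

toy = pd.DataFrame({
    'high_rating':  [True, True, False, False, True, False],
    'many_reviews': [True, True, False, False, False, True],
})
print(round(probability_gap(toy, 'high_rating', 'many_reviews'), 3))  # 0.333
```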
This is the end of this tutorial. You can now move on to assignment 7, in which you will check the other two candidate features.
```
!wget https://www.dropbox.com/s/ic9ym6ckxq2lo6v/Dataset_Signature_Final.zip
#!wget https://www.dropbox.com/s/0n2gxitm2tzxr1n/lightCNN_51_checkpoint.pth
#!wget https://www.dropbox.com/s/9yd1yik7u7u3mse/light_cnn.py
import zipfile
sigtrain = zipfile.ZipFile('Dataset_Signature_Final.zip', mode='r')
sigtrain.extractall()
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
```
```
import re
import os
import cv2
import random
import numpy as np
import collections
import torch
import torchvision
from torch.utils import data
from torchvision import models
import torch.nn as nn
from torch.utils.data import DataLoader,Dataset
import torch.nn.functional as F
from PIL import Image
import PIL
from numpy.random import choice, shuffle
from itertools import product, combinations, combinations_with_replacement, permutations
import torch.optim as optim
from torchvision import transforms
train_image_list = []
test_image_list = []
for root, dirs, files in os.walk('Dataset'):
#if (len(dirs) ==0 and off in root):
if (len(dirs) ==0):
for root_sub, dirs_sub, files_sub in os.walk(root):
for file in files_sub:
if 'dataset4' not in root_sub:
train_image_list.append(os.path.join(root_sub,file).rstrip('\n'))
else:
test_image_list.append(os.path.join(root_sub,file).rstrip('\n'))
train_image_list_x = []
for i in list(set([re.split('/',image)[1] for image in train_image_list ])):
#datasetx = random.choice(dataset)
#index1 = dataset.index(datasetx)
#for dataset_ in dataset:
train_image_list_x.append([image for image in train_image_list if i in image])
train_image_lis_dataset1 = train_image_list_x[0]
train_image_lis_dataset2 = train_image_list_x[1]
train_image_lis_dataset3 = train_image_list_x[2]
class PhiLoader(data.Dataset):
def __init__(self, image_list, resize_shape, transform=True):
self.image_list = image_list
self.diff = list(set([str(str(re.split('/',image)[-1]).split('.')[0])[-3:] for image in self.image_list]))
self.identity_image = []
for i in self.diff:
self.identity_image.append([image for image in self.image_list if ((str(str(image).split('/')[-1]).split('.')[0]).endswith(i))])
self.PairPool=[]
for user in self.identity_image:
Real=[]
Forge=[]
for image in user:
if 'real' in image:
Real.append(image)
else:
Forge.append(image)
self.PairPool.extend(list(product(Real,Forge+Real)))
self.Dimensions = resize_shape
self.transform=transform
self.labels=[]
self.ToGray=transforms.Grayscale()
self.RR=transforms.RandomRotation(degrees=10,resample=PIL.Image.CUBIC)
self.Identity = transforms.Lambda(lambda x : x)
self.RRC = transforms.Lambda(lambda x : self.RandomRCrop(x))
self.Transform=transforms.RandomChoice([self.RR,
self.RRC,
self.Identity
])
self.T=transforms.ToTensor()
self.labels=[]
def __len__(self):
return len(self.PairPool)
def RandomRCrop(self,image):
width,height = image.size
size=random.uniform(0.9,1.00)
#ratio = random.uniform(0.45,0.55)
newheight = size*height
newwidth = size*width
T=transforms.RandomCrop((int(newheight),int(newwidth)))
return T(image)
def __getitem__(self,index):
#print("index",index)
index=index%len(self.PairPool)
pairPool = self.PairPool[index]
img1 = self.ToGray(Image.open(pairPool[0]))
img2 = self.ToGray(Image.open(pairPool[1]))
label_1 = pairPool[0].split('/')[2]
label_2 = pairPool[1].split('/')[2]
if label_1 == label_2: ### same class
l=0.0
self.labels.append(l)
else: ### different class
l=1.0
self.labels.append(l)
if self.transform:
img1 = self.Transform(img1)
img2 = self.Transform(img2)
return self.T(img1.resize(self.Dimensions)), self.T(img2.resize(self.Dimensions)), torch.tensor(l)
class PhiNet(nn.Module):
def __init__(self, ):
super(PhiNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1,96,kernel_size=11,stride=1),
nn.ReLU(),
nn.LocalResponseNorm(5, alpha=1e-4, beta=0.75, k=2),
nn.MaxPool2d(kernel_size=3, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),
nn.ReLU(),
nn.LocalResponseNorm(5, alpha=1e-4, beta=0.75, k=2),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Dropout2d(p=0.3))
self.layer3 = nn.Sequential(
nn.Conv2d(256,384, kernel_size=3, stride=1, padding=1))
self.layer4 = nn.Sequential(
nn.Conv2d(384,256, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Dropout2d(p=0.3))
self.layer5 = nn.Sequential(
nn.Conv2d(256,128, kernel_size=3, stride=1, padding=1),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Dropout2d(p=0.3))
self.adap = nn.AdaptiveAvgPool3d((128,6,6))
self.layer6 = nn.Sequential(
nn.Linear(4608,512),
nn.ReLU(),
nn.Dropout(p=0.5))
self.layer7 = nn.Sequential(
nn.Linear(512,128),
nn.ReLU())
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = self.layer5(out)
out = self.adap(out)
out = out.reshape(out.size()[0], -1)
out = self.layer6(out)
out = self.layer7(out)
return out
import math
def set_optimizer_lr(optimizer, lr):
# callback to set the learning rate in an optimizer, without rebuilding the whole optimizer
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
def se(initial_lr,iteration,epoch_per_cycle):
return initial_lr * (math.cos(math.pi * iteration / epoch_per_cycle) + 1) / 2
class ContrastiveLoss(torch.nn.Module):
"""
Contrastive loss function.
Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
"""
def __init__(self, margin=2.0):
super(ContrastiveLoss, self).__init__()
self.margin = margin
def forward(self, output1, output2, label):
euclidean_distance = F.pairwise_distance(output1, output2)
loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
(label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
return loss_contrastive
def contrastive_loss():
return ContrastiveLoss()
def compute_accuracy_roc(predictions, labels):
'''
Compute ROC accuracy with a range of thresholds on distances.
'''
dmax = np.max(predictions)
dmin = np.min(predictions)
nsame = np.sum(labels == 0)
ndiff = np.sum(labels == 1)
thresh=1.0
step = 0.01
max_acc = 0
for d in np.arange(dmin, dmax+step, step):
idx1 = predictions.ravel() <= d
idx2 = predictions.ravel() > d
tpr = float(np.sum(labels[idx1] == 0)) / nsame
tnr = float(np.sum(labels[idx2] == 1)) / ndiff
acc = 0.5 * (tpr + tnr)
if (acc > max_acc):
max_acc = acc
thresh=d
return max_acc,thresh
trainloader1 = torch.utils.data.DataLoader(PhiLoader(image_list = train_image_lis_dataset1, resize_shape=[128,64]),
batch_size=32, num_workers=4, shuffle = True, pin_memory=False)
trainloader1_hr = torch.utils.data.DataLoader(PhiLoader(image_list = train_image_lis_dataset1, resize_shape=[256,128]),
batch_size=16, num_workers=4, shuffle = True, pin_memory=False)
trainloader1_uhr = torch.utils.data.DataLoader(PhiLoader(image_list = train_image_lis_dataset1, resize_shape=[512,256]),
batch_size=4, num_workers=0, shuffle = False, pin_memory=False)
trainloader3 = torch.utils.data.DataLoader(PhiLoader(image_list = train_image_lis_dataset3, resize_shape=[512,256]),
batch_size=32, num_workers=1, shuffle = False, pin_memory=False)
testloader = torch.utils.data.DataLoader(PhiLoader(image_list = test_image_list, resize_shape=[256,128]),
batch_size=32, num_workers=1, shuffle = True, pin_memory=False)
device = torch.device("cuda:0")
print(device)
best_loss = 99999999
phinet = PhiNet().to(device)
siamese_loss = contrastive_loss() ### Notice a new loss. contrastive_loss function is defined above.
siamese_loss = siamese_loss.to(device)
def test(epoch):
global best_loss
phinet.eval()
test_loss = 0
correct = 0
total = 1
for batch_idx, (inputs_1, inputs_2, targets) in enumerate(testloader):
with torch.no_grad():
inputs_1, inputs_2, targets = inputs_1.to(device), inputs_2.to(device), targets.to(device)
features_1 = phinet(inputs_1) ### get feature for image_1
features_2 = phinet(inputs_2) ### get feature for image_2
loss = siamese_loss(features_1, features_2, targets.float())
test_loss += loss.item()
# Save checkpoint.
losss = test_loss/len(testloader)
if losss < best_loss: ### save model with the best loss so far
print('Saving..')
state = {
'net': phinet
}
if not os.path.isdir('checkpoint'):
os.mkdir('checkpoint')
torch.save(state, 'checkpoint/phinet_siamese.stdt')
best_loss = losss
return test_loss/len(testloader)
def train_se(epochs_per_cycle,initial_lr,dl):
phinet.train()
snapshots = []
global epoch;
epoch_loss=0
cycle_loss=0
global optimizer
for j in range(epochs_per_cycle):
epoch_loss = 0
print('\nEpoch: %d' % epoch)
lr = se(initial_lr, j, epochs_per_cycle)
optimizer = set_optimizer_lr(optimizer, lr)
train = trainloader1
for batch_idx, (inputs_1, inputs_2, targets) in enumerate(train):
inputs_1, inputs_2, targets = inputs_1.to(device), inputs_2.to(device), targets.to(device)
optimizer.zero_grad()
features_1 = phinet(inputs_1) ### get feature for image_1
features_2 = phinet(inputs_2)
loss =siamese_loss(features_1, features_2, targets)
loss.backward()
optimizer.step()
epoch_loss += loss.item()/len(train)
epoch+=1
cycle_loss += epoch_loss/(epochs_per_cycle)
print ("e_Loss:",epoch_loss);
print("c_loss:",cycle_loss)
snapshots.append(phinet.state_dict())
return snapshots
lr=1e-4
epoch=0
optimizer = optim.SGD(phinet.parameters(),lr=lr)
for i in range(6):
train_se(3,lr,trainloader1)
test_loss = test(i)
print("Test Loss: ", test_loss)
for i in range(6):
train_se(3,lr,trainloader1_hr)
test_loss = test(i)
print("Test Loss: ", test_loss)
loaded=torch.load('checkpoint/phinet_siamese.stdt')['net']
import gc
def predict(model,dataloader,fn):
model.eval()
model.cuda()
labels=[]
out=[]
pwd = torch.nn.PairwiseDistance(p=1)
for x0,x1,label in dataloader:
labels.extend(label.numpy())
a=model(x0.cuda())
b=model(x1.cuda())
#print(torch.log(a/(1-a)),a)
out.extend(pwd(a,b))
#!nvidia-smi
return fn(np.asarray(out),np.asarray(labels))
testloader_ = torch.utils.data.DataLoader(PhiLoader(image_list = train_image_lis_dataset2, resize_shape=[256,128]),
batch_size=16, num_workers=0, shuffle = False, pin_memory=False)
with torch.no_grad():
maxacc,threshold = predict(loaded,testloader_,compute_accuracy_roc)
print("Accuracy:{:0.3f}".format(maxacc*100),"Threshold:{:0.3f}".format(threshold))
```
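The contrastive loss used above (Hadsell, Chopra & LeCun 2006) can be sanity-checked per pair in plain Python, without torch. This is our own illustration of the formula, not the training code: genuine pairs (label 0) pay the squared distance, impostor pairs (label 1) pay only when they fall inside the margin.

```python
def contrastive(distance, label, margin=2.0):
    # (1 - label) * d^2 + label * max(margin - d, 0)^2
    return (1 - label) * distance ** 2 + label * max(margin - distance, 0.0) ** 2

print(contrastive(0.0, 0))  # genuine pair, perfectly matched: 0.0
print(contrastive(0.0, 1))  # impostor pair at distance 0: margin**2 = 4.0
print(contrastive(3.0, 1))  # impostor already beyond the margin: 0.0
```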
# Miscellaneous
This section describes the organization of classes, methods, and functions in the ``finite_algebra`` module, by way of describing the algebraic entities they represent. So, if we let $A \rightarrow B$ denote "A is a superclass of B", then the class hierarchy of algebraic structures in ``finite_algebra`` is:
<center><i>FiniteAlgebra</i> $\rightarrow$ Magma $\rightarrow$ Semigroup $\rightarrow$ Monoid $\rightarrow$ Group $\rightarrow$ Ring $\rightarrow$ Field</center>
The definition of a Group is the easiest place to begin with this description.
## Groups
A group, $G = \langle S, \circ \rangle$, consists of a set, $S$, and a binary operation, $\circ: S \times S \to S$ such that:
1. $\circ$ assigns a unique value, $a \circ b \in S$, for every $(a,b) \in S \times S$.
1. $\circ$ is <i>associative</i>. That is, for any $a,b,c \in S \Rightarrow a \circ (b \circ c) = (a \circ b) \circ c$.
1. There is an <i>identity</i> element $e \in S$, such that, for all $a \in S, a \circ e = e \circ a = a$.
1. Every element $a \in S$ has an <i>inverse</i> element, $a^{-1} \in S$, such that, $a \circ a^{-1} = a^{-1}
\circ a = e$.
The symbol, $\circ$, is used above to emphasize that the operation is not necessarily numeric addition, $+$, or multiplication, $\times$. Most of the time, though, no symbol at all is used, e.g., $ab$ instead of $a \circ b$. That will be the convention here.

Also, since group operations are associative, there is no ambiguity in writing products like $abc$ without parentheses.
## Magmas, Semigroups, and Monoids
By relaxing one or more of the Group requirements, above, we obtain even more general algebraic structures:
* If only assumption 1, above, holds, then we have a **Magma**
* If both 1 and 2 hold, then we have a **Semigroup**
* If 1, 2, and 3 hold, then we have a **Monoid**
Rewriting this list as follows suggests the class hierarchy presented at the beginning:
* binary operation $\Rightarrow$ **Magma**
* an *associative* Magma $\Rightarrow$ **Semigroup**
* a Semigroup with an *identity element* $\Rightarrow$ **Monoid**
* a Monoid with *inverses* $\Rightarrow$ **Group**
## Finite Algebras
The **FiniteAlgebra** class is not an algebraic structure (it has no binary operation); rather, it is a *container* for functionality that is common to all classes below it in the hierarchy, which avoids cluttering the definitions of its subclasses with a lot of "bookkeeping" details.
Two of those "bookkeeping" details are quite important, though:
* List of elements -- a list of ``str``
* Cayley Table -- a NumPy array of integers representing the 0-based indices of elements in the element list
Algebraic properties, such as associativity, commutativity, identities, and inverses, can be derived from the Cayley Table, so methods that test for those properties are contained in the **CayleyTable** class and can be accessed by methods in the **FiniteAlgebra** class.
## Rings and Fields
Adding the Ring and Field classes completes the set of algebras supported by ``finite_algebra``.
We can define a **Ring**, $R = \langle S, +, \cdot \rangle$, on a set, $S$, with two binary operations, $+$ and $\cdot$, abstractly called *addition* and *multiplication*, where:
1. $\langle S, + \rangle$ is an abelian Group
1. $\langle S, \cdot \rangle$ is a Semigroup
1. Multiplication distributes over addition:
* $a \cdot (b + c) = a \cdot b + a \cdot c$
* $(b + c) \cdot a = b \cdot a + c \cdot a$
With Rings, the **additive identity** element is usually denoted by $0$, and, if it exists, a **multiplicative identity** is denoted by $1$.
A **Field**, $F = \langle S, +, \cdot \rangle$, is a Ring, where $\langle S\setminus{\{0\}}, \cdot \rangle$ is an abelian Group.
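The Ring/Field distinction can be checked computationally. As a hedged sketch (our own helper, not the ``finite_algebra`` API): the integers mod $n$ always form a ring under $+$ and $\cdot$, and they form a field exactly when $n$ is prime, because only then does every nonzero element have a multiplicative inverse.

```python
def has_all_inverses(n):
    # True iff every nonzero element of Z_n has a multiplicative inverse mod n
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

print(has_all_inverses(5), has_all_inverses(6))  # True False: Z_5 is a field, Z_6 is not
```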
## Commutative Magmas
A <i>commutative Magma</i> is a Magma where the binary operation is commutative.
That is, for all $a,b \in M \Rightarrow ab = ba$.
If the Magma also happens to be a Group, then it is often referred to as an <i>abelian Group</i>.
## Finite Groups
A <i>finite group</i> is a group, $G = \langle S, \cdot \rangle$, where the number of elements is finite.
So, for example, $S = \{e, a_1, a_2, a_3, ... , a_{n-1}\}$. In this case, we say that the <i>order</i> of $G$ is $n$.
For infinite groups, the operator, $\circ$, is usually defined according to a rule or function. This can also be done for finite groups; however, in the finite case it is also possible to define the operator via a <i>multiplication table</i>, where each row and each column represents one of the finitely many elements.
For example, if $S = \{E, H, V, R\}$, where $E$ is the identity element, then a possible multiplication table would be as shown below (i.e., the <i>Klein-4 Group</i>):
. | E | H | V | R
-----|---|---|---|---
<b>E</b> | E | H | V | R
<b>H</b> | H | E | R | V
<b>V</b> | V | R | E | H
<b>R</b> | R | V | H | E
<center><b>elements & their indices:</b> $\begin{bmatrix} E & H & V & R \\ 0 & 1 & 2 & 3 \end{bmatrix}$</center>

<center><b>table (showing indices):</b> $\begin{bmatrix} 0 & 1 & 2 & 3 \\ 1 & 0 & 3 & 2 \\ 2 & 3 & 0 & 1 \\ 3 & 2 & 1 & 0 \end{bmatrix}$</center>
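The index form of the table makes it easy to verify the group axioms mechanically. The following is a standalone sketch (our own code, not the **CayleyTable** class's methods) checking closure, associativity, the identity at index 0, and inverses for the Klein-4 table above:

```python
table = [[0, 1, 2, 3],
         [1, 0, 3, 2],
         [2, 3, 0, 1],
         [3, 2, 1, 0]]
n = len(table)
op = lambda a, b: table[a][b]

assert all(op(a, b) in range(n) for a in range(n) for b in range(n))  # closure
assert all(op(a, op(b, c)) == op(op(a, b), c)
           for a in range(n) for b in range(n) for c in range(n))     # associativity
assert all(op(0, a) == a and op(a, 0) == a for a in range(n))         # identity E
assert all(any(op(a, b) == 0 for b in range(n)) for a in range(n))    # inverses
print("Klein-4 is a group")
```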
## Subgroups
Given a group, $G = \langle S, \circ \rangle$, suppose that $T \subseteq S$ is such that $H = \langle T, \circ \rangle$ forms a group itself; then $H$ is said to be a subgroup of $G$, sometimes denoted by $H \leq G$.
There are two <i>trivial subgroups</i> of $G$: the group consisting of just the identity element, $\langle \{e\}, \circ \rangle$, and the entire group, $G$, itself. All other subgroups are <i>proper subgroups</i>.
A subgroup, $H$, is a <i>normal subgroup</i> of a group G, if, for all elements $g \in G$ and for all $h \in H \Rightarrow ghg^{-1} \in H$.
## Isomorphisms
TBD
## References
TBD
# CNTK 201A Part A: CIFAR-10 Data Loader
This tutorial will show how to prepare image data sets for use with deep learning algorithms in CNTK. The CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) is a popular dataset for image classification, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. It is a labeled subset of the [80 million tiny images](http://people.csail.mit.edu/torralba/tinyimages/) dataset.
The CIFAR-10 dataset is not included in the CNTK distribution, but it can be easily downloaded and converted to CNTK-supported format.
CNTK 201A tutorial is divided into two parts:
- Part A: Familiarizes you with the CIFAR-10 data and converts them into CNTK supported format. This data will be used later in the tutorial for image classification tasks.
- Part B: We will introduce image understanding tutorials.
If you are curious about how well computers can perform on CIFAR-10 today, Rodrigo Benenson maintains a [blog](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130) on the state-of-the-art performance of various algorithms.
```
from __future__ import print_function
from PIL import Image
import getopt
import numpy as np
import pickle as cp
import os
import shutil
import struct
import sys
import tarfile
import xml.etree.cElementTree as et
import xml.dom.minidom
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Config matplotlib for inline plotting
%matplotlib inline
```
## Data download
The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class.
There are 50,000 training images and 10,000 test images. The 10 classes are: airplane, automobile, bird,
cat, deer, dog, frog, horse, ship, and truck.
```
# CIFAR Image data
imgSize = 32
numFeature = imgSize * imgSize * 3
```
We first set up a few helper functions to download the CIFAR data. The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. To prepare the input data for use in CNTK we use three operations:
> `readBatch`: Unpack the pickle files
> `loadData`: Compose the data into single train and test objects
> `saveTxt`: As the name suggests, saves the label and the features into text files for both training and testing.
```
def readBatch(src):
with open(src, 'rb') as f:
if sys.version_info[0] < 3:
d = cp.load(f)
else:
d = cp.load(f, encoding='latin1')
data = d['data']
feat = data
res = np.hstack((feat, np.reshape(d['labels'], (len(d['labels']), 1))))
return res.astype(np.int)
def loadData(src):
print ('Downloading ' + src)
fname, h = urlretrieve(src, './delete.me')
print ('Done.')
try:
print ('Extracting files...')
with tarfile.open(fname) as tar:
tar.extractall()
print ('Done.')
print ('Preparing train set...')
trn = np.empty((0, numFeature + 1), dtype=np.int)
for i in range(5):
batchName = './cifar-10-batches-py/data_batch_{0}'.format(i + 1)
trn = np.vstack((trn, readBatch(batchName)))
print ('Done.')
print ('Preparing test set...')
tst = readBatch('./cifar-10-batches-py/test_batch')
print ('Done.')
finally:
os.remove(fname)
return (trn, tst)
def saveTxt(filename, ndarray):
with open(filename, 'w') as f:
labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str)))
for row in ndarray:
row_str = row.astype(str)
label_str = labels[row[-1]]
feature_str = ' '.join(row_str[:-1])
f.write('|labels {} |features {}\n'.format(label_str, feature_str))
```
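As a standalone illustration (toy values, not a real 3072-pixel row), the CNTK text-format line that `saveTxt` writes pairs a one-hot `|labels` field with the raw `|features` values:

```python
label, num_classes = 3, 10
one_hot = ' '.join('1' if i == label else '0' for i in range(num_classes))
features = ' '.join(str(v) for v in [59, 43, 50])  # truncated toy pixel values
line = '|labels {} |features {}'.format(one_hot, features)
print(line)
```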
In addition to saving the images in the text format, we also save them as PNG files and compute the mean image. `saveImage` and `saveMean` are the two functions used for this purpose.
```
def saveImage(fname, data, label, mapFile, regrFile, pad, **key_parms):
# data in CIFAR-10 dataset is in CHW format.
pixData = data.reshape((3, imgSize, imgSize))
if ('mean' in key_parms):
key_parms['mean'] += pixData
if pad > 0:
pixData = np.pad(pixData, ((0, 0), (pad, pad), (pad, pad)), mode='constant', constant_values=128)
img = Image.new('RGB', (imgSize + 2 * pad, imgSize + 2 * pad))
pixels = img.load()
for x in range(img.size[0]):
for y in range(img.size[1]):
pixels[x, y] = (pixData[0][y][x], pixData[1][y][x], pixData[2][y][x])
img.save(fname)
mapFile.write("%s\t%d\n" % (fname, label))
# compute per channel mean and store for regression example
channelMean = np.mean(pixData, axis=(1,2))
regrFile.write("|regrLabels\t%f\t%f\t%f\n" % (channelMean[0]/255.0, channelMean[1]/255.0, channelMean[2]/255.0))
def saveMean(fname, data):
root = et.Element('opencv_storage')
et.SubElement(root, 'Channel').text = '3'
et.SubElement(root, 'Row').text = str(imgSize)
et.SubElement(root, 'Col').text = str(imgSize)
meanImg = et.SubElement(root, 'MeanImg', type_id='opencv-matrix')
et.SubElement(meanImg, 'rows').text = '1'
et.SubElement(meanImg, 'cols').text = str(imgSize * imgSize * 3)
et.SubElement(meanImg, 'dt').text = 'f'
et.SubElement(meanImg, 'data').text = ' '.join(['%e' % n for n in np.reshape(data, (imgSize * imgSize * 3))])
tree = et.ElementTree(root)
tree.write(fname)
x = xml.dom.minidom.parse(fname)
with open(fname, 'w') as f:
f.write(x.toprettyxml(indent = ' '))
```
`saveTrainImages` and `saveTestImages` are simple wrapper functions to iterate through the data set.
```
def saveTrainImages(filename, foldername):
if not os.path.exists(foldername):
os.makedirs(foldername)
data = {}
dataMean = np.zeros((3, imgSize, imgSize)) # mean is in CHW format.
with open('train_map.txt', 'w') as mapFile:
with open('train_regrLabels.txt', 'w') as regrFile:
for ifile in range(1, 6):
with open(os.path.join('./cifar-10-batches-py', 'data_batch_' + str(ifile)), 'rb') as f:
if sys.version_info[0] < 3:
data = cp.load(f)
else:
data = cp.load(f, encoding='latin1')
for i in range(10000):
fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % (i + (ifile - 1) * 10000)))
saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 4, mean=dataMean)
dataMean = dataMean / (50 * 1000)
saveMean('CIFAR-10_mean.xml', dataMean)
def saveTestImages(filename, foldername):
if not os.path.exists(foldername):
os.makedirs(foldername)
with open('test_map.txt', 'w') as mapFile:
with open('test_regrLabels.txt', 'w') as regrFile:
with open(os.path.join('./cifar-10-batches-py', 'test_batch'), 'rb') as f:
if sys.version_info[0] < 3:
data = cp.load(f)
else:
data = cp.load(f, encoding='latin1')
for i in range(10000):
fname = os.path.join(os.path.abspath(foldername), ('%05d.png' % i))
saveImage(fname, data['data'][i, :], data['labels'][i], mapFile, regrFile, 0)
# URLs for the train image and labels data
url_cifar_data = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
# Paths for saving the text files
data_dir = './data/CIFAR-10/'
train_filename = data_dir + '/Train_cntk_text.txt'
test_filename = data_dir + '/Test_cntk_text.txt'
train_img_directory = data_dir + '/Train'
test_img_directory = data_dir + '/Test'
root_dir = os.getcwd()
if not os.path.exists(data_dir):
os.makedirs(data_dir)
try:
os.chdir(data_dir)
trn, tst= loadData('http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz')
print ('Writing train text file...')
saveTxt(r'./Train_cntk_text.txt', trn)
print ('Done.')
print ('Writing test text file...')
saveTxt(r'./Test_cntk_text.txt', tst)
print ('Done.')
print ('Converting train data to png images...')
saveTrainImages(r'./Train_cntk_text.txt', 'train')
print ('Done.')
print ('Converting test data to png images...')
saveTestImages(r'./Test_cntk_text.txt', 'test')
print ('Done.')
finally:
os.chdir("../..")
```
```
import itertools
import collections
import copy
from functools import cache
# from https://bradfieldcs.com/algos/graphs/dijkstras-algorithm/
import heapq
def calculate_distances(graph, starting_vertex):
distances = {vertex: float('infinity') for vertex in graph}
distances[starting_vertex] = 0
pq = [(0, starting_vertex)]
while len(pq) > 0:
current_distance, current_vertex = heapq.heappop(pq)
# Nodes can get added to the priority queue multiple times. We only
# process a vertex the first time we remove it from the priority queue.
if current_distance > distances[current_vertex]:
continue
for neighbor, weight in graph[current_vertex].items():
distance = current_distance + weight
# Only consider this new path if it's better than any path we've
# already found.
if distance < distances[neighbor]:
distances[neighbor] = distance
heapq.heappush(pq, (distance, neighbor))
return distances
```
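A quick sanity check of the Dijkstra routine above on a toy triangle graph (the function is restated here so the cell runs on its own):

```python
import heapq

def calculate_distances(graph, starting_vertex):
    distances = {v: float('infinity') for v in graph}
    distances[starting_vertex] = 0
    pq = [(0, starting_vertex)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > distances[v]:
            continue  # stale priority-queue entry
        for neighbor, weight in graph[v].items():
            nd = d + weight
            if nd < distances[neighbor]:
                distances[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return distances

toy = {'a': {'b': 1, 'c': 4}, 'b': {'a': 1, 'c': 2}, 'c': {'a': 4, 'b': 2}}
print(calculate_distances(toy, 'a'))  # shortest a->c goes via b: {'a': 0, 'b': 1, 'c': 3}
```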
## part 2 ##
e---f---g---h---i---j---k (hallway)
\ / \ / \ / \ /
a3 b3 c3 d3 (rooms)
| | | |
a2 b2 c2 d2
| | | |
a1 b1 c1 d1
| | | |
a0 b0 c0 d0
Weights are 1 for e-f, j-k, in between rooms (a0-a1-a2-a3, etc), and 2 for all others.
The weights of 2 mean we never need explicit rules forbidding a stop on a hallway
space directly above one of the rooms.
```
graph = {'a0': {'a1': 1},
'b0': {'b1': 1},
'c0': {'c1': 1},
'd0': {'d1': 1},
'a1': {'a0': 1, 'a2': 1},
'b1': {'b0': 1, 'b2': 1},
'c1': {'c0': 1, 'c2': 1},
'd1': {'d0': 1, 'd2': 1},
'a2': {'a1': 1, 'a3': 1},
'b2': {'b1': 1, 'b3': 1},
'c2': {'c1': 1, 'c3': 1},
'd2': {'d1': 1, 'd3': 1},
'a3': {'a2': 1, 'f': 2, 'g': 2},
'b3': {'b2': 1, 'g': 2, 'h': 2},
'c3': {'c2': 1, 'h': 2, 'i': 2},
'd3': {'d2': 1, 'i': 2, 'j': 2},
'e': {'f': 1},
'f': {'e': 1, 'a3': 2, 'g': 2},
'g': {'f': 2, 'h': 2, 'a3': 2, 'b3': 2},
'h': {'g': 2, 'i': 2, 'b3': 2, 'c3': 2},
'i': {'h': 2, 'j': 2, 'c3': 2, 'd3': 2},
'j': {'i': 2, 'k': 1, 'd3': 2},
'k': {'j': 1},
}
nodes = graph.keys()
nodeidx = {val:i for i,val in enumerate(nodes)}
hallway = ['e', 'f', 'g', 'h', 'i', 'j', 'k']
rooms = [''.join(p) for p in itertools.product('abcd','0123')]
move_cost = {'A': 1, 'B': 10, 'C': 100, 'D': 1000}
example_start = {'a3': 'B', 'b3': 'C', 'c3': 'B', 'd3': 'D',
'a2': 'D', 'b2': 'C', 'c2': 'B', 'd2': 'A',
'a1': 'D', 'b1': 'B', 'c1': 'A', 'd1': 'C',
'a0': 'A', 'b0': 'D', 'c0': 'C', 'd0': 'A'}
puzzle_start = {'a3': 'D', 'b3': 'B', 'c3': 'C', 'd3': 'A',
'a2': 'D', 'b2': 'C', 'c2': 'B', 'd2': 'A',
'a1': 'D', 'b1': 'B', 'c1': 'A', 'd1': 'C',
'a0': 'C', 'b0': 'A', 'c0': 'D', 'd0': 'B'}
Positions = collections.namedtuple('Positions', nodes)
for node in hallway:
example_start[node] = None
puzzle_start[node] = None
example_init = Positions(*[example_start[node] for node in nodes])
puzzle_init = Positions(*[puzzle_start[node] for node in nodes])
def is_finished(pos):
for node in rooms:
col = node[0]
val = pos[nodeidx[node]]
if (val is None) or (val.lower() != col):
return False
return True
finished_ex = Positions(*itertools.chain(*itertools.repeat('ABCD', 4), itertools.repeat(None, len(hallway))))
is_finished(finished_ex), is_finished(example_init), is_finished(puzzle_init)
def find_valid_rooms(pos):
valid = []
for col in 'abcd':
empty = None
for row in reversed(range(4)):
loc = f'{col}{row}'
val = pos[nodeidx[loc]]
if val is None:
empty = row
if empty == 0:
valid.append(f'{col}{empty}')
continue
if empty is None:
continue
if all(pos[nodeidx[f'{col}{row}']].lower() == col for row in reversed(range(empty))):
valid.append(f'{col}{empty}')
return valid
valid_test = Positions(a0='B', b0='B', c0='C', d0=None, a1=None, b1='B', c1='C', d1=None, a2=None, b2='B', c2='C', d2=None, a3=None, b3='B', c3=None, d3=None, e=None, f=None, g=None, h=None, i=None, j=None, k=None)
find_valid_rooms(valid_test), find_valid_rooms(example_init)
@cache
def find_topmost_moveable(pos):
can_move = []
for col in 'abcd':
for row in reversed(range(4)):
loc = f'{col}{row}'
val = pos[nodeidx[loc]]
if val:
if any(pos[nodeidx[f'{col}{r}']].lower() != col for r in reversed(range(row+1))):
can_move.append(loc)
break
return can_move
find_topmost_moveable(valid_test), find_topmost_moveable(finished_ex), find_topmost_moveable(example_init)
@cache
def traversal_costs(pos, startnode):
newgraph = copy.deepcopy(graph)
for node in graph:
for endpt in graph[node]:
if pos[nodeidx[endpt]] is not None:
newgraph[node][endpt] = float('infinity')
return calculate_distances(newgraph, startnode)
def allowed_moves(pos, currcost):
topmost = find_topmost_moveable(pos)
hallway_occ = [node for node in hallway if pos[nodeidx[node]] is not None]
# see if anything can move into its final position
end_rooms = find_valid_rooms(pos)
for end_room in end_rooms:
col = end_room[0]
for loc in (topmost + hallway_occ):
val = pos[nodeidx[loc]]
home_col = val.lower()
if home_col != col:
continue
tcosts = traversal_costs(pos, loc)
tcost = tcosts[end_room]
if tcost < float('infinity'):
# can move to home room
newpos = list(pos)
newpos[nodeidx[end_room]] = val
newpos[nodeidx[loc]] = None
cost = tcost*move_cost[val] + currcost
yield Positions(*newpos), cost
# don't generate any other alternatives, just do this move
return
# no moves to home, so generate all possible moves from the rooms into the hallway
hallway_empty = [node for node in hallway if pos[nodeidx[node]] is None]
for toprm, hall in itertools.product(topmost, hallway_empty):
tcosts = traversal_costs(pos, toprm)
tcost = tcosts[hall]
if tcost < float('infinity'):
# can make the move
val = pos[nodeidx[toprm]]
newpos = list(pos)
newpos[nodeidx[hall]] = val
newpos[nodeidx[toprm]] = None
cost = tcost*move_cost[val] + currcost
yield Positions(*newpos), cost
def solve(startpos):
curr_costs = {startpos: 0}
curr_positions = [startpos]
finished_costs = []
while curr_positions:
new_positions = set()
for pos in curr_positions:
currcost = curr_costs[pos]
for allowedpos, cost in allowed_moves(pos, currcost):
if is_finished(allowedpos):
finished_costs.append(cost)
continue
if allowedpos in curr_costs:
if cost < curr_costs[allowedpos]:
curr_costs[allowedpos] = cost
new_positions.add(allowedpos)
else:
curr_costs[allowedpos] = cost
new_positions.add(allowedpos)
curr_positions = new_positions
print(len(curr_positions))
print('min cost = ', min(finished_costs))
%time solve(example_init)
%time solve(puzzle_init)
```
| github_jupyter |
```
import pandas as pd
import matplotlib as mpl
import seaborn as sns
import numpy as np
import os
import re
import time
```
# Importing the Data
This data was taken from the webrobots.io scrape of the kickstarter.com page. I've pulled together data from four different scrape dates (2/16, 2/17, 2/18, and 2/19) and done some initial cleaning. <br><br>For more information on the original dataset and the steps taken for data cleaning, please see the project repository on GitHub <a href = "https://github.com/pezLyfe/TuftsDataScience">here</a>
```
start = time.time()
df = pd.DataFrame() #Initialize a dataframe
for filename in os.listdir(): #Create an iterator for all objects in the working directory
try: #I'm using try/except here because I'm lazy and didn't clean out the folder
df = df.append(pd.read_csv(filename), ignore_index = False) #When python finds a valid .csv file, append it
end = time.time()
print((end - start), filename, len(df)) #Print the filename and the total # of rows so far to track progress
except:
end = time.time()
print((end - start),'Python file') #Print some message when something is wrong
```
# De-duplicating Entries
The scraping method used by webrobots includes historical projects, so each scrape date will likely contain duplicates of previous projects
Additionally, the scrape is done by searching through each sub-category in Kickstarter's organization structure. Since a project can be listed under multiple sub-categories of a single parent category, there will be multiple entries of the same project via this method as well.
Let's determine the extent of the duplicates
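Before iterating row by row, note that pandas can do this kind of de-duplication directly with `drop_duplicates`; a minimal sketch on a toy frame (hypothetical values, reusing the project id explored later):

```python
import pandas as pd

# Two rows share the same id, category, and pledged amount -> true duplicates
toy = pd.DataFrame({'id':       [197154, 197154, 555555],
                    'category': ['games', 'games', 'music'],
                    'pledged':  [100, 100, 50]})
deduped = toy.drop_duplicates(subset=['id', 'category', 'pledged'])
# deduped keeps the first of the duplicate pair -> 2 rows
```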
```
start = time.time()
df.reset_index(inplace = True)
df.drop(labels = 'Unnamed: 0', axis = 1, inplace = True)
df.drop(labels = 'index', axis = 1, inplace = True)
end = time.time()
print(end - start)
df.head()
df.tail() #Check that the indices at the end of the dataframe match as well
len(df)
#Compare the total number of unique values in the "ID" column with the number of entries in the dataframe
print(len(df.id.value_counts()), len(df))
x = df.id.value_counts()
uniqueIDs = np.unique(df.id.values) #make an array of the unique project ID's
len(uniqueIDs)
a = df.copy()
a.tail()
a.loc[0][:]
#Drop items from A on each iteration, this should speed up the execution time
start = time.time()
a = df.copy()
b = pd.DataFrame()
dupMask = pd.DataFrame()
for i in range(100):
zMask = a.id == uniqueIDs[i]
z = a[zMask]
b = b.append(z.iloc[0][:])
print(i, len(a))
end = time.time()
print (end - start)
a = df.copy()
b = pd.DataFrame()
dupMask = pd.DataFrame()
start = time.time()
for i in range(3000):
zMask = a.id == uniqueIDs[i]
z = a[zMask]
number = z.iloc[0]['id']
cat = z.iloc[0]['category']
pledged = z.iloc[0]['pledged']
b = b.append(z.iloc[0][:])
aaMask = (a['id'] == number) & ((a['category'] != cat) | ~(a['pledged'] != pledged))
aa = a[aaMask]
b = b.append(aa, ignore_index = True)
#a.drop(z.index[:], inplace = True)
end = time.time()
print((end - start),i, len(b))
bigMask = (df.category == dupMask.Category) & (df.pledged == dupMask.Pledged) & (df.id == dupMask.Number) & (df.index != dupMask.Indices)
deDuped = a[bigMask]
deDuped.to_csv('deDupedMaybe?', sep = ',')
a = df.copy()
for i in range(len(uniqueIDs)):
zMask = a.id == uniqueIDs[i]
z = a[zMask]
for j in range(len(z)-1):
firstIndex = z.index[j]
if z.iloc[j]['category'] == z.iloc[j+1]['category'] and z.iloc[j]['pledged'] == z.iloc[j+1]['pledged']:
a.drop([firstIndex], axis = 0, inplace = True)
print(i, len(a))
len(df)
exampleMask = df['id'] == 197154
example = df[exampleMask]
example
duplicateID = []
for i in range(len(df)):
grouped = df.groupby("id")
grouped.groups
a.head()
successFrame = a[a['state'] == 'successful']
spotFrame = a[(a['spotlight'] == True) & (a['state'] == 'successful')]
staFrame = a[a['staff_pick'] == True]
staSucFrame = a[(a['staff_pick'] == True) & (a['state'] == 'successful')]
print('Successful = ',len(successFrame), 'Spotlighted =', len(spotFrame), 'Staff Picks =', len(staSucFrame), len(staFrame))
pGen = len(successFrame)/len(a)
pSpot = len(spotFrame)/len(a)
pStaff = len(staSucFrame)/len(staFrame)
pPicked = len(staFrame)/len(a)
print('P-general = ', pGen,
'P-Spotlight = ', pSpot,
'P-Staff Picks = ', pStaff,
'P-Picked for Staff = ', pPicked)
canFrame = a[~((a['state'] == 'canceled') | (a['state'] == 'active')) ]
successFrame = canFrame[canFrame['state'] == 'successful']
spotFrame = canFrame[(canFrame['spotlight'] == True) & (canFrame['state'] == 'successful')]
staFrame = canFrame[canFrame['staff_pick'] == True]
staSucFrame = canFrame[(canFrame['staff_pick'] == True) & (canFrame['state'] == 'successful')]
print('Successful = ',len(successFrame), 'Spotlighted =', len(spotFrame), 'Staff Picks =', len(staSucFrame), len(staFrame))
pGen = len(successFrame)/len(canFrame)
pSpot = len(spotFrame)/len(canFrame)
pStaff = len(staSucFrame)/len(staFrame)
pPicked = len(staFrame)/len(canFrame)
print('P-general = ', pGen,
'P-Spotlight = ', pSpot,
'P-Staff Picks = ', pStaff,
'P-Picked for Staff = ', pPicked)
df.hist('goal', bins = 50)
sortbyPrice = df.sort_values('converted_pledged_amount', axis = 0, ascending = False)
bins = [1000, 3000, 7000, 15000, 50000, 150000, 1000000]
sortbyPrice.head()
df.hist('converted_pledged_amount', bins = bins)
mpl.pyplot.scatter(df['converted_pledged_amount'], df['backers_count'])
mpl.pyplot.boxplot(df['converted_pledged_amount'])
```
| github_jupyter |
```
#|hide
#|skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
```
# Tabular training
> How to use the tabular application in fastai
To illustrate the tabular application, we will use the example of the [Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult) where we have to predict if a person is earning more or less than $50k per year using some general data.
```
from fastai.tabular.all import *
```
We can download a sample of this dataset with the usual `untar_data` command:
```
path = untar_data(URLs.ADULT_SAMPLE)
path.ls()
```
Then we can have a look at how the data is structured:
```
df = pd.read_csv(path/'adult.csv')
df.head()
```
Some of the columns are continuous (like age) and we will treat them as float numbers we can feed our model directly. Others are categorical (like workclass or education) and we will convert them to a unique index that we will feed to embedding layers. We can specify our categorical and continuous column names, as well as the name of the dependent variable in `TabularDataLoaders` factory methods:
```
dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],
cont_names = ['age', 'fnlwgt', 'education-num'],
procs = [Categorify, FillMissing, Normalize])
```
The last part is the list of pre-processors we apply to our data:
- `Categorify` is going to take every categorical variable and make a map from integer to unique categories, then replace the values by the corresponding index.
- `FillMissing` will fill the missing values in the continuous variables by the median of existing values (you can choose a specific value if you prefer)
- `Normalize` will normalize the continuous variables (subtract the mean and divide by the std)
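Roughly, each of these procs corresponds to a one-liner in plain pandas — a sketch of the idea only, not fastai's actual implementation:

```python
import pandas as pd

# Categorify-style: map each category to an integer code
s = pd.Series(['private', 'state-gov', 'private']).astype('category')
codes = s.cat.codes                # categories sorted alphabetically -> 0, 1, 0

# FillMissing-style: replace NaNs in a continuous column with the median
x = pd.Series([1.0, None, 3.0])
filled = x.fillna(x.median())      # median of [1.0, 3.0] is 2.0

# Normalize-style: subtract the mean and divide by the std
norm = (filled - filled.mean()) / filled.std()
```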
To further expose what's going on below the surface, let's rewrite this utilizing `fastai`'s `TabularPandas` class. We will need to make one adjustment, which is defining how we want to split our data. By default the factory method above used a random 80/20 split, so we will do the same:
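The 80/20 split itself is just a shuffled partition of row indices; a numpy-only sketch of the idea (not fastai's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
idx = rng.permutation(100)          # shuffled row positions
n_valid = int(100 * 0.2)            # 20% held out for validation
valid_idx, train_idx = idx[:n_valid], idx[n_valid:]
# 80 training rows, 20 validation rows, no overlap
```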
```
splits = RandomSplitter(valid_pct=0.2)(range_of(df))
to = TabularPandas(df, procs=[Categorify, FillMissing,Normalize],
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],
cont_names = ['age', 'fnlwgt', 'education-num'],
y_names='salary',
splits=splits)
```
Once we build our `TabularPandas` object, our data is completely preprocessed as seen below:
```
to.xs.iloc[:2]
```
Now we can build our `DataLoaders` again:
```
dls = to.dataloaders(bs=64)
```
> Later we will explore why using `TabularPandas` to preprocess will be valuable.
The `show_batch` method works like for every other application:
```
dls.show_batch()
```
We can define a model using the `tabular_learner` method. When we define our model, `fastai` will try to infer the loss function based on our `y_names` earlier.
**Note**: Sometimes with tabular data, your `y`'s may be encoded (such as 0 and 1). In such a case you should explicitly pass `y_block = CategoryBlock` in your constructor so `fastai` won't presume you are doing regression.
```
learn = tabular_learner(dls, metrics=accuracy)
```
And we can train that model with the `fit_one_cycle` method (the `fine_tune` method won't be useful here since we don't have a pretrained model).
```
learn.fit_one_cycle(1)
```
We can then have a look at some predictions:
```
learn.show_results()
```
Or use the predict method on a row:
```
row, clas, probs = learn.predict(df.iloc[0])
row.show()
clas, probs
```
To get prediction on a new dataframe, you can use the `test_dl` method of the `DataLoaders`. That dataframe does not need to have the dependent variable in its column.
```
test_df = df.copy()
test_df.drop(['salary'], axis=1, inplace=True)
dl = learn.dls.test_dl(test_df)
```
Then `Learner.get_preds` will give you the predictions:
```
learn.get_preds(dl=dl)
```
> Note: Since machine learning models can't magically understand categories it was never trained on, the data should reflect this. If there are different missing values in your test data you should address this before training
## `fastai` with Other Libraries
As mentioned earlier, `TabularPandas` is a powerful and easy preprocessing tool for tabular data. Integration with libraries such as Random Forests and XGBoost requires only one extra step, which the `.dataloaders` call did for us. Let's look at our `to` again. Its values are stored in a `DataFrame`-like object, where we can extract the `cats`, `conts`, `xs` and `ys` if we want to:
```
to.xs[:3]
```
Now that everything is encoded, you can then send this off to XGBoost or Random Forests by extracting the train and validation sets and their values:
```
X_train, y_train = to.train.xs, to.train.ys.values.ravel()
X_test, y_test = to.valid.xs, to.valid.ys.values.ravel()
```
And now we can directly send this in!
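For instance, with scikit-learn's random forest — sketched here on synthetic stand-in arrays, since the real `X_train`/`y_train` only exist once the cells above have run:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))          # stand-in for to.train.xs values
y_train = (X_train[:, 0] > 0).astype(int)    # stand-in binary target

clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(X_train, y_train)
preds = clf.predict(X_train[:5])             # array of 5 class labels (0 or 1)
```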
| github_jupyter |
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import skimage as sk
from skimage import measure
import os
import tifffile
from tqdm import tqdm
dots_data = pd.read_csv("field_001.gated_dots.tsv", sep="\t")
dots_data2 = dots_data.loc["60x" == dots_data["magnification"], :]
dots_data2
ref_raw = dots_data2.loc["raw" == dots_data2["image_type"], :].reset_index(drop=True)
ref__dw = dots_data2.loc["dw" == dots_data2["image_type"], :].reset_index(drop=True)
raw_image_folder_path = "/mnt/data/Imaging/202105-Deconwolf/data_210726/60x_raw"
dw__image_folder_path = "/mnt/data/Imaging/202105-Deconwolf/data_210726/60x_dw"
mask_folder_path = "../../data/60x_mask/dilated_labels_watershed"
current_field_id = 1
print(f"Field #{current_field_id}")
raw_max_z_proj = tifffile.imread(os.path.join(raw_image_folder_path, f"a647_{current_field_id:03d}.tif")).max(0)
dw__max_z_proj = tifffile.imread(os.path.join(dw__image_folder_path, f"a647_{current_field_id:03d}.tif")).max(0)
labels = tifffile.imread(os.path.join(mask_folder_path, f"a647_{current_field_id:03d}.dilated_labels.tiff")).reshape(raw_max_z_proj.shape)
field_raw_dots = ref_raw.loc[ref_raw["series_id"] == current_field_id, :].sort_values("Value2", ascending=False)
field_dw__dots = ref__dw.loc[ref__dw["series_id"] == current_field_id, :].sort_values("Value2", ascending=False)
selected_raw_dots = field_raw_dots.reset_index(drop=True)
selected_dw__dots = field_dw__dots.reset_index(drop=True)
fig3, ax = plt.subplots(figsize=(20, 10), ncols=2, constrained_layout=True)
fig3.suptitle(f"Field #{current_field_id}")
print(" > Plotting raw")
ax[0].set_title(f"60x_raw (n.dots={selected_raw_dots.shape[0]})")
ax[0].imshow(
raw_max_z_proj, cmap=plt.get_cmap("gray"), interpolation="none",
vmin=raw_max_z_proj.min(), vmax=raw_max_z_proj.max(),
resample=False, filternorm=False)
ax[0].scatter(
x=selected_raw_dots["y"].values,
y=selected_raw_dots["x"].values,
s=30, facecolors='none', edgecolors='r', linewidth=.5)
print(" > Plotting dw")
ax[1].set_title(f"60x_dw (n.dots={selected_dw__dots.shape[0]})")
ax[1].imshow(
dw__max_z_proj, cmap=plt.get_cmap("gray"), interpolation="none",
vmin=dw__max_z_proj.min()*1.5, vmax=dw__max_z_proj.max()*.5,
resample=False, filternorm=False)
ax[1].scatter(
x=selected_dw__dots["y"].values,
y=selected_dw__dots["x"].values,
s=30, facecolors='none', edgecolors='r', linewidth=.5)
print(" > Plotting contours")
for lid in tqdm(range(1, labels.max() + 1), desc="nucleus"):
contours = measure.find_contours(labels == lid, 0.8)
for contour in contours:
ax[0].scatter(x=contour[:,1], y=contour[:,0], c="yellow", s=.005)
ax[1].scatter(x=contour[:,1], y=contour[:,0], c="yellow", s=.005)
plt.close(fig3)
print(" > Exporting")
fig3.savefig(f"overlay_{current_field_id:03d}.60x.png", bbox_inches='tight')
print(" ! DONE")
```
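The `.max(0)` calls above compute a maximum-intensity z-projection; a numpy-only illustration of what that does:

```python
import numpy as np

stack = np.zeros((3, 4, 5))   # (z, y, x) image stack
stack[1, 2, 3] = 7.0          # one bright voxel in the middle slice
proj = stack.max(0)           # collapse z: each pixel keeps its brightest value
# proj.shape == (4, 5) and proj[2, 3] == 7.0
```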
| github_jupyter |
```
from pyalink.alink import *
useLocalEnv(4)
from utils import *
import os
import pandas as pd
DATA_DIR = ROOT_DIR + "mnist" + os.sep
DENSE_TRAIN_FILE = "dense_train.ak";
SPARSE_TRAIN_FILE = "sparse_train.ak";
INIT_MODEL_FILE = "init_model.ak";
TEMP_STREAM_FILE = "temp_stream.ak";
VECTOR_COL_NAME = "vec";
LABEL_COL_NAME = "label";
PREDICTION_COL_NAME = "cluster_id";
#c_1
dense_source = AkSourceBatchOp().setFilePath(DATA_DIR + DENSE_TRAIN_FILE);
sparse_source = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
sw = Stopwatch();
pipelineList = [
["KMeans EUCLIDEAN",
Pipeline()\
.add(
KMeans()\
.setK(10)\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)
)
],
["KMeans COSINE",
Pipeline()\
.add(
KMeans()\
.setDistanceType('COSINE')\
.setK(10)\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)
)
],
["BisectingKMeans",
Pipeline()\
.add(
BisectingKMeans()\
.setK(10)\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)
)
]
]
for pipelineTuple2 in pipelineList :
sw.reset();
sw.start();
pipelineTuple2[1]\
.fit(dense_source)\
.transform(dense_source)\
.link(
EvalClusterBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.lazyPrintMetrics(pipelineTuple2[0] + " DENSE")
);
BatchOperator.execute();
sw.stop();
print(sw.getElapsedTimeSpan());
sw.reset();
sw.start();
pipelineTuple2[1]\
.fit(sparse_source)\
.transform(sparse_source)\
.link(
EvalClusterBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.lazyPrintMetrics(pipelineTuple2[0] + " SPARSE")
);
BatchOperator.execute();
sw.stop();
print(sw.getElapsedTimeSpan());
#c_2
batch_source = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
stream_source = AkSourceStreamOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
if not(os.path.exists(DATA_DIR + INIT_MODEL_FILE)) :
batch_source\
.sampleWithSize(100)\
.link(
KMeansTrainBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setK(10)
)\
.link(
AkSinkBatchOp()\
.setFilePath(DATA_DIR + INIT_MODEL_FILE)
);
BatchOperator.execute();
init_model = AkSourceBatchOp().setFilePath(DATA_DIR + INIT_MODEL_FILE);
KMeansPredictBatchOp()\
.setPredictionCol(PREDICTION_COL_NAME)\
.linkFrom(init_model, batch_source)\
.link(
EvalClusterBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.lazyPrintMetrics("Batch Prediction")
);
BatchOperator.execute();
stream_source\
.link(
KMeansPredictStreamOp(init_model)\
.setPredictionCol(PREDICTION_COL_NAME)
)\
.link(
AkSinkStreamOp()\
.setFilePath(DATA_DIR + TEMP_STREAM_FILE)\
.setOverwriteSink(True)
);
StreamOperator.execute();
AkSourceBatchOp()\
.setFilePath(DATA_DIR + TEMP_STREAM_FILE)\
.link(
EvalClusterBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.lazyPrintMetrics("Stream Prediction")
);
BatchOperator.execute();
#c_3
pd.set_option('display.html.use_mathjax', False)
stream_source = AkSourceStreamOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);
init_model = AkSourceBatchOp().setFilePath(DATA_DIR + INIT_MODEL_FILE);
stream_pred = stream_source\
.link(
StreamingKMeansStreamOp(init_model)\
.setTimeInterval(1)\
.setHalfLife(1)\
.setPredictionCol(PREDICTION_COL_NAME)
)\
.select(PREDICTION_COL_NAME + ", " + LABEL_COL_NAME +", " + VECTOR_COL_NAME);
stream_pred.sample(0.001).print();
stream_pred\
.link(
AkSinkStreamOp()\
.setFilePath(DATA_DIR + TEMP_STREAM_FILE)\
.setOverwriteSink(True)
);
StreamOperator.execute();
AkSourceBatchOp()\
.setFilePath(DATA_DIR + TEMP_STREAM_FILE)\
.link(
EvalClusterBatchOp()\
.setVectorCol(VECTOR_COL_NAME)\
.setPredictionCol(PREDICTION_COL_NAME)\
.setLabelCol(LABEL_COL_NAME)\
.lazyPrintMetrics("StreamingKMeans")
);
BatchOperator.execute();
```
| github_jupyter |
# Introduction to `pandas`
```
# pandas is the data frame equivalent in Python
import numpy as np
import pandas as pd
```
## Series and Data Frames
### Series objects
A `Series` is like a vector. All elements must have the same type, with missing values stored as nulls.
```
s = pd.Series([1,1,2,3] + [None])
s
```
### Size
```
s.size # Number of elements
```
### Unique Counts
```
s.value_counts() # Returns a Series of counts
```
### Special types of series
#### Strings
```
words = 'the quick brown fox jumps over the lazy dog'.split()
s1 = pd.Series([' '.join(item) for item in zip(words[:-1], words[1:])])
s1
s1.str.upper() # Need to specify that you're going to use a string method with .str
s1.str.split()
s1.str.split().str[1]
```
### Categories
Equivalent of factors in R
```
s2 = pd.Series(['Asian', 'Asian', 'White', 'Black', 'White', 'Hispanic'])
s2
s2 = s2.astype('category')
s2
s2.cat.categories
s2.cat.codes
```
### Ordered categories
```
s3 = pd.Series(['Mon', 'Tue', 'Wed', 'Thu', 'Fri']).astype('category')
s3
s3.cat.ordered
s3.sort_values()
s3 = s3.cat.reorder_categories(['Mon', 'Tue', 'Wed', 'Thu', 'Fri'], ordered=True)
s3.cat.ordered
s3.sort_values()
```
### DataFrame objects
A `DataFrame` is like a matrix. Columns in a `DataFrame` are `Series`.
- Each column in a DataFrame represents a **variable**
- Each row in a DataFrame represents an **observation**
- Each cell in a DataFrame represents a **value**
```
df = pd.DataFrame(dict(num=[1,2,3] + [None]))
df
df.num
```
### Index
Row and column identifiers are of `Index` type.
Somewhat confusingly, index is also a synonym for the row identifiers.
```
df.index
```
#### Setting a column as the row index
```
df
df1 = df.set_index('num')
df1
```
#### Making an index into a column
```
df1.reset_index()
```
#### Sometimes you don't need to retain the index information
```
df = pd.DataFrame(dict(letters = list('ABCDEFG')))
df
df = df[df.letters.isin(list('AEIOU'))] # Use df.columnName to select a specific column by name
df
df.reset_index(drop=True)
```
### Columns
This is just a different index object
```
df.columns
```
### Getting raw values
Sometimes you just want a `numpy` array, and not a `pandas` object.
```
df.values
```
## Creating Data Frames
### Manual
```
# Feed in dictionary with column name as key and series / vector as values
n = 5
dates = pd.date_range(start='now', periods=n, freq='d')
df = pd.DataFrame(dict(pid=np.random.randint(100, 999, n),
weight=np.random.normal(70, 20, n),
height=np.random.normal(170, 15, n),
date=dates,
))
df
```
### From numpy array
```
pd.DataFrame(np.eye(3,2), columns=['A', 'B'], index=['x', 'y', 'z'])
```
### From URL
```
url = "https://gist.githubusercontent.com/netj/8836201/raw/6f9306ad21398ea43cba4f7d537619d0e07d5ae3/iris.csv"
df = pd.read_csv(url)
df.head()
```
### From file
You can read in data from many different file types - plain text, JSON, spreadsheets, databases etc. Functions to read in data look like `read_X` where X is the data type.
```
%%file measures.txt
pid weight height date
328 72.654347 203.560866 2018-11-11 14:16:18.148411
756 34.027679 189.847316 2018-11-12 14:16:18.148411
185 28.501914 158.646074 2018-11-13 14:16:18.148411
507 17.396343 180.795993 2018-11-14 14:16:18.148411
919 64.724301 173.564725 2018-11-15 14:16:18.148411
df = pd.read_table('measures.txt')
df
```
## Indexing Data Frames
### Implicit defaults
If you provide a slice, it is assumed that you are asking for rows.
```
df[1:3]
```
If you provide a single value or list, it is assumed that you are asking for columns.
```
df[['pid', 'weight']]
```
### Extracting a column
#### Dictionary style access
```
df['pid'] # Use double brackets, e.g. df[['pid']], to return a data frame instead of a series
```
#### Property style access
This only works for column names that are also valid Python identifiers (i.e., no spaces, dashes or keywords)
```
df.pid
```
### Indexing by location
This is similar to `numpy` indexing
```
df.iloc[1:3, :]
df.iloc[1:3, [True, False, True, False]]
```
### Indexing by name
```
# Since referencing by name, the ranges are inclusive on both ends. For example, the 1:3 rows refer to the row names
df.loc[1:3, 'weight':'height']
```
**Warning**: When using `loc`, the row slice indicates row names, not positions.
```
df1 = df.copy()
df1.index = df.index + 1
df1
df1.loc[1:3, 'weight':'height']
```
## Structure of a Data Frame
### Data types
```
df.dtypes
```
### Converting data types
#### Using `astype` on one column
```
df.pid = df.pid.astype('category')
```
#### Using `astype` on multiple columns
```
df = df.astype(dict(weight=float,
height=float))
```
#### Using a conversion function
```
df.date = pd.to_datetime(df.date)
```
#### Check
```
df.dtypes
```
### Basic properties
```
df.size
df.shape
df.describe() # Only works for columns that are numeric
df.info()
```
### Inspection
```
df.head(n=3)
df.tail(n=3)
df.sample(n=3)
df.sample(frac=0.5)
```
## Selecting, Renaming and Removing Columns
### Selecting columns
```
df.filter(items=['pid', 'date'])
df.filter(regex='.*ght')
```
The `like` argument keeps labels that contain the given substring (pandas tests `like in label`).
```
df.filter(like='ei')
```
#### Filter has an optional axis argument if you want to select by row index
```
df.filter([0,1,3,4], axis=0)
```
#### Note that you can also use regular string methods on the columns
```
df.loc[:, df.columns.str.contains('d')]
```
### Renaming columns
```
df.rename(dict(weight='w', height='h'), axis=1)
orig_cols = df.columns
df.columns = list('abcd')
df
df.columns = orig_cols
df
```
### Removing columns
```
df.drop(['pid', 'date'], axis=1)
df.drop(columns=['pid', 'date'])
df.drop(columns=df.columns[df.columns.str.contains('d')])
```
## Selecting, Renaming and Removing Rows
### Selecting rows
```
df[df.weight.between(60,70)]
df[(69 <= df.weight) & (df.weight < 70)]
df[df.date.between(pd.to_datetime('2018-11-13'),
pd.to_datetime('2018-11-15 23:59:59'))]
# Essentially SQL commands (like in a WHERE clause)
df.query('weight <= 70 and height > 90')
```
### Renaming rows
```
# Dictionary of old name: new name
df.rename({i:letter for i,letter in enumerate('abcde')})
df.index = ['the', 'quick', 'brown', 'fox', 'jumps']
df
df = df.reset_index(drop=True)
df
```
### Dropping rows
```
df.drop([1,3], axis=0)
```
#### Dropping duplicated data
```
df['something'] = [1,1,None,2,None]
df['nothing'] = [None, None, None, None, None]
df.loc[df.something.duplicated()]
df
df.drop_duplicates(subset='something')
# If you actually want to change df, need to assign it or use inplace = True argument
```
#### Dropping missing data
```
df
df.dropna()
df.dropna(axis=1)
df.dropna(axis=1, how='all')
```
#### Brute force replacement of missing values
```
df.something.fillna(0)
df.something.fillna(df.something.mean())
df.something.ffill() # Forward fill
df.something.bfill() # Backward fill (generally not a good idea)
df.something.interpolate() # Linear interpolation between neighboring valid values
```
## Transforming and Creating Columns
```
df.assign(bmi=df['weight'] / (df['height']/100)**2)
df['bmi'] = df['weight'] / (df['height']/100)**2
df
df['something'] = [2,2,None,None,3]
df
?df.insert
```
## Sorting Data Frames
### Sort on indexes
```
df.sort_index(axis=1)
df.sort_index(axis=0, ascending=False)
```
### Sort on values
```
df.sort_values(by=['something', 'bmi'], ascending=[True, False])
```
## Summarizing
### Apply an aggregation function
```
df.select_dtypes(include=np.number) # Just select columns with numeric types
df.select_dtypes(include=np.number).agg(np.sum)
df.agg(['count', np.sum, np.mean])
```
## Split-Apply-Combine
We often want to perform subgroup analysis (conditioning by some discrete or categorical variable). This is done with `groupby` followed by an aggregate function. Conceptually, we split the data frame into separate groups, apply the aggregate function to each group separately, then combine the aggregated results back into a single data frame.
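A self-contained miniature of the pattern, with toy values independent of the `df` built above:

```python
import pandas as pd

toy = pd.DataFrame({'treatment': list('abab'),
                    'weight': [60.0, 70.0, 80.0, 90.0]})
# split by treatment, apply mean within each group, combine into one Series
means = toy.groupby('treatment')['weight'].mean()
# means['a'] == 70.0 (mean of 60 and 80), means['b'] == 80.0 (mean of 70 and 90)
```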
```
df['treatment'] = list('ababa')
df
grouped = df.groupby('treatment')
grouped.get_group('a')
grouped.mean()
```
### Using `agg` with `groupby`
```
grouped.agg('mean')
grouped.agg(['mean', 'std'])
grouped.agg({'weight': ['mean', 'std'], 'height': ['min', 'max'], 'bmi': lambda x: (x**2).sum()})
```
### Using `transform` with `groupby`
```
g_mean = grouped[['weight', 'height']].transform(np.mean)
g_mean
g_std = grouped[['weight', 'height']].transform(np.std)
g_std
(df[['weight', 'height']] - g_mean)/g_std
```
## Combining Data Frames
```
df
df1 = df.iloc[3:].copy()
df1.drop('something', axis=1, inplace=True)
df1
```
### Adding rows
Note that `pandas` aligns by column indexes automatically.
```
# Works even though the columns are not exactly the same. Fills in missing values for `something` column
df.append(df1, sort=False)
pd.concat([df, df1], sort=False)
```
### Adding columns
```
df.pid
df2 = pd.DataFrame(dict(pid=[649, 533, 400, 600], age=[23,34,45,56]))
df2.pid
df.pid = df.pid.astype('int')
pd.merge(df, df2, on='pid', how='inner')
pd.merge(df, df2, on='pid', how='left')
pd.merge(df, df2, on='pid', how='right')
pd.merge(df, df2, on='pid', how='outer')
```
### Merging on the index
```
df1 = pd.DataFrame(dict(x=[1,2,3]), index=list('abc'))
df2 = pd.DataFrame(dict(y=[4,5,6]), index=list('abc'))
df3 = pd.DataFrame(dict(z=[7,8,9]), index=list('abc'))
df1
df2
df3
df1.join([df2, df3])
```
## Fixing common DataFrame issues
### Multiple variables in a column
```
df = pd.DataFrame(dict(pid_treat = ['A-1', 'B-2', 'C-1', 'D-2']))
df
df.pid_treat.str.split('-')
df.pid_treat.str.split('-').apply(pd.Series, index=['pid', 'treat'])
```
### Multiple values in a cell
```
df = pd.DataFrame(dict(pid=['a', 'b', 'c'], vals = [(1,2,3), (4,5,6), (7,8,9)]))
df
df[['t1', 't2', 't3']] = df.vals.apply(pd.Series)
df
# `explode` turns each element of a tuple/list cell into its own row.
df.explode(column='vals')
df.drop('vals', axis=1, inplace=True)
pd.melt(df, id_vars='pid', value_name='vals').drop('variable', axis=1)
```
## Reshaping Data Frames
Sometimes we need to make rows into columns or vice versa.
### Converting multiple columns into a single column
This is often useful if you need to condition on some variable.
```
url = 'https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv'
iris = pd.read_csv(url)
iris.head()
iris.shape
# Go from wide format to long format
df_iris = pd.melt(iris, id_vars='species')
df_iris.shape
df_iris.sample(10)
```
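Going the other way (from long format back to wide) can be done with `pivot_table`. A small sketch, with a tiny frame whose values are invented for illustration but whose column names follow the melted iris frame above:

```python
import pandas as pd

# A tiny long-format frame like the result of pd.melt above (illustrative values).
df_long = pd.DataFrame({
    'species': ['setosa', 'setosa', 'virginica', 'virginica'],
    'variable': ['sepal_length', 'sepal_width', 'sepal_length', 'sepal_width'],
    'value': [5.1, 3.5, 6.3, 3.3],
})

# Long -> wide: one column per original variable, averaging duplicates if any.
df_wide = df_long.pivot_table(index='species', columns='variable', values='value')
print(df_wide)
```

Because `pivot_table` aggregates (mean by default), this round trip also works when the long frame contains repeated (species, variable) pairs.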
## Chaining commands
Sometimes you see this functional style of method chaining that avoids the need for temporary intermediate variables.
```
(
iris.
sample(frac=0.2).
filter(regex='s.*').
    assign(both=iris.sepal_length + iris.sepal_width).
groupby('species').agg(['mean', 'sum']).
pipe(lambda x: np.around(x, 1)) # pipe allows you to put any arbitrary Python function in the method chaining
)
```
# py2neo
By Zhanghan Wang
Refer to [The Py2neo v4 Handbook](https://py2neo.org/v4/index.html#)
This notebook illustrates how to use py2neo.
## Import
```
import pprint
import numpy as np
import pandas as pd
import py2neo
print(py2neo.__version__)
from py2neo import *
from py2neo.ogm import *
```
## Attention!
To run the following code, you may need to initialize your database by running the code in the next cell.
**!!!This code will delete all data in your database!!!**
```
graph = Graph()
graph.run("MATCH (all) DETACH DELETE all")
graph.run("CREATE (:Person {name:'Alice'})-[:KNOWS]->(:Person {name:'Bob'})"
"CREATE (:Person {name:'Ada'})-[:KNOWS]->(:Person {name:'Hank'})"
)
```
## 1. py2neo.data – Data Types
Here are some basic operations about the data, including nodes and relationships.
- [py2neo.data.Node](https://py2neo.org/v4/data.html#py2neo.data.Node)
- [py2neo.data.Relationship](https://py2neo.org/v4/data.html#py2neo.data.Relationship)
- [py2neo.data.Subgraph](https://py2neo.org/v4/data.html#py2neo.data.Subgraph)
- The set operators `|`, `&`, `-` and `^` are supported on subgraphs.
### 1.1 Node and Relationships
```
# Create some nodes and relationships
a = Node('Person', name='Alice')
b = Node('Person', name='Bob')
ab = Relationship(a, 'KNOWS', b)
print(a, b, ab, sep='\n')
# Create a relationship by extending the Relationship class
c = Node("Person", name="Carol")
class WorksWith(Relationship):
pass
ac = WorksWith(a, c)
type(ac)
print(ac)
```
### 1.2 Subgraph
By definition, a `Subgraph` must contain at least one node; null subgraphs should be represented by `None`.
> I don't know how to print `s.keys` and `s.types`
```
s = ab | ac
print(set(s))
print(s.labels)
# to print them, we can transform them into set
print(set(s.nodes))
print(set(s.relationships))
# I don't know how to print them.
print(s.keys)
print(s.types)
```
### 1.3 Path objects and other Walkable types
[py2neo.data.Walkable](https://py2neo.org/v4/data.html#py2neo.data.Walkable)
```
w = ab + Relationship(b, "LIKES", c) + ac
print("w.__class__: {}".format(w.__class__))
print("start_node: {}\nend_node: {}".format(w.start_node, w.end_node))
print("nodes({}): {}".format(w.nodes.__class__, w.nodes))
print("relationships({}): {}".format(w.relationships.__class__, w.relationships))
print("walk:")
for i, item in enumerate(walk(w)):
    print("\t{}th yield: {}".format(i, item))
```
### 1.4 Record Objects and Table Objects
#### [Record](https://py2neo.org/v4/data.html#py2neo.data.Record)
A `Record` object holds an ordered, keyed collection of values. It is in many ways similar to a namedtuple but allows field access only through bracketed syntax and provides more functionality. `Record` extends both tuple and Mapping.
#### [Table](https://py2neo.org/v4/data.html#py2neo.data.Table)
A `Table` holds a list of `Record` objects, typically received as the result of a Cypher query. It provides a convenient container for working with a result in its entirety and provides methods for conversion into various output formats. `Table` extends `list`.
## 2. Connect to Your Database
### 2.1 Database and Graph
Neo4j only supports one Graph per Database.
- [py2neo.database](https://py2neo.org/v4/database.html)
- [py2neo.database.Graph](https://py2neo.org/v4/database.html#py2neo.database.Graph)
> I don't know how to get the `Graph` instance from the `Database` instance.
```
# Connect to the database
db = Database("bolt://localhost:7687")
print('Connected to a database.\nURI: {}, name: {}:\n'.format(db.uri, db.name))
# Return the graph from the database
graph = Graph("bolt://localhost:7687")
print('Connected to a graph:\n{}'.format(graph))
```
#### 2.1.1 Graph Operations
```
# Create
Shirley = Node('Person', name='Shirley')
## The create call is commented out to avoid creating a duplicate node on every run.
# graph.create(Shirley)
## Instead we can use merge, which creates the node only if it does not
## already exist (and otherwise updates it).
graph.merge(Shirley, 'Person', 'name')
# nodes
print(graph.nodes)
print(len(graph.nodes))
## get specific nodes
### get by id
try:
print(graph.nodes[0])
print(graph.nodes.get(1))
except KeyError:
print("KeyError")
except:
print("Error")
### get by match
Alice = graph.nodes.match('Person', name='Alice').first()
print(Alice)
# get relationships using matcher
print(graph.relationships.match((Alice,)).first())
## A node retrieved by match is bound to the graph, so it is not the same as a freshly constructed Node
print('By match: {}; By new a Node: {}'.format(Alice, Node('Person', name='Alice')))
```
### 2.2 Transactions
```
# begin a new transaction
tx = graph.begin()
a = Node("Person", name="Shirley")
b = Node("Person", name="Hank")
ab = Relationship(a, "KNOWS", b)
# before the transaction commits, these entities do not yet exist in the graph
print(graph.exists(ab), graph.exists(a), graph.exists(b), a)
tx.merge(ab, 'Person', 'name')
tx.commit()
print(graph.exists(ab), graph.exists(a), graph.exists(b), a)
```
### 2.3 Cypher Results
A Cypher result `Cursor` is returned by functions like `run`, and you can retrieve information about the query results through the `Cursor`.
Refer to the handbook as needed:
- [Cypher Results](https://py2neo.org/v4/database.html#cypher-results): `py2neo.database.Cursor`
```
print(graph.run("MATCH (a:Person) RETURN a.name LIMIT 2").data())
display(graph.run("MATCH (s)-[r]->(e)"
"RETURN s AS Start, r AS Relationship, e AS End").to_table())
```
### 2.4 Errors & Warnings
Refer to the handbook as needed:
- [Errors & Warnings](https://py2neo.org/v4/database.html#errors-warnings)
## 3. py2neo.matching – Entity matching
```
# NodeMatcher
matcher = NodeMatcher(graph)
print(matcher.match("Person", name="Shirley").first())
print(list(matcher.match('Person').where('_.name =~ "A.*"').order_by("_.name").limit(3)))
# RelationshipMatcher
matcher = RelationshipMatcher(graph)
# use iteration
for r in matcher.match(r_type='KNOWS'):
print(r)
```
## 4. py2neo.ogm – Object-Graph Mapping
[py2neo.ogm](https://py2neo.org/v4/ogm.html)
The `py2neo.ogm` maps the `Neo4j Objects` into `Python Objects`.
To create a mapping, extend the class `GraphObject`. By default, the primary label is just the class name.
```
# Sample classes
class Movie(GraphObject):
__primarykey__ = "title"
title = Property()
tag_line = Property("tagline")
released = Property()
actors = RelatedFrom("Person", "ACTED_IN")
directors = RelatedFrom("Person", "DIRECTED")
producers = RelatedFrom("Person", "PRODUCED")
class Person(GraphObject):
__primarykey__ = "name"
name = Property()
born = Property()
isBoy = Label()
likes = RelatedTo("Person")
beliked = RelatedFrom('Person')
friend = Related("Person")
acted_in = RelatedTo(Movie)
directed = RelatedTo(Movie)
produced = RelatedTo(Movie)
```
### 4.1 Node, Property and Label
```
alice = Person()
alice.name = "Alice Smith"
alice.born = 1990
alice.isBoy = False
print(alice)
print(alice.born)
print(alice.isBoy)
print(alice.__node__)
```
### 4.2 Related
[Related Objects](https://py2neo.org/v4/ogm.html#related-objects)
Functions can be used:
- add, clear, get, remove, update
```
alice = Person()
alice.name = "Alice Smith"
bob = Person()
bob.name = "Bob"
alice.likes.add(bob)
alice.friend.add(bob)
bob.friend.add(alice)
print("Alice's friends are {}".format(list(alice.friend)))
for like in alice.likes:
print('Alice likes: {}'.format(like))
```
### 4.3 Object Matching
```
print(list(Person.match(graph).where('_.name =~ "A.*"')))
```
### 4.4 Object Operations
```
jack = Person()
jack.name = 'Jack'
graph.merge(jack)
print(jack.__node__)
```
## 5. py2neo.cypher – Cypher Utilities
## 6. py2neo.cypher.lexer – Cypher Lexer
# Enron email data set exploration
```
# Get better looking pictures
%config InlineBackend.figure_format = 'retina'
df = pd.read_feather('enron.feather')
df = df.sort_values(['Date'])
df.tail(5)
```
## Email traffic over time
Group the data set by `Date` and `MailID`, which will get you an index that collects all of the unique mail IDs per date. Then reset the index so that those date and mail identifiers become columns and then select for just those columns; we don't actually care about the counts created by the `groupby` (that was just to get the index). Create a histogram that shows the amount of traffic per day. Then specifically for email sent from `richard.shapiro` and then `john.lavorato`. Because some dates are set improperly (to 1980), filter for dates greater than January 1, 1999.
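This grouping recipe can be sketched on a toy frame (the `Date`, `MailID`, and `From` column names are assumed to match the Enron frame; the values here are made up):

```python
import pandas as pd

# Toy stand-in for the Enron frame: one row per (message, recipient).
df = pd.DataFrame({
    'Date': ['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-02'],
    'MailID': ['m1', 'm1', 'm2', 'm3'],
    'From': ['alice', 'alice', 'bob', 'alice'],
})

# Group by Date and MailID to collapse duplicate recipients, then reset the
# index so the unique (Date, MailID) pairs become plain columns again.
unique_mails = df.groupby(['Date', 'MailID']).size().reset_index()[['Date', 'MailID']]

# One message can appear on many rows, but each (Date, MailID) pair is now unique.
per_day = unique_mails.groupby('Date').size()
print(per_day)  # number of unique messages per day
```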
## Received emails
Count the number of messages received per user and then sort in reverse order. Make a bar chart showing the top 30 email recipients.
## Sent emails
Make a bar chart indicating the top 30 mail senders. This is more complicated than the received emails because a single person can email multiple people in a single email. So, group by `From` and `MailID`, convert the index back to columns and then group again by `From` and get the count.
## Email heatmap
Given a list of Enron employees, compute a heat map that indicates how much email traffic went between each pair of employees. The heat map is not symmetric because Susan sending mail to Xue is not the same thing as Xue sending mail to Susan. The first step is to group the data frame by `From` and `To` columns in order to get the number of emails from person $i$ to person $j$. Then, create a 2D numpy matrix, $C$, of integers and set $C_{i,j}$ to the count of person $i$ to person $j$. Using matplotlib, `ax.imshow(C, cmap='GnBu', vmax=4000)`, show the heat map and add tick labels at 45 degrees for the X axis. Set the labels to the appropriate names. Draw the number of emails in the appropriate cells of the heat map, for all values greater than zero. Please note that when you draw text using `ax.text()`, the coordinates are X,Y whereas the coordinates in the $C$ matrix are row,column so you will have to flip the coordinates.
```
people = ['jeff.skilling', 'kenneth.lay', 'louise.kitchen', 'tana.jones',
'sara.shackleton', 'vince.kaminski', 'sally.beck', 'john.lavorato',
'mark.taylor', 'greg.whalley', 'jeff.dasovich', 'steven.kean',
'chris.germany', 'mike.mcconnell', 'benjamin.rogers', 'j.kaminski',
'stanley.horton', 'a..shankman', 'richard.shapiro']
```
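The counting steps above can be sketched like this on toy data (the real frame's `From`/`To` columns and employee list are assumed; the names here are hypothetical):

```python
import numpy as np
import pandas as pd

people = ['alice', 'bob', 'carol']                      # stand-in for the list above
df = pd.DataFrame({'From': ['alice', 'alice', 'bob'],
                   'To':   ['bob',   'bob',   'carol']})

pairs = df.groupby(['From', 'To']).size()               # emails from person i to person j
idx = {name: i for i, name in enumerate(people)}

C = np.zeros((len(people), len(people)), dtype=int)
for (src, dst), n in pairs.items():
    if src in idx and dst in idx:
        C[idx[src], idx[dst]] = n

print(C)  # C[0, 1] == 2 (alice -> bob), C[1, 2] == 1 (bob -> carol)
```

With matplotlib, `ax.imshow(C, cmap='GnBu')` renders the matrix, and labeling cell (row i, column j) requires `ax.text(j, i, C[i, j])`; note the flipped x/y order mentioned above.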
## Build graph and compute rankings
From the data frame, create a graph data structure using networkx. Create an edge from node A to node B if there is an email from A to B in the data frame. Although we do know the total number of emails between people, let's keep it simple and use simply a weight of 1 as the edge label. See networkx method `add_edge()`.
1. Using networkx, compute the pagerank between all nodes. Get the data into a data frame, sort in reverse order, and display the top 15 users from the data frame.
2. Compute the centrality for the nodes of the graph. The documentation says that centrality is "*the fraction of nodes it is connected to.*"
I use `DataFrame.from_dict` to convert the dictionaries returned from the various networkx methods to data frames.
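A hedged sketch of the graph-building and ranking steps on toy edges (the `From`/`To` column names are assumed as above, and the names are made up):

```python
import networkx as nx
import pandas as pd

df = pd.DataFrame({'From': ['alice', 'alice', 'bob', 'carol'],
                   'To':   ['bob',   'carol', 'carol', 'alice']})

# Edge from A to B whenever A emailed B; weight fixed at 1, as described above.
G = nx.DiGraph()
for _, row in df.iterrows():
    G.add_edge(row['From'], row['To'], weight=1)

pr = nx.pagerank(G)
centrality = nx.degree_centrality(G)

# Dictionary -> data frame, sorted in reverse order.
pr_df = pd.DataFrame.from_dict(pr, orient='index', columns=['pagerank'])
pr_df = pr_df.sort_values('pagerank', ascending=False)
print(pr_df)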
### Node PageRank
### Centrality
### Plotting graph subsets
The email graph is way too large to display the whole thing and get any meaningful information out. However, we can look at subsets of the graph such as the neighbors of a specific node. To visualize it we can use different strategies to layout the nodes. In this case, we will use two different layout strategies: *spring* and *kamada-kawai*. According to
[Wikipedia](https://en.wikipedia.org/wiki/Force-directed_graph_drawing), these force directed layout strategies have the characteristic: "*...the edges tend to have uniform length (because of the spring forces), and nodes that are not connected by an edge tend to be drawn further apart...*".
Use networkx `ego_graph()` method to get a radius=1 neighborhood around `jeff.skilling` and draw the spring graph with a plot that is 20x20 inch so we can see details. Then, draw the same subgraph again using the kamada-kawai layout strategy. Finally, get the neighborhood around kenneth.lay and draw kamada-kawai.
# ONNX Runtime: Tutorial for STVM execution provider
This notebook shows a simple example for model inference with STVM EP.
#### Tutorial Roadmap:
1. Prerequisites
2. Accuracy check for STVM EP
3. Configuration options
## 1. Prerequisites
Make sure that you have installed all the necessary dependencies described in the corresponding paragraph of the documentation.
Also, make sure you have the `tvm` and `onnxruntime-stvm` packages in your pip environment.
If you are using `PYTHONPATH` variable expansion, make sure it contains the following paths: `<path_to_msft_onnxrt>/onnxruntime/cmake/external/tvm_update/python` and `<path_to_msft_onnxrt>/onnxruntime/build/Linux/Release`.
### Common import
These packages can be installed from standard `pip`.
```
import onnx
import numpy as np
from typing import List, AnyStr
from onnx import ModelProto, helper, checker, mapping
```
### Specialized import
It is better to build these packages from source code so that you know exactly what is currently available to you.
```
import tvm.testing
from tvm.contrib.download import download_testdata
import onnxruntime.providers.stvm # necessary to register tvm_onnx_import_and_compile and others
```
### Helper functions for working with ONNX ModelProto
This set of helper functions allows you to recognize the meta information of the models. This information is needed for more versatile processing of ONNX models.
```
def get_onnx_input_names(model: ModelProto) -> List[AnyStr]:
inputs = [node.name for node in model.graph.input]
initializer = [node.name for node in model.graph.initializer]
inputs = list(set(inputs) - set(initializer))
return sorted(inputs)
def get_onnx_output_names(model: ModelProto) -> List[AnyStr]:
return [node.name for node in model.graph.output]
def get_onnx_input_types(model: ModelProto) -> List[np.dtype]:
input_names = get_onnx_input_names(model)
return [
mapping.TENSOR_TYPE_TO_NP_TYPE[node.type.tensor_type.elem_type]
for node in sorted(model.graph.input, key=lambda node: node.name) if node.name in input_names
]
def get_onnx_input_shapes(model: ModelProto) -> List[List[int]]:
input_names = get_onnx_input_names(model)
return [
[dv.dim_value for dv in node.type.tensor_type.shape.dim]
for node in sorted(model.graph.input, key=lambda node: node.name) if node.name in input_names
]
def get_random_model_inputs(model: ModelProto) -> List[np.ndarray]:
input_shapes = get_onnx_input_shapes(model)
input_types = get_onnx_input_types(model)
assert len(input_types) == len(input_shapes)
inputs = [np.random.uniform(size=shape).astype(dtype) for shape, dtype in zip(input_shapes, input_types)]
return inputs
```
### Wrapper helper functions for Inference
Wrapper helper functions for running model inference using ONNX Runtime EP.
```
def get_onnxruntime_output(model: ModelProto, inputs: List, provider_name: AnyStr) -> np.ndarray:
output_names = get_onnx_output_names(model)
input_names = get_onnx_input_names(model)
assert len(input_names) == len(inputs)
input_dict = {input_name: input_value for input_name, input_value in zip(input_names, inputs)}
inference_session = onnxruntime.InferenceSession(model.SerializeToString(), providers=[provider_name])
output = inference_session.run(output_names, input_dict)
# Unpack output if there's only a single value.
if len(output) == 1:
output = output[0]
return output
def get_cpu_onnxruntime_output(model: ModelProto, inputs: List) -> np.ndarray:
return get_onnxruntime_output(model, inputs, "CPUExecutionProvider")
def get_stvm_onnxruntime_output(model: ModelProto, inputs: List) -> np.ndarray:
return get_onnxruntime_output(model, inputs, "StvmExecutionProvider")
```
### Helper function for checking accuracy
This function uses the TVM API to compare two output tensors. The tensor obtained using the `CPUExecutionProvider` is used as a reference.
If a mismatch is found between tensors, an appropriate exception will be thrown.
```
def verify_with_ort_with_inputs(
model,
inputs,
out_shape=None,
opset=None,
freeze_params=False,
dtype="float32",
rtol=1e-5,
atol=1e-5,
opt_level=1,
):
if opset is not None:
model.opset_import[0].version = opset
ort_out = get_cpu_onnxruntime_output(model, inputs)
stvm_out = get_stvm_onnxruntime_output(model, inputs)
for stvm_val, ort_val in zip(stvm_out, ort_out):
tvm.testing.assert_allclose(ort_val, stvm_val, rtol=rtol, atol=atol)
assert ort_val.dtype == stvm_val.dtype
```
### Helper functions for downloading models
These functions use the TVM API to download models from the ONNX Model Zoo.
```
BASE_MODEL_URL = "https://github.com/onnx/models/raw/master/"
MODEL_URL_COLLECTION = {
"ResNet50-v1": "vision/classification/resnet/model/resnet50-v1-7.onnx",
"ResNet50-v2": "vision/classification/resnet/model/resnet50-v2-7.onnx",
"SqueezeNet-v1.1": "vision/classification/squeezenet/model/squeezenet1.1-7.onnx",
"SqueezeNet-v1.0": "vision/classification/squeezenet/model/squeezenet1.0-7.onnx",
"Inception-v1": "vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx",
"Inception-v2": "vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx",
}
def get_model_url(model_name):
return BASE_MODEL_URL + MODEL_URL_COLLECTION[model_name]
def get_name_from_url(url):
return url[url.rfind("/") + 1 :].strip()
def find_of_download(model_name):
model_url = get_model_url(model_name)
model_file_name = get_name_from_url(model_url)
return download_testdata(model_url, model_file_name, module="models")
```
## 2. Accuracy check for STVM EP
This section checks accuracy by comparing the output tensors from `CPUExecutionProvider` and `StvmExecutionProvider`. See the description of the `verify_with_ort_with_inputs` function used above.
### Check for simple architectures
```
def get_two_input_model(op_name: AnyStr) -> ModelProto:
dtype = "float32"
in_shape = [1, 2, 3, 3]
in_type = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(dtype)]
out_shape = in_shape
out_type = in_type
layer = helper.make_node(op_name, ["in1", "in2"], ["out"])
graph = helper.make_graph(
[layer],
"two_input_test",
inputs=[
helper.make_tensor_value_info("in1", in_type, in_shape),
helper.make_tensor_value_info("in2", in_type, in_shape),
],
outputs=[
helper.make_tensor_value_info(
"out", out_type, out_shape
)
],
)
model = helper.make_model(graph, producer_name="two_input_test")
checker.check_model(model, full_check=True)
return model
onnx_model = get_two_input_model("Add")
inputs = get_random_model_inputs(onnx_model)
verify_with_ort_with_inputs(onnx_model, inputs)
print("****************** Success! ******************")
```
### Check for DNN architectures
```
def get_onnx_model(model_name):
model_path = find_of_download(model_name)
onnx_model = onnx.load(model_path)
return onnx_model
model_name = "ResNet50-v1"
onnx_model = get_onnx_model(model_name)
inputs = get_random_model_inputs(onnx_model)
verify_with_ort_with_inputs(onnx_model, inputs)
print("****************** Success! ******************")
```
## 3. Configuration options
This section shows how you can configure STVM EP using custom options. For more details on the options used, see the corresponding section of the documentation.
```
provider_name = "StvmExecutionProvider"
provider_options = dict(target="llvm -mtriple=x86_64-linux-gnu",
target_host="llvm -mtriple=x86_64-linux-gnu",
opt_level=3,
freeze_weights=True,
tuning_file_path="",
tuning_type="Ansor",
)
model_name = "ResNet50-v1"
onnx_model = get_onnx_model(model_name)
input_dict = {input_name: input_value for input_name, input_value in zip(get_onnx_input_names(onnx_model),
get_random_model_inputs(onnx_model))}
output_names = get_onnx_output_names(onnx_model)
stvm_session = onnxruntime.InferenceSession(onnx_model.SerializeToString(),
providers=[provider_name],
provider_options=[provider_options]
)
output = stvm_session.run(output_names, input_dict)[0]
print(f"****************** Output shape: {output.shape} ******************")
```
# Wave (.wav) to Zero Crossing.
This is an attempt to produce synthetic ZC (Zero Crossing) from FS (Full Scan) files. All parts are calculated in the time domain to mimic true ZC. FFT is not used (maybe with the exception of the internal implementation of the Butterworth filter).
Current status: Seems to work well for "easy files", but not for mixed and low amplitude recordings. I don't know why...
The resulting plot is both embedded in this notebook and as separate files: 'zc_in_time_domain_test_1.png' and 'zc_in_time_domain_test_2.png'.
Sources of information/inspiration:
- http://users.lmi.net/corben/fileform.htm#Anabat%20File%20Formats
- https://stackoverflow.com/questions/3843017/efficiently-detect-sign-changes-in-python
- https://github.com/riggsd/zcant/blob/master/zcant/conversion.py
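As a minimal, self-contained illustration of the core idea (separate from the full pipeline below), the spacing of positive-going zero crossings of a pure tone recovers its frequency:

```python
import numpy as np

fs = 384000          # sampling rate, Hz
f0 = 40000           # frequency of a synthetic test tone, Hz
t = np.arange(0, 0.01, 1.0 / fs)
signal = np.sin(2 * np.pi * f0 * t)

# Indices where the sign flips from negative (or zero) to positive.
sign_diff = np.diff(np.sign(signal))
crossings = np.where(sign_diff > 0)[0]

# The average spacing between positive-going crossings is one period.
period_samples = np.mean(np.diff(crossings))
est_freq = fs / period_samples
print(est_freq)  # approximately 40000
```

The real recordings below need the extra steps (high-pass filtering, a noise hysteresis band, sub-sample interpolation, and the division ratio) because bat calls are neither pure nor noise-free.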
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.io.wavfile as wf
import scipy.signal
#import sounddevice
# Settings.
#sound_file = '../data_in/Mdau_TE384.wav'
sound_file = '../data_in/Ppip_TE384.wav'
#sound_file = '../data_in/Myotis-Plecotus-Eptesicus_TE384.wav'
cutoff_freq_hz = 18000
zc_divratio = 4
# Debug settings.
play_sound = False
debug = False
# Read the sound file.
(sampling_freq, signal_int16) = wf.read(sound_file, 'rb')
print('Sampling freq in file: ' + str(sampling_freq) + ' Hz.')
print(str(len(signal_int16)) + ' samples.')
#if play_sound:
# sounddevice.play(signal_int16, sampling_freq)
# sounddevice.wait()
# Check if TE, Time Expansion.
if '_TE' in sound_file:
sampling_freq *= 10
print('Sampling freq: ' + str(sampling_freq) + ' Hz.')
# Signed int16 to [-1.0, 1.0].
signal = np.array(signal_int16) / 32768
# Noise level. RMS, root-mean-square.
noise_level = np.sqrt(np.mean(np.square(signal)))
print(noise_level)
# Filter. Butterworth.
nyquist = 0.5 * sampling_freq
low = cutoff_freq_hz / nyquist
filter_order = 9
b, a = scipy.signal.butter(filter_order, [low], btype='highpass')
#signal= scipy.signal.lfilter(b, a, signal)
signal= scipy.signal.filtfilt(b, a, signal)
# Add hysteresis around zero to remove noise.
signal[(signal < noise_level) & (signal > -noise_level)] = 0.0
# Check where zero crossings may occur.
sign_diff_array = np.diff(np.sign(signal))
# Extract positive zero passings and interpolate where it occurs.
index_array = []
old_index = None
for index, value in enumerate(sign_diff_array):
if value in [2., 1., 0.]:
# Check for raising signal level.
if value == 2.:
# From negative directly to positive. Calculate interpolated index.
x_adjust = signal[index] / (signal[index] - signal[index+1])
index_array.append(index + x_adjust)
old_index = None
elif (value == 1.) and (old_index is None):
# From negative to zero.
old_index = index
elif (value == 1.) and (old_index is not None):
# From zero to positive. Calculate interpolated index.
x_adjust = signal[old_index] / (signal[old_index] - signal[index+1])
index_array.append(old_index + x_adjust)
old_index = None
else:
# Falling signal level.
old_index = None
print(len(index_array))
if debug:
print(index_array[:100])
zero_crossings = index_array[::zc_divratio]
print(len(zero_crossings))
# Prepare lists.
freqs = []
times = []
for index, zero_crossing in enumerate(zero_crossings[0:-1]):
freq = zero_crossings[index+1] - zero_crossings[index]
freq_hz = sampling_freq * zc_divratio / freq
if freq_hz >= cutoff_freq_hz:
freqs.append(freq_hz)
times.append(zero_crossing)
print(len(freqs))
# Prepare arrays for plotting.
freq_array_khz = np.array(freqs) / 1000.0
time_array_s = np.array(times) / sampling_freq
time_array_compact = range(0, len(times))
if debug:
print(len(freq_array_khz))
print(freq_array_khz[:100])
print(time_array_s[:100])
# Plot two diagrams, normal and compressed time.
fig, (ax1, ax2) = plt.subplots(2,1,
figsize=(16, 5),
dpi=150,
#facecolor='w',
#edgecolor='k',
)
# ax1.
ax1.scatter(time_array_s, freq_array_khz, s=1, c='navy', alpha=0.5)
ax1.set_title('File: ' + sound_file)
ax1.set_ylim((0,120))
ax1.minorticks_on()
ax1.grid(which='major', linestyle='-', linewidth='0.5', alpha=0.6)
ax1.grid(which='minor', linestyle='-', linewidth='0.5', alpha=0.3)
ax1.tick_params(which='both', top=False, left=False, right=False, bottom=False)
# ax2.
ax2.scatter(time_array_compact, freq_array_khz, s=1, c='navy', alpha=0.5)
ax2.set_ylim((0,120))
ax2.minorticks_on()
ax2.grid(which='major', linestyle='-', linewidth='0.5', alpha=0.6)
ax2.grid(which='minor', linestyle='-', linewidth='0.5', alpha=0.3)
ax2.tick_params(which='both', top=False, left=False, right=False, bottom=False)
plt.tight_layout()
fig.savefig('zc_in_time_domain_test.png')
#fig.savefig('zc_in_time_domain_test_1.png')
#fig.savefig('zc_in_time_domain_test_2.png')
plt.show()
```
# Binned Likelihood Tutorial
The detection, flux determination, and spectral modeling of Fermi LAT sources is accomplished by a maximum likelihood optimization technique as described in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/) (see also, e.g., [Abdo, A. A. et al. 2009, ApJS, 183, 46](http://adsabs.harvard.edu/abs/2009ApJS..183...46A)).
To illustrate how to use the Likelihood software, this tutorial gives a step-by-step description for performing a binned likelihood analysis.
## Binned vs Unbinned Likelihood
Binned likelihood analysis is the preferred method for most types of LAT analysis (see [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/)).
However, when analyzing data over short time periods (with few events), it is better to use the **unbinned** analysis.
To perform an unbinned likelihood analysis, see the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) tutorial.
**Additional references**:
* [SciTools References](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/references.html)
* Descriptions of available [Spectral and Spatial Models](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html)
* Examples of [XML Model Definitions for Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#xmlModelDefinitions):
* [Power Law](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#powerlaw)
* [Broken Power Law](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#brokenPowerLaw)
* [Broken Power Law 2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#powerLaw2)
* [Log Parabola](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#logParabola)
* [Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#expCutoff)
* [BPL Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#bplExpCutoff)
* [Gaussian](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#gaussian)
* [Constant Value](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#constantValue)
* [File Function](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#fileFunction)
* [Band Function](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#bandFunction)
* [PL Super Exponential Cutoff](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html#plSuperExpCutoff)
# Prerequisites
You will need an **event** data file, a **spacecraft** data file (also referred to as the "pointing and livetime history" file), and the current **background models** (available for [download](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html)). They are also found in code cells below.
You may choose to select your own data files, or to use the files provided within this tutorial.
Custom data sets may be retrieved from the [Lat Data Server](http://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi).
# Outline
1. **Make Subselections from the Event Data**
Since there is computational overhead for each event associated with each diffuse component, it is useful to filter out any events that are not within the extraction region used for the analysis.
2. **Make Counts Maps from the Event Files**
By making simple FITS images, we can inspect our data and pick out obvious sources.
3. **Download the latest diffuse models**
The recommended models for a normal point source analysis are `gll_iem_v07.fits` (a very large file) and `iso_P8R3_SOURCE_V2_v1.txt`. All of the background models along with a description of the models are available [here](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html).
4. **Create a Source Model XML File**
The source model XML file contains the various sources and their model parameters to be fit using the **gtlike** tool.
5. **Create a 3D Counts Cube**
The binned counts cube is used to reduce computation requirements in regions with large numbers of events.
6. **Compute Livetimes**
Precomputing the livetime for the dataset speeds up the exposure calculation.
7. **Compute Exposure Cube**
This accounts for exposure as a function of energy, based on the cuts made. The exposure map must be recomputed if any change is made to the data selection or binning.
8. **Compute Source Maps**
Here the exposure calculation is applied to each of the sources described in the model.
9. **Perform the Likelihood Fit**
Fitting the data to the model provides flux, errors, spectral indices, and other information.
10. **Create a Model Map**
This can be compared to the counts map to verify the quality of the fit and to make a residual map.
# 1. Make subselections from the event data
For this case we will use two years of LAT Pass 8 data. This is a longer data set than is described in the [Extract LAT Data](../DataSelection/1.ExtractLATData.ipynb) tutorial.
>**NOTE**: The ROI used by the binned likelihood analysis is defined by the 3D counts map boundary. The region selection used in the data extraction step, which is conical, must fully contain the 3D counts map spatial boundary, which is square.
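The containment requirement can be checked with a little geometry: a square map of side s (in degrees) fits inside an acceptance cone of radius r only if its half-diagonal s·sqrt(2)/2 is at most r. A quick sketch, using this tutorial's 15 degree selection radius (the 20 degree map side is a hypothetical choice, not a value from the tutorial):

```python
import math

radius_deg = 15.0          # acceptance cone radius from the data selection above
map_side_deg = 20.0        # hypothetical 3D counts map side length

half_diagonal = map_side_deg * math.sqrt(2) / 2
print(half_diagonal <= radius_deg)    # True: a 20 degree square fits in a 15 degree cone
print(2 * radius_deg / math.sqrt(2))  # largest square side that fits, about 21.2 degrees
```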
Selection of data:

    Search Center (RA, Dec) = (193.98, -5.82)
    Radius = 15 degrees
    Start Time (MET) = 239557417 seconds (2008-08-04T15:43:37)
    Stop Time (MET) = 302572802 seconds (2010-08-04T00:00:00)
    Minimum Energy = 100 MeV
    Maximum Energy = 500000 MeV
This two-year dataset generates numerous data files. We provide the user with the original event data files and the accompanying spacecraft file:
* L181126210218F4F0ED2738_PH00.fits (5.0 MB)
* L181126210218F4F0ED2738_PH01.fits (10.5 MB)
* L181126210218F4F0ED2738_PH02.fits (6.5 MB)
* L181126210218F4F0ED2738_PH03.fits (9.2 MB)
* L181126210218F4F0ED2738_PH04.fits (7.4 MB)
* L181126210218F4F0ED2738_PH05.fits (6.2 MB)
* L181126210218F4F0ED2738_PH06.fits (4.5 MB)
* L181126210218F4F0ED2738_SC00.fits (256 MB spacecraft file)
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH00.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH01.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH02.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH03.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH04.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH05.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_PH06.fits
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/L181126210218F4F0ED2738_SC00.fits
!mkdir ./data
!mv *.fits ./data
!ls ./data
```
To combine the multiple event files for your analysis, you must first generate a text file listing the event files to be included.
If you do not wish to download all the individual files, you can skip to the next step and retrieve the combined, filtered event file. However, you will need the spacecraft file to complete the analysis, so you should retrieve that now.
To generate the file list, type:
```
!ls ./data/*_PH* > ./data/binned_events.txt
```
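If you prefer to stay in Python, the same file list can be generated with `glob` instead of `ls`; a minimal sketch (the `./data` layout and `write_event_list` helper name are just illustrative):

```python
import glob
import os

def write_event_list(data_dir, list_name="binned_events.txt"):
    """Write one photon-file path per line, mirroring
    `ls ./data/*_PH* > ./data/binned_events.txt`."""
    ph_files = sorted(glob.glob(os.path.join(data_dir, "*_PH*.fits")))
    list_path = os.path.join(data_dir, list_name)
    with open(list_path, "w") as f:
        f.write("\n".join(ph_files) + "\n")
    return list_path

# gtselect is then given the list prefixed with "@", e.g. "@./data/binned_events.txt"
```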
When analyzing point sources, it is recommended that you keep only events with a high probability of being photons. To do this, use **gtselect** to cut on the event class, keeping only the SOURCE class events (event class 128), as recommended in the Cicerone.
In addition, since we do not wish to cut on any of the three event types (conversion type, PSF, or EDISP), we will use `evtype=3` (which corresponds to standard analysis in Pass 7). Note that `INDEF` is the default for evtype in gtselect.
```bash
gtselect evclass=128 evtype=3
```
Be aware that `evclass` and `evtype` are hidden parameters, so to set them you must type them on the command line.
The text file you made (`binned_events.txt`) will be used in place of the input FITS filename when running gtselect. The syntax requires an @ before the filename to indicate that this is a text file input rather than a FITS file.
We now make a selection on the data we want to analyze. For this example, we consider the source class photons within our 15 degree region of interest (ROI) centered on the blazar 3C 279. For selections that we already made with the data server and do not want to modify, we can use "INDEF" to instruct the tool to read those values from the data file header. Here, we are only filtering on event class (not on event type) and applying a zenith cut, so many of the parameters are designated as "INDEF".
We apply the **gtselect** tool to the data file as follows:
```
%%bash
gtselect evclass=128 evtype=3
@./data/binned_events.txt
./data/3C279_binned_filtered.fits
INDEF
INDEF
INDEF
INDEF
INDEF
100
500000
90
```
In the last step we also selected the energy range and the maximum zenith angle value (90 degrees) as suggested in Cicerone and recommended by the LAT instrument team.
The Earth's limb is a strong source of background gamma rays and we can filter them out with a zenith-angle cut. The use of "zmax" in calculating the exposure allows for a more selective method than just using the ROI cuts in controlling the Earth limb contamination. The filtered data from the above steps are provided [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_filtered.fits).
After the data selection is made, we need to select the good time intervals in which the satellite was working in standard data taking mode and the data quality was good. For this task we use **gtmktime** to select GTIs by filtering on information provided in the spacecraft file. The current **gtmktime** filter expression recommended by the LAT team in the Cicerone is:
```
(DATA_QUAL>0)&&(LAT_CONFIG==1)
```
This excludes time periods when some spacecraft event has affected the quality of the data; it ensures the LAT instrument was in normal science data-taking mode.
Here is an example of running **gtmktime** for our analysis of the region surrounding 3C 279.
```
%%bash
gtmktime
@./data/L181126210218F4F0ED2738_SC00.fits
(DATA_QUAL>0)&&(LAT_CONFIG==1)
no
./data/3C279_binned_filtered.fits
./data/3C279_binned_gti.fits
```
The data file with all the cuts described above is provided in this [link](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_gti.fits). A more detailed discussion of data selection can be found in the [Data Preparation](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data_preparation.html) analysis thread.
To view the DSS keywords in a given extension of a data file, use the **gtvcut** tool and review the data cuts on the EVENTS extension. This provides a listing of the keywords reflecting each cut applied to the data file and their values, including the entire list of GTIs. (Use the option `suppress_gtis=no` to view the entire list.)
```
%%bash
gtvcut suppress_gtis=no
./data/3C279_binned_gti.fits
EVENTS
```
Here you can see the event class and event type, the location and radius of the data selection, as well as the energy range in MeV, the zenith angle cut, and the fact that the time cuts to be used in the exposure calculation are defined by the GTI table.
Various Fermitools will be unable to run if you have multiple copies of a particular DSS keyword. This can happen if the position used in extracting the data from the data server is different than the position used with **gtselect**. It is wise to review the keywords for duplicates before proceeding. If you do have keyword duplication, it is advisable to regenerate the data file with consistent cuts.
# 2. Make a counts map from the event data
Next, we create a counts map of the ROI, summed over photon energies, in order to identify candidate sources and to ensure that the field looks sensible as a simple sanity check. For creating the counts map, we will use the [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) tool with the option "CMAP" (no spacecraft file is necessary for this step).
Then we will view the output file, as shown below:
```
%%bash
gtbin
CMAP
./data/3C279_binned_gti.fits
./data/3C279_binned_cmap.fits
NONE
150
150
0.2
CEL
193.98
-5.82
0.0
AIT
```
We chose an ROI radius of 15 degrees, corresponding to a diameter of 30 degrees. Since we want a pixel size of 0.2 degrees/pixel, we must select 30/0.2 = 150 pixels for the size of the x and y axes. Opening the output file in the visualization tool _ds9_ produces a display of the generated [counts](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_cmap.fits) map.
<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_counts_map.png'>
You can see several strong sources and a number of weaker sources in this map. Mousing over the positions of these sources shows that two of them are likely 3C 279 and 3C 273.
It is important to inspect your data prior to proceeding to verify that the contents are as you expect. A malformed data query or improper data selection can generate a non-circular region, or a file with zero events. By inspecting your data prior to analysis, you have an opportunity to detect such issues early in the analysis.
A more detailed discussion of data exploration can be found in the [Explore LAT Data](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/explore_latdata.html) analysis thread.
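The 150-pixel axis size used in the counts map above follows directly from the ROI geometry; a quick check in Python (`round` guards against floating-point division artifacts):

```python
roi_radius_deg = 15.0        # acceptance-cone radius from the data selection
pixel_scale = 0.2            # degrees per pixel, as passed to gtbin

map_width_deg = 2 * roi_radius_deg         # 30 degrees across
npix = round(map_width_deg / pixel_scale)  # pixels per axis
print(npix)  # 150
```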
# 3. Create a 3-D (binned) counts map
Since the counts map shows the expected data, you are ready to prepare your data set for analysis. For binned likelihood analysis, the data input is a three-dimensional counts map with an energy axis, called a counts cube. The gtbin tool performs this task as well by using the `CCUBE` option.
<img src="https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/square_in_circle.png">
The binning of the counts map determines the binning of the exposure calculation. The likelihood analysis may lose accuracy if the energy bins are not sufficiently narrow to accommodate more rapid variations in the effective area with decreasing energy below a few hundred MeV. For a typical analysis, ten logarithmically spaced bins per decade in energy are recommended. The analysis is less sensitive to the spatial binning and 0.2 deg bins are a reasonable standard.
This counts cube is a square binned region that must fit within the circular acceptance cone defined during the data extraction step, and visible in the counts map above. To find the maximum size of the region your data will support, find the side of a square that can be fully inscribed within your circular acceptance region (multiply the radius of the acceptance cone by sqrt(2)). For this example, the maximum length for a side is 21.21 degrees.
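The binning recommendations above can be computed directly: ten logarithmic bins per decade between 100 MeV and 500 GeV gives the 37 bins used in the counts cube below, and the inscribed-square rule gives the 21.21 degree maximum side:

```python
import math

emin, emax = 100.0, 500000.0            # MeV, matching the selections above
decades = math.log10(emax / emin)       # ~3.7 decades of energy
nbins = round(10 * decades)             # ten log bins per decade -> 37

roi_radius = 15.0                       # degrees, from the data extraction
max_side = roi_radius * math.sqrt(2)    # side of the largest inscribed square
print(nbins, round(max_side, 2))        # 37 21.21
```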
To create the counts cube, we run [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) as follows:
```
%%bash
gtbin
CCUBE
./data/3C279_binned_gti.fits
./data/3C279_binned_ccube.fits
NONE
100
100
0.2
CEL
193.98
-5.82
0.0
AIT
LOG
100
500000
37
```
[gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) takes the following as parameters:
* Type of output file (CCUBE|CMAP|LC|PHA1|PHA2|HEALPIX)
* Event data file name
* Output file name
* Spacecraft data file name
* Size of the X axis in pixels
* Size of the Y axis in pixels
* Image scale (in degrees/pixel)
* Coordinate system (CEL for celestial, GAL for galactic)
* First coordinate of image center in degrees (RA or galactic l)
* Second coordinate of image center in degrees (DEC or galactic b)
* Rotation angle of image axis, in degrees
* Projection method (AIT|ARC|CAR|GLS|MER|NCP|SIN|STG|TAN)
* Algorithm for defining energy bins (FILE|LIN|LOG)
* Start value for first energy bin in MeV
* Stop value for last energy bin in MeV
* Number of logarithmically uniform energy bins
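Rather than answering prompts interactively, the same run can be driven by assembling the full command line in Python. The parameter names below are taken from the gtbin help file; treat this as a sketch and verify them against your installed Fermitools version before relying on it:

```python
# Parameters for the CCUBE run shown above, keyed by gtbin parameter name
ccube_pars = {
    "algorithm": "CCUBE",
    "evfile": "./data/3C279_binned_gti.fits",
    "outfile": "./data/3C279_binned_ccube.fits",
    "scfile": "NONE",
    "nxpix": 100, "nypix": 100, "binsz": 0.2,
    "coordsys": "CEL", "xref": 193.98, "yref": -5.82,
    "axisrot": 0.0, "proj": "AIT",
    "ebinalg": "LOG", "emin": 100, "emax": 500000, "enumbins": 37,
}
cmd = "gtbin " + " ".join(f"{k}={v}" for k, v in ccube_pars.items())
print(cmd)  # ready to paste into a shell, or pass to subprocess.run(cmd.split())
```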
The counts cube generated in this step is provided [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ccube.fits).
If you open the file with _ds9_, you see that it is made up of 37 images, one for each logarithmic energy bin. By playing through these images, it is easy to see how the PSF of the LAT changes with energy. You can also see that changing energy cuts could be helpful when trying to optimize the localization or spectral information for specific sources.
Be sure to verify that there are no black corners on your counts cube. These corners correspond to regions with no data and will cause errors in your exposure calculations.
# 4. Download the latest diffuse model files
When you use the current Galactic diffuse emission model ([`gll_iem_v07.fits`](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/gll_iem_v07.fits)) in a likelihood analysis, you also want to use the corresponding model for the extragalactic isotropic diffuse emission, which includes the residual cosmic-ray background. The recommended isotropic model for point source analysis is [`iso_P8R3_SOURCE_V2_v1.txt`](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/iso_P8R3_SOURCE_V2_v1.txt).
All the Pass 8 background models have been included in the Fermitools distribution, in the `$(FERMI_DIR)/refdata/fermi/galdiffuse/` directory. If you use that path in your model, you should not have to download the diffuse models individually.
>**NOTE**: Keep in mind that the isotropic model needs to agree with both the event class and event type selections you are using in your analysis. The iso_P8R3_SOURCE_V2_v1.txt isotropic spectrum is valid only for the latest response functions and only for data sets with front + back events combined. All of the most up-to-date background models along with a description of the models are available [here](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html).
# 5. Create a source model XML file
The [gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) tool reads the source model from an XML file. The model file contains your best guess at the locations and spectral forms for the sources in your data. A source model can be created using the [model editor](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/modeleditor.txt) tool, by using the user contributed tool `make4FGLxml.py` (available at the [user-contributed tools](https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/) page), or by editing the file directly within a text editor.
Here we cannot use the same source model that was used to analyze six months of data in the Unbinned Likelihood tutorial, as the 2-year data set contains many more significant sources and will not converge. Instead, we will use the 4FGL catalog to define our source model by running `make4FGLxml.py`. To run the script, you will need to download the current LAT catalog file and place it in your working directory:
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py
!wget https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/gll_psc_v18.fit
!mv make4FGLxml.py gll_psc_v18.fit ./data
!python ./data/make4FGLxml.py ./data/gll_psc_v18.fit ./data/3C279_binned_gti.fits -o ./data/3C279_input_model.xml
```
Note that we are using a high level of significance so that we only fit the brightest sources, and we have forced the extended sources to be modeled as point sources.
It is also necessary to specify the full path to the location of the diffuse model on your system. Clearly, the simple 4-source model we used for the 6-month [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) analysis would be too simplistic here.
This XML file uses the spectral model from the 4FGL catalog analysis for each source. (The catalog file is available at the [LAT 8-yr Catalog page](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/).) However, that analysis used a subset of the available spectral models. A dedicated analysis of the region may indicate a different spectral model is preferred.
For more details on the options available for your XML models, see:
* Descriptions of available [Spectral and Spatial Models](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/source_models.html)
* Examples of [XML Model Definitions for Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/xml_model_defs.html)
Finally, the `make4FGLxml.py` script automatically adds 10 degrees to your ROI to account for sources that lie outside your data region, but which may contribute photons to your data. In addition, it gives you the ability to free only some of the spectral parameters for sources within your ROI, and fixes them for the others.
With hundreds of sources, there are too many free parameters to gain a good spectral fit. It is advisable to revise these values so that only sources near your source of interest, or very bright source, have all spectral parameters free. Farther away, you can fix the spectral form and free only the normalization parameter (or "prefactor"). If you are working in a crowded region or have nested sources (e.g. a point source on top of an extended source), you will probably want to fix parameters for some sources even if they lie close to your source of interest.
Only the normalization parameter will be left free for the remaining sources within the ROI. We have also used the significance parameter (`-s`) of `make4FGLxml.py` to free only the brightest sources in our ROI. In addition, we used the `-v` flag to override that for sources that are significantly variable. Both these changes are necessary: having too many free parameters will prevent the fit from converging (see the section for the fitting step).
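Adjusting which parameters are free can also be done programmatically rather than by hand-editing the XML. Below is a minimal sketch using `xml.etree` that fixes everything except the normalization for all sources other than your source of interest; the function name is illustrative, and the set of normalization parameter names covers the common Fermi spectral models but should be checked against your own model file:

```python
import xml.etree.ElementTree as ET

# Parameter names that act as overall normalizations in the
# standard Fermi XML spectral models (check against your model file).
NORM_NAMES = {"Prefactor", "norm", "Normalization", "Integral"}

def free_only_normalizations(xml_text, keep_free=("4FGL J1256.1-0547",)):
    """Fix every spectral parameter except the normalization for all
    sources not listed in keep_free (e.g. your source of interest)."""
    root = ET.fromstring(xml_text)
    for src in root.findall("source"):
        if src.get("name") in keep_free:
            continue
        for par in src.findall("./spectrum/parameter"):
            if par.get("name") not in NORM_NAMES:
                par.set("free", "0")
    return ET.tostring(root, encoding="unicode")
```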
### XML for Extended Sources
In some regions, the [make4FGLxml.py](https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/make4FGLxml.py) script may add one or more extended sources to your XML model.
The script will report the number of extended sources included in the model. In order to use these extended sources, you will need to download the extended source templates from the [LAT Catalog](https://fermi.gsfc.nasa.gov/ssc/data/access/lat/8yr_catalog/) page (look for "Extended Source template archive").
Extract the archive in the directory of your choice and note the path to the template files, which have names like `W44.fits` and `VelaX.fits`. You will need to provide the path to the template file to the script before you run it.
Here is an example of the proper format for an extended source XML entry for Binned Likelihood analysis:
```xml
<source name="SpatialMap_source" type="DiffuseSource">
<spectrum type="PowerLaw2">
<parameter free="1" max="1000.0" min="1e-05" name="Integral" scale="1e-06" value="1.0"/>
<parameter free="1" max="-1.0" min="-5.0" name="Index" scale="1.0" value="-2.0"/>
<parameter free="0" max="200000.0" min="20.0" name="LowerLimit" scale="1.0" value="20.0"/>
<parameter free="0" max="200000.0" min="20.0" name="UpperLimit" scale="1.0" value="2e5"/>
</spectrum>
  <spatialModel file="$(PATH_TO_FILE)/W44.fits" type="SpatialMap" map_based_integral="true">
    <parameter free="0" max="1000.0" min="0.001" name="Normalization" scale="1.0" value="1.0"/>
</spatialModel>
</source>
```
# 6. Compute livetimes and exposure
To speed up the exposure calculations performed by the likelihood tools, it is helpful to pre-compute the livetime as a function of sky position and off-axis angle. The [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtltcube.txt) tool creates a livetime cube: a [HEALPix](http://healpix.jpl.nasa.gov/) table, covering the entire sky, of the integrated livetime as a function of inclination with respect to the LAT z-axis.
Here is an example of how to run [gtltcube](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtltcube.txt):
```
%%bash
gtltcube zmax=90
./data/3C279_binned_gti.fits
./data/L181126210218F4F0ED2738_SC00.fits
./data/3C279_binned_ltcube.fits
0.025
1
```
>**Note**: Values such as "0.1" for the "Step size in cos(theta)" parameter are known to give unexpected results. Use "0.09" instead.
The livetime cube generated from this analysis can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_ltcube.fits).
For more information about the livetime cubes see the documentation in the [Cicerone](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Likelihood/) and also the explanation in the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) tutorial.
# 7. Compute exposure map
Next, you must apply the livetime calculated in the previous step to your region of interest. To do this, we use the [gtexpcube2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpcube2.txt) tool, which is an updated version of the earlier **gtexpcube**. This tool generates a binned exposure map, an accounting of the exposure at each position in the sky, which is a required input to the likelihood process.
>**NOTE**: In the past, running **gtsrcmaps** calculated the exposure map for you, so most analyses skipped the binned exposure map generation step. With the introduction of **gtexpcube2**, this is no longer the case. You must explicitly command the creation of the exposure map as a separate analysis step.
In order to create an exposure map that accounts for contributions from all the sources in your analysis region, you must consider not just the sources included in the counts cube. The large PSF of the LAT means that at low energies, sources from well outside your counts cube could affect the sources you are analyzing. To compensate for this, you must create an exposure map that includes sources up to 10 degrees outside your ROI. (The ROI is determined by the radius you downloaded from the data server, here a 15 degree radius.) In addition, you should account for all the exposure that contributes to those additional sources. Since the exposure map uses square pixels, to match the binning in the counts cube, and to ensure we don't have errors, we generate a 300x300 pixel map.
If you provide [gtexpcube2](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtexpcube2.txt) a filename for your counts cube, it will use the information from that file to define the geometry of the exposure map. This is legacy behavior and will not give you the necessary 20° buffer you need to completely account for the exposure of nearby sources. (It will also cause an error in the next step.)
Instead, you should specify the appropriate geometry for the exposure map, remembering that the counts cube used 0.2 degree pixel binning. To do that, enter `none` when asked for a Counts cube.
**Note**: If you get a "`File not found`" error in the examples below, just put the IRF name in explicitly. The appropriate IRF for this data set is `P8R3_SOURCE_V2`.
```
%%bash
gtexpcube2
./data/3C279_binned_ltcube.fits
none
./data/3C279_binned_expcube.fits
P8R3_SOURCE_V2
300
300
.2
193.98
-5.82
0
AIT
CEL
100
500000
37
```
The generated exposure map can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_expcube.fits).
At this point, you may decide it is easier to simply generate exposure maps for the entire sky. You may be right, as it certainly simplifies the step when scripting. However, making an all-sky map increases the processing time for this step, though the increase is modest.
To generate an all-sky exposure map (rather than the exposure map we calculated above) you need to specify the proper binning and explicitly give the number of pixels for the entire sky (360°x180°).
Here is an example:
```
%%bash
gtexpcube2
./data/3C279_binned_ltcube.fits
none
./data/3C279_binned_allsky_expcube.fits
P8R3_SOURCE_V2
1800
900
.2
193.98
-5.82
0
AIT
CEL
100
500000
37
```
The all-sky exposure map can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_allsky_expcube.fits).
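The 1800 x 900 grid used above is simply the whole sky (360 x 180 degrees) at the same 0.2 degree pixel scale (`round` avoids floating-point division artifacts):

```python
binsz = 0.2                  # degrees per pixel, matching the counts cube
nxpix = round(360 / binsz)   # pixels across the full RA range
nypix = round(180 / binsz)   # pixels across the full DEC range
print(nxpix, nypix)  # 1800 900
```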
Just as in the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) analysis, the exposure needs to be recalculated if the ROI, zenith angle, time, event class, or energy selections applied to the data are changed. For the binned analysis, this also includes the spatial and energy binning of the 3D counts map (which affects the exposure map as well).
# 8. Compute source map
The [gtsrcmaps](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtsrcmaps.txt) tool creates model counts maps for use with the binned likelihood analysis. To do this, it takes each source spectrum in the XML model, multiplies it by the exposure at the source position, and convolves that exposure with the effective PSF.
This is an example of how to run the tool:
```
%%bash
gtsrcmaps
./data/3C279_binned_ltcube.fits
./data/3C279_binned_ccube.fits
./data/3C279_input_model.xml
./data/3C279_binned_allsky_expcube.fits
./data/3C279_binned_srcmaps.fits
CALDB
```
The output file from [gtsrcmaps](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtsrcmaps.txt) can be found [here](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_srcmaps.fits).
Because your model map can include sources outside your ROI, you may see a list of warnings at the beginning of the output. These are expected (because you have properly included sources outside your ROI in your XML file) and should cause no problem in your analysis. In addition, if your exposure map is too small for the region, you will see the following warning:
```
Caught St13runtime_error at the top level:
Request for exposure at a sky position that is outside of the map boundaries.
The contribution of the diffuse source outside of the exposure
and counts map boundaries is being computed to account for PSF
leakage into the analysis region. To handle this, use an all-sky
binned exposure map. Alternatively, to neglect contributions
outside of the counts map region, use the emapbnds=no option when
running gtsrcmaps.
```
In this situation, you should increase the dimensions of your exposure map, or just move to the all-sky version.
Source map generation for the point sources is fairly quick, and maps for many point sources may take up a lot of disk space. If you are analyzing a single long data set, it may be preferable to pre-compute only the source maps for the diffuse components at this stage.
[gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) will compute maps for the point sources on the fly if they appear in the XML definition and a corresponding map is not in the source maps FITS file. To skip generating source maps for point sources, specify "`ptsrc=no`" on the command line when running **gtsrcmaps**. However, if you expect to perform multiple fits on the same set of data, precomputing the source maps will probably save you time.
# 9. Run gtlike
>NOTE: Prior to running **gtlike** for Unbinned Likelihood, it is necessary to calculate the diffuse response for each event (when that response is not precomputed). However, for Binned Likelihood analysis the diffuse response is calculated over the entire bin, so this step is not necessary.
If you want to use the **energy dispersion correction** during your analysis, you must enable this feature using the environment variable `USE_BL_EDISP`. This may be set on the command line using:
```bash
export USE_BL_EDISP=true
```
or, depending on your shell,
```
setenv USE_BL_EDISP true
```
To disable the use of energy dispersion, you must unset the variable:
```bash
unset USE_BL_EDISP
```
or
```
unsetenv USE_BL_EDISP
```
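If you are driving the tools from a Python session or notebook, the same toggle can be set through `os.environ` (equivalent to the shell commands above):

```python
import os

# Enable the binned-likelihood energy dispersion correction for tools
# launched from this Python session.
os.environ["USE_BL_EDISP"] = "true"

# ...and to disable it again, remove the variable entirely:
os.environ.pop("USE_BL_EDISP", None)
```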
Now we are ready to run the [gtlike](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtlike.txt) application.
Here, we request that the fitted parameters be saved to an output XML model file for use in later steps.
```
!wget https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_output_model.xml
```
```
%%bash
gtlike refit=yes plot=yes sfile=./data/3C279_binned_output.xml
BINNED
./data/3C279_binned_srcmaps.fits
./data/3C279_binned_allsky_expcube.fits
./data/3C279_binned_ltcube.fits
./data/3C279_input_model.xml
CALDB
NEWMINUIT
```
Most of the entries prompted for are fairly obvious. In addition to the various XML and FITS files, the user is prompted for a choice of IRFs, the type of statistic to use, and the optimizer.
The statistics available are:
* **UNBINNED**: This should be used for short-timescale or low-count data. If this option is chosen, then parameters for the spacecraft file, event file, and exposure file must be given. See the explanation in the [Unbinned Likelihood](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/likelihood_tutorial.html) tutorial.
* **BINNED**: This is a standard binned analysis as described in this tutorial. This analysis is used for long timescale or high-density data (such as in the Galactic plane) which can cause memory errors in the unbinned analysis. If this option is chosen then parameters for the source map file, livetime file, and exposure file must be given.
There are five optimizers from which to choose: `DRMNGB`, `DRMNFB`, `NEWMINUIT`, `MINUIT` and `LBFGS`. Generally speaking, the faster way to find the parameter estimates is to use `DRMNGB` (or `DRMNFB`) to find initial values and then use `MINUIT` (or `NEWMINUIT`) to find more accurate results. If you have trouble achieving convergence at first, you can loosen your tolerance by setting the hidden parameter `ftol` on the command line. (The default value for `ftol` is `0.001`.)
Analyzing a 2-year dataset can take many hours (in our case, more than 2 days on a 32-bit machine with 1 GB of RAM). The required running time is higher if your source is in the Galactic plane. Here is some output from our fit, where 4FGL J1229.0+0202 and 4FGL J1256.1-0547 correspond to 3C 273 and 3C 279, respectively:
```
This is gtlike version
...
Photon fluxes are computed for the energy range 100 to 500000 MeV
4FGL J1229.0+0202:
norm: 8.16706 +/- 0.0894921
alpha: 2.49616 +/- 0.015028
beta: 0.104635 +/- 0.0105201
Eb: 279.04
TS value: 32017.6
Flux: 6.69253e-07 +/- 7.20102e-09 photons/cm^2/s
4FGL J1256.1-0547:
norm: 2.38177 +/- 0.0296458
alpha: 2.25706 +/- 0.0116212
beta: 0.0665607 +/- 0.00757385
Eb: 442.052
TS value: 29261.7
Flux: 5.05711e-07 +/- 6.14833e-09 photons/cm^2/s
...
gll_iem_v07:
Prefactor: 0.900951 +/- 0.0235397
Index: 0
Scale: 100
Flux: 0.000469334 +/- 1.22608e-05 photons/cm^2/s
iso_P8R3_SOURCE_V2_v1:
Normalization: 1.13545 +/- 0.0422581
Flux: 0.000139506 +/- 5.19439e-06 photons/cm^2/s
WARNING: Fit may be bad in range [100, 199.488] (MeV)
WARNING: Fit may be bad in range [251.124, 316.126] (MeV)
WARNING: Fit may be bad in range [6302.3, 7933.61] (MeV)
WARNING: Fit may be bad in range [39744.4, 50032.1] (MeV)
WARNING: Fit may be bad in range [315519, 397190] (MeV)
Total number of observed counts: 207751
Total number of model events: 207407
-log(Likelihood): 73014.38504
Writing fitted model to 3C279_binned_output.xml
```
Since we selected `plot=yes` in the command line, a plot of the fitted data appears.
<img src="https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_spectral_fit.png">
In the first plot, the counts/MeV vs MeV are plotted. The points are the data, and the lines are the models. Error bars on the points represent sqrt(Nobs) in that band, where Nobs is the observed number of counts. The black line is the sum of the models for all sources.
The colored lines follow the sources as follows:
* Black - summed model
* Red - first source (see below)
* Green - second source
* Blue - third source
* Magenta - fourth source
* Cyan - the fifth source
If you have more sources, the colors are reused in the same order. In our case we have, in order of decreasing value on the y-axis: summed model (black), the extragalactic background (black), the galactic background (cyan), 3C 273 (red), and 3C 279 (black).
The second plot gives the residuals between your model and the data. Error bars here represent (sqrt(Nobs))/Npred, where Npred is the predicted number of counts in each band based on the fitted model.
To assess the quality of the fit, look first for the words at the top of the output `<Optimizer> did successfully converge.` Successful convergence is a minimum requirement for a good fit.
Next, look at the energy ranges that are generating warnings of bad fits. If any of these ranges affect your source of interest, you may need to revise the source model and refit. You can also look at the residuals on the plot (bottom panel). If the residuals indicate a poor fit overall (e.g., the points trending all low or all high) you should consider changing your model file, perhaps by using a different source model definition, and refit the data.
If the fits and spectral shapes are good, but could be improved, you may wish to simply update your model file to hold some of the spectral parameters fixed. For example, by fixing the spectral model for 3C 273, you may get a better quality fit for 3C 279. Close the plot and you will be asked if you wish to refit the data.
```
Refit? [y] n
Elapsed CPU time: 1571.805872
```
Here, hitting `return` will instruct the application to fit again. We are happy with the result, so we type `n` and end the fit.
### Results
When it completes, **gtlike** generates a standard output XML file. If you re-run the tool in the same directory, these files will be overwritten by default. Use the `clobber=no` option on the command line to keep from overwriting the output files.
Unfortunately, the fit details and the value for the `-log(likelihood)` are not recorded in the automatic output files.
You should consider logging the output to a text file for your records by using `> fit_data.txt` (or something similar) with your **gtlike** command.
Be aware, however, that this will make it impossible to request a refit when the likelihood process completes.
```
!gtlike plot=yes sfile=./data/3C279_output_model.xml > fit_data.txt
```
In this example, we used the `sfile` parameter to request that the model results be written to an output XML file. This file contains the source model results that were written to `results.dat` at the completion of the fit.
> **Note**: If you have specified an output XML model file and you wish to modify your model while waiting at the `Refit? [y]` prompt, you will need to copy the results of the output model file to your input model before making those modifications.
The results of the likelihood analysis have to be scaled by the quantity called "scale" in the XML model in order to obtain the total photon flux (photons cm^-2 s^-1) of the source. You must refer to the model formula of your source for the interpretation of each parameter. In our example, the 'Prefactor' of the power-law model of the first fitted source (4FGL J1159.5-0723) has to be scaled by the factor 'scale' = 10^-14. For example, the total flux of 4FGL J1159.5-0723 is the integral between 100 MeV and 500000 MeV of:
$\mathrm{Prefactor} \cdot \mathrm{scale} \cdot (E/100)^{\mathrm{Index}} = (6.7017 \times 10^{-14}) \cdot (E/100)^{-2.0196}$
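To make the scaling concrete, this integral can be evaluated analytically. The sketch below uses the example fit values quoted above; the variable names are illustrative and the helper is not part of the Fermitools:

```python
# Analytic integral of the scaled power law dN/dE = Prefactor * scale * (E/100)^(-Gamma)
# between Emin and Emax, using the example fit values for 4FGL J1159.5-0723.
prefactor = 6.7017
scale = 1e-14
gamma_index = 2.0196      # magnitude of the fitted spectral index
e0 = 100.0                # pivot energy (MeV)
emin, emax = 100.0, 500000.0

n0 = prefactor * scale    # differential flux at the pivot (photons cm^-2 s^-1 MeV^-1)
# Integral of n0*(E/e0)^(-Gamma) dE = n0 * e0 / (1 - Gamma) * [(E/e0)^(1-Gamma)] from Emin to Emax
flux = n0 * e0 / (1.0 - gamma_index) * (
    (emax / e0) ** (1.0 - gamma_index) - (emin / e0) ** (1.0 - gamma_index)
)
print(f"Integrated photon flux: {flux:.3e} photons cm^-2 s^-1")
```

With these numbers the total flux comes out to a few times 10^-12 photons cm^-2 s^-1.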
Errors reported with each value in the `results.dat` file are 1σ estimates (based on inverse-Hessian at the optimum of the log-likelihood surface).
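To build intuition for what an inverse-Hessian (curvature) error means, here is a minimal single-parameter sketch with a Poisson likelihood; this is for illustration only and is not how gtlike computes its errors internally:

```python
import numpy as np

# For a single Poisson rate mu with n observed counts, logL(mu) = n*log(mu) - mu
# (dropping constants). The MLE is mu_hat = n, and the 1-sigma error is the
# square root of the inverse of minus the second derivative at the optimum.
n = 100.0
mu_hat = n
second_deriv = -n / mu_hat**2            # d^2 logL / d mu^2 evaluated at mu_hat
sigma = np.sqrt(-1.0 / second_deriv)
print(sigma)  # sqrt(n) = 10.0, the familiar Poisson counting error
```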
### Other Useful Hidden Parameters
If you are scripting and wish to generate multiple output files without overwriting, the `results` and `specfile` parameters allow you to specify output filenames for the `results.dat` and `counts_spectra.fits` files respectively.
If you do not specify a source model output file with the `sfile` parameter, then the input model file will be overwritten with the latest fit. This is convenient as it allows the user to edit that file while the application is waiting at the `Refit? [y]` prompt so that parameters can be adjusted and set free or fixed. This would be similar to the use of the "newpar", "freeze", and "thaw" commands in [XSPEC](http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/index.html).
# 10. Create a model map
For comparison to the counts map data, we create a model map of the region based on the fit parameters.
This map is essentially an infinite-statistics counts map of the region-of-interest based on our model fit.
The [gtmodel](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtmodel.txt) application reads in the fitted model, applies the proper scaling to the source maps, and adds them together to get the final map.
```
%%bash
gtmodel
./data/3C279_binned_srcmaps.fits
./data/3C279_binned_output.xml
./data/3C279_model_map.fits
CALDB
./data/3C279_binned_ltcube.fits
./data/3C279_binned_allsky_expcube.fits
```
To understand how well the fit matches the data, we want to compare the [model map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_model_map.fits) just created with the counts map over the same field of view. First we have to create a [new counts map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_binned_cmap_small.fits) that matches the model map in size (the counts map generated earlier encircles the ROI, while the model map is completely inscribed within the ROI). We will again use the [gtbin](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/help/gtbin.txt) tool with the option `CMAP` as shown below:
```
%%bash
gtbin
CMAP
./data/3C279_binned_gti.fits
./data/3C279_binned_cmap_small.fits
NONE
100
100
0.2
CEL
193.98
-5.82
0.0
STG
```
Here we've plotted the model map next to the energy-summed counts map for the data.
<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_map_comparison.png'>
Finally we want to create the [residual map](https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/data/BinnedLikelihood/3C279_residual.fits) by using the FTOOL **farith** to check if we can improve the model:
```
%%bash
farith
./data/3C279_binned_cmap_small.fits
./data/3C279_model_map.fits
./data/3C279_residual.fits
SUB
```
The residual map is shown below. As you can see, the binning we chose probably used pixels that were too large.
The primary sources, 3C 273 and 3C 279, have some positive pixels next to some negative ones. This effect could be lessened by either using a smaller pixel size or by offsetting the central position slightly from the position of the blazar (or both).
If your residual map contains bright sources, the next step would be to iterate the analysis with the additional sources included in the XML model file.
<img src='https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/images/BinnedLikelihood/3C279_binned_residuals.png'>
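Beyond a raw subtraction, dividing the residuals by the model's Poisson error makes over- and under-predicted pixels easier to judge. A standalone numpy sketch on synthetic arrays (reading the actual FITS maps, e.g. with astropy, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
model = np.full((100, 100), 4.0)              # stand-in for the model map
counts = rng.poisson(model).astype(float)     # stand-in for the small counts map

residual = counts - model                     # what `farith ... SUB` produces
significance = residual / np.sqrt(model)      # approximate per-pixel sigma

print("max |significance|:", float(np.abs(significance).max()))
```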
```
from collections import Counter
import numpy as np
from csv import DictReader
from keras.preprocessing.sequence import pad_sequences
from keras.utils import np_utils
from keras.models import Sequential, Model, load_model
from keras.layers import concatenate, Embedding, Dense, Dropout, Activation, LSTM, CuDNNLSTM, CuDNNGRU,Flatten, Input, RepeatVector, TimeDistributed, Bidirectional
from keras.optimizers import Adam, RMSprop
from keras.callbacks import Callback, ModelCheckpoint, EarlyStopping, TensorBoard
import codecs
import pickle
MAX_LEN_HEAD = 100
MAX_LEN_BODY = 500
VOCAB_SIZE = 15000
EMBEDDING_DIM = 300
def get_vocab(lst, vocab_size):
"""
lst: list of sentences
"""
vocabcount = Counter(w for txt in lst for w in txt.lower().split())
vocabcount = vocabcount.most_common(vocab_size)
word2idx = {}
idx2word = {}
for i, word in enumerate(vocabcount):
word2idx[word[0]] = i
idx2word[i] = word[0]
return word2idx, idx2word
def cov2idx_unk(lst, word2idx):
output = []
for sentence in lst:
temp = []
for word in sentence.split():
if word in word2idx:
temp.append(word2idx[word])
else:
temp.append(word2idx['<unk>'])
output.append(temp)
return output
def pad_seq(cov_lst, max_len=MAX_LEN_BODY):
"""
list of list of index converted from words
"""
pad_lst = pad_sequences(cov_lst, maxlen = max_len, padding='post')
return pad_lst
label_ref = {'agree': 0, 'disagree': 1, 'discuss': 2, 'unrelated': 3}
def load_train_unk(file_instances, file_bodies):
"""
article: the name of the article file
"""
instance_lst = []
# Process file
with open(file_instances, "r", encoding='utf-8') as table:
r = DictReader(table)
for line in r:
instance_lst.append(line)
body_lst = []
# Process file
with open(file_bodies, "r", encoding='utf-8') as table:
r = DictReader(table)
for line in r:
body_lst.append(line)
heads = {}
bodies = {}
for instance in instance_lst:
if instance['Headline'] not in heads:
head_id = len(heads)
heads[instance['Headline']] = head_id
instance['Body ID'] = int(instance['Body ID'])
for body in body_lst:
bodies[int(body['Body ID'])] = body['articleBody']
headData = []
bodyData = []
labelData = []
for instance in instance_lst:
headData.append(instance['Headline'])
bodyData.append(bodies[instance['Body ID']])
labelData.append(label_ref[instance['Stance']])
word2idx, idx2word = get_vocab(headData+bodyData, VOCAB_SIZE)
word2idx['<unk>'] = len(word2idx)
cov_head = cov2idx_unk(headData, word2idx)
cov_body = cov2idx_unk(bodyData, word2idx)
remove_list = []
for i in range(len(cov_head)):
if len(cov_head[i])>MAX_LEN_HEAD or len(cov_body[i])>MAX_LEN_BODY:
remove_list.append(i)
for idx in sorted(remove_list, reverse = True):
cov_head.pop(idx)
cov_body.pop(idx)
labelData.pop(idx)
pad_head = pad_seq(cov_head, MAX_LEN_HEAD)
pad_body = pad_seq(cov_body, MAX_LEN_BODY)
return pad_head, pad_body, labelData, word2idx, idx2word
pad_head, pad_body, labelData, word2idx, idx2word = load_train_unk("train_stances.csv", "train_bodies.csv")
#for training
train_head = pad_head[:-1000]
train_body = pad_body[:-1000]
train_label = labelData[:-1000]
val_head = pad_head[-1000:]
val_body = pad_body[-1000:]
val_label = labelData[-1000:]
BATCH_SIZE = 128
NUM_LAYERS = 0
HIDDEN_DIM = 512
EPOCHS = 60
input_head = Input(shape=(MAX_LEN_HEAD,), dtype='int32', name='input_head')
embed_head = Embedding(output_dim=EMBEDDING_DIM, input_dim=VOCAB_SIZE+1, input_length=MAX_LEN_HEAD)(input_head)
gru_head = CuDNNGRU(128)(embed_head)
# embed_head = Embedding(VOCAB_SIZE, EMBEDDING_DIM , input_length = MAX_LEN_HEAD, weights = [g_word_embedding_matrix], trainable=False)
input_body = Input(shape=(MAX_LEN_BODY,), dtype='int32', name='input_body')
embed_body = Embedding(output_dim=EMBEDDING_DIM, input_dim=VOCAB_SIZE+1, input_length=MAX_LEN_BODY)(input_body)
gru_body = CuDNNGRU(128)(embed_body)
# embed_body = Embedding(VOCAB_SIZE, EMBEDDING_DIM , input_length = MAX_LEN_BODY, weights = [g_word_embedding_matrix], trainable=False)
concat = concatenate([gru_head, gru_body], axis = 1)
x = Dense(400, activation='relu')(concat)
x = Dropout(0.5)(x)
x = Dense(400, activation='relu')(x)
x = Dropout(0.5)(x)
# And finally we add the main logistic regression layer
main_output = Dense(4, activation='softmax', name='main_output')(x)
model = Model(inputs=[input_head, input_body], outputs=main_output)
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics = ['accuracy'])
model.summary()
wt_dir = "./models/seqLSTM/"
model_path = wt_dir+'biLSTM'+'{epoch:03d}'+'.h5'
tensorboard = TensorBoard(log_dir='./Graph')
model_checkpoint = ModelCheckpoint(model_path, save_best_only =False, period =2, save_weights_only = False)
# model.fit([try_head, try_body],
# try_label,
# epochs=30,
# validation_data=([try_head, try_body], try_label),
# batch_size=BATCH_SIZE,
# shuffle=True,
# callbacks = [model_checkpoint, tensorboard])
model.fit([train_head, train_body],
train_label,
epochs=2*EPOCHS,
validation_data=([val_head, val_body], val_label),
batch_size=BATCH_SIZE,
shuffle = True,
callbacks=[model_checkpoint, tensorboard])
pickle.dump(word2idx, open("word2idx_GRU.pkl", "wb"))
```
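To see what the vocabulary/indexing/padding pipeline above produces, here is a tiny dependency-free sketch of the same logic (a toy re-implementation, not the Keras utilities; note that post-padding with 0 reuses the index of the most frequent word, a quirk the real pipeline inherits from `pad_sequences`' defaults):

```python
from collections import Counter

def toy_vocab(sentences, vocab_size):
    counts = Counter(w for s in sentences for w in s.lower().split())
    return {w: i for i, (w, _) in enumerate(counts.most_common(vocab_size))}

def toy_pad_post(seqs, maxlen, value=0):
    # Mimics pad_sequences(..., padding='post') for the simple case.
    return [(s[:maxlen] + [value] * (maxlen - len(s)))[:maxlen] for s in seqs]

sentences = ["the cat sat", "the dog"]
word2idx = toy_vocab(sentences, vocab_size=10)
word2idx['<unk>'] = len(word2idx)

seqs = [[word2idx.get(w, word2idx['<unk>']) for w in s.lower().split()]
        for s in sentences]
padded = toy_pad_post(seqs, maxlen=5)
print(padded)  # every row has length 5, zero-padded at the end
```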
##### Copyright 2021 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Floquet calibration
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/google/floquet"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/floquet.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This notebook demonstrates the Floquet calibration API, a tool for characterizing $\sqrt{\text{iSWAP}}$ gates and inserting single-qubit $Z$ phases to compensate for errors. This characterization is done by the Quantum Engine and the insertion of $Z$ phases for compensation/calibration is completely client-side with the help of Cirq utilities. At the highest level, the tool inputs a quantum circuit of interest (as well as a backend to run on) and outputs a calibrated circuit for this backend which can then be executed to produce better results.
## Details on the calibration tool
In more detail, assuming we have a number-conserving two-qubit unitary gate, Floquet calibration (FC) returns fast, accurate estimates for the relevant angles to be calibrated. The `cirq.PhasedFSimGate` has five angles $\theta$, $\zeta$, $\chi$, $\gamma$, $\phi$ with unitary matrix
$$
\left[ \begin{matrix}
1 & 0 & 0 & 0 \\
0 & \exp(-i \gamma - i \zeta) \cos(\theta) & -i \exp(-i \gamma + i \chi) \sin(\theta) & 0 \\
0 & -i \exp(-i \gamma - i \chi) \sin(\theta) & \exp(-i \gamma + i \zeta) \cos(\theta) & 0 \\
0 & 0 & 0 & \exp(-2 i \gamma -i \phi )
\end{matrix} \right]
$$
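For a quick sanity check, the matrix above can be constructed directly in numpy; this is an illustrative stand-in for `cirq.PhasedFSimGate`, not the cirq implementation itself:

```python
import numpy as np

def phased_fsim_unitary(theta, zeta, chi, gamma, phi):
    """Build the 4x4 PhasedFSim unitary from the five angles, as displayed above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [1, 0, 0, 0],
        [0, np.exp(-1j * (gamma + zeta)) * c, -1j * np.exp(-1j * gamma + 1j * chi) * s, 0],
        [0, -1j * np.exp(-1j * (gamma + chi)) * s, np.exp(-1j * gamma + 1j * zeta) * c, 0],
        [0, 0, 0, np.exp(-2j * gamma - 1j * phi)],
    ])

# theta = pi/4 with all phases zero gives the square-root-of-iSWAP-type gate used
# in this notebook; applying it twice yields the full iSWAP-type gate
# (with -i off-diagonal elements in this convention).
u = phased_fsim_unitary(np.pi / 4, 0, 0, 0, 0)
assert np.allclose(u @ u.conj().T, np.eye(4))   # unitary
assert np.isclose((u @ u)[1, 2], -1j)           # squares to the full swap-type gate
```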
With Floquet calibration, every angle but $\chi$ can be calibrated. In experiments, we have found these angles change when gates are run in parallel. Because of this, we perform FC on entire moments of two-qubits gates and return different characterized angles for each.
After characterizing a set of angles, one needs to adjust the circuit to compensate for the offset. The simplest adjustment is for $\zeta$ and $\gamma$ and works by adding $R_z$ gates before and after the two-qubit gates in question. For many circuits, even this simplest compensation can lead to a significant improvement in results. We provide methods for doing this in this notebook and analyze results for an example circuit.
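To see why single-qubit $Z$ phases can absorb $\zeta$ and $\gamma$, the numpy sketch below conjugates a `PhasedFSim`-style unitary with one consistent choice of phases (the particular phase assignment here is illustrative; cirq's compensation utilities make an equivalent choice internally):

```python
import numpy as np

def phased_fsim(theta, zeta, chi, gamma, phi):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [1, 0, 0, 0],
        [0, np.exp(-1j * (gamma + zeta)) * c, -1j * np.exp(-1j * gamma + 1j * chi) * s, 0],
        [0, -1j * np.exp(-1j * (gamma + chi)) * s, np.exp(-1j * gamma + 1j * zeta) * c, 0],
        [0, 0, 0, np.exp(-2j * gamma - 1j * phi)],
    ])

def z_phases(phase_q0, phase_q1):
    # Two-qubit diagonal built from Z phases exp(i*phase*|1><1|) on each qubit.
    return np.kron(np.diag([1, np.exp(1j * phase_q0)]),
                   np.diag([1, np.exp(1j * phase_q1)]))

theta, zeta, gamma, phi = 0.25, 0.13, 0.07, 0.10
u = phased_fsim(theta, zeta, 0.0, gamma, phi)
before = z_phases(0.0, zeta)            # Z phases applied before the two-qubit gate
after = z_phases(gamma - zeta, gamma)   # Z phases applied after

corrected = after @ u @ before
# zeta and gamma are cancelled; theta and phi are untouched.
assert np.allclose(corrected, phased_fsim(theta, 0.0, 0.0, 0.0, phi))
```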
We do not attempt to correct the misaligned iSWAP rotation or the additional two-qubit phase in this notebook. This is a non-trivial task, and we do not currently have simple tools to achieve it. It is up to the user to correct for these as best as possible.
Note: The Floquet calibration API and this documentation are ongoing work. The amount by which errors are reduced may vary from run to run and from circuit to circuit.
## Setup
```
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install cirq --quiet
print("installed cirq.")
from typing import Iterable, List, Optional, Sequence
import matplotlib.pyplot as plt
import numpy as np
import cirq
import cirq_google as cg # Contains the Floquet calibration tools.
```
Note: In order to run on Google's Quantum Computing Service, an environment variable `GOOGLE_CLOUD_PROJECT` must be present and set to a valid Google Cloud Platform project identifier. If this is not satisfied, we default to an engine simulator.
Running the next cell will prompt you to authenticate Google Cloud SDK to use your project. See the [Getting Started Guide](../tutorials/google/start.ipynb) for more information.
Note: Leave `project_id` blank to use a noisy simulator.
```
# The Google Cloud Project id to use.
project_id = '' #@param {type:"string"}
if project_id == '':
import os
if 'GOOGLE_CLOUD_PROJECT' not in os.environ:
print("No processor_id provided and environment variable "
"GOOGLE_CLOUD_PROJECT not set, defaulting to noisy simulator.")
processor_id = None
engine = cg.PhasedFSimEngineSimulator.create_with_random_gaussian_sqrt_iswap(
mean=cg.SQRT_ISWAP_PARAMETERS,
sigma=cg.PhasedFSimCharacterization(
theta=0.01, zeta=0.10, chi=0.01, gamma=0.10, phi=0.02
),
)
sampler = engine
device = cg.Bristlecone
line_length = 20
else:
import os
os.environ['GOOGLE_CLOUD_PROJECT'] = project_id
def authenticate_user():
"""Runs the user through the Colab OAuth process.
Checks for Google Application Default Credentials and runs interactive login
if the notebook is executed in Colab. In case the notebook is executed in Jupyter notebook
or other IPython runtimes, no interactive login is provided, it is assumed that the
`GOOGLE_APPLICATION_CREDENTIALS` env var is set or `gcloud auth application-default login`
was executed already.
For more information on using Application Default Credentials see
https://cloud.google.com/docs/authentication/production
"""
in_colab = False
try:
from IPython import get_ipython
in_colab = 'google.colab' in str(get_ipython())
except:
# Notebook is not executed within IPython. Assuming external authentication.
return
if in_colab:
from google.colab import auth
print("Getting OAuth2 credentials.")
print("Press enter after entering the verification code.")
auth.authenticate_user(clear_output=False)
print("Authentication complete.")
else:
print("Notebook is not executed with Colab, assuming Application Default Credentials are setup.")
authenticate_user()
print("Successful authentication to Google Cloud.")
processor_id = "" #@param {type:"string"}
engine = cg.get_engine()
device = cg.get_engine_device(processor_id)
sampler = cg.get_engine_sampler(processor_id, gate_set_name="sqrt_iswap")
line_length = 35
```
## Minimal example for a single $\sqrt{\text{iSWAP}}$ gate
To see how the API is used, we first show the simplest usage of Floquet calibration for a minimal example of one $\sqrt{\text{iSWAP}}$ gate. After this section, we show detailed usage with a larger circuit and analyze the results.
The gates that are calibrated by Floquet calibration are $\sqrt{\text{iSWAP}}$ gates:
```
sqrt_iswap = cirq.FSimGate(np.pi / 4, 0.0)
print(cirq.unitary(sqrt_iswap).round(3))
```
First we get two connected qubits on the selected device and define a circuit.
```
"""Define a simple circuit to use Floquet calibration on."""
qubits = cg.line_on_device(device, length=2)
circuit = cirq.Circuit(sqrt_iswap.on(*qubits))
# Display it.
print("Circuit to calibrate:\n")
print(circuit)
```
The simplest way to use Floquet calibration is as follows.
```
"""Simplest usage of Floquet calibration."""
calibrated_circuit, *_ = cg.run_zeta_chi_gamma_compensation_for_moments(
circuit,
engine,
processor_id=processor_id,
gate_set=cg.SQRT_ISWAP_GATESET
)
```
Note: Additional returned arguments, omitted here for simplicity, are described below.
When we print out the returned `calibrated_circuit.circuit` below, we see the added $Z$ rotations to compensate for errors.
```
print("Calibrated circuit:\n")
calibrated_circuit.circuit
```
This `calibrated_circuit` can now be executed on the processor to produce better results.
## More detailed example with a larger circuit
We now use Floquet calibration on a larger circuit which models the evolution of a fermionic particle on a linear spin chain. The physics of this problem for a closed chain (here we use an open chain) has been studied in [Accurately computing electronic properties of materials using eigenenergies](https://arxiv.org/abs/2012.00921), but for the purposes of this notebook we can treat this just as an example to demonstrate Floquet calibration on.
First we use the function `cirq_google.line_on_device` to return a line of qubits of a specified length.
```
line = cg.line_on_device(device, line_length)
print(line)
```
This line is now broken up into a number of segments of a specified length (number of qubits).
```
segment_length = 5
segments = [line[i: i + segment_length]
for i in range(0, line_length - segment_length + 1, segment_length)]
```
For example, the first segment consists of the following qubits.
```
print(*segments[0])
```
We now implement a number of Trotter steps on each segment in parallel. The middle qubit on each segment is put into the $|1\rangle$ state, then each Trotter step consists of staggered $\sqrt{\text{iSWAP}}$ gates. All qubits are measured in the $Z$ basis at the end of the circuit.
For convenience, this code is wrapped in a function.
```
def create_example_circuit(
segments: Sequence[Sequence[cirq.Qid]],
num_trotter_steps: int,
) -> cirq.Circuit:
"""Returns a linear chain circuit to demonstrate Floquet calibration on."""
circuit = cirq.Circuit()
# Initial state preparation.
for segment in segments:
circuit += [cirq.X.on(segment[len(segment) // 2])]
# Trotter steps.
for step in range(num_trotter_steps):
offset = step % 2
moment = cirq.Moment()
for segment in segments:
moment += cirq.Moment(
[sqrt_iswap.on(a, b) for a, b in zip(segment[offset::2],
segment[offset + 1::2])])
circuit += moment
# Measurement.
circuit += cirq.measure(*[q for segment in segments for q in segment], key='z')
return circuit
```
As an example, we show this circuit on the first segment of the line from above.
```
"""Example of the linear chain circuit on one segment of the line."""
num_trotter_steps = 20
circuit_on_segment = create_example_circuit(
segments=[segments[0]],
num_trotter_steps=num_trotter_steps,
)
print(circuit_on_segment.to_text_diagram(qubit_order=segments[0]))
```
The circuit we will use for Floquet calibration is this same pattern repeated on all segments of the line.
```
"""Circuit used to demonstrate Floquet calibration."""
circuit = create_example_circuit(
segments=segments,
num_trotter_steps=num_trotter_steps
)
```
### Execution on a simulator
To establish a "ground truth," we first simulate a segment on a noiseless simulator.
```
"""Simulate one segment on a simulator."""
nreps = 20_000
sim_result = cirq.Simulator().run(circuit_on_segment, repetitions=nreps)
```
### Execution on the processor without Floquet calibration
We now execute the full circuit on a processor without using Floquet calibration.
```
"""Execute the full circuit on a processor without Floquet calibration."""
raw_results = sampler.run(circuit, repetitions=nreps)
```
### Comparing raw results to simulator results
For comparison we will plot densities (average measurement results) on each segment. Such densities are in the interval $[0, 1]$ and more accurate results are closer to the simulator results.
To visualize results, we define a few helper functions.
#### Helper functions
Note: The functions in this section are just utilities for visualizing results and not essential for Floquet calibration. As such this section can be safely skipped or skimmed.
The next cell defines two functions for returning the density (average measurement results) on a segment or on all segments. We can optionally post-select for measurements with a specific filling (particle number) - i.e., discard measurement results which don't obey this expected particle number.
```
def z_density_from_measurements(
measurements: np.ndarray,
post_select_filling: Optional[int] = 1
) -> np.ndarray:
"""Returns density for one segment on the line."""
counts = np.sum(measurements, axis=1, dtype=int)
if post_select_filling is not None:
errors = np.abs(counts - post_select_filling)
counts = measurements[(errors == 0).nonzero()]
return np.average(counts, axis=0)
def z_densities_from_result(
result: cirq.Result,
segments: Iterable[Sequence[cirq.Qid]],
post_select_filling: Optional[int] = 1
) -> List[np.ndarray]:
"""Returns densities for each segment on the line."""
measurements = result.measurements['z']
z_densities = []
offset = 0
for segment in segments:
z_densities.append(z_density_from_measurements(
measurements[:, offset: offset + len(segment)],
post_select_filling)
)
offset += len(segment)
return z_densities
```
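For intuition about the post-selection step, here is a tiny standalone example of the same logic on synthetic measurements:

```python
import numpy as np

# Each row is one repetition of measuring 3 qubits. Post-select rows whose total
# excitation count equals 1 (the single-particle sector prepared by the circuit).
measurements = np.array([
    [0, 1, 0],
    [1, 0, 0],
    [1, 1, 0],   # two excitations: discarded by post-selection
    [0, 0, 1],
])
kept = measurements[measurements.sum(axis=1) == 1]
density = kept.mean(axis=0)
print(density)  # fraction of kept shots in which each qubit was excited
```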
Now we define functions to plot the densities for the simulator, processor without Floquet calibration, and processor with Floquet calibration (which we will use at the end of this notebook). The first function is for a single segment, and the second function is for all segments.
```
#@title
def plot_density(
ax: plt.Axes,
sim_density: np.ndarray,
raw_density: np.ndarray,
cal_density: Optional[np.ndarray] = None,
raw_errors: Optional[np.ndarray] = None,
cal_errors: Optional[np.ndarray] = None,
title: Optional[str] = None,
show_legend: bool = True,
show_ylabel: bool = True,
) -> None:
"""Plots the density of a single segment for simulated, raw, and calibrated
results.
"""
colors = ["grey", "orange", "green"]
alphas = [0.5, 0.8, 0.8]
labels = ["sim", "raw", "cal"]
# Plot densities.
for i, density in enumerate([sim_density, raw_density, cal_density]):
if density is not None:
ax.plot(
range(len(density)),
density,
"-o" if i == 0 else "o",
markersize=11,
color=colors[i],
alpha=alphas[i],
label=labels[i]
)
# Plot errors if provided.
errors = [raw_errors, cal_errors]
densities = [raw_density, cal_density]
for i, (errs, dens) in enumerate(zip(errors, densities)):
if errs is not None:
ax.errorbar(
range(len(errs)),
dens,
errs,
linestyle='',
color=colors[i + 1],
capsize=8,
elinewidth=2,
markeredgewidth=2
)
# Titles, axes, and legend.
ax.set_xticks(list(range(len(sim_density))))
ax.set_xlabel("Qubit index in segment")
if show_ylabel:
ax.set_ylabel("Density")
if title:
ax.set_title(title)
if show_legend:
ax.legend()
def plot_densities(
sim_density: np.ndarray,
raw_densities: Sequence[np.ndarray],
cal_densities: Optional[Sequence[np.ndarray]] = None,
rows: int = 3
) -> None:
"""Plots densities for simulated, raw, and calibrated results on all segments.
"""
if not cal_densities:
cal_densities = [None] * len(raw_densities)
cols = (len(raw_densities) + rows - 1) // rows
fig, axes = plt.subplots(
rows, cols, figsize=(cols * 4, rows * 3.5), sharey=True
)
if rows == 1 and cols == 1:
axes = [axes]
elif rows > 1 and cols > 1:
axes = [axes[row, col] for row in range(rows) for col in range(cols)]
for i, (ax, raw, cal) in enumerate(zip(axes, raw_densities, cal_densities)):
plot_density(
ax,
sim_density,
raw,
cal,
title=f"Segment {i + 1}",
show_legend=False,
show_ylabel=i % cols == 0
)
# Common legend for all subplots.
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels)
plt.tight_layout(pad=0.1, w_pad=1.0, h_pad=3.0)
```
#### Visualizing results
Note: This section uses helper functions from the previous section to plot results. The code can be safely skimmed: emphasis should be on the plots.
To visualize results, we first extract densities from the measurements.
```
"""Extract densities from measurement results."""
# Simulator density.
sim_density, = z_densities_from_result(sim_result, [segments[0]])
# Processor densities without Floquet calibration.
raw_densities = z_densities_from_result(raw_results, segments)
```
We first plot the densities on each segment. Note that the simulator densities ("sim") are repeated on each segment and the lines connecting them are just visual guides.
```
plot_densities(sim_density, raw_densities, rows=int(np.sqrt(line_length / segment_length)))
```
We can also look at the average and variance over the segments.
```
"""Plot mean density and variance over segments."""
raw_avg = np.average(raw_densities, axis=0)
raw_std = np.std(raw_densities, axis=0, ddof=1)
plot_density(
plt.gca(),
sim_density,
raw_density=raw_avg,
raw_errors=raw_std,
title="Average over segments"
)
```
In the next section, we will use Floquet calibration to produce better average results. After running the circuit with Floquet calibration, we will use these same visualizations to compare results.
### Execution on the processor with Floquet calibration
There are two equivalent ways to use Floquet calibration which we outline below. A rough estimate for the time required for Floquet calibration is about 16 seconds per 10 qubits, plus 30 seconds of overhead, per calibrated moment.
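That rule of thumb can be written as a quick estimator (an illustrative helper, not part of cirq; the constants are the rough figures quoted above):

```python
def estimated_calibration_seconds(num_qubits: int, num_moments: int,
                                  seconds_per_10_qubits: float = 16.0,
                                  overhead_seconds: float = 30.0) -> float:
    """Rough Floquet-calibration wall time: per-moment cost scales with qubit count."""
    per_moment = seconds_per_10_qubits * num_qubits / 10.0 + overhead_seconds
    return num_moments * per_moment

# e.g. 20 qubits and 2 characterized moments -> 2 * (16 * 2 + 30) = 124 seconds
print(estimated_calibration_seconds(20, 2))
```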
#### Simple usage
The first way to use Floquet calibration is via the single function call used at the start of this notebook. Here, we describe the remaining returned values in addition to `calibrated_circuit`.
Note: We comment out this section so Floquet calibration on the larger circuit is only executed once in the notebook.
```
# (calibrated_circuit, calibrations
# ) = cg.run_zeta_chi_gamma_compensation_for_moments(
# circuit,
# engine,
# processor_id=processor_id,
# gate_set=cg.SQRT_ISWAP_GATESET
# )
```
The returned `calibrated_circuit.circuit` can then be run on the engine. The full list of returned arguments is as follows:
* `calibrated_circuit.circuit`: The input `circuit` with added $Z$ rotations around each $\sqrt{\text{iSWAP}}$ gate to compensate for errors.
* `calibrated_circuit.moment_to_calibration`: Provides an index of the matching characterization (index in the `calibrations` list) for each moment of `calibrated_circuit.circuit`, or `None` if the moment was not characterized (e.g., a measurement moment).
* `calibrations`: List of characterization results for each characterized moment. Each characterization contains angles for each qubit pair.
#### Step-by-step usage
Note: This section is provided to see the Floquet calibration API at a lower level, but the results are identical to the "simple usage" in the previous section.
The above function `cirq_google.run_zeta_chi_gamma_compensation_for_moments` performs the following three steps:
1. Find moments within the circuit that need to be characterized.
2. Characterize them on the engine.
3. Apply corrections to the original circuit.
To find moments that need to be characterized, we can do the following.
```
"""Step 1: Find moments in the circuit that need to be characterized."""
(characterized_circuit, characterization_requests
) = cg.prepare_floquet_characterization_for_moments(
circuit,
options=cg.FloquetPhasedFSimCalibrationOptions(
characterize_theta=False,
characterize_zeta=True,
characterize_chi=False,
characterize_gamma=True,
characterize_phi=False
)
)
```
The `characterization_requests` contain information on the operations (gate + qubit pairs) to characterize.
```
"""Show an example characterization request."""
print(f"Total {len(characterization_requests)} moment(s) to characterize.")
print("\nExample request")
request = characterization_requests[0]
print("Gate:", request.gate)
print("Qubit pairs:", request.pairs)
print("Options: ", request.options)
```
We now characterize them on the engine using `cirq_google.run_calibrations`.
```
"""Step 2: Characterize moments on the engine."""
characterizations = cg.run_calibrations(
characterization_requests,
engine,
processor_id=processor_id,
gate_set=cg.SQRT_ISWAP_GATESET,
max_layers_per_request=1,
)
```
The `characterizations` store characterization results for each pair in each moment, for example.
```
print(f"Total: {len(characterizations)} characterizations.")
print()
(pair, parameters), *_ = characterizations[0].parameters.items()
print(f"Example pair: {pair}")
print(f"Example parameters: {parameters}")
```
Finally, we apply corrections to the original circuit.
```
"""Step 3: Apply corrections to the circuit to get a calibrated circuit."""
calibrated_circuit = cg.make_zeta_chi_gamma_compensation_for_moments(
characterized_circuit,
characterizations
)
```
The calibrated circuit can now be run on the processor. We first inspect the calibrated circuit to compare to the original.
```
print("Portion of calibrated circuit:")
print("\n".join(
calibrated_circuit.circuit.to_text_diagram(qubit_order=line).splitlines()[:9] +
["..."]))
```
Note again that $\sqrt{\text{iSWAP}}$ gates are padded by $Z$ phases to compensate for errors. We now run this calibrated circuit.
```
"""Run the calibrated circuit on the engine."""
cal_results = sampler.run(calibrated_circuit.circuit, repetitions=nreps)
```
### Comparing raw results to calibrated results
We now compare results with and without Floquet calibration, again using the simulator results as a baseline for comparison. First we extract the calibrated densities.
```
"""Extract densities from measurement results."""
cal_densities = z_densities_from_result(cal_results, segments)
```
Now we reproduce the same density plots from above on each segment, this time including the calibrated ("cal") results.
```
plot_densities(
sim_density, raw_densities, cal_densities, rows=int(np.sqrt(line_length / segment_length))
)
```
We also visualize the mean and variance of results over segments as before.
```
"""Plot mean density and variance over segments."""
raw_avg = np.average(raw_densities, axis=0)
raw_std = np.std(raw_densities, axis=0, ddof=1)
cal_avg = np.average(cal_densities, axis=0)
cal_std = np.std(cal_densities, axis=0, ddof=1)
plot_density(
plt.gca(),
sim_density,
raw_avg,
cal_avg,
raw_std,
cal_std,
title="Average over segments"
)
```
Last, we can look at density errors between raw/calibrated results and simulated results.
```
"""Plot errors of raw vs calibrated results."""
fig, axes = plt.subplots(ncols=2, figsize=(15, 4))
axes[0].set_title("Error of the mean")
axes[0].set_ylabel("Density")
axes[1].set_title("Data standard deviation")
colors = ["orange", "green"]
labels = ["raw", "cal"]
for index, density in enumerate([raw_densities, cal_densities]):
color = colors[index]
label = labels[index]
average_density = np.average(density, axis=0)
sites = list(range(len(average_density)))
error = np.abs(average_density - sim_density)
std_dev = np.std(density, axis=0, ddof=1)
axes[0].plot(sites, error, color=color, alpha=0.6)
axes[0].scatter(sites, error, color=color)
axes[1].plot(sites, std_dev, label=label, color=color, alpha=0.6)
axes[1].scatter(sites, std_dev, color=color)
for ax in axes:
ax.set_xticks(sites)
ax.set_xlabel("Qubit index in segment")
plt.legend();
```
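Beyond the plots, a single numeric summary is handy for comparing the raw and calibrated results against the simulator baseline. A minimal sketch of such a metric, illustrated with synthetic stand-ins for the `raw_densities` and `sim_density` arrays from above:

```python
import numpy as np

def mean_abs_error(densities, sim_density):
    """Mean absolute error between the segment-averaged densities and the simulated baseline."""
    return float(np.mean(np.abs(np.average(densities, axis=0) - sim_density)))

# Synthetic illustration: two "segments" whose average matches the simulated density exactly.
sim = np.array([0.1, 0.5, 0.9])
raw = np.array([[0.2, 0.4, 0.8],
                [0.0, 0.6, 1.0]])
print(mean_abs_error(raw, sim))  # 0.0, since the segment average equals sim
```

In the notebook one would call this with the actual `raw_densities`/`cal_densities` and expect the calibrated error to be smaller.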
| github_jupyter |
```
# This project implements a CNN with VGG-16 as a feature extractor.
import matplotlib.pyplot as plt
%matplotlib inline
# Create an ImageDataGenerator that applies small random transformations to the images
# (data augmentation) to train a more robust network
from keras.preprocessing.image import ImageDataGenerator
image_gen = ImageDataGenerator(rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1/255,
zoom_range=0.2,
shear_range=0.2,
fill_mode='nearest')
```
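Note that `rescale=1/255` normalizes pixel intensities from the usual 0–255 byte range into [0, 1] before the random transformations are applied. A numpy-only sketch of that normalization step:

```python
import numpy as np

# A dummy 2x2 RGB image with uint8 pixel values spanning the full byte range.
img = np.array([[[0, 128, 255]] * 2] * 2, dtype=np.uint8)

# The same normalization ImageDataGenerator applies with rescale=1/255.
scaled = img.astype(np.float32) / 255.0

print(scaled.min(), scaled.max())  # 0.0 1.0
```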
## Model:
```
from keras.applications import vgg16
from keras.models import Sequential
from keras.layers import Dense,Dropout,Flatten,Conv2D,MaxPooling2D
from keras import optimizers
model = vgg16.VGG16(weights='imagenet', include_top=False,
input_shape=(150,150,3), pooling="max")
for layer in model.layers[:-3]:
layer.trainable = False
for layer in model.layers:
print(layer, layer.trainable)
transfer_model = Sequential()
for layer in model.layers:
transfer_model.add(layer)
transfer_model.add(Dense(128, activation="relu"))
transfer_model.add(Dropout(0.5))
transfer_model.add(Dense(10, activation="softmax"))
adam = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.00001)
transfer_model.compile(loss="categorical_crossentropy",
optimizer=adam,
metrics=["accuracy"])
transfer_model.summary()
# Load files in google colab
from google.colab import files
# Install kaggle to download the dataset
!pip install -q kaggle
# Upload the kaggle api token json file
upload = files.upload()
!mkdir ~/.kaggle
!cp /content/kaggle.json ~/.kaggle/kaggle.json
# Download the dataset from kaggle using api link
!kaggle datasets download -d slothkong/10-monkey-species
# Unzip the dataset folder
!unzip 10-monkey-species
train_directory = 'datasets/training/training'
validation_directory = 'datasets/validation/validation'
## Getting the training and the validation sets
batch_size = 16
train_gen = image_gen.flow_from_directory(train_directory,target_size=(150,150),batch_size=batch_size,
class_mode='categorical')
validation_gen = image_gen.flow_from_directory(validation_directory,target_size=(150,150),batch_size=batch_size,
class_mode='categorical')
results = transfer_model.fit_generator(train_gen,epochs=30,steps_per_epoch=1097//batch_size,
validation_data=validation_gen,validation_steps=272//batch_size)
transfer_model.save('tlmonkeyCNN.h5')
_, acc = transfer_model.evaluate_generator(validation_gen, steps=272 //batch_size)
print('The testing accuracy for the CNN with the 10-Species-Monkey dataset is : %.3f' % (acc * 100.0))
from tensorflow import keras
x = keras.models.load_model('tlmonkeyCNN.h5')
```
| github_jupyter |
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Hard Negative Sampling for Object Detection
You built an object detection model, evaluated it on a test set, and are happy with its accuracy. Now you deploy the model in a real-world application, and you may find that it over-fires heavily, i.e. it detects objects where none exist.
This is a common problem in machine learning because our training set only contains a limited number of images, which is not sufficient to model the appearance of every object and every background in the world. Hard negative sampling (or hard negative mining) is a useful technique to address this problem. It is a way to make the model more robust to over-fitting by identifying images which are hard for the model and hence should be added to the training set.
The technique is widely used when one has a large number of negative images, but adding all of them to the training set would (i) make training too slow and (ii) overwhelm training with too high a ratio of negatives to positives. For many negative images the model likely already performs well, and adding them to the training set would not improve accuracy. Therefore, we try to identify those negative images where the model is incorrect.
Note that hard-negative mining is a special case of active learning where the task is to identify images which are hard for the model, annotate these images with the ground truth label, and to add them to the training set. *Hard* could be defined as the model being wrong, or as the model being uncertain about a prediction.
## Overview
In this notebook, we train our model on a training set <i>T</i> as usual, test the model on un-seen negative candidate images <i>U</i>, and see on which images in <i>U</i> the model over-fires. These images are then introduced into the training set <i>T</i> and the model is re-trained. As dataset, we use the *fridge objects* images (`water_bottle`, `carton`, `can`, and `milk_bottle`), similar to the [01_training_introduction](./01_training_introduction.ipynb) notebook.
<img src="./media/hard_neg.jpg" width="600"/>
The overall hard negative mining process is as follows:
* First, prepare training set <i>T</i> and negative-candidate set <i>U</i>. A small proportion of both sets are set aside for evaluation.
* Second, load a pre-trained detection model.
* Next, mine hard negatives by following steps as shown in the figure:
1. Train the model on <i>T</i>.
2. Score the model on <i>U</i>.
3. Identify `NEGATIVE_NUM` images in <i>U</i> where the model is most incorrect and add to <i>T</i>.
* Finally, repeat these steps until the model stops improving.
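The iteration above can be sketched as a simple loop. This is a hypothetical outline, not the repo's API: `scores_fn` stands in for scoring the model on *U* (`detector.predict_dl`), and in the real notebook the model is also re-trained on *T* at the top of each iteration:

```python
def hardest(scores, n):
    """Indices of the n candidates where the model is most (wrongly) confident."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:n]

def mine(T, U, scores_fn, negative_num, iterations):
    """Move the hardest negatives from candidate set U into training set T."""
    for _ in range(iterations):
        # (Step 1, omitted here: re-train the model on T.)
        scores = scores_fn(U)                    # step 2: score the model on U
        picked = set(hardest(scores, negative_num))  # step 3: most incorrect images
        T += [u for i, u in enumerate(U) if i in picked]
        U = [u for i, u in enumerate(U) if i not in picked]
    return T, U

# Toy run: one iteration moves the highest-scoring candidate into T.
T, U = mine(["pos1"], ["n1", "n2", "n3"], lambda u: [0.9, 0.1, 0.5], 1, 1)
print(T, U)  # ['pos1', 'n1'] ['n2', 'n3']
```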
```
import sys
sys.path.append("../../")
import os
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import scrapbook as sb
import torch
import torchvision
from torchvision import transforms
from utils_cv.classification.data import Urls as UrlsIC
from utils_cv.common.data import unzip_url
from utils_cv.common.gpu import which_processor, is_windows
from utils_cv.detection.data import Urls as UrlsOD
from utils_cv.detection.dataset import DetectionDataset, get_transform
from utils_cv.detection.model import DetectionLearner, get_pretrained_fasterrcnn
from utils_cv.detection.plot import plot_detections, plot_grid
# Change matplotlib backend so that plots are shown on windows machines
if is_windows():
plt.switch_backend('TkAgg')
print(f"TorchVision: {torchvision.__version__}")
which_processor()
# Ensure edits to libraries are loaded and plotting is shown in the notebook.
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
Default parameters. Choose `NEGATIVE_NUM` so that the number of negative images to be added at each iteration corresponds to roughly 10-20% of the total number of images in the training set. If `NEGATIVE_NUM` is too low, then too few hard negatives get added to make a noticeable difference.
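The 10-20% guideline can be turned into a tiny helper. This is a sketch, not part of the repo's API; `negative_num` and its `fraction`/`minimum` parameters are hypothetical names:

```python
def negative_num(train_size, fraction=0.15, minimum=5):
    """Pick NEGATIVE_NUM as roughly 15% of the current training-set size,
    with a floor so very small training sets still get some hard negatives."""
    return max(minimum, round(train_size * fraction))

print(negative_num(100))  # 15
```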
```
# Path to training images, and to the negative images
DATA_PATH = unzip_url(UrlsOD.fridge_objects_path, exist_ok=True)
NEG_DATA_PATH = unzip_url(UrlsIC.fridge_objects_negatives_path, exist_ok=True)
# Number of negative images to add to the training set after each negative mining iteration.
# Here set to 10, but this value should be around 10-20% of the total number of images in the training set.
NEGATIVE_NUM = 10
# Model parameters corresponding to the "fast_inference" parameters in the 03_training_accuracy_vs_speed notebook.
EPOCHS = 10
LEARNING_RATE = 0.005
IM_SIZE = 500
BATCH_SIZE = 2
# Use GPU if available
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(f"Using torch device: {device}")
assert str(device)=="cuda", "Model evaluation requires CUDA capable GPU"
```
## 1. Prepare datasets
We prepare our datasets in the following way:
* Training images in `data.train_ds` which includes initially only *fridge objects* images, and after running hard-negative mining also negative images.
* Negative candidate images in `neg_data.train_ds`.
* Test images in `data.test_ds` to evaluate accuracy on *fridge objects* images, and in `neg_data.test_ds` to evaluate how often the model misfires on images which do not contain an object-of-interest.
```
# Model training dataset T, split into 75% training and 25% test
data = DetectionDataset(DATA_PATH, train_pct=0.75)
print(f"Positive dataset: {len(data.train_ds)} training images and {len(data.test_ds)} test images.")
# Negative images split into hard-negative mining candidates U, and a negative test set.
# Setting "allow_negatives=True" since the negative images don't have an .xml file with ground truth annotations
neg_data = DetectionDataset(NEG_DATA_PATH, train_pct=0.80, batch_size=BATCH_SIZE,
im_dir = "", allow_negatives = True,
train_transforms = get_transform(train=False))
print(f"Negative dataset: {len(neg_data.train_ds)} candidates for hard negative mining and {len(neg_data.test_ds)} test images.")
```
## 2. Prepare a model
Initialize a pre-trained Faster R-CNN model similar to the [01_training_introduction](./01_training_introduction.ipynb) notebook.
```
# Pre-trained Faster R-CNN model
detector = DetectionLearner(data, im_size=IM_SIZE)
# Record after each mining iteration the validation accuracy and how many objects were found in the negative test set
valid_accs = []
num_neg_detections = []
```
## 3. Train the model on *T*
<a id='train'></a>
Model training. As described at the start of this notebook, you likely need to repeat the steps from here until the end of the notebook several times to achieve optimal results.
```
# Fine-tune model. After each epoch prints the accuracy on the validation set.
detector.fit(EPOCHS, lr=LEARNING_RATE, print_freq=30)
```
Show the accuracy on the validation set for this and all previous mining iterations.
```
# Get validation accuracy on test set at IOU=0.5:0.95
acc = float(detector.ap[-1]["bbox"])
valid_accs.append(acc)
# Plot validation accuracy versus number of hard-negative mining iterations
from utils_cv.common.plot import line_graph
line_graph(
values=(valid_accs),
labels=("Validation"),
x_guides=range(len(valid_accs)),
x_name="Hard negative mining iteration",
y_name="mAP@0.5:0.95",
)
```
## 4. Score the model on *U*
Run inference on all negative candidate images. The images where the model is most incorrect will later be added as hard negatives to the training set.
```
detections = detector.predict_dl(neg_data.train_dl, threshold=0)
detections[0]
```
Count how many objects were detected in the negative test set. This number typically goes down dramatically after a few mining iterations, and is an indicator of how much the model over-fires on unseen images.
```
# Count number of mis-detections on negative test set
test_detections = detector.predict_dl(neg_data.test_dl, threshold=0)
bbox_scores = [bbox.score for det in test_detections for bbox in det['det_bboxes']]
num_neg_detections.append(len(bbox_scores))
# Plot
from utils_cv.common.plot import line_graph
line_graph(
values=(num_neg_detections),
labels=("Negative test set"),
x_guides=range(len(num_neg_detections)),
x_name="Hard negative mining iteration",
y_name="Number of detections",
)
```
## 5. Hard negative mining
Use the negative candidate images where the model is most incorrect as hard negatives.
```
# For each image, get maximum score (i.e. confidence in the detection) over all detected bounding boxes in the image
max_scores = []
for idx, detection in enumerate(detections):
if len(detection['det_bboxes']) > 0:
max_score = max([d.score for d in detection['det_bboxes']])
else:
max_score = float('-inf')
max_scores.append(max_score)
# Use the n images with highest maximum score as hard negatives
hard_im_ids = np.argsort(max_scores)[::-1]
hard_im_ids = hard_im_ids[:NEGATIVE_NUM]
hard_im_scores =[max_scores[i] for i in hard_im_ids]
print(f"Identified {len(hard_im_scores)} hard negative images with detection scores in range {min(hard_im_scores):4.2f} to {max(hard_im_scores):4.2f}")
```
Plot some of the identified hard negative images. The model likely mistakes objects which were not part of the training set for the objects-of-interest.
```
# Get image paths and ground truth boxes for the hard negative images
dataset_ids = [detections[i]['idx'] for i in hard_im_ids]
im_paths = [neg_data.train_ds.dataset.im_paths[i] for i in dataset_ids]
gt_bboxes = [neg_data.train_ds.dataset.anno_bboxes[i] for i in dataset_ids]
# Plot
def _grid_helper():
for i in hard_im_ids:
yield detections[i], neg_data, None, None
plot_grid(plot_detections, _grid_helper(), rows=1)
```
## 6. Add hard negatives to *T*
We now add the identified hard negative images to the training set.
```
# Add identified hard negatives to training set
data.add_images(im_paths, gt_bboxes, target = "train")
print(f"Added {len(im_paths)} hard negative images. Now: {len(data.train_ds)} training images and {len(data.test_ds)} test images")
print(f"Completed {len(valid_accs)} hard negative iterations.")
# Preserve some of the notebook outputs
sb.glue("valid_accs", valid_accs)
sb.glue("hard_im_scores", list(hard_im_scores))
```
## Repeat
Now, **repeat** all steps starting from "[3. Train the model on T](#train)" to re-train the model on the training set *T* with the added hard negatives, and to add more hard negative images to the training set. **Stop** once the accuracy `valid_accs` stops improving and the number of (mis)detections on the negative test set `num_neg_detections` stops decreasing.
| github_jupyter |
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'prepare/mesolitica-tpu.json'
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from google.cloud import storage
client = storage.Client()
bucket = client.bucket('mesolitica-tpu-general')
best = '1050000'
directory = 't5-3x-super-tiny-true-case-4k'
!rm -rf output out {directory}
!mkdir {directory}
model = best
blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00000-of-00002')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00000-of-00002')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.data-00001-of-00002')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.data-00001-of-00002')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.index')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.index')
blob = bucket.blob(f'{directory}/model.ckpt-{model}.meta')
blob.download_to_filename(f'{directory}/model.ckpt-{model}.meta')
blob = bucket.blob(f'{directory}/checkpoint')
blob.download_to_filename(f'{directory}/checkpoint')
blob = bucket.blob(f'{directory}/operative_config.gin')
blob.download_to_filename(f'{directory}/operative_config.gin')
with open(f'{directory}/checkpoint', 'w') as fopen:
fopen.write(f'model_checkpoint_path: "model.ckpt-{model}"')
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-model')
tar = 't5-3x-super-tiny-true-case-4k-2021-09-10.tar.gz'
os.system(f'tar -czvf {tar} {directory}')
outPutname = f'finetuned/{tar}'
b2_bucket.upload_local_file(
local_file=tar,
file_name=outPutname,
file_infos=file_info,
)
os.system(f'rm {tar}')
import tensorflow as tf
import tensorflow_datasets as tfds
import t5
model = t5.models.MtfModel(
model_dir=directory,
tpu=None,
tpu_topology=None,
model_parallelism=1,
batch_size=1,
sequence_length={"inputs": 256, "targets": 256},
learning_rate_schedule=0.003,
save_checkpoints_steps=5000,
keep_checkpoint_max=3,
iterations_per_loop=100,
mesh_shape="model:1,batch:1",
mesh_devices=["cpu:0"]
)
!rm -rf output/*
import gin
from t5.data import sentencepiece_vocabulary
DEFAULT_SPM_PATH = 'prepare/sp10m.cased.ms-en-4k.model'
DEFAULT_EXTRA_IDS = 100
model_dir = directory
def get_default_vocabulary():
return sentencepiece_vocabulary.SentencePieceVocabulary(
DEFAULT_SPM_PATH, DEFAULT_EXTRA_IDS)
with gin.unlock_config():
gin.parse_config_file(t5.models.mtf_model._operative_config_path(model_dir))
gin.bind_parameter("Bitransformer.decode.beam_size", 1)
gin.bind_parameter("Bitransformer.decode.temperature", 0)
gin.bind_parameter("utils.get_variable_dtype.slice_dtype", "float32")
gin.bind_parameter(
"utils.get_variable_dtype.activation_dtype", "float32")
vocabulary = t5.data.SentencePieceVocabulary(DEFAULT_SPM_PATH)
estimator = model.estimator(vocabulary, disable_tpu=True)
import os
checkpoint_step = t5.models.mtf_model._get_latest_checkpoint_from_dir(model_dir)
model_ckpt = "model.ckpt-" + str(checkpoint_step)
checkpoint_path = os.path.join(model_dir, model_ckpt)
checkpoint_step, model_ckpt, checkpoint_path
from mesh_tensorflow.transformer import dataset as transformer_dataset
def serving_input_fn():
inputs = tf.placeholder(
dtype=tf.string,
shape=[None],
name="inputs")
batch_size = tf.shape(inputs)[0]
padded_inputs = tf.pad(inputs, [(0, tf.mod(-tf.size(inputs), batch_size))])
dataset = tf.data.Dataset.from_tensor_slices(padded_inputs)
dataset = dataset.map(lambda x: {"inputs": x})
dataset = transformer_dataset.encode_all_features(dataset, vocabulary)
dataset = transformer_dataset.pack_or_pad(
dataset=dataset,
length=model._sequence_length,
pack=False,
feature_keys=["inputs"]
)
dataset = dataset.batch(tf.cast(batch_size, tf.int64))
features = tf.data.experimental.get_single_element(dataset)
return tf.estimator.export.ServingInputReceiver(
features=features, receiver_tensors=inputs)
out = estimator.export_saved_model('output', serving_input_fn, checkpoint_path=checkpoint_path)
config = tf.ConfigProto()
config.allow_soft_placement = True
sess = tf.Session(config = config)
meta_graph_def = tf.saved_model.loader.load(
sess,
[tf.saved_model.tag_constants.SERVING],
out)
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, '3x-super-tiny-true-case-4k/model.ckpt')
strings = [
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('encoder' in n.op
or 'decoder' in n.name
or 'shared' in n.name
or 'inputs' in n.name
or 'output' in n.name
or 'SentenceTokenizer' in n.name
or 'self/Softmax' in n.name)
and 'adam' not in n.name
and 'Assign' not in n.name
]
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names,
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('3x-super-tiny-true-case-4k', strings)
import struct
unknown = b'\xff\xff\xff\xff'
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
for node in graph_def.node:
if node.op == 'RefSwitch':
node.op = 'Switch'
for index in range(len(node.input)):
if 'moving_' in node.input[index]:
node.input[index] = node.input[index] + '/read'
elif node.op == 'AssignSub':
node.op = 'Sub'
if 'use_locking' in node.attr: del node.attr['use_locking']
elif node.op == 'AssignAdd':
node.op = 'Add'
if 'use_locking' in node.attr: del node.attr['use_locking']
elif node.op == 'Assign':
node.op = 'Identity'
if 'use_locking' in node.attr: del node.attr['use_locking']
if 'validate_shape' in node.attr: del node.attr['validate_shape']
if len(node.input) == 2:
node.input[0] = node.input[1]
del node.input[1]
if 'Reshape/shape' in node.name or 'Reshape_1/shape' in node.name:
b = node.attr['value'].tensor.tensor_content
arr_int = [int.from_bytes(b[i:i + 4], 'little') for i in range(0, len(b), 4)]
if len(arr_int):
arr_byte = [unknown] + [struct.pack('<i', i) for i in arr_int[1:]]
arr_byte = b''.join(arr_byte)
node.attr['value'].tensor.tensor_content = arr_byte
if len(node.attr['value'].tensor.int_val):
node.attr['value'].tensor.int_val[0] = -1
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('3x-super-tiny-true-case-4k/frozen_model.pb')
i = g.get_tensor_by_name('import/inputs:0')
o = g.get_tensor_by_name('import/SelectV2_3:0')
i, o
test_sess = tf.Session(graph = g)
import sentencepiece as spm
sp_model = spm.SentencePieceProcessor()
sp_model.Load(DEFAULT_SPM_PATH)
string1 = 'FORMAT TERBUKA. FORMAT TERBUKA IALAH SUATU FORMAT FAIL UNTUK TUJUAN MENYIMPAN DATA DIGITAL, DI MANA FORMAT INI DITAKRIFKAN BERDASARKAN SPESIFIKASI YANG DITERBITKAN DAN DIKENDALIKAN PERTUBUHAN PIAWAIAN , SERTA BOLEH DIGUNA PAKAI KHALAYAK RAMAI .'
string2 = 'Husein ska mkn ayam dkat kampng Jawa'
strings = [string1, string2]
[f'kes benar: {s}' for s in strings]
%%time
o_ = test_sess.run(o, feed_dict = {i: [f'kes benar: {s}' for s in strings]})
o_.shape
for k in range(len(o_)):
print(k, sp_model.DecodeIds(o_[k].tolist()))
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(minimum_size=1536000)',
#'quantize_weights(fallback_min=-10240, fallback_max=10240)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = '3x-super-tiny-true-case-4k/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['inputs'],
['SelectV2_3'], transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('3x-super-tiny-true-case-4k/frozen_model.pb.quantized')
i = g.get_tensor_by_name('import/inputs:0')
o = g.get_tensor_by_name('import/SelectV2_3:0')
i, o
test_sess = tf.InteractiveSession(graph = g)
file = '3x-super-tiny-true-case-4k/frozen_model.pb.quantized'
outPutname = 'true-case/3x-super-tiny-t5-4k-quantized/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = '3x-super-tiny-true-case-4k/frozen_model.pb'
outPutname = 'true-case/3x-super-tiny-t5-4k/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
```
| github_jupyter |
# Venture Funding with Deep Learning
## Steps:
* Prepare the data for use on a neural network model.
* Compile and evaluate a binary classification model using a neural network.
* Optimize the neural network model.
```
# Imports
import pandas as pd
from pathlib import Path
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
import warnings
warnings.filterwarnings('ignore')
```
---
## Prepare the data to be used on a neural network model
### Step 1: Read the `applicants_data.csv` file into a Pandas DataFrame. Review the DataFrame, looking for categorical variables that need to be encoded, as well as columns that define features and target variables.
```
# Read the applicants_data.csv file from the Resources folder into a Pandas DataFrame
applicant_data_df = pd.read_csv(Path("./Resources/applicants_data.csv"))
# Review the DataFrame
applicant_data_df.head()
# Review the data types associated with the columns
applicant_data_df.dtypes
```
### Step 2: Drop the “EIN” (Employer Identification Number) and “NAME” columns from the DataFrame, because they are not relevant to the binary classification model.
```
# Drop the 'EIN' and 'NAME' columns from the DataFrame
applicant_data_df = applicant_data_df.drop(columns= ["EIN", "NAME"])
# Review the DataFrame
applicant_data_df.head()
```
### Step 3: Encode the dataset’s categorical variables using `OneHotEncoder`, and then place the encoded variables into a new DataFrame.
```
# Create a list of categorical variables
categorical_variables = list(applicant_data_df.dtypes[applicant_data_df.dtypes=="object"].index)
# Display the categorical variables list
categorical_variables
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Encode the categorical variables using OneHotEncoder
encoded_data = enc.fit_transform(applicant_data_df[categorical_variables])
# Create a DataFrame with the encoded variables
encoded_df = pd.DataFrame(encoded_data, columns= enc.get_feature_names(categorical_variables))
# Review the DataFrame
encoded_df.head()
```
### Step 4: Add the original DataFrame’s numerical variables to the DataFrame containing the encoded variables.
```
# Create a DataFrame with the columns containing numerical variables from the original dataset
numerical_variables_df= applicant_data_df.drop(columns= categorical_variables)
numerical_variables_df.head()
# Add the numerical variables from the original DataFrame to the one-hot encoding DataFrame
encoded_df = pd.concat((encoded_df, numerical_variables_df), axis =1)
# Review the Dataframe
encoded_df.head()
```
### Step 5: Using the preprocessed data, create the features (`X`) and target (`y`) datasets. The target dataset should be defined by the preprocessed DataFrame column “IS_SUCCESSFUL”. The remaining columns should define the features dataset.
```
# Define the target set y using the IS_SUCCESSFUL column
y = encoded_df["IS_SUCCESSFUL"]
# Display a sample of y
y[:5]
# Define features set X by selecting all columns but IS_SUCCESSFUL
X = encoded_df.drop(columns= "IS_SUCCESSFUL")
# Review the features DataFrame
X.head()
```
### Step 6: Split the features and target sets into training and testing datasets.
```
# Split the preprocessed data into a training and testing dataset
# Assign the function a random_state equal to 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
### Step 7: Use scikit-learn's `StandardScaler` to scale the features data.
```
# Create a StandardScaler instance
scaler = StandardScaler()
# Fit the scaler to the features training dataset
X_scaler = scaler.fit(X_train)
# Scale the features training and testing datasets
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
---
## Compile and Evaluate a Binary Classification Model Using a Neural Network
### Step 1: Create a deep neural network by assigning the number of input features, the number of layers, and the number of neurons on each layer using Tensorflow’s Keras.
* Starting with a two-layer deep model
```
# Define the the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1 = (number_input_features + number_output_neurons)//2
# Review the number hidden nodes in the first layer
hidden_nodes_layer1
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2 = (hidden_nodes_layer1 + number_output_neurons)//2
# Review the number hidden nodes in the second layer
hidden_nodes_layer2
# Create the Sequential model instance
nn = Sequential()
# Add the first hidden layer
nn.add(Dense(units= hidden_nodes_layer1, activation= "relu", input_dim= number_input_features))
# Add the second hidden layer
nn.add(Dense(units= hidden_nodes_layer2, activation= "relu"))
# Add the output layer to the model specifying the number of output neurons and activation function
nn.add(Dense(units= number_output_neurons, activation= "sigmoid"))
# Display the Sequential model summary
nn.summary()
```
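The layer-sizing rule above is simple integer arithmetic: each hidden layer gets roughly half the units of the layer before it. A quick sketch with a hypothetical feature count (the actual count depends on the one-hot encoding):

```python
number_input_features = 116  # hypothetical count after one-hot encoding
number_output_neurons = 1

# Same halving rule as in the notebook.
hidden_nodes_layer1 = (number_input_features + number_output_neurons) // 2
hidden_nodes_layer2 = (hidden_nodes_layer1 + number_output_neurons) // 2

print(hidden_nodes_layer1, hidden_nodes_layer2)  # 58 29
```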
### Step 2: Compile and fit the model using the `binary_crossentropy` loss function, the `adam` optimizer, and the `accuracy` evaluation metric.
```
# Compile the Sequential model
nn.compile(loss= "binary_crossentropy", optimizer= "adam", metrics= ["accuracy"])
# Fit the model using 50 epochs and the training data
model_1= nn.fit(X_train_scaled, y_train, epochs= 50, verbose= 0)
```
### Step 3: Evaluate the model using the test data to determine the model’s loss and accuracy.
```
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test, verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
### Step 4: Save and export your model to an HDF5 file, and name the file `AlphabetSoup.h5`.
```
# Set the model's file path
file_path = Path("./Resources/AlphabetSoup.h5")
# Export your model to a HDF5 file
nn.save(file_path)
```
---
## Optimize the neural network model
### Step 1: To improve on the first model's predictive accuracy, we will try three models with the following optimization techniques:
1. Add more hidden layers.
2. Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
3. Add more neurons (nodes) to a hidden layer.
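Once the alternatives are trained, their test metrics can be compared side by side. A sketch assuming each model's `evaluate` results are collected as `(loss, accuracy)` pairs; the numbers below are placeholders, not actual results:

```python
# Placeholder results; in the notebook these come from nn.evaluate(...) on the test data.
results = {
    "Original": (0.55, 0.730),
    "Alternative 1 (extra layer)": (0.56, 0.731),
    "Alternative 2 (fewer features)": (0.55, 0.735),
}

best = max(results, key=lambda name: results[name][1])
for name, (loss, acc) in results.items():
    print(f"{name}: loss={loss:.2f}, accuracy={acc:.3f}")
print("Best by accuracy:", best)
```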
### Alternative Model 1
#### Optimizing the model by adding one more hidden layer(i.e. three hidden layers)
```
# Define the the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons_A1 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_A1 = (number_input_features + number_output_neurons_A1)//2
# Review the number of hidden nodes in the first layer
hidden_nodes_layer1_A1
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2_A1 = (hidden_nodes_layer1_A1 + number_output_neurons_A1)//2
# Review the number hidden nodes in the second layer
hidden_nodes_layer2_A1
# Define the number of hidden nodes for the third hidden layer
hidden_nodes_layer3_A1 = (hidden_nodes_layer2_A1 + number_output_neurons_A1)//2
# Review the number of hidden nodes in the third layer
hidden_nodes_layer3_A1
# Create the Sequential model instance
nn_A1 = Sequential()
# First, Second and Third hidden layer
nn_A1.add(Dense(units= hidden_nodes_layer1_A1, activation= "relu", input_dim= number_input_features))
nn_A1.add(Dense(units= hidden_nodes_layer2_A1, activation= "relu"))
nn_A1.add(Dense(units= hidden_nodes_layer3_A1, activation= "relu"))
# Output layer
nn_A1.add(Dense(units= number_output_neurons_A1, activation= "sigmoid"))
# Check the structure of the model
nn_A1.summary()
# Compile the Sequential model
nn_A1.compile(loss="binary_crossentropy", optimizer= "adam", metrics= ["accuracy"])
# Fit the model using 50 epochs and the training data
fit_model_A1 = nn_A1.fit(X_train_scaled, y_train, epochs= 50, verbose=0)
```
### Alternative Model 2
#### Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
```
applicant_data_reduced_df = applicant_data_df.drop(columns= ["STATUS","SPECIAL_CONSIDERATIONS"])
applicant_data_reduced_df.head()
categorical_variables_reduced = list(applicant_data_reduced_df.dtypes[applicant_data_reduced_df.dtypes=="object"].index)
numerical_variables_reduced= applicant_data_reduced_df.drop(columns= categorical_variables_reduced)
enc = OneHotEncoder(sparse=False)
encoded_data_reduced = enc.fit_transform(applicant_data_reduced_df[categorical_variables_reduced])
encoded_reduced_df = pd.DataFrame(encoded_data_reduced, columns= enc.get_feature_names(categorical_variables_reduced))
encoded_reduced_df.head()
encoded_reduced_df= pd.concat((encoded_reduced_df, numerical_variables_reduced), axis= 1)
encoded_reduced_df.head()
y_red = encoded_reduced_df["IS_SUCCESSFUL"]
X_red = encoded_reduced_df.drop(columns= "IS_SUCCESSFUL")
X_red_train, X_red_test, y_red_train, y_red_test= train_test_split(X_red, y_red, random_state=1)
sscaler= StandardScaler()
X_sscaler= sscaler.fit(X_red_train)
X_red_train_scaled= X_sscaler.transform(X_red_train)
X_red_test_scaled= X_sscaler.transform(X_red_test)
# Define the number of inputs (features) to the model
number_input_features_A2 = len(X_red_train.iloc[0])
# Review the number of features
number_input_features_A2
# Define the number of neurons in the output layer
number_output_neurons_A2 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_A2 = (number_input_features_A2+number_output_neurons_A2)//2
hidden_nodes_layer1_A2
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2_A2 = (hidden_nodes_layer1_A2 + number_output_neurons_A2)//2
hidden_nodes_layer2_A2
# Define the number of hidden nodes for the third hidden layer
hidden_nodes_layer3_A2 = (hidden_nodes_layer2_A2 + number_output_neurons_A2)//2
hidden_nodes_layer3_A2
# Create the Sequential model instance
nn_A2 = Sequential()
# First hidden layer
nn_A2.add(Dense(units= hidden_nodes_layer1_A2, activation= "relu", input_dim= number_input_features_A2))
# Second hidden layer
nn_A2.add(Dense(units= hidden_nodes_layer2_A2, activation= "relu"))
# Third hidden layer
nn_A2.add(Dense(units= hidden_nodes_layer3_A2, activation= "relu"))
# Output layer
nn_A2.add(Dense(units= number_output_neurons_A2, activation= "sigmoid"))
# Check the structure of the model
nn_A2.summary()
# Compile the model
nn_A2.compile(loss= "binary_crossentropy", optimizer= "adam", metrics= ["accuracy"])
# Fit the model
fit_model_A2 = nn_A2.fit(X_red_train_scaled, y_red_train, epochs= 50, verbose=0)
```
### Alternative Model 3
#### Increasing the number of nodes in hidden layers
```
# Define the number of inputs (features) to the model
number_input_features_A3 = 113
# Define the number of neurons in the output layer
number_output_neurons_A3 = 1
# Define the number of nodes in the first hidden layer, adding 2 more nodes
hidden_nodes_layer1_A3 = ((number_input_features_A3 + number_output_neurons_A3)//2)+2
hidden_nodes_layer1_A3
# Define the number of nodes in the second hidden layer, adding 2 more nodes
hidden_nodes_layer2_A3 = ((hidden_nodes_layer1_A3 + number_output_neurons_A3)//2)+2
hidden_nodes_layer2_A3
# Define the number of nodes in the third hidden layer, adding 2 more nodes
hidden_nodes_layer3_A3 = ((hidden_nodes_layer2_A3 + number_output_neurons_A3)//2)+2
hidden_nodes_layer3_A3
# Create the Sequential model instance
nn_A3 = Sequential()
# First hidden layer
nn_A3.add(Dense(units= hidden_nodes_layer1_A3, activation= "relu", input_dim= number_input_features_A3))
# Second hidden layer
nn_A3.add(Dense(units= hidden_nodes_layer2_A3, activation= "relu"))
# Third hidden layer
nn_A3.add(Dense(units= hidden_nodes_layer3_A3, activation= "relu"))
# Output layer
nn_A3.add(Dense(units= number_output_neurons_A3, activation= "sigmoid"))
# Check the structure of the model
nn_A3.summary()
# Compile the model
nn_A3.compile(loss= "binary_crossentropy", optimizer= "adam", metrics= ["accuracy"])
# Fit the model
fit_model_A3 = nn_A3.fit(X_red_train_scaled, y_red_train, epochs= 150, verbose=0)
```
### Step 2: After finishing your models, display the accuracy scores achieved by each model, and compare the results.
```
print("Original Model Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test, verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 1 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_A1.evaluate(X_test_scaled, y_test, verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 2 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_A2.evaluate(X_red_test_scaled, y_red_test, verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 3 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_A3.evaluate(X_red_test_scaled, y_red_test, verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
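The four (loss, accuracy) pairs printed above are easier to compare side by side. A sketch that tabulates them with pandas — the numbers below are placeholders, not the notebook's actual scores; substitute the values returned by each `evaluate()` call:

```python
import pandas as pd

# Placeholder metrics: replace with the (loss, accuracy) pairs printed above.
results = {
    "Original": (0.55, 0.73),
    "Alternative 1": (0.56, 0.73),
    "Alternative 2": (0.58, 0.72),
    "Alternative 3": (0.57, 0.73),
}
summary = pd.DataFrame(results, index=["loss", "accuracy"]).T
summary = summary.sort_values("accuracy", ascending=False)
print(summary)
```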
### Step 3: Save each of your alternative models as an HDF5 file.
```
# Set the model's file path
file_path = Path("./Resources/model_A1.h5")
# Export your model to a HDF5 file
nn_A1.save(file_path)
# Set the file path for the second alternative model
file_path =Path("./Resources/model_A2.h5")
# Export your model to a HDF5 file
nn_A2.save(file_path)
# Set the file path for the third alternative model
file_path =Path("./Resources/model_A3.h5")
# Export your model to a HDF5 file
nn_A3.save(file_path)
```
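`model.save()` raises an error if the `Resources` directory does not exist yet; a small guard using only the standard library (a sketch — the `ensure_parent` name is made up):

```python
from pathlib import Path

def ensure_parent(file_path):
    """Create the parent directory of file_path if it does not exist yet."""
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    return path
```

Calling e.g. `ensure_parent("./Resources/model_A1.h5")` before `nn_A1.save(...)` makes each export safe on a fresh checkout.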
```
import os
import os.path as path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models, optimizers, regularizers
from tensorflow.keras.models import load_model
current_dir = os.getcwd()
file = os.path.join(path.dirname(path.dirname(current_dir)), "generate_data", "data_cwa.csv")
myData = pd.read_csv(file, delimiter=',', usecols=['cwa','credit','time','difficulty', 'score'])
my_data_copy = myData.copy()
myData.shape
myData["score"] = myData["score"].values / 100
myData["cwa"] = myData["cwa"].values / 100
myData["credit"] = myData["credit"].values / 10
myData["difficulty"] = myData["difficulty"].values / 5
myData["time"] = myData["time"].values / 6
df = pd.DataFrame(myData)
df = df.sample(frac=1)
myData = df
myData
targets = myData[['score']].values
myData.drop(('score'), axis=1, inplace=True)
data = myData.values
print(targets.shape)
print(data.shape)
num_train = int(0.5 * len(data))
num_val = int(0.25 * len(data))
num_test = int(0.25 * len(data))
train_data = data[0 : num_train]
test_data = data[num_train: num_train + num_test]
val_data = data[num_train + num_test:]
train_targets = targets[0 : num_train]
test_targets = targets[num_train: num_train + num_test]
val_targets = targets[num_train + num_test:]
print(len(train_data) + len(test_data) + len(val_data))
print(len(train_targets) + len(test_targets) + len(val_targets))
model = models.Sequential()
model.add(layers.Dense(1, activation="relu",input_shape=(train_data.shape[1],)))
# model.add(layers.Dense(1, activation="relu"))
# model.add(layers.Dropout(0.5))
# model.add(layers.Dense(1, activation="relu", kernel_regularizer=regularizers.l2(0.01)))
# model.add(layers.Dropout(0.5))
model.add(layers.Dense(1))
model.summary()
model.compile(
optimizer=optimizers.RMSprop(learning_rate=2e-4),
loss="mse",
metrics=['mae']
)
history = model.fit(train_data,
train_targets,
epochs=40,
batch_size=512,
validation_data=(val_data, val_targets)
)
acc = history.history['mae']
val_acc = history.history['val_mae']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
model.save('score_prediction_2.h5')
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets)
model = load_model('score_prediction_1.h5')
predicted = model.predict([[0.8081, 0.1, 0.458333, 0.2]])
predicted
```
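The 50/25/25 split above is done with manual slicing; it can be wrapped in a reusable helper (a sketch — `split_data` and its default fractions are assumptions mirroring the notebook; the data should already be shuffled, as `df.sample(frac=1)` did above):

```python
def split_data(data, train_frac=0.5, test_frac=0.25):
    """Positional train/test/validation split, mirroring the slicing above."""
    n_train = int(train_frac * len(data))
    n_test = int(test_frac * len(data))
    train = data[:n_train]
    test = data[n_train:n_train + n_test]
    val = data[n_train + n_test:]
    return train, test, val

train, test, val = split_data(list(range(100)))
print(len(train), len(test), len(val))  # -> 50 25 25
```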
<a href="https://colab.research.google.com/github/cxbxmxcx/EatNoEat/blob/master/Chapter_9_Build_Nutritionist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Imports
```
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os
import time
from PIL import Image
import pickle
```
Download Recipe Data
```
data_folder = 'data'
recipes_zip = tf.keras.utils.get_file('recipes.zip',
origin = 'https://www.dropbox.com/s/i1hvs96mnahozq0/Recipes5k.zip?dl=1',
extract = True)
print(recipes_zip)
data_folder = os.path.dirname(recipes_zip)
os.remove(recipes_zip)
print(data_folder)
```
Setup Folder Paths
```
!dir /root/.keras/datasets
data_folder = data_folder + '/Recipes5k/'
annotations_folder = data_folder + 'annotations/'
images_folder = data_folder + 'images/'
print(annotations_folder)
print(images_folder)
%ls /root/.keras/datasets/Recipes5k/images/
```
Extra Imports
```
from fastprogress.fastprogress import master_bar, progress_bar
from IPython.display import Image
from os import listdir
from pickle import dump
```
Setup Convnet Application
```
use_NAS = False
if use_NAS:
IMG_SIZE = 224 # 299 for Inception, 224 for NASNet
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
else:
IMG_SIZE = 299 # 299 for Inception, 224 for NASNet
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (IMG_SIZE, IMG_SIZE))
if use_NAS:
img = tf.keras.applications.nasnet.preprocess_input(img)
else:
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
foods_txt = tf.keras.utils.get_file('foods.txt',
origin = 'https://www.dropbox.com/s/xyukyq62g98dx24/foods_cat.txt?dl=1')
print(foods_txt)
def get_nutrient_array(fat, protein, carbs):
nutrients = np.array([float(fat)*4, float(protein)*4, float(carbs)*4])
nutrients /= np.linalg.norm(nutrients)
return nutrients
def get_category_array(keto, carbs, health):
return np.array([float(keto)-5, float(carbs)-5, float(health)-5])
import csv
def get_food_nutrients(nutrient_file):
foods = {}
with open(foods_txt) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
line_count = 0
for row in csv_reader:
if line_count == 0:
print(f'Column names are {", ".join(row)}')
line_count += 1
else:
categories = get_category_array(row[1],row[2],row[3])
foods[row[0]] = categories
line_count += 1
print(f'Processed {line_count} lines.')
return foods
food_nutrients = get_food_nutrients(foods_txt)
print(food_nutrients)
def load_images(food_w_nutrients, directory):
X = []
Y = []
i=0
mb = master_bar(listdir(directory))
for food_group in mb:
try:
for pic in progress_bar(listdir(directory + food_group),
parent=mb, comment='food = ' + food_group):
filename = directory + food_group + '/' + pic
image, img_path = load_image(filename)
if i < 5:
print(img_path)
i+=1
Y.append(food_w_nutrients[food_group])
X.append(image)
except:
continue
return X,Y
X, Y = load_images(food_nutrients, images_folder)
print(len(X), len(Y))
tf.keras.backend.clear_session()
if use_NAS:
# Create the base model from the pre-trained model
base_model = tf.keras.applications.NASNetMobile(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
else:
# Create the base model from the pre-trained model
base_model = tf.keras.applications.InceptionResNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
dataset
batches = dataset.batch(64)
for image_batch, label_batch in batches.take(1):
pass
image_batch.shape
train_size = int(len(X)*.8)
test_size = int(len(X)*.2)
batches = batches.shuffle(test_size)
train_dataset = batches.take(train_size)
test_dataset = batches.skip(train_size)
test_dataset = test_dataset.take(test_size)
feature_batch = base_model(image_batch)
print(feature_batch.shape)
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
if use_NAS:
fine_tune_at = 100
else:
fine_tune_at = 550
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
base_model.summary()
```
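The `get_nutrient_array` helper defined earlier rescales the calorie vector to unit Euclidean length with `np.linalg.norm`; a standalone sketch of that normalization (the `unit_normalize` name is made up, and a guard for the zero vector is added):

```python
import numpy as np

def unit_normalize(values):
    """Rescale a vector to unit Euclidean length, as in get_nutrient_array."""
    v = np.asarray(values, dtype=float)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

print(unit_normalize([3.0, 4.0]))  # -> [0.6 0.8]
```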
Add Regression Head
```
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
prediction_layer = tf.keras.layers.Dense(3)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
model = tf.keras.Sequential([
base_model,
global_average_layer,
prediction_layer
])
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=base_learning_rate),
loss=tf.keras.losses.MeanAbsoluteError(),
metrics=['mae', 'mse', 'accuracy'])
model.summary()
from google.colab import drive
drive.mount('/content/gdrive')
folder = '/content/gdrive/My Drive/Models'
if os.path.isdir(folder) == False:
os.makedirs(folder)
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = folder + "/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
period=5)
history = model.fit(batches,epochs=25, callbacks=[cp_callback])
acc = history.history['accuracy']
loss = history.history['loss']
mae = history.history['mae']
mse = history.history['mse']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Loss')
plt.legend(loc='upper right')
plt.ylabel('MAE')
plt.ylim([0,5.0])
plt.title('Training Loss')
plt.xlabel('epoch')
plt.show()
def get_test_images():
directory = '/content/'
images = []
for file in listdir(directory):
if file.endswith(".jpg"):
images.append(file)
return images
images = get_test_images()
print(images)
```
```
#@title Image Prediction { run: "auto", vertical-output: true, display-mode: "form" }
image_idx = 42 #@param {type:"slider", min:0, max:100, step:1}
cnt = len(images)
if cnt > 0:
image_idx = image_idx if image_idx < cnt else cnt - 1
image = images[image_idx]
x, _ = load_image(image)
img = x[np.newaxis, ...]
predict = model.predict(img)
print(predict+5)
print(image_idx,image)
plt.imshow(x)
```
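The `predict+5` in the cell above undoes the `-5` centering applied by `get_category_array` during training; a round-trip sketch (the helper names here are made up):

```python
import numpy as np

def center_categories(raw_scores):
    """Shift 0-10 category ratings to be centered at zero, as in get_category_array."""
    return np.asarray(raw_scores, dtype=float) - 5.0

def uncenter_predictions(predictions):
    """Map network outputs back to the original 0-10 scale (the `+5` above)."""
    return np.asarray(predictions, dtype=float) + 5.0

print(uncenter_predictions(center_categories([7, 3, 5])))  # -> [7. 3. 5.]
```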
#Build a regression model: Get started with R and Tidymodels for regression models
## Introduction to Regression - Lesson 1
#### Putting it into perspective
✅ There are many types of regression methods, and which one you pick depends on the answer you're looking for. If you want to predict the probable height for a person of a given age, you'd use `linear regression`, as you're seeking a **numeric value**. If you're interested in discovering whether a type of cuisine should be considered vegan or not, you're looking for a **category assignment** so you would use `logistic regression`. You'll learn more about logistic regression later. Think a bit about some questions you can ask of data, and which of these methods would be more appropriate.
In this section, you will work with a [small dataset about diabetes](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). Imagine that you wanted to test a treatment for diabetic patients. Machine Learning models might help you determine which patients would respond better to the treatment, based on combinations of variables. Even a very basic regression model, when visualized, might show information about variables that would help you organize your theoretical clinical trials.
That said, let's get started on this task!
<br>Artwork by @allison_horst
## 1. Loading up our tool set
For this task, we'll require the following packages:
- `tidyverse`: The [tidyverse](https://www.tidyverse.org/) is a [collection of R packages](https://www.tidyverse.org/packages) designed to make data science faster, easier, and more fun!
- `tidymodels`: The [tidymodels](https://www.tidymodels.org/) framework is a [collection of packages](https://www.tidymodels.org/packages/) for modeling and machine learning.
You can have them installed as:
`install.packages(c("tidyverse", "tidymodels"))`
The script below checks whether you have the packages required to complete this module and installs them for you in case some are missing.
```
if (!require("pacman")) install.packages("pacman")
pacman::p_load(tidyverse, tidymodels)
```
Now, let's load these awesome packages and make them available in our current R session. (This is for illustration only; `pacman::p_load()` has already done that for you.)
```
# load the core Tidyverse packages
library(tidyverse)
# load the core Tidymodels packages
library(tidymodels)
```
## 2. The diabetes dataset
In this exercise, we'll put our regression skills into display by making predictions on a diabetes dataset. The [diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt) includes `442 samples` of data around diabetes, with 10 predictor feature variables, `age`, `sex`, `body mass index`, `average blood pressure`, and `six blood serum measurements` as well as an outcome variable `y`: a quantitative measure of disease progression one year after baseline.
|Number of observations|442|
|----------------------|:---|
|Number of predictors|First 10 columns are numeric predictive values|
|Outcome/Target|Column 11 is a quantitative measure of disease progression one year after baseline|
|Predictor Information|age in years<br>sex<br>bmi, body mass index<br>bp, average blood pressure<br>s1 tc, total serum cholesterol<br>s2 ldl, low-density lipoproteins<br>s3 hdl, high-density lipoproteins<br>s4 tch, total cholesterol / HDL<br>s5 ltg, possibly log of serum triglycerides level<br>s6 glu, blood sugar level|
> 🎓 Remember, this is supervised learning, and we need a named 'y' target.
Before you can manipulate data with R, you need to import the data into R's memory, or build a connection to the data that R can use to access the data remotely.
> The [readr](https://readr.tidyverse.org/) package, which is part of the Tidyverse, provides a fast and friendly way to read rectangular data into R.
Now, let's load the diabetes dataset provided in this source URL: <https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html>
Also, we'll perform a sanity check on our data using `glimpse()` and display the first 5 rows using `slice()`.
Before going any further, let's also introduce something you will encounter often in R code 🥁🥁: the pipe operator `%>%`
The pipe operator (`%>%`) performs operations in logical sequence by passing an object forward into a function or call expression. You can think of the pipe operator as saying "and then" in your code.
```
# Import the data set
diabetes <- read_table2(file = "https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt")
# Get a glimpse and dimensions of the data
glimpse(diabetes)
# Select the first 5 rows of the data
diabetes %>%
slice(1:5)
```
`glimpse()` shows us that this data has 442 rows and 11 columns with all the columns being of data type `double`
<br>
> glimpse() and slice() are functions in [`dplyr`](https://dplyr.tidyverse.org/). Dplyr, part of the Tidyverse, is a grammar of data manipulation that provides a consistent set of verbs that help you solve the most common data manipulation challenges
<br>
Now that we have the data, let's narrow down to one feature (`bmi`) to target for this exercise. This will require us to select the desired columns. So, how do we do this?
[`dplyr::select()`](https://dplyr.tidyverse.org/reference/select.html) allows us to *select* (and optionally rename) columns in a data frame.
```
# Select predictor feature `bmi` and outcome `y`
diabetes_select <- diabetes %>%
select(c(bmi, y))
# Print the first 10 rows
diabetes_select %>%
slice(1:10)
```
## 3. Training and Testing data
It's common practice in supervised learning to *split* the data into two subsets; a (typically larger) set with which to train the model, and a smaller "hold-back" set with which to see how the model performed.
Now that we have data ready, we can see if a machine can help determine a logical split between the numbers in this dataset. We can use the [rsample](https://tidymodels.github.io/rsample/) package, which is part of the Tidymodels framework, to create an object that contains the information on *how* to split the data, and then two more rsample functions to extract the created training and testing sets:
```
set.seed(2056)
# Split 67% of the data for training and the rest for testing
diabetes_split <- diabetes_select %>%
initial_split(prop = 0.67)
# Extract the resulting train and test sets
diabetes_train <- training(diabetes_split)
diabetes_test <- testing(diabetes_split)
# Print the first 10 rows of the training set
diabetes_train %>%
slice(1:10)
```
## 4. Train a linear regression model with Tidymodels
Now we are ready to train our model!
In Tidymodels, you specify models using `parsnip()` by specifying three concepts:
- Model **type** differentiates models such as linear regression, logistic regression, decision tree models, and so forth.
- Model **mode** includes common options like regression and classification; some model types support either of these while some only have one mode.
- Model **engine** is the computational tool which will be used to fit the model. Often these are R packages, such as **`"lm"`** or **`"ranger"`**
This modeling information is captured in a model specification, so let's build one!
```
# Build a linear model specification
lm_spec <-
# Type
linear_reg() %>%
# Engine
set_engine("lm") %>%
# Mode
set_mode("regression")
# Print the model specification
lm_spec
```
After a model has been *specified*, the model can be `estimated` or `trained` using the [`fit()`](https://parsnip.tidymodels.org/reference/fit.html) function, typically using a formula and some data.
`y ~ .` means we'll fit `y` as the predicted quantity/target, explained by all the predictors/features, i.e. `.` (in this case, we only have one predictor: `bmi`).
```
# Build a linear model specification
lm_spec <- linear_reg() %>%
set_engine("lm") %>%
set_mode("regression")
# Train a linear regression model
lm_mod <- lm_spec %>%
fit(y ~ ., data = diabetes_train)
# Print the model
lm_mod
```
From the model output, we can see the coefficients learned during training. They represent the coefficients of the line of best fit that gives us the lowest overall error between the actual and predicted variable.
<br>
## 5. Make predictions on the test set
Now that we've trained a model, we can use it to predict the disease progression y for the test dataset using [parsnip::predict()](https://parsnip.tidymodels.org/reference/predict.model_fit.html). This will be used to draw the line between data groups.
```
# Make predictions for the test set
predictions <- lm_mod %>%
predict(new_data = diabetes_test)
# Print out some of the predictions
predictions %>%
slice(1:5)
```
Woohoo! 💃🕺 We just trained a model and used it to make predictions!
When making predictions, the tidymodels convention is to always produce a tibble/data frame of results with standardized column names. This makes it easy to combine the original data and the predictions in a usable format for subsequent operations such as plotting.
`dplyr::bind_cols()` efficiently binds multiple data frames by column.
```
# Combine the predictions and the original test set
results <- diabetes_test %>%
bind_cols(predictions)
results %>%
slice(1:5)
```
## 6. Plot modelling results
Now, it's time to see this visually 📈. We'll create a scatter plot of all the `y` and `bmi` values of the test set, then use the predictions to draw a line in the most appropriate place, between the model's data groupings.
R has several systems for making graphs, but `ggplot2` is one of the most elegant and most versatile. This allows you to compose graphs by **combining independent components**.
```
# Set a theme for the plot
theme_set(theme_light())
# Create a scatter plot
results %>%
ggplot(aes(x = bmi)) +
# Add a scatter plot
geom_point(aes(y = y), size = 1.6) +
# Add a line plot
geom_line(aes(y = .pred), color = "blue", size = 1.5)
```
> ✅ Think a bit about what's going on here. A straight line is running through many small dots of data, but what is it doing exactly? Can you see how you should be able to use this line to predict where a new, unseen data point should fit in relationship to the plot's y axis? Try to put into words the practical use of this model.
Congratulations, you built your first linear regression model, created a prediction with it, and displayed it in a plot!
```
# Code based on source from https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation/
# Required imports for dataset import, preprocessing and compression
"""
GAN analysis file. Takes in trained .h5 files created while training the network.
Generates test files from testing synthetic input photos (files the GAN has never seen before).
Generates psnr and ssim ratings for each model/.h5 files and loads the results into excel files.
"""
import os
from os import listdir
import numpy
from numpy import asarray
from numpy import vstack
from numpy import savez_compressed
from numpy import load
from numpy import expand_dims
from numpy.random import randint
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
from keras.models import load_model
import matplotlib
from matplotlib import pyplot
import glob
# Load images from a directory to memory
def load_images(path, size=(256,256)):
pic_list = list()
# enumerate filenames in directory, assume all are images
for filename in listdir(path):
# load and resize the image (the resizing is not being used in our implementation)
pixels = load_img(path + filename, target_size=size)
# convert to numpy array
pic_list.append(img_to_array(pixels))
return asarray(pic_list)
# Load and prepare test or validation images from compressed image files to memory
def load_numpy_images(filename):
# Load the compressed numpy array(s)
data = load(filename)
img_sets =[]
for item in data:
img_sets.append((data[item]- 127.5) / 127.5)
return img_sets
# Plot source, generated and target images all in one output
def plot_images(src_img, gen_img, tar_img):
images = vstack((src_img, gen_img, tar_img))
# scale from [-1,1] to [0,1]
images = (images + 1) / 2.0
titles = ['Source', 'Generated', 'Expected']
# plot images row by row
for i in range(len(images)):
# define subplot
pyplot.subplot(1, 3, 1 + i)
# turn off axis
pyplot.axis('off')
# plot raw pixel data
pyplot.imshow(images[i])
# show title
pyplot.title(titles[i])
pyplot.show()
# Load a single image
def load_image(filename, size=(256,256)):
# load image with the preferred size
pixels = load_img(filename, target_size=size)
# convert to numpy array
pixels = img_to_array(pixels)
# scale from [0,255] to [-1,1]
pixels = (pixels - 127.5) / 127.5
# reshape to 1 sample
pixels = expand_dims(pixels, 0)
return pixels
####################
# Convert the training dataset to a compressed numpy array (NOT USED FOR METRICS)
####################
# Source images path (synthetic images)
path = 'data/training/synthetic/'
src_images = load_images(path)
# Ground truth images path
path = 'data/training/gt/'
tar_images = load_images(path)
# Perform a quick check on shape and sizes
print('Loaded: ', src_images.shape, tar_images.shape)
# Save as a compressed numpy array
filename = 'data/training/train_256.npz'
savez_compressed(filename, src_images, tar_images)
print('Saved dataset: ', filename)
###################
# Convert the validation dataset to a compressed numpy array (.npz)
###################
# Source images path
path = 'data/validation/synthetic/'
src_images = load_images(path)
# Ground truth images path
path = 'data/validation/gt/'
tar_images = load_images(path)
# Perform a quick check on shape and sizes
print('Loaded: ', src_images.shape, tar_images.shape)
# Save as a compressed numpy array
filename = 'data/validation/validation_256.npz'
savez_compressed(filename, src_images, tar_images)
print('Saved dataset: ', filename)
# Load the validation dataset from the compressed numpy array to memory
img_sets = load_numpy_images('data/validation/validation_256.npz')
src_images = img_sets[0]
print('Loaded: ', src_images.shape)
#tar_images = img_sets[1]
#print('Loaded: ', tar_images.shape)
# Free some memory
del img_sets
# Get the list of gt image names so outputs can be named correctly
path = 'data/validation/gt/'
img_list = os.listdir(path)
exp_path = 'models/exp6/'
model_list = os.listdir(exp_path)
# loop through model/.h5 files
for model in model_list:
model_dir = 'outputs/'+model[:-3]
os.mkdir(model_dir)
# load model weights to be used in the generator
predictor = load_model(exp_path+model)
names = 0
for i in range(0, len(src_images),10 ):
# push image through generator
gen_images = predictor.predict(src_images[i:i+10])
# name and export file
for img in range(len(gen_images)):
filename = model_dir+'/'+img_list[names]
names += 1
matplotlib.image.imsave(filename, (gen_images[img]+1)/2.0)
# Code to evaluate generated images from each model run for PNSR and SSIM
import numpy as np
import matplotlib.pyplot as plt
import csv
import os
import re
import cv2
import pandas as pd
from skimage import data, img_as_float
from skimage.metrics import structural_similarity as ssim
from skimage.metrics import peak_signal_noise_ratio as psnr
exp_dir = 'outputs/' # results directory
gt_dir = 'data/validation/gt/' # ground truth directory
img_list = os.listdir(gt_dir)
column_names =[]
exp_list = [ f.name for f in os.scandir(exp_dir) if f.is_dir() ]
for exp in exp_list:
model_list = [ f.name for f in os.scandir('outputs/'+exp+'/') if f.is_dir() ]
for model in model_list:
column_names.append(exp+'_'+model)
# create data frames for excel output
psnr_df = pd.DataFrame(columns = column_names)
ssim_df = pd.DataFrame(columns = column_names)
i=0
psnr_master=[]
ssim_master=[]
for img in img_list: # loop through every image created by the generator
i+=1
# load image and create a grayscale for ssim measurement
gt = cv2.imread(gt_dir+img)
gt_gray = cv2.cvtColor(gt, cv2.COLOR_BGR2GRAY)
psnr_list=[]
ssim_list =[]
exp_list = [f.name for f in os.scandir(exp_dir) if f.is_dir()]
# for each experiment
for exp in exp_list:
model_list = [ f.name for f in os.scandir('outputs/'+exp+'/') if f.is_dir() ]
# for each set of generator weights (the .h5 file output by the experiment)
for model in model_list:
pred = cv2.imread(exp_dir+exp+'/'+model+'/'+img)
pred_gray = cv2.cvtColor(pred, cv2.COLOR_BGR2GRAY)
# calculate psnr and ssim
psnr_list.append(psnr(gt, pred, data_range=pred.max() - pred.min()))
ssim_list.append(ssim(gt_gray, pred_gray, data_range=pred.max() - pred.min()))
psnr_master.append(psnr_list)
ssim_master.append(ssim_list)
# export for excel use
psnr_df = pd.DataFrame(psnr_master, columns = column_names)
psnr_df.index = img_list
psnr_df.to_csv("PSNR.csv")
ssim_df = pd.DataFrame(ssim_master, columns = column_names)
ssim_df.index = img_list
ssim_df.to_csv("SSIM.csv")
import PIL
pic_list = ['sidewalk winter -grayscale -gray_05189.jpg', 'sidewalk winter -grayscale -gray_07146.jpg', 'snow_animal_00447.jpg', 'snow_animal_03742.jpg', 'snow_intersection_00058.jpg', 'snow_nature_1_105698.jpg','snow_nature_1_108122.jpg','snow_nature_1_108523.jpg','snow_walk_00080.jpg','winter intersection -snow_00399.jpg','winter__street_03783.jpg','winter__street_05208.jpg']
pic_list_dims = [(426, 640), (538, 640), (640, 427), (432, 640), (480, 640), (640, 527), (480, 640), (427, 640), (640, 427), (502, 640), (269, 640), (427, 640)]
i=0
# load an image
def load_image(filename, size=(256,256)):
# load image with the preferred size
pixels = load_img(filename, target_size=size)
# convert to numpy array
pixels = img_to_array(pixels)
# scale from [0,255] to [-1,1]
pixels = (pixels - 127.5) / 127.5
# reshape to 1 sample
pixels = expand_dims(pixels, 0)
return pixels
src_path = 'data/realistic_full/'
src_filename = pic_list[i]
src_image = load_image(src_path+src_filename)
print('Loaded', src_image.shape)
model_path = 'models/Experiment4/'
model_filename = 'model_125000.h5'
predictor = load_model(model_path+model_filename)
gen_img = predictor.predict(src_image)
# scale from [-1,1] to [0,1]
gen_img = (gen_img[0] + 1) / 2.0
# plot the image
pyplot.imshow(gen_img)
pyplot.axis('off')
pyplot.show()
gen_path = 'final/'
gen_filename = src_filename
matplotlib.image.imsave(gen_path+gen_filename, gen_img)
gen_img = load_img(gen_path+gen_filename, target_size=pic_list_dims[i])
pyplot.imshow(gen_img)
pyplot.axis('off')
pyplot.show()
print(gen_img)
gen_img.save(gen_path+gen_filename)
pic_list = ['sidewalk winter -grayscale -gray_05189.jpg', 'sidewalk winter -grayscale -gray_07146.jpg', 'snow_animal_00447.jpg', 'snow_animal_03742.jpg', 'snow_intersection_00058.jpg', 'snow_nature_1_105698.jpg','snow_nature_1_108122.jpg','snow_nature_1_108523.jpg','snow_walk_00080.jpg','winter intersection -snow_00399.jpg','winter__street_03783.jpg','winter__street_05208.jpg']
src_path = 'data/realistic_full/'
dims=[]
for img in pic_list:
pixels = load_img(src_path+img)
dims.append(tuple(reversed(pixels.size)))
print(dims)
```
# Load and preprocess 2012 data
We will, over time, look over other years. Our current goal is to explore the features of a single year.
---
```
%pylab --no-import-all inline
import pandas as pd
import warnings  # used below when a value can't be converted
```
## Load the data.
---
If this fails, be sure that you've saved your own data in the prescribed location, then retry.
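A quick pre-flight check can make that failure mode clearer. This is a sketch (the helper name `check_data_file` is ours, not part of the original notebook); it simply confirms the file exists before `read_stata` produces a less friendly traceback:

```python
from pathlib import Path

def check_data_file(path):
    """Return True if the data file exists, printing a hint otherwise."""
    p = Path(path)
    if not p.exists():
        print(f"Expected data at {p}; save the file there and retry.")
        return False
    return True

# The cell below loads this path; checking it first gives a clearer error.
check_data_file("../data/interim/2012data.dta")
```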
```
file = "../data/interim/2012data.dta"
df_rawest = pd.read_stata(file)
good_columns = [#'campfin_limcorp', # "Should gov be able to limit corporate contributions"
'pid_x', # Your own party identification
'abortpre_4point', # Abortion
'trad_adjust', # Moral Relativism
'trad_lifestyle', # "Newer" lifestyles
'trad_tolerant', # Moral tolerance
'trad_famval', # Traditional Families
'gayrt_discstd_x', # Gay Job Discrimination
'gayrt_milstd_x', # Gay Military Service
'inspre_self', # National health insurance
'guarpr_self', # Guaranteed Job
'spsrvpr_ssself', # Services/Spending
'aa_work_x', # Affirmative Action ( Should this be aapost_hire_x? )
'resent_workway',
'resent_slavery',
'resent_deserve',
'resent_try',
]
df_raw = df_rawest[good_columns]
```
## Clean the data
---
```
def convert_to_int(s):
"""Turn ANES data entry into an integer.
>>> convert_to_int("1. Govt should provide many fewer services")
1
>>> convert_to_int("2")
2
"""
try:
return int(s.partition('.')[0])
except ValueError:
warnings.warn("Couldn't convert: "+s)
return np.nan
except AttributeError:
return s
def negative_to_nan(value):
"""Convert negative values to missing.
ANES codes various non-answers as negative numbers.
For instance, if a question does not pertain to the
respondent.
"""
return value if value >= 0 else np.nan
def lib1_cons2_neutral3(x):
"""Rearrange questions where 3 is neutral."""
return -3 + x if x != 1 else x
def liblow_conshigh(x):
"""Reorder questions where the liberal response is low."""
return -x
def dem_edu_special_treatment(x):
"""Eliminate negative numbers and {95. Other}"""
return np.nan if x == 95 or x < 0 else x
df = df_raw.applymap(convert_to_int)
df = df.applymap(negative_to_nan)
df.abortpre_4point = df.abortpre_4point.apply(lambda x: np.nan if x not in {1, 2, 3, 4} else -x)
df.loc[:, 'trad_lifestyle'] = df.trad_lifestyle.apply(lambda x: -x) # 1: moral relativism, 5: no relativism
df.loc[:, 'trad_famval'] = df.trad_famval.apply(lambda x: -x) # Tolerance. 1: tolerance, 7: not
df.loc[:, 'spsrvpr_ssself'] = df.spsrvpr_ssself.apply(lambda x: -x)
df.loc[:, 'resent_workway'] = df.resent_workway.apply(lambda x: -x)
df.loc[:, 'resent_try'] = df.resent_try.apply(lambda x: -x)
df.rename(inplace=True, columns=dict(zip(
good_columns,
["PartyID",
"Abortion",
"MoralRelativism",
"NewerLifestyles",
"MoralTolerance",
"TraditionalFamilies",
"GayJobDiscrimination",
"GayMilitaryService",
"NationalHealthInsurance",
"StandardOfLiving",
"ServicesVsSpending",
"AffirmativeAction",
"RacialWorkWayUp",
"RacialGenerational",
"RacialDeserve",
"RacialTryHarder",
]
)))
print("Variables now available: df")
df_rawest.pid_x.value_counts()
df.PartyID.value_counts()
df.describe()
df.head()
df.to_csv("../data/processed/2012.csv")
```
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "2a".
* You can find your original work saved in the notebook with the previous version name ("v2")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* cosine_similarity
* Additional hints.
* complete_analogy
* Replaces the list of input words with a set, and sets it outside the for loop (to follow best practices in coding).
* Spelling, grammar and wording corrections.
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
#### Load the word vectors
* For this assignment, we will use 50-dimensional GloVe vectors to represent words.
* Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
#### Embedding vectors versus one-hot vectors
* Recall from the lesson videos that one-hot vectors do not do a good job of capturing the level of similarity between words (every one-hot vector has the same Euclidean distance from any other one-hot vector).
* Embedding vectors such as GloVe vectors provide much more useful information about the meaning of individual words.
* Let's now see how you can use GloVe vectors to measure the similarity between two words.
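The claim about one-hot vectors is easy to verify: any two distinct one-hot vectors in an $n$-dimensional space are exactly $\sqrt{2}$ apart, regardless of which words they represent. A small NumPy check (the vocabulary size here is arbitrary, chosen only for illustration):

```python
import numpy as np

vocab_size = 10000  # arbitrary vocabulary size for illustration

def one_hot(i, n=vocab_size):
    """Build the one-hot vector for word index i."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Every distinct pair of one-hot vectors is the same distance apart: sqrt(2).
d1 = np.linalg.norm(one_hot(3) - one_hot(7))
d2 = np.linalg.norm(one_hot(42) - one_hot(9999))
print(d1, d2)  # both ≈ 1.41421356
```

So Euclidean distance between one-hot vectors carries no information about word meaning, which is exactly why dense embeddings are needed.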
# 1 - Cosine similarity
To measure the similarity between two words, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u \cdot v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
* $u \cdot v$ is the dot product (or inner product) of two vectors
* $||u||_2$ is the norm (or length) of the vector $u$
* $\theta$ is the angle between $u$ and $v$.
* The cosine similarity depends on the angle between $u$ and $v$.
* If $u$ and $v$ are very similar, their cosine similarity will be close to 1.
* If they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of their similarity</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate the similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
#### Additional Hints
* You may find `np.dot`, `np.sum`, or `np.sqrt` useful depending upon the implementation that you choose.
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(np.square(u)))
# Compute the L2 norm of v (≈1 line)
norm_v = np.sqrt(np.sum(np.square(v)))
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot/(norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
#### Try different words!
* After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words!
* Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
* In the word analogy task, we complete the sentence:
<font color='brown'>"*a* is to *b* as *c* is to **____**"</font>.
* An example is:
<font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>.
* We are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner:
$e_b - e_a \approx e_d - e_c$
* We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lowercase
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a],word_to_vec_map[word_b],word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# to avoid best_word being one of the input words, skip the input words
# place the input words in a set for faster searching than a list
# We will re-use this set of input words inside the for-loop
input_words_set = set([word_a, word_b, word_c])
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, skip the input words
if w in input_words_set:
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
* Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies.
* Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer:
* For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of the graded portion of the assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare the similarity between pairs of word vectors.
- Note that L2 (Euclidean) distance also works.
- For NLP applications, using a pre-trained set of word vectors is often a good way to get started.
- Even though you have finished the graded portions, we recommend you take a look at the rest of this notebook to learn about debiasing word vectors.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ represents the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them, but just using $e_{woman}-e_{man}$ gives good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender-specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a 2D screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection onto a vector-axis $v_B$ and its projection onto the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where: $u_B = \frac{u \cdot v_B}{||v_B||_2^2} v_B$ and $u_{\perp} = u - u_B$
-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = np.dot(np.dot(e,g), g)/np.dot(g.T, g)
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical rounding (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words is equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized vectors are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2) / 2
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
e_w2B = np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
corrected_e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(e1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(e2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
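That averaging idea can be sketched as follows. The helper name `average_bias_direction` is ours, not part of the assignment; it assumes `word_to_vec_map` is the GloVe dictionary loaded earlier and returns a unit-length bias direction:

```python
import numpy as np

def average_bias_direction(pairs, word_to_vec_map):
    """Average the difference vectors of several (female, male) word pairs.

    The pair list below is illustrative; any pairs that differ mainly
    in gender should work.
    """
    diffs = [word_to_vec_map[a] - word_to_vec_map[b] for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)  # unit length; only the direction matters

# Example with the GloVe map loaded earlier in this notebook:
# g_avg = average_bias_direction(
#     [("woman", "man"), ("mother", "father"), ("girl", "boy")], word_to_vec_map)
```

Substituting `g_avg` for `g` in the neutralize and equalize cells above lets you compare the two estimates of the gender direction.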
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
## Versions of used packages
We will check the PyTorch version to make sure everything works properly.
We use `python 3.6.9` and `torch==1.6.0`.
```
!python --version
!pip freeze | grep torch
!pip install transformers
```
## Error handling
**RuntimeError: CUDA out of memory...**
> This is usually caused by loading batches that are too large or by GPU memory that was not fully released. If the error still appears after reducing the batch size, reload Colab with the following steps:
1. Click "Runtime"
2. Click "Factory reset runtime"
3. Click "Reconnect"
4. Re-run all cells
## Get Data
First, go to the shared Drive and create a shortcut to the file `flower_data.zip` in your own Drive.
> Steps
1. Open the Drive [link](https://drive.google.com/file/d/1rTfeCpKXoQXI978QiTWC-AI1vwGvd5SU/view?usp=sharing)
2. Click "Add shortcut to Drive" in the upper-right corner
3. Click "My Drive"
4. Click "Add shortcut"
Completing these steps creates a shortcut to the file in your Drive; we then grant Colab access to it. After running the cell below, open the link that appears, allow authorization, copy the authorization code, paste it into the box, and press ENTER to finish connecting to your Drive.
```
from google.colab import drive
drive.mount('/content/drive')
!unzip -qq ./drive/My\ Drive/twitter_sentiment.zip
```
## Loading the dataset
### Custom dataset
A custom dataset inherits from the `torch.utils.data.Dataset` base class and mainly implements the `__getitem__()` and `__len__()` methods.
It is typically where you set the data location, define how samples are read, and specify the labels and transform conditions for the dataset.
See [torch.utils.data.Dataset](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) for more details
```
import csv
import os
import numpy as np
import torch
import torchtext
#from torchtext.datasets import text_classification
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```
# Sentence to tokens
step 1: split the sentence
step 2: add the special tokens
step 3: set the maximum sequence length
step 4: use segment ids to distinguish different sentences
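Steps 1-4 can be sketched without any libraries, to make clear what `tokenizer.encode_plus` in the dataset below produces. This toy encoder uses a made-up vocabulary and whitespace splitting (real BERT uses WordPiece ids), so it is only an illustration of the output shapes:

```python
def toy_encode(sentence, vocab, max_length=8):
    """Toy illustration of steps 1-4; the real work is done by encode_plus."""
    # step 1: split the sentence
    tokens = sentence.lower().split()
    # step 2: add the special [CLS]/[SEP] tokens
    tokens = ["[CLS]"] + tokens + ["[SEP]"]
    # step 3: pad (or truncate) to the maximum sequence length
    tokens = tokens[:max_length] + ["[PAD]"] * (max_length - len(tokens))
    input_ids = [vocab.get(t, vocab["[UNK]"]) for t in tokens]
    # attention mask: 1 for real tokens, 0 for padding
    attention_mask = [0 if t == "[PAD]" else 1 for t in tokens]
    # step 4: segment ids (all 0 here -- single-sentence input)
    token_type_ids = [0] * max_length
    return input_ids, attention_mask, token_type_ids

vocab = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3, "good": 4, "movie": 5}
ids, mask, segs = toy_encode("good movie", vocab)
print(ids)   # [2, 4, 5, 3, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```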
```
class Twitter(torch.utils.data.Dataset):
def __init__(self, csv_file, mode='train', transform=None):
self.mode = mode # 'train', 'val' or 'test'
self.data_list = []
self.labels = []
self.transform = transform
with open(csv_file, newline='') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
self.data_list.append(row['text'])
if mode != 'test':
self.labels.append(row['sentiment_label'])
def __getitem__(self, index):
#self.data_list[index] = str(self.data_list[index])
encoded_dict = tokenizer.encode_plus(self.data_list[index],add_special_tokens=True,max_length=64,
pad_to_max_length=True,return_attention_mask=True,return_tensors='pt')
sentence = encoded_dict['input_ids'].flatten()
atten_mask= encoded_dict['attention_mask'].flatten()
seg_ids =encoded_dict['token_type_ids'].flatten()
#seg_ids = [0 for _ in range(len(sentence))]
#sentence = tokenizer.tokenize(self.data_list[index])
#sentence = ['[CLS]']+sentence+['[SEP]']
#seg_ids = torch.tensor(seg_ids).unsqueeze(0)
if self.mode == 'test':
return sentence,atten_mask,seg_ids,self.data_list[index]
label = torch.tensor(int(self.labels[index]))
#MAX_sentence = 128 # at most 128 tokens per sentence
#padded_sentence = sentence + ['[PAD]' for _ in range(128 - len(sentence))]
#attn_mask = [1 if token != '[PAD]' else 0 for token in padded_sentence]
# 1 where there is a token, 0 where there is none
#seg_ids = [0 for _ in range(len(sentence))] # separate different sentences
#token_ids = tokenizer.convert_tokens_to_ids(padded_sentence)
#token_ids = torch.tensor(token_ids).unsqueeze(0)
#atten_mask = torch.tensor(attn_mask).unsqueeze(0)
return sentence,atten_mask,seg_ids,label
def __len__(self):
return len(self.data_list)
```
### Instantiate dataset
Let's instantiate three `Twitter` datasets.
+ dataset_train: for training.
+ dataset_val: for validation.
+ dataset_test: for testing.
```
dataset_train = Twitter('./twitter_sentiment/train.csv', mode='train')#, transform=transforms_train)
dataset_val = Twitter('./twitter_sentiment/val.csv', mode='val')#, transform=transforms_test)
dataset_test = Twitter('./twitter_sentiment/test.csv', mode='test')#, transform=transforms_test)
print("The first encoded sentence in dataset_train :", dataset_train.__getitem__(1)[0])
print("There are", dataset_train.__len__(), "Twitter texts in dataset_train.")
```
### `DataLoader`
`torch.utils.data.DataLoader` defines how to sample from a `dataset` and provides options such as:
+ `shuffle` : set to `True` to have the data reshuffled at every epoch
+ `batch_size` : how many samples per batch to load
See [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for more details
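Conceptually, the default `DataLoader` behavior boils down to shuffling indices and yielding fixed-size batches. A simplified, dependency-free sketch of that sampling logic (the function name and `seed` parameter are ours, for illustration only; the real `DataLoader` also handles collation, workers, and more):

```python
import random

def simple_batches(dataset, batch_size, shuffle=True, seed=None):
    """Yield lists of samples, mimicking DataLoader's basic index sampling."""
    indices = list(range(len(dataset)))
    if shuffle:
        random.Random(seed).shuffle(indices)  # reshuffled on each call ("epoch")
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]

data = list(range(10))  # stand-in dataset
batches = list(simple_batches(data, batch_size=4, shuffle=False))
print([len(b) for b in batches])  # [4, 4, 2] -- the last batch may be smaller
```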
```
from torch.utils.data import DataLoader
# set the batch size
batchSizeNums = 64
train_loader = DataLoader(dataset_train, batch_size=batchSizeNums, shuffle=True)
val_loader = DataLoader(dataset_val, batch_size=batchSizeNums, shuffle=False)
test_loader = DataLoader(dataset_test, batch_size=batchSizeNums, shuffle=False)
```
Finally! All the data is prepared.
Let's go develop our model.
# Deploy model
```
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer,BertForSequenceClassification,BertConfig
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
num_labels=3
config = BertConfig.from_pretrained("bert-base-uncased", num_labels=num_labels)
model = BertForSequenceClassification.from_pretrained('bert-base-uncased',config=config)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')#, config=config)
"""
class SentimentClassifier(nn.Module):
def __init__(self, n_classes):
super(SentimentClassifier, self).__init__()
self.bert = BertModel.from_pretrained("bert-base-uncased")
self.drop = nn.Dropout(p=0.3)
self.out = nn.Linear(self.bert.config.hidden_size,num_labels)
def forward(self, input_ids, attention_mask):
_, pooled_output = self.bert(
input_ids=input_ids,
attention_mask=attention_mask
)
#output = self.drop(pooled_output)
return self.out(pooled_output)
"""
#model=SentimentClassifier(3)
model.to(device)
```
### Define loss and optimizer
```
import torch.nn as nn
import torch.optim as optim
################################################################################
# TODO: Define loss and optimizer functions
# Try any loss or optimizer function and learning rate to get a better result
# hint: torch.nn and torch.optim
################################################################################
criterion = nn.CrossEntropyLoss()
#optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
learning_rate=1e-5
weight_decay = 1e-2
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': weight_decay},
{'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = optim.AdamW(optimizer_grouped_parameters, lr=learning_rate)
#optimizer = optim.Adam(bert_model.parameters(), lr=1e-5)
################################################################################
# End of your code
################################################################################
criterion = criterion.cuda()
```
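The grouped-parameters trick above exempts biases and LayerNorm weights from weight decay, a common convention when fine-tuning BERT. A minimal sketch of how the substring filter partitions parameters (the parameter names below are illustrative; real names come from `model.named_parameters()`):

```
# Illustrative parameter names, not the real BERT state dict
no_decay = ['bias', 'LayerNorm.weight']
names = [
    'bert.encoder.layer.0.attention.self.query.weight',
    'bert.encoder.layer.0.attention.self.query.bias',
    'bert.encoder.layer.0.output.LayerNorm.weight',
]
decayed = [n for n in names if not any(nd in n for nd in no_decay)]
undecayed = [n for n in names if any(nd in n for nd in no_decay)]
print(decayed)    # weight matrices: receive weight decay
print(undecayed)  # biases and LayerNorm weights: decay set to 0.0
```

The same list comprehensions appear in the cell above, applied to the model's actual named parameters.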
### Train the model
#### Train function
Let's define the train function.
It iterates over the input data for one epoch and updates the model with the optimizer.
Finally, it computes the mean loss and overall accuracy.
Hint: [torch.max()](https://pytorch.org/docs/stable/generated/torch.max.html#torch-max)
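As the hint suggests, `torch.max` over the class dimension returns both the top logit and its index, which serves as the predicted class. A tiny self-contained example with made-up logits:

```
import torch

logits = torch.tensor([[0.1, 2.0, -1.0],
                       [1.5, 0.2,  0.3]])   # [batch_size, num_classes]
labels = torch.tensor([1, 0])
values, predicted = torch.max(logits, dim=1)  # max logit and its class index per row
correct = (predicted == labels).sum().item()
print(predicted.tolist(), correct)
```

`predicted.argmax(dim=1)` on the logits gives the same indices; the functions below use that form.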
```
# Early Stopping
# Source: https://github.com/Bjarten/early-stopping-pytorch/blob/master/pytorchtools.py
import numpy as np
import torch
class EarlyStopping:
"""Early stops the training if validation loss doesn't improve after a given patience."""
def __init__(self, patience=7, verbose=False, delta=0, path='checkpoint.pt', trace_func=print):
"""
Args:
patience (int): How long to wait after last time validation loss improved.
Default: 7
verbose (bool): If True, prints a message for each validation loss improvement.
Default: False
delta (float): Minimum change in the monitored quantity to qualify as an improvement.
Default: 0
path (str): Path for the checkpoint to be saved to.
Default: 'checkpoint.pt'
trace_func (function): trace print function.
Default: print
"""
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.early_stop = False
self.val_loss_min = np.inf
self.delta = delta
self.path = path
self.trace_func = trace_func
def __call__(self, val_loss, model):
score = -val_loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(val_loss, model)
elif score < self.best_score + self.delta:
self.counter += 1
self.trace_func(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.save_checkpoint(val_loss, model)
self.counter = 0
def save_checkpoint(self, val_loss, model):
'''Saves the model when the validation loss decreases.'''
if self.verbose:
self.trace_func(f'Validation loss decreased ({self.val_loss_min:.6f} --> {val_loss:.6f}). Saving model ...')
torch.save(model.state_dict(), self.path)
self.val_loss_min = val_loss
# Mixup implementation reference:
# https://www.kaggle.com/c/bengaliai-cv19/discussion/128592
# !pip install torchtoolbox
# from torchtoolbox.tools import mixup_data, mixup_criterion
def train(input_data, model, criterion, optimizer):
'''
Arguments:
input_data -- iterable data; a torch.utils.data.DataLoader is preferred
model -- nn.Module whose forward() produces predictions
criterion -- loss function used to evaluate the model
optimizer -- optimizer used for weight updates
Each batch unpacks as (token_ids, atten_mask, seg_ids, label).
'''
model.train()
loss_list = []
total_count = 0
acc_count = 0
for i, data in enumerate(input_data, 0):
token_ids,atten_mask,labels = data[0].cuda(),data[1].cuda(),data[3].cuda()
#print(token_ids.size())
labels = labels.unsqueeze(1)
#print(atten_mask.size())
#print(labels.size())
########################################################################
# TODO: Forward, backward and optimize
# 1. zero the parameter gradients
# 2. process input through the network
# 3. compute the loss
# 4. propagate gradients back into the network’s parameters
# 5. Update the weights of the network
########################################################################
optimizer.zero_grad()
outputs = model(token_ids,attention_mask=atten_mask,labels=labels)
train_loss = outputs.loss  # the HF model computes the loss internally when labels are passed
train_loss.backward()
optimizer.step()
########################################################################
# End of your code
########################################################################
########################################################################
# TODO: Get the counts of correctly classified images
# 1. get the model predicted result
# 2. sum the number of this batch predicted images
# 3. sum the number of correctly classified
# 4. save this batch's loss into loss_list
# dimension of outputs: [batch_size, number of classes]
# Hint 1: use outputs.data to get no auto_grad
# Hint 2: use torch.max()
########################################################################
y_pred_prob = outputs[1]
y_pred_label = y_pred_prob.argmax(dim=1)
#_, predicted = torch.max(outputs[1], 1)
#print(y_pred_label)
#print(labels)
total_count += len(labels)
acc_count += (y_pred_label == labels.squeeze(1)).sum().item()  # squeeze so shapes match and the comparison does not broadcast
loss_list.append(train_loss.item())
########################################################################
# End of your code
########################################################################
# Compute this epoch accuracy and loss
acc = acc_count / total_count
loss = sum(loss_list) / len(loss_list)
return acc, loss
```
#### Validate function
Next part is validate function.
It works like the train function, but without the optimizer and weight-update steps.
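The eval/no-grad pattern the validate function relies on can be sketched in isolation (a toy linear model stands in for the BERT classifier):

```
import torch
import torch.nn as nn

toy = nn.Linear(4, 2)   # stand-in for the real model
toy.eval()              # disables dropout / batch-norm updates
with torch.no_grad():   # disables autograd bookkeeping
    out = toy(torch.randn(3, 4))
print(out.requires_grad)  # False: no gradient graph is built
```

Skipping gradient tracking during validation saves memory and time without affecting the computed loss or accuracy.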
```
def val(input_data, model, criterion):
model.eval()
loss_list = []
total_count = 0
acc_count = 0
with torch.no_grad():
for data in input_data:
token_ids,atten_mask,labels = data[0].cuda(),data[1].cuda(),data[3].cuda()
labels = labels.unsqueeze(1)
####################################################################
# TODO: Get the predicted result and loss
# 1. process input through the network
# 2. compute the loss
# 3. get the model predicted result
# 4. get the counts of correctly classified images
# 5. save this batch's loss into loss_list attention_mask=atten_mask,
####################################################################
outputs = model(token_ids,attention_mask=atten_mask,labels=labels)
val_loss = outputs.loss  # loss computed internally from the labels
y_pred_prob = outputs[1]
y_pred_label = y_pred_prob.argmax(dim=1)
#predicted = torch.max(outputs, 1)[1]
total_count += len(labels)
acc_count += (y_pred_label == labels.squeeze(1)).sum().item()  # squeeze so shapes match and the comparison does not broadcast
loss_list.append(val_loss.item())
####################################################################
# End of your code
####################################################################
acc = acc_count / total_count
loss = sum(loss_list) / len(loss_list)
return acc, loss
```
#### Training in a loop
Call the train and validate functions in a loop.
Take a break and wait.
```
################################################################################
# You can adjust these hyperparameters; training runs for up to max_epochs epochs   #
################################################################################
max_epochs = 4
log_interval = 2 # print acc and loss in per log_interval time
early_stopping_patience = 30
################################################################################
# End of your code #
################################################################################
train_acc_list = []
train_loss_list = []
val_acc_list = []
val_loss_list = []
# initialize the early_stopping object
early_stopping = EarlyStopping(patience=early_stopping_patience, verbose=False)
for epoch in range(1, max_epochs + 1):
train_acc, train_loss = train(train_loader,model, criterion, optimizer)
val_acc, val_loss = val(val_loader, model, criterion)
train_acc_list.append(train_acc)
train_loss_list.append(train_loss)
val_acc_list.append(val_acc)
val_loss_list.append(val_loss)
if epoch % log_interval == 0:
print('=' * 20, 'Epoch', epoch, '=' * 20)
print('Train Acc: {:.6f} Train Loss: {:.6f}'.format(train_acc, train_loss))
print(' Val Acc: {:.6f} Val Loss: {:.6f}'.format(val_acc, val_loss))
# Check whether early stopping has been triggered
early_stopping(val_loss, model)
if early_stopping.early_stop:
print("Early stopping")
break
# load the last checkpoint with the best model
model.load_state_dict(torch.load('checkpoint.pt'))
```
#### Visualize accuracy and loss
```
best_epoch = val_loss_list.index(min(val_loss_list))
print('Early Stopping Epoch:', best_epoch)
print('Train Acc: {:.6f} Train Loss: {:.6f}'.format(train_acc_list[best_epoch], train_loss_list[best_epoch]))
print(' Val Acc: {:.6f} Val Loss: {:.6f}'.format(val_acc_list[best_epoch], val_loss_list[best_epoch]))
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 4))
plt.plot(range(len(train_loss_list)), train_loss_list)
plt.plot(range(len(val_loss_list)), val_loss_list, c='r')
plt.legend(['train', 'val'])
plt.title('Loss')
plt.axvline(best_epoch, linestyle='--', color='r',label='Early Stopping Checkpoint')
plt.show()
plt.figure(figsize=(12, 4))
plt.plot(range(len(train_acc_list)), train_acc_list)
plt.plot(range(len(val_acc_list)), val_acc_list, c='r')
plt.legend(['train', 'val'])
plt.title('Acc')
plt.axvline(best_epoch, linestyle='--', color='r',label='Early Stopping Checkpoint')
plt.show()
```
### Predict Result
Predict on `test` and upload the results to Kaggle. [**Link**](https://www.kaggle.com/t/a16786b7da97419f9ba90b495dab08aa)
After the code in this section finishes, the predictions on `test` are saved to a file.
Upload procedure:
1. Click the folder icon at the bottom of the left-hand menu
2. Right-click "result.csv"
3. Click "Download"
4. Go to the linked page and click "Submit Predictions"
5. Upload the file you just downloaded
6. The system computes and reports the accuracy on 70% of the data
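The submission file produced below is a plain two-column CSV; a minimal sketch of the expected layout (the labels here are made up):

```
import csv
import io

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['index', 'sentiment_label'])
writer.writeheader()
for idx, label in enumerate([2, 0, 1]):   # made-up predictions
    writer.writerow({'index': idx, 'sentiment_label': label})
print(buf.getvalue())
```

The real cell writes one row per test example, with `index` running from 0.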
```
def predict(input_data, model):
model.eval()
output_list = []
text = []
with torch.no_grad():
for data in input_data:
token_ids,atten_mask,segments_tensors,senten= data[0].cuda(),data[1].cuda(),data[2].cuda(),data[3]
print(segments_tensors)
#labels = labels.unsqueeze(1)
#segments_tensors = segments_tensors.flatten()
outputs = model(token_ids,token_type_ids=segments_tensors,attention_mask=atten_mask)
y_pred_prob = outputs[0]
y_pred_label = y_pred_prob.argmax(dim=1)
#outputs = model(images)
#_, predicted = torch.max(outputs.data, 1)
output_list.extend(y_pred_label.to('cpu').numpy().tolist())
for r in (senten):
text.append(r)
return output_list,text
output_csv,text = predict(test_loader, model)
with open('result.csv', 'w', newline='') as csvFile:
writer = csv.DictWriter(csvFile, fieldnames=['index','sentiment_label'])
writer.writeheader()
idx = 0
#'text' :text[idx] ,
for result in output_csv:
writer.writerow({'index':idx,'sentiment_label':result})
idx+=1
```
# Molecular Hydrogen H<sub>2</sub> Ground State
Figure 7.1 from Chapter 7 of *Interstellar and Intergalactic Medium* by Ryden & Pogge, 2021,
Cambridge University Press.
Plot the ground state potential of the H<sub>2</sub> molecule (E vs R) and the bound vibrational levels.
Uses files with the H<sub>2</sub> potential curves tabulated by [Sharp, 1971, Atomic Data, 2, 119](https://ui.adsabs.harvard.edu/abs/1971AD......2..119S/abstract).
All of the data files used are in the H2 subfolder that should accompany this notebook.
```
%matplotlib inline
import math
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, LogLocator, NullFormatter
import warnings
warnings.filterwarnings('ignore',category=UserWarning, append=True)
```
## Standard Plot Format
Set up the standard plotting format and make the plot. Fonts and resolution adopted follow CUP style.
```
figName = 'Fig7_1'
# graphic aspect ratio = width/height
aspect = 4.0/3.0 # 4:3
# Text width in inches - don't change, this is defined by the print layout
textWidth = 6.0 # inches
# output format and resolution
figFmt = 'png'
dpi = 600
# Graphic dimensions
plotWidth = dpi*textWidth
plotHeight = plotWidth/aspect
axisFontSize = 10
labelFontSize = 8
lwidth = 0.5
axisPad = 5
wInches = textWidth
hInches = wInches/aspect
# Plot filename
plotFile = f'{figName}.{figFmt}'
# LaTeX is used throughout for markup of symbols, Times-Roman serif font
plt.rc('text', usetex=True)
plt.rc('font', **{'family':'serif','serif':['Times-Roman'],'weight':'bold','size':'16'})
# Font and line weight defaults for axes
matplotlib.rc('axes',linewidth=lwidth)
matplotlib.rcParams.update({'font.size':axisFontSize})
# axis and label padding
plt.rcParams['xtick.major.pad'] = f'{axisPad}'
plt.rcParams['ytick.major.pad'] = f'{axisPad}'
plt.rcParams['axes.labelpad'] = f'{axisPad}'
```
## H<sub>2</sub> energy level potential data
H$_2$ $^{1}\Sigma_{g}^{+}$ ground state data from Sharp 1971:
Potential curve: H2_1Sigma_g+_potl.dat:
* interproton distance, r, in Angstroms
* potential energy, V(r), in eV
Vibrational levels: H2_1Sigma_g+_v.dat:
* v = vibrational quantum number
* eV = energy in eV
* Rmin = minimum inter-proton distance in Angstroms
* Rmax = maximum inter-proton distance in Angstroms
```
potlFile = './H2/H2_1Sigma_g+_potl.dat'
vibFile = './H2/H2_1Sigma_g+_v.dat'
data = pd.read_csv(potlFile,sep=r'\s+')
gsR = np.array(data['R']) # radius in Angstroms
gsE = np.array(data['eV']) # energy in eV
data = pd.read_csv(vibFile,sep=r'\s+')
v = np.array(data['v']) # vibrational quantum number
vE = np.array(data['eV'])
rMin = np.array(data['Rmin'])
rMax = np.array(data['Rmax'])
# plotting limits
minR = 0.0
maxR = 5.0
minE = -0.5
maxE = 6.0
# Put labels on the vibrational levels?
label_v = True
```
### Make the Plot
Plot the ground-state potential curve as a thick black line, then draw the vibrational energy levels.
```
fig,ax = plt.subplots()
fig.set_dpi(dpi)
fig.set_size_inches(wInches,hInches,forward=True)
ax.tick_params('both',length=6,width=lwidth,which='major',direction='in',top='on',right='on')
ax.tick_params('both',length=3,width=lwidth,which='minor',direction='in',top='on',right='on')
plt.xlim(minR,maxR)
ax.xaxis.set_major_locator(MultipleLocator(1))
plt.xlabel(r'Distance between protons r [\AA]',fontsize=axisFontSize)
plt.ylim(minE,maxE)
ax.yaxis.set_major_locator(MultipleLocator(1.0))
plt.ylabel(r'Potential energy V(r) [eV]',fontsize=axisFontSize)
# plot the curves
plt.plot(gsR,gsE,'-',color='black',lw=1.5,zorder=10)
for i in range(len(v)):
plt.plot([rMin[i],rMax[i]],[vE[i],vE[i]],'-',color='black',lw=0.5,zorder=9)
if v[i]==0:
plt.text(rMin[i]-0.05,vE[i],rf'$v={v[i]}$',ha='right',va='center',fontsize=labelFontSize)
elif v[i]==13:
plt.text(rMin[i]-0.05,vE[i],rf'${v[i]}$',ha='right',va='center',fontsize=labelFontSize)
# plot and file
plt.plot()
plt.savefig(plotFile,bbox_inches='tight',facecolor='white')
```
```
%matplotlib inline
import numpy as np
import sygma
import matplotlib.pyplot as plt
from galaxy_analysis.plot.plot_styles import *
import galaxy_analysis.utilities.convert_abundances as ca
def plot_settings():
fsize = 21
rc('text',usetex=False)
rc('font',size=fsize)
return
sygma.sygma?
s = {}
metallicities = np.flip(np.array([0.02, 0.01, 0.006, 0.001, 0.0001]))
for z in metallicities:
print(z)
s[z] = sygma.sygma(iniZ = z, sn1a_on=False, #sn1a_rate='maoz',
#iniabu_table = 'yield_tables/iniabu/iniab1.0E-02GN93.ppn',
imf_yields_range=[1,25],
table = 'yield_tables/agb_and_massive_stars_C15_LC18_R_mix_resampled.txt',
mgal = 1.0)
yields = {}
yields_agb = {}
yields_no_agb = {}
for z in metallicities:
yields[z] = {}
yields_agb[z] = {}
yields_no_agb[z] = {}
for i,e in enumerate(s[z].history.elements):
index = s[z].history.elements.index(e)
yields[z][e] = np.array(s[z].history.ism_elem_yield)[:,index]
yields_agb[z][e] = np.array(s[z].history.ism_elem_yield_agb)[:,index]
yields_no_agb[z][e] = yields[z][e] - yields_agb[z][e]
for z in metallicities:
print(np.array(s[0.0001].history.
colors = {0.0001: 'C0',
0.001 : 'C1',
0.01 : 'C2',
0.02 : 'C3'}
colors = {}
for i,z in enumerate(metallicities):
colors[z] = magma((i+1)/(1.0*np.size(metallicities)+1))
colors
plot_settings()
plot_elements = ['C','N','O','Mg','Ca','Mn','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(5*3,5*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.001,0.01]: # metallicities:
label = z
ax.plot(s[z].history.age[1:]/1.0E9, np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'O':
ax.legend(loc='lower right')
count += 1
ax.set_xlim(0,2.0)
#ax.semilogy()
ax.set_ylim(-0.5,0.5)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('Time (Gyr)')
all_ax[(i,0)].set_ylabel(r'[X/H] - [X/H]$_{0.0001}$')
fig.savefig("X_H_lowz_comparison.png")
plot_settings()
plot_elements = ['C','N','O','Mg','Ca','Mn','Fe','Sr','Ba']
denom = 'Mg'
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(5*3,5*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.0001,0.001,0.01]: # metallicities:
label = z
yvals = ca.abundance_ratio_array(e,
yields_no_agb[z][e][1:],
denom, yields_no_agb[z][denom][1:],input_type='mass')
yvals2 = ca.abundance_ratio_array(e,
yields_no_agb[0.0001][e][1:],
denom, yields_no_agb[0.0001][denom][1:],input_type='mass')
if z == 0.0001 and e == 'Ca':
print(yvals)
ax.plot(s[z].history.age[1:]/1.0E9, yvals,# - yvals2,
lw = 3, color = colors[z], label = label)
# ax.semilogy()
ax.set_ylim(-1,1)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'O':
ax.legend(loc='lower right')
count += 1
ax.set_xlim(0,0.250)
ax.plot(ax.get_xlim(),[0.0,0.0],lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('Time (Gyr)')
all_ax[(i,0)].set_ylabel(r'[X/Fe] - [X/Fe]$_{0.0001}$')
fig.savefig("X_Mg.png")
#fig.savefig("X_Fe_lowz_comparison.png")
s1 = s[0.001]
np.array(s1.history.sn1a_numbers)[ (s1.history.age/ 1.0E9 < 1.1)] * 5.0E4
def wd_mass(mproj, model = 'salaris'):
if np.size(mproj) == 1:
mproj = np.array([mproj])
wd = np.zeros(np.size(mproj))
if model == 'salaris':
wd[mproj < 4.0] = 0.134 * mproj[mproj < 4.0] + 0.331
wd[mproj >= 4.0] = 0.047 * mproj[mproj >= 4.0] + 0.679
elif model == 'mist':
wd[mproj < 2.85] = 0.08*mproj[mproj<2.85]+0.489
select=(mproj>2.85)*(mproj<3.6)
wd[select]=0.187*mproj[select]+0.184
select=(mproj>3.6)
wd[select]=0.107*mproj[select]+0.471
return wd
plot_settings()
plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(6*3,6*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in metallicities: #[0.0001,0.001,0.01,0.02]: # metallicities:
label = "Z=%.4f"%(z)
y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1])
ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y,
#np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'Ba':
ax.legend(loc='upper right')
count += 1
ax.set_xlim(0.8,4.2)
#ax.semilogx()
ax.semilogy()
ax.set_ylim(2.0E-9,12.0)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('log(Time [Myr])')
all_ax[(i,0)].set_ylabel(r'Rate [M$_{\odot}$ / (10$^4$ M$_{\odot}$) / Myr]')
fig.savefig("C15_LC18_yields_rate.png")
plot_settings()
plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(6*3,6*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.0001,0.001,0.01,0.02]: # metallicities:
label = "Z=%.4f"%(z)
#y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1])
y = yields[z][e][1:]
ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y,
#np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'Ba':
ax.legend(loc='upper right')
count += 1
ax.set_xlim(0.8,4.2)
#ax.semilogx()
ax.semilogy()
ax.set_ylim(1.0E-5,2.0E-2)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('log(Time [Myr])')
all_ax[(i,0)].set_ylabel(r'Yield [M$_{\odot}$]') #/ (10$^4$ M$_{\odot}$)]')
fig.savefig("C15_LC18_yields_total.png")
plot_settings()
plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(6*3,6*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.0001,0.001,0.01,0.02]: # metallicities:
label = "Z=%.4f"%(z)
#y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1])
y = 1.0E4 * yields[z][e][1:]
ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y,
#np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'Ba':
ax.legend(loc='upper right')
count += 1
ax.set_xlim(0.8,4.2)
#ax.semilogx()
ax.semilogy()
ax.set_ylim(1.0E-6,1.0E3)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('log(Time [Myr])')
all_ax[(i,0)].set_ylabel(r'Yield [M$_{\odot}$ / (10$^4$ M$_{\odot}$)]')
fig.savefig("C15_LC18_yields_total.png")
plot_settings()
plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(6*3,6*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.0001,0.001,0.01,0.02]: # metallicities:
label = "Z=%.4f"%(z)
#y = 1.0E4 * 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1])
y = yields[z][e][1:] / yields[z][e][-1]
ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y,
#np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'O':
ax.legend(loc='lower right')
count += 1
ax.set_xlim(0.8,4.2)
#ax.semilogx()
#ax.semilogy()
ax.set_ylim(0,1.0)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('log(Time [Myr])')
all_ax[(i,0)].set_ylabel(r'Cumulative Fraction')
fig.savefig("C15_LC18_yields_fraction.png")
plot_settings()
plot_elements = ['C','N','O','Mg','Si','Ca','Fe','Sr','Ba']
fig, all_ax = plt.subplots(3,3,sharex=True,sharey=True)
fig.subplots_adjust(wspace=0,hspace=0)
fig.set_size_inches(6*3,6*3)
count = 0
for ax2 in all_ax:
for ax in ax2:
e = plot_elements[count]
for z in [0.0001,0.001,0.01,0.02]: # metallicities:
label = "Z=%.4f"%(z)
y = 1.0E6 * (yields[z][e][1:] - yields[z][e][:-1]) / (s[z].history.age[1:] - s[z].history.age[:-1]) / yields[z][e][-1]
ax.plot(np.log10(s[z].history.age[1:]/1.0E6), y,
#np.log10(yields_no_agb[z][e][1:] / yields_no_agb[0.0001][e][1:]),
lw = 3, color = colors[z], label = label)
# ax.semilogy()
#ax.set_ylim(0,2)
xy=(0.1,0.1)
ax.annotate(e,xy,xy,xycoords='axes fraction')
if e == 'Ba':
ax.legend(loc='upper right')
count += 1
ax.set_xlim(0.8,4.2)
#ax.semilogx()
ax.semilogy()
ax.set_ylim(1.0E-5,1.0E-1)
ax.plot(ax.get_xlim(), [0,0], lw=2,ls='--',color='black')
for i in np.arange(3):
all_ax[(2,i)].set_xlabel('log(Time [Myr])')
all_ax[(i,0)].set_ylabel(r'Fractional Rate [Myr$^{-1}$]')
fig.savefig("C15_LC18_yields_fractional_rate.png")
```
```
import tensorflow as tf
import numpy as np
import keras
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
import os
import cv2
import random
import keras.backend as K
import sklearn
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Dropout, Activation, Input, BatchNormalization, GlobalAveragePooling2D
from tensorflow.keras import layers
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
from tensorflow.keras.experimental import CosineDecay
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.applications import EfficientNetB3
from tensorflow.keras.layers.experimental.preprocessing import RandomCrop,CenterCrop, RandomRotation
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
ROOT_DIR = '/content/drive/MyDrive/Broner'
train_data = pd.read_csv('/content/drive/MyDrive/Broner/MURA-v1.1/train_path_label.csv' , dtype=str)
test_data = pd.read_csv('/content/drive/MyDrive/Broner/MURA-v1.1/valid_path_label.csv' , dtype=str)
train_data
train_shoulder = train_data[:1300]
train_humerus = train_data[8379:9651]
train_forearm = train_data[29940:31265]
test_shoulder = test_data[1708:2100]
test_forearm = test_data[659:960]
test_humerus = test_data[1420:1708]
def change_class(df,val):
# assigning a scalar to the column sets every row at once; no loop needed
df['label'] = val
return df
temp = change_class(train_shoulder,'0')
type(temp['label'][0])
train_shoulder = change_class(train_shoulder,'0')
train_humerus = change_class(train_humerus,'1')
train_forearm = change_class(train_forearm,'2')
test_shoulder = change_class(test_shoulder,'0')
test_humerus = change_class(test_humerus,'1')
test_forearm = change_class(test_forearm,'2')
train_data = pd.concat([train_shoulder , train_forearm , train_humerus] , ignore_index=True)
train_data
test_data = pd.concat([test_shoulder , test_forearm , test_humerus] , ignore_index=True)
test_data
train_data = train_data.sample(frac = 1)
test_data = test_data.sample(frac = 1)
from sklearn.model_selection import train_test_split
x_train , x_val , y_train , y_val = train_test_split(train_data['0'] , train_data['label'] , test_size = 0.2 , random_state=42 , stratify=train_data['label'])
val_data = pd.DataFrame()
val_data['0']=x_val
val_data['label']=y_val
val_data.reset_index(inplace=True,drop=True)
val_data
print(len(train_data) , len(test_data) , len(val_data))
def preproc(image):
image = image/255.
image[:,:,0] = (image[:,:,0]-0.485)/0.229
image[:,:,1] = (image[:,:,1]-0.456)/0.224
image[:,:,2] = (image[:,:,2]-0.406)/0.225
return image
train_datagen = keras.preprocessing.image.ImageDataGenerator(
preprocessing_function = preproc,
rotation_range=20,
horizontal_flip=True,
zoom_range = 0.15,
validation_split = 0.1)
test_datagen = keras.preprocessing.image.ImageDataGenerator(
preprocessing_function = preproc)
train_generator=train_datagen.flow_from_dataframe(
dataframe=train_data,
directory=ROOT_DIR,
x_col="0",
y_col="label",
subset="training",
batch_size=128,
seed=42,
shuffle=True,
class_mode="sparse",
target_size=(320,320))
valid_generator=train_datagen.flow_from_dataframe(
dataframe=train_data,
directory=ROOT_DIR,
x_col="0",
y_col="label",
subset="validation",
batch_size=128,
seed=42,
shuffle=True,
class_mode="sparse",
target_size=(320,320))
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dropout, Flatten, Dense, Activation, Convolution2D, MaxPooling2D
# TARGET_SIZE = 320
# cnn = Sequential()
# cnn.add(Convolution2D(filters=32, kernel_size=5, padding ="same", input_shape=(TARGET_SIZE, TARGET_SIZE, 3), activation='relu'))
# cnn.add(MaxPooling2D(pool_size=(3,3)))
# cnn.add(Convolution2D(filters=64, kernel_size=3, padding ="same",activation='relu'))
# cnn.add(MaxPooling2D(pool_size=(3,3)))
# cnn.add(Convolution2D(filters=128, kernel_size=3, padding ="same",activation='relu'))
# cnn.add(MaxPooling2D(pool_size=(3,3)))
# cnn.add(Flatten())
# cnn.add(Dense(100, activation='relu'))
# cnn.add(Dropout(0.5))
# cnn.add(Dense(3, activation='softmax'))
# cnn.summary()
from tensorflow.keras.layers import BatchNormalization, Dropout
def make_model(metrics = None):
base_model = keras.applications.InceptionResNetV2(input_shape=(*[320,320], 3),
include_top=False,
weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
base_model,
keras.layers.GlobalAveragePooling2D(),
keras.layers.Dense(512),
BatchNormalization(),
keras.layers.Activation('relu'),
Dropout(0.5),
keras.layers.Dense(256),
BatchNormalization(),
keras.layers.Activation('relu'),
Dropout(0.4),
keras.layers.Dense(128),
BatchNormalization(),
keras.layers.Activation('relu'),
Dropout(0.3),
keras.layers.Dense(64),
BatchNormalization(),
keras.layers.Activation('relu'),
keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
loss='sparse_categorical_crossentropy',
metrics=metrics)
return model
# def exponential_decay(lr0):
# def exponential_decay_fn(epoch):
# if epoch>5 and epoch%3==0:
# return lr0 * tf.math.exp(-0.1)
# else:
# return lr0
# return exponential_decay_fn
# exponential_decay_fn = exponential_decay(0.01)
# lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# checkpoint_cb = tf.keras.callbacks.ModelCheckpoint("/content/drive/MyDrive/Broner/bone.h5",
# save_best_only=True)
checkpoint_path = "/content/drive/MyDrive/Broner/best.hdf5"
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
monitor='val_sparse_categorical_accuracy',
save_best_only=True,
save_weights_only=True,
mode='max',
verbose=1)
model = make_model(metrics=['sparse_categorical_accuracy'])
model.summary()
# cnn = model
LR = 0.0005
EPOCHS=20
STEPS=train_generator.n//train_generator.batch_size
VALID_STEPS=valid_generator.n//valid_generator.batch_size
# cnn.compile(
# optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
# loss='sparse_categorical_crossentropy',
# metrics=['sparse_categorical_accuracy'])
# checkpoint_path = "/content/drive/MyDrive/Broner/best.hdf5"
# cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
# monitor='val_sparse_categorical_accuracy',
# save_best_only=True,
# save_weights_only=True,
# mode='max',
# verbose=1)
history = model.fit_generator(
train_generator,
steps_per_epoch=STEPS,
epochs=EPOCHS,
validation_data=valid_generator,
callbacks=[cp_callback],
validation_steps=VALID_STEPS)
model.save('/content/drive/MyDrive/Broner/model.h5')
plt.plot(history.history['sparse_categorical_accuracy'])
plt.plot(history.history['val_sparse_categorical_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
from keras.models import load_model
import h5py
from keras.preprocessing import image
m = load_model('/content/drive/MyDrive/Broner/model.h5')
def new_answer(img):
# img = image.load_img(path,target_size=(320,320))
img_tensor = image.img_to_array(img)
img_tensor /= 255
# normalize each channel before adding the batch axis, so the indexing matches preproc()
img_tensor[:,:,0] = (img_tensor[:,:,0]-0.485)/0.229
img_tensor[:,:,1] = (img_tensor[:,:,1]-0.456)/0.224
img_tensor[:,:,2] = (img_tensor[:,:,2]-0.406)/0.225
img_tensor = np.expand_dims(img_tensor,axis = 0)
ans = m.predict(img_tensor)
return np.argmax(ans),ans
img = cv2.imread('/content/drive/MyDrive/Broner/MURA-v1.1/valid/XR_SHOULDER/patient11187/study1_negative/image1.png')
resized = cv2.resize(img, (320,320))
new_answer(resized)
#Shoulder = '0' Humerus = '1' Forearm = '2'
import seaborn as sns
sns.set_theme(style="darkgrid")
ax = sns.countplot(x="label", data=train_data)
```
# Kaggle Home price prediction
### by Mohtadi Ben Fraj
#### In this version, we find the variables most correlated with 'SalePrice' and use them in our Sklearn models
```
# Handle table-like data and matrices
import numpy as np
import pandas as pd
# Modelling Algorithms
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
# Modelling Helpers
from sklearn.preprocessing import Imputer , Normalizer , scale, StandardScaler
from sklearn.cross_validation import train_test_split , StratifiedKFold
from sklearn.feature_selection import RFECV
# Stats helpers
from scipy.stats import norm
from scipy import stats
# Visualisation
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
# Configure visualisations
%matplotlib inline
mpl.style.use( 'ggplot' )
sns.set_style( 'white' )
pylab.rcParams[ 'figure.figsize' ] = 8 , 6
```
## Load train and test data
```
# get home price train & test csv files as a DataFrame
train = pd.read_csv("../Data/train.csv")
test = pd.read_csv("../Data/test.csv")
full = pd.concat([train, test], ignore_index=True)  # DataFrame.append was removed in pandas 2.x
print (train.shape, test.shape, full.shape)
train.head()
test.head()
train.columns
```
## Exploring 'SalePrice'
```
train.SalePrice.hist()
```
## 'SalePrice' correlation matrix
```
#correlation matrix
corrmat = train.corr()
#saleprice correlation matrix
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
```
From this correlation map we can make the following interpretations:
- 'GarageCars' and 'GarageArea' are highly correlated which makes sense. Therefore choosing only one of them is sufficient. Since 'GarageCars' has higher correlation with 'SalePrice', we eliminate 'GarageArea'
- '1stFlrSF' and 'TotalBsmtSF' are highly correlated. Therefore choosing only one of them is reasonable. We keep 'TotalBsmtSF' since it's more correlated with 'SalePrice'
- 'TotRmsAbvGrd' and 'GrLivArea' are highly correlated and therefore we will keep only 'GrLivArea'.
We keep the following variables: 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt'
```
col = ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF',
'FullBath', 'YearBuilt'
]
train_selected = train[col]
test_selected = test[col]
print(train_selected.shape, test_selected.shape)
```
## Missing Data
```
#missing data in train_selected data
total = train_selected.isnull().sum().sort_values(ascending=False)
percent = (train_selected.isnull().sum()/train_selected.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(6)
#missing data in test_selected data
total = test_selected.isnull().sum().sort_values(ascending=False)
percent = (test_selected.isnull().sum()/test_selected.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(6)
```
Only one entry in each of 'TotalBsmtSF' and 'GarageCars' is missing. We will fill them with 0
```
test_selected = test_selected.copy()  # operate on a copy, not a view of `test`
test_selected['TotalBsmtSF'] = test_selected['TotalBsmtSF'].fillna(0)
test_selected['GarageCars'] = test_selected['GarageCars'].fillna(0)
```
Make sure Test data has no more missing data
```
#missing data in test data
total = test_selected.isnull().sum().sort_values(ascending=False)
percent = (test_selected.isnull().sum()/test_selected.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(6)
```
## Categorical Features
### In this section, we will explore the categorical features in our dataset and find out which ones can be relevant to improve the accuracy of our prediction
### 1. MSSubClass: Identifies the type of dwelling involved in the sale
```
train.MSSubClass.isnull().sum()
#box plot MSSubClass/saleprice
var = 'MSSubClass'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
Some observations:
- Newer built houses (1946 and newer) are more expensive
- 2 story properties are more expensive than 1 or 1-1/2 story properties
The first option is to map each value to its own binary feature. This will result in 16 additional features
The second option is to replace those 16 features with a higher-level binary representation. For example: newer or older than 1946, 1 or 1-1/2 story property, 2 or 2-1/2 story property, PUD
### 1.a First option
```
ms_sub_class_train = pd.get_dummies(train.MSSubClass, prefix='MSSubClass')
ms_sub_class_train.shape
```
According to the feature description, there are 16 possible values for 'MSSubClass', but we only get 15, which means one value never appears in the train data. To fix this, we need to find that value and add a column of zeros to our features
```
ms_sub_class_train.head()
```
The missing value is 150. So we will add a column with label 'MSSubClass_150'
```
ms_sub_class_train['MSSubClass_150'] = 0
ms_sub_class_train.head()
```
Let's do the same thing for the test data
```
ms_sub_class_test = pd.get_dummies(test.MSSubClass, prefix='MSSubClass')
ms_sub_class_test.shape
ms_sub_class_test.head()
```
For the test data we have all 16 values so no columns need to be added
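As a hypothetical alternative to patching missing dummy columns by hand, the train and test dummy frames can be aligned automatically by reindexing both against the union of their columns (the toy data below is illustrative, not the Kaggle set):

```python
import pandas as pd

# Toy stand-ins for train/test: the value 150 is absent from the train
# split, mirroring the missing 'MSSubClass_150' situation above.
train_toy = pd.DataFrame({'MSSubClass': [20, 60, 60, 120]})
test_toy = pd.DataFrame({'MSSubClass': [20, 60, 150]})

d_train = pd.get_dummies(train_toy.MSSubClass, prefix='MSSubClass')
d_test = pd.get_dummies(test_toy.MSSubClass, prefix='MSSubClass')

# Reindex both frames on the union of columns; absent dummies become 0.
all_cols = sorted(set(d_train.columns) | set(d_test.columns))
d_train = d_train.reindex(columns=all_cols, fill_value=0)
d_test = d_test.reindex(columns=all_cols, fill_value=0)

print(list(d_train.columns) == list(d_test.columns))  # True
```

This scales to any number of categorical features without inspecting which values happen to be missing from which split.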
### 2. MSZoning: Identifies the general zoning classification of the sale
```
#box plot MSZoning/saleprice
var = 'MSZoning'
data =pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
ms_zoning_train = pd.get_dummies(train.MSZoning, prefix='MSZoning')
ms_zoning_train.shape
ms_zoning_train.head()
ms_zoning_test = pd.get_dummies(test.MSZoning, prefix='MSZoning')
ms_zoning_test.shape
```
### 3. Street: Type of road access to property
```
var = 'Street'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
# Transform Street into binary values 0 and 1
street_train = pd.Series(np.where(train.Street == 'Pave', 1, 0), name='Street')
street_train.shape
street_train.head()
street_test = pd.Series(np.where(test.Street == 'Pave', 1, 0), name='Street')
street_test.shape
street_test.head()
```
### 4. Alley: Type of alley access to property
```
var = 'Alley'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
alley_train = pd.get_dummies(train.Alley, prefix='Alley')
alley_train.shape
alley_train.head()
alley_test = pd.get_dummies(test.Alley, prefix='Alley')
alley_test.shape
```
### 5. LotShape: General shape of property
```
train.LotShape.isnull().sum()
var = 'LotShape'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
lot_shape_train = pd.get_dummies(train.LotShape, prefix='LotShape')
lot_shape_train.shape
lot_shape_test = pd.get_dummies(test.LotShape, prefix='LotShape')
lot_shape_test.shape
lot_shape_test.head()
```
### 6. LandContour: Flatness of the property
```
train.LandContour.isnull().sum()
var = 'LandContour'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
land_contour_train = pd.get_dummies(train.LandContour, prefix='LandContour')
land_contour_train.shape
land_contour_test = pd.get_dummies(test.LandContour, prefix='LandContour')
land_contour_test.shape
```
### 7. Utilities: Type of utilities available
```
train.Utilities.isnull().sum()
var = 'Utilities'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
### 8. LotConfig: Lot configuration
```
train.LotConfig.isnull().sum()
var = 'LotConfig'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
### 9. LandSlope: Slope of property
```
train.LandSlope.isnull().sum()
var = 'LandSlope'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
### 10. Neighborhood: Physical locations within Ames city limits
```
train.Neighborhood.isnull().sum()
var = 'Neighborhood'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(20, 10))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
neighborhood_train = pd.get_dummies(train.Neighborhood, prefix='N')
neighborhood_train.shape
neighborhood_test = pd.get_dummies(test.Neighborhood, prefix='N')
neighborhood_test.shape
```
### 11. Condition1: Proximity to various conditions
```
train.Condition1.isnull().sum()
var = 'Condition1'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(20, 10))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
### 12. BldgType: Type of dwelling
```
train.BldgType.isnull().sum()
var = 'BldgType'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(20, 10))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
bldgtype_train = pd.get_dummies(train.BldgType, prefix='Bldg')
bldgtype_train.shape
bldgtype_test = pd.get_dummies(test.BldgType, prefix='Bldg')
bldgtype_test.shape
```
### 13. BsmtCond: Evaluates the general condition of the basement
```
train.BsmtCond.isnull().sum()
var = 'BsmtCond'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(20, 10))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
bsmtCond_train = pd.get_dummies(train.BsmtCond, prefix='Bldg')
bsmtCond_train.shape
bsmtCond_test = pd.get_dummies(test.BsmtCond, prefix='Bldg')
bsmtCond_test.shape
```
### 14. SaleCondition: Condition of sale
```
train.SaleCondition.isnull().sum()
var = 'SaleCondition'
data = pd.concat([train['SalePrice'], train[var]], axis=1)
f, ax = plt.subplots(figsize=(20, 10))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
saleCond_train = pd.get_dummies(train.SaleCondition, prefix='saleCond')
saleCond_train.shape
saleCond_test = pd.get_dummies(test.SaleCondition, prefix='saleCond')
saleCond_test.shape
```
## Concatenate features
Let's concatenate the additional features for the train and test data
Features to choose from:
- ms_sub_class
- ms_zoning
- street
- ms_alley
- lot_shape
- land_contour
- neighborhood
- bldgtype
- bsmtCond
- saleCond
```
train_selected = pd.concat([train_selected,
ms_zoning_train,
alley_train,
land_contour_train], axis=1)
train_selected.shape
test_selected = pd.concat([test_selected,
ms_zoning_test,
alley_test,
land_contour_test], axis=1)
test_selected.shape
```
## Train, validation split
```
#train_selected_y = train.SalePrice
train_selected_y = np.log1p(train["SalePrice"])
train_selected_y.head()
train_x, valid_x, train_y, valid_y = train_test_split(train_selected,
train_selected_y,
train_size=0.7)
train_x.shape, valid_x.shape, train_y.shape, valid_y.shape, test_selected.shape
```
## Modelling
```
model = RandomForestRegressor(n_estimators=100)
#model = SVC()
#model = GradientBoostingRegressor()
#model = KNeighborsClassifier(n_neighbors = 3)
#model = GaussianNB()
#model = LogisticRegression()
model.fit(train_x, train_y)
# Score the model
print (model.score(train_x, train_y), model.score(valid_x, valid_y))
model.fit(train_selected, train_selected_y)
```
## Submission
```
test_y = model.predict(test_selected)
test_y = np.expm1(test_y)
test_id = test.Id
test_submit = pd.DataFrame({'Id': test_id, 'SalePrice': test_y})
test_submit.shape
test_submit.head()
test_submit.to_csv('house_price_pred_log.csv', index=False)
```
## Remarks
- Using the correlation method, we were able to go from 36 variables to only 6. Performance-wise, the score dropped from 0.22628 to 0.22856 using a Random Forest model. I believe we can further improve it by analysing the categorical variables.
- Using binary variables for the categorical feature 'MSSubClass' seemed to decrease the performance of the prediction
- Using binary variable for the categorical feature 'MSZoning' improved the error of the model from 0.22628 to 0.21959.
- Using binary variable for the categorical feature 'Street' decreased the performance of the model
- Using binary variable for the categorical feature 'Alley' improved the error of the model from 0.21959 to 0.21904.
- Using binary variable for the categorical feature 'LotShape' decreased the performance of the model.
- Using binary variable for the categorical feature 'LandContour' improved the error from 0.21904 to 0.21623.
- Using binary variable for the categorical feature 'Neighborhood' decreased the performance of the model.
- Using binary variable for the categorical feature 'Building type' decreased the performance of the model.
- Using the binary variable of the categorical feature 'BsmntCond' decreased the performance of the model.
- Using the binary variable for the categorical feature 'SaleCondition' decreased the performance of the model.
- Never, EVER, use a classification model for regression!!! Changed RandomForestClassifier to RandomForestRegressor and improved error from 0.21623 to 0.16517
- Applied log+1 to 'SalePrice' to remove skewness. Error improved from 0.16517 to 0.16083
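A minimal sketch of the last remark, on synthetic data rather than the Kaggle set: `np.log1p` compresses the right tail of a skewed, strictly positive target, and `np.expm1` inverts it exactly, so predictions made on the log scale can be mapped back to prices:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.RandomState(0)
prices = np.exp(rng.normal(loc=12.0, scale=0.5, size=10000))  # lognormal: right-skewed

log_prices = np.log1p(prices)     # transform the target before fitting
recovered = np.expm1(log_prices)  # exact inverse, applied to predictions

print(round(skew(prices), 2), round(skew(log_prices), 2))  # skew drops to ~0
print(np.allclose(recovered, prices))  # True
```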
## Credits
Many of the analysis and core snippets are from this very detailed post: https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python
## Links
- Are categorical variables getting lost in your random forests? (https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)
# Reinterpreting Tensors
Sometimes the data in tensors needs to be interpreted as if it had a different type or shape. For example, reading a binary file into memory produces a flat tensor of byte-valued data, which the application code may want to interpret as an array of a specific shape and possibly a different type.
DALI provides the following operations which affect tensor metadata (shape, type, layout):
* reshape
* reinterpret
* squeeze
* expand_dims
These operations neither modify nor copy the data; the output tensor is just another view of the same region of memory, which makes them very cheap.
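The same zero-copy behavior can be sketched in plain NumPy (an analogy only, not DALI code): `reshape` and `view` return new array objects over the original buffer.

```python
import numpy as np

raw = np.arange(12, dtype=np.int32)  # flat "file contents"
matrix = raw.reshape(3, 4)           # same buffer, new shape
as_bytes = raw.view(np.uint8)        # same buffer, new dtype (4x the elements)

matrix[0, 0] = 99                    # writing through one view...
print(raw[0], as_bytes.shape)        # ...is visible through the rest: 99 (48,)
```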
## Fixed Output Shape
This example demonstrates the simplest use of the `reshape` operation, assigning a new fixed shape to an existing tensor.
First, we'll import DALI and other necessary modules, and define a utility for displaying the data, which will be used throughout this tutorial.
```
import nvidia.dali as dali
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def
import nvidia.dali.types as types
import numpy as np
def show_result(outputs, names=["Input", "Output"], formatter=None):
if not isinstance(outputs, tuple):
        return show_result((outputs,), names, formatter)
outputs = [out.as_cpu() if hasattr(out, "as_cpu") else out for out in outputs]
for i in range(len(outputs[0])):
print(f"---------------- Sample #{i} ----------------")
for o, out in enumerate(outputs):
a = np.array(out[i])
s = "x".join(str(x) for x in a.shape)
title = names[o] if names is not None and o < len(names) else f"Output #{o}"
l = out.layout()
if l: l += ' '
print(f"{title} ({l}{s})")
np.set_printoptions(formatter=formatter)
print(a)
def rand_shape(dims, lo, hi):
return list(np.random.randint(lo, hi, [dims]))
```
Now let's define our pipeline: it takes data from an external source and returns it both in its original form and reshaped to a fixed square shape `[5, 5]`. Additionally, the output tensors' layout is set to `HW`
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example1(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, shape=[5, 5], layout="HW")
pipe1 = example1(lambda: np.random.randint(0, 10, size=[25], dtype=np.int32))
pipe1.build()
show_result(pipe1.run())
```
As we can see, the numbers from flat input tensors have been rearranged into 5x5 matrices.
## Reshape with Wildcards
Let's now consider a more advanced use case. Imagine you have a flattened array that represents a fixed number of columns, but the number of rows is free to vary from sample to sample. In that case, you can use a wildcard dimension by specifying its extent as `-1`. When using wildcards, the output is resized so that the total number of elements is the same as in the input.
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example2(input_data):
np.random.seed(12345)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, shape=[-1, 5])
pipe2 = example2(lambda: np.random.randint(0, 10, size=[5*np.random.randint(3, 10)], dtype=np.int32))
pipe2.build()
show_result(pipe2.run())
```
## Removing and Adding Unit Dimensions
There are two dedicated operators `squeeze` and `expand_dims` which can be used for removing and adding dimensions with unit extent. The following example demonstrates the removal of a redundant dimension as well as adding two new dimensions.
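For intuition, NumPy's counterparts behave analogously (an analogy only; DALI's `new_axis_names` has no NumPy equivalent):

```python
import numpy as np

chw = np.zeros((1, 4, 3), dtype=np.int32)  # single-channel CHW tensor
hw = np.squeeze(chw, axis=0)               # drop the unit channel dim -> (4, 3)
fchw = np.expand_dims(hw, axis=(0, 3))     # add unit dims at 0 and 3 -> (1, 4, 3, 1)

print(hw.shape, fchw.shape)
```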
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_squeeze_expand(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32)
squeezed = fn.squeeze(inp, axes=[0])
expanded = fn.expand_dims(squeezed, axes=[0, 3], new_axis_names="FC")
return inp, fn.squeeze(inp, axes=[0]), expanded
def single_channel_generator():
return np.random.randint(0, 10,
size=[1]+rand_shape(2, 1, 7),
dtype=np.int32)
pipe_squeeze_expand = example_squeeze_expand(single_channel_generator)
pipe_squeeze_expand.build()
show_result(pipe_squeeze_expand.run())
```
## Rearranging Dimensions
Reshape allows you to swap, insert or remove dimensions. The argument `src_dims` specifies which source dimension is used for a given output dimension. You can also insert a new dimension by specifying `-1` as a source dimension index.
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reorder(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, src_dims=[1,0])
pipe_reorder = example_reorder(lambda: np.random.randint(0, 10,
size=rand_shape(2, 1, 7),
dtype=np.int32))
pipe_reorder.build()
show_result(pipe_reorder.run())
```
## Adding and Removing Dimensions
Dimensions can be added or removed by specifying `src_dims` argument or by using dedicated `squeeze` and `expand_dims` operators.
The following example reinterprets single-channel data from CHW to HWC layout by discarding the leading dimension and adding a new trailing dimension. It also specifies the output layout.
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_remove_add(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32)
return inp, fn.reshape(inp,
src_dims=[1,2,-1], # select HW and add a new one at the end
layout="HWC") # specify the layout string
pipe_remove_add = example_remove_add(lambda: np.random.randint(0, 10, [1,4,3], dtype=np.int32))
pipe_remove_add.build()
show_result(pipe_remove_add.run())
```
## Relative Shape
The output shape may be calculated in relative terms, with a new extent being a multiple of a source extent.
For example, you may want to combine two subsequent rows into one - doubling the number of columns and halving the number of rows. The use of relative shape can be combined with dimension rearranging, in which case the new output extent is a multiple of a _different_ source extent.
The example below reinterprets the input as having twice as many _columns_ as the input had _rows_.
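In plain NumPy terms (an analogy, assuming a row-major buffer), `rel_shape=[0.5, 2]` combined with `src_dims=[1, 0]` computes the output extents from the swapped input extents:

```python
import numpy as np

a = np.arange(24).reshape(6, 4)     # 6 rows, 4 columns
rows, cols = a.shape
b = a.reshape(cols // 2, rows * 2)  # out dim 0 = 0.5 * cols, out dim 1 = 2 * rows

print(b.shape)  # (2, 12)
```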
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_rel_shape(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp,
rel_shape=[0.5, 2],
src_dims=[1,0])
pipe_rel_shape = example_rel_shape(
lambda: np.random.randint(0, 10,
[np.random.randint(1,7), 2*np.random.randint(1,5)],
dtype=np.int32))
pipe_rel_shape.build()
show_result(pipe_rel_shape.run())
```
## Reinterpreting Data Type
The `reinterpret` operation can view the data as if it were of a different type. When a new shape is not specified, the innermost dimension is resized accordingly.
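The NumPy analog (an analogy only, assuming a little-endian machine) is `view`: viewing `uint8` data as `uint32` shrinks the innermost extent by a factor of 4 without copying:

```python
import numpy as np

a = np.arange(8, dtype=np.uint8).reshape(2, 4)  # innermost extent: 4 bytes
b = a.view(np.uint32)                           # zero-copy -> shape (2, 1)

print(b.shape)
print([hex(int(x)) for x in b.ravel()])  # each word packs 4 consecutive bytes
```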
```
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reinterpret(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.UINT8)
return inp, fn.reinterpret(inp, dtype=dali.types.UINT32)
pipe_reinterpret = example_reinterpret(
lambda:
np.random.randint(0, 255,
[np.random.randint(1,7), 4*np.random.randint(1,5)],
dtype=np.uint8))
pipe_reinterpret.build()
def hex_bytes(x):
f = f"0x{{:0{2*x.nbytes}x}}"
return f.format(x)
show_result(pipe_reinterpret.run(), formatter={'int':hex_bytes})
```
```
# RJMC for GMMs:
import matplotlib.pyplot as plt
%matplotlib inline
from autograd import numpy as np
np.random.seed(0)
from scipy.stats import norm
from scipy.stats import dirichlet
from scipy.special import logsumexp
def gaussian_mixture_log_likelihood(X, means, stdevs, weights):
component_log_pdfs = np.array([norm.logpdf(X, loc=mean, scale=stdev) + np.log(weight) for ((mean, stdev), weight) in zip(zip(means, stdevs), weights)])
return np.sum(logsumexp(component_log_pdfs, 0))
from scipy.stats import norm, invgamma
from scipy.special import logsumexp
def unpack(theta):
assert(len(theta) % 3 == 0)
n = int(len(theta) / 3)
means, stdevs, weights = np.array(theta[:n]), np.array(theta[n:2*n]), np.array(theta[2*n:])
return means, stdevs, weights
def log_prior(theta):
means, stdevs, weights = unpack(theta)
log_prior_on_means = np.sum(norm.logpdf(means, scale=20))
log_prior_on_variances = np.sum(invgamma.logpdf((stdevs**2), 1.0))
#log_prior_on_weights = dirichlet.logpdf(weights, np.ones(len(weights)))
#log_prior_on_weights = np.sum(np.log(weights))
log_prior_on_weights = 0 # removing the prior on weights to see if this is the culprit...
return log_prior_on_means + log_prior_on_variances + log_prior_on_weights
def flat_log_p(theta):
means, stdevs, weights = unpack(theta)
if np.min(stdevs) <= 0.001: return - np.inf
log_likelihood = gaussian_mixture_log_likelihood(X=data, means=means,
stdevs=stdevs,
weights=weights)
return log_likelihood + log_prior(theta)
#n_components = 10
#true_means = np.random.rand(n_components) * 10 - 5
#true_stdevs = np.random.rand(n_components) * 0.2
#true_weights = np.random.rand(n_components)**2
#true_weights /= np.sum(true_weights)
n_data = 300
data = np.zeros(n_data)
#for i in range(n_data):
# component = np.random.choice(np.arange(n_components), p=true_weights)
# #component = np.random.randint(n_components)
# data[i] = norm.rvs(loc=true_means[component], scale=true_stdevs[component])
#n_components = 10
#true_means = np.linspace(-5,5,n_components)
#true_stdevs = np.random.rand(n_components)*0.5
#true_weights = np.random.rand(n_components)
#true_weights /= np.sum(true_weights)
#n_data = 300
#data = np.zeros(n_data)
#for i in range(n_data):
# component = np.random.randint(n_components)
# data[i] = norm.rvs(loc=true_means[component], scale=true_stdevs[component])
n_components = 3
true_means = [-5.0,0.0,5.0]
true_stdevs = np.ones(n_components)
true_weights = np.ones(n_components) / 3
n_data = 300
data = np.zeros(n_data)
for i in range(n_data):
component = np.random.randint(n_components)
data[i] = norm.rvs(loc=true_means[component], scale=true_stdevs[component])
plt.figure(figsize=(6,6))
ax = plt.subplot(111)
plt.hist(data, bins=50, density=True, alpha=0.5);  # 'normed' was removed in newer Matplotlib
x = np.linspace(-8,8, 1000)
y_tot = np.zeros(x.shape)
for i in range(n_components):
y = norm.pdf(x, loc=true_means[i], scale=true_stdevs[i]) * true_weights[i]
plt.plot(x, y, '--', color='grey',)
plt.fill_between(x, y, color='grey' ,alpha=0.2)
y_tot += y
plt.plot(x,y_tot, color='blue',)
plt.yticks([])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.title("data: {} points sampled from {} mixture components".format(n_data, n_components))
plt.ylabel('probability density')
plt.xlabel('x')
plt.xticks([-8,0,8])
np.mean(data), np.std(data)
max_components = 50
mean_perturbation_scale = 5.0
stdev_perturbation_scale = 2.0
def reversible_birth_death_move(theta, parents):
means, stdevs, weights = unpack(theta)
means, stdevs, weights = map(np.array, (means, stdevs, weights))
sum_weights_before = np.sum(weights)
n_components = len(means)
# decide whether to create a new component
if n_components == 1:
birth_probability = 1.0
log_prob_forward_over_reverse = np.log(1.0 / 0.5) # F: 100% chance of "birth" move, R: 50% chance
elif n_components == max_components:
birth_probability = 0.0
log_prob_forward_over_reverse = np.log(1.0 / 0.5) # F: 100% chance of "death" move, R: 50% chance
else:
birth_probability = 0.5
log_prob_forward_over_reverse = np.log(1.0 / 1.0) # 0
death_probability = 1.0 - birth_probability
if np.random.rand() < birth_probability:
(means, stdevs, weights, parents_prime), log_jac_u_term = reversible_birth_move(means, stdevs, weights, parents)
else:
(means, stdevs, weights, parents_prime), log_jac_u_term = reversible_death_move(means, stdevs, weights, parents)
assert(len(means) == len(stdevs))
theta_prime = np.array(means + stdevs + weights)
sum_weights_after = np.sum(weights)
assert(np.isclose(sum_weights_before, sum_weights_after))
return theta_prime, parents_prime, log_jac_u_term - log_prob_forward_over_reverse
from scipy.stats import uniform, norm
u_1_distribution = uniform(0, 1)
u_2_distribution = norm(0, 1)
u_3_distribution = norm(0, 1)
def reversible_birth_move(means, stdevs, weights, parents):
# make local copies to be extra sure we're not accidentally overwriting...
means, stdevs, weights, parents = map(list, (means, stdevs, weights, parents))
# draw all the random numbers we're going to use
i = np.random.randint(len(means)) # choose a parent component at random
u_1 = u_1_distribution.rvs()
u_2 = u_2_distribution.rvs()
u_3 = u_3_distribution.rvs()
# compute the log probability density of all the random numbers we drew
log_prob_u = np.log(1.0 / len(means)) + u_1_distribution.logpdf(u_1) + u_2_distribution.logpdf(u_2) + u_3_distribution.logpdf(u_3)
# compute the parameters of the new mixture component
weight_new = weights[i] * u_1
mean_new = (u_2 * mean_perturbation_scale) + means[i]
stdev_new = (u_3 * stdev_perturbation_scale) + stdevs[i]
# compute log determinant of the jacobian
log_jacobian_determinant = np.log(weights[i]) + np.log(mean_perturbation_scale) + np.log(stdev_perturbation_scale)
# subtract the new mixture component's weight from its parent
weights[i] -= weight_new
# update means, stdevs, weights, parents
means.append(mean_new)
stdevs.append(stdev_new)
weights.append(weight_new)
parents.append(i)
return (means, stdevs, weights, parents), (log_jacobian_determinant - log_prob_u)
def mmc_move(theta, parents):
"""Standard Metropolis Monte Carlo move.
(Contributed by JDC)
"""
theta_prime = np.array(theta)
parents_prime = list(parents)
n = int(len(theta) / 3)
SIGMA_MEAN = 0.05
SIGMA_STDDEV = 0.05
SIGMA_WEIGHT = 0.05
# different proposal sizes for mean, stdev, weight
i = np.random.randint(n)
j = np.random.randint(n)
delta_mean = SIGMA_MEAN * np.random.randn()
delta_stddev = SIGMA_STDDEV * np.random.randn()
delta_weight = SIGMA_WEIGHT * np.random.randn()
theta_prime[i] += delta_mean
theta_prime[n+i] += delta_stddev
theta_prime[2*n+i] += delta_weight
theta_prime[2*n+j] -= delta_weight
log_jac_u_term = 0.0
if np.any(theta_prime[n:2*n] <= 0.0) or np.any(theta_prime[2*n:] <= 0.0) or not np.isclose(np.sum(theta_prime[2*n:]), 1.0):
# Force reject
#print(theta_prime)
log_jac_u_term = - np.inf
return theta_prime, parents_prime, log_jac_u_term
def reversible_death_move(means, stdevs, weights, parents):
# make local copies to be extra sure we're not accidentally overwriting...
means, stdevs, weights, parents = map(list, (means, stdevs, weights, parents))
# draw all the random numbers we're going to use
i = np.random.randint(1, len(means)) # choose a component at random to remove, except component 0
# compute the log probability density of all the random numbers we drew
log_prob_u = np.log(1.0 / (len(means) - 1))
# and also the log probability density of the random numbers we would have drawn?
weight_new = weights[i]
mean_new = means[i]
stdev_new = stdevs[i]
u_1 = weight_new / weights[parents[i]]
u_2 = (mean_new - means[parents[i]] ) / mean_perturbation_scale
u_3 = (stdev_new - stdevs[parents[i]]) / stdev_perturbation_scale
log_prob_u += u_1_distribution.logpdf(u_1) + u_2_distribution.logpdf(u_2) + u_3_distribution.logpdf(u_3)
# also I think we need to compute the jacobian determinant of the inverse
inv_log_jacobian_determinant = np.log(weights[parents[i]]) + np.log(mean_perturbation_scale) + np.log(stdev_perturbation_scale)
log_jacobian_determinant = - inv_log_jacobian_determinant
# remove this mixture component, and re-allocate its weight to its parent
weights[parents[i]] += weights[i]
# update the parent list, so that any j whose parent just got deleted is assigned a new parent
for j in range(1, len(parents)):
if parents[j] == i:
parents[j] = parents[i]
# wait, this is almost certainly wrong, because the indices will change...
_ = means.pop(i)
_ = stdevs.pop(i)
_ = weights.pop(i)
_ = parents.pop(i)
# fix indices
for j in range(1, len(parents)):
if parents[j] > i:
parents[j] -= 1
return (means, stdevs, weights, parents), (log_jacobian_determinant - log_prob_u)
from tqdm import tqdm
def rjmcmc_w_parents(theta, parents, n_steps=10000):
traj = [(theta, parents)]
old_log_p = flat_log_p(theta)
acceptance_probabilities = []
for t in tqdm(range(n_steps)):
# generate proposal
if np.random.rand() < 0.05:
theta_prime, parents_prime, log_jac_u_term = reversible_birth_death_move(theta, parents)
else:
theta_prime, parents_prime, log_jac_u_term = mmc_move(theta, parents)
new_log_p = flat_log_p(theta_prime)
log_prob_ratio = new_log_p - old_log_p
if not np.isfinite(new_log_p):
A = 0
#print(RuntimeWarning("new_log_p isn't finite: theta = {}, parents = {}".format(theta_prime, parents_prime)))
else:
A = min(1.0, np.exp(log_prob_ratio + log_jac_u_term))
if np.random.rand() < A:
theta = theta_prime
parents = parents_prime
old_log_p = new_log_p
if len(theta) != len(traj[-1][0]):
prev_dim = int(len(traj[-1][0]) / 3)
current_dim = int(len(theta) / 3)
assert(len(theta) % 3 == 0)
print('{}: accepted a cross-model jump! # components: {} --> {}'.format(t, prev_dim, current_dim))
traj.append((theta, parents))
acceptance_probabilities.append(A)
return traj, acceptance_probabilities
np.random.seed(0)
init_n_components = 1
init_means = np.random.randn(init_n_components)
init_stdevs = np.random.rand(init_n_components) + 1
init_weights = np.random.rand(init_n_components)
init_weights /= np.sum(init_weights)
init_theta = np.hstack([init_means, init_stdevs, init_weights])
init_parents = [None] + list(range(init_n_components - 1))
traj, acceptance_probabilities = rjmcmc_w_parents(init_theta, init_parents, n_steps=100000)
plt.plot([a for a in acceptance_probabilities], '.')
plt.hist(acceptance_probabilities, bins=50);
n_components_traj = [len(t[0]) // 3 for t in traj]  # integer counts (np.bincount below requires ints)
ax = plt.subplot(111)
plt.plot(n_components_traj)
plt.hlines(n_components, 0, len(traj), linestyles='--')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.ylabel('# components')
plt.xlabel('iteration')
plt.title(r'birth-death RJMC trace $k$' + '\n(mixture weights ' + r'$w_i$ free)')
#plt.xscale('log')
plt.savefig('birth-death-n-components-starting-from-1.jpg', dpi=300)
plt.figure(figsize=(6,6))
burned_in = n_components_traj[1000:]
counts = np.bincount(burned_in)
n_components_range = list(range(len(counts)))
ax = plt.subplot(111)
plt.bar(n_components_range, counts / sum(counts))
plt.xlabel(r'# of components ($k$)')
plt.ylabel(r'$p(k)$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.title(r'birth-death RJMC estimated marginal distribution of $k$' + '\n(mixture weights ' + r'$w_i$ free)')
plt.xticks(n_components_range)
plt.yticks([])
plt.savefig('birth-death-marginals-starting-from-1.jpg', dpi=300)
change_points = list(np.arange(1, len(n_components_traj))[np.diff(n_components_traj) != 0])
trajs = []
for (start, end) in list(zip([0] + change_points, change_points + [len(traj)])):
trajs.append(np.array([t[0] for t in traj[start:end]]))
plt.figure(figsize=(6,6))
ax = plt.subplot(2, 1, 1)
for i in range(len(trajs)):
x_init = sum([len(t) for t in trajs[:i]])
x_end = x_init + len(trajs[i])
n_components = int(trajs[i].shape[1] / 3)
plt.plot(np.arange(x_init, x_end), trajs[i][:,:n_components], color='blue')
plt.ylim(-6,6)
plt.yticks([-5,0,5])
plt.xticks([0,50000,100000])
plt.xlabel('iteration')
plt.ylabel(r'mixture component means ($x$)')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax = plt.subplot(2, 1, 2, sharex=ax)
plt.plot(n_components_traj)
plt.xlabel('iteration')
plt.ylabel(r'# components')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.savefig('birth-death-branching-1.jpg', dpi=600, bbox_inches='tight')
np.random.seed(1)
init_n_components = 50
init_means = np.random.randn(init_n_components)
init_stdevs = np.random.rand(init_n_components) + 1
init_weights = np.random.rand(init_n_components)
init_weights /= np.sum(init_weights)
init_theta = np.hstack([init_means, init_stdevs, init_weights])
init_parents = [None] + list(range(init_n_components - 1))
traj, acceptance_probabilities = rjmcmc_w_parents(init_theta, init_parents, n_steps=100000)
n_components_traj = [len(t[0]) // 3 for t in traj]  # integer division: lengths are multiples of 3
ax = plt.subplot(111)
plt.plot(n_components_traj)
plt.hlines(n_components, 0, len(traj), linestyles='--')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.ylabel('# components')
plt.xlabel('iteration')
plt.title(r'birth-death RJMC trace $k$' + '\n(mixture weights ' + r'$w_i$ free)')
#plt.xscale('log')
plt.savefig('birth-death-n-components-starting-from-50.jpg', dpi=300)
plt.figure(figsize=(6,6))
burned_in = n_components_traj[10000:]
counts = np.bincount(np.asarray(burned_in, dtype=int))  # bincount requires integers
n_components_range = list(range(len(counts)))
ax = plt.subplot(111)
plt.bar(n_components_range, counts / sum(counts))
plt.xlabel(r'# of components ($k$)')
plt.ylabel(r'$p(k)$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.title(r'birth-death RJMC estimated marginal distribution of $k$' + '\n(mixture weights ' + r'$w_i$ free)')
plt.xticks(n_components_range)
plt.yticks([])
plt.savefig('birth-death-marginals-starting-from-50.jpg', dpi=300)
change_points = list(np.arange(1, len(n_components_traj))[np.diff(n_components_traj) != 0])
trajs = []
for (start, end) in list(zip([0] + change_points, change_points + [len(traj)])):
trajs.append(np.array([t[0] for t in traj[start:end]]))
plt.figure(figsize=(6,6))
ax = plt.subplot(2, 1, 1)
for i in range(len(trajs)):
x_init = sum([len(t) for t in trajs[:i]])
x_end = x_init + len(trajs[i])
n_components = int(trajs[i].shape[1] / 3)
plt.plot(np.arange(x_init, x_end), trajs[i][:,:n_components], color='blue')
plt.ylim(-6,6)
plt.yticks([-5,0,5])
plt.xticks([0,50000,100000])
plt.xlabel('iteration')
plt.ylabel(r'mixture component means ($x$)')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax = plt.subplot(2, 1, 2, sharex=ax)
plt.plot(n_components_traj)
plt.xlabel('iteration')
plt.ylabel(r'# components')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
plt.savefig('birth-death-branching-50.jpg', dpi=600, bbox_inches='tight')
trees = [t[-1] for t in traj[1000:]]
for _ in range(20):
print(trees[np.random.randint(len(trees))])
import networkx as nx
def parent_list_to_graph(parent_list):
graph = nx.DiGraph()
for i in range(1, len(parent_list)):
graph.add_edge(parent_list[i], i)
return graph
g = parent_list_to_graph(trees[-1])
g.nodes()
for t in trees[:10]:
g = parent_list_to_graph(t)
plt.figure()
nx.draw(g, pos=nx.drawing.spring_layout(g))
log_posterior = np.array([flat_log_p(t[0]) for t in traj])
log_prior_traj = np.array([log_prior(t[0]) for t in traj])  # avoid shadowing the log_prior function
log_likelihood = log_posterior - log_prior_traj
plt.plot(log_prior_traj, label='prior')
plt.plot(log_likelihood, label='likelihood')
plt.plot(log_posterior, label='posterior')
plt.legend(loc='best')
plt.ylabel('log probability')
plt.xlabel('iteration')
plt.xscale('log')
plt.savefig('birth-death-log-posterior.jpg', dpi=300)
```
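The accept/reject logic used throughout the sampler above is the standard Metropolis–Hastings test, extended with a log-Jacobian term for the cross-model jumps. As a stand-alone sketch on a toy one-dimensional target (a standard normal, not the mixture posterior used above):

```python
import numpy as np

def metropolis_accept(old_log_p, new_log_p, log_jacobian=0.0, rng=np.random):
    """Accept with probability min(1, exp(new_log_p - old_log_p + log|J|))."""
    if not np.isfinite(new_log_p):
        return False  # reject proposals with non-finite density, as above
    A = min(1.0, np.exp(new_log_p - old_log_p + log_jacobian))
    return rng.rand() < A

# Toy target: standard normal, symmetric random-walk proposals
rng = np.random.RandomState(0)
log_p = lambda x: -0.5 * x ** 2
x, samples = 0.0, []
for _ in range(5000):
    x_new = x + rng.randn()
    if metropolis_accept(log_p(x), log_p(x_new), rng=rng):
        x = x_new
    samples.append(x)

# The sample mean and standard deviation should be close to 0 and 1
print(np.mean(samples), np.std(samples))
```

For the within-model moves above the Jacobian term is zero; it only contributes when a birth or death move changes the dimension of `theta`.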
<img src='./img/LogoWekeo_Copernicus_RGB_0.png' align='right' width='20%'></img>
# Tutorial on basic land applications (data processing) Version 2
In this tutorial we will use the WEkEO Jupyterhub to access and analyse data from the Copernicus Sentinel-2 and products from the [Copernicus Land Monitoring Service (CLMS)](https://land.copernicus.eu/).
A region in northern Corsica has been selected as it contains representative landscape features and process elements which can be used to demonstrate the capabilities and strengths of Copernicus space component and services.
The tutorial comprises the following steps:
1. Search and download data: We will select and download a Sentinel-2 scene and the CLMS CORINE Land Cover (CLC) data from their original archive locations via WEkEO using the Harmonised Data Access (HDA) API.
2. [Read and view Sentinel-2 data](#load_sentinel2): Once downloaded, we will read and view the Sentinel-2 data in geographic coordinates as true colour image.
3. [Process and view Sentinel-2 data as a vegetation and other spectral indices](#sentinel2_ndvi): We will see how the vegetation density and health can be assessed from optical EO data to support crop and landscape management practices.
4. [Read and view the CLC data](#display_clc): Display the thematic CLC data with the correct legend.
5. [CLC2018 burnt area in the Sentinel-2 NDVI data](#CLC_burn_NDVI): The two products give different results, but they can be combined to provide more information.
NOTE - This Jupyter Notebook contains additional processing to demonstrate further functionality during the training debrief.
<img src='./img/Intro_banner.jpg' align='center' width='100%'></img>
## <a id='load_sentinel2'></a>2. Load required Sentinel-2 bands and True Color image at 10 m spatial resolution
Before we begin we must prepare our environment. This includes importing the various python libraries that we will need.
### Load required libraries
```
import os
import rasterio as rio
from rasterio import plot
from rasterio.mask import mask
from rasterio.plot import show_hist
import matplotlib.pyplot as plt
import geopandas as gpd
from rasterio.plot import show
from rasterio.plot import plotting_extent
import zipfile
from matplotlib import rcParams
from pathlib import Path
import numpy as np
from matplotlib.colors import ListedColormap
from matplotlib import cm
from matplotlib import colors
import warnings
warnings.filterwarnings('ignore')
from IPython.core.display import HTML
from rasterio.warp import calculate_default_transform, reproject, Resampling
import scipy.ndimage
```
The Sentinel-2 MultiSpectral Instrument (MSI) records 13 spectral bands across the visible and infrared portions of the electromagnetic spectrum, at spatial resolutions from 10 m to 60 m depending on each band's intended use. There are currently two Sentinel-2 satellites in suitably phased orbits, giving a revisit period of 5 days at the Equator and 2-3 days at European latitudes. Being optical sensors, they are of course also affected by cloud cover and illumination conditions. The two satellites have been fully operational since 2017 and record continuously over land and the adjacent coastal sea areas. Their specification represents a continuation and upgrade of the US Landsat system, whose archive stretches back to the mid 1980s.
<img src='./img/S2_band_comp.png' align='center' width='50%'></img>
For this training session we will only need a composite true colour image (made up of the blue, green, and red bands) and the individual bands for red (665 nm) and near infrared (833 nm). The cell below loads the required data.
```
#Download folder
download_dir_path = os.path.join(os.getcwd(), 'data/from_wekeo')
data_path = os.path.join(os.getcwd(), 'data')
R10 = os.path.join(download_dir_path, 'S2A_MSIL2A_20170802T101031_N0205_R022_T32TNN_20170802T101051.SAFE/GRANULE/L2A_T32TNN_A011030_20170802T101051/IMG_DATA/R10m') #10 meters resolution folder
b3 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B03_10m.jp2') #green
b4 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B04_10m.jp2') #red
b8 = rio.open(R10+'/L2A_T32TNN_20170802T101031_B08_10m.jp2') #near infrared
TCI = rio.open(R10+'/L2A_T32TNN_20170802T101031_TCI_10m.jp2') #true color
```
### Display True Color and False Colour Infrared images
The true colour image for the Sentinel-2 data downloaded in the previous notebook can be displayed as a plot to confirm that we have the required area and to assess other aspects such as the presence of cloud, cloud shadow, etc.
In this case we selected a region of northern Corsica showing the area around Bastia and the Tyrrhenian Sea out to the Italian island of Elba in the east. The area has typical Mediterranean vegetation, with mountainous semi-natural habitats and urban and agricultural areas along the coasts.
The cell below displays the true colour image in its native WGS-84 coordinate reference system.
The right-hand plot shows the same image in false colour infrared (FCIR) format. In this format the green band is displayed as blue, the red band as green, and the near infrared band as red; vegetated areas appear red and water is black.
```
fig, (ax, ay) = plt.subplots(1,2, figsize=(21,7))
show(TCI.read(), ax=ax, transform=TCI.transform, title = "TRUE COLOR")
ax.set_ylabel("Northing (m)") # (WGS 84 / UTM zone 32N)
ax.set_xlabel("Easting (m)")
ax.ticklabel_format(axis = 'both', style = 'plain')
# Function to normalize false colour infrared image
def normalize(array):
"""Normalizes numpy arrays into scale 0.0 - 1.0"""
array_min, array_max = array.min(), array.max()
return ((array - array_min)/(array_max - array_min))
nir = b8.read(1)
red = b4.read(1)
green = b3.read(1)
nirn = normalize(scipy.ndimage.zoom(nir,0.5))
redn = normalize(scipy.ndimage.zoom(red,0.5))
greenn = normalize(scipy.ndimage.zoom(green,0.5))
FCIR = np.dstack((nirn, redn, greenn))
FCIR = np.moveaxis(FCIR.squeeze(),-1,0)
show(FCIR, ax=ay, transform=TCI.transform, title = "FALSE COLOR INFRARED")
ay.set_ylabel("Northing (m)") # (WGS 84 / UTM zone 32N)
ay.set_xlabel("Easting (m)")
ay.ticklabel_format(axis = 'both', style = 'plain')
```
## <a id='sentinel2_ndvi'></a>3. Process and view Sentinel-2 data as vegetation and other spectral indices
Vegetation status is a combination of a number of properties of the vegetation related to growth, density, health, and environmental factors. By measuring surface reflectance in the red and near infrared (NIR) parts of the spectrum, optical instruments can summarise crop status through a vegetation index. The red region is related to chlorophyll absorption and the NIR to multiple scattering within leaf structures, so low red and high NIR reflectance indicate healthy, dense vegetation. These values are summarised in the commonly used Normalised Difference Vegetation Index (NDVI).
<img src='./img/ndvi.jpg' align='center' width='20%'></img>
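As a minimal numeric sketch of the formula (toy reflectance values, not the Sentinel-2 bands loaded below):

```python
import numpy as np

# Toy reflectance values: dense vegetation, moderate vegetation, bare/water
nir = np.array([0.8, 0.5, 0.1], dtype=float)
red = np.array([0.1, 0.3, 0.1], dtype=float)

# NDVI = (NIR - Red) / (NIR + Red); suppress divide-by-zero warnings
with np.errstate(divide='ignore', invalid='ignore'):
    ndvi = (nir - red) / (nir + red)

print(ndvi)  # roughly 0.78, 0.25 and 0.0 respectively
```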
We will examine a small subset of the full image where we know differences in vegetation will be present due to natural and anthropogenic processes, and calculate the NDVI to show how its value changes.
We will also calculate a second spectral index, the Normalised Difference Water Index (NDWI), which emphasises water surfaces to compare to NDVI.
To do this we'll first load some vector datasets for an area of interest (AOI) and some field boundaries.
### Open Vector Data
```
path_shp = os.path.join(os.getcwd(), 'shp')
aoi = gpd.read_file(os.path.join(path_shp, 'WEkEO-Land-AOI-201223.shp'))
LPIS = gpd.read_file(os.path.join(path_shp, 'LPIS-AOI-201223.shp'))
```
### Check CRS of Vector Data
Before we can use the vector data we must check their coordinate reference system (CRS) and then re-project them to the same CRS as the Sentinel-2 data. In this case we require all the data to be in the WGS 84 / UTM zone 32N CRS, with the EPSG code 32632.
```
print(aoi.crs)
print(LPIS.crs)
aoi_proj = aoi.to_crs(epsg=32632) #convert to WGS 84 / UTM zone 32N (Sentinel-2 crs)
LPIS_proj = LPIS.to_crs(epsg=32632)
print("conversion to S2 NDVI crs:")
print(aoi_proj.crs)
print(LPIS_proj.crs)
```
### Calculate NDVI from red and near infrared bands
The first step is to calculate the NDVI for the whole image using some straightforward band maths and write the result to a GeoTIFF file.
```
nir = b8.read()
red = b4.read()
ndvi = (nir.astype(float)-red.astype(float))/(nir+red)
meta = b4.meta
meta.update(driver='GTiff')
meta.update(dtype=rio.float32)
with rio.open(os.path.join(data_path, 'S2_NDVI.tif'), 'w', **meta) as dst:
dst.write(ndvi.astype(rio.float32))
```
### Calculate NDWI from green and near infrared bands
The next step is to calculate the NDWI for the whole image, again using some straightforward band maths, and write the result to a GeoTIFF file.
```
nir = b8.read()
green = b3.read()
ndwi = (green.astype(float) - nir.astype(float))/(nir+green)
meta = b3.meta
meta.update(driver='GTiff')
meta.update(dtype=rio.float32)
with rio.open(os.path.join(data_path, 'S2_NDWI.tif'), 'w', **meta) as dst:
dst.write(ndwi.astype(rio.float32))
```
### Crop the extent of the NDVI and NDWI images to the AOI
The file produced in the previous step is then cropped using the AOI geometry.
```
with rio.open(os.path.join(data_path, "S2_NDVI.tif")) as src:
out_image, out_transform = mask(src, aoi_proj.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],
"width": out_image.shape[2],
"transform": out_transform})
with rio.open(os.path.join(data_path, "S2_NDVI_masked.tif"), "w", **out_meta) as dest:
dest.write(out_image)
with rio.open(os.path.join(data_path, "S2_NDWI.tif")) as src:
out_image, out_transform = mask(src, aoi_proj.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],
"width": out_image.shape[2],
"transform": out_transform})
with rio.open(os.path.join(data_path, "S2_NDWI_masked.tif"), "w", **out_meta) as dest:
dest.write(out_image)
```
### Display NDVI and NDWI for the AOI
The AOI represents an area of northern Corsica centred on the town of Bagnasca. To the west are mountains dominated by forests and woodlands of evergreen sclerophyll oaks, which tend to give high values of NDVI, interspersed with areas of grassland or bare ground occurring naturally or as a consequence of forest fires. The patterns here are irregular and follow the terrain and hydrological features. The lowlands to the east have been cleared of forest for agriculture, shown by a fine-scale mosaic of regular geometric features representing crop fields with different NDVI values or the presence of vegetated boundary features. The lower values of NDVI (below zero) in the east are associated with the sea and the large lagoon of the Réserve naturelle de l'étang de Biguglia.
As expected, the NDWI gives high values for the open sea and lagoon areas of the image. Interestingly, there are relatively high values for some of the fields in the coastal plain, suggesting they may be flooded or irrigated. Bare surfaces have NDWI values below zero, and the vegetated areas are lower still.
The colour map used to display the NDVI uses a ramp from blue to green to emphasise the increasing density and vigour of vegetation at high NDVI values. If the distinctions are not clear enough, the cmap value can be changed from "BuGn" or "RdBu" to something more appropriate, with reference to the available colour maps at [Choosing Colormaps in Matplotlib](https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html).
```
ndvi_aoi = rio.open(os.path.join(data_path, 'S2_NDVI_masked.tif'))
fig, (az, ay) = plt.subplots(1,2, figsize=(21, 7))
# use imshow so that we have something to map the colorbar to
image_hidden_1 = az.imshow(ndvi_aoi.read(1),
cmap='BuGn')
# LPIS_proj.plot(ax=ax, facecolor='none', edgecolor='k')
image = show(ndvi_aoi, ax=az, cmap='BuGn', transform=ndvi_aoi.transform, title ="NDVI")
fig.colorbar(image_hidden_1, ax=az)
az.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N)
az.set_xlabel("Easting (m)")
az.ticklabel_format(axis = 'both', style = 'plain')
ndwi_aoi = rio.open(os.path.join(data_path, 'S2_NDWI_masked.tif'))
# use imshow so that we have something to map the colorbar to
image_hidden_1 = ay.imshow(ndwi_aoi.read(1),
cmap='RdBu')
# LPIS_proj.plot(ax=ax, facecolor='none', edgecolor='k')
image = show(ndwi_aoi, ax=ay, cmap='RdBu', transform=ndwi_aoi.transform, title ="NDWI")
fig.colorbar(image_hidden_1, ax=ay)
ay.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N)
ay.set_xlabel("Easting (m)")
ay.ticklabel_format(axis = 'both', style = 'plain')
```
### Histogram of NDVI values
If the NDVI values for the area are summarised as a histogram, the two main levels of vegetation density / vigour become apparent. On the left of the plot there is a peak between NDVI values of -0.1 and 0.3 for the water and unvegetated areas together (with the water generally lower), and on the right the peak around an NDVI value of 0.8 corresponds to the dense forest and vigorous crops. The region in between represents sparse vegetation, grassland, and crops that are yet to mature.
In the NDWI histogram there are multiple peaks representing the sea and lagoons, bare surfaces, and vegetation respectively. The NDVI and NDWI can be used in combination to characterise regions within satellite images.
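For example, a simple rule combining the two indices could label each pixel by thresholding. This is an illustrative sketch only — the threshold values here are hypothetical and would need tuning against the histograms above:

```python
import numpy as np

# Toy NDVI / NDWI values for three pixels: forest, bare ground, lagoon
ndvi = np.array([0.85, 0.15, -0.05])
ndwi = np.array([-0.40, -0.10, 0.50])

labels = np.full(ndvi.shape, 'bare', dtype=object)
labels[ndvi > 0.5] = 'vegetation'   # dense / vigorous vegetation
labels[ndwi > 0.2] = 'water'        # open water dominates the signal
print(list(labels))  # ['vegetation', 'bare', 'water']
```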
```
fig, axhist = plt.subplots(1,1)
show_hist(ndvi_aoi, bins=100, masked=False, title='Histogram of NDVI values', facecolor = 'g', ax =axhist)
axhist.set_xlabel('NDVI')
axhist.set_ylabel('number of pixels')
plt.gca().get_legend().remove()
fig, axhist = plt.subplots(1,1)
show_hist(ndwi_aoi, bins=100, masked=False, title='Histogram of NDWI values', facecolor = 'b', ax =axhist)
axhist.set_xlabel('NDWI')
axhist.set_ylabel('number of pixels')
plt.gca().get_legend().remove()
```
### NDVI index on a cultivation pattern area
We can look in more detail at the agricultural area to see the patterns in the NDVI values caused by differences in crop density and growth. As before, we load a vector file containing an AOI and subset the original Sentinel-2 NDVI image. This time we overlay a set of field boundaries from the Land Parcel Identification System (LPIS), which highlight some of the management units.
This analysis gives us a representation of the biophysical properties of the surface at the time of image acquisition.
```
#Load shapefile of the AOIs
cult_zoom = gpd.read_file(os.path.join(path_shp, 'complex_cultivation_patterns_zoom.shp'))
#Subset the Sentinel-2 NDVI image
with rio.open(os.path.join(data_path, "S2_NDVI.tif")) as src:
out_image, out_transform = mask(src, cult_zoom.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],
"width": out_image.shape[2],
"transform": out_transform})
with rio.open(os.path.join(data_path, "NDVI_cultivation_area.tif"), "w", **out_meta) as dest:
dest.write(out_image.astype(rio.float32))
#Display the results with the LPIS
rcParams['axes.titlepad'] = 20
src_cult = rio.open(os.path.join(data_path, "NDVI_cultivation_area.tif"))
fig, axg = plt.subplots(figsize=(21, 7))
image_hidden_1 = axg.imshow(src_cult.read(1),
cmap='BuGn')
LPIS_proj.plot(ax=axg, facecolor='none', edgecolor='k')
show(src_cult, ax=axg, cmap='BuGn', transform=src_cult.transform, title='NDVI - Complex cultivation patterns')
fig.colorbar(image_hidden_1, ax=axg)
axg.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N)
axg.set_xlabel("Easting (m)")
plt.subplots_adjust(bottom=0.1, right=0.6, top=0.9)
axg.ticklabel_format(axis = 'both', style = 'plain')
```
## <a id='display_clc'></a>4. Read and view the CLC data
The CORINE Land Cover (CLC) inventory has been produced at a European level in 1990, 2000, 2006, 2012, and 2018. It records land cover and land use in 44 classes with a Minimum Mapping Unit (MMU) of 25 hectares (ha) and a minimum feature width of 100 m. The time series of status maps are complemented by change layers, which highlight changes between the land cover land use classes with an MMU of 5 ha. The Eionet network of National Reference Centres Land Cover (NRC/LC) produce the CLC databases at Member State level, which are coordinated and integrated by EEA. CLC is produced by the majority of countries by visual interpretation of high spatial resolution satellite imagery (10 - 30 m spatial resolution). In a few countries semi-automatic solutions are applied, using national in-situ data, satellite image processing, GIS integration and generalisation. CLC has a wide variety of applications, underpinning various policies in the domains of environment, but also agriculture, transport, spatial planning etc.
### Crop the extent of the Corine Land Cover 2018 (CLC 2018) to the AOI and display
As with the Sentinel-2 data, it is necessary to crop the pan-European CLC2018 dataset to be able to review it at the local level.
### Set up paths to data
```
#path to Corine land cover 2018
land_cover_dir = Path(os.path.join(download_dir_path,'u2018_clc2018_v2020_20u1_raster100m/DATA/'))
legend_dir = Path(os.path.join(download_dir_path,'u2018_clc2018_v2020_20u1_raster100m/Legend/'))
#path to the colormap
txt_filename = legend_dir/'CLC2018_CLC2018_V2018_20_QGIS.txt'
```
### Re-project vector files to the same coordinate system of the CLC 2018
```
aoi_3035 = aoi.to_crs(epsg=3035) # EPSG:3035 (ETRS89-extended / LAEA Europe)
```
### Write CLC 2018 subset
```
with rio.open(str(land_cover_dir)+'/U2018_CLC2018_V2020_20u1.tif') as src:
out_image, out_transform = mask(src, aoi_3035.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],
"width": out_image.shape[2],
"transform": out_transform,
"dtype": "int8",
"nodata":0
})
with rio.open("CLC_masked/Corine_masked.tif", "w", **out_meta) as dest:
dest.write(out_image)
```
### Set up the legend for the CLC data
As the CLC data is thematic in nature we must set up a legend to be displayed with the results showing the colour, code and definition of each land cover / land use class.
### Read CLC 2018 legend
A text file is available which contains the details of the CLC nomenclature used to build the legend when displaying CLC.
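Each line of a QGIS colour-map text file has the form `value,R,G,B,alpha,label`. As a minimal stand-alone parsing sketch on a made-up line (the real legend file is read in the cell below):

```python
def parse_qgis_line(line):
    """Split 'value,R,G,B,A,label' into a class code, an RGBA tuple and a label."""
    # maxsplit=5 keeps any commas inside the label intact
    value, r, g, b, a, label = line.strip().split(',', 5)
    return int(value), (int(r), int(g), int(b), int(a)), label

# Hypothetical example line -- not copied from the actual CLC legend file
code, rgba, label = parse_qgis_line('23,0,166,0,255,Broad-leaved forest\n')
print(code, rgba, label)  # 23 (0, 166, 0, 255) Broad-leaved forest
```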
```
### Create colorbar
def parse_line(line):
_, r, g, b, a, descr = line.split(',')
return (int(r), int(g), int(b), int(a)), descr.split('\n')[0]
with open(txt_filename, 'r') as txtf:
lines = txtf.readlines()
legend = {nline+1: parse_line(line) for nline, line in enumerate(lines[:-1])}
legend[0] = parse_line(lines[-1])
#print code and definition of each land cover / land use class
def parse_line_class_list(line):
class_id, r, g, b, a, descr = line.split(',')
return (int(class_id), int(r), int(g), int(b), int(a)), descr.split('\n')[0]
with open(txt_filename, 'r') as txtf:
lines = txtf.readlines()
legend_class = {nline+1: parse_line_class_list(line) for nline, line in enumerate(lines[:-1])}
legend_class[0] = parse_line_class_list(lines[-1])
print('Level 3 classes')
for k, v in sorted(legend_class.items()):
print(f'{v[0][0]}\t{v[1]}')
```
### Build the legend for the CLC 2018 in the area of interest
As less than half of the CLC classes are present in the AOI, an area-specific legend will be built to simplify interpretation.
```
#open CLC 2018 subset
cover_land = rio.open("CLC_masked/Corine_masked.tif")
array_rast = cover_land.read(1)
#Set no data value to 0
array_rast[array_rast == -128] = 0
class_aoi = list(np.unique(array_rast))
legend_aoi = dict((k, legend[k]) for k in class_aoi if k in legend)
classes_list =[]
number_list = []
for k, v in sorted(legend_aoi.items()):
#print(f'{k}:\t{v[1]}')
classes_list.append(v[1])
number_list.append(k)
class_dict = dict(zip(classes_list,number_list))
#create the colorbar
corine_cmap_aoi= ListedColormap([np.array(v[0]).astype(float)/255.0 for k, v in sorted(legend_aoi.items())])
# Map the values in [0, 22]
new_dict = dict()
for i, v in enumerate(class_dict.items()):
new_dict[v[1]] = (v[0], i)
fun = lambda x : new_dict[x][1]
matrix = map(np.vectorize(fun), array_rast)
matrix = np.matrix(list(matrix))
```
### Display the CLC2018 data for the AOI
The thematic nature and the 100 m spatial resolution of the CLC2018 give a very different view of the landscape compared to the Sentinel-2 data. CLC2018 offers a greater information content, as it combines multiple images, ancillary data, and human interpretation, while Sentinel-2 offers detailed spatial information for a single instant in time.
The separation of the mountains, with their woodland habitats, from the coastal plains, with their agriculture, can be clearly seen, marked by a line of urban areas. The mountains are dominated by deciduous woodland, sclerophyllous vegetation, and transitional scrub. The coastal plains consist of various types of agricultural land associated with small-field farming practices.
The most striking feature of the CLC2018 data is a large burnt area which resulted from a major forest fire in July 2017.
```
#plot
fig2, axs2 = plt.subplots(figsize=(10,10),sharey=True)
show(matrix, ax=axs2, cmap=corine_cmap_aoi, transform = cover_land.transform, title = "Corine Land Cover 2018")
norm = colors.BoundaryNorm(np.arange(corine_cmap_aoi.N + 1), corine_cmap_aoi.N + 1)
cb = plt.colorbar(cm.ScalarMappable(norm=norm, cmap=corine_cmap_aoi), ax=axs2, fraction=0.03)
cb.set_ticks([x+.5 for x in range(-1,22)]) # move the marks to the middle
cb.set_ticklabels(list(class_dict.keys())) # label the colors
axs2.ticklabel_format(axis = 'both', style = 'plain')
axs2.set_ylabel("Northing (m)") #EPSG:3035 (ETRS89-extended / LAEA Europe)
axs2.set_xlabel("Easting (m)")
```
## <a id='CLC_burn_NDVI'></a>5. CLC2018 burnt area in the Sentinel-2 NDVI data
The burnt area will have a very low NDVI compared with the surrounding unburnt vegetation. The boundary of the burn can be easily seen, as well as remnants of the original vegetation which survived the fire.
```
#Load shapefile of the AOIs and check the crs
burnt_aoi = gpd.read_file(os.path.join(path_shp, 'burnt_area.shp'))
print("vector file crs:")
print(burnt_aoi.crs)
burnt_aoi_32632 = burnt_aoi.to_crs(epsg=32632) #Sentinel-2 NDVI crs
print("conversion to S2 NDVI crs:")
print(burnt_aoi_32632.crs)
```
### Crop the extent of the NDVI image for burnt area
```
with rio.open(os.path.join(data_path, 'S2_NDVI_masked.tif')) as src:
out_image, out_transform = mask(src, burnt_aoi_32632.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],
"width": out_image.shape[2],
"transform": out_transform})
with rio.open(os.path.join(data_path, "NDVI_burnt_area.tif"), "w", **out_meta) as dest:
dest.write(out_image)
```
### Crop the extent of the CLC 2018 for burnt area
```
#open CLC 2018 subset
cover_land = rio.open("CLC_masked/Corine_masked.tif")
print(cover_land.crs) #CLC 2018 crs
burn_aoi_3035 = burnt_aoi.to_crs(epsg=3035) #conversion to CLC 2018 crs
with rio.open(str(land_cover_dir)+'/U2018_CLC2018_V2020_20u1.tif') as src:
out_image, out_transform = mask(src, burn_aoi_3035.geometry,crop=True)
out_meta = src.meta.copy()
out_meta.update({"driver": "GTiff",
"height": out_image.shape[1],"width": out_image.shape[2],
"transform": out_transform,
"dtype": "int8",
"nodata":0
})
with rio.open("CLC_masked/Corine_burnt_area.tif", "w", **out_meta) as dest:
dest.write(out_image)
# Re-project S2 NDVI image to CLC 2018 crs
clc_2018_burnt_aoi = rio.open("CLC_masked/Corine_burnt_area.tif")
dst_crs = clc_2018_burnt_aoi.crs
with rio.open(os.path.join(data_path, "NDVI_burnt_area.tif")) as src:
transform, width, height = calculate_default_transform(
src.crs, dst_crs, src.width, src.height, *src.bounds)
kwargs = src.meta.copy()
kwargs.update({
'crs': dst_crs,
'transform': transform,
'width': width,
'height': height
})
with rio.open(os.path.join(data_path, "NDVI_burnt_area_EPSG_3035.tif"), 'w', **kwargs) as dst:
reproject(source=rio.band(src,1),
destination=rio.band(dst,1),
src_transform=src.transform,
src_crs=src.crs,
dst_transform=transform,
dst_crs=dst_crs,
resampling=Resampling.nearest)
```
### Display NDVI index on the AOIs
```
# Build the legend for the CLC 2018 in the area of interest
array_rast_b = clc_2018_burnt_aoi.read(1)
#Set no data value to 0
array_rast_b[array_rast_b == -128] = 0
class_aoi_b = list(np.unique(array_rast_b))
legend_aoi_b = dict((k, legend[k]) for k in class_aoi_b if k in legend)
classes_list_b =[]
number_list_b = []
for k, v in sorted(legend_aoi_b.items()):
#print(f'{k}:\t{v[1]}')
classes_list_b.append(v[1])
number_list_b.append(k)
class_dict_b = dict(zip(classes_list_b,number_list_b))
#create the colorbar
corine_cmap_aoi_b= ListedColormap([np.array(v[0]).astype(float)/255.0 for k, v in sorted(legend_aoi_b.items())])
# Map the values in [0, 22]
new_dict_b = dict()
for i, v in enumerate(class_dict_b.items()):
new_dict_b[v[1]] = (v[0], i)
fun_b = lambda x : new_dict_b[x][1]
matrix_b = map(np.vectorize(fun_b), array_rast_b)
matrix_b = np.matrix(list(matrix_b))
#Plot
rcParams['axes.titlepad'] = 20
src_burnt = rio.open(os.path.join(data_path, "NDVI_burnt_area_EPSG_3035.tif"))
fig_b, (axr_b, axg_b) = plt.subplots(1,2, figsize=(25, 8))
image_hidden_1_b = axr_b.imshow(src_burnt.read(1),
cmap='BuGn')
show(src_burnt, ax=axr_b, cmap='BuGn', transform=src_burnt.transform, title='NDVI - Burnt area')
show(matrix_b, ax=axg_b, cmap=corine_cmap_aoi_b, transform=clc_2018_burnt_aoi.transform, title='CLC 2018 - Burnt area')
fig_b.colorbar(image_hidden_1_b, ax=axr_b)
plt.tight_layout(h_pad=1.0)
norm = colors.BoundaryNorm(np.arange(corine_cmap_aoi_b.N + 1), corine_cmap_aoi_b.N + 1)
cb = plt.colorbar(cm.ScalarMappable(norm=norm, cmap=corine_cmap_aoi_b), ax=axg_b, fraction=0.03)
cb.set_ticks([x+.5 for x in range(-1,6)]) # move the marks to the middle
cb.set_ticklabels(list(class_dict_b.keys())) # label the colors
axg_b.ticklabel_format(axis = 'both', style = 'plain')
axr_b.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N)
axr_b.set_xlabel("Easting (m)")
axg_b.set_ylabel("Northing (m)") #(WGS 84 / UTM zone 32N)
axg_b.set_xlabel("Easting (m)")
axr_b.ticklabel_format(axis = 'both', style = 'plain')
axg_b.ticklabel_format(axis = 'both', style = 'plain')
plt.tight_layout(h_pad=1.0)
```
<hr>
<p><img src='./img/all_partners_wekeo_2.png' align='left' alt='Logo EU Copernicus' width='100%'></img></p>
```
from HARK.ConsumptionSaving.ConsLaborModel import (
LaborIntMargConsumerType,
init_labor_lifecycle,
)
import numpy as np
import matplotlib.pyplot as plt
from time import process_time
mystr = lambda number: "{:.4f}".format(number) # Format numbers as strings
do_simulation = True
# Make and solve a labor intensive margin consumer i.e. a consumer with utility for leisure
LaborIntMargExample = LaborIntMargConsumerType(verbose=0)
LaborIntMargExample.cycles = 0
t_start = process_time()
LaborIntMargExample.solve()
t_end = process_time()
print(
"Solving a labor intensive margin consumer took "
+ str(t_end - t_start)
+ " seconds."
)
t = 0
bMin_orig = 0.0
bMax = 100.0
# Plot the consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
bMin = bMin_orig
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
Lbr = LaborIntMargExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, Lbr)
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LaborIntMargExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LaborIntMargExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, vP)
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
if do_simulation:
t_start = process_time()
LaborIntMargExample.T_sim = 120 # Set number of simulation periods
LaborIntMargExample.track_vars = ["bNrmNow", "cNrmNow"]
LaborIntMargExample.initializeSim()
LaborIntMargExample.simulate()
t_end = process_time()
print(
"Simulating "
+ str(LaborIntMargExample.AgentCount)
+ " intensive-margin labor supply consumers for "
+ str(LaborIntMargExample.T_sim)
+ " periods took "
+ mystr(t_end - t_start)
+ " seconds."
)
N = LaborIntMargExample.AgentCount
CDF = np.linspace(0.0, 1, N)
plt.plot(np.sort(LaborIntMargExample.cNrmNow), CDF)
plt.xlabel(
"Consumption cNrm in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, None)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.LbrNow), CDF)
plt.xlabel(
"Labor supply Lbr in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.aNrmNow), CDF)
plt.xlabel(
"End-of-period assets aNrm in "
+ str(LaborIntMargExample.T_sim)
+ "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 20.0)
plt.ylim(0.0, 1.0)
plt.show()
# Make and solve a labor intensive margin consumer with a finite lifecycle
LifecycleExample = LaborIntMargConsumerType(**init_labor_lifecycle)
LifecycleExample.cycles = 1  # Make this consumer live a sequence of periods exactly once
start_time = process_time()
LifecycleExample.solve()
end_time = process_time()
print(
"Solving a lifecycle labor intensive margin consumer took "
+ str(end_time - start_time)
+ " seconds."
)
LifecycleExample.unpack('cFunc')
bMax = 20.0
# Plot the consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
C = LifecycleExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_max, B_temp[-1])
plt.title("Consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
MPC = LifecycleExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, MPC)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_max, B_temp[-1])
plt.title("Marginal consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor supply function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
L = LifecycleExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, L)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_max, B_temp[-1])
plt.title("Labor supply function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.01)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LifecycleExample.TranShkGrid[t]
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)]  # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LifecycleExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LifecycleExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, vP)
b_min = np.minimum(b_min, B_temp[0])
b_max = np.maximum(b_max, B_temp[-1])
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.title("Marginal value across periods of the lifecycle")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
```
| github_jupyter |
```
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from statsmodels.distributions.empirical_distribution import ECDF
from datetime import date, datetime, timedelta, timezone
%matplotlib inline
#set default plotting styles
sns.set(rc={'figure.figsize':(15, 6)})
sns.set_style("dark")
fig_dims = (15, 6)
print(os.getcwd())
sales_train_validation = pd.read_csv("./datasets/sales_train_validation.csv")
sales_train_validation.head()
master = sales_train_validation.melt(id_vars=['id', 'item_id', 'dept_id', 'cat_id', 'store_id', 'state_id'], var_name='d')
calendar = pd.read_csv("./datasets/calendar.csv")
temp = calendar[['date', 'd']]
temp
#calendar.set_index('date', inplace=True)
#temp = pd.DataFrame(calendar['d'])
master = pd.merge(master, temp, on='d', how='inner')
master['date'] = pd.to_datetime(master['date'])
master = master.rename(columns={'value': 'unit_sales'})
master.head()
calendar = pd.read_csv("./datasets/calendar.csv")
calendar['date'] = pd.to_datetime(calendar['date'])
master = pd.merge(master, calendar, on='date', how='left')
master.head()
id_sales = master.groupby(['date', 'id'])['unit_sales'].agg('sum')
id_sales
store_sales = master.groupby(['date', 'store_id'], as_index=False)['unit_sales'].agg('sum')
store_sales = pd.DataFrame(store_sales)
store_sales['date'] = pd.to_datetime(store_sales['date'])
store_sales
fig, ax = plt.subplots(figsize=fig_dims)
sns.lineplot(data=store_sales,
x='date',
y='unit_sales',
hue='store_id',
ax=ax,
alpha=0.6)
plt.show()
df2 = store_sales
df2 = df2.set_index('date')
# fig, ax = plt.subplots(figsize=fig_dims)
# sns.relplot(x="date",
# y="unit_sales",
# hue="store_id",
# row="store_id",
# facet_kws=dict(sharex=True),
# kind="reg",
# legend="full",
# data=store_sales)
# sns.lmplot(x='date',
# y='unit_sales',
# hue='store_id',
# data=store_sales)
#sns.regplot(x="date", y="unit_sales", data=store_sales)
#sns.lmplot(x="date", y="unit_sales", hue="store_id", data=store_sales)
df2.loc[:, 'unit_sales'].plot(linewidth=0.5)
plt.show()
sns.violinplot(data=store_sales,
x='store_id',
y='unit_sales',
inner=None)
plt.show()
sns.boxplot(data=store_sales,
x='store_id',
y='unit_sales')
plt.show()
stores = master['store_id'].unique()
for store in stores:
temp = store_sales[store_sales['store_id'] == store]
sns.kdeplot(temp['unit_sales'], cumulative=True)
plt.title('{0} - CDF'.format(store))
plt.show()
stores = master['store_id'].unique()
for store in stores:
temp = store_sales[store_sales['store_id'] == store]
sns.kdeplot(temp['unit_sales'])
plt.title('{0} - PDF'.format(store))
plt.show()
master.head()
#sns.heatmap(master.corr(), square=True, cmap='RdYlGn')
from sklearn.linear_model import Lasso
# Lasso requires numeric inputs, so keep only the numeric feature columns
features = master.drop('unit_sales', axis=1).select_dtypes(include=np.number)
X = features.values
y = master['unit_sales'].values
lasso = Lasso(alpha=0.4, normalize=True)
lasso.fit(X, y)
lasso_coef = lasso.coef_
print(lasso_coef)
# Plot the coefficients against the feature columns actually used
plt.plot(range(len(features.columns)), lasso_coef)
plt.xticks(range(len(features.columns)), features.columns.values, rotation=60)
plt.margins(0.02)
plt.show()
master.columns
```
| github_jupyter |
# Factors
This notebook calculates the factors used to add seasonality patterns to reconstructed data.
Factors are multiplicative scalars applied to a numerical feature for each group-by value (e.g. the average monthly windspeed as a ratio of the annual average).
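As a toy illustration with made-up numbers (not the project's data), such a factor is just a per-group mean divided by the overall mean:

```
import pandas as pd

# Hypothetical monthly windspeed observations for one station (illustrative values only)
df = pd.DataFrame({
    "month": [1, 1, 2, 2, 3, 3],
    "windspeed": [10.0, 12.0, 8.0, 8.0, 11.0, 11.0],
})

annual_avg = df["windspeed"].mean()                    # overall (annual) average
monthly_avg = df.groupby("month")["windspeed"].mean()  # per-month averages
factors = monthly_avg / annual_avg                     # multiplicative factor per month

print(factors.round(2))  # month 1 -> 1.1, month 2 -> 0.8, month 3 -> 1.1
```

A reconstructed series can then be given the seasonal pattern by multiplying each value by the factor of its month.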
## 0 - Setup
### 0.1 - Imports
Load the necessary dependencies.
```
%%capture
from ydata.connectors import GCSConnector
from ydata.utils.formats import read_json
from typing import List
from pandas import concat
```
## 0.2 - Auxiliary Functions
The auxiliary functions are custom-designed utilities developed for the use case.
```
from factors import save_json
```
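The `factors` module is project-specific and not shown in this notebook; as an assumption about its behavior, a minimal `save_json` helper might look like:

```
import json

def save_json(data: dict, path: str) -> None:
    """Serialize a dictionary to a JSON file (illustrative stand-in for factors.save_json)."""
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```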
## 1 - Load Data
```
# Load the credentials
credentials = read_json('gcs_credentials.json')
# Create the connector for Google Cloud Storage
connector = GCSConnector('ydatasynthetic', gcs_credentials=credentials)
```
## 2 - Calculate Factors
Calculate the average windspeed per month for meters in relevant provinces.
```
YEARS = [2018, 2019, 2020, 2021]
# Internal IDs that identify stations which are in the same provinces
meter_ids = read_json('meter_ids.json')
def get_monthly_avg_per_year(connector: GCSConnector, meter_ids: List, years=YEARS):
"Calculates the monthly factor over the annual average, per year, for meters within the same provinces as the original data."
# create a map of each month to corresponding index
meses = ['january', 'february', 'march', 'april', 'may', 'june', 'july',
'august', 'september', 'october', 'november', 'december']
months = {k : v for (k, v) in zip(meses, range(1, len(meses) + 1))}
factors = [] # will contain a Series of monthly-over-annual averages for each year
for year in years:
filepath = f'gs://pipelines_artifacts/wind_measurements_pipeline/data/df_for_factors_{str(year)}.csv'
df = connector.read_file(filepath, assume_missing=True).to_pandas().set_index('name_station') # read the yearly monthly averages
df = df[df.index.isin(meter_ids)] # filter for meters in same provinces
df = df.dropna() # drop the missing values
for month in months.keys(): # calculate factor as ratio of monthly average
df[month] = df[month] / df['yearly'] # over the annual average
factors.append(df[months.keys()].mean().copy())
# Aggregate data
agg_data = concat(factors, axis=1)
agg_data.columns = years
agg_data = agg_data.rename(index=months)
agg_data['avg'] = agg_data.mean(axis=1)
return agg_data
factors = get_monthly_avg_per_year(connector=connector, meter_ids=meter_ids)
```
## 3 - Store Data
Save factors as a JSON of windspeed factor per each month.
```
# Store the average per month of year
save_json({'windspeed': factors['avg'].to_dict()}, 'df_factors_2018_2021.json')
```
| github_jupyter |
```
import numpy as np
import pandas as pd
from pathlib import Path
# visualization
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
```
## Read and clean datasets
```
def clean_Cohen_datasets(path):
"""Read local raw datasets and clean them"""
# read datasets
df = pd.read_csv(path)
# rename columns
df.rename(columns={"abstracts":"abstract", "label1":"label_abstract_screening", "label2":"label_included"}, inplace=True)
# recode inclusion indicators
df.label_abstract_screening = np.where(df.label_abstract_screening == "I", 1, 0)
df.label_included = np.where(df.label_included == "I", 1, 0)
# add record id
df.insert(0, "record_id", df.index + 1)
return df
df_ACEInhibitors = clean_Cohen_datasets("raw/ACEInhibitors.csv")
df_ADHD = clean_Cohen_datasets("raw/ADHD.csv")
df_Antihistamines = clean_Cohen_datasets("raw/Antihistamines.csv")
df_AtypicalAntipsychotics = clean_Cohen_datasets("raw/AtypicalAntipsychotics.csv")
df_BetaBlockers = clean_Cohen_datasets("raw/BetaBlockers.csv")
df_CalciumChannelBlockers = clean_Cohen_datasets("raw/CalciumChannelBlockers.csv")
df_Estrogens = clean_Cohen_datasets("raw/Estrogens.csv")
df_NSAIDS = clean_Cohen_datasets("raw/NSAIDS.csv")
df_Opiods = clean_Cohen_datasets("raw/Opiods.csv")
df_OralHypoglycemics = clean_Cohen_datasets("raw/OralHypoglycemics.csv")
df_ProtonPumpInhibitors = clean_Cohen_datasets("raw/ProtonPumpInhibitors.csv")
df_SkeletalMuscleRelaxants = clean_Cohen_datasets("raw/SkeletalMuscleRelaxants.csv")
df_Statins = clean_Cohen_datasets("raw/Statins.csv")
df_Triptans = clean_Cohen_datasets("raw/Triptans.csv")
df_UrinaryIncontinence = clean_Cohen_datasets("raw/UrinaryIncontinence.csv")
```
## Export datasets
```
Path("output/local").mkdir(parents=True, exist_ok=True)
df_ACEInhibitors.to_csv("output/local/ACEInhibitors.csv", index=False)
df_ADHD.to_csv("output/local/ADHD.csv", index=False)
df_Antihistamines.to_csv("output/local/Antihistamines.csv", index=False)
df_AtypicalAntipsychotics.to_csv("output/local/AtypicalAntipsychotics.csv", index=False)
df_BetaBlockers.to_csv("output/local/BetaBlockers.csv", index=False)
df_CalciumChannelBlockers.to_csv("output/local/CalciumChannelBlockers.csv", index=False)
df_Estrogens.to_csv("output/local/Estrogens.csv", index=False)
df_NSAIDS.to_csv("output/local/NSAIDS.csv", index=False)
df_Opiods.to_csv("output/local/Opiods.csv", index=False)
df_OralHypoglycemics.to_csv("output/local/OralHypoglycemics.csv", index=False)
df_ProtonPumpInhibitors.to_csv("output/local/ProtonPumpInhibitors.csv", index=False)
df_SkeletalMuscleRelaxants.to_csv("output/local/SkeletalMuscleRelaxants.csv", index=False)
df_Statins.to_csv("output/local/Statins.csv", index=False)
df_Triptans.to_csv("output/local/Triptans.csv", index=False)
df_UrinaryIncontinence.to_csv("output/local/UrinaryIncontinence.csv", index=False)
```
## Dataset statistics
See `process_Cohen_datasets_online.ipynb`.
| github_jupyter |
<a href="https://colab.research.google.com/github/jpabloglez/Master_IA_Sanidad/blob/main/2_3_3_Exploracion_visual_de_datos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data exploration
## The Diabetes dataset
```
"""
In this notebook we work with the Diabetes dataset from scikit-learn
https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset
and we use the Seaborn library for data visualization
https://seaborn.pydata.org/
"""
# The snippet below assumed a `Diabetes` tabular-dataset class that is not
# imported in this notebook, so it is left commented out; we load the data
# through scikit-learn instead:
# diabetes = Diabetes.get_tabular_dataset()
# diabetes_df = diabetes.to_pandas_dataframe()
# print("INFO", diabetes_df.info())
from sklearn.datasets import load_diabetes
import pandas as pd
import seaborn as sns
sns.set_style("whitegrid") # show the position grid in the plots
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', None) # this option makes all columns of the dataset visible
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
data = load_diabetes()
df = pd.DataFrame(data=data.data, columns=data.feature_names)
print(df.describe())
print("\n", 50 * "*", "\n Dataset description: \n\n", data.DESCR)
# The dataset's DESCR attribute gives us extended information about the working dataset
# Let's rename the columns following the clinical-variable nomenclature
df = df.rename(columns={'s1': 'tc', 's2': 'ldl', 's3': 'hdl', 's4': 'tch', 's5': 'ltg', 's6': 'glu'})
print(df)
# Note: as you may have noticed, this dataset has been normalized beforehand
# to ease the application of certain machine learning models
```
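The note above about prior normalization can be checked directly: in scikit-learn's version of this dataset, each feature column is mean-centered and scaled so that its sum of squares equals one:

```
import numpy as np
from sklearn.datasets import load_diabetes

X = load_diabetes().data
print(np.abs(X.mean(axis=0)).max())  # close to zero: every column is mean-centered
print((X ** 2).sum(axis=0))          # each column's sum of squares is ~1
```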
## Scatter plot
```
# Let's see how LDL levels relate to the Body Mass Index (BMI)
fig = plt.figure(1, figsize=(15,5))
fig.add_subplot(121)
sns.scatterplot(x=df['bmi'], y=df['ldl'], palette='Pastel1', legend='full')
plt.xlabel('BMI')
plt.ylabel('LDL')
# To assess the separation between groups we would need a categorical label
# for each observation, so let's make a dummy one
import numpy as np
class_l = np.random.randint(2, size=len(df['bmi'].values))
df_l = df
df_l['class'] = class_l
fig.add_subplot(122)
sns.scatterplot(data=df_l, x='bmi', y='ldl', palette='Pastel1', legend='full', hue='class')
# As you can see, because we used a random assignment the classes appear mixed together
```
## Box plots
```
# For readability and to reduce execution time,
# we restrict the dataset to the following three variables
df_s = df[['age', 'bmi', 'ldl']]
sns.boxplot(data=df_s)
```
## Distribution plots
```
from scipy.stats import norm # use scipy.stats' normal distribution to
# fit the histogram of the distribution
pars = norm.fit(df['bmi'].values)
sns.distplot(df['bmi'], kde=False, fit=norm, fit_kws={'color': 'r', 'linewidth': 2.5})
plt.title("Mean: {:.2f}, Sigma: {:.2f}".format(pars[0], pars[1]))
```
## Pair plots
```
df_l = df_l[['age', 'bmi', 'ldl', 'class']]
sns.pairplot(df_l, palette='Pastel1', hue='class')
plt.show()
```
| github_jupyter |
# To solve 'Tower of Hanoi' using recursion
______

(left to right: source pillar, helper pillar, destination pillar)
Let the number of disks $(n)$ be 3; the first pillar is the source pillar, the middle pillar is the helper pillar, and the last pillar is the destination pillar.
The objective is to move the entire stack of disks to the destination pillar, obeying the following rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one of the pillars and placing it on another pillar.
3) No bigger disk may be placed on top of a smaller disk.
Now,
When we consider only 3 disks on the source pillar, we can complete the task in $2^3-1 = 7$ moves.
The following steps move the disks from the source pillar to the destination pillar:
Step 1 - Move the top (smallest) disk to the destination pillar
Step 2 - Move the second disk, i.e. the second-largest disk, to the helper pillar
Step 3 - Move the smallest disk from the destination pillar to the helper pillar
Now the biggest disk remains on the source pillar and the other disks are on the helper pillar. With n = 3, we can say that n-1 (i.e. 3-1 = 2) disks have been moved to the helper pillar.
Step 4 - Move the biggest disk from the source pillar to the destination pillar
Step 5 - Now move the smallest disk from the helper pillar to the source pillar
Step 6 - Move the second-biggest disk from the helper pillar to the destination pillar
Step 7 - Finally, move the smallest disk from the source pillar to the destination pillar
The task is now complete, and by considering steps 5, 6 and 7 we can say that n-1 disks have been moved to the destination pillar.
### Recursive Algorithm for 'Tower of Hanoi':
From the steps above we can write the recursive algorithm:
Step 1 - Move (n-1) disks to the helper pillar
Step 2 - Move the biggest disk from the source pillar to the destination pillar
Step 3 - Move (n-1) disks to the destination pillar
### PROGRAM:
```
def Tower_of_hanoi(n , source_pillar, destination_pillar, helper_pillar):
# Base case: if only one disk is on the source pillar, we can move it directly from the source pillar to the destination pillar
if n == 1:
print ("Move disk 1 from", source_pillar,"to",destination_pillar)
return
# Or if there is no disk in source pillar:
elif n == 0:
print("No disk in Tower of Hanoi")
return
Tower_of_hanoi(n-1, source_pillar, helper_pillar, destination_pillar)
print ("Move disk",n,"from", source_pillar,"to",destination_pillar)
Tower_of_hanoi(n-1, helper_pillar, destination_pillar, source_pillar)
n = int(input("Number of disk in Tower of Hanoi: "))
print()
Tower_of_hanoi(n, 'source pillar', 'destination pillar', 'helper pillar')
print()
print("Task Completed!")
```
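As a quick sanity check of the $2^n - 1$ claim above, we can count the moves the recursion performs instead of printing them:

```
def count_moves(n):
    """Return the number of single-disk moves the recursive solution performs."""
    if n <= 1:
        return max(n, 0)  # 1 move for a single disk, 0 moves for an empty tower
    # move n-1 disks aside, move the biggest disk, then move n-1 disks back on top
    return count_moves(n - 1) + 1 + count_moves(n - 1)

for n in range(1, 6):
    print(n, count_moves(n))  # matches 2**n - 1: 1, 3, 7, 15, 31
```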
_____
| github_jupyter |
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.time import Time
import astropy.units as u
from rms import Planet
times, spotted_lc, spotless_lc = np.loadtxt('ring.txt', unpack=True)
d = Planet(per=4.049959, inc=90, a=39.68, t0=0,
rp=(0.3566/100)**0.5, lam=0, ecc=0, w=90)
t14 = d.per/np.pi * np.arcsin( np.sqrt((1 + d.rp)**2 - d.b**2) / np.sin(np.radians(d.inc)) / d.a)
t23 = d.per/np.pi * np.arcsin( np.sqrt((1 - d.rp)**2 - d.b**2) / np.sin(np.radians(d.inc)) / d.a)
# plt.plot(times, spotted_lc - spotless_lc)
plt.plot(times, spotless_lc)
plt.plot(times, spotted_lc)
for i in [1, -1]:
plt.axvline(i*t14/2, color='k')
plt.axvline(i*t23/2, color='k')
from scipy.optimize import fmin_l_bfgs_b
from batman import TransitModel
from copy import deepcopy
d.limb_dark = 'quadratic'
d.u = [0.2, 0.1]
d.fp = 0
def transit_model(times, rprs, params):
trial_params = deepcopy(params)
trial_params.rp = rprs # set the trial radius on the copy so the caller's params are not mutated
m = TransitModel(trial_params, times, supersample_factor=7,
exp_time=times[1]-times[0])
lc = m.light_curve(trial_params)
return lc
def chi2(p, times, y, params):
rprs = p[0]
return np.sum((transit_model(times, rprs, params) - y)**2)
initp =[d.rp]
d0 = fmin_l_bfgs_b(chi2, initp, approx_grad=True,
args=(times, spotless_lc, d),
bounds=[[0, 0.5]])[0][0]
mask_in_transit = (times > 0.5*(t14 + t23)/2) | (times < -0.5*(t14 + t23)/2)
# mask_in_transit = (times > t23/2) | (times < -t23/2)
bounds = [[0.5 * d.rp, 1.5 * d.rp]]
d1 = fmin_l_bfgs_b(chi2, initp, approx_grad=True,
args=(times[mask_in_transit], spotless_lc[mask_in_transit], d),
bounds=bounds)[0][0]
d2 = fmin_l_bfgs_b(chi2, initp, approx_grad=True,
args=(times, spotted_lc, d),
bounds=bounds)[0][0]
d3 = fmin_l_bfgs_b(chi2, initp, approx_grad=True,
args=(times[mask_in_transit], spotted_lc[mask_in_transit], d),
bounds=bounds)[0][0]
print("unspotted full LC \t = {0}\nunspotted only OOT \t = {1}\nspotted full LC "
"\t = {2}\nspotted only OOT \t = {3}".format(d0, d1, d2, d3))
fractional_err = [(d0-d.rp)/d.rp, (d1-d.rp)/d.rp, (d2-d.rp)/d.rp, (d3-d.rp)/d.rp]
print("unspotted full LC \t = {0}\nunspotted only OOT \t = {1}\nspotted full LC "
"\t = {2}\nspotted only OOT \t = {3}".format(*fractional_err))
fig, ax = plt.subplots(1, 2, figsize=(10, 4), sharey='row', sharex=True)
ax[0].plot(times, spotless_lc, label='unspotted')
ax[0].plot(times, spotted_lc, label='spotted')
ax[1].scatter(times[mask_in_transit], spotted_lc[mask_in_transit], label='obs', zorder=-10,
s=5, color='k')
ax[1].scatter(times[~mask_in_transit], spotted_lc[~mask_in_transit], label='obs masked', zorder=-10,
s=5, color='gray')
ax[1].plot(times, transit_model(times, d2, d), label='fit: full')
ax[1].plot(times, transit_model(times, d3, d), label='fit: $T_{1,1.5}$+$T_{3.5,4}$')
# ax[1, 1].scatter(range(2), fractional_err[2:])
for axis in fig.axes:
axis.grid(ls=':')
for s in ['right', 'top']:
axis.spines[s].set_visible(False)
axis.legend()
fig.savefig('ringofspots.pdf', bbox_inches='tight')
fig, ax = plt.subplots(2, 1, figsize=(5, 8))
ax[0].plot(times, spotless_lc, label='Spotless')
ax[0].plot(times, spotted_lc, label='Spotted')
from scipy.signal import savgol_filter
filtered = savgol_filter(spotted_lc, 101, 2, deriv=2)
n = len(times)//2
mins = [np.argmin(filtered[:n]), n + np.argmin(filtered[n:])]
maxes = [np.argmax(filtered[:n]), n + np.argmax(filtered[n:])]
ax[1].plot(times, filtered)
# t14 = -1*np.diff(times[mins])[0]
# t23 = -1*np.diff(times[maxes])[0]
ax[1].scatter(times[mins], filtered[mins], color='k', zorder=10)
ax[1].scatter(times[maxes], filtered[maxes], color='k', zorder=10)
for ts, c in zip([times[mins], times[maxes]], ['k', 'gray']):
for t in ts:
ax[0].axvline(t, ls='--', color=c, zorder=-10)
ax[1].axvline(t, ls='--', color=c, zorder=-10)
for axis in fig.axes:
axis.grid(ls=':')
for s in ['right', 'top']:
axis.spines[s].set_visible(False)
axis.legend()
ax[0].set_ylabel('$\mathcal{F}$', fontsize=20)
ax[1].set_ylabel('$\ddot{\mathcal{F}}$', fontsize=20)
ax[1].set_xlabel('Time [d]')
fig.savefig('savgol.pdf', bbox_inches='tight')
plt.show()
one_plus_k = np.sqrt((np.sin(t14*np.pi/d.per) * np.sin(np.radians(d.inc)) * d.a)**2 + d.b**2)
one_minus_k = np.sqrt((np.sin(t23*np.pi/d.per) * np.sin(np.radians(d.inc)) * d.a)**2 + d.b**2)
k = (one_plus_k - one_minus_k)/2
print((k - d.rp)/d.rp)
```
| github_jupyter |
[Bag of Words Meets Bags of Popcorn](https://www.kaggle.com/c/word2vec-nlp-tutorial/data)
======
## Data Set
The labeled data set consists of 50,000 IMDB movie reviews, specially selected for sentiment analysis. The sentiment of the reviews is binary: an IMDB rating < 5 results in a sentiment score of 0, and a rating >= 7 results in a sentiment score of 1. No individual movie has more than 30 reviews. The 25,000-review labeled training set does not include any of the same movies as the 25,000-review test set. In addition, there are another 50,000 IMDB reviews provided without any rating labels.
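For clarity, the labeling rule described above can be written out explicitly (a sketch only; the dataset already ships with the labels applied):

```
def sentiment_label(imdb_rating):
    """Map a raw IMDB rating to the binary sentiment used in this dataset."""
    if imdb_rating < 5:
        return 0  # negative review
    if imdb_rating >= 7:
        return 1  # positive review
    return None   # ratings of 5-6 are excluded from the labeled set
```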
## File descriptions
labeledTrainData.tsv - The labeled training set. The file is tab-delimited and has a header row followed by 25,000 rows containing an id, sentiment, and text for each review.
## Data fields
* id - Unique ID of each review
* sentiment - Sentiment of the review; 1 for positive reviews and 0 for negative reviews
* review - Text of the review
## Objective
The objective for this dataset is to predict **sentiment** (positive or negative) based on **review**, so X is the **review** column and y is the **sentiment** column.
## 1. Load Dataset
we only focus on the "labeledTrainData.tsv" file
Let's first of all have a look at the data.
[Click here to download dataset](https://s3-ap-southeast-1.amazonaws.com/ml101-khanhnguyen/week3/assignment/labeledTrainData.tsv)
```
# Import pandas, numpy
import pandas as pd
import numpy as np
# Read dataset with extra params sep='\t', encoding="latin-1"
df = pd.read_csv('labeledTrainData.tsv',sep='\t',encoding="latin-1")
df.head()
```
## 2. Preprocessing
```
import nltk
nltk.download(['brown', 'stopwords']) # fetch just the corpora we need instead of opening the interactive downloader
from nltk.corpus import brown
brown.words()
from nltk.corpus import stopwords
stop = stopwords.words('english')
stop
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = (re.sub('[\W]+', ' ', text.lower()) + ' ' + ' '.join(emoticons).replace('-', ''))
return text
#test the function preprocessor()
print(preprocessor('With all this stuff going down at the moment #$::? )'))
from nltk.stem import PorterStemmer
porter = PorterStemmer() # instantiate the stemmer used by tokenizer_porter below
#split a text into list of words
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
token = [porter.stem(word) for word in text.split()]
return token
# split the dataset in train and test
from sklearn.model_selection import train_test_split
X = df['review']
y = df['sentiment']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
```
## 3. Create Model and Train
Using **Pipeline** to chain the **tfidf** step and the **LogisticRegression** step
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words=stop,
tokenizer=tokenizer_porter,
preprocessor=preprocessor)
# Import Pipeline, LogisticRegression, TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
clf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
clf.fit(X_train, y_train)
```
## 4. Evaluate Model
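This section was left empty in the notebook; below is one possible evaluation sketch. It rebuilds a tiny self-contained pipeline on a made-up corpus so it runs on its own; in the notebook you would instead call `clf.predict(X_test)` on the pipeline and split from the previous sections.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Tiny stand-in corpus (illustrative only) so the sketch is runnable by itself
texts = ["great movie", "loved it", "terrible film", "awful acting"] * 25
labels = [1, 1, 0, 0] * 25
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3, random_state=0)

clf = Pipeline([('vect', TfidfVectorizer()), ('clf', LogisticRegression())])
clf.fit(X_tr, y_tr)

y_pred = clf.predict(X_te)
print('Accuracy: {:.4f}'.format(accuracy_score(y_te, y_pred)))
print(confusion_matrix(y_te, y_pred))
print(classification_report(y_te, y_pred))
```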
## 5. Export Model
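Also left empty; a common way to export a fitted scikit-learn pipeline is `joblib`. The pipeline below is a small stand-in fitted on made-up data so the sketch runs on its own; in the notebook you would dump the `clf` trained above.

```
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Small stand-in pipeline (illustrative data) in place of the notebook's trained clf
clf = Pipeline([('vect', TfidfVectorizer()), ('clf', LogisticRegression())])
clf.fit(["good great", "bad awful"] * 20, [1, 0] * 20)

joblib.dump(clf, 'sentiment_clf.joblib')        # serialize the whole pipeline
restored = joblib.load('sentiment_clf.joblib')  # reload it later for inference
print(restored.predict(["good great movie"]))
```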
| github_jupyter |
```
# Import data from Excel sheet
import pandas as pd
df = pd.read_excel('aibl_ptdemog_final.xlsx', sheet_name='aibl_ptdemog_final')
#print(df)
sid = df['RID']
grp = df['DXCURREN']
age = df['age']
sex = df['PTGENDER(1=Female)']
tiv = df['Total'] # TIV
field = df['field_strength']
grpbin = (grp > 1) # 1=CN, ...
# Scan for nifti file names
import glob
dataAIBL = sorted(glob.glob('mwp1_MNI_AIBL/*.nii.gz'))
dataFiles = dataAIBL
numfiles = len(dataFiles)
print('Found ', str(numfiles), ' nifti files')
# Match covariate information
import re
import numpy as np
from pandas import DataFrame
from keras.utils import to_categorical
debug = False
cov_idx = [-1] * numfiles # list; array: np.full((numfiles, 1), -1, dtype=int)
print('Matching covariates for loaded files ...')
for i,id in enumerate(sid):
p = [j for j,x in enumerate(dataFiles) if re.search('_%d_MR_' % id, x)] # extract ID numbers from filename, translate to Excel row index
if len(p)==0:
if debug: print('Did not find %04d' % id) # did not find Excel sheet subject ID in loaded file selection
else:
if debug: print('Found %04d in %s: %s' % (id, p[0], dataFiles[p[0]]))
cov_idx[p[0]] = i # store Excel index i for data file index p[0]
print('Checking for scans not found in Excel sheet: ', sum(x<0 for x in cov_idx))
labels = pd.DataFrame({'Group':grpbin}).iloc[cov_idx, :]
labels = to_categorical(np.asarray(labels)) # use grps to access original labels
grps = pd.DataFrame({'Group':grp, 'RID':sid}).iloc[cov_idx, :]
# Load original data from disk
import h5py
hf = h5py.File('orig_images_AIBL_wb_mwp1_CAT12_MNI.hdf5', 'r')
print(hf.keys()) # list the stored datasets
images = np.array(hf.get('images'))
hf.close()
print(images.shape)
# specify version of tensorflow
#%tensorflow_version 1.x # <- use this for Google colab
import tensorflow as tf
# downgrade to specific version
#!pip install tensorflow-gpu==1.15
#import tensorflow as tf
print(tf.__version__)
# disable tensorflow deprecation warnings
import logging
logging.getLogger('tensorflow').disabled=True
# helper function to obtain performance result values
def get_values(conf_matrix):
assert conf_matrix.shape==(2,2)
tn, fp, fn, tp = conf_matrix.ravel()
sen = tp / (tp+fn)
spec = tn / (fp+tn)
ppv = tp / (tp+fp)
npv = tn / (tn+fn)
f1 = 2 * ((ppv * sen) / (ppv + sen))
bacc = (spec + sen) / 2
return bacc, sen, spec, ppv, npv, f1
# validation
import numpy as np
from sklearn.metrics import roc_curve, auc
from matplotlib import pyplot as plt
%matplotlib inline
import keras
from keras import models
import tensorflow as tf
from sklearn.metrics import confusion_matrix
acc_AD, acc_MCI, auc_AD, auc_MCI = [], [], [], []
bacc_AD, bacc_MCI = [], []
sen_AD, sen_MCI, spec_AD, spec_MCI = [], [], [], []
ppv_AD, ppv_MCI, npv_AD, npv_MCI = [], [], [], []
f1_AD, f1_MCI = [], []
num_kfold = 10 # number of cross-validation loops equal to number of models
batch_size = 20
for k in range(num_kfold):
print('validating model model_rawdat_checkpoints/rawmodel_wb_cv%d.best.hdf5' % (k+1))
mymodel = models.load_model('model_rawdat_checkpoints/rawmodel_wb_cv%d.best.hdf5' % (k+1))
# calculate area under the curve
# AUC as optimization function during training: https://stackoverflow.com/questions/41032551/how-to-compute-receiving-operating-characteristic-roc-and-auc-in-keras
pred = mymodel.predict(images, batch_size=batch_size)
fpr = dict()
tpr = dict()
roc_auc = dict()
acc = dict()
for i in range(2): # classes dummy vector: 0 - CN, 1 - MCI/AD
fpr[i], tpr[i], _ = roc_curve(labels[:, i], pred[:,i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Plot the ROC curve
plt.figure()
plt.plot(fpr[1], tpr[1], color='darkorange', label='ROC curve (area = %0.2f)' % roc_auc[1])
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
# redo AUC for binary comparison: AD vs. HC and MCI vs. HC
for i in [2,3]:
grpi = np.equal(grps.Group.to_numpy(dtype=int), np.ones((grps.shape[0],), dtype=int)*i)
grp1 = np.equal(grps.Group.to_numpy(dtype=int), np.ones((grps.shape[0],), dtype=int))
grpidx = np.logical_or(grpi, grp1)
fpr[i], tpr[i], _ = roc_curve(labels[grpidx, 1], pred[grpidx, 1])
roc_auc[i] = auc(fpr[i], tpr[i])
acc[i] = np.mean((labels[grpidx, 1] == np.round(pred[grpidx, 1])).astype(int))*100
print('AUC for MCI vs. CN = %0.3f' % roc_auc[2])
print('AUC for AD vs. CN = %0.3f' % roc_auc[3])
print('Acc for MCI vs. CN = %0.1f' % acc[2])
print('Acc for AD vs. CN = %0.1f' % acc[3])
auc_AD.append(roc_auc[3])
auc_MCI.append(roc_auc[2])
acc_AD.append(acc[3])
acc_MCI.append(acc[2])
print('confusion matrix')
confmat = confusion_matrix(grps.Group, np.round(pred[:, 1]))
bacc, sen, spec, ppv, npv, f1 = get_values(confmat[(1,2),0:2]) # MCI
bacc_MCI.append(bacc); sen_MCI.append(sen); spec_MCI.append(spec); ppv_MCI.append(ppv); npv_MCI.append(npv); f1_MCI.append(f1)
bacc, sen, spec, ppv, npv, f1 = get_values(confmat[(1,3),0:2]) # AD
bacc_AD.append(bacc); sen_AD.append(sen); spec_AD.append(spec); ppv_AD.append(ppv); npv_AD.append(npv); f1_AD.append(f1)
print(confmat[1:4,0:2])
# print model performance summary
from statistics import mean,stdev
print('Mean AUC for MCI vs. CN = %0.3f +/- %0.3f' % (mean(auc_MCI), stdev(auc_MCI)))
print('Mean AUC for AD vs. CN = %0.3f +/- %0.3f' % (mean(auc_AD), stdev(auc_AD)))
print('Mean Acc for MCI vs. CN = %0.3f +/- %0.3f' % (mean(acc_MCI), stdev(acc_MCI)))
print('Mean Acc for AD vs. CN = %0.3f +/- %0.3f' % (mean(acc_AD), stdev(acc_AD)))
print('Mean Bacc for MCI vs. CN = %0.3f +/- %0.3f' % (mean(bacc_MCI), stdev(bacc_MCI)))
print('Mean Bacc for AD vs. CN = %0.3f +/- %0.3f' % (mean(bacc_AD), stdev(bacc_AD)))
print('Mean Sen for MCI vs. CN = %0.3f +/- %0.3f' % (mean(sen_MCI), stdev(sen_MCI)))
print('Mean Sen for AD vs. CN = %0.3f +/- %0.3f' % (mean(sen_AD), stdev(sen_AD)))
print('Mean Spec for MCI vs. CN = %0.3f +/- %0.3f' % (mean(spec_MCI), stdev(spec_MCI)))
print('Mean Spec for AD vs. CN = %0.3f +/- %0.3f' % (mean(spec_AD), stdev(spec_AD)))
print('Mean PPV for MCI vs. CN = %0.3f +/- %0.3f' % (mean(ppv_MCI), stdev(ppv_MCI)))
print('Mean PPV for AD vs. CN = %0.3f +/- %0.3f' % (mean(ppv_AD), stdev(ppv_AD)))
print('Mean NPV for MCI vs. CN = %0.3f +/- %0.3f' % (mean(npv_MCI), stdev(npv_MCI)))
print('Mean NPV for AD vs. CN = %0.3f +/- %0.3f' % (mean(npv_AD), stdev(npv_AD)))
print('Mean F1 for MCI vs. CN = %0.3f +/- %0.3f' % (mean(f1_MCI), stdev(f1_MCI)))
print('Mean F1 for AD vs. CN = %0.3f +/- %0.3f' % (mean(f1_AD), stdev(f1_AD)))
results = pd.DataFrame({'AUC_MCI':auc_MCI, 'Acc_MCI':acc_MCI, 'Bacc_MCI':bacc_MCI, 'f1_MCI':f1_MCI,
'sen_MCI':sen_MCI, 'spec_MCI':spec_MCI, 'ppv_MCI':ppv_MCI, 'npv_MCI':npv_MCI,
'AUC_AD':auc_AD, 'Acc_AD':acc_AD, 'Bacc_AD':bacc_AD, 'f1_AD':f1_AD,
'sen_AD':sen_AD, 'spec_AD':spec_AD, 'ppv_AD':ppv_AD, 'npv_AD':npv_AD})
print(results)
results.to_csv('results_xval_rawdat_AIBL_checkpoints.csv')
```
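As a quick sanity check (an illustrative example, not part of the original notebook), the `get_values` helper defined above can be applied to a confusion matrix with known counts:

```python
import numpy as np

def get_values(conf_matrix):
    # Same helper as above: balanced accuracy, sensitivity, specificity, PPV, NPV, F1.
    assert conf_matrix.shape == (2, 2)
    tn, fp, fn, tp = conf_matrix.ravel()
    sen = tp / (tp + fn)
    spec = tn / (fp + tn)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    f1 = 2 * ((ppv * sen) / (ppv + sen))
    bacc = (spec + sen) / 2
    return bacc, sen, spec, ppv, npv, f1

cm = np.array([[80, 20],   # 80 TN, 20 FP
               [10, 90]])  # 10 FN, 90 TP
bacc, sen, spec, ppv, npv, f1 = get_values(cm)
print(round(bacc, 2), sen, spec)  # 0.85 0.9 0.8
```

With these counts, F1 reduces to 2*TP/(2*TP+FP+FN) = 180/210, which matches the value returned by the helper.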
# PyTorch Basics
```
import torch
import numpy as np
torch.manual_seed(1234)
```
## Tensors
* Scalar is a single number.
* Vector is an array of numbers.
* Matrix is a 2-D array of numbers.
* Tensors are N-D arrays of numbers.
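This hierarchy can be illustrated with NumPy arrays (the same shapes carry over to `torch` tensors, e.g. via `torch.from_numpy`):

```python
import numpy as np

scalar = np.array(5.0)         # 0-D: a single number
vector = np.array([1.0, 2.0])  # 1-D: an array of numbers
matrix = np.ones((2, 3))       # 2-D: rows x columns
tensor = np.zeros((2, 3, 4))   # N-D: here N=3, e.g. batch x rows x columns

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
```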
#### Creating Tensors
You can create tensors by specifying the shape as arguments. Here is a tensor with 2 rows and 3 columns
```
def describe(x):
print("Type: {}".format(x.type()))
print("Shape/size: {}".format(x.shape))
print("Values: \n{}".format(x))
describe(torch.Tensor(2, 3))
describe(torch.randn(2, 3))
```
It's common in prototyping to create a tensor with random numbers of a specific shape.
```
x = torch.rand(2, 3)
describe(x)
```
You can also initialize tensors of ones or zeros.
```
describe(torch.zeros(2, 3))
x = torch.ones(2, 3)
describe(x)
x.fill_(5)
describe(x)
```
Tensors can be initialized and then filled in place.
Note: operations that end in an underscore (`_`) are in-place operations.
```
x = torch.Tensor(3,4).fill_(5)
print(x.type())
print(x.shape)
print(x)
```
Tensors can be initialized from a list of lists
```
x = torch.Tensor([[1, 2,],
[2, 4,]])
describe(x)
```
Tensors can be initialized from numpy matrices
```
npy = np.random.rand(2, 3)
describe(torch.from_numpy(npy))
print(npy.dtype)
```
#### Tensor Types
The FloatTensor is the default tensor type we have been creating all along
```
import torch
x = torch.arange(6).view(2, 3)
describe(x)
x = torch.FloatTensor([[1, 2, 3],
[4, 5, 6]])
describe(x)
x = x.long()
describe(x)
x = torch.tensor([[1, 2, 3],
[4, 5, 6]], dtype=torch.int64)
describe(x)
x = x.float()
describe(x)
x = torch.randn(2, 3)
describe(x)
describe(torch.add(x, x))
describe(x + x)
x = torch.arange(6)
describe(x)
x = x.view(2, 3)
describe(x)
describe(torch.sum(x, dim=0))
describe(torch.sum(x, dim=1))
describe(torch.transpose(x, 0, 1))
import torch
x = torch.arange(6).view(2, 3)
describe(x)
describe(x[:1, :2])
describe(x[0, 1])
indices = torch.LongTensor([0, 2])
describe(torch.index_select(x, dim=1, index=indices))
indices = torch.LongTensor([0, 0])
describe(torch.index_select(x, dim=0, index=indices))
row_indices = torch.arange(2).long()
col_indices = torch.LongTensor([0, 1])
describe(x[row_indices, col_indices])
```
Long Tensors are used for indexing operations and mirror the `int64` numpy type
```
x = torch.LongTensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
describe(x)
print(x.dtype)
print(x.numpy().dtype)
```
You can convert a FloatTensor to a LongTensor
```
x = torch.FloatTensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
x = x.long()
describe(x)
```
### Special Tensor initializations
We can create a vector of incremental numbers
```
x = torch.arange(0, 10)
print(x)
```
Sometimes it's useful to have an integer-based arange for indexing
```
x = torch.arange(0, 10).long()
print(x)
```
## Operations
Using tensors to do linear algebra is a foundation of modern deep learning practice.
Reshaping moves the numbers in a tensor around while preserving their order. In PyTorch, reshaping is called `view`
```
x = torch.arange(0, 20)
print(x.view(1, 20))
print(x.view(2, 10))
print(x.view(4, 5))
print(x.view(5, 4))
print(x.view(10, 2))
print(x.view(20, 1))
```
We can use view to add size-1 dimensions, which can be useful for combining with other tensors. This is called broadcasting.
```
x = torch.arange(12).view(3, 4)
y = torch.arange(4).view(1, 4)
z = torch.arange(3).view(3, 1)
print(x)
print(y)
print(z)
print(x + y)
print(x + z)
```
Unsqueeze and squeeze will add and remove 1-dimensions.
```
x = torch.arange(12).view(3, 4)
print(x.shape)
x = x.unsqueeze(dim=1)
print(x.shape)
x = x.squeeze()
print(x.shape)
```
all of the standard mathematics operations apply (such as `add` below)
```
x = torch.rand(3,4)
print("x: \n", x)
print("--")
print("torch.add(x, x): \n", torch.add(x, x))
print("--")
print("x+x: \n", x + x)
```
The convention of `_` indicating in-place operations continues:
```
x = torch.arange(12).reshape(3, 4)
print(x)
print(x.add_(x))
```
There are many operations that reduce a dimension, such as `sum`:
```
x = torch.arange(12).reshape(3, 4)
print("x: \n", x)
print("---")
print("Summing across rows (dim=0): \n", x.sum(dim=0))
print("---")
print("Summing across columns (dim=1): \n", x.sum(dim=1))
```
#### Indexing, Slicing, Joining and Mutating
```
x = torch.arange(6).view(2, 3)
print("x: \n", x)
print("---")
print("x[:2, :2]: \n", x[:2, :2])
print("---")
print("x[0][1]: \n", x[0][1])
print("---")
print("Setting [0][1] to be 8")
x[0][1] = 8
print(x)
```
We can select a subset of a tensor using the `index_select`
```
x = torch.arange(9).view(3,3)
print(x)
print("---")
indices = torch.LongTensor([0, 2])
print(torch.index_select(x, dim=0, index=indices))
print("---")
indices = torch.LongTensor([0, 2])
print(torch.index_select(x, dim=1, index=indices))
```
We can also use numpy-style advanced indexing:
```
x = torch.arange(9).view(3,3)
indices = torch.LongTensor([0, 2])
print(x[indices])
print("---")
print(x[indices, :])
print("---")
print(x[:, indices])
```
We can combine tensors by concatenating them. First, concatenating on the rows
```
x = torch.arange(6).view(2,3)
describe(x)
describe(torch.cat([x, x], dim=0))
describe(torch.cat([x, x], dim=1))
describe(torch.stack([x, x]))
```
We can also concatenate along the second dimension (dim=1), i.e. the columns.
```
x = torch.arange(9).view(3,3)
print(x)
print("---")
new_x = torch.cat([x, x, x], dim=1)
print(new_x.shape)
print(new_x)
```
We can also concatenate on a new 0th dimension to "stack" the tensors:
```
x = torch.arange(9).view(3,3)
print(x)
print("---")
new_x = torch.stack([x, x, x])
print(new_x.shape)
print(new_x)
```
#### Linear Algebra Tensor Functions
Transposing allows you to swap which dimensions lie on which axes, so all the rows become columns and vice versa.
```
x = torch.arange(0, 12).view(3,4)
print("x: \n", x)
print("---")
print("x.transpose(1, 0): \n", x.transpose(1, 0))
```
A three dimensional tensor would represent a batch of sequences, where each sequence item has a feature vector. It is common to switch the batch and sequence dimensions so that we can more easily index the sequence in a sequence model.
Note: transpose only lets you swap two axes; permute (in the next cell) allows rearranging multiple axes at once.
```
batch_size = 3
seq_size = 4
feature_size = 5
x = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)
print("x.shape: \n", x.shape)
print("x: \n", x)
print("-----")
print("x.transpose(1, 0).shape: \n", x.transpose(1, 0).shape)
print("x.transpose(1, 0): \n", x.transpose(1, 0))
```
Permute is a more general version of transpose:
```
batch_size = 3
seq_size = 4
feature_size = 5
x = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)
print("x.shape: \n", x.shape)
print("x: \n", x)
print("-----")
print("x.permute(1, 0, 2).shape: \n", x.permute(1, 0, 2).shape)
print("x.permute(1, 0, 2): \n", x.permute(1, 0, 2))
```
Matrix multiplication is `mm`:
```
torch.randn(2, 3, requires_grad=True)
x1 = torch.arange(6).view(2, 3).float()
describe(x1)
x2 = torch.ones(3, 2)
x2[:, 1] += 1
describe(x2)
describe(torch.mm(x1, x2))
x = torch.arange(0, 12).view(3,4).float()
print(x)
x2 = torch.ones(4, 2)
x2[:, 1] += 1
print(x2)
print(x.mm(x2))
```
See the [PyTorch Math Operations Documentation](https://pytorch.org/docs/stable/torch.html#math-operations) for more!
## Computing Gradients
```
x = torch.tensor([[2.0, 3.0]], requires_grad=True)
z = 3 * x
print(z)
```
In this small snippet, you can see the gradient computations at work. We create a tensor and multiply it by 3. Then, we create a scalar output using `sum()`; a scalar output is needed to serve as the loss variable. Calling `backward` on the loss then computes its rate of change with respect to the inputs. Since the scalar was created with `sum`, each position in `z` and `x` contributes independently to the loss scalar.
The rate of change of the output with respect to x is just the constant 3 that we multiplied x by.
```
x = torch.tensor([[2.0, 3.0]], requires_grad=True)
print("x: \n", x)
print("---")
z = 3 * x
print("z = 3*x: \n", z)
print("---")
loss = z.sum()
print("loss = z.sum(): \n", loss)
print("---")
loss.backward()
print("after loss.backward(), x.grad: \n", x.grad)
```
### Example: Computing a conditional gradient
$$ \text{Find the gradient of } f(x) \text{ at } x=1 $$
$$ f(x)=\left\{
\begin{array}{ll}
\sin(x) & \text{if } x>0 \\
\cos(x) & \text{otherwise}
\end{array}
\right.$$
```
def f(x):
if (x.data > 0).all():
return torch.sin(x)
else:
return torch.cos(x)
x = torch.tensor([1.0], requires_grad=True)
y = f(x)
y.backward()
print(x.grad)
```
We could apply this to a larger vector too, but we need to make sure the output is a scalar:
```
x = torch.tensor([1.0, 0.5], requires_grad=True)
y = f(x)
# this is meant to break!
y.backward()
print(x.grad)
```
Making the output a scalar:
```
x = torch.tensor([1.0, 0.5], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)
```
But there is an issue: this isn't right for the edge case where the signs are mixed:
```
x = torch.tensor([1.0, -1], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)
x = torch.tensor([-0.5, -1], requires_grad=True)
y = f(x)
y.sum().backward()
print(x.grad)
```
This is because we aren't doing the boolean computation and subsequent application of cos and sin on an elementwise basis. So, to solve this, it is common to use masking:
```
def f2(x):
mask = torch.gt(x, 0).float()
return mask * torch.sin(x) + (1 - mask) * torch.cos(x)
x = torch.tensor([1.0, -1], requires_grad=True)
y = f2(x)
y.sum().backward()
print(x.grad)
def describe_grad(x):
if x.grad is None:
print("No gradient information")
else:
print("Gradient: \n{}".format(x.grad))
print("Gradient Function: {}".format(x.grad_fn))
import torch
x = torch.ones(2, 2, requires_grad=True)
describe(x)
describe_grad(x)
print("--------")
y = (x + 2) * (x + 5) + 3
describe(y)
z = y.mean()
describe(z)
describe_grad(x)
print("--------")
z.backward(create_graph=True, retain_graph=True)
describe_grad(x)
print("--------")
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
y.grad_fn
```
### CUDA Tensors
PyTorch's operations can seamlessly be used on the GPU or on the CPU. There are a couple of basic operations for moving tensors between the two.
```
print(torch.cuda.is_available())
x = torch.rand(3,3)
describe(x)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
x = torch.rand(3, 3).to(device)
describe(x)
print(x.device)
cpu_device = torch.device("cpu")
# this will break!
y = torch.rand(3, 3)
x + y
y = y.to(cpu_device)
x = x.to(cpu_device)
x + y
if torch.cuda.is_available(): # only if a GPU is available
a = torch.rand(3,3).to(device='cuda:0') # CUDA Tensor
print(a)
b = torch.rand(3,3).cuda()
print(b)
print(a + b)
a = a.cpu() # Error expected
print(a + b)
```
### Exercises
Some of these exercises require operations not covered in the notebook. You will have to look at [the documentation](https://pytorch.org/docs/) (on purpose!)
(Answers are at the bottom)
#### Exercise 1
Create a 2D tensor and then add a dimension of size 1 inserted at the 0th axis.
```
```
#### Exercise 2
Remove the extra dimension you just added to the previous tensor.
```
```
#### Exercise 3
Create a random tensor of shape 5x3 in the interval [3, 7)
```
```
#### Exercise 4
Create a tensor with values from a normal distribution (mean=0, std=1).
```
```
#### Exercise 5
Retrieve the indices of all the nonzero elements in the tensor torch.Tensor([1, 1, 1, 0, 1]).
```
```
#### Exercise 6
Create a random tensor of size (3,1) and then horizontally stack 4 copies together.
```
```
#### Exercise 7
Return the batch matrix-matrix product of two 3 dimensional matrices (a=torch.rand(3,4,5), b=torch.rand(3,5,4)).
```
```
#### Exercise 8
Return the batch matrix-matrix product of a 3D matrix and a 2D matrix (a=torch.rand(3,4,5), b=torch.rand(5,4)).
```
```
Answers below
```
```
Answers still below... keep going!
```
```
#### Exercise 1
Create a 2D tensor and then add a dimension of size 1 inserted at the 0th axis.
```
a = torch.rand(3,3)
a = a.unsqueeze(0)
print(a)
print(a.shape)
```
#### Exercise 2
Remove the extra dimension you just added to the previous tensor.
```
a = a.squeeze(0)
print(a.shape)
```
#### Exercise 3
Create a random tensor of shape 5x3 in the interval [3, 7)
```
3 + torch.rand(5, 3) * 4
```
#### Exercise 4
Create a tensor with values from a normal distribution (mean=0, std=1).
```
a = torch.rand(3,3)
a.normal_(mean=0, std=1)
```
#### Exercise 5
Retrieve the indices of all the nonzero elements in the tensor torch.Tensor([1, 1, 1, 0, 1]).
```
a = torch.Tensor([1, 1, 1, 0, 1])
torch.nonzero(a)
```
#### Exercise 6
Create a random tensor of size (3,1) and then horizontally stack 4 copies together.
```
a = torch.rand(3,1)
a.expand(3,4)
```
#### Exercise 7
Return the batch matrix-matrix product of two 3 dimensional matrices (a=torch.rand(3,4,5), b=torch.rand(3,5,4)).
```
a = torch.rand(3,4,5)
b = torch.rand(3,5,4)
torch.bmm(a, b)
```
#### Exercise 8
Return the batch matrix-matrix product of a 3D matrix and a 2D matrix (a=torch.rand(3,4,5), b=torch.rand(5,4)).
```
a = torch.rand(3,4,5)
b = torch.rand(5,4)
torch.bmm(a, b.unsqueeze(0).expand(a.size(0), *b.size()))
```
### END
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow 2.0 quickstart for beginners
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/quickstart/beginner"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/quickstart/beginner.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate and reflect the latest
[official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions to improve this translation, please submit a pull request to the
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, please join the
[docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs run directly in the browser, which is a great way to learn TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
1. In Colab, connect to a Python runtime: at the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: select *Runtime* > *Run all*.
Download and install the TensorFlow 2.0 beta package, and import TensorFlow into your program:
```
# Install TensorFlow
import tensorflow as tf
```
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). Convert the samples from integers to floating-point numbers:
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```
Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function for training:
```
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
Train and evaluate the model:
```
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
```
The image classifier is now trained to roughly 98% accuracy. To learn more, read the [TensorFlow tutorials](https://tensorflow.google.cn/tutorials/).
## DEMs coregistration demo
### Note: The data for co-registration should be projected to UTM (so the x/y units are metres).
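The notebook assumes the inputs are already warped to UTM. As a hypothetical helper (not one of the repo's `utils`), the EPSG code of the WGS84/UTM zone containing a lon/lat point can be computed and then passed to, e.g., `gdalwarp -t_srs EPSG:<code>`:

```python
def utm_epsg(lon, lat):
    """EPSG code of the WGS84/UTM zone containing (lon, lat), in degrees."""
    zone = int((lon + 180) // 6) + 1          # 6-degree-wide zones, zone 1 starts at 180W
    return (32600 if lat >= 0 else 32700) + zone  # 326xx north, 327xx south

# West Kunlun is roughly 81E, 35N -> UTM zone 44 north
print(utm_epsg(81.0, 35.0))  # 32644
```

This ignores the Norway/Svalbard zone exceptions, which do not apply here.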
```
import os
root_proj = '/Users/luo/OneDrive/GitHub/Glacier-in-RGI1305'
os.chdir(root_proj)
import numpy as np
import matplotlib.pyplot as plt
from utils.geotif_io import readTiff, writeTiff
from utils.imgShow import imgShow
from utils.crop_to_extent import crop_to_extent
from utils.raster_vec import vec2mask
import pybob.coreg_tools as ct
from pybob.GeoImg import GeoImg
path_srtm = 'data/dem-data/srtm-c/SRTMGL1_E_wkunlun_utm.tif' # master dem
path_tandem = 'data/dem-data/tandem-x/dems_mosaic_wkunlun_utm.tif' # slave dem
path_l8img = 'data/rsimg/l8_kunlun_20200914.tif'
path_water_jrc = 'data/water_jrc/wkl_water_jrc_utm.tif' # jrc water map for water mask
path_rgi_1305 = 'data/rgi60-wkunlun/rgi60_1305.gpkg' # rgi glacier data for glacier mask
srtm, srtm_info = readTiff(path_srtm) # master dem
tandem, tandem_info = readTiff(path_tandem) # slave dem
l8_img, l8_img_info = readTiff(path_l8img)
water_jrc, water_jrc_info = readTiff(path_water_jrc)
print('srtm shape:', srtm.shape, 'extent:', srtm_info['geoextent'])
print('tandem shape:', tandem.shape, 'extent', tandem_info['geoextent'])
print('water_jrc shape:', water_jrc.shape, 'extent', water_jrc_info['geoextent'])
### Image alignment for the tandem data.
tandem_align = crop_to_extent(path_img=path_tandem, \
extent=srtm_info['geoextent'], size_target=srtm.shape)
print('aligned tandem shape:', tandem_align.shape)
```
### Check dem image
```
plt.figure(figsize=(15,5))
plt.subplot(1,3,1); imgShow(l8_img); plt.title('rs image')
plt.subplot(1,3,2); plt.imshow(srtm, vmin=2000, vmax=7000); plt.title('srtm (master)')
plt.subplot(1,3,3); plt.imshow(tandem_align, vmin=2000, vmax=7000); plt.title('aligned tandem (slave)')
```
### **1. Generate mask image.**
```
### -- 2.1. water mask
water_jrc_crop = crop_to_extent(path_img=path_water_jrc, \
extent=srtm_info['geoextent'], size_target=srtm.shape)
water_jrc_crop = np.ma.masked_where(water_jrc_crop>50, water_jrc_crop)
### -- 2.2. glacier mask
rgi60_mask = vec2mask(path_vec=path_rgi_1305, path_raster=path_srtm, path_save=None)
rgi60_mask = np.ma.masked_equal(rgi60_mask, 1)
### -- 2.3 merge the water and glacier masks
mask = np.logical_or.reduce([water_jrc_crop.mask, rgi60_mask.mask])
plt.imshow(mask); plt.title('water/glacier mask image')
print(mask.shape)
```
### **2. Co-registration to srtm-c dem by using open-source pybob code.**
##### Reference: Nuth and Kääb (2011) (https://www.the-cryosphere.net/5/271/2011/tc-5-271-2011.html)
```
srtm_geo = GeoImg(path_srtm) # master dem
tandem_geo = srtm_geo.copy(new_raster=tandem_align)
slope_geo = ct.get_slope(srtm_geo) # calculate slope from master DEM, scale is 111120 if using wgs84 projection
aspect_geo = ct.get_aspect(srtm_geo) # calculate aspect from master DEM
print(srtm_geo.img.shape, tandem_geo.img.shape)
init_dh_geo = tandem_geo.copy(new_raster=tandem_geo.img-srtm_geo.img) # initial dem difference (a new GeoImg dataset)
```
### **2.1. co-registration and obtain the adjust values**
```
## --- 1. copy the slave_dem as the medium processing dem
tandem_proc = tandem_geo.copy() # make a copy of the slave DEM
## --- 2. pre-processing: data mask by provided mask data and the calculated outlier values;
## xdata->masked aspect, ydata->masked dH, sdata->masked tan(a)
dH, xdata, ydata, sdata = ct.preprocess(stable_mask=mask, slope=slope_geo.img, \
aspect=aspect_geo.img, master=srtm_geo, slave=tandem_proc)
fig = ct.false_hillshade(dH, 'DEM difference before coregistration', clim=(-5, 5))
## --- 3. initial the shift values (will be updated during this process).
## --- 4. co-registration, obtain the adjust values.
xadj, yadj, zadj = ct.coreg_fitting(xdata, ydata, sdata, 'Iteration 1')
```
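The `coreg_fitting` step fits the Nuth and Kääb (2011) relation dH/tan(slope) = a*cos(b - aspect) + c over stable terrain, where a is the horizontal shift magnitude, b its direction, and c a vertical bias. Since a*cos(b - aspect) expands into terms linear in cos(aspect) and sin(aspect), the fit reduces to ordinary least squares. A minimal sketch on synthetic data (not pybob's actual implementation, which uses robust iterative fitting):

```python
import numpy as np

# Synthetic stable-terrain samples: terrain aspect (radians) and the
# slope-normalised elevation difference dH/tan(slope).
rng = np.random.default_rng(0)
aspect = rng.uniform(0, 2 * np.pi, 500)
a_true, b_true, c_true = 3.0, 0.8, -0.5       # shift magnitude, shift direction, bias
y = a_true * np.cos(b_true - aspect) + c_true

# a*cos(b - aspect) + c == (a*cos b)*cos(aspect) + (a*sin b)*sin(aspect) + c,
# which is linear in the unknowns, so least squares recovers them.
A = np.column_stack([np.cos(aspect), np.sin(aspect), np.ones_like(aspect)])
(p0, p1, p2), *_ = np.linalg.lstsq(A, y, rcond=None)
a_est, b_est, c_est = np.hypot(p0, p1), np.arctan2(p1, p0), p2
print(a_est, b_est, c_est)  # ~3.0, ~0.8, ~-0.5
```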
### **2.2. Rectify the original image with obtained shift values**
```
## update the shift values
x_shift = y_shift = z_shift = 0
x_shift += xadj; y_shift += yadj; z_shift += zadj
print('shift values (x,y,z):', x_shift, y_shift, z_shift)
## --- 1. rectify the x and y.
tandem_proc.shift(xadj, yadj) # rectify the slave dem in terms of x and y.
tandem_proc = tandem_proc.reproject(srtm_geo) # re-align the grid of the slave DEMs after shifting
## --- 2. rectify the z.
tandem_proc = tandem_proc.copy(new_raster=tandem_proc.img + zadj) # shift the DEM in the z direction
final_dh_geo = tandem_geo.copy(new_raster=tandem_proc.img - srtm_geo.img)
```
### **2.3. Co-registration with more iterations**
```
def dems_coreg(master_geo, slave_geo, mask_img, iteration=10):
slope_geo = ct.get_slope(master_geo) # calculate slope from master DEM
aspect_geo = ct.get_aspect(master_geo) # calculate aspect from master DEM
slave_proc = slave_geo.copy() # make a copy of the slave DEM
for i in range(iteration):
x_shift = y_shift = z_shift = 0
dH, xdata, ydata, sdata = ct.preprocess(stable_mask=mask_img, slope=slope_geo.img, \
aspect=aspect_geo.img, master=master_geo, slave=slave_proc)
## --- 1. calculate shift values.
i_iter = 'Iteration '+str(i)
xadj, yadj, zadj = ct.coreg_fitting(xdata, ydata, sdata, i_iter, plot=False)
x_shift += xadj; y_shift += yadj; z_shift += zadj # update shift value
## --- 2. rectify original dem.
slave_proc.shift(xadj, yadj) # rectify the slave dem in terms of x and y.
slave_proc = slave_proc.reproject(master_geo) # re-align the grid of the slave DEMs after shifting
slave_proc = slave_proc.copy(new_raster=slave_proc.img + zadj) # shift the DEM in the z direction
print('shift values in iteration '+str(i)+' (x,y,z):', x_shift, y_shift, z_shift)
return slave_proc
tandem_coreg = dems_coreg(master_geo=srtm_geo, slave_geo=tandem_geo, mask_img=mask, iteration=10)
init_dh_geo = tandem_geo.copy(new_raster=tandem_geo.img-srtm_geo.img) # initial dem difference (GeoImg dataset)
final_dh_geo = tandem_geo.copy(new_raster=tandem_coreg.img-srtm_geo.img) # final dem difference
```
### **Visualize the DEM difference before and after co-registration**
```
fig1 = plt.figure(figsize=(14,4))
plt.subplot(1,2,1)
plt.imshow(init_dh_geo.img, vmin=-10, vmax=10, cmap='RdYlBu')
cb = plt.colorbar(fraction=0.03, pad=0.02);
# cb.set_label('elevation difference (m)')
plt.title('before co-registration')
plt.subplot(1,2,2)
plt.imshow(final_dh_geo.img, vmin=-10, vmax=10, cmap='RdYlBu')
cb = plt.colorbar(fraction=0.03, pad=0.02);
cb.set_label('elevation difference (m)')
plt.title('after co-registration')
```
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from sklearn.metrics import confusion_matrix,balanced_accuracy_score,roc_auc_score,roc_curve
from p4tools import io
ResultsPath = '../../Data/SummaryResults/'
FiguresPath = '../../Data/Figures/'
if not os.path.isdir(FiguresPath):
os.mkdir(FiguresPath)
NumRepeats=3
ResultsList=[]
for Rep in range(NumRepeats):
ResultsList.append(pd.read_csv(ResultsPath+'TileClassifier_LORO_final_repeat'+str(Rep)+'.csv'))
Y_true=ResultsList[-1]['GroundTruth'].astype('uint8')
Y_pred=ResultsList[0]['ClassifierConf'].values
for Rep in range(1,NumRepeats):
Y_pred=Y_pred+ResultsList[Rep]['ClassifierConf'].values
Y_pred=Y_pred/NumRepeats
Results_df = ResultsList[-1]
Results_df['ClassifierConf']=Y_pred
Recall95PC_Threshold=0.24
AUC = roc_auc_score(Y_true, Y_pred)
conf_matrix = confusion_matrix(Y_true,Y_pred>Recall95PC_Threshold,labels=[0,1])
Sensitivity = conf_matrix[1,1]/(conf_matrix[1,0]+conf_matrix[1,1])
Specificity = conf_matrix[0,0]/(conf_matrix[0,0]+conf_matrix[0,1])
Precision = conf_matrix[1,1]/(conf_matrix[1,1]+conf_matrix[0,1])
Balanced_accuracy = balanced_accuracy_score(Y_true, Y_pred>Recall95PC_Threshold)
print('Number of tiles classified in Leave-One-Region-Out Cross-Validation= ',Results_df.shape[0])
print('')
print('Confusion matrix = ')
print(conf_matrix)
print('')
print('Sensitivity=',round(100*Sensitivity,2),'%')
print('Specificity=',round(100*Specificity,2),'%')
print('Precision=',round(100*Precision,2),'%')
print('AUC=',round(AUC,3))
print('Balanced Accuracy =',round(100*Balanced_accuracy,2),'%')
fig = plt.figure(figsize=(10,10))
fpr, tpr, thresholds = roc_curve(Y_true, Y_pred)
plt.plot(1-fpr,tpr,linewidth=3)
plt.xlabel('Specificity',fontsize=20)
plt.ylabel('Recall',fontsize=20)
plt.plot(Specificity,Sensitivity,'og',linewidth=30,markersize=20)
plt.text(0.5, 0.5, 'AUC='+str(round(AUC,2)), fontsize=20)
plt.text(0.2, 0.9, '95% recall point at', fontsize=20,color='green')
plt.text(0.2, 0.85, '54% specificity', fontsize=20,color='green')
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
fig.tight_layout()
plt.savefig(FiguresPath+'Figure13.pdf')
plt.show()
#regions
region_names_df = io.get_region_names()
region_names_df = region_names_df.set_index('obsid')
region_names_df.at['ESP_012620_0975','roi_name'] = 'Buffalo'
region_names_df.at['ESP_012277_0975','roi_name'] = 'Buffalo'
region_names_df.at['ESP_012348_0975','roi_name'] = 'Taichung'
#other meta data
ImageResults_df = io.get_meta_data()
ImageResults_df = ImageResults_df.set_index('OBSERVATION_ID')
ImageResults_df = pd.concat([ImageResults_df, region_names_df], axis=1, sort=False)
ImageResults_df=ImageResults_df.dropna()
UniqueP4Regions = ImageResults_df['roi_name'].unique()
print("Number of P4 regions = ",len(UniqueP4Regions))
BAs=[]
for ToLeaveOut in UniqueP4Regions:
This_df = Results_df[Results_df['Region']==ToLeaveOut]
y_true = This_df['GroundTruth'].values
y_pred = This_df['ClassifierConf'].values
Balanced_accuracy_cl = balanced_accuracy_score(y_true, y_pred>0.5)
BAs.append(Balanced_accuracy_cl)
regions_sorted=[x for y, x in sorted(zip(BAs,UniqueP4Regions))]
fig=plt.figure(figsize=(15,15))
plt.bar(regions_sorted,100*np.array(sorted(BAs)))
ax=fig.gca()
ax.set_xticks(np.arange(0,len(regions_sorted)))
ax.set_xticklabels(regions_sorted,rotation=90,fontsize=20)
ax.set_ylabel('Balanced Accuracy (%)',fontsize=30)
matplotlib.rc('xtick', labelsize=30)
matplotlib.rc('ytick', labelsize=30)
fig.tight_layout()
plt.savefig(FiguresPath+'Figure14.pdf')
plt.show()
```
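The notebook hardcodes `Recall95PC_Threshold=0.24`. As an illustration (not the original derivation), the threshold that achieves a target recall can be recovered from the positive-class scores themselves, since recall only changes at those values:

```python
import numpy as np

def threshold_for_recall(y_true, scores, target=0.95):
    """(Approximately) the largest threshold t with mean(scores[y_true==1] > t) >= target."""
    pos = np.sort(scores[y_true == 1])      # positive-class scores, ascending
    n = len(pos)
    k = int(np.ceil(target * n))            # need at least k positives above t
    return pos[n - k] - 1e-12               # just below the k-th highest positive score

y = np.array([0, 0, 1, 1, 1, 1])
s = np.array([0.1, 0.4, 0.2, 0.6, 0.7, 0.9])
t = threshold_for_recall(y, s, target=0.75)
recall = np.mean(s[y == 1] > t)
print(round(t, 3), recall)  # 0.6 0.75
```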
<a href="https://colab.research.google.com/github/JSJeong-me/CNN-Cats-Dogs/blob/main/5_aug_pretrained.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
%matplotlib inline
!ls -l
!cp ./drive/MyDrive/training_data.zip .
!unzip training_data.zip
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (150, 150)
train_files = glob.glob('training_data/*')
train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files]
train_imgs = np.array(train_imgs)
train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files]
validation_files = glob.glob('validation_data/*')
validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files]
validation_imgs = np.array(validation_imgs)
validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files]
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
train_imgs_scaled = train_imgs.astype('float32')
validation_imgs_scaled = validation_imgs.astype('float32')
train_imgs_scaled /= 255
validation_imgs_scaled /= 255
batch_size = 50
num_classes = 2
epochs = 150
input_shape = (150, 150, 3)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
# encode the cat/dog labels as integers
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
print(train_labels[0:5], train_labels_enc[0:5])
train_datagen = ImageDataGenerator( zoom_range=0.3, rotation_range=50, # rescale=1./255,
width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2,
horizontal_flip=True, fill_mode='nearest')
val_datagen = ImageDataGenerator() # rescale=1./255
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
from tensorflow.keras.applications import vgg16
from tensorflow.keras.models import Model
import tensorflow.keras
vgg = vgg16.VGG16(include_top=False, weights='imagenet',
input_shape=input_shape)
output = vgg.layers[-1].output
output = tensorflow.keras.layers.Flatten()(output)
vgg_model = Model(vgg.input, output)
vgg_model.trainable = False
for layer in vgg_model.layers:
layer.trainable = False
vgg_model.summary()
vgg_model.trainable = True
set_trainable = False
for layer in vgg_model.layers:
if layer.name in ['block5_conv1', 'block4_conv1']:
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
print("Trainable layers:", vgg_model.trainable_weights)
import pandas as pd
pd.set_option('max_colwidth', None)
layers = [(layer, layer.name, layer.trainable) for layer in vgg_model.layers]
pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable'])
#print("Trainable layers:", vgg_model.trainable_weights)
bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1])
print(bottleneck_feature_example.shape)
plt.imshow(bottleneck_feature_example[0][:,:,0])
def get_bottleneck_features(model, input_imgs):
features = model.predict(input_imgs, verbose=0)
return features
train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled)
validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled)
print('Train Bottleneck Features:', train_features_vgg.shape,
'\tValidation Bottleneck Features:', validation_features_vgg.shape)
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer
from tensorflow.keras.models import Sequential
from tensorflow.keras import optimizers
model = Sequential()
model.add(vgg_model)
model.add(Dense(512, activation='relu'))  # input shape is inferred from vgg_model's flattened output
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(learning_rate=2e-5),
metrics=['accuracy'])
model.summary()
history = model.fit(train_generator, epochs=30,
validation_data=val_generator, verbose=1)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = list(range(1,31))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 31, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 31, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
model.save('4-2-augpretrained_cnn.h5')
```
<img src="./img/Circuit.png" style="width: 50%; height: 50%">
<img src="./img/treqs.png" style="width: 50%; height: 50%">
$$ CPE_{x} = \frac{1}{Q_x(\imath\omega)^{p_x}}, \ x=H, M, L, E $$
$$ R(\omega) = R_{\infty}+\sum_{x=H, M, L, E}\frac{1}{\frac{1}{R_x}+\frac{1}{CPE_x(\omega)}}$$
$$ R(\omega) = R_{\infty}+\sum_{x=M, L}\frac{1}{\frac{1}{R_x}+\frac{1}{CPE_x(\omega)}}$$
- H: High frequency
- M: Middle frequency
- L: Low frequency
- E: Electrode effect
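As a quick sanity check, the parallel R‖CPE response defined above can be evaluated numerically. The following is a minimal sketch in which `R_inf` and the `(Rx, Qx, px)` branch triples are purely hypothetical illustrative values, not the fitted parameters used later in this notebook:

```python
import numpy as np

def cpe(Q, p, freq):
    """Constant-phase-element impedance: 1 / (Q * (i*omega)^p)."""
    omega = 2 * np.pi * freq
    return 1.0 / (Q * (1j * omega) ** p)

def r_total(freq, R_inf, branches):
    """R(omega) = R_inf + sum over branches of 1 / (1/Rx + 1/CPEx)."""
    out = np.full_like(freq, R_inf, dtype=complex)
    for R_x, Q_x, p_x in branches:
        out += 1.0 / (1.0 / R_x + 1.0 / cpe(Q_x, p_x, freq))
    return out

freq = np.logspace(-2, 6, 9)
R = r_total(freq, R_inf=100.0, branches=[(500.0, 1e-4, 0.7), (200.0, 1e-2, 0.5)])
# At low frequency each CPE impedance is large, so Re(R) approaches R_inf + sum(Rx);
# at high frequency each CPE shorts its branch, so Re(R) approaches R_inf.
print(R.real[0], R.real[-1])
```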
```
%pylab inline
import pandas as pd
data = pd.read_excel("../data/Kimberlite-2015-07-17.xls")
facies_mask = data['Facies'].isin(['XVK', 'PK', 'HK', 'VK'])
columns = ["Facies", "Peregrine ID", "(Latitude)", "(Longitude)", "Depth (m)",
           "Mag Susc [SI]", "Resistivity [Ohm.m]", "Geometric Factor [m]",
           "Sat Geometric Dens [g/cc]", "Chargeability [ms]",
           "Rinf", "Ro", "Rh", "Qh", "Ph", "pRh", "pQh",
           "Rm", "Qm", "Pm", "pRm", "pQm",
           "Rl", "Ql", "Pl", "pRl", "pQl",
           "Re", "Qe", "Pe-f", "Pe-i"]
data_active = data.loc[facies_mask & data['Rinf'].notnull(), columns]
data_active
def CPEfun(Rx, Qx, px, freq):
out = np.zeros_like(freq, dtype=complex128)
out = 1./(1./Rx + Qx*(np.pi*2*freq*1j)**px)
return out
def CPEfunElec(Rx, Qx, pex, pix, freq):
out = np.zeros_like(freq, dtype=complex128)
out = 1./(1./Rx + (1j)**pix*Qx*(np.pi*2*freq)**pex)
return out
def CPEfunSeries(Rx, Qx, px, freq):
out = np.zeros_like(freq, dtype=complex128)
out = Rx + 1./(Qx*(np.pi*2*freq*1j)**px)
return out
f0peak = lambda R, Q, P: (R*Q)**(-1./P)/np.pi/2.
taupeak = lambda R, Q, P: (R*Q)**(1./P)
rhoinf = lambda rhom, rhol, rho0: 1./(1./rho0+1./rhom+1./rhol)
charg = lambda rhoinf, rho0: (rho0-rhoinf) / rhoinf
def TKCColeColeParallel(frequency, PID, data):
Rh, Qh, Ph = data[data["Peregrine ID"]==PID]['pRh'].values[0], data[data["Peregrine ID"]==PID]['pQh'].values[0], data[data["Peregrine ID"]==PID]['Ph'].values[0]
Rm, Qm, Pm = data[data["Peregrine ID"]==PID]['pRm'].values[0], data[data["Peregrine ID"]==PID]['pQm'].values[0], data[data["Peregrine ID"]==PID]['Pm'].values[0]
Rl, Ql, Pl = data[data["Peregrine ID"]==PID]['pRl'].values[0], data[data["Peregrine ID"]==PID]['pQl'].values[0], data[data["Peregrine ID"]==PID]['Pl'].values[0]
geom = data[data["Peregrine ID"]==PID]['Geometric Factor [m]'].values[0]
fpeakm = f0peak(Rm, Qm, Pm)
fpeakl = f0peak(Rl, Ql, Pl)
# geom = 1.
rhom = CPEfunSeries(Rm, Qm, Pm, frequency)*geom
rhol = CPEfunSeries(Rl, Ql, Pl, frequency)*geom
rho0 = data[data["Peregrine ID"]==PID]['Ro'].values[0]*geom
rho = 1./(1./rho0+1./rhol)
m = (rho.real[0]-rho.real[-1])/rho.real[0]
rhoinf = rho0*(1.-m)
fig, ax = plt.subplots(1, 2, figsize = (15, 3))
ax[0].semilogx(frequency, rho.real, 'k-', lw=2)
ax1 = ax[0].twinx()
ax1.semilogx(frequency, (rho.imag), 'k--', lw=2)
ax1.invert_yaxis()
ax[0].grid(True)
ax[1].plot(rho.real, rho.imag, 'k-')
ax[1].invert_yaxis()
ax[1].grid(True)
print(data[data["Peregrine ID"]==PID]['Facies'].values[0], PID)
print("R0 = ", rho0)
print("Rinf = ", rhoinf)
print("Chargeability = ", m)
print("Taum = ", 1./fpeakm)
print("Taul = ", 1./fpeakl)
print(Pl, Ql, Rl, geom)
return
data_active[data_active['Facies']=='PK']
frequency = np.logspace(-4, 8, 211)
TKCColeColeParallel(frequency, "K1P-0825", data_active)
frequency = np.logspace(-4, 8, 211)
TKCColeColeParallel(frequency, "K1P-0807", data_active)
data_active[data_active['Facies']=='XVK']
TKCColeColeParallel(frequency, "K2P-0031", data_active)
data_active[data_active['Facies']=='HK']
TKCColeColeParallel(frequency, "K1P-0591", data_active)
data_active[data_active['Facies']=='VK']
TKCColeColeParallel(frequency, "K1P-0589", data_active)
RlPK = data_active[data_active['Facies']=='PK']["pRl"].values[:]
QlPK = data_active[data_active['Facies']=='PK']["pQl"].values[:]
PlPK = data_active[data_active['Facies']=='PK']["Pl"].values[:]
fpeakPK = f0peak(RlPK, QlPK, PlPK)
rhoinfPK = rhoinf(data_active[data_active['Facies']=='PK']["Ro"].values[:], data_active[data_active['Facies']=='PK']["pRm"].values[:],data_active[data_active['Facies']=='PK']["pRl"].values[:])
mPK = charg(rhoinfPK, data_active[data_active['Facies']=='PK']["Ro"].values[:])
RlXVK = data_active[data_active['Facies']=='XVK']["pRl"].values[:]
QlXVK = data_active[data_active['Facies']=='XVK']["pQl"].values[:]
PlXVK = data_active[data_active['Facies']=='XVK']["Pl"].values[:]
fpeakXVK = f0peak(RlXVK, QlXVK, PlXVK)
rhoinfXVK = rhoinf(data_active[data_active['Facies']=='XVK']["Ro"].values[:], data_active[data_active['Facies']=='XVK']["pRm"].values[:],data_active[data_active['Facies']=='XVK']["pRl"].values[:])
mXVK = charg(rhoinfXVK, data_active[data_active['Facies']=='XVK']["Ro"].values[:])
RlVK = data_active[data_active['Facies']=='VK']["pRl"].values[:]
QlVK = data_active[data_active['Facies']=='VK']["pQl"].values[:]
PlVK = data_active[data_active['Facies']=='VK']["Pl"].values[:]
fpeakVK = f0peak(RlVK, QlVK, PlVK)
rhoinfVK = rhoinf(data_active[data_active['Facies']=='VK']["Ro"].values[:], data_active[data_active['Facies']=='VK']["pRm"].values[:],data_active[data_active['Facies']=='VK']["pRl"].values[:])
mVK = charg(rhoinfVK, data_active[data_active['Facies']=='VK']["Ro"].values[:])
RlHK = data_active[data_active['Facies']=='HK']["pRl"].values[:]
QlHK = data_active[data_active['Facies']=='HK']["pQl"].values[:]
PlHK = data_active[data_active['Facies']=='HK']["Pl"].values[:]
fpeakHK = f0peak(RlHK, QlHK, PlHK)
rhoinfHK = rhoinf(data_active[data_active['Facies']=='HK']["Ro"].values[:], data_active[data_active['Facies']=='HK']["pRm"].values[:],data_active[data_active['Facies']=='HK']["pRl"].values[:])
mHK = charg(rhoinfHK, data_active[data_active['Facies']=='HK']["Ro"].values[:])
import matplotlib as mpl
mpl.rcParams["font.size"] = 16
mpl.rcParams["text.usetex"] = True
figsize(5,5)
indactHK = 1./fpeakHK > 0.1
plt.semilogx(1./fpeakXVK*1e6, mXVK, 'kx', ms = 15, lw=3)
plt.semilogx(1./fpeakVK*1e6, mVK, 'bx', ms = 15)
plt.semilogx(1./fpeakHK[~indactHK]*1e6, mHK[~indactHK], 'mx', ms = 15)
plt.semilogx(1./fpeakPK*1e6, mPK, 'rx', ms = 15)
ylim(-0.1, 1.)
xlim(1e-5*1e6, 1e-2*1e6)
plt.grid(True)
plt.ylabel("Chargeability ")
plt.xlabel("Time constant (micro-s)")
plt.legend(("XVK", "VK", "HK", "PK"), bbox_to_anchor=(1.5, 1))
mus = 1e6
plt.semilogx(1./fpeakXVK*mus, PlXVK,'ko')
plt.semilogx(1./fpeakVK*mus, PlVK, 'bo')
plt.semilogx(1./fpeakHK[~indactHK]*mus, PlHK[~indactHK], 'bo')
plt.semilogx(1./fpeakPK*mus, PlPK, 'mo')
ylim(0., 1.)
xlim(1e-5*mus, 1e-2*mus)
plt.grid(True)
# plt.legend(("XVK", "VK", "HK", "PK"), loc=4)
plt.ylabel("Frequency dependency ")
plt.xlabel("Time constant (micro-s)")
figsize(5,5)
plt.semilogy(data_active[data_active["Facies"]=="XVK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="XVK"]["Resistivity [Ohm.m]"], 'ko')
plt.semilogy(data_active[data_active["Facies"]=="VK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="VK"]["Resistivity [Ohm.m]"], 'bo')
plt.semilogy(data_active[data_active["Facies"]=="HK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="HK"]["Resistivity [Ohm.m]"], 'mo')
plt.semilogy(data_active[data_active["Facies"]=="PK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="PK"]["Resistivity [Ohm.m]"], 'ro')
plt.grid(True)
plt.legend(("XVK", "VK", "HK", "PK"), loc=4)
plt.xlabel("Susceptibility (SI)")
plt.ylabel("Resistivity (ohm-m)")
figsize(5,5)
plt.plot(data_active[data_active["Facies"]=="XVK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="XVK"]["Sat Geometric Dens [g/cc]"], 'ko')
plt.plot(data_active[data_active["Facies"]=="VK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="VK"]["Sat Geometric Dens [g/cc]"], 'bo')
plt.plot(data_active[data_active["Facies"]=="HK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="HK"]["Sat Geometric Dens [g/cc]"], 'ro')
plt.plot(data_active[data_active["Facies"]=="PK"]["Mag Susc [SI]"], data_active[data_active["Facies"]=="PK"]["Sat Geometric Dens [g/cc]"], 'go')
plt.grid(True)
plt.legend(("XVK", "VK", "HK", "PK"), loc=4)
plt.xlabel("Susceptibility (SI)")
plt.ylabel("Density (g/cc)")
figsize(5,5)
```
# Nonlinear Equations
We want to find a root of the nonlinear function $f$ using different methods.
1. Bisection method
2. Newton method
3. Chord method
4. Secant method
5. Fixed point iterations
```
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
import sympy as sym
t = sym.symbols('t')
f_sym = t/8. * (63.*t**4 - 70.*t**2. +15.) # Legendre polynomial of order 5
f_prime_sym = sym.diff(f_sym,t)
f = sym.lambdify(t, f_sym, 'numpy')
f_prime = sym.lambdify(t,f_prime_sym, 'numpy')
phi = lambda x : 63./70.*x**3 + 15./(70.*x)
#phi = lambda x : 70.0/15.0*x**3 - 63.0/15.0*x**5
#phi = lambda x : sqrt((63.*x**4 + 15.0)/70.)
# Let's plot
n = 1025
x = linspace(-1,1,n)
c = zeros_like(x)
_ = plot(x,f(x))
_ = plot(x,c)
_ = grid()
# Initial data for the various algorithms
# interval in which we seek the solution
a = 0.7
b = 1.
# initial points
x0 = (a+b)/2.0
x00 = b
# stopping criteria
eps = 1e-10
n_max = 1000
```
## Bisection method
$$
x^k = \frac{a^k+b^k}{2}
$$
```
if (f(a_k) * f(x_k)) < 0:
b_k1 = x_k
a_k1 = a_k
else:
a_k1 = x_k
b_k1 = b_k
```
```
def bisect(f,a,b,eps,n_max):
assert(f(a) * f(b) < 0)
a_new = a
b_new = b
x = mean([a,b])
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
if ( f(a_new) * f(x) < 0 ):
# root in (a_new,x)
b_new = x
else:
# root in (x,b_new)
a_new = x
x_new = mean([a_new,b_new])
#err = 0.5 *(b_new -a_new)
err = abs(f(x_new))
#err = abs(x-x_new)
errors.append(err)
x = x_new
it += 1
semilogy(errors)
print(it)
print(x)
print(err)
return errors
errors_bisect = bisect(f,a,b,eps,n_max)
# is the number of iterations coherent with the theoretical estimation?
```
In order to derive other methods for solving non-linear equations, let's compute the Taylor series of $f$ around $x^k$ up to the first order
$$
f(x) \simeq f(x^k) + (x-x^k)f^{\prime}(x^k)
$$
which, imposing $f(x) = 0$ and solving for $x$, suggests the following iterative scheme
$$
x^{k+1} = x^k - \frac{f(x^k)}{f^{\prime}(x^k)}
$$
The following methods are obtained applying the above scheme where
$$
f^{\prime}(x^k) \approx q^k
$$
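Before specializing $q^k$ to the individual methods, the linearization above can be verified symbolically. Below is a small sketch using `sympy` (matching the symbolic setup at the top of this notebook), where the symbol `t_k` stands for the current iterate $x^k$: the update $x^k - f(x^k)/f^{\prime}(x^k)$ is exactly the root of the first-order expansion.

```python
import sympy as sym

t, tk = sym.symbols('t t_k')
f = t/8 * (63*t**4 - 70*t**2 + 15)   # same Legendre polynomial as above

# First-order Taylor expansion of f around t = t_k
linearization = f.subs(t, tk) + (t - tk) * sym.diff(f, t).subs(t, tk)

# The Newton update is the root of this linearization
newton_update = tk - f.subs(t, tk) / sym.diff(f, t).subs(t, tk)
residual = sym.simplify(linearization.subs(t, newton_update))
print(residual)  # -> 0
```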
## Newton's method
$$
q^k = f^{\prime}(x^k)
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
```
def newton(f,f_prime,x0,eps,n_max):
x_new = x0
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
x_new = x_new - (f(x_new)/f_prime(x_new))
err = abs(f(x_new))
errors.append(err)
it += 1
semilogy(errors)
print(it)
print(x_new)
print(err)
return errors
%time errors_newton = newton(f,f_prime,1.0,eps,n_max)
```
## Chord method
$$
q^k \equiv q = \frac{f(b)-f(a)}{b-a}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q}
$$
```
def chord(f,a,b,x0,eps,n_max):
x_new = x0
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
x_new = x_new - (f(x_new)/((f(b) - f(a)) / (b - a)))
err = abs(f(x_new))
errors.append(err)
it += 1
semilogy(errors)
print(it)
print(x_new)
print(err)
return errors
errors_chord = chord (f,a,b,x0,eps,n_max)
```
## Secant method
$$
q^k = \frac{f(x^k)-f(x^{k-1})}{x^k - x^{k-1}}
$$
$$
x^{k+1} = x^k - \frac{f(x^k)}{q^k}
$$
Note that this algorithm requires **two** initial points
```
def secant(f,x0,x00,eps,n_max):
xk = x00
x_new = x0
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
temp = x_new
x_new = x_new - (f(x_new)/((f(x_new)-f(xk))/(x_new - xk)))
xk = temp
err = abs(f(x_new))
errors.append(err)
it += 1
semilogy(errors)
print(it)
print(x_new)
print(err)
return errors
errors_secant = secant(f,x0,x00,eps,n_max)
```
## Fixed point iterations
$$
f(x)=0 \to x-\phi(x)=0
$$
$$
x^{k+1} = \phi(x^k)
$$
```
def fixed_point(phi,x0,eps,n_max):
x_new = x0
err = eps + 1.
errors = [err]
it = 0
while (err > eps and it < n_max):
x_new = phi(x_new)
err = abs(f(x_new))
errors.append(err)
it += 1
semilogy(errors)
print(it)
print(x_new)
print(err)
return errors
errors_fixed = fixed_point(phi,0.3,eps,n_max)
```
## Comparison
```
# plot the error convergence for the methods
loglog(errors_bisect, label='bisect')
loglog(errors_chord, label='chord')
loglog(errors_secant, label='secant')
loglog(errors_newton, label ='newton')
loglog(errors_fixed, label ='fixed')
_ = legend()
# Let's compare the scipy implementation of Newton's method with ours
import scipy.optimize as opt
%time opt.newton(f, 1.0, f_prime, tol = eps)
```
We see that our implementation is considerably slower than the `scipy` one
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-PY0101EN-SkillsNetwork/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Exception Handling**
Estimated time needed: **15** minutes
## Objectives
After completing this lab you will be able to:
* Understand exceptions
* Handle the exceptions
## Table of Contents
* What is an Exception?
* Exception Handling
***
## What is an Exception?
In this section you will learn about what an exception is and see examples of them.
### Definition
An exception is an error that occurs during the execution of code. This error causes the code to raise an exception, and if the code is not prepared to handle it, execution will halt.
### Examples
Run each piece of code and observe the exception raised
```
1/0
```
<code>ZeroDivisionError</code> occurs when you try to divide by zero.
```
y = a + 5
```
<code>NameError</code> -- in this case, it means that you tried to use the variable a when it was not defined.
```
a = [1, 2, 3]
a[10]
```
<code>IndexError</code> -- in this case, it occurred because you tried to access data from a list using an index that does not exist for this list.
There are many more exceptions that are built into Python, here is a list of them [https://docs.python.org/3/library/exceptions.html](https://docs.python.org/3/library/exceptions.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01)
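The built-in exceptions are ordinary classes derived from <code>BaseException</code>, so you can enumerate them directly from the <code>builtins</code> module — a small sketch:

```python
import builtins

# Collect the names of all built-in exception classes
builtin_exceptions = sorted(
    name for name, obj in vars(builtins).items()
    if isinstance(obj, type) and issubclass(obj, BaseException)
)
print(len(builtin_exceptions))  # dozens of built-in exception types
print('ZeroDivisionError' in builtin_exceptions,
      'NameError' in builtin_exceptions,
      'IndexError' in builtin_exceptions)
```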
## Exception Handling
In this section you will learn how to handle exceptions. You will understand how to make your program perform specified tasks instead of halting code execution when an exception is encountered.
### Try Except
A <code>try except</code> will allow you to execute code that might raise an exception and in the case of any exception or a specific one we can handle or catch the exception and execute specific code. This will allow us to continue the execution of our program even if there is an exception.
Python tries to execute the code in the <code>try</code> block. In this case if there is any exception raised by the code in the <code>try</code> block, it will be caught and the code block in the <code>except</code> block will be executed. After that, the code that comes <em>after</em> the try except will be executed.
```
# potential code before try catch
try:
# code to try to execute
except:
# code to execute if there is an exception
# code that will still execute if there is an exception
```
### Try Except Example
In this example we are trying to divide a number given by the user, save the outcome in the variable <code>a</code>, and then we would like to print the result of the operation. When taking user input and dividing a number by it there are a couple of exceptions that can be raised. For example if we divide by zero. Try running the following block of code with <code>b</code> as a number. An exception will only be raised if <code>b</code> is zero.
```
a = 1
try:
b = int(input("Please enter a number to divide a"))
a = a/b
print("Success a=",a)
except:
print("There was an error")
```
### Try Except Specific
A specific <code>try except</code> allows you to catch certain exceptions and also execute certain code depending on the exception. This is useful if you do not want to deal with some exceptions and the execution should halt. It can also help you find errors in your code that you might not be aware of. Furthermore, it can help you differentiate responses to different exceptions. In this case, the code after the try except might not run depending on the error.
<b>Do not run, just to illustrate:</b>
```
# potential code before try catch
try:
# code to try to execute
except (ZeroDivisionError, NameError):
# code to execute if there is an exception of the given types
# code that will execute if there is no exception or one that we are handling
# potential code before try catch
try:
# code to try to execute
except ZeroDivisionError:
# code to execute if there is a ZeroDivisionError
except NameError:
# code to execute if there is a NameError
# code that will execute if there is no exception or one that we are handling
```
You can also have an empty <code>except</code> at the end to catch an unexpected exception:
<b>Do not run, just to illustrate:</b>
```
# potential code before try catch
try:
# code to try to execute
except ZeroDivisionError:
# code to execute if there is a ZeroDivisionError
except NameError:
# code to execute if there is a NameError
except:
# code to execute if there is any exception
# code that will execute if there is no exception or one that we are handling
```
### Try Except Specific Example
This is the same example as above, but now we will add differentiated messages depending on the exception, letting the user know what is wrong with the input.
```
a = 1
try:
b = int(input("Please enter a number to divide a"))
a = a/b
print("Success a=",a)
except ZeroDivisionError:
print("The number you provided can't divide 1 because it is 0")
except ValueError:
print("You did not provide a number")
except:
print("Something went wrong")
```
### Try Except Else and Finally
<code>else</code> allows one to check if there was no exception when executing the try block. This is useful when we want to execute something only if there were no errors.
<b>do not run, just to illustrate</b>
```
# potential code before try catch
try:
# code to try to execute
except ZeroDivisionError:
# code to execute if there is a ZeroDivisionError
except NameError:
# code to execute if there is a NameError
except:
# code to execute if there is any exception
else:
# code to execute if there is no exception
# code that will execute if there is no exception or one that we are handling
```
<code>finally</code> allows us to always execute something even if there is an exception or not. This is usually used to signify the end of the try except.
```
# potential code before try catch
try:
# code to try to execute
except ZeroDivisionError:
# code to execute if there is a ZeroDivisionError
except NameError:
# code to execute if there is a NameError
except:
# code to execute if there is any exception
else:
# code to execute if there is no exception
finally:
# code to execute at the end of the try except no matter what
# code that will execute if there is no exception or one that we are handling
```
### Try Except Else and Finally Example
You might have noticed that even if there is an error the value of <code>a</code> is always printed. Let's use the <code>else</code> and print the value of <code>a</code> only if there is no error.
```
a = 1
try:
b = int(input("Please enter a number to divide a"))
a = a/b
except ZeroDivisionError:
print("The number you provided can't divide 1 because it is 0")
except ValueError:
print("You did not provide a number")
except:
print("Something went wrong")
else:
print("success a=",a)
```
Now lets let the user know that we are done processing their answer. Using the <code>finally</code>, let's add a print.
```
a = 1
try:
b = int(input("Please enter a number to divide a"))
a = a/b
except ZeroDivisionError:
print("The number you provided can't divide 1 because it is 0")
except ValueError:
print("You did not provide a number")
except:
print("Something went wrong")
else:
print("success a=",a)
finally:
print("Processing Complete")
```
## Authors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkPY0101ENSkillsNetwork19487395-2021-01-01" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------- |
| 2020-09-02 | 2.0 | Simran | Template updates to the file |
| | | | |
| | | | |
<h3 align="center">© IBM Corporation 2020. All rights reserved.</h3>
```
# python the hardway https://learnpythonthehardway.org/book/index.html
# Exercise 1 - Hello world
import sys
print ("Hello Snake")
# Exercise 2 - simple math operations
print ("5 + 2 = ", 5 + 2)
print ("5 > 2 ? ", 5 > 2)
print ("7 / 4 = ", 7/4)
# print ("7 % 4 = ", 7%4)
# Exercise 3 - variables and names
my_name = 'Zed A. Shaw'
my_age = 35 # not a lie
my_height = 74 # inches
my_weight = 180 # lbs
my_eyes = 'Blue'
my_teeth = 'White'
my_hair = 'Brown'
print ("Let's talk about %s." % my_name)
print ("He's %d inches tall." % my_height)
print ("He's %d pounds heavy." % my_weight)
print ("Actually that's not too heavy.")
print ("He's got %s eyes and %s hair." % (my_eyes, my_hair))
print ("His teeth are usually %s depending on the coffee." % my_teeth)
# this line is tricky, try to get it exactly right
print ("If I add %d, %d, and %d I get %d." % ( my_age, my_height, my_weight, my_age + my_height + my_weight))
# Exercise 4 - input from console
age = input()
print ("What's your age ? ", age)
#Exercise 13 using parameters
from sys import argv
program_name = argv
print ("second = ", program_name)
# Exercise 15 - Read files
from sys import argv
file = open("t.txt","r")
for line in file:
print(line.rstrip())
file.close()
# Exercise 16 - Write file
from sys import argv
file = open("t.txt","r")
out = open("t2.txt","w")
for line in file:
out_line = line.rstrip()
print (out_line)
out.write(out_line)
out.close()
file.close()
#Exercise 18 - functions
def print_two(*args):
arg1, arg2 = args
print ("arg1: %r, arg2: %r" % (arg1, arg2))
def print_twoV2(arg1, arg2):
print ("arg1: %r, arg2: %r" % (arg1, arg2))
print_two("Zed", "Ned")
print_twoV2("Zed", "Ned")
# Exercise 19 - functions continue ...
def add(num1, num2):
return (num1 + num2)
print (add(1,2))
print (add(5+5,10+10))
# Exercise 28 - Booleans
True and True
False and True
"test" == "test"
3 == 3 and (not ("testing" == "testing" or "Python" == "Fun"))
#Exercise 29 - If conditions
people = 30
cars = 40
trucks = 15
if cars > people:
print ("We should take the cars.")
elif cars < people:
print ("We should not take the cars.")
else:
print ("We can't decide.")
# Exercise 32 - Loop and List
numbers = [1, 2, 3]
change = [1, 'pennies', 2, 'dimes', 3, 'quarters']
for num in numbers:
print (num)
for i in change:
print("I got %r" % i)
# Exercise 33 - while loops
i = 0
numbers = []
while i < 6:
print ("At the top i is %d" % i)
numbers.append(i)
i = i + 1
print ("Numbers now: ", numbers)
print ("At the bottom i is %d" % i)
print ("The numbers: ")
for num in numbers:
print (num)
# Exercise 34 - access list elems
animals = ['bear', "wolf"]
animals[1]
# Exercise 39 - Dictionaries
stuff = {'name': 'Nelson', 'age' : 33}
print (stuff['name'])
# create a mapping of state to abbreviation
states = {
'Oregon': 'OR',
'Florida': 'FL',
'California': 'CA',
'New York': 'NY',
'Michigan': 'MI'
}
print (states)
# create a basic set of states and some cities in them
cities = {
'CA': 'San Francisco',
'MI': 'Detroit',
'FL': 'Jacksonville'
}
print ("Testing ....", cities[states['California']])
# print every state abbreviation
for state, abbrev in states.items():
print ("%s is abbreviated %s" % (state, abbrev))
# Exercise 40 - Object Oriented Programming
class Song(object):
def __init__(self,lyrics):
self.lyrics = lyrics
def sing_me_a_song(self):
for line in self.lyrics:
print(line)
happy_bday = Song(["tada ttttt tada ..."])
happy_bday.sing_me_a_song()
## Animal is-a object (yes, sort of confusing) look at the extra credit
class Animal(object):
pass
## ??
class Dog(Animal):
def __init__(self, name):
## ??
self.name = name
## ??
class Cat(Animal):
def __init__(self, name):
## ??
self.name = name
## ??
class Person(object):
def __init__(self, name):
## ??
self.name = name
## Person has-a pet of some kind
self.pet = None
## ??
class Employee(Person):
def __init__(self, name, salary):
## ?? hmm what is this strange magic?
super(Employee, self).__init__(name)
## ??
self.salary = salary
## ??
class Fish(object):
pass
## ??
class Salmon(Fish):
pass
## ??
class Halibut(Fish):
pass
## rover is-a Dog
rover = Dog("Rover")
## ??
satan = Cat("Satan")
## ??
mary = Person("Mary")
## ??
mary.pet = satan
## ??
frank = Employee("Frank", 120000)
## ??
frank.pet = rover
## ??
flipper = Fish()
## ??
crouse = Salmon()
## ??
harry = Halibut()
# Exercise 44 - Inheritance vs Composition
#Implicit ineritance
class Parent(object):
def implicit(self):
print ("PARENT implicit()")
class Child(Parent):
pass
dad = Parent()
son = Child()
dad.implicit()
son.implicit()
#Override explicit
class Parent(object):
def override(self):
print ("PARENT override()")
class Child(Parent):
def override(self):
print ("CHILD override()")
dad = Parent()
son = Child()
dad.override()
son.override()
# Altered
class Parent(object):
def altered(self):
print ("PARENT altered()")
class Child(Parent):
def altered(self):
print ("CHILD, BEFORE PARENT altered()")
super(Child, self).altered()
print ("CHILD, AFTER PARENT altered()")
dad = Parent()
son = Child()
dad.altered()
son.altered()
# Exercise 47 - Automated Testing
from nose.tools import *
class Room(object):
def __init__(self, name, description):
self.name = name
self.description = description
self.paths = {}
def go(self, direction):
return self.paths.get(direction, None)
def add_paths(self, paths):
self.paths.update(paths)
def test_room():
gold = Room("GoldRoom",
"""This room has gold in it you can grab. There's a
door to the north.""")
assert_equal(gold.name, "GoldRoom")
assert_equal(gold.paths, {})
```
<a href="https://colab.research.google.com/github/codeforhk/python_course/blob/master/py_class_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img src="https://www.codefor.hk/wp-content/themes/DC_CUSTOM_THEME/img/logo-code-for-hk-logo.svg" height="150" width="150" align="center"/>
<h1><center>Code For Hong Kong - Python class 1</center></h1>
<h6><center>Written by Patrick Leung</center></h6>
# 0.0.0 Introduction
## 0.1.0 Course structure
### 0.1.1 Overview
- You are expected to attend >75% of all classes in order to pass.
- There will be a take home exercise after every class for you to complete during the week.
- There will be an office hour on Slack on Wednesday 7pm - 8pm, where our instructor will stand by to answer your questions. (At other times during the week, you can also drop questions in the Slack channel.)
- There will be in-class tests & challenges in every class, and there will be a final exam at the end of the course.
- Your certificate will be graded as Distinction / Merit / Pass.
### 0.1.2 Course syllabus
- Class 1: Basic Python Operations I (Variables, data structure, loops)
- Class 2: Basic Python Operations II (Functions, libraries)
- Class 3: Practical application of Python I (Document processing + Web scraping)
- Class 4: Practical application of Python II ( More examples + Exam)
### 0.1.3 Install Anaconda
First of all, you need to install Anaconda for this course. You will need the Python 3.7 version.
Anaconda
https://www.anaconda.com/distribution/#windows
Anaconda is a python distribution. It aims to provide everything you need for python & data science "out of the box".
It includes:
- The core python language
- 100+ python "packages" (libraries)
- Spyder (IDE/editor - like pycharm) and Jupyter
For Windows users, please install from this guide:
https://www.datacamp.com/community/tutorials/installing-anaconda-windows
For Mac users, please install from this guide:
https://www.datacamp.com/community/tutorials/installing-anaconda-mac-os-x
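Once Anaconda is installed, you can verify the interpreter from any notebook cell or Python prompt. A minimal check (this course targets Python 3.7, but any Python 3.x confirms the installation works):

```python
import sys

# Print the full version string of the running interpreter
print(sys.version)
assert sys.version_info.major == 3, "this course uses Python 3"
```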
<img src="https://www.codefor.hk/wp-content/themes/DC_CUSTOM_THEME/img/logo-code-for-hk-logo.svg" height="150" width="150" align="center"/>
# 1.0.0 Background of python
## 1.1.0 What is python?
* Widely used high-level, interpreted programming language for general-purpose programming
* Created by Guido van Rossum and first released in 1991.
* Python has a design philosophy that emphasizes code readability, allowing programmers to express concepts in fewer lines of code than languages such as C++ or Java.
* The language provides constructs intended to enable writing clear programs on both a small and large scale.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/66/Guido_van_Rossum_OSCON_2006.jpg/440px-Guido_van_Rossum_OSCON_2006.jpg" />
### 1.1.1 What is a programming language?
<img src="http://cdn.osxdaily.com/wp-content/uploads/2011/05/top-command-sorted-by-cpu-use-610x383.jpg" />
### 1.1.2 Why do we need a computer to help out?
<img src="http://img.picturequotes.com/2/509/508709/one-good-thing-about-my-computer-it-never-asks-why-quote-1.jpg" />
## 1.2.0 Why Python?
- Python is very popular - meaning a lot of the functionality is already written as packages, and you can reuse other people's code for free!
- Python is very beginner friendly
- Python is powerful together with machine learning
https://medium.freecodecamp.org/what-can-you-do-with-python-the-3-main-applications-518db9a68a78
<img src="https://cdn-images-1.medium.com/max/1600/1*_R4CyVH0DSXkJsoRk-Px_Q.png" />
### 1.2.1 Python is a very popular language
https://www.economist.com/graphic-detail/2018/07/26/python-is-becoming-the-worlds-most-popular-coding-language
<img src="https://www.economist.com/sites/default/files/imagecache/1280-width/20180728_WOC883.png" />
<img src="https://www.dotnetlanguages.net/wp-content/uploads/2018/05/Most-popular-programming-languages.png" />
### 1.2.2 Python has a simple & elegant syntax
Python is a very clean & elegant language. Look how easy it is to write a program!
<img src="https://img.devrant.io/devrant/rant/r_672680_SGP4G.jpg"/>
Python is meant to be a very elegant language. It even includes a poem built into it. Try to import your first library.
Try pressing "shift+enter" after typing "import this" in the box below
```
import this
```
### 1.2.3 Python is good for machine learning
Python is a nice language for data analytics, data science and machine learning!
<img src="https://i.ytimg.com/vi/vISRn5qFrkM/maxresdefault.jpg" />
<img src="https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Cheat+Sheets/content_pythonfordatascience.png" />
### 1.2.4 Python 3 or Python 2?
<img src="https://mk0learntocodew6bl5f.kinstacdn.com/wp-content/uploads/2014/06/python-2-vs-3-2018.png" />
https://learntocodewith.me/programming/python/python-2-vs-python-3/
## 1.3.0 Get familiar with the python tools
### 1.3.1 Jupyter notebook
Jupyter notebook was previously called IPython Notebook ( http://ipython.org/notebook.html ). It is a web-based extension of IPython. It is very powerful and convenient: it allows you to mix code, annotations, text and figures in a single interactive document.
IPython is included in all of the most popular python distributions, like Anaconda , or Canopy. If you have one of those distributions installed on your computer, you can already start using the notebook.
A simple youtube video to remind you how to open jupyter notebook
https://www.youtube.com/watch?v=MJKVuzYZvo0
<img src="https://i.ytimg.com/vi/-MyjG00la2k/maxresdefault.jpg" />
### 1.3.2 How to navigate within jupyter notebook?
Jupyter notebook is just a user-friendly way of coding in Python. All code is written in a "block"/"cell".
<img src="https://i.pinimg.com/originals/f5/7e/07/f57e074be4503a39f6d9d8d15f0e8aa5.png" />
#### Example 1.3.2 - navigating the jupyter notebook
1) Click on the "white edge" next to any box to select the box
```
#click me (the white edge)! Now you selected me
```
2) While selecting a box, click "a" to create a box above, and "b" to create a box below
```
# click me (on the white edge, not the text!), and try type a to create a box above, and b to create a box below
```
3) To delete a box, press "d" twice ("dd")
```
# Try select me, and click "dd" to delete me!!
```
4) Notice there are "text" boxes and "code" boxes? Jupyter notebook has two types of box - those with a grey background contain Python code, while those on a plain white background are "text" (Markdown) boxes. Every new box is a "code" box by default. To turn a code box into a text box, select the box and press "m".
```
# Change me to a text box!
```
5) All good? Now that you know how to create a box, you should learn how to execute code. There are two ways: press "shift+enter" to run the box and jump to the one below, or press "ctrl+enter" to run the box and stay on it.
```
# I am a code box. Try execute me to run the code!
1+1 # should be two!
print("one plus one is two! is two!")
```
6) Do you notice the number next to the box? It indicates the order in which boxes were run. If a box has no number, it has not been executed yet.
```
## The box "In []:" on the right should be empty now. Once you ran it, a number will appear in it.
##It means the box is executed!
```
7) Sometimes Python can hang - everything freezes and you can't execute any code, and the "In []" label becomes "In [*]". If that happens, try clicking "stop". If that doesn't work, open the "Kernel" menu and restart the kernel.
<img src="https://storage.googleapis.com/bwdb/codeforhk/python_course/image/Screenshot%202019-02-15%20at%2012.59.47%20AM.png" />
```
#This will freeze your python
# while True:
# #do nothing
# #This will force python to keep calculating without stopping
# 1+1
# # If you click stop, an error message will appear below
```
8) Text can be created with markdown, as below:
Markdown are useful for lists, markup and formatting
* *italics*
* **bold**
* `fixed_font`
###### You can add titles of various size using "#" at the beginning of the line
##### this has 5 "#####"
#### Exercise 1.3.2 - navigating the jupyter notebook
1) Try to create a box and execute 1 + 1
2) Try to create a box and create a mark down (a text box) that shows your name
```
1+1
```
Hello world
3) Try to delete the box below:
```
# delete me!!
```
4) Try to execute the box below without raising an error (hint: turn the sentence into a string or a comment first)
```
Try to execute me!
```
5) Try to execute the box to output the maths
1+1
### 1.3.3 How does native python works?
<img src="https://storage.googleapis.com/bwdb/codeforhk/python_course/image/Screenshot%202019-02-15%20at%209.04.05%20AM.png" />
#### Exercise 1.3.3 How does native python works
1) For Mac users, press "command+space" and search for "Terminal". Then type "python3" to open the Python interpreter.
2) For Windows users, you may need to add Python to your PATH environment variable first.
3) Try to run "helloworld.py" by running "python3 helloworld.py"
### 1.3.4 How does Colab works?
<img src="https://miro.medium.com/max/1086/1*g_x1-5iYRn-SmdVucceiWw.png" />
Colab is the "Google Docs" equivalent of Jupyter notebook: you can save your file on Google Drive, share it with your friends, and run it directly in the cloud.
https://colab.research.google.com/drive/1Fx592qrAnfzJiHZ1XFuqDMc98FNQzF-x
### 1.3.5 How does Github works?
Github is a "social network" for developers (in a way). It is also a centre for version control - you can think of it as the collaboration hub for developers
<img src="https://leanpub.com/site_images/git-flow/git-flow-nvie.png" />
## 1.4.0 How does a computer/programming work?
### 1.4.1 Programming is all about problem solving
<img src="https://cs50.harvard.edu/college/2018/fall/weeks/0/notes/input_output.png" />
### 1.4.2 Programming is to think about the steps


#### Example 1.4.2 Programming is to think about the steps
How do you give instructions to a computer to make a toast?
<img src="https://www.wikihow.com/images/thumb/4/45/Make-Buttered-Toast-Step-14-Version-2.jpg/aid599255-v4-728px-Make-Buttered-Toast-Step-14-Version-2.jpg" />
define bread, peanut butter, jelly
1) Get a piece of bread
2) Spread peanut butter on it
3) Get another piece of bread
4) Spread jelly on it
5) Put the two pieces of bread together
6) Eat it!
#### Exercise 1.4.2 Programming is to think about the steps
How do you give instructions to a computer to make tea?
<img src="http://m1.wish.co.uk/blog/wp-content/uploads/2014/10/lovetea.jpg" />
#### Example 1.4.3 - solve problems in block-wise/step-wise approach
To make this into a code format, try to solve this
https://studio.code.org/hoc/1
#### Exercise 1.4.3
Try to complete challenge 2-5
```
for i in range(2,6):
    print('https://studio.code.org/hoc/'+str(i))
```
## Test 1.0.0
https://goo.gl/forms/MNvu81mO1SOx7AVu2
<img src="https://www.codefor.hk/wp-content/themes/DC_CUSTOM_THEME/img/logo-code-for-hk-logo.svg" height="150" width="150" align="center"/>
# 2.0.0 Python basics
To start learning Python, you need to learn its syntax. You can think of syntax as the equivalent of grammar in a natural language - it is the set of rules for writing the language, so Python understands you correctly!
Python is an object oriented language - everything is stored as an object
<img src="http://www.lanl.gov/museum/events/calendar/2015/June/_images/dday.jpg"/>
## 2.1.0 Understanding Python as an Object Oriented Language
Everything in Python is an object. Still, this begs the question: what is an object? You can think of it like a variable in mathematics. You can assign any value to this variable. E.g., you can assign:
* x = 1
* cristano_ronaldo = 'a footballer'
* Ultimate_answer_for_everything = 42
Why do we want to do this? Because it is a clean and efficient way to manage a large amount of information, especially as a program grows very large. It is a bit like the naming system for military operations or typhoons. For example, it is much cleaner to say "D-Day" than "the Normandy landing on June 6, 1944".
* d_day = "normandy landing in June 6 1944"
* hato = 'the #10 typhoon that hits macau in August 2017'
In python, it means any value can be assigned to a variable/object, or passed as an argument to a function. (Don't worry about function for now)
This is so important that I'm going to repeat it in case you missed it the first few times: everything in Python is an object. Strings are objects. Lists are objects. Functions are objects. Even modules are objects.
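As a small illustrative cell (standard Python, nothing course-specific assumed), `type()` lets you confirm this: every value, including a built-in function, reports the class it is an instance of.

```python
# Everything is an object, so type() can inspect anything
x = 42
print(type(x))        # <class 'int'>
print(type("hello"))  # <class 'str'>
print(type(len))      # even a built-in function is an object
```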
### 2.1.1 Assign value to an object.
Use "=" to assign a value to an object. Let's say we want to assign the value 1 to x, then execute the cell by CTRL+ENTER
```
#Let's say assign the value 1 to x, then execute the cell by CTRL+ENTER
x = 1
#From now on, python recognise x is equal to 1
```
Don't swap the sides! 1 = x gives an error, because Python thinks you are trying to assign a value to the literal 1, which is not allowed
```
# This will create an error, called "SyntaxError". It means python doesn't understand it!
#(You should see this very often from now on)
1 = x
```
The value executed will be stored in memory. From now on, x = 1. Try executing the cell below; it should return 1, telling you the value of x is 1
```
x #try execute me!
```
<img src="https://tse1.mm.bing.net/th?id=OIP.I7QCS0SfLtoGA-q81cFbygEsEf&w=199&h=190&c=7&qlt=90&o=4&dpr=2&pid=1.7"/>
### 2.1.2 Python is case-sensitive!
Try with a capital X. What would you expect? Will the value be the same?
```
X
```
### 2.1.3 You can re-assign the value for an object.
Let's try to assign another value for x
```
# You can assign anything to the variable.
x = 'an expensive iphone'
```
### 2.1.4 The value of the object changes everytime you re-assign
Now, try to execute x again on the cell below? You can also try to execute the cell 1.03 above. What would you expect?
```
x
```
#### Exercise 2.1.4
What is the value of x?
```
x = 1
x = 1 + x
print(x)
#what is X now?
```
A Python Code Sample
```
x = 34 - 23   # x is a variable assigned the result of the expression 34 - 23; = is the assignment operator
# (Hash) starts a comment. Python ignores comments, so they don't affect the code
y = "Hello"   # This is a string. The characters between " " form the string
z = 3.45      # z is assigned 3.45, a number with a decimal point. Python calls this type a float
if z == 3.45 or y == "Hello":   # == is a comparison operator: is z equal to 3.45? is y equal to "Hello"?
    # Both are true here, so we enter the if block and run the two indented lines below.
    # An "if" statement runs its indented block only when the condition is true;
    # the indentation clearly marks which lines belong to the block.
    x = x + 1
    y = y + "World"
    # Don't forget the ":" at the end of the if statement, otherwise Python will
    # give you an error
# Python infers types for you: it knows x is an int, so adding 1 is integer
# addition, and it knows y is a string, so "World" gets concatenated to it.
x = 1 #this is a comment
x
```
### 2.1.5 How to seek help in python
You can use dir(), help(), or the IPython "?" suffix to seek help in Python
```
## first define a sample string to inspect
wiki = "Python is a widely used high-level programming language"
## dir() lists the attributes of an object
dir(wiki)
## to learn more about something we can use help()
help(wiki.replace)
## or the fancy IPython version "?" (works in Jupyter/IPython only)
wiki.replace?
```
## 2.2.0 Data type in python (Integer, Float, String)
### 2.2.1 integer
An integer is, literally, a whole number, e.g. 1, 2, 3, 4. You can use type() to check the data type of a variable
```
y = 2
type(y)
```
You can also check the data type directly
```
type(2)
```
### 2.2.2 Float
A float is also a number. However, a float carries more information because it includes decimal places
```
z = 3.1416
type(z)
```
You can use int() to transform float to integer, or float() to transform integer to float
```
int(z)
```
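One detail worth seeing in a quick sketch: `int()` truncates the decimal part rather than rounding, and truncation goes toward zero for negative numbers.

```python
z = 3.1416
print(int(z))     # 3 — int() drops the decimal part (no rounding)
print(float(2))   # 2.0 — float() adds a decimal point
print(int(-2.7))  # -2 — truncation goes toward zero
```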
### 2.2.3 String
A string can be made of any characters you can type on the keyboard, as long as they are enclosed in " or '.
```
this_is_a_string = 'Patrick'
this_is_also_a_string = "$%%^&*()njfnksdjfsdyftuy32uygsjdhfbmnasd"
number_can_also_be_string = "123456"
#Anything within a " or ' will be a string
myname = "Patrick"
type(myname)
```
String can be concatenated by using +
```
#example
firstname = 'Cristiano'
lastname = 'Ronaldo'
fullname = firstname + lastname
fullname
```
Remember, a string carries less information: Python does not treat numbers inside a string as numbers. What would you expect to see below?
```
one = '1'
two = '2'
one+two
```
Conversely, you can use float() or int() to turn a numeric string back into a number (and str() to go the other way)
```
float(one) + int(two)
```
<img src="http://ivl-net.eu/wp-content/uploads/2015/04/workshops-chalkboard.jpg"/>
#### Example 2.2.3 - Work out your full name
1. Try input your first name and last name in the variable, and work out your full name
2. Now, realize there is no space between your name? Can you amend the code so there is a space?
3. Type in your age, and assign it as integer to the age variable
4. complete the code so it prints out "[Yourname] is now [age] years old".
e.g. Cristiano Ronaldo is now 32 years old
```
firstname = 'cristiano'
lastname = 'ronaldo'
fullname = firstname + lastname
print(fullname)
age = 32
print(fullname + ' is now ' + str(age) + ' years old')
```
#### Exercise 2.2.3 - Work out your full name
```
#exercise: Try input your first name and last name in the variable
firstname = 'Denis'
lastname = 'Tsoi'
fullname = firstname + lastname
print(fullname)
#How do we add a space in between?
fullname = firstname + " " + lastname
age = 32
print(fullname + ' is now ' + str(age) + ' years old.')
```
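As a hedged aside (not part of the original exercise): Python 3.6+ f-strings give a tidier way to build the same sentence, with no manual str() conversion or spacing.

```python
firstname = 'Denis'
lastname = 'Tsoi'
age = 32
# f-strings interpolate values directly into the text
print(f"{firstname} {lastname} is now {age} years old.")
```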
### 2.2.4 Object can be of any data type
Try execute the cell below
```
x = 1
print(x)
house_price = 100
print(house_price)
girlfriend = 'pretty'
print(girlfriend)
girlfriend
```
### 2.2.5 Reserved words
There are a bunch of words that you are not allowed to use
This is because they have special meanings in python
https://www.programiz.com/python-programming/keywords-identifier
```
# Don't use these words!! They have special meanings
False
None
and
if
for
from
import
as
is
in
```
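If you ever want to check whether a name is reserved, the standard-library `keyword` module (a small sketch, nothing else assumed) lists them all:

```python
import keyword

print(keyword.kwlist)             # the full list of reserved words
print(keyword.iskeyword('for'))   # True — 'for' is reserved
print(keyword.iskeyword('wiki'))  # False — safe to use as a variable name
```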
## 2.3.0 Python mathematics operations
### 2.3.1 Sum
<img src="https://i-cdn.phonearena.com/images/article/98010-thumb/The-fall-2017-Apple-iPhone-lineup-we-now-have-8-iPhones-to-choose-from-more-than-ever-before.jpg"/>
#### Example 2.3.1
```
#you can use it like a calculator. Try 6+7? you should be expecting 13?
6+7
#Let's say I want to know how much it costs to buy the whole iPhone set
iphoneX = 9998
airpods = 1288
price_for_new_iphone = iphoneX + airpods
price_for_new_iphone
```
#### Exercise 2.3.1 - Cost of Samsung S8
Using the previous example, the Samsung S8 costs $5200.
1. Assign 5200 to samsung_s8
2. Assign 0 to normal_headphone (because we already have one)
3. Find out the total cost of the Samsung S8 and the normal headphone, and assign it to a variable called price_for_new_samsung
```
#Your code in here!
samsung_s8 = 5200
normal_headphone = 0
price_for_new_samsung = samsung_s8 + normal_headphone
price_for_new_samsung
```
### 2.3.2 Subtraction
<img src="http://na.signwiki.org/images/public/d/db/Subtraction.gif"/>
#### Example 2.3.2
```
# You can also do calculation like excel
6-7
11-9
## read interactive input (note: input() always returns a string)
a = input("enter the first number: ")
b = input("enter the second number: ")
#example
#Assign 2019 to a variable called THISYEAR
THISYEAR = 2019
#Assign the year of your birthday to a variable called BIRTHYEAR
BIRTHYEAR = 1987
#work out your age by subtracting BIRTHYEAR from THISYEAR
age = THISYEAR-BIRTHYEAR
print(age)
```
<img src="http://cdn1.knowyourmobile.com/sites/knowyourmobilecom/files/styles/gallery_wide/public/2016/02/screen_shot_2016-02-22_at_4.48.09_pm.jpg?itok=H2I28t7Q"/>
#### Exercise 2.3.2 - Iphone vs Samsung
Using the previous example, you have the price of the Samsung S8. Find out how much more expensive the iPhone is compared to the Samsung.
1. Create a new object called price_for_new_samsung
2. we already have an object called price_for_new_iphone
3. execute price_for_new_iphone - price_for_new_samsung, and assign to a variable called "money_saved"
```
#Your code in here!
price_for_new_samsung = 2000
price_for_new_iphone = 3000
money_saved = price_for_new_iphone - price_for_new_samsung
print(money_saved)
```
### 2.3.3 multiplication
#### Example 2.3.3
```
7*6
#exercise
#let's assign 7.8 to a variable called exchange_rate
exchange_rate = 7.8
#Assign 100 to a variable called usd
usd = 100
#work out how many HKD 100 USD is worth
hkd = exchange_rate*usd
print(hkd)
```
#### Exercise 2.3.3 - HKD & USD
Using the previous example, can you try to convert hkd back to usd?
```
#your code here!
def convert_hkd_to_usd(hkd):
    exchange_rate = 7.8
    return hkd / exchange_rate
convert_hkd_to_usd(780)
```
### 2.3.4 Division
```
## in Python 3, / between integers performs true division and returns a float
6/7
## so 6./7 gives the same result as 6/7 (use // if you want integer division)
6./7
## You don't need to declare the type of a variable. The interpreter infers it.
a=6
b=7
print(a*b , a+b, a-b, a/b)
## if one element is a float, the result of mixed arithmetic is a float
a=6. ## this is now a float
b=7
print(a*b , a+b, a-b, a/b)
```
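A short sketch of the remaining arithmetic operators (standard Python, nothing extra assumed): floor division `//`, remainder `%`, and exponentiation `**`.

```python
print(6 / 7)    # true division always returns a float in Python 3
print(6 // 7)   # 0 — floor division drops the fractional part
print(7 // 2)   # 3
print(7 % 2)    # 1 — the remainder after division
print(2 ** 10)  # 1024 — exponentiation
```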
### 2.3.5 Logical operation in python
Play around with logic!
== checks whether two values are equal
```
x = 1
x ==1 # Check if x equals one
```
!= checks whether two values are not equal
```
x != 1
## Some basic logic operators
a = 2
b = 3
print("a = ", a)
print("b = ", b)
## == stands for "is equal to"
## be careful and do not confuse
## == which is an operator that compares the two operands
## with = , which is an assignment operator.
print("a == b is ", a == b)
## != "not equal to"
print("a != b is ", a != b)
## greater and smaller than
print("a < b is ", a < b)
print("a > b is ", a > b)
## the basic boolean types.
print("True is ... well ... ", True)
print("...and obviously False is ", False)
```
">" and "<" mean greater than and less than
```
iphone = 9000
samsung = 5000
iphone > samsung
```
#### Exercise 2.3.5 - check if 1 equals 1.0
```
x = 1
y = 1.0
#Your code here!
x == y
```
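Comparisons can also be combined. A small sketch of the boolean operators `and`, `or` and `not` (standard Python, nothing else assumed):

```python
a = 2
b = 3
print(a < b and b < 10)  # True: both comparisons hold
print(a > b or b == 3)   # True: at least one comparison holds
print(not a == b)        # True: a and b are different
```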
## 2.4.0 Python string / text operations
```
## a string is just a sequence of characters within quotes "" or ''
mystr1 = "i am"
mystr2 = 'i am'
mystr1==mystr2
```
### 2.4.1 String is just text
Python can handle long text - even an entire novel, or the entire Wikipedia!
```
wiki = "Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991"
print(wiki)
```
### 2.4.2 Indexing in string
You can navigate each letter in the string by using square brackets and index
```
print(wiki[3])
print(wiki[-1])
print(wiki[0:20])
```
### 2.4.3 String related functions
Some very useful functions to find characters in strings
```
## manipulating strings is very easy
## finding the location of a substring
print(wiki.find("Python"))
## changing to uppercase
print(wiki.upper())
## replacing substrings
print(wiki.replace('high','low'))
## these operations do not modify the original string
print(wiki)
## we can count the occurrences of a substring (note: count is case-sensitive)
print(wiki.count('python') )
#Lower change all the letters to lower case
wiki.lower().count('python')
## "in" returns a boolean
print("python" in wiki)
print("language" in wiki)
## .split() separates fields
print(wiki.split())
```
#### Challenge 2.4.3 - Change the shape of the song "Shape of you"
1. This is a popular song from Ed Sheeran called "Shape of You". You can listen to it via the YouTube link (please put on your earphones first)
2. Count the occurrences of the word "baby"
3. Count the number of words in the entire lyrics
4. Replace "Oh I oh I oh I oh I" with something else
https://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html
```
from IPython.display import YouTubeVideo
shape_of_you = "The club isn't the best place to find a lover. So the bar is where I go (mmmm). Me and my friends at the table doing shots. Drinking fast and then we talk slow (mmmm). And you come over and start up a conversation with just me. And trust me I'll give it a chance now (mmmm). Take my hand, stop, put Van The Man on the jukebox. And then we start to dance. And now I'm singing like. . Girl, you know I want your love. Your love was handmade for somebody like me. Come on now, follow my lead. I may be crazy, don't mind me. Say, boy, let's not talk too much. Grab on my waist and put that body on me. Come on now, follow my lead. Come, come on now, follow my lead (mmmm). . I'm in love with the shape of you. We push and pull like a magnet do. Although my heart is falling too. I'm in love with your body. Last night you were in my room. And now my bedsheets smell like you. Every day discovering something brand new. I'm in love with your body. . Oh I oh I oh I oh I. I'm in love with your body. Oh I oh I oh I oh I. I'm in love with your body. Oh I oh I oh I oh I. I'm in love with your body. Every day discovering something brand new. I'm in love with the shape of you. . One week in we let the story begin. We're going out on our first date (mmmm). You and me are thrifty, so go all you can eat. Fill up your bag and I fill up a plate (mmmm). We talk for hours and hours about the sweet and the sour. And how your family is doing okay (mmmm). And leave and get in a taxi, then kiss in the backseat. Tell the driver make the radio play. And I'm singing like. . Girl, you know I want your love. Your love was handmade for somebody like me. Come on now, follow my lead. I may be crazy, don't mind me. Say, boy, let's not talk too much. Grab on my waist and put that body on me. Come on now, follow my lead. Come, come on now, follow my lead (mmmm). . I'm in love with the shape of you. We push and pull like a magnet do. Although my heart is falling too. I'm in love with your body. Last night you were in my room. And now my bedsheets smell like you. Every day discovering something brand new. I'm in love with your body. . Oh I oh I oh I oh I. I'm in love with your body. Oh I oh I oh I oh I. I'm in love with your body. Oh I oh I oh I oh I. I'm in love with your body. Every day discovering something brand new. I'm in love with the shape of you. . Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. Come on, be my baby, come on. . I'm in love with the shape of you. We push and pull like a magnet do. Although my heart is falling too. I'm in love with your body. Last night you were in my room. And now my bedsheets smell like you. Every day discovering something brand new. I'm in love with your body. . Come on, be my baby, come on. Come on, be my baby, come on. I'm in love with your body. Come on, be my baby, come on. Come on, be my baby, come on. I'm in love with your body. Come on, be my baby, come on. Come on, be my baby, come on. I'm in love with your body. Every day discovering something brand new. I'm in love with the shape of you"
#print(shape_of_you)
(YouTubeVideo('JGwWNGJdvx8'))
#Your code here!
# print("baby occurances", shape_of_you.count("baby"))
# print("word occurances", len(shape_of_you.split()))
# shape_of_you.replace("Oh I oh I oh I oh I", "meep")
def find_count_of_word_in_song(word, song):
    if word is None:
        raise Exception("Word is not provided")
    elif song is None:
        raise Exception("Song is not provided")
    return song.count(word)
find_count_of_word_in_song("baby", shape_of_you)
"100" + "100"  # string concatenation gives "100100", not 200
```
<img src="https://www.codefor.hk/wp-content/themes/DC_CUSTOM_THEME/img/logo-code-for-hk-logo.svg" height="150" width="150" align="center"/>
## Lists
A **List** is a common way to store a collection of objects in Python. Lists are defined with square brackets `[]` in Python.
```
# An empty list can be assigned to a variable and added to later
empty_list = []
# Or a list can be initialized with a few elements
fruits_list = ['apple', 'banana', 'orange', 'watermelon']
type(fruits_list)
```
Lists are *ordered*, meaning they can be indexed to access their elements, much like accessing characters in a **String**.
```
fruits_list[0]
fruits_list[1:3]
```
Lists are also *mutable*, meaning they can be changed in place, extended and shortened at will.
```
# Let's replace apple with pear
fruits_list[0] = 'pear'
fruits_list
```
We can also append to lists with the `.append()` method to add an **element** to the end.
```
fruits_list.append('peach')
fruits_list
```
Or we can remove and return the last element from a list with `pop()`.
```
fruits_list.pop()
# Notice that 'peach' is no longer in the fruits list
fruits_list
```
To understand mutability, let's compare the list with an **immutable** collection of ordered elements, the **Tuple**.
## Tuples
Tuples in Python are defined with `()` parentheses.
```
# Tuples are defined similarly to lists, but with () instead of []
empty_tuple = ()
fruits_tuple = ('apple', 'banana', 'orange', 'watermelon')
fruits_tuple
```
Like the *ordered* **String** and **List**, the **Tuple** can be indexed to access its elements.
```
fruits_tuple[0]
fruits_tuple[1:3]
```
Unlike the *mutable* list, we get an error if we try to change the elements of the **Tuple** in place, or append to it.
```
fruits_tuple[0] = 'pear'
fruits_tuple.append('peach')
```
## Why would I ever use an inferior version of the list?
Tuples are less flexible than lists, but sometimes that is *exactly what you want* in your program.
Say I have a bunch of *constants* and I want to make sure that they stay... constant. In that case, a tuple would be a better choice to store them than a list. Then if any future code tries to change or extend your tuple as if it were a list, it would throw an error and you'd know something was not behaving as expected.
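A brief sketch of that point (the names here are made up for illustration): trying to mutate a tuple raises a `TypeError`, which is exactly the early warning you want for accidental changes to constants.

```python
CONSTANTS = (3.14159, 2.71828)
try:
    CONSTANTS[0] = 3.0   # tuples cannot be modified in place
except TypeError as e:
    print("Cannot modify a tuple:", e)
```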
## List methods
Let's explore some useful **methods** of the **list**. We have already used the `.append()` and `.pop()` methods above. To see what methods are available, we can always use `help()`.
```
# Recall, underscores denote special methods reserved by Python
# We can scroll past the _methods_ to append(...)
help(list)
```
Let's try out the `.index()` and `.sort()` methods.
```
pets = ['dog', 'cat', 'snake', 'turtle', 'guinea pig']
# Let's find out what index cat is at
pets.index('cat')
# Now let's try sorting the list
pets.sort()
pets
```
Sorting a list of strings rearranges them into alphabetical order. Lists are not restricted to holding strings; let's see what happens when we sort a list of **int** values.
```
ages = [12, 24, 37, 9, 71, 42, 5]
ages.sort()
ages
```
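One distinction worth a quick sketch: the built-in `sorted()` returns a new sorted list and leaves the original untouched, while `.sort()` sorts in place and returns `None`.

```python
ages = [12, 24, 37, 9]
print(sorted(ages))                # [9, 12, 24, 37] — new list, original unchanged
print(sorted(ages, reverse=True))  # [37, 24, 12, 9]
print(ages)                        # [12, 24, 37, 9] — still in original order
ages.sort()                        # sorts in place and returns None
print(ages)                        # [9, 12, 24, 37]
```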
Sorting can be a very useful feature for ordering your data in Python.
A useful built-in function that can be used to find the length of an ordered data structure is `len()`. It works on lists, tuples, strings and the like.
```
print(len(['H', 'E', 'L', 'L', 'O']), len((1, 2, 3, 4)), len('Hello'))
```
Great, you now know the basics of lists and tuples. Lastly, we will explore **sets** and **dictionaries**.
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# ShuffleSplit and GridSearchCV are used in the cross-validation section below
from sklearn.model_selection import ShuffleSplit, GridSearchCV
from agglio_lib import *
#-------------------------------Data Generation section---------------------------#
n = 1000
d = 50
sigma=0.5
w_radius = 10
wAst = np.random.randn(d,1)
X = getData(0, 1, n, d)/np.sqrt(d)
w0 =w_radius*np.random.randn(d,1)/np.sqrt(d)
ipAst = np.matmul(X, wAst)
# y = sigmoid(ipAst)
y = sigmoid_noisy_pre(ipAst,sigma_noise=sigma)
#-----------AGGLIO-GD-------------#
params={}
params['algo']='AG_GD'
params['w0']=w0
params['wAst']=wAst
objVals_agd,distVals_agd,time_agd = cross_validate(X,y,params,cross_validation=True)
#-----------AGGLIO-SGD-------------#
params={}
params['algo']='AG_SGD'
params['w0']=w0
params['wAst']=wAst
objVals_agsgd,distVals_agsgd,time_agsgd = cross_validate(X,y,params,cross_validation=True)
#-----------AGGLIO-SVRG-------------#
params={}
params['algo']='AG_SVRG'
params['w0']=w0
params['wAst']=wAst
objVals_agsvrg,distVals_agsvrg,time_agsvrg = cross_validate(X,y,params,cross_validation=True)
#-----------AGGLIO-ADAM-------------#
hparams = {}
hparams['AG_ADAM']={}
hparams['AG_ADAM']['alpha']=np.power(10.0, [0, -1, -2, -3]).tolist()
hparams['AG_ADAM']['B_init']=np.power(10.0, [0, -1, -2, -3]).tolist()
hparams['AG_ADAM']['B_step']=np.linspace(start=1.01, stop=3, num=5).tolist()
hparams['AG_ADAM']['beta_1'] = [0.3, 0.5, 0.7, 0.9]
hparams['AG_ADAM']['beta_2'] = [0.3, 0.5, 0.7, 0.9]
hparams['AG_ADAM']['epsilon'] = np.power(10.0, [-3, -5, -8]).tolist()
hparam = hparams['AG_ADAM']
cv = ShuffleSplit( n_splits = 1, test_size = 0.3, random_state = 42 )
grid = GridSearchCV( AG_ADAM(), param_grid=hparam, refit = False, cv=cv) #, verbose=3
grid.fit( X, y.ravel(), w_init=w0.ravel(), w_star=wAst.ravel(), minibatch_size=50)
best = grid.best_params_
print("The best parameters are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
#ag_adam = AG_ADAM(alpha= best["alpha"], B_init=best['B_init'], B_step=best['B_step'], beta_1=best['beta_1'], beta_2=best['beta_2'] )
ag_adam = AG_ADAM(alpha= best["alpha"], B_init=best['B_init'], B_step=best['B_step'], beta_1=best['beta_1'], beta_2=best['beta_2'], epsilon=best['epsilon'] )
ag_adam.fit( X, y.ravel(), w_init=w0.ravel(), w_star=wAst.ravel(), max_iter=600 )
distVals_ag_adam = ag_adam.distVals
time_ag_adam=ag_adam.clock
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['ps.fonttype'] = 42
fig = plt.figure()
plt.plot(time_agd, distVals_agd, label='AGGLIO-GD', color='#1b9e77', linewidth=3)
plt.plot(time_agsgd, distVals_agsgd, label='AGGLIO-SGD', color='#5e3c99', linewidth=3)
plt.plot(time_agsvrg, distVals_agsvrg, label='AGGLIO-SVRG', color='#d95f02', linewidth=3)
plt.plot(time_ag_adam, distVals_ag_adam, label='AGGLIO-ADAM', color='#01665e', linewidth=3)
plt.legend()
plt.ylabel("$||w^t-w^*||_2$",fontsize=12)
plt.xlabel("Time",fontsize=12)
plt.grid()
plt.yscale('log')
plt.xlim(time_agd[0], time_agd[-1])
plt.title(rf'n={n}, d={d}, $\sigma$ = {sigma}, pre-activation')
plt.savefig('Agglio_pre-noise_sigmoid.pdf', dpi=300)
plt.show()
```
---
Lambda School Data Science
*Unit 2, Sprint 3, Module 2*
---
# Permutation & Boosting
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your work.
- [ ] If you haven't completed assignment #1, please do so first.
- [ ] Continue to clean and explore your data. Make exploratory visualizations.
- [ ] Fit a model. Does it beat your baseline?
- [ ] Try xgboost.
- [ ] Get your model's permutation importances.
You should try to complete an initial model today, because for the rest of the week we're making model interpretation visualizations.
But, if you aren't ready to try xgboost and permutation importances with your dataset today, that's okay. You can practice with another dataset instead. You may choose any dataset you've worked with previously.
The data subdirectory includes the Titanic dataset for classification and the NYC apartments dataset for regression. You may want to choose one of these datasets, because example solutions will be available for each.
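Before diving into the readings, it can help to see what permutation importance actually measures on a tiny synthetic example (this sketch uses scikit-learn's `permutation_importance`; the data here is made up and not from the project dataset). Shuffling an informative column should hurt the score, while shuffling a noise column should not.

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.randn(300, 3)
y = (X[:, 0] > 0).astype(int)  # only column 0 carries signal

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
# Shuffling column 0 drops accuracy sharply; columns 1 and 2 barely matter.
print(result.importances_mean)
```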
## Reading
Top recommendations in _**bold italic:**_
#### Permutation Importances
- _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
#### (Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)
#### Gradient Boosting
- [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/)
- _**[A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)**_
- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8
- [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html)
- _**[Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (2.5 minute video)**_
```
# all imports needed for this sheet
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeRegressor
import xgboost as xgb
%matplotlib inline
import seaborn as sns
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
!pip install category_encoders
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
!pip install eli5
# If you're working locally:
else:
DATA_PATH = '../data/'
df = pd.read_excel(DATA_PATH+'/Unit_2_project_data.xlsx')
exit_reasons = ['Rental by client with RRH or equivalent subsidy',
'Rental by client, no ongoing housing subsidy',
'Staying or living with family, permanent tenure',
'Rental by client, other ongoing housing subsidy',
'Permanent housing (other than RRH) for formerly homeless persons',
'Staying or living with friends, permanent tenure',
'Owned by client, with ongoing housing subsidy',
'Rental by client, VASH housing Subsidy'
]
# pull all exit destinations from main data file and sum up the totals of each destination,
# placing them into new df for calculations
exits = df['3.12 Exit Destination'].value_counts()
# create target column (multiple types of exits to perm)
df['perm_leaver'] = df['3.12 Exit Destination'].isin(exit_reasons)
# base case
df['perm_leaver'].value_counts(normalize=True)
# replace spaces with underscore
df.columns = df.columns.str.replace(' ', '_')
# see size of df prior to dropping empties
df.shape
# drop rows with no exit destination (current guests at time of report)
df = df.dropna(subset=['3.12_Exit_Destination'])
# shape of df after dropping current guests
df.shape
# verify no NaN in exit destination feature
df['3.12_Exit_Destination'].isna().value_counts()
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
train = df
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['perm_leaver'], random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# drop any private information
X = X.drop(columns=['3.1_FirstName', '3.1_LastName', '3.2_SocSecNo',
'3.3_Birthdate', 'V5_Prior_Address'])
# drop unusable columns
X = X.drop(columns=['2.1_Organization_Name', '2.4_ProjectType',
'WorkSource_Referral_Most_Recent', 'YAHP_Referral_Most_Recent',
'SOAR_Enrollment_Determination_(Most_Recent)',
'R7_General_Health_Status', 'R8_Dental_Health_Status',
'R9_Mental_Health_Status', 'RRH_Date_Of_Move-In',
'RRH_In_Permanent_Housing', 'R10_Pregnancy_Due_Date',
'R10_Pregnancy_Status', 'R1_Referral_Source',
'R2_Date_Status_Determined', 'R2_Enroll_Status',
'R2_Reason_Why_No_Services_Funded', 'R2_Runaway_Youth',
'R3_Sexual_Orientation', '2.5_Utilization_Tracking_Method_(Invalid)',
'2.2_Project_Name', '2.6_Federal_Grant_Programs', '3.16_Client_Location',
'3.917_Stayed_Less_Than_90_Days',
'3.917b_Stayed_in_Streets,_ES_or_SH_Night_Before',
'3.917b_Stayed_Less_Than_7_Nights', '4.24_In_School_(Retired_Data_Element)',
'CaseChildren', 'ClientID', 'HEN-HP_Referral_Most_Recent',
'HEN-RRH_Referral_Most_Recent', 'Emergency_Shelter_|_Most_Recent_Enrollment',
'ProgramType', 'Days_Enrolled_Until_RRH_Date_of_Move-in',
'CurrentDate', 'Current_Age', 'Count_of_Bed_Nights_-_Entire_Episode',
'Bed_Nights_During_Report_Period'])
# drop rows with no exit destination (current guests at time of report)
X = X.dropna(subset=['3.12_Exit_Destination'])
# remove columns to avoid data leakage
X = X.drop(columns=['3.12_Exit_Destination', '5.9_Household_ID', '5.8_Personal_ID',
'4.2_Income_Total_at_Exit', '4.3_Non-Cash_Benefit_Count_at_Exit'])
# Drop needless feature
unusable_variance = ['Enrollment_Created_By', '4.24_Current_Status_(Retired_Data_Element)']
X = X.drop(columns=unusable_variance)
# Drop columns with timestamp
timestamp_columns = ['3.10_Enroll_Date', '3.11_Exit_Date',
'Date_of_Last_ES_Stay_(Beta)', 'Date_of_First_ES_Stay_(Beta)',
'Prevention_|_Most_Recent_Enrollment', 'PSH_|_Most_Recent_Enrollment',
'Transitional_Housing_|_Most_Recent_Enrollment', 'Coordinated_Entry_|_Most_Recent_Enrollment',
'Street_Outreach_|_Most_Recent_Enrollment', 'RRH_|_Most_Recent_Enrollment',
'SOAR_Eligibility_Determination_(Most_Recent)', 'Date_of_First_Contact_(Beta)',
'Date_of_Last_Contact_(Beta)', '4.13_Engagement_Date', '4.11_Domestic_Violence_-_When_it_Occurred',
'3.917_Homeless_Start_Date']
X = X.drop(columns=timestamp_columns)
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
train.columns
# Assign to X, y to avoid data leakage
features = ['3.15_Relationship_to_HoH', 'CaseMembers',
'3.2_Social_Security_Quality', '3.3_Birthdate_Quality',
'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender',
'3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry',
'3.917_Living_Situation', 'Length_of_Time_Homeless_(3.917_Approximate_Start)',
'3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years',
'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)',
'4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence',
'4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type',
'R4_Last_Grade_Completed', 'R5_School_Status',
'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment',
'R6_Looking_for_Work', '4.2_Income_Total_at_Entry',
'4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry',
'Chronic_Homeless_Status', 'Under_25_Years_Old',
'4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition',
'4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)',
'4.08_HIV/AIDS', '4.09_Mental_Health_Problem',
'4.05_Physical_Disability'
]
target = 'perm_leaver'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
# Arrange data into X features matrix and y target vector
target = 'perm_leaver'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
from scipy.stats import randint, uniform
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
    'n_estimators': randint(5, 500),
    'max_depth': [10, 15, 20, 50, None],
    'max_features': [.5, 1, 2, 3, 'sqrt', None],
}
search = RandomizedSearchCV(
RandomForestRegressor(random_state=42),
param_distributions=param_distributions,
n_iter=20,
cv=5,
scoring='neg_mean_absolute_error',
verbose=10,
return_train_score=True,
n_jobs=-1,
random_state=42
)
search.fit(X_train, y_train);
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import GradientBoostingClassifier
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=100, n_jobs=-1, max_features=None, random_state=42
)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
# Get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
# Plot feature importances
%matplotlib inline
import matplotlib.pyplot as plt
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
xgb.XGBClassifier(n_estimators=110, n_jobs=-1, num_parallel_tree=200,
random_state=42
)
)
# Fit on Train
pipeline.fit(X_train, y_train)
# Score on val
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
# cross validation
k = 3
scores = cross_val_score(pipeline, X_train, y_train, cv=k,
scoring='accuracy')
print(f'Accuracy for {k} folds:', scores)
scores.mean()
# get and plot feature importances
# Linear models have coefficients whereas decision trees have "Feature Importances"
import matplotlib.pyplot as plt
model = pipeline.named_steps['xgbclassifier']
encoder = pipeline.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_val).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
plt.figure(figsize=(10,30))
importances.sort_values().plot.barh(color='grey')
df['4.1_Housing_Status'].value_counts()
X_train.shape
X_train.columns
X_train.Days_Enrolled_in_Project.value_counts()
column = 'Days_Enrolled_in_Project'
# Fit without column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=250, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train.drop(columns=column), y_train)
score_without = pipeline.score(X_val.drop(columns=column), y_val)
print(f'Validation Accuracy without {column}: {score_without}')
# Fit with column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=250, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Compare the error with & without column
print(f'Drop-Column Importance for {column}: {score_with - score_without}')
column = 'Days_Enrolled_in_Project'
# Fit without column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=250, max_depth=7, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train.drop(columns=column), y_train)
score_without = pipeline.score(X_val.drop(columns=column), y_val)
print(f'Validation Accuracy without {column}: {score_without}')
# Fit with column
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=250, max_depth=7, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Compare the error with & without column
print(f'Drop-Column Importance for {column}: {score_with - score_without}')
# Fit with all the data
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
score_with = pipeline.score(X_val, y_val)
print(f'Validation Accuracy with {column}: {score_with}')
# Before: Sequence of features to be permuted
feature = 'Days_Enrolled_in_Project'
X_val[feature].head()
# Before: Distribution of quantity
X_val[feature].value_counts()
# Permute the dataset
X_val_permuted = X_val.copy()
X_val_permuted[feature] = np.random.permutation(X_val[feature])
# After: Sequence of features to be permuted
X_val_permuted[feature].head()
# Distribution hasn't changed!
X_val_permuted[feature].value_counts()
# Get the permutation importance
score_permuted = pipeline.score(X_val_permuted, y_val)
print(f'Validation Accuracy with {column} not permuted: {score_with}')
print(f'Validation Accuracy with {column} permuted: {score_permuted}')
print(f'Permutation Importance for {column}: {score_with - score_permuted}')
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent')
)
X_train_transformed = pipeline.fit_transform(X_train)
X_val_transformed = pipeline.transform(X_val)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_transformed, y_train)
!pip install eli5
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val_transformed, y_val)
permuter.feature_importances_
eli5.show_weights(
permuter,
top=None,
feature_names=X_val.columns.tolist()
)
print('Shape before removing features:', X_train.shape)
minimum_importance = 0
mask = permuter.feature_importances_ > minimum_importance
features = X_train.columns[mask]
X_train = X_train[features]
print('Shape after removing features:', X_train.shape)
X_val = X_val[features]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', pipeline.score(X_val, y_val))
from xgboost import XGBClassifier
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', pipeline.score(X_val, y_val))
encoder = ce.OrdinalEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
model = XGBClassifier(n_estimators=1000, # <= 1000 trees, depends on early stopping
                      max_depth=7, # try deeper trees with high cardinality data
                      learning_rate=0.1, # try higher learning rate
                      random_state=42,
                      n_jobs=-1)
eval_set = [(X_train_encoded, y_train),
(X_val_encoded, y_val)]
# Fit on train, score on val
model.fit(X_train_encoded, y_train,
eval_metric='auc',
eval_set=eval_set,
early_stopping_rounds=25)
from sklearn.metrics import mean_absolute_error as mae
results = model.evals_result()
train_error = results['validation_0']['auc']
val_error = results['validation_1']['auc']
iterations = range(1, len(train_error) + 1)
plt.figure(figsize=(10,7))
plt.plot(iterations, train_error, label='Train')
plt.plot(iterations, val_error, label='Validation')
plt.title('XGBoost Validation Curve')
plt.ylabel('AUC')
plt.xlabel('Model Complexity (n_estimators)')
plt.legend();
```
| github_jupyter |
```
import numpy as np
from scipy.optimize import fsolve
%matplotlib inline
import time
import pylab as pl
from IPython import display
# PSl – Power of solar radiation arriving to the Earth (short wave radiation)
# Pz – Power of radiation emitted from Earth (long wave radiation)
# A – mean albedo of the Earth surface
# S – solar constant
# PowZ – area of the Earth
# sbc – Stefan-Boltzmann constant
# PSl = S * (PowZ/4) *(1 - A)
# Pz = sbc * (T**4) * PowZ
# Pz = PSl
# sbc * (T**4) * PowZ = S * (PowZ/4) *(1 - A)
# (T**4) = (S * (PowZ/4) * (1 - A))/(sbc*PowZ) = (S * (1 - A))/(4 * sbc)
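# Added sanity check (a sketch, not part of the original script): plugging the
# standard values S = 1366 W/m^2 and A = 0.3 into the derived balance
# T**4 = S*(1 - A)/(4*sbc) reproduces the textbook effective temperature:
T_check = (1366.0 * (1 - 0.3) / (4 * 5.67e-8)) ** 0.25
# T_check is about 254.9 K, i.e. roughly -18 degrees Celsius before
# greenhouse warming is taken into account.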
# Diagram methods
def doPlot(x, y, col):
pl.plot(x, y, col, markersize=3)
display.clear_output(wait=True)
display.display(pl.gcf())
time.sleep(0.1)
def setRanges():
pl.xlim(S_range_left * 0.95, S_range_right * 1.05)
pl.ylim(-80, 40)
pl.xlabel('Fraction of solar constant value')
pl.ylabel('Mean temperature in Celsius degrees')
pl.title('Glacial-interglacial transition')
def changeColor(i, val):
if i >= val:
return 'b+'
else:
return 'ro'
# Computation methods
def k_to_c(k):
return k - 273.15
# No atmosphere methods
def withoutAtmosphere(S, A, sbc):
return pow((S * (1 - A)) / (4 * sbc), 0.25)
# Taking atmosphere methods
def withAtmosphereEquations(p):
Ts, Ta = p
e1 = (-sw_ta) * (1 - sw_as) * S / 4 + c * (Ts - Ta) + sbc * (Ts ** 4) * (1 - lw_aa) - sbc * (Ta ** 4)
e2 = -(1 - sw_aa - sw_ta + sw_as * sw_ta) * S / 4 - c * (Ts - Ta) - sbc * (Ts ** 4) * (
1 - lw_ta - lw_aa) + 2 * sbc * (Ta ** 4)
return e1, e2
def withAtmosphere():
return fsolve(withAtmosphereEquations, (0.0, 0.0))
def changeAs(t):
if (t < Tc):
return sw_as_init_r
else:
return sw_as_init
# Variable initiation
# PSl = 0.0
# Pz = 0.0
A = 0.3
S_init = 1366.0 # W/m 2
S = S_init # W/m 2
# PowZ = 0.0
sbc = 5.67 * pow(10.0, -8) # W/m^2*K^4
# Short-wave radiation
sw_as_init = 0.19
sw_as_init_r = 0.65
sw_as = sw_as_init
sw_ta = 0.53
sw_aa = 0.30
# Long-wave radiation
lw_ta = 0.06
lw_aa = 0.31
c = 2.7 # Wm^-2 K^-1
# S in range 0.8 to 1.2 S
S_range_left = 0.4
S_range_right = 1.4
S_range_step = 0.01
Tc = -10 # C degrees
# No atmosphere
T = withoutAtmosphere(S, A, sbc)
print("No atmosphere in Celsius degrees: " + str(k_to_c(T)) + ".")
# Taking atmosphere
Ts, Ta = withAtmosphere()
print("Mean temperature of the atmosphere in Celsius degrees " + str(k_to_c(Ta)) + ".")
print("Mean surface temperature in Celsius degrees " + str(k_to_c(Ts)) + ".")
setRanges()
arr = list(np.arange(S_range_left, S_range_right, S_range_step))
iterator = list(arr)
iterator.reverse()
iterator.extend(arr)
sp1_val = None
sp2_val = None
sp1_frac = None
sp2_frac = None
sp1_temp = None
sp2_temp = None
# Main loop
for i in range(len(iterator)):
S = iterator[i] * S_init
Ts, Ta = withAtmosphere()
TaC = k_to_c(Ta)
TsC = k_to_c(Ts)
sw_as = changeAs(TsC)
if (sw_as != sw_as_init) and (sp1_val is None):
sp1_val = S
sp1_frac = iterator[i]
sp1_temp = TsC
if sw_as != sw_as_init:
sp2_val = S
sp2_frac = iterator[i]
sp2_temp = TsC
col = changeColor(i, len(iterator) / 2)
doPlot(iterator[i], TsC, col)
print("S: " + str(S) + " TaC: " + str(TaC) + " TsC: " + str(TsC) + " Ta: " + str(Ta) + " Ts: " + str(Ts))
print("sp1_val: " + str(sp1_val) + " W/m^2 sp1_frac: " + str(sp1_frac) + " temp1: " + str(sp1_temp))
print("sp2_val: " + str(sp2_val) + " W/m^2 sp2_frac: " + str(sp2_frac) + " temp2: " + str(sp2_temp))
```
---
```
from selenium import webdriver
import time
import numpy as np
DRIVER = webdriver.Chrome()
def get_stats_from_window(driver, handle_number):
driver.switch_to.window(handle_number)
new_link = driver.current_url
statistics1=new_link[:-7]+"statistics;1"
time.sleep(2)
try:
driver.get(statistics1)
time.sleep(2)
login_form=driver.find_element_by_xpath('//div[@id="tab-statistics-1-statistic"]')
# if login_form:
statistics1=login_form.text
# print(statistics1)
with open('half.txt', 'a') as f:
f.write('-----')
f.write(statistics1)
f.write('-----')
new_link = driver.current_url
statistics2=new_link[:-7]+"statistics;0"
# print(statistics2)
driver.get(statistics2)
time.sleep(2)
login_form=driver.find_element_by_xpath('//div[@id="tab-statistics-0-statistic"]')
statistics2=login_form.text
# print(statistics2)
with open('end.txt', 'a') as f:
f.write('-----')
f.write(statistics2)
f.write('-----')
new_link = driver.current_url
goals=new_link[:-12]+"summary"
# print(goals)
time.sleep(2)
driver.get(goals)
time.sleep(2)
login_form=driver.find_element_by_xpath('//div[@id="summary-content"]')
goals=login_form.text
# print(goals)
with open('goal.txt', 'a') as f:
f.write('-----')
f.write(goals)
f.write('-----')
new_link = driver.current_url
info=new_link[:-12]+"summary"
# print(info)
time.sleep(3)
driver.get(info)
time.sleep(3)
login_form=driver.find_element_by_xpath('//td[@id="flashscore_column"]')
info=login_form.text
# print(info)
with open('info.txt', 'a') as f:
f.write('-----')
f.write(info)
f.write('-----')
except:
print("No Statistics for this game!!!")
```
---
```
# DRIVER.get("https://www.soccerstand.com/match/rsmTAHhE/#match-summary")
DRIVER.get("https://www.soccerstand.com/match/8jTixcnk/#match-summary")
game_info1 = DRIVER.find_elements_by_xpath('//td[@class="tname-home logo-enable"]')
game_info2 = DRIVER.find_elements_by_xpath('//td[@class="tname-away logo-enable"]')
# game_info = '-'.join([i.text for i in game_info])
game_info1 = '-'.join([i.text for i in game_info1])
game_info2 = '-'.join([i.text for i in game_info2])
game_info = game_info1+'-'+game_info2
#print(f'TESTING!!!:{game_info3}\n\n')
print(f'TESTING!!!:{game_info}\n\n')
game_info3 = '-'.join(game_info.split('\n')[0].split()[::2])
game_info3
```
---
```
DRIVER.get("https://www.soccerstand.com/team/amiens-sc/lKkBAsxF/results/")
# LOCATION = ELEMENT.location
time.sleep(2)
action = webdriver.common.action_chains.ActionChains(DRIVER)
# ELEMENT = DRIVER.find_element_by_class_name('basketball')
# ELEMENT = DRIVER.find_element_by_class_name('padr')
ELEMENTS = DRIVER.find_elements_by_class_name('padr')
ELEMENTS2 = DRIVER.find_elements_by_class_name('padl')
# ELEMENT = DRIVER.find_element_by_xpath('//td[@title="Click for match detail!"]')
# action.move_to_element(ELEMENT)
print('Done')
for home,away in zip(ELEMENTS, ELEMENTS2):
print(home.text, away.text)
```
### Observation: it actually matters whether the xpath is on the screen! Otherwise the automated software won't be able to click it.
```
# DRIVER.execute_script("arguments[0].scrollIntoView();", ELEMENTS[0])
# DRIVER.execute_script("$(arguments[0]).click();", ELEMENTS[0])
# print(f'There are {len(DRIVER.window_handles)} windows:')
# start_window = DRIVER.window_handles[0]
# current_window = DRIVER.window_handles[-1]
# print(f'Starting window (main page): {start_window}')
# print(f'Current window: {current_window}')
# # get_stats_from_window(DRIVER, current_window)
# DRIVER.switch_to.window(start_window)
# DRIVER.close() #closes the current window
# print(f'There are {len(DRIVER.window_handles)} windows:')
# start_window = DRIVER.window_handles[0]
# current_window = DRIVER.window_handles[-1]
# print(f'Starting window (main page): {start_window}')
# print(f'Current window: {current_window}')
for elem in ELEMENTS[:5]:
time.sleep(3)
# DRIVER.execute_script("arguments[0].scrollIntoView(true);", e)
print(elem, elem.text)
# action.move_to_element(e).click().perform()
# action.click(on_element=e)
# action.perform()
# get_stats_from_window(DRIVER, handle_number)
# e.click()
DRIVER.execute_script("arguments[0].scrollIntoView();", elem)
DRIVER.execute_script("$(arguments[0]).click();", elem)
start_window = DRIVER.window_handles[0]
current_window = DRIVER.window_handles[-1]
get_stats_from_window(DRIVER, current_window)
print(f'Successfully wrote data....')
DRIVER.close() #closes the current window
DRIVER.switch_to.window(start_window)
assert len(DRIVER.window_handles) == 1
# for i in ELEMENTS:
# print(i.location_once_scrolled_into_view, i.text)
# action.move_to_element_with_offset(ELEMENT, 0, 0)
action.move_to_element(ELEMENTS[0])  # use the first matched element (ELEMENT was never assigned)
action.click()
action.perform()
DRIVER.close()
HANDLES = DRIVER.window_handles
print(HANDLES)
import pandas as pd
pd.read_csv('info.txt', engine='python', sep='-----')
pd.read_csv('half.txt', header=None).head()
# pd.read_csv('goal.txt')
```
---
```
from tsfresh.feature_extraction import extract_features
from tsfresh.feature_extraction.settings import ComprehensiveFCParameters, MinimalFCParameters, EfficientFCParameters
from tsfresh.feature_extraction.settings import from_columns
import numpy as np
import pandas as pd
```
This notebooks illustrates the `"fc_parameters"` or `"kind_to_fc_parameters"` dictionaries.
For a detailed explanation, see also http://tsfresh.readthedocs.io/en/latest/text/feature_extraction_settings.html
## Construct a time series container
We construct the time series container that includes two sensor time series, _"temperature"_ and _"pressure"_, for two devices _"a"_ and _"b"_
```
df = pd.DataFrame({"id": ["a", "a", "b", "b"], "temperature": [1,2,3,1], "pressure": [-1, 2, -1, 7]})
df
```
## The default_fc_parameters
Which features are calculated by tsfresh is controlled by a dictionary that contains a mapping from feature calculator names to their parameters.
This dictionary is called `fc_parameters`. It maps feature calculator names (=keys) to parameters (=values). The keys are always the same names as in the `tsfresh.feature_extraction.feature_calculators` module.
In the following we load an exemplary dictionary
```
settings_minimal = MinimalFCParameters() # only a few basic features
settings_minimal
```
This dictionary can be passed to the extract method, resulting in a few basic time series features being calculated:
```
X_tsfresh = extract_features(df, column_id="id", default_fc_parameters = settings_minimal)
X_tsfresh.head()
```
By using `settings_minimal` as the value of the `default_fc_parameters` parameter, those settings are used for all types of time series. In this case, the `settings_minimal` dictionary is used for both the _"temperature"_ and _"pressure"_ time series.
Now, let's say we want to remove the `length` feature and prevent it from being calculated. We just delete it from the dictionary.
```
del settings_minimal["length"]
settings_minimal
```
Now, if we extract features with this reduced dictionary, the `length` feature will not be calculated.
```
X_tsfresh = extract_features(df, column_id="id", default_fc_parameters = settings_minimal)
X_tsfresh.head()
```
## The kind_to_fc_parameters
Now, let's say we do not want to calculate the same features for both types of time series. Instead, there should be a different set of features for each kind.
To do that, we can use the `kind_to_fc_parameters` parameter, which lets us specify precisely which `fc_parameters` we want to use for which kind of time series:
```
fc_parameters_pressure = {"length": None,
"sum_values": None}
fc_parameters_temperature = {"maximum": None,
"minimum": None}
kind_to_fc_parameters = {
"temperature": fc_parameters_temperature,
"pressure": fc_parameters_pressure
}
print(kind_to_fc_parameters)
```
So, in this case, for the _"pressure"_ signal the _"length"_ and _"sum_values"_ features are calculated, while for the _"temperature"_ signal the _"maximum"_ and _"minimum"_ features are extracted instead.
```
X_tsfresh = extract_features(df, column_id="id", kind_to_fc_parameters = kind_to_fc_parameters)
X_tsfresh.head()
```
Now, let's say we lost the `kind_to_fc_parameters` dictionary, or we applied a feature selection algorithm to drop irrelevant feature columns, so our extraction settings contain irrelevant features.
In both cases, we can use the provided `from_columns` method to infer the originating dictionary from the dataframe containing the features:
```
recovered_settings = from_columns(X_tsfresh)
recovered_settings
```
Let's drop a column to show that the inferred settings dictionary really changes:
```
X_tsfresh.iloc[:, 1:]
recovered_settings = from_columns(X_tsfresh.iloc[:, 1:])
recovered_settings
```
## More complex dictionaries
We provide custom fc_parameters dictionaries with greater sets of features.
The `EfficientFCParameters` contain features and parameters that can be calculated quite quickly:
```
settings_efficient = EfficientFCParameters()
settings_efficient
```
The `ComprehensiveFCParameters` are the largest set of features. They will take the longest to calculate.
```
settings_comprehensive = ComprehensiveFCParameters()
settings_comprehensive
```
You see those parameters as values in the `fc_parameters` dictionary? Those are the parameters of the feature extraction methods.
In detail, a value in an `fc_parameters` dictionary can contain a list of dictionaries. Every dictionary in that list is one feature.
So, for example
```
settings_comprehensive['large_standard_deviation']
```
would trigger the calculation of a whole batch of `large_standard_deviation` features, one for each value of `r` from 0.05 up to 0.95 in steps of 0.05. Let's take just this entry and extract some features:
```
settings_value_count = {'large_standard_deviation': settings_comprehensive['large_standard_deviation']}
settings_value_count
X_tsfresh = extract_features(df, column_id="id", default_fc_parameters=settings_value_count)
X_tsfresh.head()
```
The nice thing is that the parameters are encoded in the feature names, so it is possible to reconstruct
how each feature was calculated:
```
from_columns(X_tsfresh)
```
This means that you should never change a column name; otherwise, the information about how the feature was calculated is lost.
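To see why the naming convention matters, here is a simplified, pure-Python sketch of the kind of parsing `from_columns` performs. The column format `<kind>__<feature>[__<param>_<value>]` is assumed for illustration; the real tsfresh implementation handles more value types:

```python
def parse_feature_columns(columns):
    """Minimal sketch: rebuild a kind_to_fc_parameters-style dict from column names."""
    settings = {}
    for col in columns:
        kind, feature, *param_parts = col.split("__")
        kind_settings = settings.setdefault(kind, {})
        if not param_parts:
            kind_settings[feature] = None  # feature without parameters
        else:
            params = {}
            for part in param_parts:
                name, _, value = part.rpartition("_")  # split '<param>_<value>'
                params[name] = float(value)
            kind_settings.setdefault(feature, []).append(params)
    return settings

cols = ["pressure__length", "temperature__large_standard_deviation__r_0.05"]
print(parse_feature_columns(cols))
```

Renaming a column breaks exactly this parsing step, which is why the settings dictionary can no longer be recovered afterwards.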
## External Compton

### Broad Line Region
```
import jetset
print('tested on jetset',jetset.__version__)
from jetset.jet_model import Jet
my_jet=Jet(name='EC_example',electron_distribution='bkn',beaming_expr='bulk_theta')
my_jet.add_EC_component(['EC_BLR','EC_Disk'],disk_type='BB')
```
The `show_model` method provides, among other things, information about the accretion disk; in this case we use a mono-temperature black body (`BB`):
```
my_jet.show_model()
```
### Changing the disk type
The disk type can be set to a more realistic multi-temperature black body (`MultiBB`). In this case, the `show_model` method also reports physical parameters of the multi-temperature black body accretion disk:
- the Schwarzschild radius (Sw radius)
- the Eddington luminosity (L Edd.)
- the accretion rate (accr_rate)
- the Eddington accretion rate (accr_rate Edd.)
```
my_jet.add_EC_component(['EC_BLR','EC_Disk'],disk_type='MultiBB')
my_jet.set_par('L_Disk',val=1E46)
my_jet.set_par('gmax',val=5E4)
my_jet.set_par('gmin',val=2.)
my_jet.set_par('R_H',val=3E17)
my_jet.set_par('p',val=1.5)
my_jet.set_par('p_1',val=3.2)
my_jet.set_par('R',val=3E15)
my_jet.set_par('B',val=1.5)
my_jet.set_par('z_cosm',val=0.6)
my_jet.set_par('BulkFactor',val=20)
my_jet.set_par('theta',val=1)
my_jet.set_par('gamma_break',val=5E2)
my_jet.set_N_from_nuLnu(nu_src=3E13,nuLnu_src=5E45)
my_jet.set_IC_nu_size(100)
my_jet.show_model()
```
Now that the parameters are set, we evaluate the model and plot the resulting SED:
```
my_jet.eval()
p=my_jet.plot_model(frame='obs')
p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27)
```
### Dusty Torus
```
my_jet.add_EC_component('DT')
my_jet.show_model()
my_jet.eval()
p=my_jet.plot_model()
p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27)
my_jet.add_EC_component('EC_DT')
my_jet.eval()
p=my_jet.plot_model()
p.rescale(y_min=-13.5,y_max=-9.5,x_min=9,x_max=27)
my_jet.save_model('test_EC_model.pkl')
my_jet=Jet.load_model('test_EC_model.pkl')
```
### Changing the external field transformation
The default method is to transform the external photon fields from the disk/BH frame to the frame of the relativistic blob:
```
my_jet.set_external_field_transf('blob')
```
Alternatively, for isotropic fields such as the CMB, or the BLR and DT fields within the BLR and DT radii respectively, it is possible instead to transform the electron distribution, moving the blob to the disk/BH frame:
```
my_jet.set_external_field_transf('disk')
```
### External photon field energy density along the jet
```
import numpy as np
import matplotlib.pyplot as plt

def iso_field_transf(L, R, BulkFactor):
    # blob-frame energy density of an isotropic external radiation field
    beta = 1.0 - 1/(BulkFactor*BulkFactor)
    return L/(4*np.pi*R*R*3E10)*BulkFactor*BulkFactor*(1+((beta**2)/3))

def external_iso_behind_transf(L, R, BulkFactor):
    # blob-frame energy density of an external field left behind by the blob
    beta = 1.0 - 1/(BulkFactor*BulkFactor)
    return L/((4*np.pi*R*R*3E10)*(BulkFactor*BulkFactor*(1+beta)**2))
```
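The two helper functions above implement the standard transformations of an external photon field's energy density into the blob frame, with $c = 3\times 10^{10}\,\mathrm{cm\,s^{-1}}$ hard-coded:

$$U'_{\rm iso} \simeq \frac{L}{4\pi R^2 c}\,\Gamma^2\left(1+\frac{\beta^2}{3}\right), \qquad U'_{\rm behind} \simeq \frac{L}{4\pi R^2 c\,\Gamma^2\,(1+\beta)^2},$$

where the first form holds for an isotropic field surrounding the blob and the second for a field originating behind a blob that moves away from it. (Note the helpers compute $\beta$ as $1-1/\Gamma^2$, which is actually $\beta^2$; for the large bulk factors used here the difference is negligible.)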
EC seed photon fields, in the Disk rest frame
```
%matplotlib inline
fig = plt.figure(figsize=(8,6))
ax=fig.subplots(1)
N=50
G=1
R_range=np.logspace(13,25,N)
y=np.zeros((8,N))
my_jet.set_verbosity(0)
my_jet.set_par('R_BLR_in',1E17)
my_jet.set_par('R_BLR_out',1.1E17)
for ID,R in enumerate(R_range):
my_jet.set_par('R_H',val=R)
my_jet.set_external_fields()
my_jet.energetic_report(verbose=False)
y[1,ID]=my_jet.energetic_dict['U_BLR_DRF']
y[0,ID]=my_jet.energetic_dict['U_Disk_DRF']
y[2,ID]=my_jet.energetic_dict['U_DT_DRF']
y[4,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,my_jet.parameters.R_DT.val,G)
y[3,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,my_jet.parameters.R_BLR_in.val,G)
y[5,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,R_range,G)
y[6,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,R_range,G)
y[7,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative,R_range,G)
ax.plot(np.log10(R_range),np.log10(y[0,:]),label='Disk')
ax.plot(np.log10(R_range),np.log10(y[1,:]),'-',label='BLR')
ax.plot(np.log10(R_range),np.log10(y[2,:]),label='DT')
ax.plot(np.log10(R_range),np.log10(y[3,:]),'--',label='BLR uniform')
ax.plot(np.log10(R_range),np.log10(y[4,:]),'--',label='DT uniform')
ax.plot(np.log10(R_range),np.log10(y[5,:]),'--',label='BLR 1/R2')
ax.plot(np.log10(R_range),np.log10(y[6,:]),'--',label='DT 1/R2')
ax.plot(np.log10(R_range),np.log10(y[7,:]),'--',label='Disk 1/R2')
ax.set_xlabel('log(R_H) cm')
ax.set_ylabel('log(Uph) erg cm-3')
ax.legend()
%matplotlib inline
fig = plt.figure(figsize=(8,6))
ax=fig.subplots(1)
L_Disk=1E45
N=50
G=my_jet.parameters.BulkFactor.val
R_range=np.logspace(15,22,N)
y=np.zeros((8,N))
my_jet.set_par('L_Disk',val=L_Disk)
my_jet._blob.theta_n_int=100
my_jet._blob.l_n_int=100
my_jet._blob.theta_n_int=100
my_jet._blob.l_n_int=100
for ID,R in enumerate(R_range):
my_jet.set_par('R_H',val=R)
my_jet.set_par('R_BLR_in',1E17*(L_Disk/1E45)**.5)
my_jet.set_par('R_BLR_out',1.1E17*(L_Disk/1E45)**.5)
my_jet.set_par('R_DT',2.5E18*(L_Disk/1E45)**.5)
my_jet.set_external_fields()
my_jet.energetic_report(verbose=False)
y[1,ID]=my_jet.energetic_dict['U_BLR']
y[0,ID]=my_jet.energetic_dict['U_Disk']
y[2,ID]=my_jet.energetic_dict['U_DT']
y[4,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,my_jet.parameters.R_DT.val,G)
y[3,:]=iso_field_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,my_jet.parameters.R_BLR_in.val,G)
y[5,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_BLR.val,R_range,G)
y[6,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative*my_jet.parameters.tau_DT.val,R_range,G)
y[7,:]=external_iso_behind_transf(my_jet._blob.L_Disk_radiative,R_range,G)
ax.plot(np.log10(R_range),np.log10(y[0,:]),label='Disk')
ax.plot(np.log10(R_range),np.log10(y[1,:]),'-',label='BLR')
ax.plot(np.log10(R_range),np.log10(y[2,:]),'-',label='DT')
ax.plot(np.log10(R_range),np.log10(y[3,:]),'--',label='BLR uniform')
ax.plot(np.log10(R_range),np.log10(y[4,:]),'--',label='DT uniform')
ax.plot(np.log10(R_range),np.log10(y[5,:]),'--',label='BLR 1/R2')
ax.plot(np.log10(R_range),np.log10(y[6,:]),'--',label='DT 1/R2')
ax.plot(np.log10(R_range),np.log10(y[7,:]),'--',label='Disk 1/R2')
ax.axvline(np.log10( my_jet.parameters.R_DT.val ))
ax.axvline(np.log10( my_jet.parameters.R_BLR_out.val))
ax.set_xlabel('log(R_H) cm')
ax.set_ylabel('log(Uph) erg cm-3')
ax.legend()
```
### IC against the CMB
```
my_jet=Jet(name='test_equipartition',electron_distribution='lppl',beaming_expr='bulk_theta')
my_jet.set_par('R',val=1E21)
my_jet.set_par('z_cosm',val= 0.651)
my_jet.set_par('B',val=2E-5)
my_jet.set_par('gmin',val=50)
my_jet.set_par('gamma0_log_parab',val=35.0E3)
my_jet.set_par('gmax',val=30E5)
my_jet.set_par('theta',val=12.0)
my_jet.set_par('BulkFactor',val=3.5)
my_jet.set_par('s',val=2.58)
my_jet.set_par('r',val=0.42)
my_jet.set_N_from_nuFnu(5E-15,1E12)
my_jet.add_EC_component('EC_CMB')
```
We can now compare the beaming patterns of the EC emission of the CMB under the two transformations, and see that they differ.
This is very important in the case of radio galaxies: the `disk` transformation is the one to use for radio galaxies or
misaligned AGNs, and gives more accurate results.
Be aware, however, that this works only for isotropic external fields, such as the CMB, or the BLR and DT
seed photons within the BLR and dusty torus radii, respectively:
```
from jetset.plot_sedfit import PlotSED
p=PlotSED()
my_jet.set_external_field_transf('blob')
c= ['k', 'g', 'r', 'c']
for ID,theta in enumerate(np.linspace(2,20,4)):
my_jet.parameters.theta.val=theta
my_jet.eval()
my_jet.plot_model(plot_obj=p,comp='Sum',label='blob, theta=%2.2f'%theta,line_style='--',color=c[ID])
my_jet.set_external_field_transf('disk')
for ID,theta in enumerate(np.linspace(2,20,4)):
my_jet.parameters.theta.val=theta
my_jet.eval()
my_jet.plot_model(plot_obj=p,comp='Sum',label='disk, theta=%2.2f'%theta,line_style='',color=c[ID])
p.rescale(y_min=-17.5,y_max=-12.5,x_max=28)
```
## Equipartition
It is also possible to set our jet at equipartition. This is achieved not with an analytical approximation, but by numerically finding the equipartition value of `B` over a grid.
We have to provide the observed flux (`nuFnu_obs`) at a given observed frequency (`nu_obs`), the minimum value of B (`B_min`), and the number of grid points (`N_pts`):
```
my_jet.parameters.theta.val=12
B_min,b_grid,U_B,U_e=my_jet.set_B_eq(nuFnu_obs=5E-15,nu_obs=1E12,B_min=1E-9,N_pts=50,plot=True)
my_jet.show_pars()
my_jet.eval()
p=my_jet.plot_model()
p.rescale(y_min=-16.5,y_max=-13.5,x_max=28)
```
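The grid search described above can be sketched with toy quantities. This is only an illustration of the approach, not jetset's internal implementation, and the $U_e(B)$ scaling below is assumed purely for demonstration:

```python
import numpy as np

# Scan a grid of B values and pick the one where the magnetic energy
# density U_B = B^2 / (8 pi) is closest to the electron energy density U_e.
B_grid = np.logspace(-9, 0, 50)          # field strengths in Gauss
U_B = B_grid ** 2 / (8 * np.pi)          # erg cm^-3
U_e = 1e-3 / B_grid                      # toy scaling: U_e falls as B grows
i_eq = int(np.argmin(np.abs(np.log10(U_B) - np.log10(U_e))))
B_eq = B_grid[i_eq]
print(f"equipartition field ~ {B_eq:.3g} G")
```

In jetset, the electron density is instead re-derived at each grid point so that the model still reproduces the observed flux, which is why `set_B_eq` needs `nuFnu_obs` and `nu_obs`.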
```
from sklearn.linear_model import LogisticRegression
import csv
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn import utils
%matplotlib inline
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
df = pd.read_csv('data_1000.csv')
# for i, row in df.iterrows():
# integer = int(row['correct_answ'])
# #result = row['cosine_sim']/2 + 0.5
# #df.set_value(i,'cosine_sim', result)
# df.set_value(i,'correct_answ', integer)
df.head(50)
df.boxplot(by='correct_answ', column=['bleu_score', 'levenstein_sim', 'cosine_sim', 'jaccard_sim'],
grid=True, figsize=(15,15))
df.boxplot(by='hof_answ', column=['bleu_score', 'levenstein_sim', 'cosine_sim', 'jaccard_sim'],
grid=True, figsize=(15,15))
similarities = [column for column in df.columns if 'sim' in column or 'score' in column]
df[similarities].hist(bins=50, figsize=(20,15))
plt.show()
df['correct_answ'].value_counts()
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(14,10))
df.plot(kind="scatter", x="bleu_score", y="correct_answ",alpha=0.2, ax=axes[0,0])
df.plot(kind="scatter", x="levenstein_sim", y="correct_answ",alpha=0.2, ax=axes[0,1])
df.plot(kind="scatter", x="jaccard_sim", y="correct_answ",alpha=0.2, ax=axes[1,0])
df.plot(kind="scatter", x="cosine_sim", y="correct_answ",alpha=0.2, ax=axes[1,1])
scatter_matrix(df[similarities], figsize=(14, 10))
corr_matrix = df.corr()
corr_matrix["correct_answ"].sort_values(ascending=False)
lab_enc = preprocessing.LabelEncoder()
encoded = lab_enc.fit_transform(np.array(df['cosine_sim']))
clf = LogisticRegression(random_state=0).fit(np.array(df[similarities]), np.array(df['correct_answ']).reshape(-1,1))
predicted = clf.predict(np.array(df.loc[:,similarities]))
print(predicted)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df[similarities]), np.array(df['correct_answ']).reshape(-1,1))
random_for = np.around(forest_reg.predict(df[similarities]))
df['forest'] = random_for
print(df['correct_answ'].sub(random_for, axis=0).value_counts())
forest_single = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df['cosine_sim']).reshape(-1,1), np.array(df['correct_answ']).reshape(-1,1))
random_single_val = np.around(forest_reg.predict(np.array(df['cosine_sim']).reshape(-1,1)))
print(forest_reg.predict(np.array(df['cosine_sim']).reshape(-1,1)))
print('-----------------')
print(df['correct_answ'].sub(random_single_val, axis=0).value_counts())
forest_single = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df['levenstein_sim']).reshape(-1,1), np.array(df['correct_answ']).reshape(-1,1))
random_single_val = np.around(forest_reg.predict(np.array(df['levenstein_sim']).reshape(-1,1)))
print(forest_reg.predict(np.array(df['levenstein_sim']).reshape(-1,1)))
print('-----------------')
print(df['correct_answ'].sub(random_single_val, axis=0).value_counts())
forest_single = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df['jaccard_sim']).reshape(-1,1), np.array(df['correct_answ']).reshape(-1,1))
random_single_val = np.around(forest_reg.predict(np.array(df['jaccard_sim']).reshape(-1,1)))
print(forest_reg.predict(np.array(df['jaccard_sim']).reshape(-1,1)))
print('-----------------')
print(df['correct_answ'].sub(random_single_val, axis=0).value_counts())
forest_single = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df['bleu_score']).reshape(-1,1), np.array(df['correct_answ']).reshape(-1,1))
random_single_val = np.around(forest_reg.predict(np.array(df['bleu_score']).reshape(-1,1)))
print(df['correct_answ'].sub(random_single_val, axis=0).value_counts())
forest_reg = RandomForestRegressor(n_estimators=100, random_state=42)
forest_reg.fit(np.array(df[['cosine_sim', 'levenstein_sim']]), np.array(df['correct_answ']).reshape(-1,1))
random_for = np.around(forest_reg.predict(df[['cosine_sim','levenstein_sim']]))
print(df['correct_answ'].sub(random_for, axis=0).value_counts())
# NOTE: `ols` and `X_test` are not defined anywhere in this notebook; the line
# below is commented out so the rest of the cell can run.
# plt.plot(X_test, ols.coef_ * X_test + ols.intercept_, linewidth=1)
plt.axhline(.5, color='.5')
plt.ylabel('y')
plt.xlabel('X')
plt.xticks(range(-5, 10))
plt.yticks([0, 0.5, 1])
plt.ylim(-.25, 1.25)
plt.xlim(-4, 10)
plt.legend(('Logistic Regression Model', 'Linear Regression Model'),
loc="lower right", fontsize='small')
plt.tight_layout()
plt.show()
```
<a href="https://colab.research.google.com/github/dafrie/fin-disclosures-nlp/blob/master/Multi_class_classification_with_Transformers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Multi-Class classification with Transformers
# Setup
```
# Load Google drive where the data and models are stored
from google.colab import drive
drive.mount('/content/drive')
############################## CONFIG ##############################
TASK = "multi-class" #@param ["multi-class"]
# Set to true if fine-tuning should be enabled. Else it loads fine-tuned model
ENABLE_FINE_TUNING = True #@param {type:"boolean"}
# See list here: https://huggingface.co/models
TRANSFORMER_MODEL_NAME = 'distilbert-base-cased' #@param ["bert-base-uncased", "bert-large-uncased", "albert-base-v2", "albert-large-v2", "albert-xlarge-v2", "albert-xxlarge-v2", "roberta-base", "roberta-large", "distilbert-base-uncased", "distilbert-base-cased"]
# The DataLoader needs to know our batch size for training. The BERT authors recommend 16 or 32, but larger values may exceed available GPU memory.
BATCH_SIZE = 16 #@param ["8", "16", "32"] {type:"raw"}
MAX_TOKEN_SIZE = 256 #@param [512,256,128] {type:"raw"}
EPOCHS = 4 # @param [1,2,3,4] {type:"raw"}
LEARNING_RATE = 2e-5
WEIGHT_DECAY = 0.0 # TODO: Necessary?
# Evaluation metric config. See for context: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html
AVERAGING_STRATEGY = 'macro' #@param ["micro", "macro", "weighted"]
# To make the notebook reproducible (not guaranteed for pytorch on different releases/platforms!)
SEED_VALUE = 0
# Enable comet-ml logging
DISABLE_COMET_ML = True #@param {type:"boolean"}
####################################################################
full_task_name = TASK
parameters = {
"task": TASK,
"enable_fine_tuning": ENABLE_FINE_TUNING,
"model_type": "transformer",
"model_name": TRANSFORMER_MODEL_NAME,
"batch_size": BATCH_SIZE,
"max_token_size": MAX_TOKEN_SIZE,
"epochs": EPOCHS,
"learning_rate": LEARNING_RATE,
"weight_decay": WEIGHT_DECAY,
"seed_value": SEED_VALUE,
}
# TODO: This could then be used to send to cometml to keep track of experiments...
```
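The averaging strategies selectable above combine per-class F1 scores differently: `micro` pools all true/false positives globally, `macro` takes an unweighted mean over classes, and `weighted` weights each class by its support. A quick comparison on a toy multi-class prediction (assumes scikit-learn is installed):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 2]
y_pred = [0, 0, 1, 1, 2]
# micro: global TP/FP/FN counts; macro: unweighted class mean;
# weighted: class mean weighted by support
for avg in ("micro", "macro", "weighted"):
    print(avg, round(f1_score(y_true, y_pred, average=avg), 3))
# micro 0.8, macro 0.822, weighted 0.813
```

With imbalanced classes, as is common in disclosure data, `macro` penalizes poor performance on rare classes the most, which is why it is the default here.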
```
# Install transformers library + datasets helper
!pip install transformers --quiet
!pip install datasets --quiet
!pip install optuna --quiet
import os
import pandas as pd
import numpy as np
import torch
import textwrap
import random
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from transformers import logging, AutoTokenizer
model_id = TRANSFORMER_MODEL_NAME
print(f"Selected {TRANSFORMER_MODEL_NAME} as transformer model for the task...")
# Setup the models path
saved_models_path = "/content/drive/My Drive/{YOUR_PROJECT_HERE}/models/finetuned_models/"
expected_model_path = os.path.join(saved_models_path, TASK, model_id)
has_model_path = os.path.isdir(expected_model_path)
model_checkpoint = TRANSFORMER_MODEL_NAME if ENABLE_FINE_TUNING else expected_model_path
# Check if model exists
if not ENABLE_FINE_TUNING:
assert has_model_path, f"No fine-tuned model found at '{expected_model_path}', you need first to fine-tune a model from a pretrained checkpoint by enabling the 'ENABLE_FINE_TUNING' flag!"
```
# Data loading
```
# Note: Uses https://huggingface.co/docs/datasets/package_reference/main_classes.html
from datasets import DatasetDict, Dataset, load_dataset, Sequence, ClassLabel, Features, Value, concatenate_datasets
# TODO: Adapt
doc_column = 'text' # Contains the text
label_column = 'cro' # Needs to be an integer that represents the respective class
# TODO: Load train/test data
df_train = pd.read_pickle("/content/drive/My Drive/fin-disclosures-nlp/data/labels/Firm_AnnualReport_Labels_Training.pkl")
df_test = pd.read_pickle("/content/drive/My Drive/fin-disclosures-nlp/data/labels/Firm_AnnualReport_Labels_Test.pkl")
df_train = df_train.query(f"{label_column} == {label_column}")
df_test = df_test.query(f"{label_column} == {label_column}")
category_labels = df_train[label_column].unique().tolist()
no_of_categories = len(category_labels)
# TODO: Not sure if this step is necessary, but if you have the category in text and not integers
# This assumes that there is t
df_train[label_column] = df_train[label_column].astype('category').cat.codes.to_numpy(copy=True)
df_test[label_column] = df_test[label_column].astype('category').cat.codes.to_numpy(copy=True)
train_dataset = pd.DataFrame(df_train[[doc_column, label_column]].to_numpy(), columns=['text', 'labels'])
test_dataset = pd.DataFrame(df_test[[doc_column, label_column]].to_numpy(), columns=['text', 'labels'])
features = Features({'text': Value('string'), 'labels': ClassLabel(names=category_labels, num_classes=no_of_categories)})
# Setup Hugginface Dataset
train_dataset = Dataset.from_pandas(train_dataset, features=features)
test_dataset = Dataset.from_pandas(test_dataset, features=features)
dataset = DatasetDict({ 'train': train_dataset, 'test': test_dataset })
```
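The `astype('category').cat.codes` conversion used above maps string labels to stable integer codes (alphabetical by default). A standalone illustration with hypothetical label values:

```python
import pandas as pd

df = pd.DataFrame({"cro": ["physical", "transition", "physical", "liability"]})
labels = df["cro"].astype("category")
# categories are sorted alphabetically, so the code -> name mapping is stable
print(dict(enumerate(labels.cat.categories)))  # {0: 'liability', 1: 'physical', 2: 'transition'}
print(labels.cat.codes.tolist())               # [1, 2, 1, 0]
```

Note that train and test sets must be encoded consistently; encoding each frame independently is only safe if both contain the same set of label values.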
## Tokenization
```
# Load the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
# Encode the whole dataset
def encode(data, max_len=MAX_TOKEN_SIZE):
return tokenizer(data["text"], truncation=True, padding='max_length', max_length=max_len)
dataset = dataset.map(encode, batched=True)
```
## Validation set preparation
```
from torch.utils.data import TensorDataset, random_split, DataLoader, RandomSampler, SequentialSampler
# See here for this workaround: https://github.com/huggingface/datasets/issues/767
dataset['train'], dataset['valid'] = dataset['train'].train_test_split(test_size=0.1, seed=SEED_VALUE).values()
dataset['train'].features
```
# Model Setup and Training
```
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, roc_auc_score, matthews_corrcoef
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
from transformers.trainer_pt_utils import nested_detach
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss
from scipy.special import softmax
# Check if GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Sets the evaluation metric depending on the task
# TODO: Set your evaluation metric! Needs to be also in the provided "compute_metrics" function below
metric_name = "matthews_correlation"
# The training arguments
args = TrainingArguments(
output_dir=f"/content/models/{TASK}/{model_id}",
evaluation_strategy = "epoch",
learning_rate = LEARNING_RATE,
per_device_train_batch_size = BATCH_SIZE,
per_device_eval_batch_size = BATCH_SIZE,
num_train_epochs = EPOCHS,
weight_decay = WEIGHT_DECAY,
load_best_model_at_end = True,
metric_for_best_model = metric_name,
greater_is_better = True,
seed = SEED_VALUE,
)
def model_init():
"""Model initialization. Disabels logging temporarily to avoid spamming messages and loads the pretrained or fine-tuned model"""
logging.set_verbosity_error() # Workaround to hide warnings that the model weights are randomly set and fine-tuning is necessary (which we do later...)
model = AutoModelForSequenceClassification.from_pretrained(
model_checkpoint, # Load from model checkpoint, i.e. the pretrained model or a previously saved fine-tuned model
num_labels = no_of_categories, # The number of different categories/labels
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.)
)
logging.set_verbosity_warning()
return model
def compute_metrics(pred):
"""Computes classification task metric"""
labels = pred.label_ids
preds = pred.predictions
# Convert to probabilities
preds_prob = softmax(preds, axis=1)
# Convert to 0/1, i.e. set to 1 the class with the highest logit
preds = preds.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average=AVERAGING_STRATEGY)
acc = accuracy_score(labels, preds)
matthews_corr = matthews_corrcoef(labels, preds)
return {
'f1': f1,
'precision': precision,
'recall': recall,
'matthews_correlation': matthews_corr
}
class CroTrainer(Trainer):
# Note: If you need to do extra customization (like to alter the loss computation by adding weights), this can be done here
pass
trainer = CroTrainer(
model_init=model_init,
args=args,
train_dataset=dataset["train"],
eval_dataset=dataset["valid"],
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
# Only train if enabled, else we just want to load the model
if ENABLE_FINE_TUNING:
trainer.train()
trainer.save_model()
eval_metrics = trainer.evaluate()
# experiment.log_metrics(eval_metrics)
predict_result = trainer.predict(dataset['test'])
from sklearn.metrics import multilabel_confusion_matrix, classification_report
from scipy.special import softmax
preds = predict_result.predictions
labels = predict_result.label_ids
test_roc_auc = roc_auc_score(labels, preds, average=AVERAGING_STRATEGY)
print("Test ROC AuC: ", test_roc_auc)
preds_prob = softmax(preds, axis=1)
threshold = 0.5
preds_bool = (preds_prob > threshold)
label_list = test_dataset.features['labels'].names
multilabel_confusion_matrix(labels, preds_bool)
print(classification_report(labels, preds_bool, target_names=label_list))
```
```
import numpy as np
import pandas as pd
import warnings
import os
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.layers import Bidirectional
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Flatten
from tensorflow.compat.v1.keras.layers import TimeDistributed
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import MaxPooling1D
from tensorflow.keras.layers import ConvLSTM2D
warnings.simplefilter('ignore')
countryName = 'Russia'
nFeatures = 1
nDaysMin = 10
k = 7
nValid = 10
nTest = 10
dataDir = os.path.join('C:\\Users\\AMC\\Desktop\\Roshi\\Data')
confirmedFilename = 'confirmed_july.csv'
deathsFilename = 'deaths_july.csv'
recoveredFilename = 'recovered_july.csv'
confirmed = pd.read_csv(confirmedFilename)
confirmed.head()
# split a univariate sequence into samples
def split_sequence(sequence, n_steps, k):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix + k >= len(sequence):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:end_ix+k]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
def meanAbsolutePercentageError(yTrueList, yPredList):
absErrorList = [np.abs(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
absPcErrorList = [absError/yTrue for absError, yTrue in zip(absErrorList, yTrueList)]
MAPE = 100*np.mean(absPcErrorList)
return MAPE
def meanAbsolutePercentageError_kDay(yTrueListList, yPredListList):
# Store true and predictions for day 1 in a list, day 2 in a list and so on
# Keep each list of these lists in a respective dict with key as day #
yTrueForDayK = {}
yPredForDayK = {}
for i in range(len(yTrueListList[0])):
yTrueForDayK[i] = []
yPredForDayK[i] = []
for yTrueList, yPredList in zip(yTrueListList, yPredListList):
for i in range(len(yTrueList)):
yTrueForDayK[i].append(yTrueList[i])
yPredForDayK[i].append(yPredList[i])
# Get MAPE for each day in a list
MAPEList = []
for i in yTrueForDayK.keys():
MAPEList.append(meanAbsolutePercentageError(yTrueForDayK[i], yPredForDayK[i]))
return np.mean(MAPEList)
def meanForecastError(yTrueList, yPredList):
forecastErrors = [yTrue - yPred for yTrue, yPred in zip(yTrueList, yPredList)]
MFE = np.mean(forecastErrors)
return MFE
def meanAbsoluteError(yTrueList, yPredList):
absErrorList = [np.abs(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
return np.mean(absErrorList)
def meanSquaredError(yTrueList, yPredList):
sqErrorList = [np.square(yTrue - yPred) for yTrue, yPred in zip(yTrueList, yPredList)]
return np.mean(sqErrorList)
def rootMeanSquaredError(yTrueList, yPredList):
return np.sqrt(meanSquaredError(yTrueList, yPredList))
def medianSymmetricAccuracy(yTrueList, yPredList):
'''https://helda.helsinki.fi//bitstream/handle/10138/312261/2017SW001669.pdf?sequence=1'''
logAccRatioList = [np.abs(np.log(yPred/yTrue)) for yTrue, yPred in zip(yTrueList, yPredList)]
MdSA = 100*(np.exp(np.median(logAccRatioList))-1)
return MdSA
def medianSymmetricAccuracy_kDay(yTrueListList, yPredListList):
# Store true and predictions for day 1 in a list, day 2 in a list and so on
# Keep each list of these lists in a respective dict with key as day #
yTrueForDayK = {}
yPredForDayK = {}
for i in range(len(yTrueListList[0])):
yTrueForDayK[i] = []
yPredForDayK[i] = []
for yTrueList, yPredList in zip(yTrueListList, yPredListList):
for i in range(len(yTrueList)):
yTrueForDayK[i].append(yTrueList[i])
yPredForDayK[i].append(yPredList[i])
# Get MdSA for each day in a list
MdSAList = []
for i in yTrueForDayK.keys():
MdSAList.append(medianSymmetricAccuracy(yTrueForDayK[i], yPredForDayK[i]))
return(np.mean(MdSAList))
# Function to get all three frames for a given country
def getCountryCovidFrDict(countryName):
countryCovidFrDict = {}
for key in covidFrDict.keys():
dataFr = covidFrDict[key]
countryCovidFrDict[key] = dataFr[dataFr['Country/Region'] == countryName]
return countryCovidFrDict
# Load all 3 csv files
covidFrDict = {}
covidFrDict['confirmed'] = pd.read_csv(confirmedFilename)
covidFrDict['deaths'] = pd.read_csv(deathsFilename)
covidFrDict['recovered'] = pd.read_csv(recoveredFilename)
countryCovidFrDict = getCountryCovidFrDict(countryName)
# date list
colNamesList = list(countryCovidFrDict['confirmed'])
dateList = [colName for colName in colNamesList if '/20' in colName]
dataList = [countryCovidFrDict['confirmed'][date].iloc[0] for date in dateList]
dataDict = dict(zip(dateList, dataList))
# Only take time series from where the cases were >100
daysSince = 100
nCasesGreaterDaysSinceList = []
datesGreaterDaysSinceList = []
for key in dataDict.keys():
if dataDict[key] > daysSince:
datesGreaterDaysSinceList.append(key)
nCasesGreaterDaysSinceList.append(dataDict[key])
XList, yList = split_sequence(nCasesGreaterDaysSinceList, nDaysMin, k)
XTrainList = XList[0:len(XList)-(nValid + nTest)]
XValidList = XList[len(XList)-(nValid+nTest):len(XList)-(nTest)]
XTestList = XList[-nTest:]
yTrain = yList[0:len(XList)-(nValid + nTest)]
yValid = yList[len(XList)-(nValid+nTest):len(XList)-(nTest)]
yTest = yList[-nTest:]
print('Total size of data points for LSTM:', len(yList))
print('Size of training set:', len(yTrain))
print('Size of validation set:', len(yValid))
print('Size of test set:', len(yTest))
# Convert the list to matrix
XTrain = XTrainList.reshape((XTrainList.shape[0], XTrainList.shape[1], nFeatures))
XValid = XValidList.reshape((XValidList.shape[0], XValidList.shape[1], nFeatures))
XTest = XTestList.reshape((XTestList.shape[0], XTestList.shape[1], nFeatures))
```
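The sliding-window split performed by `split_sequence` can be sanity-checked on a toy sequence (the function is repeated here so the snippet is self-contained):

```python
import numpy as np

def split_sequence(sequence, n_steps, k):
    # windows of n_steps inputs -> the following k values as targets
    X, y = [], []
    for i in range(len(sequence)):
        end_ix = i + n_steps
        if end_ix + k >= len(sequence):  # stop when targets would run past the end
            break
        X.append(sequence[i:end_ix])
        y.append(sequence[end_ix:end_ix + k])
    return np.array(X), np.array(y)

X, y = split_sequence(list(range(20)), n_steps=10, k=7)
print(X.shape, y.shape)  # 3 windows of 10 inputs, each with 7 targets
```

The `>=` in the stopping condition discards one extra window at the end; with `nDaysMin = 10` and `k = 7` this matches the sample counts printed by the notebook.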
# Vanilla LSTM
```
nNeurons = 100 # number of neurons in the LSTM layer
nFeatures = 1 # number of input features
bestValidMAPE = 100 # initialize best validation MAPE to a large value
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
# define model
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
history = model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
history = model.fit(XTrain, yTrain[:,0], validation_data = (XValid, yValid), epochs=1000, verbose=0)
plt.figure(figsize=(8,5))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Valid'], loc='upper left')
plt.show()
model.summary()
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
MSE = meanSquaredError(yTest, yPredListList)
print('Test MSE:', MSE)
RMSE = rootMeanSquaredError(yTest, yPredListList)
print('Test RMSE:', RMSE)
yPredVanilla = yPredListList
```
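The recursive multi-step forecasting loop used above can be illustrated without a trained network; a hypothetical `predict` stub (a naive last-value forecaster) stands in for `model.predict`:

```python
import numpy as np

def predict(window):
    # stand-in for model.predict: naive "last observed value" forecast
    return window[:, -1:, 0]

X = np.arange(10, dtype=float).reshape(1, 10, 1)  # one window: 10 steps, 1 feature
preds = []
for _ in range(3):                           # forecast 3 steps recursively
    y = predict(X)                           # shape (1, 1)
    preds.append(float(y[0, 0]))
    X = np.delete(X, 0, axis=1)              # drop the oldest step
    X = np.append(X, y[:, :, None], axis=1)  # append the forecast as newest step
print(preds)
```

Each forecast is fed back in as if it were an observation, which is exactly why errors compound over the `k`-day horizon and why the notebook averages the MAPE per forecast day.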
# Stacked LSTM
```
nNeurons = 50
nFeatures = 1
bestValidMAPE = 100
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', return_sequences=True, input_shape=(nDaysMin, nFeatures)))
model.add(LSTM(nNeurons, activation='relu'))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(LSTM(nNeurons, activation='relu', return_sequences=True, input_shape=(nDaysMin, nFeatures)))
model.add(LSTM(nNeurons, activation='relu'))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
model.summary()
# fit model
history = model.fit(XTrain, yTrain[:,0], validation_data = (XValid, yValid), epochs=1000, verbose=0)
plt.figure(figsize=(8,5))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Valid'], loc='upper left')
plt.show()
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
MSE = meanSquaredError(yTest, yPredListList)
print('Test MSE:', MSE)
RMSE = rootMeanSquaredError(yTest, yPredListList)
print('Test RMSE:', RMSE)
yPredStacked = yPredListList
model.summary()
```
# Bi-directional LSTM
```
# define model
nNeurons = 50
nFeatures = 1
bestValidMAPE = 100
bestSeed = -1
for seed in range(100):
tf.random.set_seed(seed=seed)
model = Sequential()
model.add(Bidirectional(LSTM(nNeurons, activation='relu'), input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
# fit model
history = model.fit(XTrain, yTrain[:,0], epochs=1000, verbose=0)
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XValidNew = XValid.copy()
for day in range(k):
yPred = model.predict(np.float32(XValidNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XValidNew = np.delete(XValidNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XValidNew = np.append(XValidNew, yPred, axis=1)
# for yTrue, yPred in zip(yTest, yPredList):
# print(yTrue, yPred)
MAPE = meanAbsolutePercentageError_kDay(yValid, yPredListList)
print(seed, MAPE)
if MAPE < bestValidMAPE:
print('Updating best MAPE to {}...'.format(MAPE))
bestValidMAPE = MAPE
print('Updating best seed to {}...'.format(seed))
bestSeed = seed
# define model
print('Training model with best seed...')
tf.random.set_seed(seed=bestSeed)
model = Sequential()
model.add(Bidirectional(LSTM(nNeurons, activation='relu'), input_shape=(nDaysMin, nFeatures)))
model.add(Dense(1))
opt = Adam(learning_rate=0.1)
model.compile(optimizer=opt, loss='mse')
model.summary()
# fit model
history = model.fit(XTrain, yTrain[:,0], validation_data = (XValid, yValid), epochs=1000, verbose=0)
plt.figure(figsize=(8,5))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Valid'], loc='upper left')
plt.show()
yPredListList = []
for day in range(nTest):
yPredListList.append([])
XTestNew = XTest.copy()
for day in range(k):
yPred = model.predict(np.float32(XTestNew), verbose=0)
for i in range(len(yPred)):
yPredListList[i].append(yPred[i][0])
XTestNew = np.delete(XTestNew, 0, axis=1)
yPred = np.expand_dims(yPred, 2)
XTestNew = np.append(XTestNew, yPred, axis=1)
MAPE = meanAbsolutePercentageError_kDay(yTest, yPredListList)
print('Test MAPE:', MAPE)
MdSA = medianSymmetricAccuracy_kDay(yTest, yPredListList)
print('Test MdSA:', MdSA)
MSE = meanSquaredError(yTest, yPredListList)
print('Test MSE:', MSE)
RMSE = rootMeanSquaredError(yTest, yPredListList)
print('Test RMSE:', RMSE)
yPredBidirectional = yPredListList
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib.dates import DateFormatter
#from statsmodels.tsa.statespace.sarimax import SARIMAX
# Format y tick labels
def y_fmt(y, pos):
decades = [1e9, 1e6, 1e3, 1e0, 1e-3, 1e-6, 1e-9 ]
suffix = ["G", "M", "k", "" , "m" , "u", "n" ]
if y == 0:
return str(0)
for i, d in enumerate(decades):
if np.abs(y) >=d:
val = y/float(d)
signf = len(str(val).split(".")[1])
if signf == 0:
return '{val:d} {suffix}'.format(val=int(val), suffix=suffix[i])
else:
if signf == 1:
if str(val).split(".")[1] == "0":
return '{val:d} {suffix}'.format(val=int(round(val)), suffix=suffix[i])
tx = "{"+"val:.{signf}f".format(signf = signf) +"} {suffix}"
return tx.format(val=val, suffix=suffix[i])
#return y
return y
plt.figure(figsize=(10,10))
datesForPlottingList = datesGreaterDaysSinceList[-k:]
groundTruthList = nCasesGreaterDaysSinceList[-k:]
plt.ylabel('Number of confirmed cases for Russia', fontsize=20)
plt.plot(datesForPlottingList, groundTruthList, '-o', linewidth=3, label='Actual confirmed numbers');
plt.plot(datesForPlottingList, yPredVanilla[-1], '-o', linewidth=3, label='Vanilla LSTM predictions');
plt.plot(datesForPlottingList, yPredStacked[-1], '-o', linewidth=3, label='Stacked LSTM predictions');
plt.plot(datesForPlottingList, yPredBidirectional[-1], '-o', linewidth=3, label='Bidirectional LSTM predictions');
plt.xlabel('Date', fontsize=20);
plt.legend(fontsize=14);
plt.xticks(fontsize=16);
plt.yticks(fontsize=16);
ax = plt.gca()
ax.yaxis.set_major_formatter(FuncFormatter(y_fmt))
#date_form = DateFormatter("%d-%m")
#ax.xaxis.set_major_formatter(date_form)
# plt.grid(axis='y')
plt.savefig(os.path.join('Plots_10days_k7', 'predictions_{}.png'.format(countryName)), dpi=400)
plt.savefig(os.path.join('Plots_10days_k7', 'predictions_{}.pdf'.format(countryName)), dpi=400)
datesForPlottingList
groundTruthList
yPredVanilla
# assemble the last-window predictions plotted above
allvalues = {
    'Date': ['7/22/20', '7/23/20', '7/24/20', '7/25/20', '7/26/20', '7/27/20', '7/28/20'],
    'Actual': groundTruthList,
    'Predicted_Vanilla': yPredVanilla[-1],
    'Predicted_Stacked': yPredStacked[-1],
    'Predicted_BiLSTM': yPredBidirectional[-1]}
print(allvalues)
pd.DataFrame(allvalues).to_csv('russia_10d_k7_predictions.csv', index=False)
```
| github_jupyter |
[View in Colaboratory](https://colab.research.google.com/github/coleman-word/DevOps-Notebooks/blob/master/Markdown_Guide.ipynb)
Formatting text in Colaboratory: A guide to Colaboratory markdown
===
## What is markdown?
Colaboratory has two types of cells: text and code. The text cells are formatted using a simple markup language called markdown, based on [the original](https://daringfireball.net/projects/markdown/syntax).
## Quick reference
To see the markdown source, double-click a text cell, showing both the markdown source (above) and the rendered version (below). Above the markdown source there is a toolbar to assist editing.
Headers are created using \#. Use multiple \#\#\# for less emphasis. For example:
>\# This is equivalent to an <h1> tag
>\##### This is equivalent to an <h5> tag
To make text **bold** surround it with \*\*two asterisks\*\*. To make text *italic* use a \*single asterisk\* or \_underscore\_. \
_**Bold** inside italics_ and **vice-_versa_** also work. ~~Strikethrough~~ uses \~\~two tildes\~\~ while `monospace` (such as code) uses \`backtick\`.
Blocks are indented with \>, and multiple levels of indentation are indicated by repetition: \>\>\> indents three levels.
Ordered lists are created by typing any number followed by a period at the beginning of a line. Unordered lists are \* or - at the beginning of a line. Lists can be nested by indenting using two spaces for each level of nesting.
[Links](https://research.google.com/colaboratory) are created with \[brackets around the linked text\](and-parentheses-around-the-url.html). Naked URLs, like https://google.com, will automatically be linkified.
Another way to create links is using references, which look like [brackets around the linked text][an-arbitrary-reference-id] and then, later anywhere in the cell on its own line, \[an-arbitrary-reference-id]: followed-by-a-URL.html
A '!' character in front of a link turns it into an inline image link: !\[Alt text]\(link-to-an-image.png).
$\LaTeX$ equations are surrounded by `$`. For example, \$$y = 0.1 x$\$ for an inline equation. Double the `$` to set the contents off on its own centered line.
Horizontal rules are created with three or more hyphens, underscores, or asterisks (\-\-\-, \_\_\_, or \*\*\*) on their own line.
Tables are created using \-\-\- for the boundary between the column header and columns and \| between the columns.
Please see also [GitHub's documentation](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for a similar (but not identical) version of markdown.
## Examples
Examples of markdown text with tags repeated, escaped the second time, to clarify their function:
##### \#\#\#\#\#This text is treated as an <h5> because it has five hashes at the beginning
\**italics*\* and \__italics_\_
**\*\*bold\*\***
\~\~~~strikethrough~~\~\~
\``monospace`\`
No indent
>\>One level of indentation
>>\>\>Two levels of indentation
An ordered list:
1. 1\. One
1. 1\. Two
1. 1\. Three
An unordered list:
* \* One
* \* Two
* \* Three
A naked URL: https://google.com
Linked URL: \[[Colaboratory](https://research.google.com/colaboratory)]\(https://research.google.com/colaboratory)
A linked URL using references:
>\[[Colaboratory][colaboratory-label]]\[colaboratory-label]
>\[colaboratory-label]: https://research.google.com/colaboratory
[colaboratory-label]: https://research.google.com/colaboratory
An inline image: !\[Google's logo](https://www.google.com/images/logos/google_logo_41.png)
>
Equations:
>\$y=x^2\$ $\Rightarrow$ $y=x^2$
>\$e^{i\\pi} + 1 = 0\$ $\Rightarrow$ $e^{i\pi} + 1 = 0$
>\$e^x=\\sum_{i=0}^\\infty \\frac{1}{i!}x^i\$ $\Rightarrow$ $e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$
>\$\\frac{n!}{k!(n-k)!} = {n \\choose k}\$ $\Rightarrow$ $\frac{n!}{k!(n-k)!} = {n \choose k}$
Tables:
>```
First column name | Second column name
--- | ---
Row 1, Col 1 | Row 1, Col 2
Row 2, Col 1 | Row 2, Col 2
```
becomes:
>First column name | Second column name
>--- | ---
>Row 1, Col 1 | Row 1, Col 2
>Row 2, Col 1 | Row 2, Col 2
Horizontal rule done with three dashes (\-\-\-):
---
## Differences between Colaboratory markdown and other markdown dialects
Colaboratory uses [marked.js](https://github.com/chjj/marked) and so is similar but not quite identical to the markdown used by Jupyter and Github.
The biggest differences are that Colaboratory supports (MathJax) $\LaTeX$ equations like Jupyter but, unlike most other markdown dialects, does not allow HTML tags in the markdown.
Smaller differences are that Colaboratory does not support syntax highlighting in code blocks, nor does it support some GitHub additions like emojis and to-do checkboxes.
If HTML must be included in a Colaboratory notebook, see the [%%html magic](/notebooks/basic_features_overview.ipynb#scrollTo=qM4myQGfQboQ).
## Useful references
* [Github markdown basics](https://help.github.com/articles/markdown-basics/)
* [Github flavored markdown](https://help.github.com/articles/github-flavored-markdown/)
* [Original markdown spec: Syntax](http://daringfireball.net/projects/markdown/syntax)
* [Original markdown spec: Basics](http://daringfireball.net/projects/markdown/basics)
* [marked.js library used by Colaboratory](https://github.com/chjj/marked)
* [LaTex mathematics for equations](https://en.wikibooks.org/wiki/LaTeX/Mathematics)
| github_jupyter |
```
# default_exp models.XResNet1dPlus
```
# XResNet1dPlus
> This is a modified version of fastai's XResNet model on GitHub. Changes include:
* API is modified to match the default timeseriesAI's API.
* (Optional) Uber's CoordConv 1d
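The CoordConv idea in 1d amounts to concatenating a normalized coordinate channel to the input before convolving, so filters can condition on where they are along the sequence. A numpy-only sketch of the channel construction (illustrative; the actual layer lives in `tsai.models.layers` and also performs the convolution itself):

```python
import numpy as np

def add_coord_channel(x):
    """x: (batch, channels, length) -> (batch, channels + 1, length)."""
    bs, _, seq_len = x.shape
    coords = np.linspace(-1, 1, seq_len)            # positions scaled to [-1, 1]
    coords = np.broadcast_to(coords, (bs, 1, seq_len))
    return np.concatenate([x, coords], axis=1)

out = add_coord_channel(np.zeros((2, 3, 5)))
print(out.shape)  # (2, 4, 5)
```

The extra channel is identical for every sample; only the convolution weights decide how much positional information to use.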
```
#export
from tsai.imports import *
from tsai.models.layers import *
from tsai.models.utils import *
#export
class XResNet1dPlus(nn.Sequential):
@delegates(ResBlock1dPlus)
def __init__(self, block, expansion, layers, fc_dropout=0.0, c_in=3, n_out=1000, stem_szs=(32,32,64),
widen=1.0, sa=False, act_cls=defaults.activation, ks=3, stride=2, coord=False, **kwargs):
store_attr('block,expansion,act_cls,ks')
if ks % 2 == 0: raise Exception('kernel size has to be odd!')
stem_szs = [c_in, *stem_szs]
stem = [ConvBlock(stem_szs[i], stem_szs[i+1], ks=ks, coord=coord, stride=stride if i==0 else 1,
act=act_cls)
for i in range(3)]
block_szs = [int(o*widen) for o in [64,128,256,512] +[256]*(len(layers)-4)]
block_szs = [64//expansion] + block_szs
blocks = self._make_blocks(layers, block_szs, sa, coord, stride, **kwargs)
backbone = nn.Sequential(*stem, MaxPool(ks=ks, stride=stride, padding=ks//2, ndim=1), *blocks)
head = nn.Sequential(AdaptiveAvgPool(sz=1, ndim=1), Flatten(), nn.Dropout(fc_dropout),
nn.Linear(block_szs[-1]*expansion, n_out))
super().__init__(OrderedDict([('backbone', backbone), ('head', head)]))
self._init_cnn(self)
def _make_blocks(self, layers, block_szs, sa, coord, stride, **kwargs):
return [self._make_layer(ni=block_szs[i], nf=block_szs[i+1], blocks=l, coord=coord,
stride=1 if i==0 else stride, sa=sa and i==len(layers)-4, **kwargs)
for i,l in enumerate(layers)]
def _make_layer(self, ni, nf, blocks, coord, stride, sa, **kwargs):
return nn.Sequential(
*[self.block(self.expansion, ni if i==0 else nf, nf, coord=coord, stride=stride if i==0 else 1,
sa=sa and i==(blocks-1), act_cls=self.act_cls, ks=self.ks, **kwargs)
for i in range(blocks)])
def _init_cnn(self, m):
if getattr(self, 'bias', None) is not None: nn.init.constant_(self.bias, 0)
if isinstance(self, (nn.Conv1d,nn.Conv2d,nn.Conv3d,nn.Linear)): nn.init.kaiming_normal_(self.weight)
for l in m.children(): self._init_cnn(l)
#export
def _xresnetplus(expansion, layers, **kwargs):
return XResNet1dPlus(ResBlock1dPlus, expansion, layers, **kwargs)
#export
@delegates(ResBlock)
def xresnet1d18plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2, 2, 2, 2], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d34plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3, 4, 6, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d50plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 4, 6, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d101plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 4, 23, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d152plus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3, 8, 36, 3], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d18_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2,2,2,2,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d34_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3,4,6,3,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d50_deepplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3,4,6,3,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d18_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [2,2,1,1,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d34_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(1, [3,4,6,3,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
@delegates(ResBlock)
def xresnet1d50_deeperplus (c_in, c_out, act=nn.ReLU, **kwargs): return _xresnetplus(4, [3,4,6,3,1,1,1,1], c_in=c_in, n_out=c_out, act_cls=act, **kwargs)
net = xresnet1d18plus(3, 2, coord=True)
x = torch.rand(32, 3, 50)
net(x)
bs, c_in, seq_len = 2, 4, 32
c_out = 2
x = torch.rand(bs, c_in, seq_len)
archs = [
xresnet1d18plus, xresnet1d34plus, xresnet1d50plus,
xresnet1d18_deepplus, xresnet1d34_deepplus, xresnet1d50_deepplus, xresnet1d18_deeperplus,
xresnet1d34_deeperplus, xresnet1d50_deeperplus
# # Long test
# xresnet1d101plus, xresnet1d152plus,
]
for i, arch in enumerate(archs):
print(i, arch.__name__)
test_eq(arch(c_in, c_out, sa=True, act=Mish, coord=True)(x).shape, (bs, c_out))
m = xresnet1d34plus(4, 2, act=Mish)
test_eq(len(get_layers(m, is_bn)), 38)
test_eq(check_weight(m, is_bn)[0].sum(), 22)
# hide
out = create_scripts()
beep(out)
```
| github_jupyter |
```
# cleanup data and put them in the new csv files
import numpy as np
import scipy
import pandas as pd
from sklearn import tree, ensemble, linear_model, svm, model_selection
import math
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
%matplotlib inline
## load test data (this one does not need any more preprocessing)
test = np.genfromtxt("data/clean/test.csv",delimiter=",",skip_header=1)
#A function to calculate Root Mean Squared Logarithmic Error (RMSLE)
def rmsle(y, y_pred):
assert len(y) == len(y_pred)
terms_to_sum = [(math.log(y_pred[i] + 1) - math.log(y[i] + 1)) ** 2.0 for i,pred in enumerate(y_pred)]
return (sum(terms_to_sum) * (1.0/len(y))) ** 0.5
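# Worked example of the RMSLE formula above (illustrative values, not from the data):
# for y = [1, 2] and y_pred = [2, 1] both squared terms equal (log(3) - log(2))**2,
# so the whole expression reduces to log(1.5) ~= 0.4055, the value rmsle([1, 2], [2, 1]) returns.
example = math.sqrt(sum((math.log(p + 1) - math.log(t + 1)) ** 2
                        for t, p in zip([1, 2], [2, 1])) / 2)
print(round(example, 4))  # 0.4055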
def normalize(x):
return (x - x.mean()) / (x.max() - x.min())
# combine results
def output_results(predictions):
test_t = pd.read_csv("data/original/test.csv")
dates = test_t["datetime"]
predictions = np.maximum(predictions,0.0)
results = pd.DataFrame(predictions)
dates = pd.DataFrame(dates)
x = pd.concat([dates,results],axis=1)
x.columns = ["datetime","count"]
x.to_csv("data/result.csv", sep=",", index=False)
# set up two histograms to see how the distribution of variables
# looks for the training and test labels
def hist_evaluate(train_lab, svr_pred):
n, bins, patches = plt.hist(train_lab, 50, density=True, facecolor='green', alpha=0.75)
plt.show()
n, bins, patches = plt.hist(svr_pred, 50, density=True, facecolor='green', alpha=0.75)
plt.show()
# import data to numpy
def process_train():
train_comp = np.genfromtxt("data/clean/train.csv",delimiter=",",skip_header=1)
# split data into train and validation
np.random.shuffle(train_comp)
y = train_comp[:,0]
x = np.delete(train_comp,0,1)
return model_selection.train_test_split(x, y, test_size=0.4, random_state=1)
(train,valid,train_lab,valid_lab) = process_train()
# first attempt - SVR with all parameters
svr = svm.SVR()
svr.fit(train,train_lab)
svr_pred = svr.predict(valid)
rmsle(valid_lab, svr_pred)
# outputs around 1.43, terrible
# SVR with a limited columns
# it might make sense to try to use smaller set of vars: 1,2,5
temp_train = train[:,[1,2,5,6]]
temp_valid = valid[:,[1,2,5,6]]
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svr_min = svm.SVR()
clf_svr_min = model_selection.GridSearchCV(svr_min, parameters)
clf_svr_min.fit(temp_train,train_lab)
svr_min_pred = clf_svr_min.predict(temp_valid)
# SVR with a limited columns
# result evaluation
print(svr_min_pred.min())
svr_min_pred = np.maximum(svr_min_pred,0.0)
print(rmsle(valid_lab, svr_min_pred))
hist_evaluate(valid_lab, svr_min_pred)
# result: 0.53
temp_test = test[:,[1,2,5,6]]
svr_min_test_predict = clf_svr_min.predict(temp_test)
output_results(svr_min_test_predict)
test[:,0]
# random forest classifier
clf_forest = ensemble.RandomForestRegressor()
clf_forest.fit(train,train_lab)
svr_pred_forest = clf_forest.predict(valid)
print(svr_pred_forest.min())
print(svr_pred_forest.max())
rmsle(valid_lab, svr_pred_forest)
```
| github_jupyter |
```
#export
from pathlib import Path
import urllib.request as u_request
from zipfile import ZipFile
import csv
import pandas as pd
from andi import andi_datasets, normalize
import numpy as np
from fastai.text.all import *
#hide
from nbdev.showdoc import *
# default_exp data
```
# Data
> Here we deal with the data acquisition and processing.
## Data acquisition
```
#export
DATA_PATH = Path("../data")
#export
def acquire_data(train=True, val=True):
"""Obtains the train and validation datasets of the competition.
The train url might fail. Get it from https://drive.google.com/drive/folders/1RXziMCO4Y0Fmpm5bmjcpy-Genhzv4QJ4"""
DATA_PATH.mkdir(exist_ok=True)
train_url = ("https://doc-4k-88-drive-data-export.googleusercontent.com/download/qh9kfuk2n3khcj0qvrn9t3a4j19nve1a/" +
"rqpd3tajosn0gta5f9mmbbb1e4u8csnn/1599642000000/17390da5-4567-4189-8a62-1749e1b19b06/108540842544374891611/" +
"ADt3v-N9HwRAxXINIFMKGcsrjzMlrvhOOYitRyphFom1Ma-CUUekLTkDp75fOegXlyeVVrTPjlnqDaK0g6iI7eDL9YJw91-" +
"jiityR3iTfrysZP6hpGA62c4lkZbjGp_NJL-XSDUlPcwiVi5Hd5rFtH1YYP0tiiFCoJZsTT4akE8fjdrkZU7vaqFznxuyQDA8YGaiuYlKu" +
"-F1HiAc9kG_k9EMgkMncNflNJtlugxH5pFcNDdrYiOzIINRIRivt5ScquQ_s4KyuV-zYOQ_g2_VYri8YAg0IqbBrcO-exlp5j-" +
"t02GDh5JZKU3Hky5b70Z8brCL5lvK0SFAFIKOer45ZrFaACA3HGRNJg==?authuser=0&nonce=k5g7m53pp3cqq&user=" +
"108540842544374891611&hash=m7kmrh87gmekjhrdcpbhuf1kj13ui0l2")
val_url = ("https://competitions.codalab.org/my/datasets/download/7ea12913-dfcf-4a50-9f5d-8bf9666e9bb4")
if train:
data = _download_bytes(train_url)
_write_bytes(data, DATA_PATH)
train_path = DATA_PATH/"Development dataset for Training"
train_path.rename(train_path.parent/"train")
if val:
data = _download_bytes(val_url)
_write_bytes(data, DATA_PATH)
val_path = DATA_PATH/"validation_for_scoring"
val_path.rename(val_path.parent/"val")
rmtree(DATA_PATH/"__MACOSX")
def _download_bytes(url):
"Downloads data from `url` as bytes"
u = u_request.urlopen(url)
data = u.read()
u.close()
return data
def _write_bytes(data, path):
"Saves `data` (bytes) into path."
zip_path = _zip_bytes(data)
_unzip_file(zip_path, new_path=path)
def _zip_bytes(data, path=None):
"Saves bytes data as .zip in `path`."
if path is None: path = Path("../temp")
zip_path = path.with_suffix(".zip")
with open(zip_path, "wb") as f:
f.write(data)
return zip_path
def _unzip_file(file_path, new_path=None, purge=True):
"Unzips file in `file_path` to `new_path`."
if new_path is None: new_path = file_path.with_suffix("")
zip_path = file_path.with_suffix(".zip")
with ZipFile(zip_path, 'r') as f:
f.extractall(new_path)
if purge: zip_path.unlink()
def rmtree(root):
for p in root.iterdir():
if p.is_dir(): rmtree(p)
else: p.unlink()
root.rmdir()
df = pd.DataFrame(columns=['dim', 'model', 'exp', 'x', 'len'], dtype=object)
for dim in range(1, 4):
trajs = pd.read_pickle(DATA_PATH/f"custom_val/dataset_{dim}D_task_2.pkl")['dataset_og_t2']
for traj in trajs:
model, exp, x = traj[0], traj[1], traj[2:]
x = tensor(x).view(dim,-1).T
x = x[:torch.randint(10, 1000, (1,))]
df = df.append({'dim': dim, 'model': model, 'exp': exp, 'x': x, 'len': len(x)}, ignore_index=True)
df.to_pickle(DATA_PATH/f"custom_val/custom_{dim}D.pkl")
```
## Data conditioning
```
#export
def load_custom_data(dim=1, models=None, exps=None, path=None):
"Loads data from custom dataset."
path = DATA_PATH/f"custom{dim}.pkl" if path is None else path
df = pd.read_pickle(path)
mod_mask = sum([df['model'] == m for m in models]) if models is not None else np.ones(df.shape[0], dtype=bool)
exp_mask = sum([df['exp'] == e for e in exps]) if exps is not None else np.ones(df.shape[0], dtype=bool)
mask = mod_mask & exp_mask
return df[mask].reset_index(drop=True)
def load_data(task, dim=1, ds='train'):
"Loads 'train' or 'val' data of corresponding dimension."
path = DATA_PATH/ds
try:
df = pd.read_pickle(path/f"task{task}.pkl")
except:
_txt2df(task, ds=[ds])
df = pd.read_pickle(path/f"task{task}.pkl")
return df[df['dim']==dim].reset_index(drop=True)
def _txt2df(task, ds=['train', 'val']):
"Extracts dataset and saves it in df form"
if 'train' in ds:
df = pd.DataFrame(columns=['dim', 'y', 'x', 'len'], dtype=object)
train_path = DATA_PATH/"train"
if not (train_path/f"task{task}.txt").exists(): acquire_data(train=True, val=False)
with open(train_path/f"task{task}.txt", "r") as D, open(train_path/f"ref{task}.txt") as Y:
trajs = csv.reader(D, delimiter=";", lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
labels = csv.reader(Y, delimiter=";", lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
for t, y in zip(trajs, labels):
dim, x = int(t[0]), t[1:]
x = tensor(x).view(dim, -1).T
label = tensor(y[1:]) if task == 3 else y[1]
df = df.append({'dim': dim, 'y': label, 'x': x, 'len': len(x)}, ignore_index=True)
df.to_pickle(train_path/f"task{task}.pkl")
if 'val' in ds:
df = pd.DataFrame(columns=['dim', 'x', 'len'], dtype=object)
val_path = DATA_PATH/"val"
task_path = val_path/f"task{task}.txt"
if not task_path.exists(): acquire_data(train=False, val=True)
with open(task_path, "r") as D:
trajs = csv.reader(D, delimiter=";", lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
for t in trajs:
dim, x = int(t[0]), t[1:]
x = tensor(x).view(dim, -1).T
df = df.append({'dim': dim, 'x': x, 'len': len(x)}, ignore_index=True)
df['y'] = ""
df.to_pickle(val_path/f"task{task}.pkl")
```
## Dataloaders
```
#export
def pad_trajectories(samples, pad_value=0, pad_first=True, backwards=False):
"Pads trajectories assuming shape (len, dim)"
max_len = max([s.shape[0] for s, _ in samples])
if backwards: pad_first = not pad_first
def _pad_sample(s):
s = normalize_trajectory(s)
diff = max_len - s.shape[0]
pad = s.new_zeros((diff, s.shape[1])) + pad_value
pad_s = torch.cat([pad, s] if pad_first else [s, pad])
if backwards: pad_s = pad_s.flip(0)
return pad_s
return L((_pad_sample(s), y) for s, y in samples)
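# Minimal numpy illustration of the pre-padding above (pad_first=True, illustrative):
# shorter sequences get `pad_value` rows prepended until the batch shares one length.
_short = np.ones((2, 1))
_padded = np.vstack([np.zeros((4 - _short.shape[0], _short.shape[1])), _short])
print(_padded.ravel().tolist())  # [0.0, 0.0, 1.0, 1.0]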
def normalize_trajectory(traj):
"Normalizes the trajectory displacements."
n_traj = torch.zeros_like(traj)
disp = traj[1:]-traj[:-1]
n_traj[1:] = disp.div_(disp.std(0)).cumsum(0)
return n_traj
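# Numpy sketch of the displacement normalization above (illustrative; the function
# itself works on torch tensors, whose .std() is the unbiased estimator, hence ddof=1):
_traj = np.array([0.0, 1.0, 3.0, 6.0])
_disp = _traj[1:] - _traj[:-1]             # displacements [1, 2, 3]
_norm_disp = _disp / _disp.std(ddof=1)     # sample std of displacements becomes 1
print(_norm_disp.std(ddof=1))  # 1.0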
#export
@delegates(pad_trajectories)
def get_custom_dls(target='model', dim=1, models=None, exps=None, bs=128, split_pct=0.2, path=None, balance=False, **kwargs):
"Obtain `DataLoaders` from custom dataset filtered by `models` and `exps` to predict `target`."
data = load_custom_data(dim=dim, models=models, exps=exps, path=path)
if balance: data = _subsample_df(data)
ds = L(zip(data['x'], data[target])) if target == 'exp' else L(zip(data['x'], data[target].astype(int)))
sorted_dl = partial(SortedDL, before_batch=partial(pad_trajectories, **kwargs), shuffle=True)
return get_dls_from_ds(ds, sorted_dl, split_pct=split_pct, bs=bs)
@delegates(pad_trajectories)
def get_discriminative_dls(task, dim=1, bs=128, split_pct=0.2, ds='train', **kwargs):
"Obtain `DataLoaders` for classification/regression models."
data = load_data(task, dim=dim, ds=ds)
ds = L(zip(data['x'], data['y'])) if task==1 else L(zip(data['x'], data['y'].astype(int)))
sorted_dl = partial(SortedDL, before_batch=partial(pad_trajectories, **kwargs), shuffle=True)
return get_dls_from_ds(ds, sorted_dl, split_pct=split_pct, bs=bs)
@delegates(SortedDL.__init__)
def get_turning_point_dls(task=3, dim=1, bs=128, split_pct=0.2, ds='train', **kwargs):
"Obtain `DataLoaders` to predict change points in trajectories."
data = load_data(task, dim=dim, ds=ds)
ds = L(zip(data['x'], torch.stack(list(data['y'].values))[:, 0]))
sorted_dl = partial(SortedDL, shuffle=True, **kwargs)
return get_dls_from_ds(ds, sorted_dl, split_pct=split_pct, bs=bs)
@delegates(pad_trajectories)
def get_1vall_dls(target=0, dim=1, models=None, exps=None, bs=128, split_pct=0.2, **kwargs):
data = load_custom_data(dim=dim, models=models, exps=exps)
x, y = data['x'], (data['model'] != target).astype(int)
ds = L(zip(x, y))
sorted_dl = partial(SortedDL, before_batch=partial(pad_trajectories, **kwargs), shuffle=True)
return get_dls_from_ds(ds, sorted_dl, split_pct=split_pct, bs=bs)
@delegates(pad_trajectories)
def get_validation_dl(task, dim=1, bs=64, ds='val', **kwargs):
"Obtain `DataLoaders` for validation."
data = load_data(task, dim=dim, ds=ds)
ds = L(zip(data['x'], data['y']))
return DataLoader(ds, bs=bs, before_batch=partial(pad_trajectories, **kwargs), device=default_device())
def get_dls_from_ds(ds, dl_type, split_pct=0.2, bs=128):
idx = L(int(i) for i in torch.randperm(len(ds)))
cut = int(len(ds)*split_pct)
train_ds, val_ds = ds[idx[cut:]], ds[idx[:cut]]
return DataLoaders.from_dsets(train_ds, val_ds, bs=bs, dl_type=dl_type, device=default_device())
def _subsample_df(df):
"Subsamples df to balance models"
models = df.model.unique()
max_s = min([len(df[df.model==m]) for m in models])
sub_dfs = [df[df.model==m].sample(frac=1)[:max_s] for m in models]
return pd.concat(sub_dfs, ignore_index=True)
dls = get_discriminative_dls(task=1, dim=2)
x, y = dls.one_batch()
x.shape, y.shape
```
## Custom dataset
```
#export
def create_custom_dataset(N, max_T=1000, min_T=10, dimensions=[1, 2, 3], save=True):
ad = andi_datasets()
exponents = np.arange(0.05, 2.01, 0.05)
n_exp, n_models = len(exponents), len(ad.avail_models_name)
# Trajectories per model and exponent. Arbitrarily chosen to keep classes balanced
N_per_model = np.ceil(1.6*N/5)
subdif, superdif = n_exp//2, n_exp//2+1
num_per_class = np.zeros((n_models, n_exp))
num_per_class[:2,:subdif] = np.ceil(N_per_model/subdif) # ctrw, attm
num_per_class[2, :] = np.ceil(N_per_model/(n_exp-1)) # fbm
num_per_class[2, exponents == 2] = 0 # fbm can't be ballistic
num_per_class[3, subdif:] = np.ceil((N_per_model/superdif)*0.8) # lw
num_per_class[4, :] = np.ceil(N_per_model/n_exp) # sbm
for dim in dimensions:
dataset = ad.create_dataset(T=max_T, N=num_per_class, exponents=exponents,
dimension=dim, models=np.arange(n_models))
# Normalize trajectories
n_traj = dataset.shape[0]
norm_trajs = normalize(dataset[:, 2:].reshape(n_traj*dim, max_T))
dataset[:, 2:] = norm_trajs.reshape(dataset[:, 2:].shape)
# Add localization error, Gaussian noise with sigma = [0.1, 0.5, 1]
loc_error_amplitude = np.random.choice(np.array([0.1, 0.5, 1]), size=n_traj*dim)
loc_error = (np.random.randn(n_traj*dim, int(max_T)).transpose()*loc_error_amplitude).transpose()
dataset = ad.create_noisy_localization_dataset(dataset, dimension=dim, T=max_T, noise_func=loc_error)
# Add random diffusion coefficients
trajs = dataset[:, 2:].reshape(n_traj*dim, max_T)
displacements = trajs[:, 1:] - trajs[:, :-1]
# Get new diffusion coefficients and displacements
diffusion_coefficients = np.random.randn(trajs.shape[0])
new_displacements = (displacements.transpose()*diffusion_coefficients).transpose()
# Generate new trajectories and add to dataset
new_trajs = np.cumsum(new_displacements, axis=1)
new_trajs = np.concatenate((np.zeros((new_trajs.shape[0], 1)), new_trajs), axis=1)
dataset[:, 2:] = new_trajs.reshape(dataset[:, 2:].shape)
df = pd.DataFrame(columns=['dim', 'model', 'exp', 'x', 'len'], dtype=object)
for traj in dataset:
mod, exp, x = int(traj[0]), traj[1], traj[2:]
x = cut_trajectory(x, np.random.randint(min_T, max_T), dim=dim)
x = tensor(x).view(dim, -1).T
df = df.append({'dim': dim, 'model': mod, 'exp': exp, 'x': x, 'len': len(x)}, ignore_index=True)
if save:
DATA_PATH.mkdir(exist_ok=True)
ds_path = DATA_PATH/f"custom{dim}.pkl"
df.to_pickle(ds_path, protocol=pickle.HIGHEST_PROTOCOL)
return df
def cut_trajectory(traj, t_cut, dim=1):
"Takes a trajectory and cuts it to `T_max` length."
cut_traj = traj.reshape(dim, -1)[:, :t_cut]
return cut_traj.reshape(1, -1)
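# Worked example of the flat layout cut_trajectory handles (illustrative values):
# a 2-d trajectory of length 3 is stored flat as [x0, x1, x2, y0, y1, y2],
# so cutting to t_cut=2 keeps the first two steps of each dimension.
_flat = np.arange(6)
_cut = _flat.reshape(2, -1)[:, :2].reshape(1, -1)
print(_cut.tolist())  # [[0, 1, 3, 4]]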
df = create_custom_dataset(20, max_T=25, save=False)
```
## Validation
```
#export
def validate_model(model, task, dim=1, bs=256, act=False, **kwargs):
"Validates model on specific task and dimension."
val_dl = get_validation_dl(task, dim=dim, bs=bs, **kwargs)
if act: return torch.cat([to_detach(model(batch)[0].softmax(1)) for batch, _ in val_dl])
else: return torch.cat([to_detach(model(batch)) for batch, _ in val_dl])
@delegates(validate_model)
def validate_task(models, task, dims, **kwargs):
"Validates `models` on task for `dims`."
if not hasattr(models, '__iter__'): models = [models]
if not hasattr(dims, '__iter__'): dims = [dims]
if len(models) != len(dims):
raise ValueError(f"There are {len(models)} models and {len(dims)} dimensions")
pred_path = DATA_PATH/"preds"
pred_path.mkdir(exist_ok=True)
task_path = pred_path/f"task{task}.txt"
preds_dim = []
for model, dim in zip(models, dims): preds_dim.append(validate_model(model, task, dim=dim, **kwargs))
with open(task_path, "w") as f:
for dim, preds in zip(dims, preds_dim):
for pred in preds:
f.write(f"{int(dim)}; {';'.join(str(i.item()) for i in pred)}\n")
```
# Export
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
Simulation Demonstration
=====================
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import soepy
```
In this notebook we present descriptive statistics of a series of simulated samples with the soepy toy model.
soepy is closely aligned to the model in Blundell et al. (2016). Yet, we wish to use the soepy package for estimation based on the German SOEP. In this simulation demonstration, some parameter values are set close to the parameters estimated in the seminal paper of Blundell et al. (2016). The remaining parameter values are altered such that simulated wage levels and employment choice probabilities (roughly) match the statistics observed in the SOEP data.
- the constants in the wage process gamma_0 are set to ensure alignment with SOEP data,
- the returns to experience in the wage process gamma_1 are set close to the coefficient values on gamma0, Blundell Table VIII, p. 1733,
- the part-time experience accumulation parameter is set close to the coefficient on g(P), Blundell Table VIII, p. 1733,
- the experience depreciation parameter delta is set close to the coefficient values on delta, Blundell Table VIII, p. 1733,
- the disutility of part-time work parameter theta_p is set to ensure alignment with SOEP data,
- the disutility of full-time work parameter theta_f is set to ensure alignment with SOEP data.
To ensure that some individuals also choose to be non-employed, we set the period wage of the non-employed equal to a fixed value that is constant over all periods. We call this income in unemployment "benefits".
```
data_frame_baseline = soepy.simulate('toy_model_init_file_01_1000.yml')
data_frame_baseline.head(20)
# Determine the observed wage given the period choice
def get_observed_wage(row):
    if row['Choice'] == 2:
        return row['Period Wage F']
    elif row['Choice'] == 1:
        return row['Period Wage P']
    elif row['Choice'] == 0:
        return row['Period Wage N']
    else:
        return np.nan
# Add to data frame
data_frame_baseline['Wage Observed'] = data_frame_baseline.apply(
    get_observed_wage, axis=1
)
# Determine the education level
def get_educ_level(row):
if row["Years of Education"] >= 10 and row["Years of Education"] < 12:
return 0
elif row["Years of Education"] >= 12 and row["Years of Education"] < 16:
return 1
elif row["Years of Education"] >= 16:
return 2
else:
return np.nan
data_frame_baseline["Educ Level"] = data_frame_baseline.apply(
lambda row: get_educ_level(row), axis=1
)
```
Descriptive statistics to look at:
- average part-time, full-time and nonemployment rate - ideally close to population rates
- frequency of each choice per period - ideally more often part-time in early periods, more full-time in later periods
- frequency of each choice over all periods for individuals with different levels of education - ideally, lower educated more often unemployed and in part-time jobs
- average period wages over all individuals - series for all periods
- average observed wage over all individuals - series for all periods
```
# Average non-employment, part-time, and full-time rates over all periods and individuals
data_frame_baseline['Choice'].value_counts(normalize=True).plot(kind = 'bar')
data_frame_baseline['Choice'].value_counts(normalize=True)
# Average non-employment, part-time, and full-time rates per period
data_frame_baseline.groupby(['Period'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)
```
As far as the evolution of choices over all agents and periods is concerned, we first observe a declining tendency of individuals to be non-employed, as desired in a well-calibrated simulation. Second, individuals in our simulation tend to choose full-time employment and non-employment less often in the later periods of the model, while rates of part-time employment increase over the same periods.
```
# Average non-employment, part-time, and full-time rates for individuals with different level of education
data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)
```
As should be expected, the higher the education level of the individuals, the lower the observed non-employment rate.
```
# Average wage for each period and choice
fig,ax = plt.subplots()
# Generate x axes values
period = np.arange(1,31)
# Generate plot lines
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 2].groupby(['Period'])['Period Wage F'].mean(),
color='green', label = 'F')
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 1].groupby(['Period'])['Period Wage P'].mean(),
color='orange', label = 'P')
ax.plot(period,
data_frame_baseline[data_frame_baseline['Choice'] == 0].groupby(['Period'])['Period Wage N'].mean(),
color='blue', label = 'N')
# Plot settings
ax.set_xlabel("period")
ax.set_ylabel("wage")
ax.legend(loc='best')
```
The period wage of non-employment actually refers to the unemployment benefits individuals receive. The amount of the benefits is constant over time. Part-time and full-time wages rise as individuals gather more experience.
```
# Average wages by period
data_frame_baseline.groupby(['Period'])['Wage Observed'].mean().plot()
```
Comparative Statics
------------------------
In the following, we discuss some comparative statics of the model.
While changing other parameter values, we assume that the parameters central to the part-time penalty phenomenon studied in Blundell et al. (2016) stay the same as in the benchmark specification:
- part-time experience accumulation g_s1,2,3
- experience depreciation delta
Comparative statics:
Parameters in the systematic wage govern the choice between employment (either part-time, or full-time) and nonemployment. They do not determine the choice between part-time and full-time employment since the systematic wage is equal for both options.
- constant in the wage process gamma_0: a lower/higher value of the coefficient implies that other components such as accumulated work experience and the productivity shock are relatively more/less important in determining the choice between employment and nonemployment. Decreasing the constant for individuals of a certain education level, e.g., low, results in these individuals choosing nonemployment more often.
- return to experience gamma_1: lower value of the coefficient implies that accumulated work experience is less relevant in determining the wage in comparison to other factors such as the constant or the productivity shock. Higher coefficients should lead to agents persistently choosing employment versus non-employment.
The productivity shock:
- productivity shock variances - the higher the variances, the more switching between occupational alternatives.
Risk aversion:
- risk aversion parameter mu: the more negative the risk aversion parameter, the more eager agents are to insure themselves against productivity shocks through the accumulation of experience. Therefore, lower values of the parameter are associated with higher rates of full-time employment.
Benefits and the labor disutility parameters directly influence the employment choices:
- benefits: for higher benefits, individuals of all education levels choose non-employment more often,
- labor disutility of part-time work theta_p: for a higher coefficient, individuals of all education levels choose to work part-time less often,
- labor disutility of full-time work theta_f: for a higher coefficient, individuals of all education levels choose to work full-time less often.
Finally, we illustrate one of the changes discussed above. In the alternative specification, the return to experience coefficient gamma_1 for individuals with a medium level of education is increased from 0.157 to 0.195. As a result, experience accumulation matters more in the utility maximization. Therefore, individuals with a medium level of education choose to be employed more often. Consequently, aggregate levels of nonemployment are also lower in the model.
```
data_frame_alternative = soepy.simulate('toy_model_init_file_01_1000.yml')
# Average non-employment, part-time, and full-time rates for individuals with different level of education
[data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True),
data_frame_baseline.groupby(['Years of Education'])['Choice'].value_counts(normalize=True)]
# Average non-employment, part-time, and full-time rates for individuals with different level of education
data_frame_alternative.groupby(['Years of Education'])['Choice'].value_counts(normalize=True).unstack().plot(kind = 'bar', stacked = True)
# Average non-employment, part-time, and full-time rates over all periods and individuals
data_frame_alternative['Choice'].value_counts(normalize=True).plot(kind = 'bar')
data_frame_alternative['Choice'].value_counts(normalize=True)
```
# Quickstart
In this tutorial, we will show how to solve a famous optimization problem, minimizing the Rosenbrock function, in simplenlopt. First, let's define the Rosenbrock function and plot it:
$$
f(x, y) = (1-x)^2+100(y-x^2)^2
$$
```
import numpy as np
def rosenbrock(pos):
x, y = pos
return (1-x)**2 + 100 * (y - x**2)**2
xgrid = np.linspace(-2, 2, 500)
ygrid = np.linspace(-1, 3, 500)
X, Y = np.meshgrid(xgrid, ygrid)
Z = (1 - X)**2 + 100 * (Y -X**2)**2
x0=np.array([-1.5, 2.25])
f0 = rosenbrock(x0)
#Plotly not rendering correctly on Readthedocs, but this shows how it is generated! Plot below is a PNG export
import plotly.graph_objects as go
fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y, cmax = 10, cmin = 0, showscale = False)])
fig.update_layout(
scene = dict(zaxis = dict(nticks=4, range=[0,10])))
fig.add_scatter3d(x=[1], y=[1], z=[0], mode = 'markers', marker=dict(size=10, color='green'), name='Optimum')
fig.add_scatter3d(x=[-1.5], y=[2.25], z=[f0], mode = 'markers', marker=dict(size=10, color='black'), name='Initial guess')
fig.show()
```

The crux of the Rosenbrock function is that the minimum indicated by the green dot is located in a very narrow, banana shaped valley with a small slope around the minimum. Local optimizers try to find the optimum by searching the parameter space starting from an initial guess. We place the initial guess shown in black on the other side of the banana.
In simplenlopt, local optimizers are called by the minimize function. It requires the objective function and a starting point. The algorithm is chosen by the method argument. Here, we will use the derivative-free Nelder-Mead algorithm. Objective functions must be of the form ``f(x, ...)`` where ``x`` represents a numpy array holding the parameters which are optimized.
```
import simplenlopt
def rosenbrock(pos):
x, y = pos
return (1-x)**2 + 100 * (y - x**2)**2
res = simplenlopt.minimize(rosenbrock, x0, method = 'neldermead')
print("Position of optimum: ", res.x)
print("Function value at Optimum: ", res.fun)
print("Number of function evaluations: ", res.nfev)
```
The optimization result is stored in a class whose main attributes are the position of the optimum and the function value at the optimum. The number of function evaluations is a measure of performance: the fewer function evaluations are required to find the minimum, the faster the optimization will be.
Next, let's switch to a derivative based solver. For better performance, we also supply the analytical gradient which is passed to the jac argument.
```
def rosenbrock_grad(pos):
x, y = pos
dx = 2 * x -2 - 400 * x * (y-x**2)
dy = 200 * (y-x**2)
return dx, dy
res_slsqp = simplenlopt.minimize(rosenbrock, x0, method = 'slsqp', jac = rosenbrock_grad)
print("Position of optimum: ", res_slsqp.x)
print("Function value at Optimum: ", res_slsqp.fun)
print("Number of function evaluations: ", res_slsqp.nfev)
```
As the SLSQP algorithm can use gradient information, it requires fewer function evaluations to find the minimum than the
derivative-free Nelder-Mead algorithm.
Unlike vanilla NLopt, simplenlopt automatically defaults to finite difference approximations of the gradient if it is
not provided:
```
res = simplenlopt.minimize(rosenbrock, x0, method = 'slsqp')
print("Position of optimum: ", res.x)
print("Function value at Optimum: ", res.fun)
print("Number of function evaluations: ", res.nfev)
```
As finite differences are not as precise as the analytical gradient, the found optimal function value is higher than with analytical gradient information. In general, it is always recommended to compute the gradient analytically or by automatic differentiation, as the inaccuracies of finite differences can result in wrong results and bad performance.
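To get a feel for the size of that finite-difference error, here is a small standalone sketch in plain NumPy (not simplenlopt's internal differencing code) comparing a central-difference gradient of the Rosenbrock function with the analytical gradient at the initial guess:

```
import numpy as np

def rosenbrock(pos):
    x, y = pos
    return (1 - x)**2 + 100 * (y - x**2)**2

def finite_difference_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

x0 = np.array([-1.5, 2.25])
analytic = np.array([2 * x0[0] - 2 - 400 * x0[0] * (x0[1] - x0[0]**2),
                     200 * (x0[1] - x0[0]**2)])
numeric = finite_difference_grad(rosenbrock, x0)
print(np.abs(numeric - analytic))  # tiny but generally nonzero approximation error
```

The error is small at a well-scaled point like this, but it accumulates over many iterations, which is why the exact gradient is preferred.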
For demonstration purposes, let's finally solve the problem with a global optimizer. Like in SciPy, each global optimizer is called by a dedicated function such as crs() for the Controlled Random Search algorithm. Instead of a starting point, the global optimizers require a region in which they seek to find the minimum. This region is provided as a list of (lower_bound, upper_bound) for each coordinate.
```
bounds = [(-2., 2.), (-2., 2.)]
res_crs = simplenlopt.crs(rosenbrock, bounds)
print("Position of optimum: ", res_crs.x)
print("Function value at Optimum: ", res_crs.fun)
print("Number of function evaluations: ", res_crs.nfev)
```
Note that using a global optimizer is overkill for a small problem like the Rosenbrock function: it requires many more function
evaluations than a local optimizer. Global optimization algorithms shine in case of complex, multimodal functions where local
optimizers converge to local minima instead of the global minimum. Check the Global Optimization page for such an example.
# Categorical Data Plots
Now let's discuss using seaborn to plot categorical data! There are a few main plot types for this:
* factorplot
* boxplot
* violinplot
* stripplot
* swarmplot
* barplot
* countplot
Let's go through examples of each!
```
import seaborn as sns
%matplotlib inline
tips = sns.load_dataset('tips')
tips.head()
```
## barplot and countplot
These very similar plots allow you to get aggregated data off a categorical feature in your data. **barplot** is a general plot that allows you to aggregate the categorical data based on some function, by default the mean:
```
sns.barplot(x='sex',y='total_bill',data=tips)
import numpy as np
```
You can change the estimator object to your own function that converts a vector to a scalar:
```
sns.barplot(x='sex',y='total_bill',data=tips,estimator=np.std)
```
### countplot
This is essentially the same as barplot except the estimator is explicitly counting the number of occurrences, which is why we only pass the x value:
```
sns.countplot(x='sex',data=tips)
```
## boxplot and violinplot
Boxplots and violinplots are used to show the distribution of categorical data. A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables or across levels of a categorical variable. The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution, except for points that are determined to be “outliers” using a method that is a function of the inter-quartile range.
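The outlier rule referred to here is conventionally the 1.5 × IQR criterion; a quick NumPy sketch of that rule (an illustration of the convention, not seaborn's internal code):

```
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 40])  # 40 is far above the rest
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(outliers)  # -> [40]
```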
```
sns.boxplot(x="day", y="total_bill", data=tips,palette='rainbow')
# Can do entire dataframe with orient='h'
sns.boxplot(data=tips,palette='rainbow',orient='h')
sns.boxplot(x="day", y="total_bill", hue="smoker",data=tips, palette="coolwarm")
```
### violinplot
A violin plot plays a similar role as a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual datapoints, the violin plot features a kernel density estimation of the underlying distribution.
```
sns.violinplot(x="day", y="total_bill", data=tips,palette='rainbow')
sns.violinplot(x="day", y="total_bill", data=tips,hue='sex',palette='Set1')
sns.violinplot(x="day", y="total_bill", data=tips,hue='sex',split=True,palette='Set1')
```
## stripplot and swarmplot
The stripplot will draw a scatterplot where one variable is categorical. A strip plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution.
The swarmplot is similar to stripplot(), but the points are adjusted (only along the categorical axis) so that they don’t overlap. This gives a better representation of the distribution of values, although it does not scale as well to large numbers of observations (both in terms of the ability to show all the points and in terms of the computation needed to arrange them).
```
sns.stripplot(x="day", y="total_bill", data=tips)
sns.stripplot(x="day", y="total_bill", data=tips,jitter=True)
sns.stripplot(x="day", y="total_bill", data=tips,jitter=True,hue='sex',palette='Set1')
sns.stripplot(x="day", y="total_bill", data=tips,jitter=True,hue='sex',palette='Set1',split=True)
sns.swarmplot(x="day", y="total_bill", data=tips)
sns.swarmplot(x="day", y="total_bill",hue='sex',data=tips, palette="Set1", split=True)
```
### Combining Categorical Plots
```
sns.violinplot(x="tip", y="day", data=tips,palette='rainbow')
sns.swarmplot(x="tip", y="day", data=tips,color='black',size=3)
```
## factorplot
factorplot is the most general form of a categorical plot. It can take in a **kind** parameter to adjust the plot type:
```
sns.factorplot(x='sex',y='total_bill',data=tips,kind='bar')
```
# Great Job!
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science
## Lecture 5.5 - Smoothers and Generalized Additive Models - Model Fitting
<div class="discussion"><b>JUST A NOTEBOOK</b></div>
**Harvard University**<br>
**Spring 2021**<br>
**Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner<br>
**Lab Instructor:** Eleni Kaxiras<br><BR>
*Content:* Eleni Kaxiras and Will Claybaugh
---
```
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
```
## Table of Contents
* 1 - Overview - A Top View of LMs, GLMs, and GAMs to set the stage
* 2 - A review of Linear Regression with `statsmodels`. Formulas.
* 3 - Splines
* 4 - Generalized Additive Models with `pyGAM`
* 5 - Smoothing Splines using `csaps`
## Overview

*image source: Dani Servén Marín (one of the developers of pyGAM)*
### A - Linear Models
First we have the **Linear Models** which you know from 109a. These models are linear in the coefficients. They are very *interpretable* but suffer from high bias because, let's face it, few relationships in life are linear. Simple Linear Regression (defined as a model with one predictor) as well as Multiple Linear Regression (with more than one predictor) are examples of LMs. Polynomial Regression extends the linear model by adding terms that are still linear in the coefficients but non-linear when it comes to the predictors, which are now raised to a power or multiplied with each other.

$$
\begin{aligned}
y = \beta{_0} + \beta{_1}{x_1} & \quad \mbox{(simple linear regression)}\\
y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_2} + \beta{_3}{x_3} & \quad \mbox{(multiple linear regression)}\\
y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_1^2} + \beta{_3}{x_3^3} & \quad \mbox{(polynomial multiple regression)}\\
\end{aligned}
$$
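The point that polynomial regression is still *linear in the coefficients* can be checked numerically: fitting it is just ordinary least squares on a polynomial design matrix. A small sketch with made-up data:

```
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2 + 3 * x - 5 * x**2 + rng.normal(0, 0.1, 200)

# build the polynomial design matrix: the model is linear in beta
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [2, 3, -5]
```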
<div class="discussion"><b>Questions to think about</b></div>
- What does it mean for a model to be **interpretable**?
- Are linear regression models interpretable? Are random forests? What about Neural Networks such as Feed Forward?
- Do we always want interpretability? Describe cases where we do and cases where we do not care.
### B - Generalized Linear Models (GLMs)

**Generalized Linear Models** is a term coined in the early 1970s by Nelder and Wedderburn for a class of models that includes both Linear Regression and Logistic Regression. A GLM fits one coefficient per feature (predictor).
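As a toy illustration of the "one coefficient per feature" idea, here is logistic regression (a GLM with the logit link) fit by plain gradient ascent on the Bernoulli log-likelihood — a NumPy sketch with synthetic data, not the statsmodels implementation:

```
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))            # two features
beta_true = np.array([1.5, -2.0])      # one coefficient per feature
p = 1 / (1 + np.exp(-(X @ beta_true)))  # inverse link (sigmoid)
y = rng.binomial(1, p)

beta = np.zeros(2)
for _ in range(500):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    beta += X.T @ (y - mu) / n         # score of the Bernoulli log-likelihood
print(beta)  # close to beta_true
```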
### C - Generalized Additive Models (GAMs)
Hastie and Tibshirani coined the term **Generalized Additive Models** in 1986 for a class of non-linear extensions to Generalized Linear Models.

$$
\begin{aligned}
y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2\right) + f_3\left(x_3\right) \\
y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2, x_3\right) + f_3\left(x_3\right) & \mbox{(with interaction terms)}
\end{aligned}
$$
In practice we add splines and regularization via smoothing penalties to our GLMs.
*image source: Dani Servén Marín*
### D - Basis Functions
In our models we can use various types of functions as "basis".
- Monomials such as $x^2$, $x^4$ (**Polynomial Regression**)
- Sigmoid functions (neural networks)
- Fourier functions
- Wavelets
- **Regression splines**
### 1 - Piecewise Polynomials a.k.a. Splines
Splines are a type of piecewise polynomial interpolant. A spline of degree k is a piecewise polynomial that is continuously differentiable k − 1 times.
Splines are the basis of CAD software and vector graphics, including a lot of the fonts used in your computer. The name “spline” comes from a tool used by ship designers to draw smooth curves. Here is the letter $\epsilon$ written with splines:

*font idea inspired by Chris Rycroft (AM205)*
If the degree is 1 then we have a Linear Spline. If it is 3 then we have a Cubic Spline. It turns out that cubic splines, because they have a continuous 2nd derivative (curvature) at the knots, are very smooth to the eye. We do not need a higher order than that. The cubic splines used are usually Natural Cubic Splines, which means they have the added constraint that the second derivative equals 0 at the end points.
We will use the CubicSpline and the B-Spline as well as the Linear Spline.
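The "natural" end-point condition can be verified directly: `scipy.interpolate.CubicSpline` accepts `bc_type='natural'`, and the resulting spline has zero second derivative at the boundary knots. A small sketch using the same Runge-type function as below:

```
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(-1, 1, 11)
y = 1 / (1 + 25 * x**2)

natural = CubicSpline(x, y, bc_type='natural')
# evaluate the 2nd derivative (nu=2) at the end points: both are ~0
print(natural(x[0], 2), natural(x[-1], 2))
```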
#### scipy.interpolate
See all the different splines that scipy.interpolate has to offer: https://docs.scipy.org/doc/scipy/reference/interpolate.html
Let's use the simplest form which is interpolate on a set of points and then find the points between them.
```
from scipy.interpolate import splrep, splev
from scipy.interpolate import BSpline, CubicSpline
from scipy.interpolate import interp1d
# define the range of the function
a = -1
b = 1
# define the number of knots
num_knots = 11
knots = np.linspace(a,b,num_knots)
# define the function we want to approximate
y = 1/(1+25*(knots**2))
# make a linear spline
linspline = interp1d(knots, y)
# sample at these points to plot
xx = np.linspace(a,b,1000)
yy = 1/(1+25*(xx**2))
plt.plot(knots,y,'*')
plt.plot(xx, yy, label='true function')
plt.plot(xx, linspline(xx), label='linear spline');
plt.legend();
```
<div class="exercise"><b>Exercise</b></div>
The Linear interpolation does not look very good. Fit a Cubic Spline and plot along the Linear to compare. Feel free to solve and then look at the solution.
```
# your answer here
# solution
# define the range of the function
a = -1
b = 1
# define the knots
num_knots = 10
x = np.linspace(a,b,num_knots)
# define the function we want to approximate
y = 1/(1+25*(x**2))
# make the Cubic spline
cubspline = CubicSpline(x, y)
print(f'Num knots in cubic spline: {num_knots}')
# OR make a linear spline
linspline = interp1d(x, y)
# plot
xx = np.linspace(a,b,1000)
yy = 1/(1+25*(xx**2))
plt.plot(xx, yy, label='true function')
plt.plot(x,y,'*', label='knots')
plt.plot(xx, linspline(xx), label='linear');
plt.plot(xx, cubspline(xx), label='cubic');
plt.legend();
```
<div class="discussion"><b>Questions to think about</b></div>
- Change the number of knots to 100 and see what happens. What would happen if we run a polynomial model of degree equal to the number of knots (a global one as in polynomial regression, not a spline)?
- What makes a spline 'Natural'?
```
# Optional and Outside of the scope of this class: create the `epsilon` in the figure above
x = np.array([1.,0.,-1.5,0.,-1.5,0.])
y = np.array([1.5,1.,2.5,3,4,5])
t = np.linspace(0,5,6)
f = interp1d(t,x,kind='cubic')
g = interp1d(t,y,kind='cubic')
tplot = np.linspace(0,5,200)
plt.plot(x,y, '*', f(tplot), g(tplot));
```
#### B-Splines (de Boor, 1978)
One way to construct a curve given a set of points is to *interpolate the points*, that is, to force the curve to pass through the points.
A B-spline (Basis Spline) is defined by a set of **control points** and a set of **basis functions** that fit the function between these points. By choosing to have no smoothing factor we force the final B-spline to pass through all the points. If, on the other hand, we set a smoothing factor, our function is more of an approximation, with the control points acting as "guidance". The latter produces a smoother curve, which is preferable for drawing software. For more on splines see: https://en.wikipedia.org/wiki/B-spline

We will use [`scipy.splrep`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splrep.html#scipy.interpolate.splrep) to calculate the coefficients for the B-Spline and draw it.
#### B-Spline with no smoothing
```
from scipy.interpolate import splev, splrep
x = np.linspace(0, 10, 10)
y = np.sin(x)
# (t,c,k) is a tuple containing the vector of knots, coefficients, degree of the spline
t,c,k = splrep(x, y)
x2 = np.linspace(0, 10, 200)
y2 = BSpline(t,c,k)
plt.plot(x, y, 'o', x2, y2(x2))
plt.show()
from scipy.interpolate import splrep
x = np.linspace(0, 10, 10)
y = np.sin(x)
t,c,k = splrep(x, y, k=3) # (tck) is a tuple containing the vector of knots, coefficients, degree of the spline
# define the points to plot on (x2)
print(f'Knots ({len(t)} of them): {t}\n')
print(f'B-Spline coefficients ({len(c)} of them): {c}\n')
print(f'B-Spline degree {k}')
x2 = np.linspace(0, 10, 100)
y2 = BSpline(t, c, k)
plt.figure(figsize=(10,5))
plt.plot(x, y, 'o', label='true points')
plt.plot(x2, y2(x2), label='B-Spline')
tt = np.zeros(len(t))
plt.plot(t, tt,'g*', label='knots eval by the function')
plt.legend()
plt.show()
```
<a id=splineparams></a>
#### What do the tuple values returned by `scipy.splrep` mean?
- The `t` variable is the array that contains the knots' position in the x axis. The length of this array is, of course, the number of knots.
- The `c` variable is the array that holds the coefficients for the B-Spline. Its length should be the same as `t`.
We have `len(t) - k - 1` B-spline basis elements in the spline constructed via this method, and they are defined recursively as follows:<BR><BR>
$$
\begin{aligned}
B_{i, 0}(x) = 1, \textrm{if $t_i \le x < t_{i+1}$, otherwise $0$,} \\ \\
B_{i, k}(x) = \frac{x - t_i}{t_{i+k} - t_i} B_{i, k-1}(x)
+ \frac{t_{i+k+1} - x}{t_{i+k+1} - t_{i+1}} B_{i+1, k-1}(x)
\end{aligned}
$$
- t $= [t_1, t_2, ..., t_n]$ is the knot vector
- c : are the spline coefficients
- k : is the spline degree
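The recursion above can be coded directly and checked against scipy's evaluation — a naive, unoptimized sketch (scipy uses a much faster algorithm internally):

```
import numpy as np
from scipy.interpolate import splrep, splev

def bspline_basis(i, k, x, t):
    """Evaluate the basis element B_{i,k} at x via the Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, x, t)
    return left + right

xs = np.linspace(0, 10, 10)
ys = np.sin(xs)
t, c, k = splrep(xs, ys)

x0 = 4.3
manual = sum(c[i] * bspline_basis(i, k, x0, t) for i in range(len(t) - k - 1))
print(manual, splev(x0, (t, c, k)))  # the two values agree
```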
#### B-Spline with smooting factor s
```
from scipy.interpolate import splev, splrep
x = np.linspace(0, 10, 5)
y = np.sin(x)
s = 0.5 # add smoothing factor
task = 0 # task needs to be set to 0, which represents:
# we are specifying a smoothing factor and thus only want
# splrep() to find the optimal t and c
t,c,k = splrep(x, y, task=task, s=s)
# draw the line segments
linspline = interp1d(x, y)
# define the points to plot on (x2)
x2 = np.linspace(0, 10, 200)
y2 = BSpline(t, c, k)
plt.plot(x, y, 'o', x2, y2(x2))
plt.plot(x2, linspline(x2))
plt.show()
```
#### B-Spline with given knots
```
x = np.linspace(0, 10, 100)
y = np.sin(x)
knots = np.quantile(x, [0.25, 0.5, 0.75])
print(knots)
# calculate the B-Spline
t,c,k = splrep(x, y, t=knots)
curve = BSpline(t,c,k)
curve
plt.scatter(x=x,y=y,c='grey', alpha=0.4)
yknots = np.sin(knots)
plt.scatter(knots, yknots, c='r')
plt.plot(x,curve(x))
plt.show()
```
### 2 - GAMs
https://readthedocs.org/projects/pygam/downloads/pdf/latest/
#### Classification in `pyGAM`
Let's get our (multivariate!) data, the `kyphosis` dataset, and the `LogisticGAM` model from `pyGAM` to do binary classification.
- kyphosis - whether a particular deformation was present post-operation
- age - patient's age in months
- number - the number of vertebrae involved in the operation
- start - the number of the topmost vertebrae operated on
```
kyphosis = pd.read_csv("../data/kyphosis.csv")
display(kyphosis.head())
display(kyphosis.describe(include='all'))
display(kyphosis.dtypes)
# convert the outcome in a binary form, 1 or 0
kyphosis = pd.read_csv("../data/kyphosis.csv")
kyphosis["outcome"] = 1*(kyphosis["Kyphosis"] == "present")
kyphosis.describe()
from pygam import LogisticGAM, s, f, l
X = kyphosis[["Age","Number","Start"]]
y = kyphosis["outcome"]
kyph_gam = LogisticGAM().fit(X,y)
```
#### Outcome dependence on features
To help us see how the outcome depends on each feature, `pyGAM` has the `partial_dependence()` function.
```
pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95)
```
For more on this see the : https://pygam.readthedocs.io/en/latest/api/logisticgam.html
```
res = kyph_gam.deviance_residuals(X,y)
for i, term in enumerate(kyph_gam.terms):
if term.isintercept:
continue
XX = kyph_gam.generate_X_grid(term=i)
pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95)
pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95)
plt.figure()
plt.scatter(X.iloc[:,term.feature], pdep2 + res)
plt.plot(XX[:, term.feature], pdep)
plt.plot(XX[:, term.feature], confi, c='r', ls='--')
plt.title(X.columns.values[term.feature])
plt.show()
```
Notice that we did not specify the basis functions in the .fit(). `pyGAM` figures them out for us by using $s()$ (splines) for numerical variables and $f()$ for categorical features. If this is not what we want we can manually specify the basis functions, as follows:
```
kyph_gam = LogisticGAM(s(0)+s(1)+s(2)).fit(X,y)
res = kyph_gam.deviance_residuals(X,y)
for i, term in enumerate(kyph_gam.terms):
if term.isintercept:
continue
XX = kyph_gam.generate_X_grid(term=i)
pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95)
pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95)
plt.figure()
plt.scatter(X.iloc[:,term.feature], pdep2 + res)
plt.plot(XX[:, term.feature], pdep)
plt.plot(XX[:, term.feature], confi, c='r', ls='--')
plt.title(X.columns.values[term.feature])
plt.show()
```
#### Regression in `pyGAM`
For regression problems, we can use a `linearGAM` model. For this part we will use the `wages` dataset.
https://pygam.readthedocs.io/en/latest/api/lineargam.html
#### The `wages` dataset
Let's inspect another dataset that is included in `pyGAM` that notes the wages of people based on their age, year of employment and education.
```
# from the pyGAM documentation
from pygam import LinearGAM, s, f
from pygam.datasets import wage
X, y = wage(return_X_y=True)
## model
gam = LinearGAM(s(0) + s(1) + f(2))
gam.gridsearch(X, y)
## plotting
plt.figure();
fig, axs = plt.subplots(1,3);
titles = ['year', 'age', 'education']
for i, ax in enumerate(axs):
XX = gam.generate_X_grid(term=i)
ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX))
ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX, width=.95)[1], c='r', ls='--')
if i == 0:
ax.set_ylim(-30,30)
ax.set_title(titles[i]);
```
### 3 - Smoothing Splines using csaps
**Note**: this is the spline model that minimizes <BR>
$MSE + \lambda\cdot\text{wiggle penalty}$ $=$ $\sum_{i=1}^N \left(y_i - f(x_i)\right)^2 + \lambda \int \left(f''(t)\right)^2 dt$, <BR>
across all possible functions $f$.
```
from csaps import csaps
np.random.seed(1234)
x = np.linspace(0,10,300000)
y = np.sin(x*2*np.pi)*x + np.random.randn(len(x))
xs = np.linspace(x[0], x[-1], 1000)
ys = csaps(x, y, xs, smooth=0.99)
print(ys.shape)
plt.plot(x, y, 'o', xs, ys, '-')
plt.show()
```
### 4 - Data fitting using pyGAM and Penalized B-Splines
When we use a spline in pyGAM we are effectively using a penalized B-Spline with a regularization parameter $\lambda$. E.g.
```
LogisticGAM(s(0)+s(1, lam=0.5)+s(2)).fit(X,y)
```
Let's see how this smoothing works in `pyGAM`. We start by creating some arbitrary data and fitting them with a GAM.
```
X = np.linspace(0,10,500)
y = np.sin(X*2*np.pi)*X + np.random.randn(len(X))
plt.scatter(X,y);
# let's try a large lambda first and lots of splines
gam = LinearGAM(lam=1e6, n_splines=50).fit(X,y)
XX = gam.generate_X_grid(term=0)
plt.scatter(X,y,alpha=0.3);
plt.plot(XX, gam.predict(XX));
```
We see that the large $\lambda$ forces a straight line, no flexibility. Let's see now what happens if we make it smaller.
```
# let's try a smaller lambda
gam = LinearGAM(lam=1e2, n_splines=50).fit(X,y)
XX = gam.generate_X_grid(term=0)
plt.scatter(X,y,alpha=0.3);
plt.plot(XX, gam.predict(XX));
```
There is some curvature there but still not a good fit. Let's try no penalty; the curve should then fit the data almost exactly.
```
# no penalty, let's try a 0 lambda
gam = LinearGAM(lam=0, n_splines=50).fit(X,y)
XX = gam.generate_X_grid(term=0)
plt.scatter(X,y,alpha=0.3)
plt.plot(XX, gam.predict(XX))
```
Yes, that is good. Now let's see what happens if we reduce the number of splines. The fit should not be as good.
```
# no penalty, but fewer splines this time
gam = LinearGAM(lam=0, n_splines=10).fit(X,y)
XX = gam.generate_X_grid(term=0)
plt.scatter(X,y,alpha=0.3);
plt.plot(XX, gam.predict(XX));
```
## Applying transfer learning with MobileNet_V2
A high-quality dataset of images containing fruits. The following fruits are included: Apples - (different varieties: Golden, Golden-Red, Granny Smith, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red), Cactus fruit, Carambula, Cherry, Clementine, Cocos, Dates, Granadilla, Grape (Pink, White, White2), Grapefruit (Pink, White), Guava, Huckleberry, Kiwi, Kaki, Kumquats, Lemon (normal, Meyer), Lime, Litchi, Mandarine, Mango, Maracuja, Nectarine, Orange, Papaya, Passion fruit, Peach, Pepino, Pear (different varieties, Abate, Monster, Williams), Pineapple, Pitahaya Red, Plum, Pomegranate, Quince, Raspberry, Salak, Strawberry, Tamarillo, Tangelo.
Training set size: 28736 images.
Validation set size: 9673 images.
Number of classes: 60 (fruits).
Image size: 100x100 pixels.
```
import numpy as np
import keras
from tensorflow.keras.layers import InputLayer, Input, ReLU
from tensorflow.keras.layers import Reshape, MaxPooling2D, Cropping2D, BatchNormalization, AveragePooling2D
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Dropout, SeparableConv2D, DepthwiseConv2D
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.python.keras.utils import to_categorical
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing import image
from skimage import transform
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.python.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.python.keras.models import Model, load_model
from tensorflow.python.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.python.keras import backend as K
```
## Upload dataset to colab:
With the help of the pydrive package it is possible to upload a dataset shared on Google Drive directly to Colab, so there is no need to download the dataset and then upload it to Google Colab manually. To proceed, Google needs to authorize your account: running the cell below produces a link that gives you a code authorizing your Google account to use the Google Cloud SDK. The last step is to copy and paste that code. Done!
```
# Google Drive Authentication
!pip install pydrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
```
Each shared file on Google Drive has a unique id, which can be found in the shareable link of the file. So instead of the file link we must use this id to be able to upload the file to Colab.
```
# Download a file from shareable Link on Elearning
file_id = '1wJw2Ugn0L0DZIs_3_oZ0KNoOFTFxOsFS'
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('FRUTTA.rar')
```
The uploaded file has a .rar extension. The rarfile package can extract (unrar) it.
```
# Unrar file on colab
!pip install rarfile
import rarfile
rf = rarfile.RarFile('FRUTTA.rar')
rf.extractall(path = 'FRUTTA')
```
### Change the path of directories!!!
```
# Setting path locations for validation and training images
validationPath = 'FRUTTA/Validation'
trainPath = 'FRUTTA/Training'
```
### Plot an image, for example E:/Training/Cocos/15_100.jpg
To show an image read directly from disk, the IPython module is needed. However, there are several ways to do that.
```
from IPython.display import Image as image_show
image_show('FRUTTA/Training/Cocos/15_100.jpg', width = 200, height = 200)
```
### Now you define the functions able to read mini-batches of data
```
# Making an image data generator object with augmentation for training
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Making an image data generator object with no augmentation for validation
test_datagen = ImageDataGenerator(rescale=1./255)
```
### why are train_datagen and test_datagen different? answer . . .
The idea behind manipulating the training set with rotations, zooms, width/height shifts, flips and similar processes is to expose the model to all the plausible samples it may have to deal with in the prediction/classification phase. So, to strengthen the model and get the most information out of the training data, we manipulate the samples such that the manipulated samples are still valid, plausible members of the training set. But if we applied the same process to the validation and test sets, we would increase the correlation between the training set and the validation/test sets, which conflicts with the assumption that samples are independent. That generally inflates accuracy on the validation and test sets, which is not real. To recap, it is recommended to apply such manipulation only to the seen (training) data, not the unseen data.
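The idea can be sketched minimally without any framework (numpy only; the small array is a hypothetical stand-in for an image):

```python
import numpy as np

# A small array stands in for an image; a horizontal flip produces a new,
# equally valid training sample while the label stays unchanged.
img = np.arange(12).reshape(3, 4)
flipped = img[:, ::-1]

print(flipped[0])  # first row reversed: [3 2 1 0]
```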
The beloved keras package makes it easy to read images from disk and convert them directly to a generator; we can do this with the `flow_from_directory` method.
```
# Using the generator with batch size 64 for the training directory
train_generator = train_datagen.flow_from_directory(trainPath,
target_size=(128, 128),
batch_size=64,
class_mode='categorical')
# Using the generator with batch size 64 for the validation directory
validation_generator = test_datagen.flow_from_directory(validationPath,
target_size=(128, 128),
batch_size=64,
class_mode='categorical')
```
### you can control the dimensions of the generator outputs
```
validation_generator[0][0].shape
```
Since in the training phase the gradient-descent algorithm runs once per mini-batch in each epoch, we set the mini-batch size to 64. For validation there is no significant difference between a small or a large mini-batch size, as it is only used to compute the validation metrics.
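The generator lengths follow from a simple rule: a generator over N samples with batch size B yields ceil(N / B) batches per epoch. A quick sketch using the dataset sizes quoted in the introduction (the actual counts depend on the images found on disk):

```python
import math

# batches per epoch = ceil(N / B); the last batch may be smaller than B
n_train, n_val, batch_size = 28736, 9673, 64
print(math.ceil(n_train / batch_size))  # 449
print(math.ceil(n_val / batch_size))    # 152
```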
```
print("Number of batches in validation generator is", len(validation_generator))
print("Number of batches in train generator is", len(train_generator))
validation_generator[0][1].shape
```
### Now you need to define your model . . .
the default definition of MobileNet_V2 is:
`MobileNetV2(input_shape=None, alpha=1.0, depth_multiplier=1, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)`
but you have a different number of classes . . . . .
# Using MobileNetV2:
It is possible to use a pre-trained model such as MobileNetV2, and the keras package enables us to modify the model for 60 classes.
MobileNetV2 is a deep CNN model trained to classify 1000 classes. In this project we only have 60 classes, so the top layers are dropped and must be substituted with final output layers, which usually include a GlobalAveragePooling and a Dense layer. First, let's take a look at MobileNetV2 as the base model, shown below:
```
base_model = MobileNetV2(weights='imagenet', include_top=False)
print(base_model.summary())
```
### Define what layers you want to train . . .
In order to use a pre-trained model like MobileNetV2 for only 60 classes, we can keep its layers frozen and only drop the last layers, which is done by setting `include_top=False`.
In the code below the removed top layers are substituted with a global average pooling and a dense layer. Adding the base model and the new trainable layers together, the new model is summarized below. It can be seen that the spatial dimensions in the output shapes are None; this stems from dropping the top layer of the base model, which lets the model train with different input shapes.
```
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(60, activation = 'softmax')(x)
model_mnv2 = Model(inputs = base_model.input, outputs = predictions)
print(model_mnv2.summary())
```
The code below freezes the layers of the base model and keeps the remaining layers trainable. This gives us the chance to reuse the weights of the base model in our new model, which is going to be modified to cover the 60 fruit classes.
```
for layer in base_model.layers:
    layer.trainable = False
```
### Compile the model . . .
Now that we have the model and have defined its architecture, it is time to compile it and choose the optimizer algorithm, the loss function used in the optimization, and the metrics we are interested in monitoring.
```
model_mnv2.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```
## to fit the model you can write an expression as:
```
history = model.fit_generator(train_generator,
                              epochs=20, validation_data=validation_generator)
```
Keras behaves differently when training a model and when making predictions. In the different modes some layers, such as batch normalization, operate differently: in training mode the mean and variance of each mini-batch are used to rescale the tensors, while in inference mode the moving averages of the mean and variance are plugged in for rescaling. So what is the problem? If the mode is set to inference while we train, the model shows a significant fit on the training data but very poor results on the validation data, which is not a real overfitting problem. To detect when results stem from this problem, it is better to feed the same data for both training and validation: if the training and validation results then differ, the learning phase is set wrong.
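A minimal numpy sketch of why the mode matters (a simplified stand-in for batch normalization that ignores the learned scale and shift parameters):

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, training, eps=1e-5):
    if training:   # training mode: statistics of the current mini-batch
        mean, var = x.mean(), x.var()
    else:          # inference mode: moving averages accumulated during training
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([10.0, 12.0, 14.0])
print(batch_norm(batch, 0.0, 1.0, training=True))   # roughly [-1.22, 0, 1.22]
print(batch_norm(batch, 0.0, 1.0, training=False))  # roughly the raw inputs
```

The same inputs are rescaled very differently in the two modes, which is why training with the wrong learning phase produces misleading validation numbers.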
```
history = model_mnv2.fit_generator(validation_generator,
epochs=5,validation_data=validation_generator)
model_mnv2.save('testlp.h5')
#### WARNING: the code below makes a copy of the model file on your Google Drive; keep it commented out if you don't want the copy
model_file = drive.CreateFile({'title' : 'testlp.h5'})
model_file.SetContentFile('testlp.h5')
model_file.Upload()
```
It can be seen that, when the model is fed the same data for both training and validation, the results of each epoch are sharply different. Let's change the learning phase.
```
K.clear_session()
K.set_learning_phase(1)
model_mnv2 = load_model('testlp.h5')
print(model_mnv2.evaluate_generator(validation_generator))
```
Setting the learning phase to train yields the same results, so we keep the learning phase equal to 1 (train) for the rest of the model training.
```
for layer in model_mnv2.layers:
    layer.trainable = False
model_mnv2.layers[-1].trainable = True
model_mnv2.layers[-2].trainable = True
model_mnv2.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model_mnv2.fit_generator(train_generator,
epochs=5,validation_data=validation_generator)
model_mnv2.save('mnv2_01.h5')
#### WARNING: the code below makes a copy of the model file on your Google Drive; keep it commented out if you don't want the copy
model_file = drive.CreateFile({'title' : 'mnv2_01.h5'})
model_file.SetContentFile('mnv2_01.h5')
model_file.Upload()
```
### Fine tuning?
Although training only the last two layers already makes us happy with 99.8% accuracy on the validation data (there is still a difference between training and validation accuracy, train_acc < val_acc, keeping in mind that we augmented the training data but not the validation data), it might get even better. Let's give fine-tuning a try and release the constraint of frozen layers.
```
K.clear_session()
K.set_learning_phase(1)
model_mnv2 = load_model('mnv2_01.h5')
for layer in model_mnv2.layers:
    layer.trainable = True
model_mnv2.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = model_mnv2.fit_generator(train_generator,
epochs=20,validation_data=validation_generator)
model_mnv2.save('mnv2_final.h5')
#### WARNING: the code below makes a copy of the model file on your Google Drive; keep it commented out if you don't want the copy
model_file = drive.CreateFile({'title' : 'mnv2_final.h5'})
model_file.SetContentFile('mnv2_final.h5')
model_file.Upload()
```
### once you have obtained the final estimate of the model you must evaluate it in more detail . . .
```
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.legend(['Train', 'Test'])
plt.suptitle('Accuracy')
plt.xlabel('Iteration')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Test'])
plt.suptitle('Categorical Cross Entropy')
plt.xlabel('Iteration')
plt.show()
y_true = validation_generator.classes
y_pred = model_mnv2.predict_generator(validation_generator,verbose=1).argmax(axis=-1)
print(y_pred.shape)
print(y_true.shape)
```
### take an image of a papaya from the internet and try to apply your model . . .
```
import requests
f = open('papaya.jpg','wb')
f.write(requests.get('https://www.xspo.it/media/image/16/0a/59/18_m-neal-pt_apple-green_600x600.jpg').content)
f.close()
img = image.load_img('papaya.jpg', target_size=(128, 128))
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
print(img.shape)
img_pred = int(model_mnv2.predict(img).argmax(axis=-1))
x = list(validation_generator.class_indices.keys())
print(x[img_pred])
```
```
import pandas as pd
import seaborn as sns
from sklearn.neighbors import NearestNeighbors
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
```
# Clustering
```
data = pd.read_csv('2019_data.csv', encoding='utf-8', decimal=',')
data.shape
prob = data
# to give insight into the number of crimes per LSOA in 2019
crimerate = prob.groupby('LSOA code')['Month'].count()
#sum(crimerate.values)
crimerate
#to get the cluster data for each lsoa once
prob = prob.drop_duplicates(subset="LSOA code", keep='first')
#remove the nan values
a = prob[prob['Employment Domain Score'].notna()]
### Get all the features columns except the class
features_lst = ['LSOA code','Employment Domain Score','Income Domain Score','IDACI Score','IDAOPI Score','Police Strength',
'Police Funding','Population']
### Get the features data
data = a[features_lst].reset_index()
fit_data = data[['Employment Domain Score', 'Income Domain Score',
'IDACI Score', 'IDAOPI Score', 'Police Strength', 'Police Funding',
'Population']]
# fix the value '1.101.360' -> it cannot be parsed as a float
fit_data['Population'] = fit_data['Population'].replace(['1.101.360'],'1101.360')
#replace the nan values with the mode of the column
for column in fit_data.columns:
    fit_data[column].fillna(fit_data[column].mode()[0], inplace=True)
fit_data
```
# constrained clustering
```
from k_means_constrained import KMeansConstrained
clf = KMeansConstrained(n_clusters = 50, size_min=250, size_max=800,random_state=0)
clf.fit(fit_data)
#make a new dataframe
label_data = pd.DataFrame({'LSOA code': data['LSOA code'], 'Cluster': clf.labels_})
#label_data
label_data.sort_values(by=['LSOA code'], inplace = True)
label_data['crime numb'] = crimerate.values
label_data.groupby('Cluster')['crime numb'].sum().plot(kind ='bar', ylabel='Number of crimes',
title=' Distribution of crimes over the clusters');
```
## code for merging data clusters
```
label_data.groupby('Cluster')
#makes a dataframe for each cluster
df = [x for _, x in label_data.groupby('Cluster')]
numbcrimes = []
for i in df:
    numbcrimes.append(i['crime numb'].sum())
plt.hist(numbcrimes, bins=23)
cluster1 = df[2]['LSOA code'].values.tolist()
#cluster1
df_street = pd.read_csv('city-of-london_street.csv')
df_street.index = pd.to_datetime(df_street['Month'])
df_notna = df_street[df_street['LSOA code'].notna()]
df_notna = df_notna[df_notna['LSOA code'].isin(cluster1)]
data = df_notna.groupby(by=[df_notna.index.date])['Month'].count()
data
```
# The SARIMA model
```
import datetime
import pmdarima as pm
# split the data into train and test data, and remove the covid data
for t in range(0, len(data.index)):
    if data.index[t] >= datetime.date(2019, 1, 1):
        break
for m in range(t+1, len(data.index)):
    if data.index[m] >= datetime.date(2020, 1, 1):
        break
data_train = data[:t]
data_test = data[t:m]
#plot the train test split
sns.lineplot(data=data_train)
sns.lineplot(data=data_test);
# Seasonal - fit stepwise auto-ARIMA
Sarima = pm.auto_arima(data_train, start_p=1, start_q=1,
test='adf',
max_p=3, max_q=3, m=12,
start_P=0, seasonal=True,
d=None, D=1, trace=True,
error_action='ignore',
suppress_warnings=True,
stepwise=True)
Sarima.summary()
n_periods = 12
fitted, confint = Sarima.predict(n_periods=n_periods, return_conf_int=True)
index_of_fc = pd.date_range(data_train.index[-1], periods = n_periods, freq='MS')
# make series for plotting purpose
fitted_series = pd.Series(fitted, index=index_of_fc)
lower_series = pd.Series(confint[:, 0], index=index_of_fc)
upper_series = pd.Series(confint[:, 1], index=index_of_fc)
# Plot
plt.plot(data_train)
plt.plot(fitted_series, color='darkgreen')
plt.fill_between(lower_series.index,
lower_series,
upper_series,
color='k', alpha=.15)
plt.title("SARIMA model for LSOA E01000001")
plt.plot(data_test)
plt.legend()
plt.show()
```
```
from gridworld import *
%matplotlib inline
# create the gridworld as a specific MDP
gridworld=GridMDP([[-0.04,-0.04,-0.04,1],[-0.04,None, -0.04, -1], [-0.04, -0.04, -0.04, -0.04]], terminals=[(3,2), (3,1)], gamma=1.)
example_pi = {(0,0): (0,1), (0,1): (0,1), (0,2): (1,0), (1,0): (1,0), (1,2): (1,0), (2,0): (0,1), (2,1): (0,1), (2,2): (1,0), (3,0):(-1,0), (3,1): None, (3,2):None}
example_V = {(0,0): 0.1, (0,1): 0.2, (0,2): 0.3, (1,0): 0.05, (1,2): 0.5, (2,0): 0., (2,1): -0.2, (2,2): 0.5, (3,0):-0.4, (3,1): -1, (3,2):+1}
"""
1) Complete the function policy evaluation below and use it on example_pi!
The function takes as input a policy pi, and an MDP (including its transition model,
reward and discounting factor gamma), and gives as output the value function for this
specific policy in the MDP. Use equation (1) in the lecture slides!
"""
def policy_evaluation(pi, V, mdp, k=20):
    """Return an updated value function V for each state in the MDP """
    R, T, gamma = mdp.R, mdp.T, mdp.gamma  # retrieve reward, transition model and gamma from the MDP
    for i in range(k):  # iterative update of V
        for s in mdp.states:
            action = pi[s]
            probabilities = T(s, action)
            aux = 0
            for p, state in probabilities:
                aux += p * V[state]
            V[s] = R(s) + gamma * aux  # discounted expected value of the successors
    return V
def policy_evaluation(pi, V, mdp, k=20):
    """Return an updated value function V for each state in the MDP """
    R, T, gamma = mdp.R, mdp.T, mdp.gamma  # retrieve reward, transition model and gamma from the MDP
    for i in range(k):  # iterative update of V
        for s in mdp.states:
            V[s] = R(s) + gamma * sum([p * V[s1] for (p, s1) in T(s, pi[s])])
    return V
R = gridworld.R
T = gridworld.T
V=policy_evaluation(example_pi, example_V, gridworld)
gridworld.policy_plot(example_pi)
print(V)
gridworld.v_plot(V)
"""
2) Complete the function value iteration below and use it to compute the optimal value function for the gridworld.
The function takes as input the MDP (including reward function and transition model) and is supposed to compute
the optimal value function using the value iteration algorithm presented in the lecture. Use the function best_policy
to compute the optimal policy under this value function!
"""
def value_iteration(mdp, epsilon=0.0001):
    "Solving an MDP by value iteration. epsilon determines the convergence criterion for stopping"
    V1 = dict([(s, 0) for s in mdp.states])  # initialize value function
    R, T, gamma = mdp.R, mdp.T, mdp.gamma
    while True:
        V = V1.copy()
        delta = 0
        for s in mdp.states:
            # Bellman optimality update: best discounted expected value over actions
            V1[s] = R(s) + gamma * max([sum([p * V[s1] for (p, s1) in T(s, a)])
                                        for a in mdp.actions(s)])
            delta = max(delta, abs(V1[s] - V[s]))
        if delta < epsilon:
            return V
def argmax(seq, fn):
    best = seq[0]; best_score = fn(best)
    for x in seq:
        x_score = fn(x)
        if x_score > best_score:
            best, best_score = x, x_score
    return best
def expected_utility(a, s, V, mdp):
    "The expected utility of doing a in state s, according to the MDP and V."
    return sum([p * V[s1] for (p, s1) in mdp.T(s, a)])
def best_policy(mdp, V):
    """Given an MDP and a utility function V, best_policy determines the best policy,
    as a mapping from state to action. """
    pi = {}
    for s in mdp.states:
        pi[s] = argmax(mdp.actions(s), lambda a: expected_utility(a, s, V, mdp))
    return pi
Vopt=value_iteration(gridworld)
piopt = best_policy(gridworld, Vopt)
gridworld.policy_plot(piopt)
gridworld.v_plot(Vopt)
"""
3) Complete the function policy iteration below and use it to compute the optimal policy for the gridworld.
The function takes as input the MDP (including reward function and transition model) and is supposed to compute
the optimal policy using the policy iteration algorithm presented in the lecture. Compare the result with what
you got from running value_iteration and best_policy!
"""
def policy_iteration(mdp):
    "Solve an MDP by policy iteration"
    V = dict([(s, 0) for s in mdp.states])
    pi = dict([(s, random.choice(mdp.actions(s))) for s in mdp.states])
    while True:
        V = policy_evaluation(pi, V, mdp)  # find value function for this policy
        unchanged = True
        for s in mdp.states:
            a = argmax(mdp.actions(s), lambda a: expected_utility(a, s, V, mdp))  # greedy policy update
            if a != pi[s]:
                pi[s] = a
                unchanged = False
        if unchanged:
            return pi
```
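For reference, the two updates implemented above can be written out explicitly; they mirror the `R(s) + gamma * ...` expressions in the code, with $T(s' \mid s, a)$ the transition model and $\gamma$ the discount factor:

```latex
% Iterative policy evaluation (the update in policy_evaluation):
V_{k+1}(s) = R(s) + \gamma \sum_{s'} T(s' \mid s, \pi(s)) \, V_k(s')

% Value-iteration step (Bellman optimality update):
V_{k+1}(s) = R(s) + \gamma \max_{a \in A(s)} \sum_{s'} T(s' \mid s, a) \, V_k(s')
```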
<h2>Quadratic Regression Dataset - Linear Regression vs XGBoost</h2>
Model is trained with XGBoost installed on the notebook instance.
In the later examples, we will train using SageMaker's XGBoost algorithm.
Training on SageMaker takes several minutes (even for a simple dataset).
If an algorithm is supported in Python, we will try it locally on the notebook instance.
This allows us to quickly learn an algorithm and understand its tuning options, and then finally train on the SageMaker cloud.
In this exercise, let's compare XGBoost and Linear Regression on a quadratic regression dataset.
```
# Install xgboost in notebook instance.
#### Command to install xgboost
!conda install -y -c conda-forge xgboost
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, mean_absolute_error
# XGBoost
import xgboost as xgb
# Linear Regression
from sklearn.linear_model import LinearRegression
df = pd.read_csv('quadratic_all.csv')
df.head()
plt.plot(df.x,df.y,label='Target')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.legend()
plt.title('Quadratic Regression Dataset')
plt.show()
train_file = 'quadratic_train.csv'
validation_file = 'quadratic_validation.csv'
# Specify the column names as the file does not have column header
df_train = pd.read_csv(train_file,names=['y','x'])
df_validation = pd.read_csv(validation_file,names=['y','x'])
df_train.head()
df_validation.head()
plt.scatter(df_train.x,df_train.y,label='Training',marker='.')
plt.scatter(df_validation.x,df_validation.y,label='Validation',marker='.')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.title('Quadratic Regression Dataset')
plt.legend()
plt.show()
X_train = df_train.iloc[:,1:] # Features: 1st column onwards
y_train = df_train.iloc[:,0].ravel() # Target: 0th column
X_validation = df_validation.iloc[:,1:]
y_validation = df_validation.iloc[:,0].ravel()
# Create an instance of XGBoost Regressor
# XGBoost Training Parameter Reference:
# https://github.com/dmlc/xgboost/blob/master/doc/parameter.md
regressor = xgb.XGBRegressor()
regressor
regressor.fit(X_train,y_train, eval_set = [(X_train, y_train), (X_validation, y_validation)])
eval_result = regressor.evals_result()
training_rounds = range(len(eval_result['validation_0']['rmse']))
plt.scatter(x=training_rounds,y=eval_result['validation_0']['rmse'],label='Training Error')
plt.scatter(x=training_rounds,y=eval_result['validation_1']['rmse'],label='Validation Error')
plt.grid(True)
plt.xlabel('Iteration')
plt.ylabel('RMSE')
plt.title('Training Vs Validation Error')
plt.legend()
plt.show()
xgb.plot_importance(regressor)
plt.show()
```
## Validation Dataset Compare Actual and Predicted
```
result = regressor.predict(X_validation)
result[:5]
plt.title('XGBoost - Validation Dataset')
plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.')
plt.scatter(df_validation.x,result,label='predicted',marker='.')
plt.grid(True)
plt.legend()
plt.show()
# RMSE Metrics
print('XGBoost Algorithm Metrics')
mse = mean_squared_error(df_validation.y,result)
print(" Mean Squared Error: {0:.2f}".format(mse))
print(" Root Mean Square Error: {0:.2f}".format(mse**.5))
# Residual
# Over prediction and Under Prediction needs to be balanced
# Training Data Residuals
residuals = df_validation.y - result
plt.hist(residuals)
plt.grid(True)
plt.xlabel('Actual - Predicted')
plt.ylabel('Count')
plt.title('XGBoost Residual')
plt.axvline(color='r')
plt.show()
# Count number of values greater than zero and less than zero
value_counts = (residuals > 0).value_counts(sort=False)
print(' Under Estimation: {0}'.format(value_counts[True]))
print(' Over Estimation: {0}'.format(value_counts[False]))
# Plot for entire dataset
plt.plot(df.x,df.y,label='Target')
plt.plot(df.x,regressor.predict(df[['x']]) ,label='Predicted')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.legend()
plt.title('XGBoost')
plt.show()
```
## Linear Regression Algorithm
```
lin_regressor = LinearRegression()
lin_regressor.fit(X_train,y_train)
```
Compare the weights assigned by Linear Regression.
Original Function: 5*x**2 -23*x + 47 + some noise
Linear Regression Function: -15.08 * x + 709.86
The Linear Regression coefficients and intercept are not close to the actual values.
```
lin_regressor.coef_
lin_regressor.intercept_
result = lin_regressor.predict(df_validation[['x']])
plt.title('LinearRegression - Validation Dataset')
plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.')
plt.scatter(df_validation.x,result,label='predicted',marker='.')
plt.grid(True)
plt.legend()
plt.show()
# RMSE Metrics
print('Linear Regression Metrics')
mse = mean_squared_error(df_validation.y,result)
print(" Mean Squared Error: {0:.2f}".format(mse))
print(" Root Mean Square Error: {0:.2f}".format(mse**.5))
# Residual
# Over prediction and Under Prediction needs to be balanced
# Training Data Residuals
residuals = df_validation.y - result
plt.hist(residuals)
plt.grid(True)
plt.xlabel('Actual - Predicted')
plt.ylabel('Count')
plt.title('Linear Regression Residual')
plt.axvline(color='r')
plt.show()
# Count number of values greater than zero and less than zero
value_counts = (residuals > 0).value_counts(sort=False)
print(' Under Estimation: {0}'.format(value_counts[True]))
print(' Over Estimation: {0}'.format(value_counts[False]))
# Plot for entire dataset
plt.plot(df.x,df.y,label='Target')
plt.plot(df.x,lin_regressor.predict(df[['x']]) ,label='Predicted')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.legend()
plt.title('LinearRegression')
plt.show()
```
Linear Regression is showing clear symptoms of under-fitting.
The input features are not sufficient to capture the complex relationship.
<h2>Your Turn</h2>
You can correct this under-fitting issue by adding relevant features.
1. What feature will you add and why?
2. Complete the code and test
3. What performance do you see now?
```
# Specify the column names as the file does not have column header
df_train = pd.read_csv(train_file,names=['y','x'])
df_validation = pd.read_csv(validation_file,names=['y','x'])
df = pd.read_csv('quadratic_all.csv')
```
# Add new features
```
# Place holder to add new features to df_train, df_validation and df
# if you need help, scroll down to see the answer
# Add your code
X_train = df_train.iloc[:,1:] # Features: 1st column onwards
y_train = df_train.iloc[:,0].ravel() # Target: 0th column
X_validation = df_validation.iloc[:,1:]
y_validation = df_validation.iloc[:,0].ravel()
lin_regressor.fit(X_train,y_train)
```
Original Function: -23*x + 5*x**2 + 47 + some noise (rewritten with x term first)
```
lin_regressor.coef_
lin_regressor.intercept_
result = lin_regressor.predict(X_validation)
plt.title('LinearRegression - Validation Dataset')
plt.scatter(df_validation.x,df_validation.y,label='actual',marker='.')
plt.scatter(df_validation.x,result,label='predicted',marker='.')
plt.grid(True)
plt.legend()
plt.show()
# RMSE Metrics
print('Linear Regression Metrics')
mse = mean_squared_error(df_validation.y,result)
print(" Mean Squared Error: {0:.2f}".format(mse))
print(" Root Mean Square Error: {0:.2f}".format(mse**.5))
print("***You should see an RMSE score of 30.45 or less")
df.head()
# Plot for entire dataset
plt.plot(df.x,df.y,label='Target')
plt.plot(df.x,lin_regressor.predict(df[['x','x2']]) ,label='Predicted')
plt.grid(True)
plt.xlabel('Input Feature')
plt.ylabel('Target')
plt.legend()
plt.title('LinearRegression')
plt.show()
```
## Solution for under-fitting
add a new X**2 term to the dataframe
syntax:
```
df_train['x2'] = df_train['x']**2
df_validation['x2'] = df_validation['x']**2
df['x2'] = df['x']**2
```
### Tree Based Algorithms have a lower bound and upper bound for predicted values
```
# True Function
def quad_func(x):
    return 5*x**2 - 23*x + 47
# X is outside range of training samples
# New Feature: Adding X^2 term
X = np.array([-100,-25,25,1000,5000])
y = quad_func(X)
df_tmp = pd.DataFrame({'x':X,'y':y,'x2':X**2})
df_tmp['xgboost']=regressor.predict(df_tmp[['x']])
df_tmp['linear']=lin_regressor.predict(df_tmp[['x','x2']])
df_tmp
plt.scatter(df_tmp.x,df_tmp.y,label='Actual',color='r')
plt.plot(df_tmp.x,df_tmp.linear,label='LinearRegression')
plt.plot(df_tmp.x,df_tmp.xgboost,label='XGBoost')
plt.legend()
plt.xlabel('X')
plt.ylabel('y')
plt.title('Input Outside Range')
plt.show()
# X is inside range of training samples
X = np.array([-15,-12,-5,0,1,3,5,7,9,11,15,18])
y = quad_func(X)
df_tmp = pd.DataFrame({'x':X,'y':y,'x2':X**2})
df_tmp['xgboost']=regressor.predict(df_tmp[['x']])
df_tmp['linear']=lin_regressor.predict(df_tmp[['x','x2']])
df_tmp
# XGBoost Predictions have an upper bound and lower bound
# Linear Regression Extrapolates
plt.scatter(df_tmp.x,df_tmp.y,label='Actual',color='r')
plt.plot(df_tmp.x,df_tmp.linear,label='LinearRegression')
plt.plot(df_tmp.x,df_tmp.xgboost,label='XGBoost')
plt.legend()
plt.xlabel('X')
plt.ylabel('y')
plt.title('Input within range')
plt.show()
```
## Summary
1. In this exercise, we compared the performance of an XGBoost model and linear regression on a quadratic dataset.
2. The relationship between the input feature and the target was non-linear.
3. XGBoost handled it well, whereas linear regression underfit.
4. To correct the issue, we added an extra `x**2` feature for linear regression.
5. With this change, linear regression performed much better.

XGBoost can detect patterns involving non-linear relationships, whereas algorithms like linear regression may need complex feature engineering.
# Table of Contents
1. [Purpose](#Purpose)
2. [Requirements](#Requirements)
    1. [Abstract Stakeholder](#Abstract-Stakeholder)
    2. [Actual Stakeholder](#Actual-Stakeholder)
3. [Dependencies](#Dependencies)
    1. [R installation](#R-installation)
    2. [An R kernel for Jupyter notebooks](#An-R-kernel-for-Jupyter-notebooks)
    3. [Load R libraries for the analyses](#Load-R-libraries-for-the-analyses)
4. [Analyses](#Analyses)
    1. [First Leaf](#First-Leaf) (Inputs; Outputs: Histogram, Boxplots, Ridgeline Plots)
    2. [First Bloom](#First-Bloom)
5. [Code](#Code)
6. [Provenance](#Provenance)
7. [Citations](#Citations)
# Purpose
This [biogeographical analysis package](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst) (BAP) uses the [USA National Phenology Network](https://www.usanpn.org/usa-national-phenology-network) (USA-NPN)'s modeled information on phenological changes to inform and support management decisions on the timing and coordination of season-specific activities within the boundaries of a user-specified management unit. While various categories of phenological information are applicable to the seasonal allocation of resources, this package focuses on one of those, USA-NPN's modeled spring indices of first leaf and first bloom. The use case for design and development of the BAP was that of a resource manager using this analysis package and USA-NPN's Extended Spring Indices to guide the timing and location of treatments within their protected area.
# Requirements
## Abstract Stakeholder
Stakeholders for the information produced by this analysis package are people making decisions based on the timing of seasonal events at a specific location. Examples include resource managers, health professionals, and recreationalists.
Note: For more on the concept of "Abstract Stakeholder" please see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#abstract-stakeholder).
## Actual Stakeholder
To be determined
Note: For more on the concept of "Actual Stakeholder" see this [reference](https://github.com/usgs-bis/nbmdocs/blob/master/docs/baps.rst#actual-stakeholder).
# Dependencies
This notebook was developed using the R software environment. Several R software packages are required to run this scientific code in a Jupyter notebook. An R kernel for Jupyter notebooks is also required.
## R installation
Guidance on installing the R software environment is available at the [R Project](https://www.r-project.org). Several R libraries, listed below, are used for the analyses and visualizations in this notebook. General instructions for finding and installing libraries are also provided at the [R Project](https://www.r-project.org) website.
## An R kernel for Jupyter notebooks
This notebook uses [IRkernel](https://irkernel.github.io). At the time of this writing (2018-05-06), Karlijn Willems provides excellent guidance on installing the IRkernel and running R in a Jupyter notebook in her article entitled ["Jupyter And R Markdown: Notebooks With R"](https://www.datacamp.com/community/blog/jupyter-notebook-r#markdown)
## Load R libraries for the analyses
```
library(tidyverse)
library(ggplot2)
library(ggridges)
library(jsonlite)
library(viridis)
```
# Analyses
An understanding of the USA National Phenology Network's suite of [models and maps](https://www.usanpn.org/data/maps) is required to properly use this analysis package and to assess the results.
The Extended Spring Indices, the model used to estimate the timing of "first leaf" and "first bloom" events for early spring indicator species at a specific location, are detailed on this [page](https://www.usanpn.org/data/spring_indices) of the USA-NPN website. Note that both indices are based on the 2013 version of the underlying predictive model (Schwartz et al. 2013). The current model and its antecedents are described on the USA-NPN site and in the peer-reviewed literature (Ault et al. 2015, Schwartz 1997, Schwartz et al. 2006, Schwartz et al. 2013). Crimmins et al. (2017) documents the USA National Phenology Network gridded data products used in this analysis package. USA-NPN also provides an assessment of Spring Index uncertainty and error with their [Spring Index and Plausibility Dashboard](https://www.usanpn.org/data/si-x_plausibility).
## First Leaf
This analysis looks at the timing of First Leaf or leaf out for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_leaf_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis.
### Inputs
The operational BAP prototype retrieves data in real-time from the [USA National Phenology Network](https://www.usanpn.org)'s Web Processing Service (WPS) using a developer key issued by USA-NPN. Their WPS allows a key holder to request and retrieve model output values for a specified model, area of interest, and time period. Model output for the variable *average_leaf_prism* was retrieved 2018-01-27. The area of interest, Yellowstone National Park, was analyzed using information from the [Spatial Feature Registry](https://github.com/usgs-bis/nbmdocs/blob/master/docs/bis.rst). The specified time period was 1981 to 2016. This notebook provides a lightly processed version of that retrieval, [YellowstoneNP-1981-2016-processed-numbers.json](./YellowstoneNP-1981-2016-processed-numbers.json), for those who do not have a personal developer key.
```
# transform the BIS emitted JSON into something ggplot2 can work with
yell <- read_json("YellowstoneNP-1981-2016-processed-numbers.json", simplifyDataFrame = TRUE, simplifyVector = TRUE, flatten = TRUE)
yelldf <- as_tibble(yell)
yellt <- gather(yelldf, Year, DOY)
```
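For readers without an R environment, the same reshape (wide year columns to long Year/DOY pairs, the job `tidyr::gather()` does above) can be sketched in Python with the standard library. The miniature JSON below is hypothetical and only assumes the file maps each year to a list of per-grid-cell DOY values:

```python
import json

# Hypothetical miniature of YellowstoneNP-1981-2016-processed-numbers.json:
# year -> list of modeled first-leaf DOY values, one per grid cell
yell = json.loads('{"1981": [110, 115, 120], "1982": [105, 112, 118]}')

# gather(): one (Year, DOY) row per grid cell
yellt = [(year, doy) for year, values in yell.items() for doy in values]
assert len(yellt) == 6 and yellt[0] == ("1981", 110)
```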
### Outputs
#### Histogram
Produce a histogram of modeled results for Yellowstone National Park for all years within the specified period of interest (1981 to 2016). The visualization allows the user to assess the range and distribution of all the modeled values for the user-selected area for the entire, user-specified time period. Here, the modeled Leaf Spring Index values for each of the grid cells that fall within the boundary of Yellowstone National Park are binned by Day of Year for the entire period of interest (1981 to 2016 inclusive). Dotted vertical lines indicating the minimum (green), mean (red), and maximum (green) values of the dataset are also shown.
```
# produce a histogram for all years
ggplot(yellt, aes(DOY)) +
geom_histogram(binwidth = 1, color = "grey", fill = "lightblue") +
ggtitle("Histogram of First Leaf Spring Index, Yellowstone National Park (1981 - 2016)") +
geom_vline(aes(xintercept=mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5)
```
This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the above histogram. Operationalized, online versions of this visualization should be based on the guidance provided by the ggplot2 developers. See their section entitled [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) for details and approaches. The webpage provides links to their source code. Also, note the modeled grid cell values are discrete and should be portrayed as such in an operationalized graphic.
#### Boxplots
Produce a multiple boxplot display of the modeled results for Yellowstone National Park for each year within the specified time period. Each individual boxplot portrays that year's median, hinges, whiskers and "outliers". The multiple boxplot display allows the user to explore the distribution of modeled spring index values through time.
```
# Produce a multiple boxplot display with a boxplot for each year
ggplot(yellt, aes(y = DOY, x = Year, group = Year)) +
geom_boxplot() +
geom_hline(aes(yintercept = median(DOY, na.rm=T)), color = "blue", linetype = "dotted", size = 0.5) +
ggtitle("DRAFT: Boxplot of Spring Index, Yellowstone National Park (1981 to 2016)")
```
This notebook uses the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the multiple boxplot above. Base any operationalized, online versions of this visualization on the guidance provided by the ggplot2 developers. See their section entitled [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html) for details and approaches. Links to their source code are available at that web location.
#### Ridgeline Plots
Produce ridgeline plots for each year to better visualize changes in the distributions over time.
```
# ridgeline plot with gradient coloring based on day of year for each available year
ggplot(yellt, aes(x = DOY, y = Year, group = Year, fill = ..x..)) +
geom_density_ridges_gradient(scale = 3, rel_min_height = 0.01, gradient_lwd = 1.0, from = 80, to = 180) +
scale_x_continuous(expand = c(0.01, 0)) +
scale_y_continuous(expand = c(0.01, 0)) +
scale_fill_viridis(name = "Day of\nYear", option = "D", direction = -1) +
labs(title = 'DRAFT: Spring Index, Yellowstone National Park',
subtitle = 'Annual Spring Index by Year for the Period 1981 to 2016\nModel Results from the USA National Phenology Network',
y = 'Year',
x = 'Spring Index (Day of Year)',
caption = "(model results retrieved 2018-01-26)") +
theme_ridges(font_size = 12, grid = TRUE) +
geom_vline(aes(xintercept = mean(DOY, na.rm=T)), color = "red", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = min(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5) +
geom_vline(aes(xintercept = max(DOY, na.rm=T)), color = "green", linetype = "dotted", size = 0.5)
```
This notebook uses the [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package to produce the ridgeline plot above. Base any operationalized, online versions of this visualization on the guidance provided by the ggridges developer. See the R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) for details and approaches. Source code is available at their [GitHub repo](https://github.com/clauswilke/ggridges).
## First Bloom
This analysis looks at the timing of First Bloom for a specific location as predicted by the USA-NPN Extended Spring Indices models (https://www.usanpn.org/data/spring_indices, accessed 2018-01-27). The variable *average_bloom_prism* which is based on [PRISM](http://www.prism.oregonstate.edu) temperature data was used for this analysis.
Output visualizations and implementation notes follow the approach and patterns used for First Leaf: histograms, multiple boxplots and ridgeline plots.
# Code
Code used for this notebook is available at the [usgs-bcb/phenology-baps](https://github.com/usgs-bcb/phenology-baps) GitHub repository.
# Provenance
This prototype analysis package was a collaborative development effort between USGS [Core Science Analytics, Synthesis, and Libraries](https://www.usgs.gov/science/mission-areas/core-science-systems/csasl?qt-programs_l2_landing_page=0#qt-programs_l2_landing_page) and the [USA National Phenology Network](https://www.usanpn.org). Members of the scientific development team met and discussed use cases, analyses, and visualizations during the third quarter of 2016. Model output choices as well as accessing the information by means of the USA-NPN Web Processing Service were also discussed at that time.
This notebook was based upon those group discussions and Tristan Wellman's initial ideas for processing and visualizing the USA-NPN spring index data. That initial body of work and other supporting code is available at his GitHub repository, [TWellman/USGS_BCB-NPN-Dev-Space](https://github.com/TWellman/USGS_BCB-NPN-Dev-Space). This notebook used the [ggplot2](https://ggplot2.tidyverse.org/index.html) R library to produce the histograms, boxplots, and ridgeline plots. The ggplot2 developers provide online guidance and links to their source code for these at [*Histograms and frequency polygons*](https://ggplot2.tidyverse.org/reference/geom_histogram.html) and [*A box and whiskers plot (in the style of Tukey)*](https://ggplot2.tidyverse.org/reference/geom_boxplot.html). The [ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html) R package is used to produce the ridgeline plot. Usage is described in the R package vignette [Introduction to ggridges](https://cran.r-project.org/web/packages/ggridges/vignettes/introduction.html). The underlying source code is available at the author Claus O. Wilke's [GitHub repo](https://github.com/clauswilke/ggridges). Software developers at the Fort Collins Science Center worked with members of the team to operationalize the scientific code and make it publicly available on the web. An initial prototype application is available at https://my-beta.usgs.gov/biogeography/.
# Citations
Ault, T. R., M. D. Schwartz, R. Zurita-Milla, J. F. Weltzin, and J. L. Betancourt (2015): Trends and natural variability of North American spring onset as evaluated by a new gridded dataset of spring indices. Journal of Climate 28: 8363-8378.
Crimmins, T.M., R.L. Marsh, J. Switzer, M.A. Crimmins, K.L. Gerst, A.H. Rosemartin, and J.F. Weltzin. 2017. USA National Phenology Network gridded products documentation. U.S. Geological Survey Open-File Report 2017–1003. DOI: 10.3133/ofr20171003.
Monahan, W. B., A. Rosemartin, K. L. Gerst, N. A. Fisichelli, T. Ault, M. D. Schwartz, J. E. Gross, and J. F. Weltzin. 2016. Climate change is advancing spring onset across the U.S. national park system. Ecosphere 7(10):e01465. 10.1002/ecs2.1465
Schwartz, M. D. 1997. Spring index models: an approach to connecting satellite and surface phenology. Phenology in seasonal climates I, 23-38.
Schwartz, M.D., R. Ahas, and A. Aasa, 2006. Onset of spring starting earlier across the Northern Hemisphere. Global Change Biology, 12, 343-351.
Schwartz, M. D., T. R. Ault, and J. L. Betancourt, 2013: Spring onset variations and trends in the continental United States: past and regional assessment using temperature-based indices. International Journal of Climatology, 33, 2917–2922, 10.1002/joc.3625.
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are the allowed parameters, not defaults.

Each of these values must be present in the injected parameters (the notebook will raise an exception if any are missing).

Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tag display in Jupyter to see which cell is tagged.
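The contract described here reduces to set arithmetic over the injected dict, mirroring the validation cell further down in this template; a minimal sketch (with an abbreviated, hypothetical parameter set):

```python
required_parameters = {"experiment_name", "lr", "seed"}  # abbreviated for illustration

def check_parameters(supplied: dict) -> None:
    # Exact match required: flag both missing and unexpected keys
    supplied_keys = set(supplied)
    missing = required_parameters - supplied_keys
    extra = supplied_keys - required_parameters
    if missing or extra:
        raise RuntimeError(f"missing={sorted(missing)}, unexpected={sorted(extra)}")

check_parameters({"experiment_name": "demo", "lr": 1e-4, "seed": 7})  # exact match passes
```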
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:cores-oracle.run1",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "CORES_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"dataset_seed": 7,
"seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if 'parameters' not in locals() and 'parameters' not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
    p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
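# Sanity sketch of what the seeding above buys us: two generators seeded
# identically replay identical draws, so a rerun with the same seed sees the
# same shuffles. (Stdlib-only illustration; the real run also seeds numpy/torch.)
import random
_rng_a, _rng_b = random.Random(1337), random.Random(1337)
assert [_rng_a.random() for _ in range(3)] == [_rng_b.random() for _ in range(3)]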
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms != []: raise Exception("episode_transforms not implemented")
# Prefix each domain label so domains from different datasets cannot collide
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy indicating whether it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
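Note that `json.dumps(experiment)` will raise a `TypeError` if the results still contain NumPy scalars or arrays (the confusion counts and accuracies may well be NumPy types). A hedged sketch of one way to handle that, using an illustrative dict rather than the real experiment:

```python
import json
import numpy as np

# Hypothetical helper: convert NumPy types so json.dumps can serialize them.
def numpy_default(obj):
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

# Toy stand-in for the experiment dict above.
experiment_like = {"source_val_label_accuracy": np.float64(0.93),
                   "confusion": np.array([[10, 2], [1, 12]])}
serialized = json.dumps(experiment_like, default=numpy_default)
```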
| github_jupyter |
# Analyse data with Python Pandas
Welcome to this Jupyter Notebook!
Today you'll learn how to import a CSV file into a Jupyter Notebook, and how to analyse already cleaned data. This notebook is part of the course Python for Journalists at [datajournalism.com](https://datajournalism.com/watch/python-for-journalists). The data used originally comes from [the Electoral Commission website](http://search.electoralcommission.org.uk/Search?currentPage=1&rows=10&sort=AcceptedDate&order=desc&tab=1&open=filter&et=pp&isIrishSourceYes=false&isIrishSourceNo=false&date=Reported&from=&to=&quarters=2018Q12&rptPd=3617&prePoll=false&postPoll=false&donorStatus=individual&donorStatus=tradeunion&donorStatus=company&donorStatus=unincorporatedassociation&donorStatus=publicfund&donorStatus=other&donorStatus=registeredpoliticalparty&donorStatus=friendlysociety&donorStatus=trust&donorStatus=limitedliabilitypartnership&donorStatus=impermissibledonor&donorStatus=na&donorStatus=unidentifiabledonor&donorStatus=buildingsociety®ister=ni®ister=gb&optCols=Register&optCols=IsIrishSource&optCols=ReportingPeriodName), but is edited for training purposes. The edited dataset is available on the course website.
## About Jupyter Notebooks and Pandas
Right now you're looking at a Jupyter Notebook: an interactive, browser based programming environment. You can use these notebooks to program in R, Julia or Python - as you'll be doing later on. Read more about Jupyter Notebook in the [Jupyter Notebook Quick Start Guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html).
To analyse our data, we'll be using Python and Pandas. Pandas is an open-source Python library - basically an extra toolkit to go with Python - that is designed for data analysis. Pandas is flexible, easy to use and has lots of useful functions built right in. Read more about Pandas and its features in [the Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/). The fact that Pandas works in ways similar to both spreadsheets and SQL databases (though the latter won't be discussed in this course) makes it beginner friendly. :)
**Notebook shortcuts**
Within Jupyter Notebooks, there are some shortcuts you can use. If you'll be using more notebooks for your data analysis in the future, you'll remember these shortcuts soon enough. :)
* `esc` will take you into command mode
* `a` will insert cell above
* `b` will insert cell below
* `shift then tab` will show you the documentation for your code
* `shift and enter` will run your cell
* ` d d` will delete a cell
**Pandas dictionary**
* **dataframe**: dataframe is Pandas speak for a table with a labeled y-axis, also known as an index. (The index usually starts at 0.)
* **series**: a series is a list, a series can be made of a single column within a dataframe.
Before we dive in, a little more about Jupyter Notebooks. Every notebook is made up of cells. A cell can either contain Markdown text - like this one - or code. In the latter you can execute your code. To see what that means, type the following command in the next cell: `print("hello world")`.
```
print("hello world")
```
## Getting started
In the module 'Clean data' from this course, we cleaned up a dataset with donations to political parties in the UK. Now, we're going to analyse the data in that dataset. Let's start by importing the Pandas library, using `import pandas as pd`.
```
import pandas as pd
```
Now, import the cleaned dataset, use `df = pd.read_csv('/path/to/file_with_clean_data.csv')`.
## Importing data
```
df = pd.read_csv('results_clean.csv')
```
Let's see if the data is anything like you'd expect, use `df.head()`, `df.tail()` or `df.sample()`.
```
df.head(10)
```
Whoops! When we saved the data after cleaning it, the index was saved in an unnamed column. While importing, Pandas added a new index... Let's get rid of the 'Unnamed: 0' column. Drop it like it's hot... `df = df.drop('Unnamed: 0', axis=1)`.
```
df = df.drop('Unnamed: 0', axis=1)
```
Let's see if this worked, use `df.head()`, `df.tail()` or `df.sample()`.
```
df.tail(10)
```
Now, if this looks better, let's get started and analyse some data.
# Analyse data
## Statistical summary
In the module Clean data, you already saw the power of `df.describe()`. This function gives a basic statistical summary of every column in the dataset. It will give you even more information when you tell the function that you want everything included, like this: `df.describe(include='all')`
```
df.describe(include='all')
```
For columns with numeric values, `df.describe()` will give back the most information. Here's a full list of the parameters and their meaning:
**df.describe() parameters**
* **count**: number of values in that column
* **unique**: number of unique values in that column
* **top**: the most common value in that column
* **freq**: the most common value’s frequency
* **mean**: average
* **std**: standard deviation
* **min**: minimum value, lowest value in the column
* **25%**: first quartile (the 25th percentile)
* **50%**: second quartile (the 50th percentile), this is the same as the median
* **75%**: third quartile (the 75th percentile)
* **max**: maximum value, highest value in the column
If a column does not contain numeric values, only those parameters that are applicable are returned. Python gives you NaN values when that's the case - NaN is short for Not a Number.
Notice that 'count' is 300 for every column. This means that every column has a value for every row in the dataset. How do I know? I looked at the total number of rows, using `df.shape`.
```
df.shape
```
## Filter
Let's try to filter the dataframe based on the value in the Value column. You can do this using `df[df['Value'] > 10000 ]`. This will give you a dataframe with only donations of more than 10,000 pounds.
```
df[df['Value'] > 10000 ]
```
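You can also combine several filters in one expression. A minimal sketch with made-up data (the column names mirror the dataset, the values are invented):

```python
import pandas as pd

# Toy data; the real dataset has many more rows and columns.
donations = pd.DataFrame({
    "DonorName": ["A", "B", "C", "D"],
    "Value": [5000, 15000, 25000, 8000],
})

# Combine conditions with & (and) or | (or); each condition needs its own parentheses.
big_non_c = donations[(donations["Value"] > 10000) & (donations["DonorName"] != "C")]
```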
## Sort
Let's try to sort the data. Using the command `df.sort_values(by='column_name')` will sort the dataframe based on the column of your choosing. Sorting by default happens ascending, from small to big.
In case you want to see the sorting from big to small, descending, you'll have to type: `df.sort_values(by='column_name', ascending=False)`.
Now, let's sort the dataframe based on the Value column, so it's easy to find out who made the biggest donation.
The above commands will sort the dataframe by a column, but - since we never asked our notebook to - won't show the data. To sort the data and show us the new order of the top 10, we'll have to combine the command with `.head(10)` like this: `df.sort_values(by='column_name').head(10)`.
Now, what would you type if you want to see the 10 smallest donations?
```
df.sort_values(by='Value').head(10)
```
If you want to see the biggest donations made, there are two ways to do that. You could use `df.sort_values(by='Value').tail(10)`: since the sorted dataframe is ordered from small to big donations, the biggest donations will be in the last 10 rows.
Another way of doing this, is using `df.sort_values(by='Value', ascending=False).head(10)`. This would sort the dataframe based on the Value column from big to small. Personally I prefer the latter...
```
df.sort_values(by='Value', ascending=False).head(10)
```
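As an aside, Pandas also has a shortcut for "the n biggest values": `nlargest`. A small sketch with invented numbers:

```python
import pandas as pd

donations = pd.DataFrame({"DonorName": list("ABCDE"),
                          "Value": [5, 50, 20, 40, 10]})

# Equivalent to sort_values(by='Value', ascending=False).head(3), but shorter.
top3 = donations.nlargest(3, "Value")
```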
## Sum
Wow! There are some big donations in our dataset. If you want to know how much money was donated in total, you need to get the sum of the column Value. Use `df['Value'].sum()`.
```
df['Value'].sum()
```
## Count
Let's look at the receivers of all this donation money. Use `df['RegulatedEntityName'].count()` to count the number of times a regulated entity received a donation.
```
df['RegulatedEntityName'].count()
```
Not really what we were looking for, right? Using `.count()` gives you the number of values in a column. Not the number of appearances per unique value in the column.
You'll need to use `df['RegulatedEntityName'].value_counts()` if you want to know that...
```
df['RegulatedEntityName'].value_counts()
```
Ok. Let's see if you really understand the difference between `.value_counts()` and `.count()`. If you want to know how many donors have donated, you should count the values in the DonorName column. Do you use `df['DonorName'].value_counts()` or `df['DonorName'].count()`?
When in doubt, try both. Remember: we're using a Jupyter Notebook here. It's a **Notebook**, so you can't go wrong here. :)
```
print(df['DonorName'].count())
df['DonorName'].value_counts()
```
Interesting: apparently Ms Jane Mactaggart, Mr Duncan Greenland, and Lord Charles Falconer of Thoroton have donated most often. Let's look into that...
## Groupby
If you're familiar with Excel, you probably heard of 'pivot tables'. Python Pandas has a function very similar to those pivot tables.
Let's start with a small refresher: pivot tables are summaries of a dataset inside a new table. Huh? That might be a lot to take in.
Look at our example: data on donations to political parties in the UK. If we want to know how much each unique donor donated, we are looking for a specific summary of our dataset. To get the answer to this question: 'How much have Ms Jane Mactaggart, Mr Duncan Greenland, and Lord Charles Falconer of Thoroton donated in total?', we need Pandas to sum up all donations for every donor in the dataframe. In a way, this is a summary of the original dataframe, made by grouping values by, in this case, the DonorName column.
In Python this can be done using the `groupby` function. Let's create a new series called donors, that has all donors and the total sum of their donations in there. Use `donors = df.groupby('DonorName')['Value'].sum()`. This is a combination of several functions: group data by 'DonorName', and sum the data in the 'Value' column...
```
donors = df.groupby('DonorName')['Value'].sum()
donors.head(10)
```
To see if it worked, you'll have to add `donors.head(10)`, otherwise your computer won't know that you actually want to see the result of your effort. :)
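Since the grouped result is itself a Series, you can chain the earlier sorting tricks onto it. A small sketch with invented numbers, ranking donors by their total:

```python
import pandas as pd

donations = pd.DataFrame({
    "DonorName": ["A", "B", "A", "C", "B"],
    "Value": [100, 200, 300, 50, 25],
})

# Sum per donor, then sort the totals from big to small.
donors = donations.groupby("DonorName")["Value"].sum()
top_donors = donors.sort_values(ascending=False)
```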
## Pivot tables
But Pandas has its own pivot table as well. You can get a similar result in a better looking table using the `df.pivot_table` function.
Here's a perfectly fine `.pivot_table` example:
`df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10)`
Let's go over this code before running it. What will `df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10)` actually do?
For the dataframe called df, create a pivot table where:
- the values in the pivot table should be based on the Value column
- the index of the pivot table should be based on the DonorName column, in other words: create a row for every unique value in the DonorName column
- create a new column for every unique value in the Year column
- aggregate the data that fills up these columns (from the Value column, see?) by summing it for every row.
Are you ready to try it yourself?
```
df.pivot_table(values="Value", index="DonorName", columns="Year", aggfunc='sum').sort_values(2018).head(10)
```
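One gotcha: donor/year combinations without any donations show up as NaN in the pivot table. The `fill_value` argument replaces them with a value of your choosing; a sketch with toy data:

```python
import pandas as pd

donations = pd.DataFrame({
    "DonorName": ["A", "A", "B"],
    "Year": [2017, 2018, 2018],
    "Value": [10, 20, 30],
})

# Donor B made no donation in 2017; fill_value=0 turns that NaN into 0.
pt = donations.pivot_table(values="Value", index="DonorName",
                           columns="Year", aggfunc="sum", fill_value=0)
```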
## Save your data
Now that we've put all this work into our dataset, let's save a copy. Of course Pandas has a nifty command for that too. Use `dataframe.to_csv('filename.csv', encoding='utf8')`.
Beware: use a different name than the filename of the original data file, or it will be overwritten.
```
df.to_csv('results clean - pivot table.csv')
```
In case you want to check if a new file was created in your directory, you can use the `pwd` and `ls` commands. At the beginning of this module, we used these commands to print the working directory (`pwd`) and list the content of the working directory (`ls`).
First, use `pwd` to see in which folder - also known as directory - you are:
```
pwd
```
Now use `ls` to get a list of all files in this directory. If everything worked your newly saved datafile should be among the files in the list.
```
ls
```
| github_jupyter |
# Example File:
In this package, we show three examples:
<ol>
<li>4 site XY model</li>
<li>4 site Transverse Field XY model with random coefficients</li>
<li><b> Custom Hamiltonian from OpenFermion </b> </li>
</ol>
## Clone and Install The Repo via command line:
```
git clone https://github.com/kemperlab/cartan-quantum-synthesizer.git
cd ./cartan-quantum-synthesizer/
pip install .
```
# Building Custom Hamiltonians
In this example, we will use OpenFermion to generate a Hubbard Model Hamiltonian, then use the Jordan-Wigner methods of OpenFermion and some custom functions to feed the output into the Cartan-Quantum-Synthesizer package
## Step 1: Build the Hamiltonian in OpenFermion
```
from CQS.methods import *
from CQS.util.IO import tuplesToMatrix
import openfermion
from openfermion import FermionOperator
t = 1
U = 8
mu = 1
systemSize = 4 # number of qubits needed
#2 site, 1D lattice, indexed as |↑_0↑_1↓_2↓_3>
#Hopping terms
H = -t*(FermionOperator('0^ 1') + FermionOperator('1^ 0') + FermionOperator('2^ 3') + FermionOperator('3^ 2'))
#Coulomb Terms
H += U*(FermionOperator('0^ 0 2^ 2') + FermionOperator('1^ 1 3^ 3'))
#Chemical Potential
H += -mu*(FermionOperator('0^ 0') + FermionOperator('1^ 1') + FermionOperator('2^ 2') + FermionOperator('3^ 3'))
print(H)
#Jordan Wigner Transform
HPauli = openfermion.jordan_wigner(H)
print(HPauli)
#Custom Function to convert OpenFermion operators to a format readable by CQS:
#Feel free to use or modify this code, but it is not built into the CQS package
def OpenFermionToCQS(H, systemSize):
"""
Converts the Operators to a list of (PauliStrings)
Args:
H(obj): The OpenFermion Operator
systemSize (int): The number of qubits in the system
"""
stringToTuple = {
'X': 1,
'Y': 2,
'Z': 3
}
opList = []
coList = []
for op in H.terms.keys(): #Pulls the operator out of the QubitOperator format
coList.append(H.terms[op])
opIndexList = []
opTypeDict = {}
tempTuple = ()
for (opIndex, opType) in op:
opIndexList.append(opIndex)
opTypeDict[opIndex] = opType
for index in range(systemSize):
if index in opIndexList:
tempTuple += (stringToTuple[opTypeDict[index]],)
else:
tempTuple += (0,)
opList.append(tempTuple)
return (coList, opList)
#The new format looks like:
print(OpenFermionToCQS(HPauli, systemSize))
#Now, we can put all this together:
#Step 1: Create an Empty Hamiltonian Object
HubbardH = Hamiltonian(systemSize)
#Use Hamiltonian.addTerms to build the Hubbard model Hamiltonian:
HubbardH.addTerms(OpenFermionToCQS(HPauli, systemSize))
#This gives:
HubbardH.getHamiltonian(type='printText')
#There's an IIII term we would rather not deal with, so we can remove it like this:
HubbardH.removeTerm((0,0,0,0))
#This gives:
print('Identity/Global Phase removed:')
HubbardH.getHamiltonian(type='printText')
#Be careful choosing an involution, because it might not decompose such that the Hamiltonian is in M:
try:
HubbardC = Cartan(HubbardH)
except Exception as e:
print('Default Even/Odd Involution does not work:')
print(e)
print('countY does work though. g = ')
HubbardC = Cartan(HubbardH, involution='countY')
print(HubbardC.g)
```
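For reference, here is a small stand-alone sketch (not part of the CQS package) of the tuple encoding produced by `OpenFermionToCQS` above: each position in the tuple is one qubit, with 0, 1, 2, 3 standing for I, X, Y, Z respectively:

```python
# Convert a CQS-style Pauli tuple into a readable Pauli string.
def tuple_to_pauli_string(op):
    letters = "IXYZ"
    return "".join(letters[i] for i in op)

# e.g. a chemical-potential term acting as Z on qubit 0:
print(tuple_to_pauli_string((3, 0, 0, 0)))  # ZIII
```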
| github_jupyter |
```
import requests
import simplejson as json
import pandas as pd
import numpy as np
import os
import math
from openpyxl import load_workbook
df={"mapping":{
"Afferent / Efferent Arteriole Endothelial": "Afferent Arteriole Endothelial Cell",
"Ascending Thin Limb": "Ascending Thin Limb Cell",
"Ascending Vasa Recta Endothelial": "Ascending Vasa Recta Endothelial Cell",
"B": "B cell",
"Classical Dendritic": "Dendritic Cell (classical)",
"Connecting Tubule": "Connecting Tubule Cell",
"Connecting Tubule Intercalated Type A": "Connecting Tubule Intercalated Cell Type A",
"Connecting Tubule Principal": "Connecting Tubule Principal Cell",
"Cortical Collecting Duct Intercalated Type A": "Collecting Duct Intercalated Cell Type A",
"Cortical Collecting Duct Principal": "Cortical Collecting Duct Principal Cell",
"Cortical Thick Ascending Limb": "Cortical Thick Ascending Limb Cell",
"Cortical Vascular Smooth Muscle / Pericyte": "Vascular Smooth Muscle Cell/Pericyte (general)",
"Cycling Mononuclear Phagocyte": "Monocyte",
"Descending Thin Limb Type 1": "Descending Thin Limb Cell Type 1",
"Descending Thin Limb Type 2": "Descending Thin Limb Cell Type 2",
"Descending Thin Limb Type 3": "Descending Thin Limb Cell Type 3",
"Descending Vasa Recta Endothelial": "Descending Vasa Recta Endothelial Cell",
"Distal Convoluted Tubule Type 1": "Distal Convoluted Tubule Cell Type 1",
"Distal Convoluted Tubule Type 2": "Distal Convoluted Tubule Cell Type 1",
"Fibroblast": "Fibroblast",
"Glomerular Capillary Endothelial": "Glomerular Capillary Endothelial Cell",
"Inner Medullary Collecting Duct": "Inner Medullary Collecting Duct Cell",
"Intercalated Type B": "Intercalated Cell Type B",
"Lymphatic Endothelial": "Lymphatic Endothelial Cell",
"M2 Macrophage": "M2-Macrophage",
"Macula Densa": "Macula Densa cell",
"Mast": "Mast cell",
"Medullary Fibroblast": "Fibroblast",
"Medullary Thick Ascending Limb": "Medullary Thick Ascending Limb Cell",
"Mesangial": "Mesangial Cell",
"Monocyte-derived": "Monocyte",
"Natural Killer T": "Natural Killer T Cell",
"Neutrophil": "Neutrophil",
"Non-classical monocyte": "non Classical Monocyte",
"Outer Medullary Collecting Duct Intercalated Type A": "Outer Medullary Collecting Duct Intercalated Cell Type A",
"Outer Medullary Collecting Duct Principal": "Outer Medullary Collecting Duct Principal Cell",
"Papillary Tip Epithelial": "Endothelial",
"Parietal Epithelial": "Parietal Epithelial Cell",
"Peritubular Capilary Endothelial": "Peritubular Capillary Endothelial Cell",
"Plasma": "Plasma cell",
"Plasmacytoid Dendritic": "Dendritic Cell (plasmatoid)",
"Podocyte": "Podocyte",
"Proximal Tubule Epithelial Segment 1": "Proximal Tubule Epithelial Cell Segment 1",
"Proximal Tubule Epithelial Segment 2": "Proximal Tubule Epithelial Cell Segment 2",
"Proximal Tubule Epithelial Segment 3": "Proximal Tubule Epithelial Cell Segment 3",
"Renin-positive Juxtaglomerular Granular": "Juxtaglomerular granular cell (Renin positive)",
"Schwann / Neural": "other",
"T": "T Cell",
"Vascular Smooth Muscle / Pericyte": "Vascular Smooth Muscle Cell/Pericyte (general)"}
}
df = pd.DataFrame(df)
df.reset_index(inplace=True)
df.rename(columns = {"index":"AZ.CT/LABEL","mapping":"ASCTB.CT/LABEL"},inplace = True)
df.reset_index(inplace=True,drop=True)
df
len(df)
df_1=pd.read_excel('./Data/Final/kidney.xlsx',sheet_name='Final_Matches')
len(df_1)
df_1
```
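A hedged sketch of how the crosswalk above could be checked against the Final_Matches sheet (toy labels; the real `df_1` columns may differ): a left merge with `indicator=True` flags AZ labels that have no match.

```python
import pandas as pd

# Toy stand-ins for the crosswalk and the Final_Matches sheet.
crosswalk = pd.DataFrame({
    "AZ.CT/LABEL": ["B", "Mast", "Podocyte"],
    "ASCTB.CT/LABEL": ["B cell", "Mast cell", "Podocyte"],
})
matches = pd.DataFrame({"AZ.CT/LABEL": ["B", "Podocyte"]})

# _merge is "left_only" for crosswalk rows absent from the matches table.
checked = crosswalk.merge(matches, on="AZ.CT/LABEL", how="left", indicator=True)
unmatched = checked[checked["_merge"] == "left_only"]["AZ.CT/LABEL"].tolist()
```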
| github_jupyter |
Single-channel CSC (Constrained Data Fidelity)
==============================================
This example demonstrates solving a constrained convolutional sparse coding problem with a greyscale signal
$$\mathrm{argmin}_\mathbf{x} \sum_m \| \mathbf{x}_m \|_1 \; \text{such that} \; \left\| \sum_m \mathbf{d}_m * \mathbf{x}_m - \mathbf{s} \right\|_2 \leq \epsilon \;,$$
where $\mathbf{d}_{m}$ is the $m^{\text{th}}$ dictionary filter, $\mathbf{x}_{m}$ is the coefficient map corresponding to the $m^{\text{th}}$ dictionary filter, and $\mathbf{s}$ is the input image.
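To make the data-fidelity term concrete, here is a hedged NumPy sketch (toy shapes, not SPORCO's internal implementation) of evaluating the synthesis sum $\sum_m \mathbf{d}_m * \mathbf{x}_m$ with circular convolution via the FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((3, 3, 2))   # two 3x3 dictionary filters d_m
X = rng.standard_normal((8, 8, 2))   # matching coefficient maps x_m

# Zero-pad the filters to the signal size, multiply in the frequency
# domain, and sum over the filter index m.
Df = np.fft.fft2(D, s=(8, 8), axes=(0, 1))
Xf = np.fft.fft2(X, axes=(0, 1))
recon = np.real(np.fft.ifft2(np.sum(Df * Xf, axis=-1)))
```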
```
from __future__ import print_function
from builtins import input
import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
import sporco.metric as sm
from sporco.admm import cbpdn
```
Load example image.
```
img = util.ExampleImages().image('kodim23.png', scaled=True, gray=True,
idxexp=np.s_[160:416,60:316])
```
Highpass filter example image.
```
npd = 16
fltlmbd = 10
sl, sh = signal.tikhonov_filter(img, fltlmbd, npd)
```
Load dictionary and display it.
```
D = util.convdicts()['G:12x12x36']
plot.imview(util.tiledict(D), fgsz=(7, 7))
```
Set [admm.cbpdn.ConvMinL1InL2Ball](http://sporco.rtfd.org/en/latest/modules/sporco.admm.cbpdn.html#sporco.admm.cbpdn.ConvMinL1InL2Ball) solver options.
```
epsilon = 3.4e0
opt = cbpdn.ConvMinL1InL2Ball.Options({'Verbose': True, 'MaxMainIter': 200,
'HighMemSolve': True, 'LinSolveCheck': True,
'RelStopTol': 5e-3, 'AuxVarObj': False, 'rho': 50.0,
'AutoRho': {'Enabled': False}})
```
Initialise and run CSC solver.
```
b = cbpdn.ConvMinL1InL2Ball(D, sh, epsilon, opt)
X = b.solve()
print("ConvMinL1InL2Ball solve time: %.2fs" % b.timer.elapsed('solve'))
```
Reconstruct image from sparse representation.
```
shr = b.reconstruct().squeeze()
imgr = sl + shr
print("Reconstruction PSNR: %.2fdB\n" % sm.psnr(img, imgr))
```
Display low pass component and sum of absolute values of coefficient maps of highpass component.
```
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(sl, title='Lowpass component', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(np.sum(abs(X), axis=b.cri.axisM).squeeze(), cmap=plot.cm.Blues,
title='Sparse representation', fig=fig)
fig.show()
```
Display original and reconstructed images.
```
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(img, title='Original', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(imgr, title='Reconstructed', fig=fig)
fig.show()
```
Get iterations statistics from solver object and plot functional value, ADMM primary and dual residuals, and automatically adjusted ADMM penalty parameter against the iteration number.
```
its = b.getitstat()
fig = plot.figure(figsize=(20, 5))
plot.subplot(1, 3, 1)
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional', fig=fig)
plot.subplot(1, 3, 2)
plot.plot(np.vstack((its.PrimalRsdl, its.DualRsdl)).T,
ptyp='semilogy', xlbl='Iterations', ylbl='Residual',
lgnd=['Primal', 'Dual'], fig=fig)
plot.subplot(1, 3, 3)
plot.plot(its.Rho, xlbl='Iterations', ylbl='Penalty Parameter', fig=fig)
fig.show()
```
| github_jupyter |
# Six-Axis Stewart Platform Simulation
```
import numpy as np
import pandas as pd
from sympy import *
init_printing(use_unicode=True)
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import seaborn as sns
sns.set()
%matplotlib inline
```
### Stewart Func
```
α, β, γ = symbols('α β γ')
x, y, z = symbols('x y z')
r, R, w, W, t = symbols('r R w W t')
# Rotation matrices for fixed-axis rotations about x, y, z
rotx = lambda θ : Matrix([[1, 0, 0],
[0, cos(θ), -sin(θ)],
[0, sin(θ), cos(θ)]])
roty = lambda θ : Matrix([[cos(θ), 0, sin(θ)],
[0, 1, 0],
[-sin(θ), 0, cos(θ)]])
rotz = lambda θ : Matrix([[cos(θ), -sin(θ), 0],
[sin(θ), cos(θ), 0],
[0, 0, 1]])
# Pose generator: rotations about fixed coordinate axes
def poses(α, β, γ):
return rotz(γ)*roty(β)*rotx(α)
# Centroid position generator
def posit(x, y, z):
return Matrix([x, y, z])
# Base: set the 6 anchor points
def basic(r, w):
b1 = Matrix([w/2, r, 0])
b2 = Matrix([-w/2, r, 0])
b3 = rotz(pi*2/3)*b1
b4 = rotz(pi*2/3)*b2
b5 = rotz(pi*2/3)*b3
b6 = rotz(pi*2/3)*b4
return [b1, b2, b3, b4, b5, b6]
# Platform: set the 6 anchor points
def plat(r, w, pos=poses(0, 0, 0), pit=posit(0, 0, 5)):
p1 = Matrix([-w/2, r, 0])
p1 = rotz(-pi/3)*p1
p2 = Matrix([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])*p1
p3 = rotz(pi*2/3)*p1
p4 = rotz(pi*2/3)*p2
p5 = rotz(pi*2/3)*p3
p6 = rotz(pi*2/3)*p4
lst = [p1, p2, p3, p4, p5, p6]
for n in range(6):
lst[n] = (pos*lst[n]) + pit
return lst
# Lengths of the six legs
def leng(a, b):
if a.ndim == 1:
return (((a - b)**2).sum())**0.5
else:
return (((a - b)**2).sum(1))**0.5
```
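As a quick numeric sanity check of this geometry (a stand-alone NumPy sketch with assumed parameters, independent of the symbolic code above): if the base and platform share the same anchor layout and the platform is simply lifted to height h with no rotation, all six legs must have length h.

```python
import numpy as np

def rotz_np(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def hexagon_points(radius, width):
    # Two anchor points per side, rotated to the three 120-degree positions.
    p1 = np.array([width / 2, radius, 0.0])
    p2 = np.array([-width / 2, radius, 0.0])
    pts = []
    for k in range(3):
        Rk = rotz_np(2 * np.pi * k / 3)
        pts += [Rk @ p1, Rk @ p2]
    return np.array(pts)

h = 10.0
base_pts = hexagon_points(5.0, 2.0)
plat_pts = hexagon_points(5.0, 2.0) + np.array([0.0, 0.0, h])
leg_lengths = np.linalg.norm(plat_pts - base_pts, axis=1)
```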
### Base & platform
```
basic(R, W)
plat(r, w, poses(α, β, γ), posit(x, y, z))
```
### Set α, β, γ, x, y, z (these may be functions of time t)
```
pos = poses(sin(2*t), cos(t), 0)
pit = posit(x, y, z)
baspt = basic(R, W)
baspt = np.array(baspt)
pltpt = plat(r, w, pos, pit)
pltpt = np.array(pltpt)
```
### Lengths of the 6 legs (leg 1 shown)
```
length = leng(baspt, pltpt)
```
### Set parameters r = 10, R = 5, w = 2, W = 2, x = 0, y = 0, z = 10
```
x1 = length[0].subs([(r, 10), (R, 5), (w, 2), (W, 2), (x, 0), (y, 0), (z, 10)])
x1
```
### Differentiate once to get the velocity
```
dx1 = diff(x1, t)
dx1
```
### Differentiate twice to get the acceleration
```
ddx1 = diff(dx1, t)
ddx1
```
### Plot
```
tline = np.linspace(0, 2*np.pi, 361)
xlst = lambdify(t, x1, 'numpy')
dxlst = lambdify(t, dx1, 'numpy')
ddxlst = lambdify(t, ddx1, 'numpy')
plt.rcParams['figure.figsize'] = [16, 6]
plt.plot(tline, xlst(tline), label = 'x')
plt.plot(tline, dxlst(tline), label = 'v')
plt.plot(tline, ddxlst(tline), label = 'a')
plt.ylabel('Length')
plt.xlabel('Time')
plt.legend()
```
| github_jupyter |
```
from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import os, sys
sys.path.append('/home/sandm/Notebooks/stay_classification/src/')
```
# TODOs (from 09.06.2020)
1. Strip away the non-useful functions
2. Document the remaining functions
3. Move the remaining functions to modules
4. Test the modules
5. Clean up this NB
# Introduction: movement analysis
From a sequence of signaling events, _eg_ GPS measurements, determine locations where the user remains for a significant duration of time, called "stays". For each of these, there should be a beginning and end, as well as a location.
Generally, this is meant for movement on the surface of the earth, but for present purposes, it is easiest to illustrate in one spatial dimension "1D"; all of the problems and strategies can be generalized to 2D as needed.
**Note** the signaling events for a given user, form a set $\mathcal{E} := \{e_i = (\mathbf{x}_i, t_i), i=[0,N-1] \; | \; t_{i+1}>t_i\}$
```
from synthetic_data.trajectory import get_stay
from synthetic_data.trajectory import get_journey_path, get_segments
from synthetic_data.masking import get_mask_with_duplicates
from synthetic_data.trajectory import get_stay_segs, get_adjusted_stays
from synthetic_data.noise import get_noisy_segs, get_noisy_path, get_noise_arr
dsec = 1/3600.0
time = np.arange(0,24,dsec)
stays = [
get_stay( 0.00, 6.00,-1.00), #home
get_stay( 7.50, 16.50, 1.00), #work, afternoon
get_stay( 18.00, 24.00,-1.00) # overnight
]
t_segs, x_segs = get_stay_segs(stays)
raw_journey = get_journey_path(time, get_segments(time, stays, threshold=0.5))
dup_mask = get_mask_with_duplicates(time, 0.05, 0.3)
time_sub = time[dup_mask]
raw_journey_sub = raw_journey[dup_mask]
segments = get_segments(time, stays, threshold=0.5)
new_stays = get_adjusted_stays(segments, time_sub)
new_t_segs, new_x_segs = get_stay_segs(new_stays)
noises = get_noise_arr(0.02, 0.15, len(segments))
noise_segments = get_noisy_segs(segments, noises)
noise_journey_sub = get_noisy_path(time_sub, raw_journey_sub, noise_segments)
```
**Figure** In the plot above, time is on the horizontal axis. One interpretation of this motion is that the user initially makes a "stay" at a location near $x=-1$, then makes another stay at $x=1$, and ends with a return to the initial location. As with all of this data, there is an intrinsic noise that must be considered.
# Goal: Stay detection and positioning
The goal is to identify stays by their beginnings and ends, and to associate each with a position. This effectively means optimally matching clusters of points to flat line segments.
For a set of events within a cluster $\mathcal{C}_l = \{e_i \; | \; i \in [m,n]_l \subset [0,N-1]\}$, a "flat line" has $|\mathbf{x}_m-\mathbf{x}_n| = 0$. Again, this is easiest to see in 1D but it also holds in 2D.
# Strategy
To find the stays, we consider some requirements.
Firstly, there are two main requirements (previously noted):
1. identify the start/stop of a stay
2. estimate a dominant location, _ie_ the "central location" of the stay
* this is where the agent spends the majority of the time, _e.g._ within a building, at a park, etc.
Then, there are some minor requirements:
1. the clusters should contain a minimum number of points
2. the duration between the first and last points should exceed $\Delta t$
3. the clusters should be as long (in time) as possible
* if there is a sufficient temporal break between two consecutive events without a significant location change, one must specify how this case is to be handled.
4. the clusters should contain as many events as possible
One additional requirement which affects all of the above: **cluster outliers should be identified and ignored**
* outliers can change the central location and also the beginning/ending of the clusters
* from the calculation of the central location
* counting them could still help in identifying a cluster without using their position
All of these must be considered together.
This clearly defines an optimization problem:
* maximize the length of the fit line, while
* mimimizing its error by adjusting its position ($\mathrm{x}_{\mathrm{opt.}}$) and end-regions ($t_{\mathrm{start}}, t_{\mathrm{stop}}$)
* ignoring the outliers.
**Notes**
* When the error is the root-mean-square error: $\epsilon := \sqrt{ \frac{1}{N}\sum^N_i(\mathrm{x}_i-\mathrm{x}_{\mathrm{opt.}})^2}$;
* this simplifies the position since the mean (or "centroid") is the value of $\mathrm{x}_{\mathrm{opt.}}$ which minimizes this error, leaving only the adjustment of the end-regions and outliers for the optimization task.
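This can be checked numerically; the sketch below (not from the notebook's modules) scans candidate positions and confirms that the minimizer of the error coincides with the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.1, size=200)  # a noisy cluster around x = 1

def rmse(x_opt, x=x):
    """Root-mean-square error of a candidate central location."""
    return np.sqrt(np.mean((x - x_opt) ** 2))

# Scan candidate positions; the minimizer should coincide with the mean.
candidates = np.linspace(0.5, 1.5, 1001)
best = candidates[np.argmin([rmse(c) for c in candidates])]
print(abs(best - x.mean()) < 1e-3)  # -> True
```

This is why the optimization reduces to adjusting the end-regions and the outlier set: the position is given for free by the centroid.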
This suggests that there is at least a solution to this problem:
One could consider all possible contiguous subsequences and all possible combinations of their outliers, measure the error of each, and then pick any of the lowest-error subsequence-outlier combinations. However, this is similar to the maximum-subarray problem, and in the worst case it would be $\mathcal{O}(n^3)$.
The search space is countably finite but the approach is impractical; it can, however, serve as a benchmark against which to compare any other algorithm that aims to do the same within some acceptable error.
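A minimal sketch of such a benchmark (outlier handling omitted for brevity; the function name is hypothetical): enumerate every contiguous subsequence, score it by the RMSE around its centroid, and keep the longest window under a tolerance. Naively this is cubic, matching the estimate above.

```python
import numpy as np

def brute_force_longest_stay(t, x, eps):
    """Return (m, n) of the longest-duration window whose centroid RMSE <= eps."""
    best = None
    n_events = len(x)
    for m in range(n_events):
        for n in range(m + 1, n_events):
            seg = x[m:n + 1]
            err = np.sqrt(np.mean((seg - seg.mean()) ** 2))
            if err <= eps:
                # Keep the window with the largest temporal extent.
                if best is None or (t[n] - t[m]) > (t[best[1]] - t[best[0]]):
                    best = (m, n)
    return best

t = np.arange(10, dtype=float)
x = np.array([0., 0.1, -0.1, 0., 3., 3.1, 2.9, 3., 3.05, 2.95])
print(brute_force_longest_stay(t, x, eps=0.25))  # -> (4, 9)
```

With prefix sums of $x$ and $x^2$ the per-window error drops to $\mathcal{O}(1)$, giving $\mathcal{O}(n^2)$ overall; the cubic form above mirrors the estimate in the text.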
### The Box
```
eps = 0.25
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
fig, ax = plt.subplots(1,1,figsize=(20,10))
begin = t_segs[3]+1
begin_buff = begin-2
end = t_segs[4]-1
end_buff = end+2
loc = x_segs[3]
# The adjusted raw stays
#plt.plot(t_segs, x_segs, '--', marker='o', color='k', linewidth=4.0, markerfacecolor='w', markersize=6.0, markeredgewidth=2.0, label='adjusted raw stays')
ax.plot([begin,end], [loc,loc], 'k--', dashes=[3,2], linewidth=3.0)
ax.plot([begin_buff,begin], [loc,loc], color='k', marker='|', markersize=40.0, dashes=[1,2], linewidth=3.0)
ax.plot([end,end_buff], [loc,loc], color='k', marker='|', markersize=40.0, dashes=[1,2], linewidth=3.0)
#plt.plot(t_segs[3:5], x_segs[3:5], '--', marker='o', color='k', linewidth=4.0, markerfacecolor='w', markersize=6.0, markeredgewidth=2.0, label='adjusted raw stays')
ax.plot(time_sub, raw_journey_sub, ':', label='raw journey')
ax.plot(time_sub, noise_journey_sub, '.-', label='noisy journey', alpha=0.5)
rect_inner = Rectangle((begin, loc-eps), end-begin, 2*eps)
rect_outer = Rectangle((begin_buff, loc-eps), end_buff-begin_buff, 2*eps)
rect_outer2 = Rectangle((begin_buff/2, loc-eps), end+(begin-begin_buff), 2*eps)
# Create patch collection with specified colour/alpha
pc = PatchCollection([rect_outer2], \
facecolor='gray', alpha=0.2, edgecolor='k',linewidth=0)
# Create patch collection with specified colour/alpha
pc1 = PatchCollection([rect_outer], \
facecolor='gray', alpha=0.2, edgecolor='k',linewidth=0)
pc2 = PatchCollection([rect_inner], \
facecolor='gray', alpha=0.5, edgecolor='k',linewidth=0)
# Add collection to axes
ax.add_collection(pc)
ax.add_collection(pc1)
ax.add_collection(pc2)
# interface tracking profiles
arrowprops=dict(arrowstyle="<->",
shrinkA=0.5,
mutation_scale=30.0,
connectionstyle="arc3", linewidth=4.0)
# the arrows
arrowcentery = 1.0
arrowcenterx = 12
arrowcenterh = 0.5
ax.annotate("", xy=(arrowcenterx, arrowcentery-arrowcenterh), xytext=(arrowcenterx, arrowcentery+arrowcenterh),
arrowprops=arrowprops)
dt_arrowcentery = 0.25
ax.annotate("", xy=(begin_buff, arrowcentery-dt_arrowcentery), xytext=(begin, arrowcentery-dt_arrowcentery),
arrowprops=arrowprops)
ax.annotate("", xy=(end, arrowcentery-dt_arrowcentery), xytext=(end_buff, arrowcentery-dt_arrowcentery),
arrowprops=arrowprops)
mid_point = lambda x1,x2: 0.5*(x1+x2)
delta_t_texty = 1.5
ax.annotate(r"Adjust starting point by $\Delta t$", fontsize=24.0,
xy=(mid_point(begin,begin_buff), 1.1), xycoords='data',
xytext=(begin-8,delta_t_texty), textcoords='data',
arrowprops=dict(arrowstyle="->", color="0.5",
shrinkA=5, shrinkB=5,
patchA=None, patchB=None,
connectionstyle="arc3,rad=-0.3", linewidth=2.0
),
)
ax.annotate(r"Adjust ending point by $\Delta t$", fontsize=24.0,
xy=(mid_point(end,end_buff), 1.1), xycoords='data',
xytext=(end, delta_t_texty), textcoords='data',
arrowprops=dict(arrowstyle="->", color="0.5",
shrinkA=5, shrinkB=5,
patchA=None, patchB=None,
connectionstyle="arc3,rad=0.3", linewidth=2.0
),
)
ax.annotate(r"Adjust $x$-position of central location",fontsize= 24,
xy=(arrowcenterx+0.25, arrowcentery-0.1), xycoords='data',
xytext=(arrowcenterx-5, arrowcentery-2.0), textcoords='data',
arrowprops=dict(arrowstyle="->", color="0.5",
shrinkA=5, shrinkB=5,
patchA=None, patchB=None,
connectionstyle="arc3,rad=0.3", linewidth=2.0
),
)
'''plt.text(0, 0.1, r'$\delta$',
{'color': 'black', 'fontsize': 24, 'ha': 'center', 'va': 'center',
'bbox': dict(boxstyle="round", fc="white", ec="black", pad=0.2)})
'''
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
plt.ylim(-1.5, 2)
plt.grid(visible=False);
```
Once the box (whose height is given by the spatial tolerance) is positioned well, _ie_ on the centroid, extending the box forwards or backwards in time makes no change to the _score_ of the box.
Here, the score could be something like the number of points, or the std/MSE; whatever it is, it should saturate at some point so that extending the box makes no difference. This convergence provides a stopping criterion.
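One way to operationalize the saturation idea (a sketch with hypothetical names, not the module's `extend_box`): track the score as the box grows and stop once it changes by less than a small fraction of the tolerance over a patience window.

```python
import numpy as np

def grows_until_saturated(scores, eps, patience=5):
    """Return the index at which the score saturates: the first point after
    which `patience` consecutive score changes are all below eps/10."""
    for i in range(len(scores) - patience):
        window = np.abs(np.diff(scores[i:i + patience + 1]))
        if np.all(window < eps / 10):
            return i
    return len(scores) - 1

# A score that climbs as the box grows, then flattens out.
scores = [0.2, 0.5, 0.8, 0.95, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(grows_until_saturated(scores, eps=0.25))  # -> 4
```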
### Box method
```
import random
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, 20)]
random.shuffle(colors)
```
## From here
```
from stay_classification.box_method import asymm_box_method_modular
from stay_classification.box_method import get_thresh_duration, get_slope
eps = 0.25
time_thresh = 5/12
long_time_thresh = 1.0
slope_thresh = 1.0
count_thresh = 50
min_t, max_t = 0.5, 23.5
fig, ax = plt.subplots(1,1,figsize=(20,10))
# The adjusted raw stays
#plt.plot(t_segs, x_segs, '--', marker='o', color='k', linewidth=4.0, markerfacecolor='w', markersize=6.0, markeredgewidth=2.0, label='adjusted raw stays')
ax.plot(time_sub, noise_journey_sub, '.-', color='gray', label='noisy journey', alpha=0.25)
ax.plot(time_sub, raw_journey_sub, ':', color='k', label='raw journey')
#ax.plot(time_sub, noise_journey_sub, '.-', label='noisy journey', alpha=0.5)
last_ind = 0
nnn = 0
for timepoint in np.arange(min_t,max_t,0.25):
if timepoint < time_sub[last_ind]:
continue
mean, start_ind, last_ind = asymm_box_method_modular(time_sub,time_thresh,noise_journey_sub,eps,timepoint)
if time_sub[last_ind]-time_sub[start_ind] < time_thresh:
continue
t0, t1 = get_thresh_duration(eps, mean)(noise_journey_sub[start_ind:last_ind],start_ind)
if time_sub[t1]-time_sub[t0] < time_thresh:
continue
# If the stay is less than 1 hour, check the slope of the segment
if time_sub[t1]-time_sub[t0] < long_time_thresh:
xdata = time_sub[t0:t1]
ydata = noise_journey_sub[t0:t1]
slope = get_slope(xdata, ydata)
print(timepoint, slope)
if abs(slope) > slope_thresh:
continue
#print(timepoint,":", mean, start_ind, last_ind)
ax.plot([time_sub[start_ind],time_sub[last_ind]], [mean,mean], '--', color=colors[nnn], label=f'ID #{timepoint}')
t_diff = abs(time_sub[t1]-time_sub[t0])
ax.plot([time_sub[t0],time_sub[t1]], [mean,mean], '--', \
dashes=[1,1], linewidth=5, color=colors[nnn], \
label=f'ID #{timepoint}: {round(t_diff,2)}, sub')
# Add the box
rect_color = "gray" #colors[nnn]
rect = Rectangle((time_sub[t0], mean-eps), t_diff, 2*eps)
pc = PatchCollection([rect], \
facecolor=rect_color, alpha=0.2, edgecolor=rect_color,linewidth=1)
ax.add_collection(pc)
print(timepoint, t_diff,t0,t1)
nnn += 1
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
plt.ylim(-1.5, 2)
#plt.xlim(min_t, max_t)
plt.xlim(-0.1, 24.1)
#plt.xlim(15.1, 19.1)
plt.title('Rough cut', fontsize=36)
plt.grid(visible=False);
plt.legend();
rand_range = lambda size, max_, min_: (max_-min_)*np.random.random_sample(size=size) + min_
from synthetic_data.trajectory import get_stay
from synthetic_data.trajectory import get_journey_path, get_segments
from synthetic_data.masking import get_mask_with_duplicates
from synthetic_data.trajectory import get_stay_segs, get_adjusted_stays
from synthetic_data.noise import get_noisy_segs, get_noisy_path, get_noise_arr
from synthetic_data.trajectory_class import get_trajectory
dsec = 1/3600.0
time = np.arange(0,24,dsec)
event_frac = rand_range(1,0.01,0.001)[0]
duplicate_frac = rand_range(1,0.3,0.05)[0]
print(event_frac, duplicate_frac)
configs = {
'threshold':0.5,
'event_frac':event_frac,
'duplicate_frac':duplicate_frac,
'noise_min':0.02,
'noise_max':0.15
}
nr_stays = np.random.randint(10)
stay_time_bounds = np.concatenate((np.array([0]),rand_range(2*nr_stays, 24, 0),np.array([24])))
stay_time_bounds = np.sort(stay_time_bounds)
stay_xlocs = rand_range(nr_stays+1, 2, - 2.0)
stays = []
for n in range(nr_stays+1):
nn = 2*n
stay = get_stay(stay_time_bounds[nn], stay_time_bounds[nn+1], stay_xlocs[n])
#print(n,nn,nn+1,stay)
stays.append(stay)
time_sub, raw_journey_sub, noise_journey_sub = get_trajectory(stays, time, configs)
segments = get_segments(time, stays, threshold=0.5)
new_stays = get_adjusted_stays(segments, time_sub)
new_t_segs, new_x_segs = get_stay_segs(new_stays)
plt.figure(figsize=(20,10))
#plt.plot(t_segs, x_segs, ':', marker='|', color='grey', linewidth=2.0, markerfacecolor='w', markersize=30.0, markeredgewidth=1.0, dashes=[0.5,0.5], label='raw stays')
#plt.plot(new_t_segs, new_x_segs, 'ko--', linewidth=3.0, markerfacecolor='w', markersize=4.0, markeredgewidth=1.0, label='adjusted raw stays')
plt.plot(time_sub, raw_journey_sub, ':', label='raw journey')
plt.plot(time_sub, noise_journey_sub, '.-', label='noisy journey', alpha=0.5)
plt.legend();
#plt.xlim([6.2,6.6]);
# DONE
def get_thresh_duration2(eps, mean):
# TODO: this fails when sub_arr.size = 0
upper = mean+eps
lower = mean-eps
def meth(sub_arr, start):
mask = np.where((sub_arr < upper) & (sub_arr > lower))[0] + start
#print(mask)
return mask.min(), mask.max()
return meth
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
segs_plot_kwargs = {'linestyle':'--', 'marker':'o', 'color':'k', 'linewidth':4.0, 'markerfacecolor':'w', 'markersize':6.0, 'markeredgewidth':2.0}
```
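The empty-mask failure flagged in the TODO inside `get_thresh_duration2` can be guarded against; a hypothetical variant (not the module's API) that returns `None` instead of raising when no points fall inside the band:

```python
import numpy as np

def get_thresh_duration_safe(eps, mean):
    """Like get_thresh_duration2, but tolerant of an empty band."""
    upper, lower = mean + eps, mean - eps
    def meth(sub_arr, start):
        mask = np.where((sub_arr < upper) & (sub_arr > lower))[0] + start
        if mask.size == 0:  # previously min()/max() raised on an empty array
            return None
        return int(mask.min()), int(mask.max())
    return meth

arr = np.array([0.0, 0.1, 2.0, 0.05])
print(get_thresh_duration_safe(0.25, 0.0)(arr, start=10))  # -> (10, 13)
print(get_thresh_duration_safe(0.25, 5.0)(arr, start=10))  # -> None
```

With this guard, a caller can simply skip the timepoint when `None` comes back rather than catching a `ValueError`.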
### Debugging: when there is no thresh. mean
```
from stay_classification.box_method import extend_box, get_thresh_mean
from stay_classification.checks import check_means
#DONE
def bug_get_converged(converged, means, indices, count_thresh, time_diff, time_thresh=0.5):
# Originally, there were a few conditions for convergence --> moved to "extend_box"
'''
print("BGC:",indices[0],indices[-1],len(indices))
# If it converged early, get the converged results;
if converged: & ((len(indices)>count_thresh) | ((time_diff>time_thresh) & (len(indices)>10))):
index = min(len(indices), count_thresh)
last_mean = means[-index]
last_index = indices[-index]
'''
# If it converged early, get the converged results;
if converged:
last_mean = means[-1]
last_index = indices[-1]
else:
# else, get the boundary value
last_mean = means[-1]
last_index = indices[-1]
return last_mean, last_index
embedded = lambda t0,t1,pair: ((t0 >= pair[0]) & (t0 <= pair[1]) | (t1 >= pair[0]) & (t1 <= pair[1]))
# DONE
def get_time_ind(t_arr, timepoint, time_thresh, direction):
index = np.where((t_arr < (timepoint)) & \
(t_arr > (timepoint-2*time_thresh)))  # use the t_arr argument, not the global time_sub
n = 2
while index[0].size == 0:
index = np.where((t_arr < (timepoint)) & \
(t_arr > (timepoint + direction*n*time_thresh)))
n+=1
if direction == 1:
return index[0].max()
else:
return index[0].min()
# DONE
def bbug_extend_box(t_arr, x_loc, working_index, fixed_index, means, count_thresh = 50):
keep_running = (working_index > 1) & (working_index < len(x_loc)-1)
indices = []
if working_index < fixed_index:
# Go backwards in time
direction = -1
else:
# Go forwards in time
direction = 1
mean = means[-1]
converged_mean = mean
converged_mean_ind0 = working_index
while keep_running:
#print(mean, direction)
# Update and store the working index
working_index += direction*1
indices.append(working_index)
# Update and store the mean
if direction == -1:
mean = get_thresh_mean(eps,mean)(x_loc[working_index:fixed_index])
else:
mean = get_thresh_mean(eps,mean)(x_loc[fixed_index:working_index])
means.append(mean)
if np.isnan(mean):
#print(mean)
break
# Stopping criteria:
# if the thresholded mean doesn't change upon getting new samples
# * if the duration is too long and there are sufficient number of samples
if mean != converged_mean:
converged_mean = mean
converged_mean_ind = working_index
converged_mean_ind0 = working_index
else:
converged_mean_ind = working_index
time_diff = abs(t_arr[fixed_index]-t_arr[working_index])
ctime_diff = abs(t_arr[converged_mean_ind0]-t_arr[converged_mean_ind])
if ((ctime_diff>1.0) & (mean == converged_mean)):
print('cdrop',ctime_diff)
break
# When the mean either converges or stops
if ((len(indices)>count_thresh) | ((time_diff>0.5) & (len(indices)>5))):
#print(time_diff,len(indices))
nr_events = min(len(indices), count_thresh)
# see also: bug_check_means(means,nr_events,0.25)
if check_means(means,nr_events):
print('drop',time_diff)
break
#print(f"{t_arr[working_index]:.3f} {time_diff:.3f} {ctime_diff:.3f}", \
# len(indices), fixed_index, working_index, converged_mean_ind0, converged_mean_ind, \
# f"\t{mean:.5f} {x_loc[working_index]:.3f} {mean+eps:.5f}",)#,[m == m0 for m in means[-count_thresh:]])
keep_running = (working_index > 1) & (working_index < len(x_loc)-1)
return means, indices, keep_running
```
~~`get_thresh_duration`~~ $\to$ `get_bounded_indices` in `box_classifier.py` **TODO** check args!
~~`extend_box`~~
~~`get_counts`~~ $\to$ `get_bounded_indices` in `box_classifier.py`
**TODO** check args!
~~`get_thresh_mean`~~
~~`get_converged`~~
`get_slope`$\to$ `get_slope` in `box_classifier.py`
New:<br/>
1. $\checkmark$ ~~`bug_asymm_box_method_modular`~~ $\to$ `make_box`
* $\checkmark$ ~~`get_time_ind`~~
* $\checkmark$ ~~`bbug_extend_box`~~
* $\checkmark$ ~~`bug_get_converged`~~
2. $\checkmark$ ~~`bbug_extend_box`~~ $\to$ `extend_edge`
* $\checkmark$ ~~`get_thresh_mean`~~
* $\checkmark$ ~~`check_means`~~
```
# DONE
def bug_asymm_box_method_modular(t_arr, time_thresh, x_loc, eps, timepoint, count_thresh = 50, verbose=False):
# 1. Initialization
# 1.1. Init and store the start, end points from the timepoint
if verbose: print(f"\t1. {timepoint-time_thresh:.3f} < {timepoint:.3f} < {timepoint+time_thresh:.3f}")
start = get_time_ind(t_arr, timepoint, time_thresh, -1)
end = get_time_ind(t_arr, timepoint, time_thresh, 1)
starts, ends = [], []
starts.append(start)
ends.append(end)
# 1.2. Initialize and store the mean for the region
mean = np.mean(x_loc[start:end])
means = [mean]
# 2. Extend the box in the backwards direction
# 2.1. Extension phase
if verbose: print(f"\t2. {mean:.4f} {start:4d} {end:4d}")
means, indices, keep_running = bbug_extend_box(t_arr, x_loc, start, end, means)
# 2.2. Check if NAN --> TODO: check why this happens!
if verbose: print(f"\t2.1. {mean:.4f} {start:4d} {end:4d}", keep_running)
if np.isnan(means[-1]):
if verbose: print(f"\t2.1. \t\tDrop {means[-1]:.4f} {starts[-1]:4d}", end)
return means[-1], starts[-1], end
starts += indices
# 2.3. If it converged early, get the converged results;
if verbose: print(f"\t2.2. {means[-1]:.4f} {starts[-1]:4d} {end:4d}")
tdiff = t_arr[end]-t_arr[starts[0]]
mean, start = bug_get_converged(keep_running, means, starts, count_thresh, tdiff)
# 2.4. Additional check if NAN
if np.isnan(mean):
if verbose: print(f"\t2.3. \t\tDrop {means[-1]:.4f} {starts[-1]:4d} {end:4d}")
return means[-1], starts[-1], end
# 3. Extend the box in the forwards direction
# 3.1. Extension phase
if verbose: print(f"\t3. {mean:.4f} {start:4d} {end:4d}", keep_running)
means, indices, keep_running = bbug_extend_box(t_arr, x_loc, end, start, means)
# 3.2. Check if NAN --> TODO: check why this happens!
if verbose: print(f"\t3.1. {mean:.4f} {start:4d} {end:4d}", keep_running)
if np.isnan(means[-1]):
if verbose: print(f"\t3.1. \t\tDrop {means[-1]:.4f} {start:4d} {ends[-1]:4d}")
return means[-1], start, ends[-1]
ends += indices
# 3.3. If it converged early, get the converged results
if verbose: print(f"\t3.3. {means[-1]:.4f} {start:4d} {end:4d}", keep_running)
tdiff = t_arr[ends[-1]]-t_arr[start]
mean, end = bug_get_converged(keep_running, means, ends, count_thresh, tdiff)
# 3.4. Additional check if NAN
if np.isnan(mean):
if verbose: print(f"\n\t3.4. \t\tDrop {means[-1]:.4f} {start:4d} {ends[-1]:4d}")
return means[-1], start, ends[-1]
# 4
if verbose: print(f"\t4. {mean:.4f} {start:4d} {end:4d}")
return mean, start, end
def time_overlap(t0,t1,tt0,tt1):
if ((t0>=tt0)&(t0<tt1)):
return True
elif ((t1>=tt0)&(t1<tt1)):
return True
elif ((t0 <= tt0) & (tt1 <= t1)):
return True
else:
return False
def bug_get_slope(eps, mean):
upper = mean+eps
lower = mean-eps
def meth(t_subarr, x_subarr):
mask = np.where((x_subarr < upper) & (x_subarr > lower))
ub_xdata = t_subarr[mask] - t_subarr[mask].mean()
ub_ydata = x_subarr[mask] - x_subarr[mask].mean()
return (ub_xdata.T.dot(ub_ydata))/(ub_xdata.T.dot(ub_xdata))
return meth
def check_true(t_s, t_l, tsegs):
for t_1, t_2 in zip(tsegs.tolist()[::3],tsegs.tolist()[1::3]):
if time_overlap(t_s, t_l, t_1, t_2 ):
return t_1, t_2
```
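The `time_overlap` predicate defined above covers three cases: the left endpoint of the first interval lies inside the second, the right endpoint does, or the first interval fully contains the second. A few self-contained spot-checks (the function is restated here so the cell runs on its own):

```python
# Restated from the cell above so this check is self-contained.
def time_overlap(t0, t1, tt0, tt1):
    return ((t0 >= tt0) and (t0 < tt1)) or \
           ((t1 >= tt0) and (t1 < tt1)) or \
           ((t0 <= tt0) and (tt1 <= t1))

print(time_overlap(1, 3, 2, 5))  # right endpoint falls inside -> True
print(time_overlap(6, 8, 2, 5))  # disjoint intervals -> False
print(time_overlap(1, 9, 2, 5))  # full containment -> True
```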
### Run
```
configs = {
'eps':0.25,
'time_thresh':5/12,
'long_time_thresh':1.0,
'slope_thresh':0.50,
'count_thresh':50
}
eps = configs['eps']
time_thresh = configs['time_thresh']
long_time_thresh = configs['long_time_thresh']
slope_thresh = configs['slope_thresh']
count_thresh = configs['count_thresh']
min_t, max_t = 0.1, 23.95
fig, ax = plt.subplots(1,1,figsize=(22,10))
# The adjusted raw-stays
plt.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
ax.plot(time_sub, noise_journey_sub, '.-', color='gray', label='noisy journey', alpha=0.25)
ax.plot(time_sub, raw_journey_sub, ':', color='k', label='raw journey')
t0,t1 = 0,1
start_ind, last_ind = t0, t1
pairs = []
nnn = 0
t_diff = 0.0  # initialized here so the first status print below does not raise a NameError
for timepoint in np.arange(min_t,23.95,0.25):
# If the current timepoint is less than the last box-end, skip ahead
# TODO: this is useful but possibly, without refinement, misses some stays
# HOWEVER: without it, it doesn't work!
if (time_sub[start_ind] <= timepoint) & (timepoint <= time_sub[last_ind]):
#print("\t\t\talready processed, skip")
continue
else:
print(f"\nStart at {timepoint:.3f}, dt = {t_diff:.3f}, {t0}, {t1}")
# Process the time point
mean, start_ind, last_ind = bug_asymm_box_method_modular(\
time_sub,time_thresh,noise_journey_sub,eps,timepoint, 50, True)
# Drop if a NAN was encountered --> failed to find a mean
if np.isnan(mean):
print("\t\t\tmean = NaN, skip")
continue
# If the duration of the stay is too small, skip ahead
if time_sub[last_ind]-time_sub[start_ind] < time_thresh:
print("\t\t\ttoo short, skip")
continue
# If the duration due to the thresholded box is too short, skip ahead
#print(start_ind,last_ind,mean)
t0, t1 = get_thresh_duration2(eps, mean)(noise_journey_sub[start_ind:last_ind],start_ind)
#t0, t1 = get_thresh_duration(eps, mean)(noise_journey_sub[start_ind:last_ind],start_ind)
print(start_ind,last_ind,t0, t1)
if time_sub[t1]-time_sub[t0] < time_thresh:
print("\t\t\talso too short, skip")
continue
# If the stay is less than 1 hour, check the slope of the segment --> This isn't watertight
if time_sub[t1]-time_sub[t0] < long_time_thresh:
xdata = time_sub[t0:t1]
ydata = noise_journey_sub[t0:t1]
slope = bug_get_slope(eps, mean)(xdata, ydata)
print(f"\tAt {timepoint:.3f}, slope = {slope:.3f}")
if abs(slope) > slope_thresh:
print("\t\t\tslope is too big, skip")
continue
# If the stay is less than 2 hours, check the slope of the segment
if time_sub[t1]-time_sub[t0] < 2*long_time_thresh:
xdata = time_sub[t0:t1]
ydata = noise_journey_sub[t0:t1]
slope = bug_get_slope(eps, mean)(xdata, ydata)
print(f"\tAgain, at {timepoint:.3f}, slope = {slope:.3f}")
if abs(slope) > slope_thresh:
print("\t\t\tAgain, slope is too big, skip")
continue
# If the stay is embedded with other stays --> This is tricky!
if any([embedded(t0,t1, p) for p in pairs] + [embedded(p[0],p[1],[t0,t1]) for p in pairs]):
print("\t\t\tEmbedded, skip")
continue
# PLOTTING
dashing = "-"+23*" -"
ddashing = "="+30*"=="
t_start = time_sub[t0]
t_last = time_sub[t1]
'''seg_ind = min(3*nnn+1,len(new_t_segs))
t_seg_0 = new_t_segs[seg_ind-1]
t_seg_1 = new_t_segs[seg_ind]'''
#t_seg_0, t_seg_1 = check_true(t_start, t_last, new_t_segs)
true_vals = ""
'''if time_overlap(t_start, t_last, t_seg_0, t_seg_1 ):
true_vals = f'{t_seg_0:2.3f}, {t_seg_1:2.3f}'
else:
true_vals = 'Unmatched'
'''
true_vals = f"True: {true_vals};"
print(f'\nPLOT: ID #{nnn:3d} {dashing}\n\t{timepoint:.3f}', \
f'{t0}, {t1}', \
true_vals,\
f'Pred: {t_start:.3f}, {t_last:.3f}',\
f'{mean:.3f}', \
f'\n\n{ddashing}')
#print(timepoint, ":", mean, start_ind, last_ind)
ax.plot([time_sub[start_ind],time_sub[last_ind]], [mean,mean],\
'--', color=colors[nnn], \
label=f'ID #{nnn},{timepoint:.3f}, {time_sub[start_ind]:.3f}, {time_sub[last_ind]:.3f}')
t_diff = abs(time_sub[t1]-time_sub[t0])
ax.plot([time_sub[t0],time_sub[t1]], [mean,mean], '--', \
dashes=[1,1], linewidth=5, color=colors[nnn], \
label=f'ID #{nnn}: {round(t_diff,2)}, sub')
# Add the box
rect_color = "gray" #colors[nnn]
rect = Rectangle((time_sub[t0], mean-eps), t_diff, 2*eps)
pc = PatchCollection([rect], \
facecolor=rect_color, alpha=0.2, edgecolor=rect_color,linewidth=1)
ax.add_collection(pc)
#print(f"End At {timepoint:.3f}, dt = {t_diff:.3f}, {t0}, {t1}")
pairs.append([t0,t1])
start_ind, last_ind = t0, t1
nnn += 1
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
ymin = noise_journey_sub.min()-1*eps
ymax = noise_journey_sub.max()+1*eps
plt.ylim(ymin, ymax)
ax.xaxis.set_major_locator(MultipleLocator(1))
#ax.xaxis.set_major_formatter(FormatStrFormatter('%d'))
# For the minor ticks, use no labels; default NullFormatter.
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
plt.xlim(min_t, max_t)
#plt.xlim(-0.1, 19.1
#plt.xlim(15.1, 19.1)
plt.title('Rough cut', fontsize=36)
plt.grid(visible=True);
plt.legend(bbox_to_anchor=(1.2, 0.5), loc='right', ncol=1);
```
**Note**
* a box will extend too _far_ when
* the duration of a constant mean is too long, or
* the number of events for a constant mean is too large
* **!** the number of samples must be considered as well, since the sample count can grow while the time delta does not
* a box will cut off too _early_ when
* the number of events for a temporally constant mean is too large
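These observations suggest a stopping rule that watches both the event count and the elapsed duration, mirroring the condition already used inside `bbug_extend_box`; a hypothetical standalone sketch:

```python
def should_stop(n_new_events, dt_constant_mean, count_thresh=50,
                time_thresh=0.5, min_events=5):
    """Stop extending once the mean has been constant for too many events,
    OR for a long enough duration backed by a minimum number of events."""
    return (n_new_events > count_thresh) or \
           (dt_constant_mean > time_thresh and n_new_events > min_events)

print(should_stop(60, 0.1))  # too many events -> True
print(should_stop(10, 0.8))  # long enough duration with enough events -> True
print(should_stop(3, 0.8))   # duration long but too few events -> False
```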
### Run2
```
min_t, max_t = 0.1, 23.95
fig, ax = plt.subplots(1,1,figsize=(22,10))
# The adjusted raw-stays
plt.plot(new_t_segs, new_x_segs, **segs_plot_kwargs, label='adjusted raw stays')
ax.plot(time_sub, noise_journey_sub, '.-', color='gray', label='noisy journey', alpha=0.25)
ax.plot(time_sub, raw_journey_sub, ':', color='k', label='raw journey')
t0,t1 = 0,1
start_ind, last_ind = t0, t1
pairs = []
nnn = 0
t_diff = 0.0  # initialized here so the first status print below does not raise a NameError
timepoint = min_t
while timepoint < max_t:
# If the current timepoint is less than the last box-end, skip ahead
# TODO: this is useful but possibly, without refinement, misses some stays
# HOWEVER: without it, it doesn't work!
if (time_sub[start_ind] <= timepoint) & (timepoint <= time_sub[last_ind]):
timepoint = timepoint+time_thresh
print(f"\t\t\t {timepoint:.3f} already processed, skip")
continue
else:
print(f"\nStart at {timepoint:.3f}, dt = {t_diff:.3f}, {t0}, {t1}")
# Process the time point
mean, start_ind, last_ind = bug_asymm_box_method_modular(\
time_sub,time_thresh,noise_journey_sub,eps,timepoint, 50, True)
# Drop if a NAN was encountered --> failed to find a mean
if np.isnan(mean):
print("\t\t\tmean = NaN, skip")
timepoint = time_sub[last_ind]+time_thresh
continue
# If the duration of the stay is too small, skip ahead
if time_sub[last_ind]-time_sub[start_ind] < time_thresh:
print("\t\t\ttoo short, skip")
timepoint = time_sub[last_ind]+time_thresh
continue
# If the duration due to the thresholded box is too short, skip ahead
#print(start_ind,last_ind,mean)
t0, t1 = get_thresh_duration2(eps, mean)(noise_journey_sub[start_ind:last_ind],start_ind)
#t0, t1 = get_thresh_duration(eps, mean)(noise_journey_sub[start_ind:last_ind],start_ind)
print(start_ind,last_ind,t0, t1)
if time_sub[t1]-time_sub[t0] < time_thresh:
print("\t\t\talso too short, skip")
timepoint = time_sub[t1]+time_thresh
continue
# If the stay is less than 1 hour, check the slope of the segment --> This isn't watertight
if time_sub[t1]-time_sub[t0] < long_time_thresh:
xdata = time_sub[t0:t1]
ydata = noise_journey_sub[t0:t1]
slope = bug_get_slope(eps, mean)(xdata, ydata)
print(f"\tAt {timepoint:.3f}, slope = {slope:.3f}")
if abs(slope) > slope_thresh:
print("\t\t\tslope is too big, skip")
timepoint = time_sub[t1]+time_thresh
continue
# If the stay is less than 2 hours, check the slope of the segment
if time_sub[t1]-time_sub[t0] < 2*long_time_thresh:
xdata = time_sub[t0:t1]
ydata = noise_journey_sub[t0:t1]
slope = bug_get_slope(eps, mean)(xdata, ydata)
print(f"\tAgain, at {timepoint:.3f}, slope = {slope:.3f}")
if abs(slope) > slope_thresh:
print("\t\t\tAgain, slope is too big, skip")
timepoint = time_sub[t1]+time_thresh
continue
# If the stay is embedded with other stays --> This is tricky!
if any([embedded(t0,t1, p) for p in pairs] + [embedded(p[0],p[1],[t0,t1]) for p in pairs]):
print("\t\t\tEmbedded, skip")
timepoint = time_sub[t1]+time_thresh
continue
# PLOTTING
dashing = "-"+23*" -"
ddashing = "="+30*"=="
t_start = time_sub[t0]
t_last = time_sub[t1]
'''seg_ind = min(3*nnn+1,len(new_t_segs))
t_seg_0 = new_t_segs[seg_ind-1]
t_seg_1 = new_t_segs[seg_ind]'''
t_seg_0, t_seg_1 = check_true(t_start, t_last, new_t_segs)
true_vals = ""
if time_overlap(t_start, t_last, t_seg_0, t_seg_1 ):
true_vals = f'{t_seg_0:2.3f}, {t_seg_1:2.3f}'
else:
true_vals = 'Unmatched'
true_vals = f"True: {true_vals};"
print(f'\nPLOT: ID #{nnn:3d} {dashing}\n\t{timepoint:.3f}', \
f'{t0}, {t1}', \
true_vals,\
f'Pred: {t_start:.3f}, {t_last:.3f}',\
f'{mean:.3f}', \
f'\n\n{ddashing}')
#print(timepoint, ":", mean, start_ind, last_ind)
ax.plot([time_sub[start_ind],time_sub[last_ind]], [mean,mean],\
'--', color=colors[nnn], \
label=f'ID #{nnn},{timepoint:.3f}, {time_sub[start_ind]:.3f}, {time_sub[last_ind]:.3f}')
t_diff = abs(time_sub[t1]-time_sub[t0])
ax.plot([time_sub[t0],time_sub[t1]], [mean,mean], '--', \
dashes=[1,1], linewidth=5, color=colors[nnn], \
label=f'ID #{nnn}: {round(t_diff,2)}, sub')
# Add the box
rect_color = "gray" #colors[nnn]
rect = Rectangle((time_sub[t0], mean-eps), t_diff, 2*eps)
pc = PatchCollection([rect], \
facecolor=rect_color, alpha=0.2, edgecolor=rect_color,linewidth=1)
ax.add_collection(pc)
#print(f"End At {timepoint:.3f}, dt = {t_diff:.3f}, {t0}, {t1}")
pairs.append([t0,t1])
start_ind, last_ind = t0, t1
timepoint = time_sub[t1]+time_thresh
nnn += 1
plt.xlabel(r'time, $t$ [arb.]')
plt.ylabel(r'position, $x$ [arb.]')
ymin = noise_journey_sub.min()-1*eps
ymax = noise_journey_sub.max()+1*eps
plt.ylim(ymin, ymax)
ax.xaxis.set_major_locator(MultipleLocator(1))
#ax.xaxis.set_major_formatter(FormatStrFormatter('%d'))
# For the minor ticks, use no labels; default NullFormatter.
ax.xaxis.set_minor_locator(MultipleLocator(0.5))
plt.xlim(min_t, max_t)
#plt.xlim(-0.1, 19.1)
#plt.xlim(15.1, 19.1)
plt.title('Rough cut', fontsize=36)
plt.grid(visible=True);
plt.legend(bbox_to_anchor=(1.2, 0.5), loc='right', ncol=1);
timepoint, timepoint-time_thresh
start = np.where((time_sub < (timepoint)) & \
(time_sub > (timepoint-1*time_thresh)))
start[0].size
get_time_ind(time_sub, timepoint, time_thresh, 1)
time_sub[70:80]
end = np.where((time_sub < (timepoint+time_thresh)) & \
(time_sub > (timepoint)))[0].max()
print(start,end)
```
### Slope testing
```
slope = bug_get_slope(0.25, 1.2652784536739390)(time_sub[239:249], noise_journey_sub[239:249])
get_slope(time_sub[239:249], noise_journey_sub[239:249])
aaa,bbb = 181,201
meano = 0.9723058514956978
slope = bug_get_slope(0.25, meano)(time_sub[aaa:bbb], noise_journey_sub[aaa:bbb])
plt.plot(time_sub[aaa:bbb], noise_journey_sub[aaa:bbb], 'C0o-')
plt.plot(time_sub[aaa:bbb], slope*(time_sub[aaa:bbb]-time_sub[aaa:bbb].mean())+noise_journey_sub[aaa:bbb].mean(), 'C1--')
plt.plot([time_sub[aaa],time_sub[bbb]], [meano,meano], 'C2:')
```
### Test
```
# DONE
def bug_extend_box(t_arr, x_loc, working_index, fixed_index, means, count_thresh = 50):
keep_running = (working_index > 1) & (working_index < len(x_loc)-1)
indices = []
if working_index < fixed_index:
# Go backwards in time
direction = -1
else:
# Go forwards in time
direction = 1
mean = means[-1]
while keep_running:
#print(mean, direction)
# Update and store the working index
working_index += direction*1
indices.append(working_index)
# Update and store the mean
if direction == -1:
mean = get_thresh_mean(eps,mean)(x_loc[working_index:fixed_index])
else:
mean = get_thresh_mean(eps,mean)(x_loc[fixed_index:working_index])
means.append(mean)
if np.isnan(mean):
#print(mean)
break
time_diff = abs(t_arr[fixed_index]-t_arr[working_index])
# When the mean either converges or stops
if ((len(indices)>count_thresh) | ((time_diff>0.5) & (len(indices)>5))):
#print(time_diff,len(indices))
nr_events = min(len(indices), count_thresh)
if check_means(means,nr_events):
#print('drop',time_diff)
break
keep_running = (working_index > 1) & (working_index < len(x_loc)-1)
return means, indices, keep_running
# DONE
def bug_check_means(means,nr_samples,eps):
m0 = means[-nr_samples]
#TODO: or means could be less than 10% of eps
#return all([m == m0 for m in means[-count_thresh:]])
return all([abs(m - m0)<=eps/10 for m in means[-nr_samples:]])
ms, inds, flag = bbug_extend_box(time_sub, noise_journey_sub, 193, 177, [0.8226433365921534], 20)
no_arr = noise_journey_sub[177:205]
ti_arr = time_sub[177:205]
bug_get_thresh_mean(eps,0.9807449844948974)(no_arr)
get_thresh_mean(eps,0.9807449844948974)(no_arr)
indl = [ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24]
ti_arr[np.array(indl)], \
no_arr[np.array(indl)]
# DONE
def bug_get_thresh_mean(eps, mean):
upper = mean+eps
lower = mean-eps
print(lower,upper)
def meth(sub_arr):
mask = np.where((sub_arr < upper) & (sub_arr > lower))
print(mask)
return np.mean(sub_arr[mask])
return meth
ms, inds
def func(in_):
print(in_)
return in_, False
```
## Table of Contents
- [10. Text Clustering](#10-text-clustering)
- [10.1 Overview](#101-overview)
- [10.2 Document Feature Extraction](#102-document-feature-extraction)
- [10.3 The k-Means Algorithm](#103-the-k-means-algorithm)
- [10.4 Repeated Bisection Clustering](#104-repeated-bisection-clustering)
- [10.5 Standardized Evaluation](#105-standardized-evaluation)
## 10. Text Clustering
As the saying goes, birds of a feather flock together. When gathering data, people need to organize it, filing similar samples together and automatically discovering the similarity among large numbers of samples. This task of grouping by similarity is called clustering.
### 10.1 Overview
1. **Clustering**
**Clustering** (cluster analysis) refers to partitioning a given set of objects into subsets, with the goal of making the elements within each subset as similar as possible and the elements of different subsets as dissimilar as possible. These subsets are called **clusters** and are generally disjoint.
The number of clusters is usually treated as a hyperparameter specified by the user; although many algorithms exist to determine it automatically, they typically require other manually specified hyperparameters.
According to the structure of the result, clustering algorithms can be divided into **partitional** and **hierarchical** methods. Partitional clustering produces a set of disjoint subsets, while hierarchical clustering produces a tree in which the leaves are elements and the internal nodes are clusters. This chapter mainly covers partitional clustering.
2. **Text clustering**
**Text clustering** is cluster analysis applied to documents; it is widely used in text mining and information retrieval.
The basic pipeline of text clustering consists of two steps: feature extraction and vector clustering. Once a document is represented as a vector, a clustering algorithm can be applied to it. The representation step is called **feature extraction**, and once documents are represented as vectors, the remaining algorithm no longer depends on the documents themselves. This abstraction is simple and effective from both a software-engineering and a mathematical point of view.
### 10.2 Document Feature Extraction
1. **Bag-of-words model**
The **bag-of-words** model is the most common document representation in information retrieval and natural language processing. It pictures a document as a bag of words and represents the document as a vector of statistics, such as the count of each kind of word in the bag. For example, take the two sentences:
```
人 吃 鱼。
美味 好 吃!
```
Counting the word frequencies gives:
```
人=1
吃=2
鱼=1
美味=1
好=1
```
The bag-of-words vector of this document is [1, 2, 1, 1, 1]; the 5 dimensions are the frequencies of the 5 distinct words.
The vocabulary is usually built from all words in the training documents; words outside it are called OOV (out of vocabulary) and are ignored. Once the vocabulary is fixed, say with size N, any document can be converted into an N-dimensional vector by this method. The bag-of-words model ignores word order, and for that very reason it loses the meaning carried by word order: to a bag-of-words model, "人吃鱼" (man eats fish) and "鱼吃人" (fish eats man) are identical, which is clearly wrong.
<font color="red">Industry has since developed much better word-vector representations, such as word2vec and BERT.</font>
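As a minimal sketch of this counting (the two tokenized example sentences above are treated as one document, as in the text):

```python
from collections import Counter

# The two example sentences, tokenized and treated as one document
docs = [["人", "吃", "鱼"], ["美味", "好", "吃"]]

# Build the vocabulary in first-seen order
vocab = []
for doc in docs:
    for w in doc:
        if w not in vocab:
            vocab.append(w)

# Count every word, then read the counts off in vocabulary order
counts = Counter(w for doc in docs for w in doc)
vector = [counts[w] for w in vocab]
print(vector)  # [1, 2, 1, 1, 1]
```

The resulting vector matches the frequency table above, one dimension per vocabulary word.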
2. **Statistics in the bag of words**
The bag-of-words model is not limited to raw term frequency as its statistic; there are many options. Common ones include:
- Boolean frequency: clip non-zero term frequencies to 1, otherwise 0; suitable for datasets of short documents
- TF-IDF: suitable for datasets with few topics
- Word vectors: if the words themselves have vector representations, the sum of all word vectors can serve as the document vector; suitable for datasets with severe OOV problems
- Term-frequency vector: suitable for datasets with many topics
Let S be a set of n documents, and let the feature vector of its i-th document $d_i$ be $\mathbf{d}_i$, given by:
$$
\mathbf{d}_i = \left( \mathrm{TF}(t_1, d_i),\ \mathrm{TF}(t_2, d_i),\ \ldots,\ \mathrm{TF}(t_m, d_i) \right)
$$
where $t_j$ is the j-th word in the vocabulary, m is the vocabulary size, and $\mathrm{TF}(t_j, d_i)$ is the number of occurrences of $t_j$ in document $d_i$. To handle documents of different lengths, the document vector is usually scaled to a unit vector, i.e. so that $\lVert \mathbf{d} \rVert = 1$.
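A quick sketch of this normalization with NumPy (the raw term-frequency vector from the example above is assumed):

```python
import numpy as np

# Raw term-frequency vector of the example document
d = np.array([1.0, 2.0, 1.0, 1.0, 1.0])

# Scale to a unit vector so documents of different lengths are comparable
d_unit = d / np.linalg.norm(d)
print(d_unit, np.linalg.norm(d_unit))
```

After scaling, the vector's Euclidean norm is 1 regardless of the document's length.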
### 10.3 The k-Means Algorithm
A simple and practical clustering algorithm is k-means, proposed by Stuart Lloyd in 1957. Although it cannot guarantee an optimal clustering result, it works very well in practice. Many improved algorithms derive from k-means; here we first introduce the basic algorithm and then derive one of its variants.
1. **Basic principle**
Formally, the problem solved by k-means is: given n vectors $d_1, \ldots, d_n$ and an integer k, find k clusters $S_1, \ldots, S_k$ and their centroids $c_1, \ldots, c_k$ that minimize
$$
I_{\text{Euclidean}} = \sum_{r=1}^{k} \sum_{d_i \in S_r} \lVert d_i - c_r \rVert^2
$$
where $\lVert d_i - c_r \rVert$ is the Euclidean distance between a vector and its centroid, and $I_{\text{Euclidean}}$ is called the **criterion function** of the clustering. In other words, k-means clusters by minimizing the sum of squared Euclidean distances from each vector to its centroid, so this criterion is also known as the **sum-of-squared-errors** function. Each centroid is simply the mean (geometric center) of the points in its cluster:
$$
c_i = \frac{1}{|S_i|} \sum_{d \in S_i} d = \frac{s_i}{|S_i|}
$$
where $s_i$, the sum of all vectors in cluster $S_i$, is called the **composite vector**.
The k-means algorithm that produces k clusters is iterative; each iteration refines the clustering from the previous step:
- Pick k points as the initial centroids of the k clusters.
- Assign every point to the cluster of its nearest centroid.
- Recompute the centroid of each cluster.
- Repeat steps 2 and 3 until the centroids no longer change.
Although k-means cannot guarantee convergence to the global optimum, it converges effectively to a local optimum. For this algorithm, beginners should pay particular attention to two questions: how the initial centroids are chosen and how the distance between two points is measured.
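The four steps can be sketched with NumPy as follows (a minimal illustration, not HanLP's implementation; initial centroids are drawn at random):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick k points as initial centroids
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Step 2: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its cluster
        new_centroids = np.array([X[labels == r].mean(axis=0) for r in range(k)])
        # Step 4: stop when the centroids no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated blobs of points
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
labels, _ = kmeans(X, 2)
print(labels)
```

On this toy input the two points of each blob end up in the same cluster, regardless of which cluster label each blob receives.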
2. **Choosing the initial centroids**
Because k-means does not guarantee convergence to the global optimum, the choice of initial centroids strongly affects the result; a poor choice may lead to convergence at a poor local optimum.
A more effective method treats centroid selection itself as an iterative optimization of the criterion function. Concretely: first pick a random data point as the first centroid, compute the criterion function as if there were a single cluster, and maintain for every point the squared distance to its nearest centroid in an array M. Then draw a random fraction of the current criterion value as a threshold; scanning the remaining data points, whenever a point's squared distance to its nearest centroid falls below the threshold, add that point to the centroid list and update the criterion function and M. Repeat until k initial centroids have been collected. This method works because every new centroid is guaranteed to decrease the criterion function by a random ratio, whereas the naive method adds completely random centroids with no control over the criterion function at all. The comparison speaks for itself.
Since k-means is an iterative algorithm that must repeatedly compute centroids and pairwise distances, this computation is usually the performance bottleneck. To improve the efficiency of naive k-means, HanLP implements a k-means variant using a faster criterion function.
3. **A faster criterion function**
Besides the Euclidean criterion, there is a criterion function based on the cosine distance:
$$
I_{\cos} = \sum_{r=1}^{k} \sum_{d_i \in S_r} \cos(d_i, c_r)
$$
This function measures the similarity between each point and its centroid by the cosine, and the goal is to maximize the within-cluster similarity. Substituting the vector-angle formula, and using the fact that the document vectors have unit length, the criterion becomes
$$
I_{\cos} = \sum_{r=1}^{k} \sum_{d_i \in S_r} \frac{d_i \cdot c_r}{\lVert d_i \rVert \lVert c_r \rVert} = \sum_{r=1}^{k} \frac{s_r \cdot c_r}{\lVert c_r \rVert}
$$
which, with $c_r = s_r / |S_r|$, reduces to
$$
I_{\cos} = \sum_{r=1}^{k} \lVert s_r \rVert
$$
In other words, the cosine criterion equals the sum of the lengths of the k composite vectors. Comparing with the earlier criterion: when a point moves from its old cluster to a new one, $I_{\text{Euclidean}}$ requires recomputing both centroids and the distances of all points in both clusters to the new centroids, whereas for $I_{\cos}$ only the two composite vectors change, so only their lengths need to be recomputed, which is far less work.
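A quick numerical check of this identity for a single cluster (a sketch; unit-length document vectors are assumed):

```python
import numpy as np

# A cluster of random unit document vectors
rng = np.random.default_rng(0)
D = rng.random((5, 3))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # each row now has unit length

c = D.mean(axis=0)  # centroid
s = D.sum(axis=0)   # composite vector

# Sum of cosine similarities to the centroid equals the composite vector's length
i_cos = float(np.sum(D @ c) / np.linalg.norm(c))
print(i_cos, float(np.linalg.norm(s)))
```

The two printed values agree up to floating-point error, confirming that the per-cluster contribution to $I_{\cos}$ is just $\lVert s_r \rVert$.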
Based on the new criterion $I_{\cos}$, the k-means variant proceeds as follows:
- Pick k points as the initial centroids of the k clusters.
- Assign every point to the cluster of its nearest centroid.
- For each point, compute the increase in $I_{\cos}$ from moving it to another cluster, find the move with the largest increase, and perform it.
- Repeat step 3 until the maximum number of iterations is reached or the partition no longer changes.
4. **Implementation**
In HanLP the clustering algorithm is implemented as ClusterAnalyzer, which users can think of as a container mapping document ids to document vectors.
As an example, consider clustering the users of a music website. Suppose the site records the genres of the songs played by 6 users and concatenates each user's play history into a text. Given the user names and these 6 play histories, the task is to partition the 6 users into 3 clusters. The code is as follows:
```python
from pyhanlp import *
ClusterAnalyzer = JClass('com.hankcs.hanlp.mining.cluster.ClusterAnalyzer')
if __name__ == '__main__':
analyzer = ClusterAnalyzer()
analyzer.addDocument("赵一", "流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 蓝调, 蓝调, 蓝调, 蓝调, 蓝调, 蓝调, 摇滚, 摇滚, 摇滚, 摇滚")
analyzer.addDocument("钱二", "爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲")
analyzer.addDocument("张三", "古典, 古典, 古典, 古典, 民谣, 民谣, 民谣, 民谣")
analyzer.addDocument("李四", "爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 金属, 金属, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲")
analyzer.addDocument("王五", "流行, 流行, 流行, 流行, 摇滚, 摇滚, 摇滚, 嘻哈, 嘻哈, 嘻哈")
analyzer.addDocument("马六", "古典, 古典, 古典, 古典, 古典, 古典, 古典, 古典, 摇滚")
print(analyzer.kmeans(3))
```
The result is as follows:
```
[[李四, 钱二], [王五, 赵一], [张三, 马六]]
```
With the k-means algorithm we have successfully grouped the users by interest, achieving the "birds of a feather" effect.
The order of the clusters in the result is random, and the elements within each cluster are unordered; since k-means is a randomized algorithm, there is a small chance of obtaining a different result.
The clustering module accepts arbitrary text as a document and does not require words to be separated by special delimiters.
### 10.4 Repeated Bisection Clustering
1. **Basic principle**
**Repeated bisection clustering** is an efficiency-enhanced variant of k-means; "bisection" refers to repeatedly splitting a subset in two. The steps are:
- Pick a cluster to split.
- Split that cluster into 2 subsets using k-means.
- Repeat steps 1 and 2 until enough clusters have been produced.
The clusters produced along the way form a binary tree from top to bottom.
Because of this property, repeated bisection counts as a partition-based hierarchical clustering algorithm. If the intermediate results are stored, the output is a tree with hierarchical structure: every node is a cluster, and each parent cluster contains its child clusters. Although each split uses k-means, every bisection operates on only one subset, so the input is small and the algorithm is naturally faster.
In step 1, HanLP picks the cluster whose bisection increases the criterion function the most: whenever a new cluster is produced, it is tentatively bisected and the gain in the criterion function is computed; the cluster with the largest gain is then actually bisected. This is repeated until the stopping condition is satisfied.
2. **Determining the number of clusters k automatically**
Readers may feel that the number of clusters k is a hard hyperparameter to estimate. Repeated bisection offers a workaround: set a threshold β on the gain of the criterion function to determine k automatically. The stopping condition becomes: when a cluster's bisection gain is smaller than β, the cluster is considered final and is not split further. When no cluster can be split, the algorithm terminates, and the resulting number of clusters no longer needs to be specified manually.
3. **Implementation**
```python
from pyhanlp import *
ClusterAnalyzer = JClass('com.hankcs.hanlp.mining.cluster.ClusterAnalyzer')
if __name__ == '__main__':
analyzer = ClusterAnalyzer()
analyzer.addDocument("赵一", "流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 流行, 蓝调, 蓝调, 蓝调, 蓝调, 蓝调, 蓝调, 摇滚, 摇滚, 摇滚, 摇滚")
analyzer.addDocument("钱二", "爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲")
analyzer.addDocument("张三", "古典, 古典, 古典, 古典, 民谣, 民谣, 民谣, 民谣")
analyzer.addDocument("李四", "爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 爵士, 金属, 金属, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲, 舞曲")
analyzer.addDocument("王五", "流行, 流行, 流行, 流行, 摇滚, 摇滚, 摇滚, 嘻哈, 嘻哈, 嘻哈")
analyzer.addDocument("马六", "古典, 古典, 古典, 古典, 古典, 古典, 古典, 古典, 摇滚")
print(analyzer.repeatedBisection(3)) # repeated bisection clustering
print(analyzer.repeatedBisection(1.0)) # determine the number of clusters k automatically
```
The output is as follows:
```
[[李四, 钱二], [王五, 赵一], [张三, 马六]]
[[李四, 钱二], [王五, 赵一], [张三, 马六]]
```
The result matches the music example above, but runs considerably faster.
### 10.5 Standardized Evaluation
This evaluation uses a subset of the text classification corpus provided by Sogou Labs, which I call the "mini Sogou text classification corpus". It contains 5 categories with 1000 articles per category, 5000 articles in total. The code is as follows:
```python
from pyhanlp import *
import zipfile
import os
from pyhanlp.static import download, remove_file, HANLP_DATA_PATH
def test_data_path():
"""
Return the test-data path, located at $root/data/test; the root directory is specified by the configuration file.
:return:
"""
data_path = os.path.join(HANLP_DATA_PATH, 'test')
if not os.path.isdir(data_path):
os.mkdir(data_path)
return data_path
## Check that the corpus exists; download it automatically if missing
def ensure_data(data_name, data_url):
root_path = test_data_path()
dest_path = os.path.join(root_path, data_name)
if os.path.exists(dest_path):
return dest_path
if data_url.endswith('.zip'):
dest_path += '.zip'
download(data_url, dest_path)
if data_url.endswith('.zip'):
with zipfile.ZipFile(dest_path, "r") as archive:
archive.extractall(root_path)
remove_file(dest_path)
dest_path = dest_path[:-len('.zip')]
return dest_path
sogou_corpus_path = ensure_data('搜狗文本分类语料库迷你版', 'http://file.hankcs.com/corpus/sogou-text-classification-corpus-mini.zip')
## ===============================================
## Clustering starts here
ClusterAnalyzer = JClass('com.hankcs.hanlp.mining.cluster.ClusterAnalyzer')
if __name__ == '__main__':
for algorithm in "kmeans", "repeated bisection":
print("%s F1=%.2f\n" % (algorithm, ClusterAnalyzer.evaluate(sogou_corpus_path, algorithm) * 100))
```
The output is as follows:
```
kmeans F1=83.74
repeated bisection F1=85.58
```
The evaluation results are shown in the table below:
| Algorithm | F1 | Time |
| ------------------ | ----- | ---- |
| k-means | 83.74 | 67 s |
| repeated bisection | 85.58 | 24 s |
Comparing the two algorithms, repeated bisection is not only more accurate than k-means but also about 3 times as fast. Its score fluctuates considerably, however, and several runs may be needed to obtain a result like this.
Unsupervised clustering can neither learn how humans prefer to partition documents nor learn what humans would call each cluster.
<a href="https://colab.research.google.com/github/davidnoone/PHYS332_FluidExamples/blob/main/04_ColloidViscosity_SOLUTION.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Colloids and non-constant viscosity (1d case)
Colloids are materials consisting of small particles immersed in a fluid (liquid or gas). Examples include emulsions and gels, which encompass substances like milk, whipped cream, styrofoam, jelly, and some glasses.
We imagine a "pile" of substance that undergoes a viscous dissipation following a simple law.
$$
\frac{\partial h}{\partial t} = \frac{\partial}{\partial x}
\left( \nu \frac{\partial h}{\partial x} \right)
$$
where h is the depth of the colloidal material, and $\nu$ is the kinematic viscosity, at constant density.
The viscosity follows a simple law:
$$
\nu = \nu_0 (1 + 2.5 \phi)
$$
where $\phi$ is the volume fraction. In the case that $\phi = \phi(h)$, there are some non-linear consequences for the viscous flow.
## Work tasks
1. Create a numerical model of viscous flow using finite differences.
(Hint: You have basically done this in previous exercises.)
2. Compute the height at a future time under the null case where $\phi = 0$
3. Repeat the experiment for the case that $\phi$ has positive and negative values over a range of sizes. You may choose to assume $\phi = \pm h/h_{max}$, where $h_{max}$ is the maximum value of your initial "pile".
4. Compare the results of your experiments.
```
import math
import numpy as np
import matplotlib.pyplot as plt
```
The main component of this problem is developing a scheme to calculate the viscous derivative using finite differences. Notice that, unlike the previous case in which the viscosity was constant, here we must keep the viscosity inside the derivative estimates. We wish to evaluate the second derivative on a discrete grid between 0 and 2$\pi$, with steps $\Delta x$ indicated by index $i = 0, N-1$. (Note python arrays start at index 0.)
Using a finite difference method, we can obtain a scheme with second-order accuracy:
$$
\frac{\partial}{\partial x}
\left( \nu \frac{\partial h}{\partial x} \right)=
\frac{F_{i+1/2} - F_{i-1/2}}{(\Delta x)}
$$
where we have used fluxes $F$ at the "half" locations defined by
$$
F_{i-1/2} = \nu_{i-1/2} \frac{h_{i} - h_{i-1}}{\Delta x}
$$
and
$$
F_{i+1/2} = \nu_{i+1/2} \frac{h_{i+1} - h_{i}}{\Delta x}
$$
Notice that $\nu$ needs to be determined at the "half" locations, which means that $h$ needs to be estimated at those points. It is easiest to assume it is the average of the values on either side.
i.e., $h_{i-1/2} = 0.5(h_i + h_{i-1})$, and similarly for $h_{i+1/2}$.
We are working with periodic boundary conditions, so we may "wrap around" such that $f_{-1} = f_{N-1}$ and $f_{N} = f_{0}$. You may choose to do this with python array indices, or take a look at the numpy function [numpy.roll()](https://numpy.org/doc/stable/reference/generated/numpy.roll.html).
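A quick illustration of this periodic wrapping with `numpy.roll` (any 1-D array works):

```python
import numpy as np

f = np.array([0.0, 1.0, 2.0, 3.0])
f_im1 = np.roll(f, +1)  # f[i-1], with the first entry wrapping from the end
f_ip1 = np.roll(f, -1)  # f[i+1], with the last entry wrapping from the start
print(f_im1)  # [3. 0. 1. 2.]
print(f_ip1)  # [1. 2. 3. 0.]
```

Rolling by +1 shifts every element one place to the right, which is exactly the $f_{i-1}$ stencil on a periodic grid; rolling by -1 gives $f_{i+1}$.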
```
# Create a coordinate, which is periodic
npts = 50
xvals = np.linspace(-math.pi,math.pi,npts)
dx = 2*math.pi/npts
hmax = 1.0 # maximum height of pile ["metres"]
vnu0 = 0.5 # reference viscosity [m2/sec]
# Define the an initial "pile" of substance: a gaussian
width = 3*dx
h = hmax*np.exp(-(xvals/width)**2)
```
Make a plot showing your initial condition: the height $h$ as a function of $x$.
```
# Plot!
fig = plt.figure()
plt.plot(xvals,h)
```
Let's define a function to perform some number of time steps
```
def viscosity(h):
global hmax
phi = h/hmax
vnu = vnu0*(1 + 2.5*phi)
return vnu
def forward_step(h_old, nsteps, dtime):
for n in range(nsteps):
dhdt = np.zeros_like(h_old)
hmid = 0.5*(h_old + np.roll(h_old,+1)) # h at the half points i-1/2 (average of h_i and h_{i-1})
vmid = viscosity(hmid)
hflx = vmid*(h_old - np.roll(h_old,+1))/dx
dhdt = (np.roll(hflx,-1) - hflx)/dx # hflx(i+1/2) - hflx(i-1/2)
h_new = h_old + dtime*dhdt
return h_new
```
Use your integration function to march forward in time to check the analytic result. Note, the time step must be small enough for a robust solution. It must be:
$$
\Delta t \lt \frac{(\Delta x)^2} {4 \nu}
$$
```
dt_max = 0.25*dx*dx/vnu0
print("maximum allowed dtime is ",dt_max," seconds")
dtime = 0.005
nsteps = 200
# step forward more steps, and plot again
nlines = 10
for iline in range(nlines):
h = forward_step(h.copy(),nsteps,dtime)
plt.plot(xvals,h)
#Rerun with a different phi (redefine the viscosity function - but clumsy)
def viscosity(h):
global hmax
phi = -h/hmax # Try this?
vnu = vnu0*(1 + phi)
return vnu
# step forward more steps, and plot again
h = hmax*np.exp(-(xvals/width)**2)
for iline in range(nlines):
h = forward_step(h.copy(),nsteps,dtime)
plt.plot(xvals,h)
```
# Results!
How did the shapes differ with thinning vs thickening?
# Image Classification
In this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
## Get the Data
Run the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
```
## Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 4
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
```
## Implement Preprocess Functions
### Normalize
In the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
```
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
_min = np.min(x)
_max = np.max(x)
return (x - _min) / (_max - _min)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
```
### One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
```
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return np.eye(10)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
```
### Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
## Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
```
## Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
>**Note:** If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
>However, if you would like to get the most out of this course, try to solve all the problems _without_ using anything from the TF Layers packages. You **can** still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the `conv2d` class, [tf.layers.conv2d](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d), you would want to use the TF Neural Network version of `conv2d`, [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).
Let's begin!
### Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement `neural_net_image_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
* Set the shape using `image_shape` with batch size set to `None`.
* Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_label_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
* Set the shape using `n_classes` with batch size set to `None`.
* Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_keep_prob_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
These names will be used at the end of the project to load your saved model.
Note: `None` for shapes in TensorFlow allow for a dynamic size.
```
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
return tf.placeholder(tf.float32, [None, *image_shape], name = 'x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
return tf.placeholder(tf.float32, [None, n_classes], name = 'y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
return tf.placeholder(tf.float32, name = 'keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
```
### Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:
* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.
* Apply a convolution to `x_tensor` using weight and `conv_strides`.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using `pool_ksize` and `pool_strides`.
* We recommend you use same padding, but you're welcome to use any padding.
**Note:** You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for **this** layer, but you can still use TensorFlow's [Neural Network](https://www.tensorflow.org/api_docs/python/tf/nn) package. You may still use the shortcut option for all the **other** layers.
```
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
depth_in = int(x_tensor.shape[3])
depth_out = conv_num_outputs
w_shape = [*conv_ksize, depth_in, depth_out]
# weight = tf.Variable(tf.truncated_normal(w_shape))
weight = tf.Variable(tf.random_normal(w_shape, stddev=0.1))
bias = tf.Variable(tf.zeros(depth_out))
conv_strides = [1, *conv_strides, 1]
x = tf.nn.conv2d(x_tensor, weight, strides=conv_strides, padding='SAME')
x = tf.nn.bias_add(x, bias)
x = tf.nn.relu(x)
pool_ksize = [1, *pool_ksize, 1]
pool_strides = [1, *pool_strides, 1]
return tf.nn.max_pool(x, pool_ksize, pool_strides, padding='SAME')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
```
### Flatten Layer
Implement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
```
### Fully-Connected Layer
Implement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
```
### Output Layer
Implement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
**Note:** Activation, softmax, or cross entropy should **not** be applied to this.
```
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
```
### Create Convolutional Model
Implement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.
```
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
x = conv2d_maxpool(x, 32, (3, 3), (2, 2), (2, 2), (2, 2))
x = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))
# x = tf.nn.dropout(x, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 512)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 128)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(x, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
```
## Train the Neural Network
### Single Optimization
Implement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:
* `x` for image input
* `y` for labels
* `keep_prob` for keep probability for dropout
This function will be called for each batch, so `tf.global_variables_initializer()` has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
```
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
```
### Show Stats
Implement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
```
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
l = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1
})
a = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1
})
print('Loss: {:5.5f} Validation Accuracy: {:5.5f}'.format(l, a))
```
### Hyperparameters
Tune the following parameters:
* Set `epochs` to the number of iterations until the network stops learning or starts overfitting
* Set `batch_size` to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set `keep_probability` to the probability of keeping a node using dropout
```
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = .6
```
### Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
```
### Fully Train the Model
Now that you've gotten good accuracy with a single CIFAR-10 batch, try it with all five batches.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
```
# Checkpoint
The model has been saved to disk.
## Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
```
## Why 50-80% Accuracy?
You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores [well above 80%](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130). That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.
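The 10% guessing baseline above is easy to verify empirically. A small stdlib-only sketch simulating uniform random guesses over 10 balanced classes:

```python
import random

# Simulate random guessing on a balanced 10-class problem (like CIFAR-10).
random.seed(0)
n_classes = 10
n_samples = 100_000
labels = [random.randrange(n_classes) for _ in range(n_samples)]
guesses = [random.randrange(n_classes) for _ in range(n_samples)]
accuracy = sum(l == g for l, g in zip(labels, guesses)) / n_samples
print(accuracy)  # close to 0.10
```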
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and export it as an HTML file via "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
# 1. Load Data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import QuantileTransformer
from time import time
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.neural_network import MLPRegressor
years = ['2004', '2005','2006','2007','2008','2009','2010','2011','2012','2013','2014','2015']
cols = pd.read_csv('data_flag/2004_flag.csv').columns
data = pd.DataFrame(columns = cols[1:].append(pd.Index(['YEAR'])))
for year_ in years:
    data_path = 'data_flag/{year}_flag.csv'.format(year=year_)
    cols = pd.read_csv(data_path).columns
    data_this_year = pd.read_csv(data_path, usecols=cols[1:])
    # Tag each year's rows and accumulate them into the combined frame
    data_this_year['YEAR'] = year_
    data = pd.concat([data, data_this_year], ignore_index=True)
    print(data_this_year['default_flag'].value_counts())
import os

# Load the raw data; `data_path` must point at the directory of pipe-delimited origination files
data_list = []
for fname in sorted(os.listdir(data_path)):
subject_data_path = os.path.join(data_path, fname)
print(subject_data_path)
if not os.path.isfile(subject_data_path): continue
data_list.append(
pd.read_csv(
subject_data_path,
sep='|',
header=None,
names = [
'CREDIT_SCORE',
'FIRST_PAYMENT_DATE',
'FIRST_TIME_HOMEBUYER_FLAG',
'4','5','6',
'NUMBER_OF_UNITS',
'OCCUPANCY_STATUS',
'9',
'ORIGINAL_DTI_RATIO',
'ORIGINAL_UPB',
'ORIGINAL_LTV',
'ORIGINAL_INTEREST_RATE',
'CHANNEL',
'15',
'PRODUCT_TYPE',
'PROPERTY_STATE',
'PROPERTY_TYPE',
'19',
'LOAN_SQ_NUMBER',
'LOAN_PURPOSE',
'ORIGINAL_LOAN_TERM',
'NUMBER_OF_BORROWERS',
'24','25','26'#,'27'#data from every year may have different column number
#2004-2007: 27 2008: 26 2009: 27
],
usecols=[
'CREDIT_SCORE',
'FIRST_TIME_HOMEBUYER_FLAG',
'NUMBER_OF_UNITS',
'OCCUPANCY_STATUS',
'ORIGINAL_DTI_RATIO',
'ORIGINAL_UPB',
'ORIGINAL_LTV',
'ORIGINAL_INTEREST_RATE',
'CHANNEL',
'PROPERTY_TYPE',
'LOAN_SQ_NUMBER',
'LOAN_PURPOSE',
'ORIGINAL_LOAN_TERM',
'NUMBER_OF_BORROWERS'
],
dtype={'CREDIT_SCORE': np.float64,
       'FIRST_TIME_HOMEBUYER_FLAG': str,
       'NUMBER_OF_UNITS': np.int64,
       'OCCUPANCY_STATUS': str,
       'ORIGINAL_DTI_RATIO': np.float64,
       'ORIGINAL_UPB': np.float64,
       'ORIGINAL_LTV': np.float64,
       'ORIGINAL_INTEREST_RATE': np.float64,
       'CHANNEL': str,
       'PROPERTY_TYPE': str,
       'LOAN_SQ_NUMBER': str,
       'LOAN_PURPOSE': str,
       'ORIGINAL_LOAN_TERM': np.int64,
       'NUMBER_OF_BORROWERS': np.int64},
low_memory=False
)
)
data = pd.concat(data_list)
```
# 2. Statistics and Visualisation
```
print(data['PRODUCT_TYPE'].value_counts())
```
- **Credit Score**
```
CREDIT_SCORE = data['CREDIT_SCORE']
CREDIT_clean = CREDIT_SCORE[CREDIT_SCORE != 9999]  # 9999 is the "unknown" sentinel
bin_size = int(CREDIT_clean.max() - CREDIT_clean.min())
CREDIT_SCORE_UNKNOWN_RATIO = (CREDIT_SCORE.size - CREDIT_clean.size) / CREDIT_SCORE.size
plt.figure()
figure = plt.hist(CREDIT_clean, bins=bin_size, weights=np.ones(CREDIT_clean.size) / CREDIT_clean.size)
plt.show()
print(CREDIT_SCORE_UNKNOWN_RATIO)
```
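The `weights=np.ones(n)/n` trick above turns a count histogram into a fraction histogram: giving every sample weight `1/n` makes each bar the share of samples falling in that bin. A tiny illustration with made-up values:

```python
import numpy as np

# Four samples, each weighted 1/4, so the bin heights sum to 1.0.
x = np.array([1.0, 1.0, 2.0, 3.0])
counts, _ = np.histogram(x, bins=3, weights=np.ones(x.size) / x.size)
print(counts)  # [0.5, 0.25, 0.25]
```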
- **First Time Homebuyer Flag**
```
FIRST_TIME_HOMEBUYER_FLAG = data['FIRST_TIME_HOMEBUYER_FLAG']
FIRST_TIME_HOMEBUYER_FLAG_Y = FIRST_TIME_HOMEBUYER_FLAG[FIRST_TIME_HOMEBUYER_FLAG == 'Y'].size / FIRST_TIME_HOMEBUYER_FLAG.size
FIRST_TIME_HOMEBUYER_FLAG_N = FIRST_TIME_HOMEBUYER_FLAG[FIRST_TIME_HOMEBUYER_FLAG == 'N'].size / FIRST_TIME_HOMEBUYER_FLAG.size
FIRST_TIME_HOMEBUYER_FLAG_NG = FIRST_TIME_HOMEBUYER_FLAG[FIRST_TIME_HOMEBUYER_FLAG == '9'].size / FIRST_TIME_HOMEBUYER_FLAG.size
print('FIRST_TIME_HOMEBUYER_FLAG_Y = {}'.format(FIRST_TIME_HOMEBUYER_FLAG_Y))
print('FIRST_TIME_HOMEBUYER_FLAG_N = {}'.format(FIRST_TIME_HOMEBUYER_FLAG_N))
print('FIRST_TIME_HOMEBUYER_FLAG_NG = {}'.format(FIRST_TIME_HOMEBUYER_FLAG_NG))
```
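The three ratios above can also be read off in a single call with `value_counts(normalize=True)`. A small sketch on made-up flag values:

```python
import pandas as pd

# normalize=True returns the fraction of rows per category instead of counts.
flag = pd.Series(['Y', 'N', 'N', '9', 'N'])
ratios = flag.value_counts(normalize=True)
print(ratios['N'])  # 0.6
print(ratios['Y'])  # 0.2
```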
- **Number of Units**
```
NUMBER_OF_UNITS = data['NUMBER_OF_UNITS']
NUMBER_OF_UNITS_clean = NUMBER_OF_UNITS[NUMBER_OF_UNITS != 99]  # 99 is the "unknown" sentinel
plt.bar(NUMBER_OF_UNITS_clean.value_counts().index, NUMBER_OF_UNITS_clean.value_counts().array)
```
- **State**
```
plt.figure(figsize=(50, 10))
# Column 16 of the raw files is PROPERTY_STATE (add it to usecols above to plot it)
state = data['PROPERTY_STATE']
plt.bar(state.value_counts().index, state.value_counts().array)
# value_counts() gives the count of each distinct value; use .size for the total number of rows
# 3. Imputation
Use the mean-substitution approach: replace each column's sentinel code for unknown values with the mean of that column's known values.
```
# Replace each column's sentinel with the mean of that column's valid values
# (credit score uses 9999 as "unknown"; DTI ratio and LTV use 999)
CREDIT_mean = data[0][data[0] != 9999].mean()
DTI_mean = data[9][data[9] != 999].mean()
LTV_mean = data[11][data[11] != 999].mean()
data[0] = data[0].replace(9999, CREDIT_mean)
data[9] = data[9].replace(999, DTI_mean)
data[11] = data[11].replace(999, LTV_mean)
```
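An alternative, vectorised way to express the same imputation (illustrative values, not the real data): map the sentinel to `NaN` first, then `fillna` with the column mean, so the sentinel cannot distort the mean itself.

```python
import numpy as np
import pandas as pd

# Hypothetical column with one 9999 sentinel standing in for "unknown".
df = pd.DataFrame({'CREDIT_SCORE': [700.0, 650.0, 9999.0, 720.0]})
df['CREDIT_SCORE'] = df['CREDIT_SCORE'].replace(9999.0, np.nan)
# mean() skips NaN by default, so the fill value is the mean of valid scores.
df['CREDIT_SCORE'] = df['CREDIT_SCORE'].fillna(df['CREDIT_SCORE'].mean())
print(df['CREDIT_SCORE'].tolist())  # [700.0, 650.0, 690.0, 720.0]
```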
- **Prepare input data for Linear Regression**
```
data.shape
data_c = data[[0,9,10,11,21]]
#data_d = data[[2,6,7,13,15,17,20,22]]
label_2 = np.asarray(pd.get_dummies(data[2]))
label_6 = np.asarray(pd.get_dummies(data[6]))
label_7 = np.asarray(pd.get_dummies(data[7]))
label_13 = np.asarray(pd.get_dummies(data[13]))
label_15 = np.asarray(pd.get_dummies(data[15]))
label_17 = np.asarray(pd.get_dummies(data[17]))
label_20 = np.asarray(pd.get_dummies(data[20]))
label_22 = np.asarray(pd.get_dummies(data[22]))
input_array = np.c_[data_c.to_numpy(), label_2, label_6, label_7, label_13, label_15, label_17, label_20, label_22]
output_array = data[12].to_numpy()
print(input_array.shape)
# Split the data into training/testing sets
X_train = input_array[:-80000]
X_test = input_array[-80000:]
# Split the targets into training/testing sets
y_train = output_array[:-80000]
y_test = output_array[-80000:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print('Mean squared error: %.2f'
% mean_squared_error(y_test, y_pred))
# The coefficient of determination: 1 is perfect prediction
print('Coefficient of determination: %.2f'
% r2_score(y_test, y_pred))
# Plot outputs
axis = np.arange(80000)
size = np.linspace(0.5,0.5,80000)
plt.scatter(axis, y_test,s=size, color='red',marker='x')
plt.scatter(axis, y_pred, s = size, color='blue')
plt.show()
#try NN
X_train, X_test, y_train, y_test = train_test_split(input_array, output_array, test_size=0.2,
random_state=0)
print("Training MLPRegressor...")
tic = time()
est = make_pipeline(QuantileTransformer(),
MLPRegressor(hidden_layer_sizes=(50, 50),
learning_rate_init=0.01,
early_stopping=True))
est.fit(X_train, y_train)
print("done in {:.3f}s".format(time() - tic))
print("Test R2 score: {:.2f}".format(est.score(X_test, y_test)))
```
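For reading the R² scores printed above: R² = 1 - SS_res/SS_tot, so a perfect prediction scores 1.0 and always predicting the mean of the targets scores 0.0. A dependency-free sketch of the same metric:

```python
# Plain-Python coefficient of determination, matching sklearn's r2_score.
def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0]
print(r2(y, y))                      # 1.0 -- perfect prediction
print(r2(y, [2.5, 2.5, 2.5, 2.5]))   # 0.0 -- always predicting the mean
```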