```
#!pip3 install scikit-learn  # the PyPI package is named scikit-learn, not sklearn
from sklearn.datasets import make_classification
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
import numpy as np
```
## Create Dataset
We make a number of adjustments so that the dataset resembles actual transaction data as closely as possible.
- `price` is the value of the laptop
- `num_past_orders` is the number of orders this person has made in the past with grandma fixes
```
X, y = make_classification(n_samples=10000,
n_features=2,
n_redundant=0,
random_state=42,
weights=[0.9])
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y.reshape(-1,1))  # effectively a no-op: the labels are already 0/1
Xs = pd.DataFrame(X, columns = ['price', 'num_past_orders'])
ys = pd.DataFrame(y, columns=['label'])
Xs['price'] = Xs['price'].apply(lambda x: 50 + int(x*2000))
Xs['num_past_orders'] = Xs['num_past_orders'].apply(lambda x: int(x*50))
Xs.describe()
X_train_raw, X_test, y_train_raw, y_test = train_test_split(Xs, ys, test_size=0.10, shuffle=False)
X_train, X_val, y_train, y_val = train_test_split(X_train_raw, y_train_raw, test_size=0.10, shuffle=False)
y_train['label'].value_counts()
y_test['label'].value_counts()
```
## Create (and calibrate) model
Calibration ensures that the output of the model can actually be interpreted as a probability. Whether it is required depends on the model you use; if you sample a subset of the data, or weight certain samples over others, calibration becomes more important.
We will take a closer look at this in another video.
```
clf = LogisticRegression(class_weight='balanced')
calibrated_clf = CalibratedClassifierCV(base_estimator=clf, cv=3, method='isotonic')  # note: newer scikit-learn versions rename base_estimator to estimator
calibrated_clf.fit(X_train, y_train.values.ravel())
y_pred = calibrated_clf.predict_proba(X_test)[:, 1]
roc_auc_score(y_test, y_pred)
y_pred_df = pd.DataFrame(y_pred, columns=['prediction'])
pred_df = pd.concat([y_pred_df, y_test.reset_index()],axis=1)[['prediction', 'label']]
y_pred_df.describe()
```
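`brier_score_loss` is imported at the top but never used; it is a simple calibration check: the mean squared difference between the predicted probabilities and the 0/1 outcomes. A minimal pure-Python sketch of the same quantity (the probabilities below are made up for illustration; on the real split you would call `brier_score_loss(y_test, y_pred)`):

```python
# Brier score: mean squared error between predicted probabilities and
# binary outcomes. Lower is better; 0 means perfectly confident and correct.
def brier_score(y_true, y_prob):
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# toy outcomes and hypothetical predicted probabilities
y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.3, 0.8, 0.9]
print(brier_score(y_true, y_prob))  # ~0.0375
```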
## Cost Calculations
```
df = X_test.merge(y_test,left_index=True, right_index=True)
```
### Case 1: Insure nothing
We pay full price for the laptops we lose
```
df.loc[df['label'] == 1, 'price'].sum()
```
### Case 2: Insure Everything
We pay \$30 for every laptop, whether we lose it or not
```
df.shape[0] * 30
```
### Case 3: Insure Based on Model
```
predictions = df.reset_index().drop('index', axis=1).merge(pred_df[['prediction']], left_index=True, right_index=True)
predictions.sample(2)
predictions['E_x'] = predictions['price'] * predictions['prediction']
predictions['insure'] = predictions['E_x'] > 30
predictions.sample(2)
predictions['insure'].value_counts()
def cal_loss(x):
if x['insure']:
return 30
if not x['insure'] and x['label']==1:
return x['price']
return 0
predictions['loss'] = predictions.apply(cal_loss, axis=1)
predictions['loss'].sum()
```
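The decision rule above insures a laptop exactly when its expected loss, `price * P(lost)`, exceeds the \$30 premium. A tiny pure-Python restatement of that rule (the prices and probabilities here are hypothetical):

```python
PREMIUM = 30  # cost of insuring one laptop

def expected_loss(price, p_lost):
    # expected cost of shipping uninsured
    return price * p_lost

def should_insure(price, p_lost, premium=PREMIUM):
    # insure only when the expected loss exceeds the premium
    return expected_loss(price, p_lost) > premium

print(should_insure(1500, 0.05))  # E[loss] = 75 > 30 -> True
print(should_insure(400, 0.02))   # E[loss] = 8 < 30 -> False
```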
# RecipeClassification
## Identifying Which feature is best to classify Recipe Dataset
### Importing necessary libraries
###### The following code is written in Python 3.x. Libraries provide pre-written functionality to perform necessary tasks
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
import warnings
warnings.filterwarnings('ignore')
#reading the data
recipe = pd.read_csv('D:/Manjiri/Internshala_Sonu/Data/recipe_classification.csv')
#print number of rows and number of columns of data
recipe.shape
#printing top 6 rows
recipe.head(6)
#Check Missing Values
recipe.isnull().sum()
#Removing the column named 'Unnamed: 0'
recipe.drop(['Unnamed: 0'], axis = 1, inplace=True)
```
Here, I remove the unnecessary column with the drop function. The resulting dataframe will have 8 of the original 9 columns; the column named 'Unnamed: 0' is removed.
### Check Numeric and Categorical Features
A dataset consists of numerical and categorical columns.
Looking at the dataset, I can identify the categorical and continuous columns in it. But it is also possible that numerical values are represented as strings in some feature, or that categorical values are represented as some other datatype instead of strings. So I will check the datatypes of all the features.
```
#Identifying Numeric Features
numeric_data = recipe.select_dtypes(include=np.number) # select_dtypes select data with numeric features
numeric_col = numeric_data.columns # we will store the numeric features in a variable
print("Numeric Features:")
print(numeric_data.head())
print("==="*20)
#Identifying Categorical Features
categorical_data = recipe.select_dtypes(exclude=np.number) # we will exclude data with numeric features
categorical_col = categorical_data.columns # we will store the categorical features in a variable
print("Categorical Features:")
print(categorical_data.head())
print("==="*20)
#CHECK THE DATATYPES OF ALL COLUMNS:
print(recipe.dtypes)
```
### Check for Class Imbalance
Class imbalance occurs when the observations belonging to one class in the target significantly outnumber those of the other class or classes. This dataset is a multi-class problem.
Since most machine learning algorithms assume that the data is equally distributed, applying them to imbalanced data often results in a bias towards the majority classes and poor classification of the minority classes. Hence we need to identify and deal with class imbalance.
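One common way to deal with it, exposed by many scikit-learn classifiers as `class_weight='balanced'`, is to weight each class inversely to its frequency: `weight_c = n_samples / (n_classes * count_c)`. A minimal pure-Python sketch of that heuristic (the label counts below are invented for illustration):

```python
from collections import Counter

def balanced_class_weights(labels):
    # scikit-learn's 'balanced' heuristic:
    # weight_c = n_samples / (n_classes * count_c)
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

labels = ['Indian'] * 67 + ['European'] * 22 + ['American'] * 6 + ['Chinese'] * 4
weights = balanced_class_weights(labels)
# the minority class receives the largest weight
print(weights['Chinese'] > weights['Indian'])  # True
```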
```
# we are finding the percentage of each class in the feature 'Cuisine'
class_values = (recipe['Cuisine'].value_counts()/recipe['Cuisine'].value_counts().sum())*100
print(class_values)
# we are finding the percentage of each class in the feature 'Category'
class_values = (recipe['Category'].value_counts()/recipe['Category'].value_counts().sum())*100
print(class_values)
# we are finding the percentage of each class in the feature 'Yield'
class_values = (recipe['Yield'].value_counts()/recipe['Yield'].value_counts().sum())*100
print(class_values)
```
#### Observations:
The class distribution in the 'Cuisine' feature is ~67:22:6:4 for the Indian, European, American & Chinese classes. This is a clear indication of imbalance.
The class distribution in the 'Category' feature is ~29:24:22:11:10:1. It is quite imbalanced.
Even the 'Yield' feature is imbalanced.
Next, I will identify which feature is best suited for classifying the recipe dataset.
### Univariate Analysis of Categorical Variable
Univariate analysis means the analysis of a single variable. It mainly describes the characteristics of that variable.
'Recipe_Name' and 'Nutrition' have too many classes. These features are not suitable for classification.
```
recipe['Cuisine'].value_counts().plot.bar()
plt.title('Recipe_Cuisine')
plt.figure()
recipe['Category'].value_counts().plot.bar()
plt.title('Recipe_Category')
plt.figure()
#Creating a bar plot for the categorical variable 'Yield'
recipe['Yield'].value_counts().plot.bar()
plt.title('Recipe_Yield')
```
#### Observations:
From the above visuals, we can make the following observations.
Most recipes belong to Indian cuisine; Chinese recipes are the least common in the dataset.
There are more lunch, snack and dessert recipes than dinner, breakfast and salad recipes.
The percentage of recipes yielding 4 servings is high compared to the other classes; the majority of recipes serve four people.
### Univariate Analysis of Continuous Variable
By performing univariate analysis of a continuous variable, we can get a sense of the distribution of values in each column and of the outliers in the data.
```
#Plotting 'histogram' for the 'Preptime' Variable
plt.figure(figsize=(20,5))
plt.subplot(121)
sns.distplot(recipe['Preptime'])  # note: distplot is deprecated in newer seaborn; histplot/displot is the modern replacement
plt.title('Recipe_Preptime')
#Plotting 'histogram' for the 'Tottime' Variable
plt.figure(figsize=(20,5))
plt.subplot(121)
sns.distplot(recipe['Tottime'])
plt.title('Recipe_Tottime')
```
### Categorical - Continuous Bivariate Analysis
```
recipe.groupby('Cuisine')['Preptime'].mean().plot.bar()
plt.title('Cuisine Vs Preptime')
recipe.groupby('Category')['Preptime'].mean().plot.bar()
plt.title('Category Vs Preptime')
```
#### Observations:
American recipes require the most preparation time, while Chinese recipes require the least.
Salad recipes require very little time to prepare, whereas desserts require the most.
```
plt.figure(figsize=(20,4))
plt.subplot(121)
sns.countplot(x=recipe['Category'],hue=recipe['Cuisine'],data=recipe)
plt.title('Cuisine Vs Category')
plt.xticks(rotation=90)
```
#### Observation:
From the above visual, we can see that many lunch recipes belong to Indian cuisine, and most of the salad recipes are European cuisine.
#### Univariate Outlier Detection
Boxplots are the best choice for visualizing outliers.
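The whiskers of a boxplot conventionally end at Q1 − 1.5·IQR and Q3 + 1.5·IQR (Tukey's rule); points beyond them are drawn as outliers. A pure-Python sketch of that rule, using linear interpolation between order statistics for the quartiles (the prep times below are invented):

```python
def iqr_fences(values, k=1.5):
    # Tukey's rule: outliers lie outside [Q1 - k*IQR, Q3 + k*IQR]
    s = sorted(values)

    def quantile(q):
        # linear interpolation between the two nearest order statistics
        idx = q * (len(s) - 1)
        lo, frac = int(idx), idx - int(idx)
        return s[lo] if frac == 0 else s[lo] * (1 - frac) + s[lo + 1] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

times = [10, 15, 20, 25, 30, 35, 240]  # one extreme prep time
lo, hi = iqr_fences(times)
print([t for t in times if t < lo or t > hi])  # [240]
```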
```
#Creating 'Preptime' box plot
recipe['Preptime'].plot.box()
```
#### Bivariate Outlier Detection
```
recipe.plot.scatter('Preptime','Tottime')
```
### Removing outliers from the Dataset
```
recipe = recipe[recipe['Preptime']<150]
recipe.shape
```
### Replacing Outliers in 'Preptime' with the mean 'Preptime'
```
recipe.loc[recipe['Preptime']>100,'Preptime']=np.mean(recipe['Preptime'])  # note: the mean is computed before the assignment, so it still includes the values above 100
recipe.plot.scatter('Preptime','Tottime')
```
Here, I have completed the exploratory data analysis. From the observations, I found that some features have high cardinality.
This is a multiclass classification problem. Only the 'Cuisine' and 'Category' features are suitable for classification.
# Model zoo
```
import torch
import numpy as np
import tensorflow as tf
```
## Generate toy data
```
def generate_data(n=16, samples_per_class=1000):
"""
Generate some classification data
Args:
n (int): square root of the number of features.
samples_per_class (int): number of samples per class.
Returns:
a tuple containing data and labels.
"""
# data for a class
a_class_samples = np.random.rand(samples_per_class, n, n).astype(np.float32)
a_class_labels = np.zeros(samples_per_class, dtype=int)
# data for another class
another_class_samples = np.array([
np.eye(n)*np.random.rand(1).item()
for _ in range(samples_per_class)
]).astype(np.float32)
another_class_labels = np.ones(samples_per_class, dtype=int)
# aggregate data
data = np.vstack([a_class_samples, another_class_samples])
labels = np.hstack([a_class_labels, another_class_labels])
# prepare a shuffled index
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
return data[indices], labels[indices]
# get data
n = 16
features = n*n
number_of_classes = 2
X_train, y_train = generate_data(n=n)
X_test, y_test = generate_data(n=n)
```
## MLP
```
# parameters
units = [32, 8]
```
### PyTorch
```
class MLP(torch.nn.Module):
"""A MultiLayer Perceptron class."""
def __init__(
self, features,
units=[8], number_of_classes=2,
activation_module=torch.nn.ReLU
):
"""
        Initialize the MLP.
Args:
features (int): number of features.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(MLP, self).__init__()
self.units = [features] + units
self.activation_module = activation_module
self.hidden_layers = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in zip(
self.units, self.units[1:]
)
])
        self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.units[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = self.hidden_layers(sample)
return self.last_layer(encoded_sample)
X = torch.from_numpy(X_train.reshape(-1, features))
model = MLP(features=features, units=units, number_of_classes=number_of_classes)
model(X)
```
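The model's head ends in `torch.nn.Softmax(dim=1)`, which turns each row of scores into class probabilities by exponentiating and normalizing. A pure-Python sketch of that computation, with the usual max subtraction for numerical stability:

```python
import math

def softmax(scores):
    # subtract the max before exponentiating for numerical stability
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0])
print(probs)       # roughly [0.731, 0.269]
print(sum(probs))  # sums to 1 (up to float rounding)
```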
### TensorFlow/Keras
```
def mlp(
features,
units=[8], number_of_classes=2,
activation='relu'
):
"""
Build a MLP.
Args:
features (int): number of features.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units[0], activation=activation, input_shape=(features,)))
for unit in units[1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = X_train.reshape(-1, features)
model = mlp(features=features, units=units, number_of_classes=number_of_classes)
model.predict(X)
```
## AE
```
# parameters
units = [32, 8]
```
### PyTorch
```
class AE(torch.nn.Module):
"""An AutoEncoder class."""
def __init__(
self, features,
units=[8], activation_module=torch.nn.ReLU
):
"""
        Initialize the AE.
Args:
features (int): number of features.
units (list): list of hidden layer units.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(AE, self).__init__()
self.units = [features] + units
self.activation_module = activation_module
zipped_units = list(zip(
self.units, self.units[1:]
))
# encoding
self.encoder = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in zipped_units
])
# decoding
last_decoder_units, *hidden_decoder_units = zipped_units
self.decoder = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Linear(input_size, output_size),
self.activation_module()
)
for input_size, output_size in map(
lambda t: t[::-1],
hidden_decoder_units[::-1]
)
])
self.last_layer = torch.nn.Linear(*last_decoder_units[::-1])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing the reconstructed example.
"""
encoded_sample = self.encoder(sample)
decoded_sample = self.decoder(encoded_sample)
return self.last_layer(decoded_sample)
X = torch.from_numpy(X_train.reshape(-1, features))
model = AE(features=features, units=units)
model(X)
# get encoded representation
model.encoder(X)
```
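Once trained, an autoencoder is commonly used by scoring each sample with its reconstruction error: inputs the model fails to reconstruct well are unusual relative to the training data. A minimal pure-Python sketch of that scoring step (the reconstructions below are invented for illustration):

```python
def reconstruction_error(original, reconstructed):
    # mean squared error between a sample and its reconstruction
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

sample = [1.0, 0.0, 1.0, 0.0]
good_recon = [0.9, 0.1, 0.9, 0.1]  # close reconstruction -> low error
bad_recon = [0.5, 0.5, 0.5, 0.5]   # poor reconstruction -> high error
print(reconstruction_error(sample, good_recon) < reconstruction_error(sample, bad_recon))  # True
```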
### TensorFlow/Keras
```
def ae(features, units=[8], activation='relu'):
"""
Build an AE.
Args:
features (int): number of features.
units (list): list of hidden layer units.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
# encoding
model.add(tf.keras.layers.Dense(
units[0], activation=activation, input_shape=(features,)
))
for unit in units[1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
# decoding
for unit in units[::-1][1:]:
model.add(tf.keras.layers.Dense(unit, activation=activation))
model.add(tf.keras.layers.Dense(features))
return model
X = X_train.reshape(-1, features)
model = ae(features=features, units=units)
model.predict(X)
# get encoded representation
encoder = tf.keras.Model(
inputs=model.input,
outputs=model.layers[len(units) - 1].output
)
encoder.predict(X)
```
## CNN
```
# parameters
filters = [64, 32]
kernel_size = (3, 3)
channels = 1
```
### PyTorch
```
class CNN(torch.nn.Module):
"""A Convolutional Neural Network class."""
def __init__(
self, channels,
filters=[8], kernel_size=(3,3),
number_of_classes=2,
activation_module=torch.nn.ReLU
):
"""
        Initialize the CNN.
Args:
channels (int): number of input channels.
filters (list): list of filters.
kernel_size (tuple): size of the kernel.
number_of_classes (int): number of classes to predict.
activation_module (torch.nn.Module): module representing
the activation function to apply in the hidden layers.
"""
super(CNN, self).__init__()
self.filters = [channels] + filters
self.kernel_size = kernel_size
self.activation_module = activation_module
self.stacked_convolutions = torch.nn.Sequential(*[
torch.nn.Sequential(
torch.nn.Conv2d(input_size, output_size, kernel_size),
self.activation_module(),
torch.nn.MaxPool2d((2,2), stride=2)
)
for input_size, output_size in zip(
self.filters, self.filters[1:]
)
])
self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.filters[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = self.stacked_convolutions(sample)
return self.last_layer(encoded_sample.mean((2,3)))
X = torch.from_numpy(np.expand_dims(X_train, 1))
model = CNN(
channels=channels, filters=filters,
kernel_size=kernel_size,
number_of_classes=number_of_classes
)
model(X)
```
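The `.mean((2,3))` in the forward pass is global average pooling: every H×W feature map collapses to a single mean value, so the linear head receives one number per filter regardless of spatial size. A pure-Python sketch on nested lists:

```python
def global_average_pool(feature_maps):
    # feature_maps: one 2-D map per filter; each collapses to its mean
    return [
        sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        for fmap in feature_maps
    ]

fmaps = [
    [[1.0, 3.0], [5.0, 7.0]],  # filter 0 -> mean 4.0
    [[0.0, 0.0], [0.0, 8.0]],  # filter 1 -> mean 2.0
]
print(global_average_pool(fmaps))  # [4.0, 2.0]
```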
### TensorFlow/Keras
```
def cnn(
channels, input_shape,
filters=[8], kernel_size=(3,3),
number_of_classes=2, activation='relu'):
"""
Build a CNN.
Args:
channels (int): number of input channels.
input_shape (tuple): input shape.
filters (list): list of filters.
kernel_size (tuple): size of the kernel.
number_of_classes (int): number of classes to predict.
activation (str): string identifying the activation used.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(
filters[0], kernel_size, activation=activation,
input_shape=input_shape
))
for a_filter in filters[1:]:
model.add(tf.keras.layers.Conv2D(
a_filter, kernel_size, activation=activation
))
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = np.expand_dims(X_train, 3)
model = cnn(
channels=channels, input_shape=X.shape[1:],
filters=filters, kernel_size=kernel_size,
number_of_classes=number_of_classes
)
model.predict(X)
```
## RNN
```
# parameters
units = [32, 8]
```
### PyTorch
```
class RNN(torch.nn.Module):
"""A Recurrent Neural Network class."""
def __init__(
self, input_size, units=[8],
number_of_classes=2, rnn_cell=torch.nn.GRU
):
"""
        Initialize the RNN.
Args:
input_size (int): size of the input.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
rnn_cell (torch.nn.RNNBase): a RNN cell.
"""
super(RNN, self).__init__()
self.units = [input_size] + units
        self.rnn_layers = torch.nn.ModuleList([
            rnn_cell(input_size, output_size)
            for input_size, output_size in zip(
                self.units, self.units[1:]
            )
        ])  # a ModuleList (rather than a plain list) so the layer parameters are registered with the module
self.last_layer = torch.nn.Sequential(*[
torch.nn.Linear(self.units[-1], number_of_classes),
torch.nn.Softmax(dim=1)
])
def forward(self, sample):
"""
Apply the forward pass of the model.
Args:
sample (torch.Tensor): a torch.Tensor representing a sample.
Returns:
a torch.Tensor containing softmaxed predictions.
"""
encoded_sample = sample
for rnn_layer in self.rnn_layers[:-1]:
encoded_sample, _ = rnn_layer(encoded_sample)
encoded_sample = self.rnn_layers[-1](encoded_sample)[1].squeeze(0)
return self.last_layer(encoded_sample)
X = torch.from_numpy(X_train.transpose((1,0,2)))
model = RNN(
input_size=n, units=units,
number_of_classes=number_of_classes
)
model(X)
```
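The transpose before the forward pass (`X_train.transpose((1,0,2))`) is there because PyTorch RNNs default to `batch_first=False`: they expect input shaped `(seq_len, batch, features)` rather than `(batch, seq_len, features)`. A pure-Python sketch of that axis swap on nested lists:

```python
def batch_to_seq_first(batch):
    # (batch, seq_len, features) -> (seq_len, batch, features)
    seq_len = len(batch[0])
    return [[sample[t] for sample in batch] for t in range(seq_len)]

# two samples, each a sequence of three 2-feature steps
batch = [
    [[1, 1], [2, 2], [3, 3]],
    [[4, 4], [5, 5], [6, 6]],
]
seq_first = batch_to_seq_first(batch)
print(len(seq_first), len(seq_first[0]))  # 3 2
print(seq_first[0])  # [[1, 1], [4, 4]]
```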
### TensorFlow/Keras
```
def rnn(
sequence_length, input_size,
units=[8], number_of_classes=2,
rnn_cell=tf.keras.layers.GRU
):
"""
Build a RNN.
Args:
sequence_length (int): length of the sequence.
input_size (int): size of the input.
units (list): list of hidden layer units.
number_of_classes (int): number of classes to predict.
rnn_cell (tf.keras.layers.RNN): a RNN cell.
Returns:
a tf.keras.Model.
"""
model = tf.keras.Sequential()
is_stacked = len(units) > 1
    model.add(rnn_cell(units=units[0], input_shape=(sequence_length, input_size), return_sequences=is_stacked))
for unit in units[1:-1]:
model.add(rnn_cell(units=unit, return_sequences=True))
if is_stacked:
model.add(rnn_cell(units=units[-1]))
model.add(tf.keras.layers.Dense(number_of_classes, activation='softmax'))
return model
X = X_train
model = rnn(
sequence_length=n, input_size=n, units=units,
number_of_classes=number_of_classes
)
model.predict(X)
```
---
```
!pip install ekphrasis
!pip install transformers
import pandas as pd
import os
import numpy as np
import torch
import random
import functools
import operator
import cv2
import seaborn as sns
import matplotlib.pyplot as plt
from torch import nn, optim
from torch.utils.data import TensorDataset, DataLoader, Dataset, SequentialSampler
from transformers import get_linear_schedule_with_warmup, RobertaModel, RobertaConfig, RobertaTokenizer, AutoTokenizer, AutoModel, AutoConfig
from sklearn.metrics import matthews_corrcoef, confusion_matrix, accuracy_score, f1_score, precision_score, recall_score
from tqdm import tqdm, trange
from ekphrasis.classes.preprocessor import TextPreProcessor  # required by clean_text below
from ekphrasis.classes.tokenizer import SocialTokenizer
from ekphrasis.dicts.emoticons import emoticons
from keras.models import load_model, Model
```
### Helper Functions
```
def clean_text(data, normalize_list, annotate_list):
"""
This function preprocesses the text using the Ekphrasis library
data: Pandas series object containing strings of text
normalize_list: list of data features to clean
annotate_list: list of data features to annotate
"""
text_processor = TextPreProcessor(
normalize= normalize_list,
annotate= annotate_list,
fix_html=True,
segmenter="twitter",
unpack_hashtags=True,
unpack_contractions=True,
spell_correct_elong=True,
tokenizer=SocialTokenizer(lowercase=True).tokenize,
dicts=[emoticons]
)
clean_data = data.map(lambda x: " ".join(text_processor.pre_process_doc(x)))
return clean_data
def early_stopping(val_loss_values, early_stop_vals):
"""
Determines whether or not the model will keep running based on the patience and delta given relative to the val loss
"""
if len(val_loss_values) > early_stop_vals["patience"]:
if val_loss_values[-1] <= np.mean(np.array(val_loss_values[-1-early_stop_vals["patience"]:-1])) - early_stop_vals["delta"]:
return False
else:
return True
else:
return False
def metrics(labels, preds, argmax_needed: bool = False):
"""
    Returns the Matthews correlation coefficient, accuracy, confusion matrix, precision, recall, and f1 score
labels: list of correct labels
pred: list of model predictions
argmax_needed (boolean): converts logits to predictions. Defaulted to false.
"""
if argmax_needed == True:
preds = np.argmax(preds, axis=1).flatten()
mcc = matthews_corrcoef(labels, preds)
acc = accuracy_score(labels, preds)
cm = confusion_matrix(labels, preds)
f1 = f1_score(labels, preds, average= "weighted")
precision = precision_score(labels, preds, average= "weighted")
recall = recall_score(labels, preds, average= "weighted")
results = {
"mcc": mcc,
"acc": acc,
"confusion_matrix": cm,
"precision": precision,
"recall": recall,
"f1": f1,
}
return results, labels, preds
def combine_text(df):
"""
Combines tweet and image text into one column
df: Dataframe which holds the data
"""
combined_text = []
for row_num in range(len(df)):
tweet_text = df.loc[row_num, "tweet_text"]
image_text = df.loc[row_num, "img_text"]
if type(image_text) == str:
combined_text.append(tweet_text + image_text)
else:
combined_text.append(tweet_text)
return combined_text
def training_plot(train_loss_values, val_loss_values):
"""
Plots loss after each epoch
training_loss_values: list of floats; output from fine_tune function
val_loss_values: list of floats; output from fine_tune function
"""
sns.set(style='darkgrid')
plt.rcParams["figure.figsize"] = (12,6)
plt.plot(train_loss_values, 'b-o', label="train")
plt.plot(val_loss_values, 'g-o', label="valid")
#plt.title("Training and Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
#plt.savefig("dogwhistle_train_plot.png",bbox_inches='tight')
return plt.show()
def model_saver(model, model_type, model_implementation, output_directory, training_dict, labels, preds, ids, results, tokenizer= None):
"""
Saves Model and other outputs
model: Model to be saved
model_type (string): Name of model
model_implementation: Keras or Pytorch
output_directory: Directory to folder to save file in
training_dict: Dictionary of training and validation values
labels: List of labels for test set
preds: List of model predictions after passed through argmax()
results: Dictionary of metrics
tokenizer: Tokenizer to be saved. Defaulted to None.
"""
output_directory = os.path.join(output_directory, model_type)
if not os.path.exists(output_directory):
os.makedirs(output_directory)
os.chdir(output_directory)
np.save(model_type+"_dogwhistle_train_results.npy", training_dict) #save training dict
np.save(model_type+"_dogwhistle_test_results.npy", results) #save test metrics
test_predictions = pd.DataFrame([ids, labels, preds]) #save predictions and labels
test_predictions = test_predictions.T
test_predictions = test_predictions.rename(columns={0: 'Ids', 1: 'Labels', 2: 'Predictions'})
test_predictions.to_csv(model_type+"_dogwhistle_predictions.csv")
#save models
if model_implementation == "Pytorch":
torch.save(model.state_dict(), model_type+"_model")
if model_implementation == "Keras":
model.save("image_model.h5") #save model
return print("Saving complete.")
```
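The `early_stopping` helper above signals a stop once the newest validation loss is no longer at least `delta` below the mean of the previous `patience` losses. The same logic, restated as a standalone pure-Python sketch:

```python
def should_stop(val_losses, patience, delta):
    # stop once the newest loss fails to beat the mean of the
    # previous `patience` losses by at least `delta`
    if len(val_losses) <= patience:
        return False
    window = val_losses[-1 - patience:-1]
    return val_losses[-1] > sum(window) / len(window) - delta

print(should_stop([1.0, 0.5, 0.52, 0.51, 0.515], patience=3, delta=0.01))  # True: progress has stalled
print(should_stop([1.0, 0.8, 0.6, 0.4, 0.2], patience=3, delta=0.01))     # False: still improving
```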
### Text Feature Extraction
```
class Transformer_features(nn.Module):
def __init__(self, method_type):
"""
        method_type: extracts features from BERT using either the method of Devlin et al. or Sabat et al.
"""
super(Transformer_features, self).__init__()
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if method_type == "Devlin":
self.config = AutoConfig.from_pretrained('/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Text/RoBERTa', output_hidden_states = True)
self.model = AutoModel.from_config(self.config).to(self.device)
if method_type == "Sabat":
self.model = RobertaModel.from_pretrained('/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Text/RoBERTa').to(self.device)
def forward(self, dataloader, method_type):
"""
        This function receives tokenized tensors and the sentence-pair IDs and returns a sentence embedding for each input sequence
dataloader: dataloader object containing combined text and IDs
        method_type: extracts features from BERT using either the method of Devlin et al. or Sabat et al.
"""
self.model.eval()
if method_type == "Devlin": # averages word embeddings to get sentence embeddings, then concatenates last four layers
combined_layers = torch.zeros(1, 4096).to(self.device)
id_list = []
for batch in dataloader:
with torch.no_grad():
_, _, encoded_layers = self.model(batch[0].to(self.device), attention_mask=batch[1].to(self.device)) #shape [25 x len(tokens) x 100 x 1024]
concat_layers = torch.cat((torch.mean(encoded_layers[-4], dim=1), torch.mean(encoded_layers[-3], dim=1), torch.mean(encoded_layers[-2], dim=1), torch.mean(encoded_layers[-1], dim=1)), dim=1)
combined_layers = torch.cat((combined_layers, concat_layers), dim=0)
id_list.append(batch[2])
if method_type == "Sabat": # averages word embeddings from last layer
combined_layers = torch.zeros(1, 1024).to(self.device)
id_list = []
for batch in dataloader:
with torch.no_grad():
output, _ = self.model(batch[0].to(self.device)) #shape [batch_size x pad_length x 1024]
text_features = torch.mean(output, dim=1)
combined_layers = torch.cat((combined_layers, text_features), dim=0)
id_list.append(batch[2])
        combined_layers = combined_layers[1:, :].to(torch.int64)  # input_len x 4096; cast to int64 to match the id dtype for the concat below (note: this truncates the float features)
id_list = torch.as_tensor(functools.reduce(operator.iconcat, id_list, [])).to(torch.int64) #input length
out_matrix = torch.cat((id_list.unsqueeze(dim= 1).to(self.device), combined_layers.to(self.device)), dim=1)
return out_matrix
#Text Hyperparameters
NORMALIZE_LIST = ['url', 'email', 'percent', 'money', 'phone', 'user', 'time', 'date', 'number']
ANNOTATE_LIST = ['hashtag', 'allcaps', 'elongated', 'repeated', 'emphasis', 'censored']
TOKENIZER = RobertaTokenizer.from_pretrained('/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Text/RoBERTa')
PAD_LENGTH = 100
class DogWhistleDatasetText(Dataset):
def __init__(self, df, tokenizer, pad_length: int=100):
self.data = df
self.tokenizer = tokenizer
self.pad_length = pad_length
def __len__(self):
return (self.data.shape[0])
def __getitem__(self, i):
text = self.data.loc[i, "combined_text"]
encoded_dict = self.tokenizer.encode_plus(text, add_special_tokens = True, max_length = self.pad_length, pad_to_max_length = True, return_attention_mask = True, return_tensors = 'pt')
image_number = self.data.loc[i, "image_number"]
return (torch.sum(encoded_dict['input_ids'], dim=0), torch.sum(encoded_dict['attention_mask'], dim=0), image_number) #reshape encoded_dict from 1x100 to 100
# Prepare data
#Load data
train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/dog_whistle_train.csv", encoding='utf-8')
dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/dog_whistle_dev.csv", encoding='utf-8')
test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/dog_whistle_test.csv", encoding='utf-8')
#Clean data
train["combined_text"] = combine_text(train)
train["combined_text"] = clean_text(train["combined_text"], NORMALIZE_LIST, ANNOTATE_LIST)
dev["combined_text"] = combine_text(dev)
dev["combined_text"] = clean_text(dev["combined_text"], NORMALIZE_LIST, ANNOTATE_LIST)
test["combined_text"] = combine_text(test)
test["combined_text"] = clean_text(test["combined_text"], NORMALIZE_LIST, ANNOTATE_LIST)
#Subset necessary data
train = train[["image_number", "combined_text"]]
dev = dev[["image_number", "combined_text"]]
test = test[["image_number", "combined_text"]]
#Create Dataset
train_dataset = DogWhistleDatasetText(train, TOKENIZER)
dev_dataset = DogWhistleDatasetText(dev, TOKENIZER)
test_dataset = DogWhistleDatasetText(test, TOKENIZER)
#Create dataloader
train_dataloader = DataLoader(train_dataset, batch_size=32)
dev_dataloader = DataLoader(dev_dataset, batch_size=32)
test_dataloader = DataLoader(test_dataset, batch_size=32)
TextExtractor = Transformer_features("Devlin")
train_text_features = TextExtractor(train_dataloader, "Devlin")
print("Done")
dev_text_features = TextExtractor(dev_dataloader, "Devlin")
print("Done")
test_text_features = TextExtractor(test_dataloader, "Devlin")
print("Done")
TextExtractor = Transformer_features("Sabat")
train_text_features = TextExtractor(train_dataloader, "Sabat")
print("Done")
dev_text_features = TextExtractor(dev_dataloader, "Sabat")
print("Done")
test_text_features = TextExtractor(test_dataloader, "Sabat")
print("Done")
#Save Devlin
train_text_features = train_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/text_features.npy", train_text_features)
dev_text_features = dev_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/text_features.npy", dev_text_features)
test_text_features = test_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/text_features.npy", test_text_features)
#Save Sabat
train_text_features= train_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/text_features_sabat.npy", train_text_features)
dev_text_features = dev_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/text_features_sabat.npy", dev_text_features)
test_text_features = test_text_features.cpu().numpy()
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/text_features_sabat.npy", test_text_features)
```
### Image Feature Extraction
```
def Image_features(trained_model, dataloader):
""" Extracts image features from images
trained_model: pre-trained image model
dataloader: dataloader object containing image paths and IDs
"""
    feature_extractor = Model(trained_model.input, trained_model.layers[-2].output)  # build the sub-model once, outside the loop
    combined_output = np.zeros((1, 1024))
    id_list = []
    for num, batch in enumerate(dataloader):
        if num % 25 == 0:
            print("Processing batch {} of {}".format(num, len(dataloader)))
        batch_output = feature_extractor.predict(batch[0])  # 32 x 1024
        combined_output = np.concatenate((combined_output, batch_output), axis=0)
        id_list.append(batch[1])
combined_output = combined_output[1:, :]
id_list = np.array(functools.reduce(operator.iconcat, id_list, []))
out_matrix = np.concatenate((np.expand_dims(id_list, axis=1), combined_output), axis=1)
return out_matrix
class DogWhistleDatasetImage(Dataset):
def __init__(self, df, base_path, image_size: int=299):
self.data = df
self.base_path = base_path
self.image_size = image_size
def __len__(self):
return (self.data.shape[0])
def __getitem__(self, i):
image_path = str(self.data.loc[i, "image_number"])
path = self.base_path + "/" + image_path + ".jpg"
image = cv2.imread(path)
image = cv2.resize(image, (self.image_size, self.image_size))
sample = (image, self.data.loc[i, "image_number"])
return sample
#Load data
train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/dog_whistle_train.csv", encoding='utf-8')
dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/dog_whistle_dev.csv", encoding='utf-8')
test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/dog_whistle_test.csv", encoding='utf-8')
#Subset necessary data
train = train[["image_number"]]
dev = dev[["image_number"]]
test = test[["image_number"]]
#Create Dataset
train_dataset = DogWhistleDatasetImage(train, "/content/drive/My Drive/Dog_Whistle_Code/Data/Images")
dev_dataset = DogWhistleDatasetImage(dev, "/content/drive/My Drive/Dog_Whistle_Code/Data/Images")
test_dataset = DogWhistleDatasetImage(test, "/content/drive/My Drive/Dog_Whistle_Code/Data/Images")
#Create dataloader
train_dataloader = DataLoader(train_dataset, batch_size=32)
dev_dataloader = DataLoader(dev_dataset, batch_size=32)
test_dataloader = DataLoader(test_dataset, batch_size=32)
ImageExtractor = load_model('/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Image/Xception/image_model.h5') #using pre-trained Xception
train_image_features = Image_features(ImageExtractor, train_dataloader)
dev_image_features = Image_features(ImageExtractor, dev_dataloader)
test_image_features = Image_features(ImageExtractor, test_dataloader)
# Save
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/image_features.npy", train_image_features)
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/image_features.npy", dev_image_features)
np.save("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/image_features.npy", test_image_features)
```
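`Image_features` returns a matrix whose first column holds the image ids and whose remaining columns hold the pooled features. A minimal sketch of that concatenation pattern with toy data (the shapes here are illustrative assumptions, not the real feature sizes):

```python
import numpy as np

# toy stand-ins: 3 "images" with 4-dim features and integer ids
features = np.arange(12, dtype=float).reshape(3, 4)
ids = np.array([101, 102, 103])

# same pattern as Image_features: ids become column 0
out_matrix = np.concatenate((np.expand_dims(ids, axis=1), features), axis=1)

print(out_matrix.shape)  # (3, 5)
print(out_matrix[:, 0])  # ids survive as the first column
```

Keeping the id as column 0 is what lets the text and image feature files be re-joined later.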
### Combine Feature Data
```
# Load Text data
# train_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/text_features.npy", allow_pickle=True)
# dev_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/text_features.npy", allow_pickle=True)
# test_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/text_features.npy", allow_pickle=True)
train_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/text_features_sabat.npy", allow_pickle=True)
dev_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/text_features_sabat.npy", allow_pickle=True)
test_text = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/text_features_sabat.npy", allow_pickle=True)
# Load Image data
train_image = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/image_features.npy", allow_pickle=True)
dev_image = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/image_features.npy", allow_pickle=True)
test_image = np.load("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/image_features.npy", allow_pickle=True)
# Load Other data
train2 = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/dog_whistle_train.csv", encoding='utf-8')
dev2 = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/dog_whistle_dev.csv", encoding='utf-8')
test2 = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/dog_whistle_test.csv", encoding='utf-8')
# Merge
train = pd.concat((pd.DataFrame(train_text[:, 1:]), pd.DataFrame(train_image[:, 1:])), axis = 1)
train["ids"] = train_text[:, :1]
train["labels"] = train2["Primary_numeric_gt"]
dev = pd.concat((pd.DataFrame(dev_text[:, 1:]), pd.DataFrame(dev_image[:, 1:])), axis = 1)
dev["ids"] = dev_text[:, :1]
dev["labels"] = dev2["Primary_numeric_gt"]
test = pd.concat((pd.DataFrame(test_text[:, 1:]), pd.DataFrame(test_image[:, 1:])), axis = 1)
test["ids"] = test_text[:, :1]
test["labels"] = test2["Primary_numeric_gt"]
# Save
# train.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features.csv")
# dev.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features.csv")
# test.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features.csv")
train.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features_sabat.csv")
dev.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features_sabat.csv")
test.to_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features_sabat.csv")
```
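The merge above concatenates text and image features purely by row position, so it silently assumes both arrays are in the same order. A hedged sanity check before concatenating (the id-in-column-0 layout is an assumption based on the save code above):

```python
import numpy as np
import pandas as pd

# toy feature matrices with ids in column 0, as saved above
text_feats = np.array([[1, 0.1, 0.2], [2, 0.3, 0.4]])
image_feats = np.array([[1, 9.0], [2, 8.0]])

# verify row order matches before positional concatenation
assert np.array_equal(text_feats[:, 0], image_feats[:, 0]), "id order mismatch"

combined = pd.concat(
    (pd.DataFrame(text_feats[:, 1:]), pd.DataFrame(image_feats[:, 1:])),
    axis=1,
)
print(combined.shape)  # (2, 3)
```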
### PyTorch Implementation
```
class MultimodalClassifier(nn.Module):
def __init__(self, MLP_type, hidden_size: int=50, dropout: float=0.2, num_labels: int=4, input_len: int = 5120):
"""Initializes the network structure
MLP_type: Which paper's MLP structure to use
hidden_size (int): Number of nodes in the hidden layer. Defaulted to 50.
dropout (float): Rate at which nodes are deactivated. Defaulted to 0.2.
num_labels (int): Number of labels to predict. Defaulted to 4.
input_len (int): Length of the concatenated image + text feature vector. Defaulted to 5120.
"""
super(MultimodalClassifier, self).__init__()
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if MLP_type == "Sabat":
self.classifier = nn.Sequential(
nn.Linear(input_len, hidden_size),
nn.ReLU(),
nn.Dropout(dropout),
nn.Linear(hidden_size, hidden_size),
nn.ReLU(),
#nn.Dropout(dropout),
nn.Linear(hidden_size, num_labels)
#nn.Softmax(dim=1)
)
if MLP_type == "Gomez":
self.classifier = nn.Sequential(
nn.Linear(input_len, input_len),
nn.BatchNorm1d(input_len),
nn.ReLU(),
#nn.Dropout(dropout),
nn.Linear(input_len, 1024),
nn.BatchNorm1d(1024),
nn.ReLU(),
#nn.Dropout(dropout),
nn.Linear(1024, 512),
nn.BatchNorm1d(512),
nn.ReLU(),
#nn.Dropout(dropout),
nn.Linear(512, num_labels),
nn.Softmax(dim=1)
)
def forward(self, features):
"""Initiates forward pass through network
features: Matrix of size number of tweets x 5120 containing concatenated image and text features
"""
out = self.classifier(features.to(torch.float))
return out
def trainer(self, input_model, train_data, dev_data, early_stop_vals: dict, epochs: int = 25, learning_rate: float = 1e-5, weight_decay: float = 0.1, warmup: float = 0.06):
"""
Trains multimodal model
input_model: Instantiation of model
train_data: Dataloader object containing train data- image, text, labels
dev_data: Dataloader object containing dev data- image, text, labels
early_stopping: Dictionary containing patience value (int) and delta value (float). The patience determines the number of epochs to wait to achieve the given delta
epochs (int): Number of times to run through all batches. Default value is 25.
learning_rate (float): Default value is 1e-5.
weight decay (float): Default value is 0.1
warmup (float): Default value is 0.06; percentage of training steps in warmup.
"""
model = input_model.to(self.device)
self.optimizer = optim.AdamW(model.classifier.parameters(), lr = learning_rate, weight_decay = weight_decay)
self.scheduler = get_linear_schedule_with_warmup(self.optimizer, num_warmup_steps = int(warmup * len(train_data) * epochs), num_training_steps = len(train_data) * epochs) # num_training_steps is the total step count, not the post-warmup count
criterion = nn.CrossEntropyLoss().to(self.device)
train_loss_values, val_loss_values, train_acc_values, val_acc_values = [], [], [], []
for epoch in trange(epochs, desc= "Epoch"):
if early_stopping(val_loss_values, early_stop_vals) == False:
print('======== Epoch {:} / {:} ========'.format(epoch + 1, epochs))
print('Training...')
train_total_loss, train_total_len, train_num_correct = 0, 0, 0
model.train()
for step, batch in enumerate(train_data):
if step % 50 == 0:
print("Processing batch...{} of {}".format(step, len(train_data)))
#model.zero_grad()
self.optimizer.zero_grad()
batch_features, batch_labels, _ = tuple(t.to(self.device) for t in batch)
train_total_len += batch_features.shape[0]
logits = model(batch_features)
loss = criterion(logits, batch_labels).to(self.device)
train_total_loss += loss.item() # use .item() so the graph is not retained across batches
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # clip before the optimizer step so it takes effect
self.optimizer.step()
self.scheduler.step()
pred = logits.argmax(1, keepdim=True).float()
correct_tensor = pred.eq(batch_labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.cpu().numpy())
train_num_correct += np.sum(correct)
train_acc = train_num_correct / train_total_len
train_acc_values.append(train_acc)
avg_train_loss = train_total_loss / len(train_data)
train_loss_values.append(avg_train_loss)
print()
print("Running Validation...")
print()
val_total_loss, val_total_len, val_num_correct = 0, 0, 0
model.eval()
for batch in dev_data:
batch_features, batch_labels, _ = tuple(t.to(self.device) for t in batch)
val_total_len += batch_features.shape[0]
with torch.no_grad():
logits = model(batch_features)
loss = criterion(logits, batch_labels)
val_total_loss += loss
pred = logits.argmax(1, keepdim=True).float()
correct_tensor = pred.eq(batch_labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.cpu().numpy())
val_num_correct += np.sum(correct)
val_acc = val_num_correct / val_total_len
val_acc_values.append(val_acc)
avg_val_loss = val_total_loss / len(dev_data)
val_loss_values.append(avg_val_loss.cpu().numpy())
print("Epoch | Train Accuracy | Validation Accuracy | Training Loss | Validation Loss")
print(f"{epoch+1:3d} | {train_acc:.3f} | {val_acc:.3f} | {avg_train_loss:.3f} | {avg_val_loss:.3f}")
print()
if epoch == (epochs-1):
training_plot(train_loss_values, val_loss_values)
training_dict = {"Train Accuracy": train_acc_values, "Train Loss": train_loss_values, "Val Accuracy": val_acc_values, "Val Loss": val_loss_values}
print("Training complete!")
return training_dict
else:
continue
else:
print("Stopping early...")
training_plot(train_loss_values, val_loss_values)
training_dict = {"Train Accuracy": train_acc_values, "Train Loss": train_loss_values, "Val Accuracy": val_acc_values, "Val Loss": val_loss_values}
print("Training complete!")
return training_dict
def test(self, input_model, test_data):
"""
Tests the model's performance based on several metrics
input_model: Instantiation of model
test_data: Dataloader object containing test data- image, text, labels
"""
print('Predicting labels for {} batches...'.format(len(test_data)))
model = input_model.to(self.device)
model.eval()
predictions, true_labels, ids = [], [], []
for batch in test_data:
batch_features, batch_labels, batch_ids = tuple(t.to(self.device) for t in batch)
with torch.no_grad():
logits = model(batch_features)
predictions.append(logits.detach().cpu().numpy())
true_labels.append(batch_labels.to('cpu').numpy())
ids.append(batch_ids.cpu().numpy())
predictions = functools.reduce(operator.iconcat, predictions, [])
true_labels = functools.reduce(operator.iconcat, true_labels, [])
ids = functools.reduce(operator.iconcat, ids, [])
print(' DONE.')
return metrics(true_labels, predictions, argmax_needed= True), ids
# Hyperparameters
DROPOUT = 0.2
HIDDEN_SIZE = 100
BATCH_SIZE = 8
NUM_LABELS = 4
NUM_EPOCHS = 100
EARLY_STOPPING = {"patience": 5, "delta": 0.005}
LEARNING_RATES = [0.0001, 0.001, 0.01, 0.1]
WEIGHT_DECAY = 0.1
WARMUP = 0.06
OUTPUT_DIR = "/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Multimodal/Feature Concatenation"
class DogWhistleDataset(Dataset):
def __init__(self, df):
self.data = df
def __len__(self):
return (self.data.shape[0])
def __getitem__(self, i):
features = np.array(self.data.iloc[i, 1:-2]) # use row i (not 0); start at column 1 to skip the Unnamed: 0 index column
labels = self.data.loc[i, "labels"]
ids = self.data.loc[i, "ids"]
sample = (features, labels, ids)
return sample
# Load data
# train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features.csv")
# dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features.csv")
# test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features.csv")
train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features_sabat.csv")
dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features_sabat.csv")
test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features_sabat.csv")
# Create dataset object
train_dataset = DogWhistleDataset(train)
dev_dataset = DogWhistleDataset(dev)
test_dataset = DogWhistleDataset(test)
# Create dataloader
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
dev_dataloader = DataLoader(dev_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
Classifier = MultimodalClassifier("Sabat", HIDDEN_SIZE, DROPOUT, NUM_LABELS, 2048)
#logits, batch_labels = Classifier.trainer(Classifier, train_dataloader, dev_dataloader, EARLY_STOPPING, 10, 0.1, WEIGHT_DECAY, WARMUP)
train_dict = Classifier.trainer(Classifier, train_dataloader, dev_dataloader, EARLY_STOPPING, 5, LEARNING_RATES[0], WEIGHT_DECAY, WARMUP)
(metric_vals, labels, preds), ids = Classifier.test(Classifier, test_dataloader)
#model_saver(Classifier, "Multimodal", OUTPUT_DIR, train_dict, labels, preds, metrics, ids)
print(metric_vals)
```
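`trainer` calls an `early_stopping` helper that is defined elsewhere in the notebook. A minimal sketch of what it is assumed to do — return `True` once the validation loss has failed to improve on the earlier best by `delta` for `patience` consecutive epochs:

```python
def early_stopping(val_loss_values, early_stop_vals):
    """Return True when training should stop early (assumed contract).

    val_loss_values: list of per-epoch validation losses
    early_stop_vals: dict with "patience" (int) and "delta" (float)
    """
    patience = early_stop_vals["patience"]
    delta = early_stop_vals["delta"]
    if len(val_loss_values) <= patience:
        return False  # not enough history yet
    best_before_window = min(val_loss_values[:-patience])
    recent = val_loss_values[-patience:]
    # stop if none of the recent epochs improved on the earlier best by delta
    return all(loss > best_before_window - delta for loss in recent)

print(early_stopping([1.0, 0.9, 0.91, 0.92, 0.93], {"patience": 3, "delta": 0.005}))  # True
print(early_stopping([1.0, 0.9, 0.5], {"patience": 3, "delta": 0.005}))               # False
```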
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(train.iloc[:, 1:-2])
X_test = sc.transform(test.iloc[:, 1:-2])
y_train = train.loc[:, "labels"].values
y_test = test.loc[:, "labels"].values
clf = RandomForestClassifier(max_depth=10, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metric_vals, _, _ = metrics(y_test, y_pred)
metric_vals
```
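The `metrics` helper called above (and in the PyTorch section) is defined earlier in the notebook. A hedged sketch of the behavior these calls rely on — a dict of scores (including the `"f1"` key used in the grid searches below) plus the labels and predictions, with `argmax_needed` collapsing logits to class ids:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def metrics(true_labels, predictions, argmax_needed: bool = False):
    """Compute accuracy/precision/recall/F1; returns (results, labels, preds)."""
    preds = np.array(predictions)
    if argmax_needed:  # logits -> predicted class ids
        preds = preds.argmax(axis=1)
    labels = np.array(true_labels)
    results = {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision_score(labels, preds, average="macro", zero_division=0),
        "recall": recall_score(labels, preds, average="macro", zero_division=0),
        "f1": f1_score(labels, preds, average="macro", zero_division=0),
    }
    return results, labels, preds

results, _, _ = metrics([0, 1, 1, 0], [0, 1, 0, 0])
print(results["accuracy"])  # 0.75
```

The macro averaging here is an assumption; the notebook's actual helper may weight classes differently.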
### Keras Implementation
```
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, BatchNormalization
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler, EarlyStopping
# Hyperparameters
DROPOUT = 0.2
HIDDEN_SIZE = 100
BATCH_SIZES = [8, 16, 32]
NUM_LABELS = 4
NUM_EPOCHS = 100
EARLY_STOPPING = {"patience": 3, "delta": 0.005}
LEARNING_RATES = [0.0001, 0.001, 0.01, 0.1]
WEIGHT_DECAY = 0.1
WARMUP = 0.06
OUTPUT_DIR = "/content/drive/My Drive/Dog_Whistle_Code/Fine_Tuned_Models/Multimodal/Feature Concatenation"
def decay(epoch, lr):
epochs_drop = 5
DECAY_RATE = 0.94
lrate = lr * (DECAY_RATE**((1+epoch)/epochs_drop))
return lrate
SCHEDULER = LearningRateScheduler(decay)
# Load data
# train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features.csv")
# dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features.csv")
# test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features.csv")
train = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Train/combined_features_sabat.csv")
dev = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Validation/combined_features_sabat.csv")
test = pd.read_csv("/content/drive/My Drive/Dog_Whistle_Code/Data/Test/combined_features_sabat.csv")
# Divide labels and features
x_train = train.iloc[:, 1:-2]
y_train = pd.get_dummies(train.loc[:, "labels"])
x_dev = dev.iloc[:, 1:-2]
y_dev = pd.get_dummies(dev.loc[:, "labels"])
x_test = test.iloc[:, 1:-2]
y_test = test.loc[:, "labels"].values.tolist()
def construct_model(MLP_type, hidden_size: int=50, dropout: float=0.2, num_labels: int=4, input_len: int = 5120):
"""Builds the network structure
MLP_type: Which paper's MLP structure to use
hidden_size (int): Number of nodes in the hidden layer. Defaulted to 50.
dropout (float): Rate at which nodes are deactivated. Defaulted to 0.2.
num_labels (int): Number of labels to predict. Defaulted to 4.
input_len (int): Length of the concatenated text + image feature vector. Defaulted to 5120.
"""
if MLP_type == "Sabat":
model = Sequential()
model.add(Dense(units=hidden_size, activation='relu',input_dim=input_len))
model.add(Dropout(0.2))
model.add(Dense(units=hidden_size, activation='relu',input_dim=hidden_size))
#model.add(Dropout(0.2))
model.add(Dense(units=num_labels, activation='softmax', input_dim=hidden_size))
if MLP_type == "Gomez":
model = Sequential()
model.add(Dense(units=input_len, activation='relu',input_dim=input_len))
model.add(BatchNormalization())
#model.add(Dropout(0.2))
model.add(Dense(units=1024, activation='relu',input_dim=input_len))
model.add(BatchNormalization())
#model.add(Dropout(0.2))
model.add(Dense(units=512, activation='relu',input_dim=1024))
model.add(BatchNormalization())
#model.add(Dropout(0.2))
model.add(Dense(units=num_labels, activation='softmax', input_dim=512))
return model
def model_trainer(input_model, x_train, x_test, x_dev, y_dev, early_stop_vals: dict, scheduler, epochs: int = 25, learning_rate: float = 1e-5, batch_size: int=8):
"""
Trains multimodal model
input_model: Instantiation of model
x_train: Dataframe containing train features
y_train: Pandas series containing train labels (note: read from the enclosing scope; the second positional parameter x_test is unused)
x_dev: Dataframe containing validation features
y_dev: Pandas series containing validation labels
early_stopping: Dictionary containing patience value (int) and delta value (float). The patience determines the number of epochs to wait to achieve the given delta
epochs (int): Number of times to run through all batches. Default value is 25.
learning_rate (float): Default value is 1e-5.
batch_size (int): Number of examples to be passed through the model at a given time. Defaulted to 8.
"""
Early_Stop = EarlyStopping(monitor='val_loss', min_delta=early_stop_vals["delta"], patience=early_stop_vals["patience"], verbose=1, mode='auto')
opt = Adam(learning_rate=learning_rate)
input_model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=["accuracy"])
history = input_model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=1, validation_data=(x_dev, y_dev), callbacks=[Early_Stop, scheduler])
train_dict = {"Train Accuracy": history.history['accuracy'], "Train Loss": history.history['loss'], "Val Accuracy": history.history['val_accuracy'], "Val Loss": history.history['val_loss'] }
return input_model, train_dict
def model_tester(input_model, x_test, y_test):
"""
Tests the model's performance based on several metrics
input_model: Instantiation of model
x_test: Dataframe containing test features
y_test: Pandas series containing test labels
"""
print('Predicting labels for {} sentences...'.format(len(x_test)))
preds = input_model.predict(x_test)
results, labels, predictions = metrics(y_test, preds, argmax_needed=True)
return results, labels, predictions
# Run Gomez
Keras_Classifier = construct_model("Gomez", HIDDEN_SIZE, DROPOUT, NUM_LABELS, 2048)
trained_model, train_dict = model_trainer(Keras_Classifier, x_train, x_test, x_dev, y_dev, EARLY_STOPPING, SCHEDULER, NUM_EPOCHS, LEARNING_RATES[0], 8)
results, labels, predictions = model_tester(trained_model, x_test, y_test)
print(results)
# Run Sabat
Keras_Classifier = construct_model("Sabat", HIDDEN_SIZE, DROPOUT, NUM_LABELS, 2048)
trained_model, train_dict = model_trainer(Keras_Classifier, x_train, x_test, x_dev, y_dev, EARLY_STOPPING, SCHEDULER, NUM_EPOCHS, LEARNING_RATES[0], 25) #Note: lr from paper was LEARNING_RATES[-1]
results, labels, predictions = model_tester(trained_model, x_test, y_test)
print(results)
results_dict = {}
max_f1_value = 0
for i in BATCH_SIZES:
learning_rate_dict = {}
for j in LEARNING_RATES:
Keras_Classifier = construct_model("Sabat", HIDDEN_SIZE, DROPOUT, NUM_LABELS, 2048)
trained_model, train_dict = model_trainer(Keras_Classifier, x_train, x_test, x_dev, y_dev, EARLY_STOPPING, SCHEDULER, NUM_EPOCHS, j, i)
learning_rate_dict[j], labels, predictions = model_tester(trained_model, x_test, y_test)
if learning_rate_dict[j]["f1"] >= max_f1_value: #only save best model
max_f1_value = learning_rate_dict[j]["f1"]
print("The new top F1 score is: {}. Saving model...".format(max_f1_value))
model_saver(trained_model, "Sabat", "Keras", OUTPUT_DIR, train_dict, labels, predictions, test.loc[:, "ids"].values.tolist(), learning_rate_dict[j])
results_dict[i] = learning_rate_dict
#save complete training results
np.save(os.path.join(os.path.join(OUTPUT_DIR, "Sabat"), "dogwhistle_total_training_results_sabat.npy"), results_dict)
results_dict = {}
max_f1_value = 0
for i in BATCH_SIZES:
learning_rate_dict = {}
for j in LEARNING_RATES:
Keras_Classifier = construct_model("Gomez", HIDDEN_SIZE, DROPOUT, NUM_LABELS, 2048)
trained_model, train_dict = model_trainer(Keras_Classifier, x_train, x_test, x_dev, y_dev, EARLY_STOPPING, SCHEDULER, NUM_EPOCHS, j, i)
learning_rate_dict[j], labels, predictions = model_tester(trained_model, x_test, y_test)
if learning_rate_dict[j]["f1"] >= max_f1_value: #only save best model
max_f1_value = learning_rate_dict[j]["f1"]
print("The new top F1 score is: {}. Saving model...".format(max_f1_value))
model_saver(trained_model, "Gomez", "Keras", OUTPUT_DIR, train_dict, labels, predictions, test.loc[:, "ids"].values.tolist(), learning_rate_dict[j])
results_dict[i] = learning_rate_dict
#save complete training results
np.save(os.path.join(os.path.join(OUTPUT_DIR, "Gomez"), "dogwhistle_total_training_results_gomez.npy"), results_dict)
```
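`np.save` pickles the nested `results_dict`, which means reading it back later needs `allow_pickle=True` and an `.item()` call to unwrap the 0-d object array. A quick sketch with a toy dict (written to a temp file, not the Drive paths above):

```python
import os
import tempfile
import numpy as np

# toy results structure: batch size -> learning rate -> scores
results_dict = {8: {0.001: {"f1": 0.71}}, 16: {0.001: {"f1": 0.69}}}

path = os.path.join(tempfile.mkdtemp(), "results.npy")
np.save(path, results_dict)                       # pickles the dict into a 0-d object array
loaded = np.load(path, allow_pickle=True).item()  # .item() unwraps it back to a dict
print(loaded[8][0.001]["f1"])  # 0.71
```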
```
import numpy as np
import numba
import matplotlib.pyplot as plt
import scipy.optimize as sopt
from pysimu import ode2numba, ssa
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
%matplotlib notebook
import freq2
syst = freq2.freq2_class()
syst.struct[0].p_load = 0.1
syst.struct[0].B_1 = 10
syst.struct[0].B_2 = 10
syst.struct[0].p_load = 0.5
syst.struct[0].K_imw_1 = 0.000001
syst.struct[0].K_imw_2 = 0.000001
N_x = syst.N_x
N_y = syst.N_y
x0 = np.zeros(N_x+N_y)
s = sopt.fsolve(syst.run_problem,x0 )
print(f'phi = {s[8]}')
print(f'phi_1 = {s[0]-s[8]}, phi_2 = {s[5]-s[8]}')
print(f'p_1 = {s[N_x+0]}, p_2 = {s[N_x+3]}')
print(f'omega_1 = {s[1]}, omega_2 = {s[6]}')
run = freq2.run
@numba.njit(cache=True)
def perturbations(t,struct):
struct[0].p_load = 0.5
if t>1.0: struct[0].p_load= 0.6
return
@numba.njit(cache=True)
def solver(struct):
sin = np.sin
cos = np.cos
sqrt = np.sqrt
i = 0
Dt = struct[i].Dt
N_steps = struct[i].N_steps
N_store = struct[i].N_store
N_x = struct[i].N_x
N_y = struct[i].N_y
N_outs = 1
decimation = struct[i].decimation
eye = np.eye(N_x)
# initialization
#t = struct[i].t
t = 0.0
run(0.0,struct, 1)
it_store = 0
struct[i]['T'][0] = t
struct[i].X[0,:] = struct[i].x[:,0]
Y = np.zeros((N_store,N_y))
Y[0,:] = struct[i].y[:,0]
solver = struct[i].solvern
for it in range(N_steps-1):
t += Dt
perturbations(t,struct)
if solver == 1:
# forward Euler solver
run(t,struct, 2)
struct[i].x[:] += Dt*struct[i].f
if solver == 2:
# backward Euler solver
x_0 = np.copy(struct[i].x[:])
for j in range(struct[i].imax):
run(t,struct, 2)
run(t,struct, 3)
run(t,struct, 10)
phi = x_0 + Dt*struct[i].f - struct[i].x
Dx = np.linalg.solve(-(Dt*struct[i].Fx - np.eye(N_x)), phi)
struct[i].x[:] += Dx[:]
if np.max(np.abs(Dx)) < struct[i].itol: break
#print(struct[i].f) # debug output
if solver == 3:
# trapezoidal solver
run(t,struct, 2)
f_0 = np.copy(struct[i].f[:])
x_0 = np.copy(struct[i].x[:])
for j in range(struct[i].imax):
run(t,struct, 10)
phi = x_0 + 0.5*Dt*(f_0 + struct[i].f) - struct[i].x
Dx = np.linalg.solve(-(0.5*Dt*struct[i].Fx - np.eye(N_x)), phi)
struct[i].x[:] += Dx[:]
run(t,struct, 2)
if np.max(np.abs(Dx)) < struct[i].itol: break
if solver == 4:
#print(t)
run(t,struct, 2)
run(t,struct, 3)
x = np.copy(struct[i].x[:])
y = np.copy(struct[i].y[:])
f = np.copy(struct[i].f[:])
g = np.copy(struct[i].g[:])
for iter in range(1):
run(t,struct, 2)
run(t,struct, 3)
run(t,struct,10)
run(t,struct,11)
x_i = struct[i].x[:]
y_i = struct[i].y[:]
f_i = struct[i].f[:]
g_i = struct[i].g[:]
F_x_i = struct[i].Fx[:,:]
F_y_i = struct[i].Fy[:,:]
G_x_i = struct[i].Gx[:,:]
G_y_i = struct[i].Gy[:,:]
A_c_i = np.vstack((np.hstack((eye-0.5*Dt*F_x_i, -0.5*Dt*F_y_i)),
np.hstack((G_x_i, G_y_i))))
f_n_i = x_i - x - 0.5*Dt*(f_i+f)
#print(t,iter,np.linalg.det(G_y_i),struct[i].x[1,0])
Dxy_i = np.linalg.solve(-A_c_i,np.vstack((f_n_i,g_i)))
x_i = x_i + Dxy_i[0:N_x]
y_i = y_i + Dxy_i[N_x:(N_x+N_y)]
struct[i].x[:] = x_i
struct[i].y[:] = y_i
if np.max(np.abs(Dxy_i[:,0]))<1.0e-6:
break
struct[i].x[:] = x_i
struct[i].y[:] = y_i
# channels
if it >= it_store*decimation:
struct[i]['T'][it_store+1] = t
struct[i].X[it_store+1,:] = struct[i].x[:,0]
Y[it_store+1,:] = struct[i].y[:,0]
it_store += 1
struct[i].t = t
return struct[i]['T'][:], struct[i].X[:], Y
syst.solvern = 4
syst.t_end = 60.0
syst.Dt = 0.010
syst.decimation =1
syst.update()
syst.struct[0].B_1 = 10
syst.struct[0].B_2 = 10
syst.struct[0].p_load = 0.5
syst.struct[0].K_imw_1 = 0.0001
syst.struct[0].K_imw_2 = 0.01
syst.struct[0].T_b_1 = 0.5
syst.struct[0].T_c_1 = 0.0
syst.struct[0].T_b_2 = 0.5
syst.struct[0].T_c_2 = 0.0
x0 = np.zeros(syst.N_x+syst.N_y)
s = sopt.fsolve(syst.run_problem,x0 )
#syst.struct[0].v_f = 1.2
#syst.struct[0].H = 5
#syst.struct[0].T_pss_1 = 3.6415847004537487
#syst.struct[0].T_pss_2 = 0.6398979816027691
#syst.struct[0].D = x_pso[1]
x0 = np.ones((syst.N_x+syst.N_y,1))
s = sopt.fsolve(syst.run_problem,x0 )
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:]
T,X,Y = solver(syst.struct)
#%timeit solver(syst.struct)
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(7, 4), sharex = True)
axes[0].plot(T[:-1], X[:-1,1])
axes[0].plot(T[:-1], X[:-1,6])
axes[1].plot(T[:-1], Y[:,3])
axes[1].plot(T[:-1], Y[:,7])
#axes[0].plot(T[:-1], Y[:,-1])
#axes[1].plot(T[:-1], Y[:,6])
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(7, 4), sharex = True)
axes[0].plot(T[:-1], X[:-1,8])
axes[0].plot(T[:-1], X[:-1,1])
curve1 = axes[1].plot(T[:-1], Y[:,7])
axes[2].plot(T[:-1], Y[:,0])
curve2 = axes[2].plot(T[:-1], Y[:,4]- Y[:,1])
#axes[1].plot(T[:-1], Y[:,3])
axes[1].set_ylim([0,1.2])
#axes[0].set_xlim([0,15])
axes[0].grid(True)
fig.canvas.draw()
def update(p_m = 0.9,T_pss_1 = 1.281,T_pss_2 = 0.013):
x0 = np.ones((syst.N_x+syst.N_y,1))
s = sopt.fsolve(syst.ini_problem,x0 )
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:]
syst.struct[0].p_m = p_m
syst.struct[0].T_pss_1 = T_pss_1
syst.struct[0].T_pss_2 = T_pss_2
T,X,Y = solver(syst.struct)
curve1[0].set_xdata(T[:-1])
curve1[0].set_ydata(Y[:,7])
curve2[0].set_xdata(T[:-1])
curve2[0].set_ydata(Y[:,4]- Y[:,1])
fig.canvas.draw()
update()
interact(update,
p_m =widgets.FloatSlider(min=0.0,max=1.2,step=0.1,value=0.8, continuous_update=False),
T_pss_1 =widgets.FloatSlider(min=0.0,max=10.0,step=0.01,value=1.281, continuous_update=False),
T_pss_2 =widgets.FloatSlider(min=0.0,max=1.0,step=0.01,value=0.013, continuous_update=False)
);
import operator
import random
import time
import math
import multiprocessing as mp
from deap import base, creator, tools
#def cost_func(part):
# x1, x2 = part[0], part[1]
# return ((x1**2+x2**2)**0.25)*((math.sin(50*(x1**2+x2**2)**0.1))**2 +1.0),
s = sopt.fsolve(syst.ini_problem,x0 )
def cost_func(part):
T_pss_1, T_pss_2 = part[0], part[1]
x0 = np.ones((syst.N_x+syst.N_y,1))
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:]
syst.struct[0].T_pss_1 = T_pss_1
syst.struct[0].T_pss_2 = T_pss_2
T,X,Y = solver(syst.struct)
cost = np.sum((Y[:,0] - (Y[:,4]- Y[:,1]))**2)
# a = ((x1**2+x2**2)**0.25)*((math.sin(50*(x1**2+x2**2)**0.1))**2 +1.0)
return cost,
def generate(size, pmin, pmax, smin, smax):
part = creator.Particle(random.uniform(pmin, pmax) for _ in range(size))
part.speed = [random.uniform(smin, smax) for _ in range(size)]
part.smin = smin
part.smax = smax
return part
def updateParticle(best, part, phi1, phi2):
u1 = (random.uniform(0, phi1) for _ in range(len(part)))
u2 = (random.uniform(0, phi2) for _ in range(len(part)))
v_u1 = map(operator.mul, u1, map(operator.sub, part.best, part))
v_u2 = map(operator.mul, u2, map(operator.sub, best, part))
part.speed = list(map(operator.add, part.speed, map(operator.add, v_u1, v_u2)))
for i, speed in enumerate(part.speed):
if speed < part.smin:
part.speed[i] = part.smin
elif speed > part.smax:
part.speed[i] = part.smax
part[:] = list(map(operator.add, part, part.speed))
return part
creator.create("FitnessMax", base.Fitness, weights=(-1.0,))
creator.create("Particle", list, fitness=creator.FitnessMax, speed=list, smin=None, smax=None, best=None)
toolbox = base.Toolbox()
#toolbox.register("particle", generate, size=2, pmin=-10, pmax=10, smin=-2, smax=2)
toolbox.register("particle", generate, size=2, pmin=0.001, pmax=10, smin=0.001, smax=10)
toolbox.register("population", tools.initRepeat, list, toolbox.particle)
toolbox.register("update", updateParticle, phi1=1.0, phi2=1.0)
toolbox.register("evaluate", cost_func)
def pso(pop,toolbox,maxmov):
MOVES = maxmov
best = None
valor_best = None
i = 0
while i < MOVES:
print('iteration', i)
fitnesses = toolbox.map(toolbox.evaluate,pop)
for part, fit in zip(pop, fitnesses):
part.fitness.values = fit
for part in pop:
if not part.best or part.best.fitness < part.fitness:
part.best = creator.Particle(part)
part.best.fitness.values = part.fitness.values
if not best or best.fitness < part.fitness:
best = creator.Particle(part)
best.fitness.values = part.fitness.values
valor_best1 = best.fitness.values
if valor_best == valor_best1:
i += 1
else:
valor_best = valor_best1
i = 0
for part in pop:
toolbox.update(best, part)
return best, best.fitness
n=10
pop = toolbox.population(n)
MOVES = 80
BestParticle, BestFitness = pso(pop,toolbox,MOVES)
print(BestParticle, BestFitness)
#pool.close()
from pyswarm import pso
s = sopt.fsolve(syst.ini_problem,x0 )
def cost_func(part):
T_pss_1, T_pss_2 = part[0], part[1]
x0 = np.ones((syst.N_x+syst.N_y,1))
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:]
syst.struct[0].T_pss_1 = T_pss_1
syst.struct[0].T_pss_2 = T_pss_2
T,X,Y = solver(syst.struct)
cost = np.sum((Y[:,0] - (Y[:,4]- Y[:,1]))**2)
# a = ((x1**2+x2**2)**0.25)*((math.sin(50*(x1**2+x2**2)**0.1))**2 +1.0)
return cost
lb = [1, 1]
ub = [5, 5]
xopt, fopt = pso(cost_func, lb, ub)
from scipy import optimize
s = sopt.fsolve(syst.ini_problem,x0 )
def cost_func(part):
T_pss_1, T_pss_2 = part[0], part[1]
x0 = np.ones((syst.N_x+syst.N_y,1))
syst.struct[0].x[:,0] = s[0:syst.N_x]
syst.struct[0].y[:,0] = s[syst.N_x:]
syst.struct[0].T_pss_1 = T_pss_1
syst.struct[0].T_pss_2 = T_pss_2
T,X,Y = solver(syst.struct)
cost = np.sum((Y[:,0] - (Y[:,4]- Y[:,1]))**2)
# a = ((x1**2+x2**2)**0.25)*((math.sin(50*(x1**2+x2**2)**0.1))**2 +1.0)
return cost
bnds = ((1, 5), (2, 5))
res = optimize.minimize(cost_func, (2, 2), method='COBYLA', bounds=bnds)
res
s
```
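The `solver == 3` branch above is a trapezoidal (Crank–Nicolson) step solved by Newton iteration. For a linear scalar test problem dx/dt = -a·x the update collapses to a closed form, which makes it easy to check the method against the exact solution. This toy system is an assumption for illustration, not the pysimu model:

```python
import numpy as np

def trapezoidal_linear(a, x0, Dt, n_steps):
    """Trapezoidal rule for dx/dt = -a*x: x_{n+1} = x_n*(1 - a*Dt/2)/(1 + a*Dt/2)."""
    x = x0
    factor = (1.0 - 0.5 * a * Dt) / (1.0 + 0.5 * a * Dt)
    for _ in range(n_steps):
        x *= factor
    return x

a, x0, Dt, T = 1.0, 1.0, 0.01, 1.0
x_num = trapezoidal_linear(a, x0, Dt, int(T / Dt))
x_exact = x0 * np.exp(-a * T)
print(abs(x_num - x_exact))  # second-order accurate: tiny error at Dt = 0.01
```

The same implicit averaging of f at the old and new state is what the `phi = x_0 + 0.5*Dt*(f_0 + struct[i].f) - struct[i].x` residual expresses for the full nonlinear system.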
# Running delay filter, followed by xrfi
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import os
from hera_cal.delay_filter import DelayFilter
from hera_cal.data import DATA_PATH
from pyuvdata import UVData
import shutil
from hera_qm import xrfi
from hera_qm import utils as qm_utils
from hera_cal import io
import copy
```
## Read Data
Load data by creating a `DelayFilter` object and then reading in some data.
```
dfil = DelayFilter()
fname = os.path.join(DATA_PATH, "zen.2458043.12552.xx.HH.uvORA")
uv = UVData()
uv.read_miriad(fname)
dfil.load_data(uv)
uv_raw = copy.deepcopy(uv) # For comparing later
```
## Perform Delay Filter
```
dfil.run_filter()
```
## Update UVData object
```
io.update_uvdata(dfil.input_data, data=dfil.filtered_residuals)
```
## Run xrfi
```
# Construct arg parser
a = qm_utils.get_metrics_ArgumentParser('xrfi_run')
# Here are some example arguments. There are others too - check hera_qm.utils
arg0 = "--infile_format=miriad"
arg1 = "--xrfi_path={}".format(os.path.join(DATA_PATH, 'test_output'))
arg2 = "--algorithm=xrfi"
arg3 = "--kt_size=2"
arg4 = "--kf_size=2"
arg5 = "--sig_init=6"
arg6 = "--sig_adj=2"
arg7 = "--summary"
arg8 = "--model_file={}".format(os.path.join(DATA_PATH, 'zen.2458043.12552.xx.HH.uvA.vis.uvfits'))
arg9 = "--calfits_file={}".format(os.path.join(DATA_PATH, 'zen.2458043.12552.xx.HH.uvORA.abs.calfits'))
arguments = ' '.join([arg0, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9])
cmd = ' '.join([arguments, fname])
args = a.parse_args(cmd.split())
xrfi.xrfi_run(dfil.input_data, args, cmd)
```
## Apply to data
```
# Construct new arg parser
a = qm_utils.get_metrics_ArgumentParser('xrfi_apply')
arg0 = "--infile_format=miriad"
arg1 = "--outfile_format=miriad"
arg2 = "--extension=R"
arg3 = "--xrfi_path={}".format(os.path.join(DATA_PATH, 'test_output'))
flag_file = os.path.join(DATA_PATH, 'test_output', 'zen.2458043.12552.xx.HH.uvORA.flags.npz')
arg4 = "--flag_file=" + flag_file
wf_file1 = os.path.join(DATA_PATH, 'test_output', 'zen.2458043.12552.xx.HH.uvORA.abs.calfits.g.flags.npz')
wf_file2 = os.path.join(DATA_PATH, 'test_output', 'zen.2458043.12552.xx.HH.uvORA.abs.calfits.x.flags.npz')
# Note the x-flags from this file result in all flags, but this is how you'd do it.
arg5 = "--waterfalls=" + wf_file1 # + "," + wf_file2
arg6 = "--overwrite" # Overwrite test files
arguments = ' '.join([arg0, arg1, arg2, arg3, arg4, arg5, arg6])
cmd = ' '.join([arguments, fname])
args = a.parse_args(cmd.split())
xrfi.xrfi_apply(args.filename, args, cmd)
uvR = UVData()
uvR.read_miriad(os.path.join(DATA_PATH, 'test_output', os.path.basename(fname) + args.extension))
bl = (37, 39, 'xx')
plt.figure(figsize=(15,5))
plt.subplot(131)
d = uv_raw.get_data(bl)
f = uv_raw.get_flags(bl)
d = np.ma.masked_where(f, d)
plt.imshow(np.abs(d), aspect='auto', norm=colors.LogNorm(vmin=1e-4))
plt.colorbar()
plt.title(str(bl) + ': Abs(Data)')
plt.xlabel('Frequency Channel')
plt.ylabel('Integration')
plt.subplot(132)
res = dfil.filtered_residuals[bl]
res[dfil.flags[bl]] = 0
plt.imshow(np.abs(res), aspect='auto', norm=colors.LogNorm(vmin=1e-4))
plt.colorbar()
plt.title(str(bl) + ': Abs(Filtered Data)')
plt.xlabel('Frequency Channel')
plt.ylabel('Integration')
plt.subplot(133)
d = uvR.get_data(bl)
f = uvR.get_flags(bl)
d = np.ma.masked_where(f, d)
plt.imshow(np.abs(d), aspect='auto', norm=colors.LogNorm(vmin=1e-4))
plt.colorbar()
plt.title(str(bl) + ': Abs(RFId Data)')
plt.xlabel('Frequency Channel')
plt.ylabel('Integration')
```
# 4. Indexing, slicing
Each element of an array can be located by its position along each dimension. NumPy offers multiple efficient ways to access single elements or groups of elements. We will illustrate these concepts with both small simple matrices and a regular image.
```
import numpy as np
import matplotlib.pyplot as plt
plt.gray();
import skimage
```
We first load an image included in the scikit-image package:
```
image = skimage.data.chelsea()
plt.imshow(image);
```
We can check the dimensions of the image and see that it is an RGB image with 3 channels:
```
image.shape
```
## 4.1 Accessing single values
We create a small 2D array to use as an example:
```
normal_array = np.random.normal(10, 2, (3,4))
normal_array
```
It is very easy to access an array's values. One can just pass an *index* for each dimension. For example, to recover the value on the last row and second column of the ```normal_array``` array we just write (remember counting starts at 0):
```
single_value = normal_array[2,1]
single_value
```
What is returned in that case is a single number that we can re-use:
```
single_value += 10
single_value
```
And that change doesn't affect the original value in the array:
```
normal_array
```
However we can also directly change the value in an array:
```
normal_array[2,1] = 23
normal_array
```
## 4.2 Accessing part of an array with indices: slicing
### 4.2.1 Selecting a range of elements
One can also select multiple elements in each dimension (e.g. multiple rows and columns in 2D) by using the ```start:end:step``` syntax. By default, if omitted, ```start=0```, ```end``` is the end of the axis, and ```step=1```. For example to select the first **and** second rows of the first column, we can write:
```
normal_array[0:2,0]
```
Note that the ```end``` element is **not** included. One can use the same notation for all dimensions:
```
normal_array[0:2,2:4]
normal_array[1:,2:4]
```
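As a quick sanity check of the end-exclusive convention, here is a small throwaway example (independent of ```normal_array```):

```python
import numpy as np

a = np.arange(10)          # array([0, 1, ..., 9])
sl = a[2:5]                # start=2 included, end=5 excluded

# The slice holds end - start elements and never contains a[5]
print(sl)                  # [2 3 4]
```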
### 4.2.2 Selecting all elements
If we only specify ```:```, it means we want to recover all elements in that dimension:
```
normal_array[:,2:4]
```
In general, if you specify an index for only the first axis, you recover the corresponding element (here a full row) along the first dimension:
```
normal_array
normal_array[1]
```
Finally, note that if you want to recover only one element along a dimension (a single row, column, etc.), you can do it in two ways:
```
normal_array[0,:]
```
This returns a one-dimensional array containing a single row from the original array:
```
normal_array[0,:].shape
```
If instead you specify explicit boundaries that still select only a single row:
```
normal_array[0:1,:]
normal_array[0:1,:].shape
```
you recover a two-dimensional array where one of the dimensions has a size of 1.
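The difference between the two notations is easiest to see by comparing shapes, sketched here with a fresh random array:

```python
import numpy as np

m = np.random.normal(10, 2, (3, 4))
row_1d = m[0, :]      # integer index: the axis is dropped -> 1D
row_2d = m[0:1, :]    # length-1 slice: the axis is kept -> 2D

print(row_1d.shape)   # (4,)
print(row_2d.shape)   # (1, 4)
```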
### 4.2.3 Illustration on an image
We can for example only select half the rows of the image but all columns and channels:
```
image.shape
sub_image = image[0:150,:,:]
plt.imshow(sub_image);
```
Or we can take every fifth row and column from a single channel, which returns a pixelated version of the original image:
```
plt.imshow(image[::5,::5,0]);
```
## 4.3 Sub-arrays are not copies!
As is often the case in Python, when you create a new variable from a sub-array, that variable **is not independent** of the original variable:
```
sub_array = normal_array[:,2:4]
sub_array
normal_array
```
If for example we modify ```normal_array```, this is going to be reflected in ```sub_array``` too:
```
normal_array[0,2] = 100
normal_array
sub_array
```
The converse is also true:
```
sub_array[0,1] = 50
sub_array
normal_array
```
If you want your sub-array to be an *independent* copy of the original, you have to use the ```.copy()``` method:
```
sub_array_copy = normal_array[1:3,:].copy()
sub_array_copy
sub_array_copy[0,0] = 500
sub_array_copy
normal_array
```
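NumPy can tell you directly whether two arrays share memory, which makes the view-versus-copy distinction easy to verify. A minimal sketch:

```python
import numpy as np

base = np.zeros((3, 4))
view = base[:, 2:4]          # plain slicing returns a view
indep = base[:, 2:4].copy()  # .copy() allocates new memory

base[0, 2] = 100             # modify the original in place

print(np.shares_memory(base, view))   # True
print(view[0, 0], indep[0, 0])        # 100.0 0.0
```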
## 4.4. Accessing parts of an array with coordinates
In the cases above, we are limited to selecting rectangular sub-regions of the array. But sometimes we want to recover a series of specific elements, for example the elements (row=0, column=3) and (row=2, column=2). To achieve that, we can simply index the array with one list containing the row indices and another containing the column indices:
```
row_indices = [0,2]
col_indices = [3,2]
normal_array[row_indices, col_indices]
normal_array
selected_elements = normal_array[row_indices, col_indices]
selected_elements
```
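The pairing rule is worth spelling out: the i-th row index is combined with the i-th column index. A small sketch with a fully known array:

```python
import numpy as np

m = np.arange(12).reshape(3, 4)   # m[r, c] == 4*r + c
rows = [0, 2]
cols = [3, 2]

picked = m[rows, cols]            # elements (0, 3) and (2, 2)
print(picked)                     # [ 3 10]
```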
## 4.5 Logical indexing
The last way of extracting elements from an array is to use a boolean array of the same shape. For example, let's create a boolean array by comparing our original matrix to a threshold:
```
bool_array = normal_array > 40
bool_array
```
We see that only two elements are above the threshold. Now we can use this logical array to *index* the original array. Imagine that the logical array is a mask with holes only at the ```True``` positions, and that we superimpose it on the original array. Then we just take all the values visible through the holes:
```
normal_array[bool_array]
```
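A compact sketch of the same mechanism on a fully known array, so the result is predictable:

```python
import numpy as np

m = np.array([[1, 50],
              [7, 99]])
mask = m > 40            # boolean array with the same shape as m

selected = m[mask]       # 1D array of values at the True positions
print(selected)          # [50 99]
```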
Coming back to our real image, we can for example extract a single channel and then find the bright regions in it:
```
single_channel = image[:,:,0]
mask = single_channel > 150
plt.imshow(mask);
```
And now we can recover all the pixels that are "selected" by this mask:
```
single_channel[mask]
```
## 4.6 Reshaping arrays
Often it is necessary to reshape arrays, i.e. keep the elements unchanged but change their positions. There are multiple functions for this; the main one is of course ```reshape```.
### 4.6.1 ```reshape```
Given an array of $M \times N$ elements, one can reshape it to a shape $O \times P$ as long as $M \cdot N = O \cdot P$.
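The size constraint is enforced by NumPy: a shape whose product doesn't match raises a ```ValueError```. A quick sketch:

```python
import numpy as np

m = np.random.normal(10, 2, (3, 4))   # 3*4 = 12 elements
r = np.reshape(m, (2, 6))             # 2*6 = 12: allowed

try:
    np.reshape(m, (5, 5))             # 5*5 = 25 != 12: rejected
    incompatible_ok = True
except ValueError:
    incompatible_ok = False
```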
```
reshaped = np.reshape(normal_array,(2,6))
reshaped
reshaped.shape
300*451/150
```
With the image as example, we can reshape the array from $300 \times 451 \times 3$ to $150 \times 902 \times 3$:
```
plt.imshow(np.reshape(image, (150,902,3)))
```
### 4.6.2 Flattening
It's also possible to simply flatten an array i.e. remove all dimensions to create a 1D array. This can be useful for example to create a histogram of a high-dimensional array.
```
flattened = np.ravel(normal_array)
flattened
flattened.shape
```
### 4.6.3 Dimension collapse
Another common operation that reshapes an array is projection. Let's consider again our ```normal_array```:
```
normal_array
```
We can project all values along the first or second axis, to recover for each row/column the largest value:
```
proj0 = np.max(normal_array, axis = 0)
proj0
proj0.shape
```
We see that our projected array has lost a dimension: the one along which we performed the projection. With the image, we can project all channels along the third dimension:
```
plt.imshow(image.max(axis=2));
```
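If you want the projection without losing the dimension, most reduction functions accept ```keepdims=True```, which keeps the collapsed axis with size 1. A small sketch:

```python
import numpy as np

m = np.arange(12).reshape(3, 4)

proj = m.max(axis=0)                      # axis 0 is dropped
proj_kept = m.max(axis=0, keepdims=True)  # axis 0 kept with size 1

print(proj.shape, proj_kept.shape)        # (4,) (1, 4)
```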
### 4.6.4 Swapping dimensions
We can also simply exchange the positions of dimensions. This can be achieved in different ways. For example, we can *roll* dimensions with ```np.rollaxis```, i.e. circularly shift them. This preserves the relative order of all the other axes:
```
array3D = np.ones((4, 10, 20))
array3D.shape
array_rolled = np.rollaxis(array3D, axis=1, start=0)
array_rolled.shape
```
Alternatively you can swap two axes. This doesn't preserve their relative positions:
```
array_swapped = np.swapaxes(array3D, 0,2)
array_swapped.shape
```
With the image, we can for example swap the two first axes:
```
plt.imshow(np.swapaxes(image, 0, 1));
```
### 4.6.5 Change positions
Finally, we can also change the positions of elements without changing the shape of the array. For example, if we have an array with two columns, we can swap them:
```
array2D = np.random.normal(0,1,(4,2))
array2D
np.fliplr(array2D)
```
Similarly, if we have two rows:
```
array2D = np.random.normal(0,1,(2,4))
array2D
np.flipud(array2D)
```
For more complex cases you can also use the more general ```np.flip()``` function.
With the image, flipping a dimension just mirrors the picture. To do that we select a single channel:
```
plt.imshow(np.flipud(image[:,:,0]));
```
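As a check that ```np.flip``` generalizes the two helpers above, flipping along axis 1 matches ```np.fliplr``` and flipping along axis 0 matches ```np.flipud```. A small sketch:

```python
import numpy as np

m = np.arange(6).reshape(2, 3)

left_right = np.flip(m, axis=1)   # same as np.fliplr(m)
up_down = np.flip(m, axis=0)      # same as np.flipud(m)

print(left_right[0])              # [2 1 0]
```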
So far, you have worked with datasets that we have provided for you. In this tutorial, you'll learn how to use your own datasets. Then, in the following exercise, you'll design and create your own data visualizations.
You'll learn all about Kaggle Datasets, a tool that you can use to store your own datasets and quickly access tens of thousands of publicly available data sources.
# Kaggle Datasets
You can access Kaggle Datasets by visiting the link below:
> https://www.kaggle.com/datasets
The link will bring you to a webpage with a long list of datasets that you can use in your own projects.

Note that the list of datasets that you see will likely look different from what's shown in the screenshot above, since many new datasets are uploaded every day!
There are many different file types on Kaggle Datasets, including CSV files, but also more exotic file types such as JSON, SQLite, and BigQuery. To restrict the list of datasets to CSV files only, you need only select from the drop-down menu towards the top of the screen: **\[File types\] > [CSV]**. Please do this now, and then take the time to peruse the list of CSV datasets.

To search for a specific dataset, use the search bar at the top of the screen. Say, for instance, you'd like to work with a dataset about comic book characters. Begin by typing **"comic"** in the search window.

Find the **"FiveThirtyEight Comic Characters Dataset"**, and click on it to select it. This will bring you to a webpage that describes the dataset.

You can see the list of files in the dataset under **Data Sources**, on the left of the window. The dataset contains three files: (1) **dc-wikia-data.csv**, (2) **marvel-wikia-data.csv**, and (3) **README.md**. The first file is selected by default, and scrolling down shows a quick preview of it. You can also take a look at the second CSV file by clicking on it.
Take the time now to explore the other tabs on the page; for instance, check out the **Overview** or **OverviewV2** tab to get a more detailed description of the dataset.
# Use your own dataset
In the following exercise, you'll work with any CSV dataset of your choosing. As you learned above, Kaggle Datasets contains a large collection of datasets that you can use. If you'd prefer to work with your own dataset, you'll need to first upload it to Kaggle Datasets.
To learn more about how to do that, please watch the video below!
```
#$HIDE_INPUT$
from IPython.display import YouTubeVideo
YouTubeVideo('LBqmIfAA4A8', width=800, height=450)
```
If you're not familiar with CSV file types, note that the Kaggle Datasets platform will automatically convert any tabular data that you have to a CSV file. So, feel free to upload something like a Google spreadsheet or an Excel worksheet, and it will be transformed to a CSV file for you!
# What's next?
Visualize any dataset of your choosing in a **[coding exercise](#$NEXT_NOTEBOOK_URL$)**!
## Load necessary modules
```
# show images inline
%matplotlib inline
# automatically reload modules when they have changed
%load_ext autoreload
%autoreload 2
# import keras
import keras
# import keras_retinanet
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color
# import miscellaneous modules
import matplotlib.pyplot as plt
import cv2
import os
import numpy as np
import time
# set tf backend to allow memory to grow, instead of claiming everything
import tensorflow as tf
def get_session():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return tf.Session(config=config)
# use this environment flag to change which GPU to use
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# set the modified tf session as backend in keras
keras.backend.tensorflow_backend.set_session(get_session())
```
## Load RetinaNet model
```
# adjust this to point to your downloaded/trained model
# models can be downloaded here: https://github.com/fizyr/keras-retinanet/releases
# model_path = os.path.join('..', 'snapshots', 'resnet50_coco_best_v2.1.0.h5')
model_path = os.path.join('..', 'snapshots', 'infer_resnet50_csv_50.h5')
# load retinanet model
model = models.load_model(model_path, backbone_name='resnet50')
# if the model is not converted to an inference model, use the line below
# see: https://github.com/fizyr/keras-retinanet#converting-a-training-model-to-inference-model
#model = models.convert_model(model)
print(model.summary())
# load label to names mapping for visualization purposes
# labels_to_names = {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
labels_to_names = {0: '0_Point', 1: '1_Point', 2: '2_Point', 3: '3_Point', 4: '4_Point', 5: '5_Point', 6: '6_Point', 7: '7_Point', 8: '8_Point', 9: '9_Point', 10: '10_Point', 11: '11_Point', 12: '12_Point', 13: '13_Point', 14: '14_Point', 15: '15_Point', 16: '16_Point', 17: '17_Point', 18: '18_Point', 19: '19_Point', 20: '20_Point', 21: '21_Point', 22: '22_Point', 23: '23_Point', 24: '24_Point', 25: '25_Point', 26: '26_Point', 27: '27_Point'}
```
## Run detection on example
```
# load image
# image = read_image_bgr('000000008021.jpg')
image = read_image_bgr('Test/959.jpg')
# copy to draw on
draw = image.copy()
draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)
# preprocess image for network
image = preprocess_image(image)
print(image.shape)
image, scale = resize_image(image)
print(image.shape)
# process image
start = time.time()
result = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes, scores, labels = result
print(len(result))
print(result[0].shape)
print(result[1].shape)
print(result[1][0][0])
print(result[1][0][5])
print("processing time: ", time.time() - start)
# correct for image scale
boxes /= scale
# visualize detections
for box, score, label in zip(boxes[0], scores[0], labels[0]):
    # scores are sorted so we can break
    if score < 0.5:
        break
    color = label_color(label)
    b = box.astype(int)
    draw_box(draw, b, color=color)
    caption = "{} {:.3f}".format(labels_to_names[label], score)
    draw_caption(draw, b, caption)
plt.figure(figsize=(15, 15))
plt.axis('off')
plt.imshow(draw)
plt.show()
boxes[0].shape
model.predict_on_batch?
```
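The early ```break``` in the visualization loop above relies on the model returning detections sorted by descending score. A minimal standalone sketch of that filtering logic (with made-up scores, not real model output):

```python
# Hypothetical detection scores, sorted descending as the loop above assumes
scores = [0.91, 0.84, 0.47, 0.12]
threshold = 0.5

kept = []
for s in scores:
    if s < threshold:
        break          # everything after this is lower still
    kept.append(s)

print(kept)            # [0.91, 0.84]
```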
## Background Information
In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition.
## Questions For Investigation
**Question 1:**
What is our independent variable? What is our dependent variable?
**Answer 1:**
- Our independent variable will be the congruency of the word (congruent or incongruent).
- The dependent variable will be the time taken to name the ink color.
**Question 2:**
What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.
**Answer 2:**
- **Null Hypothesis ($H_0$)**: Incongruency of the words has no effect on, or decreases, the time taken to name the ink color.
- **Alternative Hypothesis ($H_1$)**: Incongruency of the words increases the time taken to name the ink color.
$$H_0: \mu_i \le \mu_c$$
$$H_1: \mu_i > \mu_c$$
Where,
- $\mu_i$ = Population mean of time taken to name the ink color for incongruent words
- $\mu_c$ = Population mean of time taken to name the ink color for congruent words
**Statistical Test**: *Paired one-tailed (positive) t-test*, because both conditions were measured on the same set of participants, one after the other, so the two samples are dependent and paired. We use a one-tailed test because we compare the means in one direction only, and a t-test because the population parameters are unknown.
Assumptions:
- 95% confidence level, i.e. $\alpha = 0.05$
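The same test can be run directly with ```scipy.stats.ttest_rel```. A sketch on synthetic paired data (the real Stroop measurements are loaded from the CSV below; the numbers here are made up to mimic them):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
congruent = rng.normal(14, 3.5, 24)
incongruent = congruent + rng.normal(8, 4.8, 24)  # hypothetical effect of ~8s

# ttest_rel is two-sided; halve the p-value for our one-tailed hypothesis
t_stat, p_two_sided = stats.ttest_rel(incongruent, congruent)
p_one_sided = p_two_sided / 2
```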
```
# Use inline plotting
%matplotlib inline
# Import modules
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Read dataset
df = pd.read_csv("Stroop-Dataset.csv")
# View the dataset
df.head(5)
# Print dataset description
df.describe()
# Calculate median of values
print("Median for congruent: {}".format(df['Congruent'].median()))
print("Median for incongruent: {}".format(df['Incongruent'].median()))
```
**Question 3**
Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.
**Answer 3**
*Central Tendency*
- **Mean**: Congruent = 14.05, Incongruent = 22.01
- **Median**: Congruent = 14.3565, Incongruent = 21.0175
*Variability*
- **Standard deviation**: Congruent = 3.559, Incongruent = 4.797
**Question 4**
Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
```
dataset = np.genfromtxt('Stroop-Dataset.csv', delimiter=',',dtype=np.float32)
dataset=np.delete(dataset,(0),axis=0)
plot = plt.boxplot(dataset,vert=True,widths = 0.2,patch_artist=True)
plt.setp(plot['boxes'], linewidth=2, facecolor='#1b9e77')
plt.setp(plot['whiskers'], linewidth=2)
plt.setp(plot['caps'], linewidth=2)
plt.setp(plot['fliers'], marker='x', markersize=8)
plt.setp(plot['medians'], linewidth=2)
df.hist()
plt.show()
```
From the **histogram**, it's clear that both distributions are slightly positively skewed. The mean in each case also lies near the peak of its distribution.
From the **boxplot**, it's clear that the incongruent data has two outliers, which may also inflate the mean for that condition.
**Question 5**
Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
```
df
df['Difference'] = df['Incongruent'] - df['Congruent']
df
mean_difference = df['Difference'].mean()
mean_difference
standard_deviation = np.std(df['Difference'],ddof=1)
standard_deviation
standard_error = standard_deviation/np.sqrt(len(df['Difference']))
standard_error
t_statistic = mean_difference/standard_error
t_statistic
# t_critical value at degree of freedom (24-1 = 23) = 1.714
```
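Rather than reading the critical value from a table, it can be computed with ```scipy.stats``` (a small sketch, matching the hand computation above):

```python
from scipy import stats

alpha = 0.05
dof = 24 - 1                                # n - 1 paired differences

t_critical = stats.t.ppf(1 - alpha, dof)    # one-tailed critical value
print(round(t_critical, 3))                 # 1.714
```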
Results are as follows:
- **Mean difference** = 7.965
- **Standard deviation** = 4.865 (corrected)
- **Standard error** = 0.993
- **t statistic** = 8.021
- **t critical** = 1.714
- **p value** < 0.0001 => **Result is significant** (since the p-value is less than 0.05)
Thus, the null hypothesis is **rejected**.
**Question 6**
What do you think is responsible for the effects observed? Can you think of an alternative or similar task that would result in a similar effect? Some research about the problem will be helpful for thinking about these two questions!
**Answer 6**
The lower time for congruent words may be due to habitual behavior. One part of the brain recognizes the color and another recognizes the word. When both results are the same, it takes less time to give the answer, because no further correction is required (which is necessary in the case of incongruent words).
A similar task could be one where words are jumbled so that the first and last letters stay in place, and participants are asked to type them. In most cases one can recognize a word if it is very familiar, but while typing it, people will tend to write the correct spelling (because of muscle memory) and then fix it to match the given incorrect spelling. This, in turn, should take more time.
# Read in the data
```
import pandas as pd
import numpy
import re
data_files = [
"ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"
]
data = {}
for f in data_files:
    d = pd.read_csv("schools/{0}".format(f))
    data[f.replace(".csv", "")] = d
```
# Read in the surveys
```
all_survey = pd.read_csv("schools/survey_all.txt", delimiter="\t", encoding='windows-1252')
d75_survey = pd.read_csv("schools/survey_d75.txt", delimiter="\t", encoding='windows-1252')
survey = pd.concat([all_survey, d75_survey], axis=0)
survey["DBN"] = survey["dbn"]
survey_fields = [
"DBN",
"rr_s",
"rr_t",
"rr_p",
"N_s",
"N_t",
"N_p",
"saf_p_11",
"com_p_11",
"eng_p_11",
"aca_p_11",
"saf_t_11",
"com_t_11",
"eng_t_11",
"aca_t_11",
"saf_s_11",
"com_s_11",
"eng_s_11",
"aca_s_11",
"saf_tot_11",
"com_tot_11",
"eng_tot_11",
"aca_tot_11",
]
survey = survey.loc[:,survey_fields]
data["survey"] = survey
```
# Add DBN columns
```
data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
def pad_csd(num):
    string_representation = str(num)
    if len(string_representation) > 1:
        return string_representation
    else:
        return "0" + string_representation
data["class_size"]["padded_csd"] = data["class_size"]["CSD"].apply(pad_csd)
data["class_size"]["DBN"] = data["class_size"]["padded_csd"] + data["class_size"]["SCHOOL CODE"]
```
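The ```pad_csd``` helper is equivalent to left zero-padding to width 2, which the standard library provides as ```str.zfill```. A small self-contained sketch:

```python
def pad_csd(num):
    string_representation = str(num)
    if len(string_representation) > 1:
        return string_representation
    return "0" + string_representation

padded = [pad_csd(n) for n in (1, 7, 12)]
zfilled = [str(n).zfill(2) for n in (1, 7, 12)]

print(padded)    # ['01', '07', '12']
```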
# Convert columns to numeric
```
cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']
for c in cols:
    data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")
data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]
def find_lat(loc):
    coords = re.findall(r"\(.+, .+\)", loc)
    lat = coords[0].split(",")[0].replace("(", "")
    return lat
def find_lon(loc):
    coords = re.findall(r"\(.+, .+\)", loc)
    lon = coords[0].split(",")[1].replace(")", "").strip()
    return lon
data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat)
data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon)
data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce")
data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")
```
# Condense datasets
```
class_size = data["class_size"]
class_size = class_size[class_size["GRADE "] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]
class_size = class_size.groupby("DBN").agg(numpy.mean)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012]
data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"]
data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]
```
# Convert AP scores to numeric
```
cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']
for col in cols:
    data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")
```
# Combine the datasets
```
combined = data["sat_results"]
combined = combined.merge(data["ap_2010"], on="DBN", how="left")
combined = combined.merge(data["graduation"], on="DBN", how="left")
to_merge = ["class_size", "demographics", "survey", "hs_directory"]
for m in to_merge:
    combined = combined.merge(data[m], on="DBN", how="inner")
combined = combined.fillna(combined.mean())
combined = combined.fillna(0)
```
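The two-step fill matters because a column that is entirely ```NaN``` has no mean, so the first ```fillna``` leaves it untouched and the second pass sets it to 0. A minimal sketch with a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"a": [1.0, np.nan, 3.0],
                    "b": [np.nan, np.nan, np.nan]})

toy = toy.fillna(toy.mean())   # 'a' gets its mean (2.0); 'b' has no mean
toy = toy.fillna(0)            # all-NaN 'b' falls back to 0
```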
# Add a school district column for mapping
```
def get_first_two_chars(dbn):
    return dbn[0:2]
combined["school_dist"] = combined["DBN"].apply(get_first_two_chars)
```
# Find correlations
```
correlations = combined.corr()
correlations = correlations["sat_score"]
print(correlations)
```
# Plotting survey correlations
```
# Remove DBN since it's a unique identifier, not a useful numerical value for correlation.
survey_fields.remove("DBN")
```
# Memo on images in a Jupyter Notebook

## In Markdown
The image above corresponds to the markdown code: ``
### Embedded images
You can insert an image into a Markdown cell via the ``Edit>Insert Image`` menu, or by dragging and dropping the image file from your file explorer onto the Notebook's Markdown cell.
The automatically generated syntax then has the form ````
There is no need to add the image file to the Notebook's folder, since the image is then "attached" to the notebook.
Opening this ``.ipynb`` file in a text editor such as Sublime Text 3, you can see the attached image's data in the notebook source:
````json
{
 "cells": [
  {
   "attachments": {
    "JupyterNoteBook.png": {
     "image/png": "iVBORw0KGgoAAAANSUhEUgAAAyAAAAJYCAIAAAAVFBUnAAAAAXNSR0Img................."
    }
   },
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Memo on images in a Jupyter Notebook\n",
    "\n",
    "\n"
   ]
  },
````
> **/!\ WARNING**: the file size of a Jupyter notebook with embedded images grows very quickly...
### Linked images
To display an image, start with an exclamation mark. Then give the alternative text in square brackets; it is displayed if the image fails to load, and is read by search engines. Finish with the image URL in parentheses. This URL can be an absolute link to the web or a relative path such as /dossier_images/nom_de_mon_image.jpg. After the image link, you can add a title, which is read by text-based browsers and shown by the others when hovering over the image.
Example below:
``

> **Note 1**:
>
> `https://ericecmorlaix.github.io` is a GitHub repository that contains all my published images...
> **Note 2**:
>
> There is no simple way to control the display size of an image in Markdown. You must either size the image correctly beforehand, or use suitable HTML code.
## In HTML
### Linked images
<img src="http://www.google.com/logos/2012/turing-doodle-static.jpg" alt="Turing's Device" title="Alan Turing's 100th Birthday">
> To display an image, use the single (void) **`<img>`** tag;
set **`src=`** to the image URL in quotes. This URL can be a link to the web or a local path such as "/dossier_images/nom_de_mon_image.jpg".
Then add **`alt=`**, the alternative text in quotes. It is displayed if the image fails to load, and is read by search engines.
After that, you can add a title, which is read by text-based browsers and shown by the others when hovering over the image.
<img src="https://ericecmorlaix.github.io/img/ECAM-Lycee-Notre-Dame-Du-Mur.png" alt="Logo du Lycee Notre Dame Du Mur" title="Logo du Lycee Notre Dame Du Mur de Morlaix" width="50%">
<center>Logo of the Lycée Notre Dame Du Mur in Morlaix.</center>
> The big advantage of **HTML** over **Markdown** for images is that you can very simply control the display size by setting the **`width=`** and/or **`height=`** attributes.
### Embedded images
This HTML code for a `png` image in a markdown cell produces the image below:
ivH0nV1uWHJOoCjzD667l1Tl3caoIdGoVbQ+7QHw/i5Q87PQgf21ecvVyld2w4Bmj+zEwoodLiY+5Yi5tYlh34jSwzWogRuJApehZ/L+vdxBvqCxPB551fHwvYPHOfUSUViB6N3FlyhH8YpzoxDpe1g3tNNAQGuig+0SDuc3DHjOH9iX1mtRQg+cHrLzuDk5UGlGtyjNDLl4Rg2c2exRuLAhSqs4KAmNVq7UhLc8YPd8Ge7mPOcXDzgUSwFc8zMlzaLu/6lW7AZwGGlwDHSyZavCuMnindHKYyK5rLkMNnlk19bUozwzdCSp1Ff2i/Kly+ylYpH+LJzhXe/KKdNuQ4BmuRFVHwOo40RHTWfyFuRz+24pW6U5yGhiGGuig+wyDd7shOa2TKSK/rqnESCiwcs57/OniX/E1VfMVS1VStP2HATF5SKgtsnH5GksuLc0D2rUATxXWiNw7sxgkU2ciH5lXKyU2NHhqkTF6XknCl+YNAdN2rZTsxnEaqFQD0+n5jMF2etDdxKhLbmDC6krHyj4vTBnKIugoRByuBHQFeNRUMVPOzuzeWmx+dQRPXVrJ8eokGxI7h9NRWxulOyooVhNpePCcQeJOsA90DgFZbE007AZxGqiBBmbQc5WF6QZ7xZ7cf80sZqm8tiqpADxleaozQ2aTukayPKWP7I4S6lL6i6BLw0Abn6oUF5zc8OAZIzEd7N/EaT+jFgt2YzgNDFcNTKN7chPmYptqEmji/fjzb6Jd4FW2dLCkwxC5DuwOluSaN1gy7yG+qZYYyt2sq9TZ8qzr3DMHb3jw/DbLxm8gqbfHbnGidW0PMGhadxdyGqhCAzPpPtRizrVwFpiXwX/Qx3tkLm1qWV1U0sDpqYOlyi1T4uN/2MuaS+dz7LVFB6jyAAeeVSqwnNNj9MwB/+047WoS5cRpwGkg0ECMxDFgvwzmKLATLKbHYJ+2mBcimFd87Mp+7CpD39p9Gbd+FhM3x0iIvV2dMrcQi13ZRftW/19rZTvwrLVGC4z3D3T/ZRLzlEfTgXOY9OYgXtpdymlg2GggRmKch53sw0TwDjKwn8WqNHEXYDzYUWBaMqkIsxcXJ1r33agDz0F+pGJ0/x8wB8SJKi3CidOA00CFGhhqyzOsbV/Hm//7l5z0SJAiVYtUpQo1UtlpdX/LVDat3GfF6FkE3B6nTblcTpwGnAYq0MDQ+zwTy0BbyCfPfIAL33LgWcFNLPcUOcp9zO8N5uudtInIwInTgNNABRoIAFQJ7yLL4H1e+/WdnNJR757tIYeoLvkUt52wmGsjDjwruIGVnCI2dA/zG7DHd9L+75WM4c5xGnAaSGsg9D/q9yR9z0Zo+jfw1hbSj8Ff7mO3OMbDjLd4QRsRO8nAWovJSLj3x6eDWoTH/CBOu/IvFaCqVW37oN7WYbVtDzUTI/EV4F6Ld2oXh4lYwInTgNNAhRqYysKftDDmkgpPL/e098H+PMicUaXS6BpS0pU7l6qOH5bgqRXPYOkXLf7/tfCzLqLKWXPiNOA0UJkGWg/mhJN2Y8IeB3DM6BGMby48jDkJzAQl2IPpBfuahecMdnn6PLPcYsYb/P1T/0pZpeZVD385NL06h8mySOUuEJFHf1AXr4ZylwdtMoZFSuKwBU/dlPPpOagff67BrOxn84XzOFLJ9E6cBpwGyteAgC6sB18PaFsuYNtCTmfBGePY/zKTJikPZd1GVn9vPsfeBjRldL6VVakxNmUcq7+PDyxO/XdIRiLAdOBZ/n2r7owOEj/x4BzwL+5kivwoTpwGnAbK10AmgOY8exqLFkRo2TX7j0k2r5rHkaeVcUmBqnK2Q7+oA88ylFfTQ2ew5Is+3o8M9h0ws+JEl9b0Am4wp4HtQwOtgD6yGmUlbiExEq/kU0OcqIJBAsWwd5AsWI2h8ST6XVao/l8/My1SB55D/XzF6P42GG0r7oW+qzr5TM36NA/12tz1nQaGWgP5EuxlRcaJ7lPF/DL9nM7nWYUiqzr
1PJ4f08z6iyx8B+xdSbh+Hu1KzHXiNOA0UIUGciXYA+ss/kVdTOmqYuhhd+qwDhgV0/ZUlo1voV+0+zMN5r/AzOuk7Z5i57m/Ow04DeTXQACgCu4oR/NNi/+D7Q04pZ1tGjwzb3+MxDkWphrsgQbzCwt3xIn+wX1JnAacBsrXwAx6vmyxV8SJZrPKlz/YMD1juwHP8P50sGySwT8TOB1YBfaeCPbe2RWSyg7T++6m7TRQsQam09PmYR/0MN+cQ9tdFQ80zE/c7sAz83510P03Br4C5kQw75n0A/HQbNp+N8zvq5u+00BdNBCUc/6jwfyPTtpq3pSuLpOu06DbNXhmbes/myaV9UQu+3Ewj4C/0GIf7WLKM3XSvxvWaaDhNTCDJw+BpmkWZsrdBf6/uO/EduTzLOcJncnyvS3Jo33sUQY+b2CMxTxusIt8/CfX8uriBZyeLGdMd6zTwHDSQIzEZ4G/BU4EImB+BfZOFyf46C46y7OEJ3o6iw/wiBwB9giDmWJhCtiEwSTALvOxywze8jjRvhKGc4c4DTSUBs5l6a5N2MMN/hHp1r1GwPkEsNDiPdLFYY811IQbZDIOPCu4EVN5pbWVP7dZzGEWMwmsKis+BSgh/2mwz4B51sM+twfrn5/F0VvVCFdwWXeK00BVGvgGi3duJTLBYA+xmIkGc6jF6rndZGG5SfU/SvYkGbFkLp+sWX/zqibdwCc78KzRzTmNuyI7cdChFn8i2EOACWA+YeBgi33dYl40GIHrnyz+ywbzShMjXrmRiSpVc+I0ULUGLuC5nZOs/wsfs5+F/Qxmf4M9wMLHgAPTZB96Du0LFtSy+PkI9rnZtL9c9cW3wwEceNb5ps/Ceu+Q+HgSc1DwAB+oXkxgDwD2N9Br4XWwK8BbAfYNi/emR//KJJGV/Xhv3cLkguS0dV6CG36INHAUC5v2Z9QurZhdfMyuBrMb2N0sZnewu5s0rdueFvYE9gLzpsW+abBvGMwKCyuA1z3sa9Dy6hw+/e4QLWWbvKwDzyG+rdN5cvcILfv62H097D4+7G2we4OnL8SeYPXlGGXgbbDvGsy7NpWfinrY/9liVnuw2uC/Z/HXWCJr+hm95iY+8cEQL227vvz5PDMaPhwNkdE+/hhLZIxP31gPb4zFGwv+OCD1SXNfMt7CeIPZEeyOYHay2H4Dq4HVNrjfBlbpY7GrwLxj8d9tovXt9bz1znyODQk5tmvdD9biHXgOlqaruE6MxEiPyG7Qv1sSfzdo2gVs6uPh7WTxdyb1pSP8qOXBWLDrUu1pYB3YD8B8AHa9xaz3QD8/BH8DsAG8DQZ/o8VsNJhN4G8C22vxei2RXg+7OYndbKEPTJ9+trCxvw/TPxKvfy0jkhHGJj9kgz+Rd304yp+FsYA+ZcssZnnPcLnZkR5vL0aYVxnptbDa62dTpIlWL0KLl2RzxKc/4tMSaU39bI70s
grkC7GVxolc3gj4EnhH8O8HeuRfrrwvr3GMkPgtoWx1ayGsM5n/tybqbatnSYxrdk5swF9tUgrWJ9+PPv4n2gmlB+fTWwZIOQ0T63dXC5jd44tqH+KZSkLSFLiRhhFhJ9aLwU1WMIvAqLSyplDYEz428d/V8jvl/Gcn5tQBPWZ4/Ar6dsYhGsjxzzS+cqtwd8oEqi6PqwKMGbTjwjNFzGVgRGmS3GvgQzI/jtMkH5KQCDXTQfZzBnAsop3B8wC250GJvrnfqTgXTJUhrehDQl+IeAy9Z+AxwdL3aGs+k+1CLOdfCWWBeBv9BH++RubQ9Wcoa0sDpKUFb1Tgp8fE/fI8/XnkPX687s1OYJN/Luktv5QuLapwkX4oKKj2mFpZnpdeu6LyGAc8YiZEWFph0MqteDo8oJy/4vR3MF9NWKA8aOC1OdENFK94OTzoNG9mJnrl2S3+atqZeqA4Dt7xH2/QFmILlf4Olvoy2xmcowNVF9OF0X6Qle/ThqR/9hj4ip9/CZPkS6yIxEseA/TKYo8BOsJgeg3naYl+IYF7xsSv7sasMfWv3Zdz6WUzcHCOhtKC9syfk0/fuXI5QzmpdJcagJMnXYw0OPCvVaoyeh9IPKs94RC6cw+QtOgXOZNlRPkk51CeCebjelGqVrqPRzjuNu1rG87GHDKgscS3YHzXR+ssb+eSK8/nDvv1sOgOMtjNqlfHbtbx83AJO3zzU61DnT4+W36hlhYGzlIcqIuUkfXElgYN9eAdaOjbRu2cfzS+q5UeSDV/rJ/If1dLY5Vq7yE087GQ/9fx5BxnYz2IVoFAp4Hiwo8C0ZL6QsseJE627seLAc/Ce3LrfzFKWEqPnp2AvVk5aH96X8lGlncvSXZvxxYryaTBXxWlTCouTAhroIHG7yhtlxfv0f20uhyvSvoUEPd7lb2y3cEcXUeUTDqlkMNJfYDFnJtm4KELr9eIDBXvVKPou30DrRIv/bxZe9OBdCyekWyC3hXXhg76GobY8w237+7x+xZ2crO9KPZLk66FXZ3mWq9UZLD7CEpFvBktkcheTC6YSdLBskiGpTowYkkd2cvgT5V5zezl+Jj0dPlZAssJj8xFzOPLNfGufyaK9fVqky309TGwObV1Drac8jPQDRMqZ3J/p54f7BqszZz7d5PN5ruLpq+/jvLCHe91UO0hJ8vWYvwPPcrUao+dusCdDqsugFFhUPupMaO6J03ZK0RO2wwNiJJqB14E9LPbULtpVqldQOug+xWDkTxThw1/EifYVO6fef0/zfq6VRXktKR9j5KSbOCzFLJQNnrXkAK1mXUG0/TqwO4Bds4In55UYba/msqlzHXhWrcKSBxjSbfv59Ezox6pccN1mPtj9Fo5Wl7+iMpWFrS2MUc7b2CbMITfS9lzRk7azA9KthvlXBdi6iP5tqcvvIPFAELQT8/q1pZ5Xz+Myat2fitN2pQJHwbY+ZkAUbEsMRmxC/2Thylqwz1e7nnD7PNg9jNy2vdo7V/r5QwqeHSz5vsETwevsOFEV8WfKx4FrAsJVBYq2kBiJG1W2ZvEv72KKKgckamGgJNiTAZEDPAz8BNA2v9TcroLXzaPaplRRCcgKPi+w+IrdBZ1zYkBeoEwC5cvNBuZm0JEVG0PpXMoJPDDFggbrwhNiJFTxcZiH+bs5tN2fNZDSfX4eEMOqDHBAZtJzgo9VfuDSONHsZOhi8xmUv6uVyFskzrWY2QJOS/LraTLlcX/dDy/VI2BU7sI+IkPue3YeR+jZLrUNR7mXyv5e6FmfVGdKuqrmmOdkt20vR6sxEsqdO9xgj+mkPZMeX2SrsnpUcXFOwA69xdAz6P6Sxagme3GcqMBgQpBMfXjWHPTQClgEpMWk6HVzDKAcRPWcUfBK7RIUbCmW0Czg1DkC9my5KXgBDABhnknr5aC8QX0xBbjiyEylbwXEGk+DfSNOu14ooehleWwAnCpAUBfLrYgxYnSvALOPxT+0iynFEruL6bSmf5fF
uZLRaionjsZXLP7JjTZHLXgmy/b3Scp/P+4l/v93f8tlqm6pio+0mCLDklBlKDzFbScs5loRv9SDkq7YVCr5uwPPUrV2Hs+PaWL9OlVgdBFVQX8o2SVqeUu/Okj0Gmjp5pa9l3G9GKK/GoCSqjJ6gSkBUAhU1AMmb8AkR2lcKSVnIbO2yGslpbYlUEte+RYFXLKUXwpqhdVnW0GFWACI+axlNbjSl1HrlWxRdhaj5wKw14O5OU6bLOFQRJagAJIIYiWq3tmKHDZGz01gzwVzYZw2VWQ0jMhl08yYTgNt6nvUyWFy+zSkfOSbV++MvmcjtNxV/UTtJJsucBiQwGXxFVmc6f9MxQ/0HVC+qYu2V6/0nCMM2ba9gyVHGrzHDSzpJJppLSpJXmlLIkG4KKDPz1k3O4PEYgtTlnHzt7q5QbRqYq+RNdefsdoQqMRMU6j+tuTrZoytKhLR/qsxlwBJNbXF2hKEBBBiDToTyPTXhuOJWUdgn89SkTWpaLi219HgRTFQs9tB4tZ0i18T69oyai4dqL/LrwNrPid4Bi0r4qKC6yIqy7+hRAn0mlCta9zrscipLPxJEyMv9IhkV8zV43Lvg/15EHiVy2V0nSjp6jH30PJUfreo+UKRAZGJU9n/zjWXUo6pdA0DQe0hA88ZLDnb4olm6/Y4UQFOtnwC0Jta27OcoBcjof8/6ylu/dVirpOvMhfTdsi0ojQc9YYpVkFT9Lo55io9ymKU+6AYeIbkBbI8VWqabV2KAEJb+lNLoNQSrZzqnrWmAfCM0d0NRqD6mThRuRKyJeSOzGN5JvQyexJsIk57e6VPmTsvpYHWv+SkE3bl4L0P4JjRIxivLIiUWIxYnfQyU0vkEttDmOUedqCqyscoQX9Hims0AAAUgElEQVSSl+px1PTqHCaL9Fes83rOZETI961AbC1q2+t5S0PwrOc1qh1b8ZmhB880OQU/s3BNF1H9ni3avou/sdC2/WoD33mGu554nJ9q65uL71AWobae2rqXwqhS9Lo55qrtu7gTRfNWDDzD8WUR52qDGvayyQlsWdcOWzToC5Npeb5rYFeP5t3n8Gk1W8uWsOVDzmuoksen7x0Lq7qIygp2Up0G9FyETEnrwx3FuTw2u5lRx/WxceHNfE6cA+WIrEr5zjNFoKqKJ/1NkhmkGi7gmW15lqOTwTh26MGzg8T31YLBwA87ieZKHg5BJmdQQ1qaQeIHFr7/NHe9tIifPpIHHENg0ynlgGfe6xYATznoBwI3ee5kQeAKfK96aZQDngq86a2Y+oLESMjiiOxFWyQPB6bmcAmgPuZbtcNVNHslPbJmk3Gi2V/QwXhAt8VrCEDlgxzQ53n8/vYmWvdKsnnVEmb/0x+Yv1X1V4WK0P2Xfz+z9UT4pS8pl7rC61ZzmgsYlaq9EsCzGMgMgOez3P3s7/nx7/KAY7mWZ9Hr5lhjTgswjy6UniR+xnzgWI7lGboAlACfYuwRCciO9OjL0x8nOrBFzJqL3BdKq8rbUCxGQgnyTYNRj13qM7MNHNeqbbwswy9xVfv+/PXtBq8V7IZe1v3mVr4oNrFSRRZspm9foCyLU9antukl5UyXerFBOM6BZ6lKLmHbLhBT4CjvF7yDROa2XW9asW2/nzUHRaZvB8TQVIrPs+h1C4Cn/FbFaMdCi/okQMQX2fIN4Lsl+jxD8FRP6gG/cAmWp/SgeeR0MTjLs9SnuPLjZrD0uxb/ewashR3ALIrTpnsyJKLUqiR9+4E3yaT8qB+JTflY/Rxl01sfW2jyluSjEZpfC/yy2Yc68Cz1zpcQMCoKYmHAaDk337uEGz6WB2gVdVRUWttapfQUk6LXrRI8w4CUgmHZASNFkZUtoEi85pEqQywgOcGzg0Qxn2dB8HQ+z2KPSHV/D0pnRXQsN4usxVcN7OzRsv9sPrWmutHLO3sqy8a34H8L7CBu582szXj/mkUn6MCz1FtXIFUpHELgIV9o3qhzmKqUYPYlS5mnqHx2
fqS2MqrAUQL7cSVGNItetwB4Li3Buh0X5HBqq5+de6pE/zvSke6S/LMheKoyaeDFUEK0XeApirqcAB3DRdtLfY4rOa6DxEUG+y1DZKrFf8BgzrHYuQbmdRLNZGmvZPiyzsnMRVVak4FH05bmRyJL1GKCHNLM/7fLs48tfHGjXNSgRcZWXBYOPEu9cwWS5MMhCqbT6KAwSf4/mHXwy9yvck3lPypyr2R1+fuUR6ktrYJJSgEq5a1e9Lo51pgzZaiALqYFAKo5Kz9VRBxqM6EXgFJXBKryixaT0CWhZPsBd0GBPM9wPLkGlJCf0yXS6HmexZQynP4eI/HnJJsP9hi5SxeTXhjMuQdVUGGQ6qtxosr/Df2yykvNDhbKx/phjjnmOjbnUk5lwak7cYCY9vGIHJCxhXfgWc7Nz1OeqWR15X+q/DAUdb4T+A2UPeYoz1R0/NYczbNUMy6wKpZHV9J1s9anpHbV3QvsQlGvFFnAapSVT/SwKaE/u55fx6vmXOTEhfo16SWhvj4pdv1AlJYiS/LxAhVGSpIXQIcVRjpV85dlPnC9Rq4wKuf5Gg7Hxuh5oQmOv5E2pdoNqoT197I440SVF6xsAGUF1FWm8ui8FkYfuJLFZ9zPBQp2CpQdeJaj9TzEICWBWA5ikOzmWYoW68aoYVVmyka+KZZ03RqBp4YRgCrAJUDT9lvJ7Ep4vyeoGCqkyoLgWaC2vUTwbNza9nKer+Fw7AwS/+lhLptNm7JFBlU+2rKnttBqUJeZi6rAa2Y0X3OTJZqrUkrWaPaxeddyJvf9aAx7n/k2y6//DdPVgFDfTwee5dx9R0lXjrbKO7YIq1LewYYDq1J5mmjso2P0zPfx/30u7bcN9kxD8EzSe8U8/ios2BARTF0JTMLr/pnnb7mHswXaMhwUm1CMY4sqnsHWSTnXG7LyzHCSjgy5nNtV+rHbEp9n6asefkem853tDnHaLxvs2YcgtpkPrryFox8ClDuqcs66Ssg5uornrrmXrytd76mgnbEDz3I079pwlKOt0o/dVpjkS1/x8Dyyg56vGuy0ONETBnsFIXgONmFz2KRuBb87+yH+p7qNivlJLiwHnuU+BK4BXLkaK+344d7DqLRVDu+jpvPUPh79S+O0DTqHwFCDZxZh81QHnhU+y671cIWKK3LacO2eWR9tNOaoMRLP+JhvzKWt7lvmTA0MNXg6y7NGz2OMxEgLC4L+OSq+eETtctPD23YwqbQc9eQxcFqcaIo13UlhDQzXvu3b032N0X0NmDVxosVKe2uqlqEGzz/x0PRH+J5StJzPsxZ3NkbPZWBFkJCdEvEhmB/HaVNJo5MyNCCykJ3omWtBW6NQ/FSeciAGbnmPtukLMMX4Tsu4sju0FA3MYOnnLfbaOG1bVfGUcn6lx1QBnsoTVnT8UNLPVCnFJwPTDH2edQTPsFgmzL8OiXNWVKqrXOcNebQ916ROY9GI8TQfb/AmW+zfQeSna9l4zwKOLJQ4Xku9bJNjddB9nMGIN/LogPdRKSkLLfbmLtoVbXUyRBqIkfgv0RnGiYoYeVCkQvAU0bIKPARMpbad2WI9dQZPVeop/SmbZEXpUJqzuj7URBoSPMOVddAt1qTzIfKdOIepfttJjTQg5qQ8XJ81uoIbphwNzCDxTQufixMVx+qgSNgwrpd1d9/KF9ROpliHT1XUqQeWynoljQae6oWmnWkucnXNV24R5ZHWZHfV0OAZvI0/CTYep13lmU6cBrZZDcRIvOZjzp47SNVGFVieYcNDdZk9IGgTU6xzwlb3q46WZ0iUI4Y1kd8I6CUiQReoiqW+FLaykp6xhgVPdUlsYYxYzvcD82KctoNLWpE7yGlgmGogIGQ5M05UjFd1lwrAM5xTCFIitCkbPMMk+dW8cNXdnPVADQNGIZG4tu3qaBuSmIR9zHYsxA9crsIbFjw7SFxrsFPByDm93sIlXUTFQuTEaWCb1UCM7gcN5rFOoiJwqauE4LmR966ezzHi
vH0nV1uWHJOoCjzD667l1Tl3caoIdGoVbQ+7QHw/i5Q87PQgf21ecvVyld2w4Bmj+zEwoodLiY+5Yi5tYlh34jSwzWogRuJApehZ/L+vdxBvqCxPB551fHwvYPHOfUSUViB6N3FlyhH8YpzoxDpe1g3tNNAQGuig+0SDuc3DHjOH9iX1mtRQg+cHrLzuDk5UGlGtyjNDLl4Rg2c2exRuLAhSqs4KAmNVq7UhLc8YPd8Ge7mPOcXDzgUSwFc8zMlzaLu/6lW7AZwGGlwDHSyZavCuMnindHKYyK5rLkMNnlk19bUozwzdCSp1Ff2i/Kly+ylYpH+LJzhXe/KKdNuQ4BmuRFVHwOo40RHTWfyFuRz+24pW6U5yGhiGGuig+wyDd7shOa2TKSK/rqnESCiwcs57/OniX/E1VfMVS1VStP2HATF5SKgtsnH5GksuLc0D2rUATxXWiNw7sxgkU2ciH5lXKyU2NHhqkTF6XknCl+YNAdN2rZTsxnEaqFQD0+n5jMF2etDdxKhLbmDC6krHyj4vTBnKIugoRByuBHQFeNRUMVPOzuzeWmx+dQRPXVrJ8eokGxI7h9NRWxulOyooVhNpePCcQeJOsA90DgFZbE007AZxGqiBBmbQc5WF6QZ7xZ7cf80sZqm8tiqpADxleaozQ2aTukayPKWP7I4S6lL6i6BLw0Abn6oUF5zc8OAZIzEd7N/EaT+jFgt2YzgNDFcNTKN7chPmYptqEmji/fjzb6Jd4FW2dLCkwxC5DuwOluSaN1gy7yG+qZYYyt2sq9TZ8qzr3DMHb3jw/DbLxm8gqbfHbnGidW0PMGhadxdyGqhCAzPpPtRizrVwFpiXwX/Qx3tkLm1qWV1U0sDpqYOlyi1T4uN/2MuaS+dz7LVFB6jyAAeeVSqwnNNj9MwB/+047WoS5cRpwGkg0ECMxDFgvwzmKLATLKbHYJ+2mBcimFd87Mp+7CpD39p9Gbd+FhM3x0iIvV2dMrcQi13ZRftW/19rZTvwrLVGC4z3D3T/ZRLzlEfTgXOY9OYgXtpdymlg2GggRmKch53sw0TwDjKwn8WqNHEXYDzYUWBaMqkIsxcXJ1r33agDz0F+pGJ0/x8wB8SJKi3CidOA00CFGhhqyzOsbV/Hm//7l5z0SJAiVYtUpQo1UtlpdX/LVDat3GfF6FkE3B6nTblcTpwGnAYq0MDQ+zwTy0BbyCfPfIAL33LgWcFNLPcUOcp9zO8N5uudtInIwInTgNNABRoIAFQJ7yLL4H1e+/WdnNJR757tIYeoLvkUt52wmGsjDjwruIGVnCI2dA/zG7DHd9L+75WM4c5xGnAaSGsg9D/q9yR9z0Zo+jfw1hbSj8Ff7mO3OMbDjLd4QRsRO8nAWovJSLj3x6eDWoTH/CBOu/IvFaCqVW37oN7WYbVtDzUTI/EV4F6Ld2oXh4lYwInTgNNAhRqYysKftDDmkgpPL/e098H+PMicUaXS6BpS0pU7l6qOH5bgqRXPYOkXLf7/tfCzLqLKWXPiNOA0UJkGWg/mhJN2Y8IeB3DM6BGMby48jDkJzAQl2IPpBfuahecMdnn6PLPcYsYb/P1T/0pZpeZVD385NL06h8mySOUuEJFHf1AXr4ZylwdtMoZFSuKwBU/dlPPpOagff67BrOxn84XzOFLJ9E6cBpwGyteAgC6sB18PaFsuYNtCTmfBGePY/zKTJikPZd1GVn9vPsfeBjRldL6VVakxNmUcq7+PDyxO/XdIRiLAdOBZ/n2r7owOEj/x4BzwL+5kivwoTpwGnAbK10AmgOY8exqLFkRo2TX7j0k2r5rHkaeVcUmBqnK2Q7+oA88ylFfTQ2ew5Is+3o8M9h0ws+JEl9b0Am4wp4HtQwOtgD6yGmUlbiExEq/kU0OcqIJBAsWwd5AsWI2h8ST6XVao/l8/My1SB55D/XzF6P42GG0r7oW+qzr5TM36NA/12tz1nQaGWgP5EuxlRcaJ7lPF/DL9nM7nWYUiqzr1
PJ4f08z6iyx8B+xdSbh+Hu1KzHXiNOA0UIUGciXYA+ss/kVdTOmqYuhhd+qwDhgV0/ZUlo1voV+0+zMN5r/AzOuk7Z5i57m/Ow04DeTXQACgCu4oR/NNi/+D7Q04pZ1tGjwzb3+MxDkWphrsgQbzCwt3xIn+wX1JnAacBsrXwAx6vmyxV8SJZrPKlz/YMD1juwHP8P50sGySwT8TOB1YBfaeCPbe2RWSyg7T++6m7TRQsQam09PmYR/0MN+cQ9tdFQ80zE/c7sAz83510P03Br4C5kQw75n0A/HQbNp+N8zvq5u+00BdNBCUc/6jwfyPTtpq3pSuLpOu06DbNXhmbes/myaV9UQu+3Ewj4C/0GIf7WLKM3XSvxvWaaDhNTCDJw+BpmkWZsrdBf6/uO/EduTzLOcJncnyvS3Jo33sUQY+b2CMxTxusIt8/CfX8uriBZyeLGdMd6zTwHDSQIzEZ4G/BU4EImB+BfZOFyf46C46y7OEJ3o6iw/wiBwB9giDmWJhCtiEwSTALvOxywze8jjRvhKGc4c4DTSUBs5l6a5N2MMN/hHp1r1GwPkEsNDiPdLFYY811IQbZDIOPCu4EVN5pbWVP7dZzGEWMwmsKis+BSgh/2mwz4B51sM+twfrn5/F0VvVCFdwWXeK00BVGvgGi3duJTLBYA+xmIkGc6jF6rndZGG5SfU/SvYkGbFkLp+sWX/zqibdwCc78KzRzTmNuyI7cdChFn8i2EOACWA+YeBgi33dYl40GIHrnyz+ywbzShMjXrmRiSpVc+I0ULUGLuC5nZOs/wsfs5+F/Qxmf4M9wMLHgAPTZB96Du0LFtSy+PkI9rnZtL9c9cW3wwEceNb5ps/Ceu+Q+HgSc1DwAB+oXkxgDwD2N9Br4XWwK8BbAfYNi/emR//KJJGV/Xhv3cLkguS0dV6CG36INHAUC5v2Z9QurZhdfMyuBrMb2N0sZnewu5s0rdueFvYE9gLzpsW+abBvGMwKCyuA1z3sa9Dy6hw+/e4QLWWbvKwDzyG+rdN5cvcILfv62H097D4+7G2we4OnL8SeYPXlGGXgbbDvGsy7NpWfinrY/9liVnuw2uC/Z/HXWCJr+hm95iY+8cEQL227vvz5PDMaPhwNkdE+/hhLZIxP31gPb4zFGwv+OCD1SXNfMt7CeIPZEeyOYHay2H4Dq4HVNrjfBlbpY7GrwLxj8d9tovXt9bz1znyODQk5tmvdD9biHXgOlqaruE6MxEiPyG7Qv1sSfzdo2gVs6uPh7WTxdyb1pSP8qOXBWLDrUu1pYB3YD8B8AHa9xaz3QD8/BH8DsAG8DQZ/o8VsNJhN4G8C22vxei2RXg+7OYndbKEPTJ9+trCxvw/TPxKvfy0jkhHGJj9kgz+Rd304yp+FsYA+ZcssZnnPcLnZkR5vL0aYVxnptbDa62dTpIlWL0KLl2RzxKc/4tMSaU39bI70sbGpheZIP7bJo6kpmfppxeYjgt9mQyT1uyWpFrzNFtPikWzRT4Np8fF3MERaTIrolx18aPVAv7daTKvFjvBSv9sR4I0AfySYEcBIAyNtur1vn4EPbYpdyKw3WLlm1tu07j8wA/dDNeFmnYfVPXo/Ce8b7NoIybU+zWviRPX/ThpUAw48G/TGVDut07CRsTwxrpkRY/vpHSvLpxlvdBIz2uCPNjDKYkbpS5/++CMtJgAG0wq2FcwOAhEf9LMFjAAoBTppejHbDEYNvARI+qmPl/ERenrBQ5b1rOlPNoWsJg2wW30M+KmDML7B6nefLT9KF9PHB5sErz/9Ux8jq63fBh+DUdCuD/zgp37Xx9sMfp/FbDaQ+tjUS8L0eimXSoopvddie/VSCV8sSfxNEZo3+vRvbMJsTGI2pl9CbIgT1U8n27gG/hu+PevDTvR2VQAAAABJRU5ErkJggg==" />
## With a bit of Python:
### Displaying a series of images
```
from IPython.display import HTML
chaine = '<h3 style="color:darkorange;text-align:center;">And here is the result:</h3>'
nombre = int(input('Enter a positive integer: '))
chaine = chaine + '<img src="https://ericecmorlaix.github.io/img/Jupyter_logo.svg" style="display:inline-block">' * nombre
HTML(chaine)
```
### Generating code in the SVG language
```
from IPython.display import SVG
chaine = '<svg width="600" height="80">'
for i in range(10):
    chaine = chaine + f'<circle cx="{(30+3*i)*(10-i)}" cy="{30}" r="{3.*float(i)}" fill="purple" stroke-width="2" stroke="grey"></circle>'
SVG(chaine + '</svg>')
```
> Here are a few tutorials for discovering this language, and more if you're keen:
> - https://www.alsacreations.com/tuto/lire/1421-svg-initiation-syntaxe-outils.html
> - https://developer.mozilla.org/fr/docs/Web/SVG/Tutoriel
> - http://svground.fr//
> - https://www.w3.org/TR/SVG/
You can also, quite conveniently, make a vector drawing on [draw.io](https://www.draw.io/) and retrieve its SVG code via the menu `File > Embed > SVG` (uncheck all the option boxes):
````html
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="385px" height="85px" viewBox="-0.5 -0.5 385 85"><defs/><g>
<ellipse cx="40" cy="40" rx="38" ry="38" fill="none" stroke="#ff8000" stroke-width="4" pointer-events="none"/>
<path d="M 100 80 L 180 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 200 80 L 280 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 280 80 L 280 10" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 300 80 L 380 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 380 10 L 380 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 300 80 L 380 10" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
</g></svg>
````
And embed it, adapting it if needed, like so:
```
from IPython.display import SVG
SVG('''<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="385px" height="85px" viewBox="-0.5 -0.5 385 85"><defs/><g>
<ellipse cx="40" cy="40" rx="38" ry="38" fill="none" stroke="#ff8000" stroke-width="4" pointer-events="none"/>
<path d="M 100 80 L 180 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 200 80 L 280 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 280 80 L 280 10" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 300 80 L 380 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 380 10 L 380 80" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
<path d="M 300 80 L 380 10" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/>
</g></svg>''')
```
> Triple quotes ``'''`` (or triple double quotes) let you write a single string across several lines.
> Here are a few tutorials for getting started with the [draw.io](https://www.draw.io/) application, and more if you're keen:
> - http://www.ac-grenoble.fr/webeleves/IMG/pdf/tutoriel_draw.io_imprim.pdf
> - https://epn.salledesrancy.com/wp-content/uploads/2018/12/tutoriel-Draw-io-2018.pdf
> - The [YouTube](https://www.youtube.com/channel/UCiTtRN9b8P4CoSfpkfgEJHA) channel
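A minimal check of the multi-line string behavior noted above (the variable name is just for this sketch):

```python
# A triple-quoted string keeps its line breaks: one string, several lines.
fragment = '''<svg width="100" height="100">
<circle cx="50" cy="50" r="40" fill="none" stroke="#ff8000"/>
</svg>'''

print(fragment.count('\n'))  # 2 -- the string really spans three lines
```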
## Remix:
In fact, SVG is an HTML5 tag, so we can combine what we have just seen, for example:
```
from IPython.display import HTML
chaine = '<center><h3 style="color:darkorange;">And here is the result:</h3><br>'
ga = '''<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="90px" height="103px" viewBox="-0.5 -0.5 84 103"><defs/><g><ellipse cx="42" cy="42" rx="40" ry="40" fill="none" stroke="#ff8000" stroke-width="4" pointer-events="none"/><g transform="translate(30.5,83.5)"><switch><foreignObject style="overflow:visible;" pointer-events="none" width="23" height="17" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: inline-block; font-size: 16px; font-family: "Comic Sans MS"; color: rgb(0, 0, 0); line-height: 1.2; vertical-align: top; width: 24px; white-space: nowrap; overflow-wrap: normal; text-align: center;"><div xmlns="http://www.w3.org/1999/xhtml" style="display:inline-block;text-align:inherit;text-decoration:inherit;white-space:normal;"><font color="#9933ff"><b>GA</b></font></div></div></foreignObject><text x="12" y="17" fill="#000000" text-anchor="middle" font-size="16px" font-family="Comic Sans MS">[Not supported by viewer]</text></switch></g></g></svg>'''
bu = '''<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="90px" height="23px" viewBox="-0.5 -0.5 85 23"><defs/><g><path d="M 2 2 L 82 2" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><g transform="translate(30.5,3.5)"><switch><foreignObject style="overflow:visible;" pointer-events="none" width="22" height="17" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: inline-block; font-size: 16px; font-family: "Comic Sans MS"; color: rgb(0, 0, 0); line-height: 1.2; vertical-align: top; width: 23px; white-space: nowrap; overflow-wrap: normal; text-align: center;"><div xmlns="http://www.w3.org/1999/xhtml" style="display:inline-block;text-align:inherit;text-decoration:inherit;white-space:normal;"><font color="#9933ff"><b>BU</b></font></div></div></foreignObject><text x="11" y="17" fill="#000000" text-anchor="middle" font-size="16px" font-family="Comic Sans MS">[Not supported by viewer]</text></switch></g></g></svg>'''
zo ='''<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="90px" height="93px" viewBox="-0.5 -0.5 85 93"><defs/><g><path d="M 2 72 L 82 72" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><path d="M 82 72 L 82 2" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><g transform="translate(29.5,73.5)"><switch><foreignObject style="overflow:visible;" pointer-events="none" width="24" height="17" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: inline-block; font-size: 16px; font-family: "Comic Sans MS"; color: rgb(0, 0, 0); line-height: 1.2; vertical-align: top; width: 25px; white-space: nowrap; overflow-wrap: normal; text-align: center;"><div xmlns="http://www.w3.org/1999/xhtml" style="display:inline-block;text-align:inherit;text-decoration:inherit;white-space:normal;"><font color="#9933ff"><b>ZO</b></font></div></div></foreignObject><text x="12" y="17" fill="#000000" text-anchor="middle" font-size="16px" font-family="Comic Sans MS">[Not supported by viewer]</text></switch></g></g></svg>'''
meuh = '''<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="90px" height="93px" viewBox="-0.5 -0.5 85 93"><defs/><g><path d="M 2 72 L 82 72" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><path d="M 82 2 L 82 72" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><path d="M 2 72 L 82 2" fill="none" stroke="#ff8000" stroke-width="4" stroke-miterlimit="10" pointer-events="none"/><g transform="translate(17.5,73.5)"><switch><foreignObject style="overflow:visible;" pointer-events="none" width="48" height="17" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"><div xmlns="http://www.w3.org/1999/xhtml" style="display: inline-block; font-size: 16px; font-family: "Comic Sans MS"; color: rgb(0, 0, 0); line-height: 1.2; vertical-align: top; width: 49px; white-space: nowrap; overflow-wrap: normal; text-align: center;"><div xmlns="http://www.w3.org/1999/xhtml" style="display:inline-block;text-align:inherit;text-decoration:inherit;white-space:normal;"><font color="#9933ff"><b>MEUH</b></font></div></div></foreignObject><text x="24" y="17" fill="#000000" text-anchor="middle" font-size="16px" font-family="Comic Sans MS">[Not supported by viewer]</text></switch></g></g></svg>'''
HTML(chaine + meuh + ga + zo + ga + bu +'</center>')
```
## Resources:
- https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html
Borrowed from [Rebecca Weiss](https://stanford.edu/~rjweiss/public_html/IRiSS2013/text2/notebooks/tfidf.html)
```
import numpy as np
```
## Basic term frequencies
First, let's review how to get a count of terms per document: a term frequency vector.
```
#examples taken from here: http://stackoverflow.com/a/1750187
mydoclist = ['Julie loves me more than Linda loves me',
'Jane likes me more than Julie loves me',
'He likes basketball more than baseball']
#mydoclist = ['sun sky bright', 'sun sun bright']
from collections import Counter
for doc in mydoclist:
    tf = Counter()
    for word in doc.split():
        tf[word] += 1
    print(tf.items())
```
Here we've introduced a new Python object called a `Counter` (available since Python 2.7). Counters are neat because they are built for exactly this kind of task: counting things in a loop.
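As a side note, `Counter` can also consume an iterable directly, which collapses the counting loop above into a single call (the sample sentence is the first document from `mydoclist`):

```python
from collections import Counter

doc = 'Julie loves me more than Linda loves me'
tf = Counter(doc.split())  # count every token in one call

print(tf['loves'], tf['me'], tf['Julie'])  # 2 2 1
print(tf['basketball'])  # 0 -- missing keys count as zero, no KeyError
```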
Let's call this a first stab at representing documents quantitatively, just by their word counts. But for those of you who are already tipped off by the "vector" in the vector space model, these aren't really comparable. This is because they're not in the same vocabulary space.
What we really want is for every document to be the same length, where length is determined by the total vocabulary of our corpus.
```
def build_lexicon(corpus):
    lexicon = set()
    for doc in corpus:
        lexicon.update([word for word in doc.split()])
    return lexicon

def tf(term, document):
    return freq(term, document)

def freq(term, document):
    return document.split().count(term)
vocabulary = build_lexicon(mydoclist)
doc_term_matrix = []
print('Our vocabulary vector is [' + ', '.join(list(vocabulary)) + ']')
for doc in mydoclist:
    print('The doc is "' + doc + '"')
    tf_vector = [tf(word, doc) for word in vocabulary]
    tf_vector_string = ', '.join(format(count, 'd') for count in tf_vector)  # 'count', to avoid shadowing freq()
    print('The tf vector for Document %d is [%s]' % ((mydoclist.index(doc)+1), tf_vector_string))
    doc_term_matrix.append(tf_vector)

# here's a test: why did I wrap mydoclist.index(doc)+1 in parens? it returns an int...
# try it! type(mydoclist.index(doc) + 1)
print('All combined, here is our master document term matrix: ')
print(doc_term_matrix)
```
What you've just seen is the creation of a feature space. Now every document is in the same feature space, meaning that we can represent the entire corpus in the same dimensional space without having lost too much information.
### Normalizing vectors to L2 norm = 1
Once you've got your data in the same feature space, you can start applying some machine learning methods; classifying, clustering, and so on. But actually, we've got a few problems. Words aren't all equally informative.
If words appear too frequently in a single document, they're going to muck up our analysis. We want to perform some scaling of each of these term frequency vectors into something a bit more representative. In other words, we need to do some vector normalizing.
We don't really have the time to go into the intense math of this. Just accept for now that we need to ensure that the L2 norm of each vector is equal to 1. Here's some code that shows how this is done.
```
import math
def l2_normalizer(vec):
    denom = np.sum([el**2 for el in vec])
    return [(el / math.sqrt(denom)) for el in vec]

doc_term_matrix_l2 = []
for vec in doc_term_matrix:
    doc_term_matrix_l2.append(l2_normalizer(vec))
print ('A regular old document term matrix: ')
print (np.matrix(doc_term_matrix))
print ('\nA document term matrix with row-wise L2 norms of 1:')
print (np.matrix(doc_term_matrix_l2))
# if you want to check this math, perform the following:
# from numpy import linalg as la
# la.norm(doc_term_matrix[0])
# la.norm(doc_term_matrix_l2[0])
```
Without getting too deeply mired into the linear algebra, you can see immediately that we've scaled down vectors such that each element is between [0, 1], without losing too much valuable information. You see how it's no longer the case that a term count of 1 is the same value in one vector as another?
Why would we care about this kind of normalizing? Think about it this way; if you wanted to make a document seem more related to a particular topic than it actually was, you might try boosting the likelihood of its inclusion into a topic by repeating the same word over and over and over again. Frankly, at a certain point, we're getting a diminishing return on the informative value of the word. So we need to scale down words that appear too frequently in a document.
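To see that diminishing return concretely, here is a toy comparison (the two vectors are invented for illustration) of a document that repeats a word five times against one that uses it once. After L2 normalization, the fivefold repetition raises the word's weight from about 0.71 to only about 0.98:

```python
import math

def l2_normalize(vec):
    # divide each element by the vector's Euclidean length
    norm = math.sqrt(sum(el ** 2 for el in vec))
    return [el / norm for el in vec]

spam = [5, 1]    # a word repeated 5 times, plus one other word
plain = [1, 1]   # the same word used once

print(l2_normalize(spam))   # [~0.981, ~0.196]
print(l2_normalize(plain))  # [~0.707, ~0.707]
```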
### IDF frequency weighting
But we're still not there yet. Just as all words aren't equally valuable within a document, not all words are valuable across all documents. We can try reweighting every word by its inverse document frequency. Let's see what's involved in that.
```
def numDocsContaining(word, doclist):
    doccount = 0
    for doc in doclist:
        if freq(word, doc) > 0:
            doccount += 1
    return doccount

def idf(word, doclist):
    n_samples = len(doclist)
    df = numDocsContaining(word, doclist)
    return np.log(n_samples / (1 + df))
my_idf_vector = [idf(word, mydoclist) for word in vocabulary]
print ('Our vocabulary vector is [' + ', '.join(list(vocabulary)) + ']')
print ('The inverse document frequency vector is [' + ', '.join(format(val, 'f') for val in my_idf_vector) + ']')
```
Now we have a general sense of information values per term in our vocabulary, accounting for their relative frequency across the entire corpus. Recall that this is an inverse: the lower (even negative) a term's IDF, the more documents it appears in.
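A quick numeric check of that inverse relationship, using the same idf formula as in the code above, with three documents:

```python
import math

n_docs = 3  # as in mydoclist
# idf = log(n_samples / (1 + df)), matching the idf() function above
idf_everywhere = math.log(n_docs / (1 + 3))  # in all 3 docs -> log(3/4), negative
idf_rare = math.log(n_docs / (1 + 1))        # in only 1 doc -> log(3/2), positive

print(idf_everywhere, idf_rare)
```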
We're almost there. To get TF-IDF weighted word vectors, you have to perform the simple calculation of tf * idf.
Time to take a step back. Recall from linear algebra that the dot product of two vectors of the same length gives you a scalar. This won't do, since what we want is a term vector of the same dimensions (1 x number of terms), where each element has been scaled by its idf weighting. How do we do that in Python?
We could write the whole function out here, but instead we're going to show a brief introduction into numpy.
```
import numpy as np
def build_idf_matrix(idf_vector):
    idf_mat = np.zeros((len(idf_vector), len(idf_vector)))
    np.fill_diagonal(idf_mat, idf_vector)
    return idf_mat
my_idf_matrix = build_idf_matrix(my_idf_vector)
#print my_idf_matrix
```
Awesome! Now we have converted our IDF vector into a matrix of size BxB, where the diagonal is the IDF vector. That means we can now multiply every term frequency vector by the inverse document frequency matrix. Then, to make sure we are also accounting for words that appear too frequently within documents, we'll normalize each document so that its L2 norm equals 1.
```
doc_term_matrix_tfidf = []
# performing tf-idf matrix multiplication
for tf_vector in doc_term_matrix:
    doc_term_matrix_tfidf.append(np.dot(tf_vector, my_idf_matrix))

# normalizing
doc_term_matrix_tfidf_l2 = []
for tf_vector in doc_term_matrix_tfidf:
    doc_term_matrix_tfidf_l2.append(l2_normalizer(tf_vector))
print (vocabulary)
print (np.matrix(doc_term_matrix_tfidf_l2)) # np.matrix() just to make it easier to look at
```
Awesome! You've just seen an example of how to tediously construct a TF-IDF weighted document term matrix.
Here comes the best part: you don't even have to do this by hand.
Remember that everything in Python is an object; objects take up memory, and performing actions takes up time. Using scikit-learn means you don't have to worry about the efficiency of all the previous steps.
NOTE: The values you get from the TfidfVectorizer/TfidfTransformer will differ from what we computed by hand. This is because scikit-learn uses a smoothed version of TF-IDF to avoid division-by-zero errors. There is a more in-depth discussion in the scikit-learn documentation.
```
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer(min_df=1)
term_freq_matrix = count_vectorizer.fit_transform(mydoclist)
print ("Vocabulary:", count_vectorizer.vocabulary_)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(norm="l2")
tfidf.fit(term_freq_matrix)
tf_idf_matrix = tfidf.transform(term_freq_matrix)
print (tf_idf_matrix.todense())
```
In fact, you can do this just by combining the steps into one: the TfidfVectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(min_df = 1)
tfidf_matrix = tfidf_vectorizer.fit_transform(mydoclist)
print (tfidf_matrix.todense())
```
And we can fit new observations into this vocabulary space like so:
```
new_docs = ['He watches basketball and baseball', 'Julie likes to play basketball', 'Jane loves to play baseball']
new_term_freq_matrix = tfidf_vectorizer.transform(new_docs)
print (tfidf_vectorizer.vocabulary_)
print (new_term_freq_matrix.todense())
```
Note that we didn't get words like 'watches' in the new_term_freq_matrix. That's because we trained the object on the documents in mydoclist, and that word doesn't appear in the vocabulary from that corpus. In other words, it's out of the lexicon.
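This "frozen vocabulary" behavior can be sketched without scikit-learn; the function names below are invented for the illustration and are only a simplified model of what the fitted vectorizer does, not its actual implementation:

```python
def fit_vocabulary(corpus):
    """Freeze the set of known tokens, as a fitted vectorizer does."""
    vocab = set()
    for doc in corpus:
        vocab.update(doc.split())
    return vocab

def transform_tokens(doc, vocab):
    """Keep only tokens seen at fit time; everything else is out of the lexicon."""
    return [tok for tok in doc.split() if tok in vocab]

vocab = fit_vocabulary(['Julie loves me more than Linda loves me',
                        'He likes basketball more than baseball'])
print(transform_tokens('He watches basketball and baseball', vocab))
# ['He', 'basketball', 'baseball'] -- 'watches' and 'and' are silently ignored
```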
```
## Download the .csv from this URL
# https://github.com/ga-students/DAT_SF_13/blob/master/labs/NLP-Lab/amazon/sociology_2010.csv
# import os
# import csv
#os.chdir('/Users/rweiss/Dropbox/presentations/IRiSS2013/text1/fileformats/')
# with open('amazon/sociology_2010.csv', 'rb') as csvfile:
# amazon_reader = csv.DictReader(csvfile, delimiter=',')
# amazon_reviews = [row['review_text'] for row in amazon_reader]
#your code here!!!
```
```
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn as skl
```
## Read data files
```
datasetA = []
features = None
for file in os.listdir('../../sepsis_data/trainingA/'):
    # Read file
    with open('../../sepsis_data/trainingA/%s' % (file)) as f:
        if not features:
            features = f.readline().rstrip('\n').split('|')
        else:
            # This skips the headers
            f.readline()
        for idx, line in enumerate(f):
            pid = file.split('.')[0][1:]
            line = line.rstrip('\n')
            datasetA.append([pid] + line.split('|'))
datasetA = np.array(datasetA)
datasetB = []
features = None
for file in os.listdir('../../sepsis_data/trainingB/'):
    # Read file
    with open('../../sepsis_data/trainingB/%s' % (file)) as f:
        if not features:
            features = f.readline().rstrip('\n').split('|')
        else:
            # This skips the headers
            f.readline()
        for idx, line in enumerate(f):
            pid = file.split('.')[0][1:]
            line = line.rstrip('\n')
            datasetB.append([pid] + line.split('|'))
datasetB = np.array(datasetB)
datasetA.shape, datasetB.shape
dfA = pd.DataFrame(datasetA, columns=['pid'] + features)
dfB = pd.DataFrame(datasetB, columns=['pid'] + features)
dfA = pd.concat([dfA.pid, dfA.loc[:,"HR":"SepsisLabel"].astype(float)], axis=1)
dfB = pd.concat([dfB.pid, dfB.loc[:,"HR":"SepsisLabel"].astype(float)], axis=1)
df = pd.concat([dfA, dfB], axis=0)
df = pd.concat([df.pid, df.loc[:,"HR":"SepsisLabel"].astype(float)], axis=1)
df.shape
df[df.SepsisLabel == 1].pid.unique().shape
```
## Check if there are any differences between two datasets
```
dfA.loc[:,'EtCO2':'Platelets'].isna().sum()
dfB.loc[:,'EtCO2':'Platelets'].isna().sum()
plt.figure(figsize=(15,5))
plt.bar(dfA.isna().sum().index, dfA.isna().sum().values)
plt.title("Number of NaNs per column set A")
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize=(15,5))
plt.bar(dfB.isna().sum().index, dfB.isna().sum().values)
plt.title("Number of NaNs per column set B")
plt.xticks(rotation=90)
plt.show()
```
## Exploratory Analysis
```
# Entire dataset
plt.figure(figsize=(15,5))
plt.bar(df.isna().sum().index, df.isna().sum().values)
plt.title("Number of NaNs per Feature")
plt.xticks(rotation=90)
plt.savefig("../plots/feature_missingness")
plt.show()
# Create violin plots
feature_data = df.loc[:, 'HR':'SepsisLabel']
feature_data.drop(columns=['Age', 'Unit1', 'Unit2', 'Gender', 'ICULOS', 'HospAdmTime'],
inplace=True)
feature_data = (feature_data - feature_data.min())/(feature_data.max()-feature_data.min())
vplot_data = []
vplot_feature = []
vplot_label = []
for f in feature_data.drop(columns='SepsisLabel').columns:
vplot_data.extend(list(feature_data[f].values))
vplot_feature.extend([f]*feature_data[f].shape[0])
vplot_label.extend(list(feature_data.SepsisLabel.values))
vplot_data = pd.Series(data=vplot_data, dtype=float, name="Feature Values")
vplot_feature = pd.Series(data=vplot_feature, dtype=str, name="Feature")
vplot_label = pd.Series(data=vplot_label, dtype=int, name="SepsisLabel")
vplot_df = pd.concat([vplot_data, vplot_feature, vplot_label], axis=1).dropna()
# Create violin plots unstratified
plt.figure(1, figsize=(30,15))
sns.violinplot(x='Feature', y='Feature Values', data=vplot_df, hue='SepsisLabel', split=True)
plt.title("Feature Distributions between Sepsis/Non-Sepsis Patients")
plt.ylabel("Feature Values")
plt.xticks(rotation=45)
plt.savefig('../plots/feature_violin_plots')
plt.show()
plt.close()
# Create violin plots stratified
for f in features[0:34]:
f_data = df[[f, 'Unit1', 'Unit2', 'Gender', 'SepsisLabel']].dropna()
f_data['Unit'] = f_data['Unit1'].replace(0, 2)
f_data.drop(columns=['Unit1', 'Unit2'], inplace=True)
plt.figure(1)
sns.violinplot(x='Unit', y=f, data=f_data, hue='SepsisLabel', split=True)
plt.title("%s Distribution Across ICU Units"%f)
plt.ylabel("%s Value"%f)
plt.savefig('../plots/%s_distr_unit'%f)
plt.close()
plt.figure(2)
sns.violinplot(x='Gender', y=f, data=f_data, hue='SepsisLabel', split=True)
plt.title("%s Distribution Across Gender"%f)
plt.ylabel("%s Value"%f)
plt.savefig('../plots/%s_distr_gender'%f)
plt.close()
# Create histograms
for f in features:
data = df[['pid', f, 'Unit1', 'Unit2', 'Gender', 'SepsisLabel']]
sep_data = data.groupby('pid').filter(lambda x: x['SepsisLabel'].any())
nonsep_data = data.groupby('pid').filter(lambda x: x['SepsisLabel'].sum() == 0)
before_sep = []
after_sep = []
for pid, df in sep_data.groupby('pid'):
df.reset_index(drop=True, inplace=True)
first_idx = df['SepsisLabel'].idxmax()
before_sep.extend(df.iloc[0:(first_idx+6)][[f, 'SepsisLabel']].values)
after_sep.extend(df.iloc[(first_idx+6):][[f, 'SepsisLabel']].values)
before_sep = np.array(before_sep, dtype=float)
after_sep = np.array(after_sep, dtype=float)
nonsep = nonsep_data[f].values
break
plt.hist([before_sep[:, 0], after_sep[:, 0], nonsep], bins=20, label=['before', 'after', 'non-sep'], density=True)
plt.title("Distribution of HR of Sepsis Patients")
plt.legend(loc=1)
plt.xlabel("%s"%f)
plt.ylabel("Density")
plt.savefig('../HR_distr')
plt.show()
```
### 1. Figure out number of patients with sepsis
```
df[df.SepsisLabel == 1]['pid'].unique().shape
df[df.SepsisLabel == 1].shape
```
## In MIMIC-III sepsis df, half the patients have sepsis.
### 2. Number of Septic patients between Units
```
patient_df[(patient_df.SepsisLabel == 1) & (patient_df.Unit1 == 1)]['pid'].unique().shape
```
- Unit 1: 88 septic patients
- Unit 2: 191 septic patients
### Male Female Ratio
```
patient_df[(patient_df.SepsisLabel == 1) & (patient_df.Gender == 0)]['pid'].unique().shape
```
122 are female and 157 are male
```
patient_df[(patient_df.Gender == 0)]['pid'].unique().shape
```
### Age Range of Septic Patients Between ICUs
```
ages = patient_df[(patient_df.SepsisLabel == 1) & (patient_df.Unit1 == 1)]['Age'].unique()
plt.boxplot(ages)
plt.title("Unit 1 Ages")
plt.show()
plt.boxplot(patient_df[(patient_df.SepsisLabel == 1) & (patient_df.Unit2 == 1)]['Age'].unique())
plt.title("Unit 2 Ages")
plt.show()
```
## Let's look at degree of missingness
```
# Entire dataset
plt.figure(figsize=(15,5))
plt.bar(df.isna().sum().index, df.isna().sum().values)
plt.title("Number of NaNs per column")
plt.xticks(rotation=90)
plt.show()
u1_patients = patient_df[patient_df.Unit1 == 1]
u2_patients = patient_df[patient_df.Unit2 == 1]
u1_na_count = u1_patients.isna().sum()
plt.figure(figsize=(15,5))
plt.bar(u1_na_count.index, u1_na_count.values)
plt.title("Number of NaNs per column, Unit 1")
plt.xticks(rotation=90)
plt.show()
u2_na_count = u2_patients.isna().sum()
plt.figure(figsize=(15,5))
plt.bar(u2_na_count.index, u2_na_count.values)
plt.title("Number of NaNs per column, Unit 2")
plt.xticks(rotation=90)
plt.show()
```
### Missingness in Unit 1 is a little more than Unit 2 in 2 specific columns
```
# Across Septic and Non-Septic patients
septic_df = df[df.SepsisLabel == 1]
nonsep_df = df[df.SepsisLabel == 0]
sep_na_count = septic_df.isna().sum()
plt.figure(figsize=(15,5))
plt.bar(sep_na_count.index, sep_na_count.values)
plt.title("Number of NaNs in Septic Patients")
plt.xticks(rotation=90)
plt.show()
nonsep_na_count = nonsep_df.isna().sum()
plt.figure(figsize=(15,5))
plt.bar(nonsep_na_count.index, nonsep_na_count.values)
plt.title("Number of NaNs in Non-Septic Patients")
plt.xticks(rotation=90)
plt.show()
```
### What is the distribution of time steps across septic patients and non-septic patients
```
pos_time_agg = patient_df.groupby('pid').filter(lambda x: x['SepsisLabel'].any())
neg_time_agg = patient_df.groupby('pid').filter(lambda x: x['SepsisLabel'].sum() == 0)
time_agg = pos_time_agg.groupby('pid').ICULOS.agg(['count'])  # per-patient time-step counts (septic patients)
time_agg['count'].min(), time_agg['count'].max()
plt.boxplot(time_agg['count'])
plt.show()
(time_agg['count'] > 30).sum()
```
| github_jupyter |
# Point Processes
**Author: Serge Rey <sjsrey@gmail.com> and Wei Kang <weikang9009@gmail.com>**
## Introduction
One philosophy of applying inferential statistics to spatial data is to think in terms of spatial processes and their possible realizations. In this view, an observed map pattern is one of the possible patterns that might have been generated by a hypothesized process. In this notebook, we are going to regard point patterns as the outcome of point processes. There are three major types of point process, which will result in three types of point patterns:
* [Random Patterns](#Random-Patterns)
* [Clustered Patterns](#Clustered-Patterns)
* [Regular Patterns](#Regular-Patterns)
We will investigate how to generate these point patterns via simulation (the Data Generating Process (DGP) is the corresponding point process), and inspect how the resulting point patterns differ from each other visually. In the [Quadrat statistics notebook](Quadrat_statistics.ipynb) and [distance statistics notebook](distance_statistics.ipynb), we will adopt some statistics to infer whether a pattern is the outcome of a [Complete Spatial Randomness](https://en.wikipedia.org/wiki/Complete_spatial_randomness) (CSR) process.
A python file named "process.py" contains several point process classes with which we can generate point patterns of different types.
```
from pysal.explore.pointpats import PoissonPointProcess, PoissonClusterPointProcess, Window, poly_from_bbox, PointPattern
import pysal.lib as ps
from pysal.lib.cg import shapely_ext
%matplotlib inline
import numpy as np
#import matplotlib.pyplot as plt
```
## Random Patterns
Random point patterns are the outcome of CSR. CSR has two major characteristics:
1. Uniform: each location has equal probability of getting a point (where an event happens)
2. Independent: location of event points are independent
It usually serves as the null hypothesis in testing whether a point pattern is the outcome of a random process.
There are two types of CSR:
* $N$-conditioned CSR: $N$ is fixed
* Given the total number of events $N$ occurring within an area $A$, the locations of the $N$ events represent an independent random sample of $N$ locations where each location is equally likely to be chosen as an event.
* $\lambda$-conditioned CSR: $N$ is randomly generated from a Poisson process.
* The number of events occurring within a finite region $A$ is a random variable $\dot{N}$ following a Poisson distribution with mean $\lambda|A|$, with $|A|$ denoting area of $A$ and $\lambda$ denoting the intensity of the point pattern.
* Given the total number of events $\dot{N}$ occurring within an area $A$, the locations of the $\dot{N}$ events represent an independent random sample of $\dot{N}$ locations where each location is equally likely to be chosen as an event.
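Both flavours of CSR can be sketched directly with NumPy on a toy unit-square window (the notebook itself uses the Virginia boundary and `PoissonPointProcess` below):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, area = 200, 1.0  # intensity and window area

# N-conditioned CSR: N is fixed at 200, locations i.i.d. uniform in the window
pts_n = rng.uniform(0, 1, size=(200, 2))

# lambda-conditioned CSR: N is itself Poisson(lam * |A|)
n_events = rng.poisson(lam * area)
pts_lam = rng.uniform(0, 1, size=(n_events, 2))

print(pts_n.shape, pts_lam.shape)
```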
### Simulating CSR
We are going to generate several point patterns (200 events) from CSR within Virginia state boundary.
```
# open the virginia polygon shapefile
va = ps.io.open(ps.examples.get_path("virginia.shp"))
polys = [shp for shp in va]
# Create the exterior polygons for VA from the union of the county shapes
state = shapely_ext.cascaded_union(polys)
# create window from virginia state boundary
window = Window(state.parts)
```
#### 1. Generate a point series from N-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" false, we can generate a point series
# by specifying "conditioning" false, we can simulate a N-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=False)
samples
samples.realizations[0] # simulated event points
# build a point pattern from the simulated point series
pp_csr = PointPattern(samples.realizations[0])
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
#### 2. Generate a point series from $\lambda$-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" false, we can generate a point series
# by specifying "conditioning" True, we can simulate a lamda-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=False)
samples
samples.realizations[0] # simulated points
# build a point pattern from the simulated point series
pp_csr = PointPattern(samples.realizations[0])
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
The simulated point pattern has $194$ events rather than the Poisson mean $200$.
#### 3. Generate a point pattern from N-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" True, we can generate a point pattern
# by specifying "conditioning" false, we can simulate a N-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=False, asPP=True)
samples
pp_csr = samples.realizations[0] # simulated point pattern
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
#### 4. Generate a point pattern of size 200 from a $\lambda$-conditioned CSR
```
# simulate a csr process in the same window (200 points, 1 realization)
# by specifying "asPP" True, we can generate a point pattern
# by specifying "conditioning" True, we can simulate a lamda-conditioned CSR
np.random.seed(5)
samples = PoissonPointProcess(window, 200, 1, conditioning=True, asPP=True)
samples
pp_csr = samples.realizations[0] # simulated point pattern
pp_csr
pp_csr.plot(window=True, hull=True, title='Random Point Pattern')
pp_csr.n
```
## Clustered Patterns
Clustered Patterns are more grouped than random patterns. Visually, we can observe more points at short distances. There are two sources of clustering:
* Contagion: presence of events at one location affects probability of events at another location (correlated point process)
* Heterogeneity: intensity $\lambda$ varies with location (heterogeneous Poisson point process)
We are going to focus on simulating a correlated point process in this notebook. One example of a correlated point process is the Poisson cluster process. Two stages are involved in simulating it. First, parent events are simulated from a $\lambda$-conditioned or $N$-conditioned CSR. Second, $n$ offspring events for each parent event are simulated within a circle of radius $r$ centered on the parent. Offspring events are independently and identically distributed.
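The two-stage construction can be sketched with NumPy; the window (unit square) and parameter values here are toy assumptions, mirroring the parents/children/radius arguments of `PoissonClusterPointProcess`:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_cluster(n_parents, n_children, radius):
    """Stage 1: parents from N-conditioned CSR on the unit square.
    Stage 2: each parent gets n_children offspring i.i.d. uniform in a
    disc of the given radius centred on the parent."""
    parents = rng.uniform(0, 1, size=(n_parents, 2))
    offspring = []
    for p in parents:
        theta = rng.uniform(0, 2 * np.pi, n_children)
        r = radius * np.sqrt(rng.uniform(0, 1, n_children))  # uniform in disc
        offspring.append(p + np.column_stack([r * np.cos(theta), r * np.sin(theta)]))
    return parents, np.vstack(offspring)

parents, pts = poisson_cluster(10, 20, 0.05)
print(pts.shape)  # (200, 2)
```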
#### 1. Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $N$-conditioned CSR)
```
np.random.seed(5)
csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=False)
csamples
csamples.parameters #number of total events for each realization
csamples.num_parents #number of parent events for each realization
csamples.children # number of children events centered on each parent event
pp_pcp = csamples.realizations[0]
pp_pcp
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern') #plot the first realization
```
It is obvious that there are several clusters in the above point pattern.
#### 2. Simulate a Poisson cluster process of size 200 with 10 parents and 20 children within 0.5 units of each parent (parent events: $\lambda$-conditioned CSR)
```
import numpy as np
np.random.seed(10)
csamples = PoissonClusterPointProcess(window, 200, 10, 0.5, 1, asPP=True, conditioning=True)
csamples
csamples.parameters #number of events for the realization might not be equal to 200
csamples.num_parents #number of parent events for the realization, not equal to 10
csamples.children # number of children events centered on each parent event
pp_pcp = csamples.realizations[0]
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern')
```
#### 3. Simulate a Poisson cluster process of size 200 with 5 parents and 40 children within 0.5 units of each parent (parent events: $N$-conditioned CSR)
```
np.random.seed(10)
csamples = PoissonClusterPointProcess(window, 200, 5, 0.5, 1, asPP=True)
pp_pcp = csamples.realizations[0]
pp_pcp.plot(window=True, hull=True, title='Clustered Point Pattern')
```
| github_jupyter |
## Programming Language : Python
<img align='left' src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/python.jpeg?raw=1' >
## Problem Statement
Breast cancer (BC) is one of the most common cancers among women worldwide, representing the majority of new cancer cases and cancer-related deaths according to global statistics, making it a significant public health problem in today’s society.
The early diagnosis of BC can improve the prognosis and chance of survival significantly, as it can promote timely clinical treatment to patients. Further accurate classification of benign tumors can prevent patients undergoing unnecessary treatments. Thus, the correct diagnosis of BC and classification of patients into malignant or benign groups is the subject of much research. Because of its unique advantages in critical features detection from complex BC datasets, machine learning (ML) is widely recognized as the methodology of choice in BC pattern classification and forecast modelling.
<b>There are two main classifications of tumors. One is known as benign and the other as malignant. A benign tumor is a tumor that does not invade its surrounding tissue or spread around the body. A malignant tumor is a tumor that may invade its surrounding tissue or spread around the body.</b>
<hr>
## Dataset
1. https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
2. https://www.kaggle.com/uciml/breast-cancer-wisconsin-data
## Step 0: Importing Libraries
```
## For Data Manipulation: Provides Dataframe as the datastructure to hold data
import pandas as pd
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/pandas_logo.png?raw=1' >
```
## For Faster data computation: Provides multidimentional array support to hold and manipulate data
import numpy as np
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/numpy.jpeg?raw=1' >
```
## For Data Visualization
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
```
<img src='https://github.com/harveenchadha/Breast_Cancer_Prediction/blob/master/images/matplotlib.png?raw=1' >
```
pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
tf.__version__
```
## Step 1: Question
1. What are the factors that contribute to malignant and benign tumor?
2. Is this a classification or a regression problem?
3. Is our final model capable of predicting the difference between the two types?
## Step 2: Wrangle Data
### Step 2.1: Gathering Data
```
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('./data.csv') ## reading data from a csv into pandas datastructure
```
### Step 2.2: Accessing Data
```
df.head()
df.describe()
df.info()
```
### Step 2.3: Cleaning Data
```
df.isnull().sum() #checking if any column has a null value because nulls can cause a problem while training model
df.drop(columns=['Unnamed: 32'], inplace=True)
```
## Step 3: EDA
```
df.diagnosis.value_counts().plot(kind= 'bar');
fig, ax = plt.subplots(figsize =(16,4))
df.concavity_mean.plot()
malignant = df[df.diagnosis == 'M']
benign = df[df.diagnosis == 'B']
fig, ax = plt.subplots(figsize =(16,4))
plt.plot(malignant.concavity_mean, label = 'Malignant')
plt.plot(benign.concavity_mean, label = 'Benign')
plt.legend();
```
## Step 4: Model Data
```
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.metrics import confusion_matrix, classification_report
## Seperate out features and labels
X = df.drop(columns=['diagnosis'])
y = df.diagnosis
sc = StandardScaler()
X = sc.fit_transform(X)
## Since machines understand language of either 0 or 1, you have to provide them data in that language only.
## So convert M to 0 and B to 1
le = LabelEncoder()
y = le.fit_transform(y)
## Set aside Training and test data for validation of our model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, shuffle=True, random_state = 42)
## checking out shape of the variables for traiing and test
X_train.shape, y_train.shape, X_test.shape, y_test.shape
from tensorflow.keras.layers import Dense, Flatten, Activation, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
model = Sequential()
model.add(Dense(32, input_shape=(31,)))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss = 'binary_crossentropy' , metrics=['accuracy'], optimizer=Adam(lr=0.0001))
model.fit(X_train, y_train, validation_split = 0.1, epochs= 50, verbose =1, batch_size = 8)
```
## Step 5: Evaluating Model
```
y_pred = model.predict(X_test) ## we perform prediction on the validation set kept aside in step 4
y_pred = (y_pred >= 0.5).astype(int)
```
<b> Metric for evaluation: Confusion Matrix </b>
```
confusion_matrix( y_test, y_pred) ## for validation set
```
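As a sanity check, the four cells of a binary confusion matrix can be tallied by hand; this sketch follows scikit-learn's `[[tn, fp], [fn, tp]]` ordering:

```python
def confusion_counts(y_true, y_pred):
    """Return (tn, fp, fn, tp) for binary labels."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tn, fp, fn, tp

print(confusion_counts([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))  # (1, 1, 1, 2)
```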
## Step 9: Conclusion:
1. What are the factors that contribute to malignant and benign tumor? : Answer Given in Step 8
2. Is this a classification or a regression problem? Classification
3. Is our final model capable of predicting the difference between the two types? Our model is more than 94% accurate
## Step 10: Communicate
Create a powerpoint and communicate your findings
| github_jupyter |
# Cows supplychain network
Table of Contents:
1. [Network analysis - Part1](#network-analysis---part1)
- [Get network properties](#get-network-properties)
- [Log-log Plot](#log-log-plot)
- [Judge network](#judge-network)
2. [Network analysis - Part2](#network-analysis---part2)
- [Network centrality properties](#network-centrality-properties)
- [Robustness Testing](#robustness-testing)
3. [SIR & SIS Model](#sir--sis-model)
- [SIR Model](#sir-model)
- [SIS Model](#sis-model)
- [Different combinations of gamma and beta](#different-combinations-of-gamma-and-beta)
```
#!pip install networkx matplotlib
#!pip install -q ndlib
#!pip install -q bokeh
#import modules
import numpy as np
import networkx as nx
import random
import matplotlib.pyplot as plt
import ndlib.models.epidemics as ep
import ndlib.models.ModelConfig as mc
import sys
import math
import pandas as pd
import itertools
import statistics
from bokeh.io import output_notebook, show
from ndlib.viz.bokeh.DiffusionTrend import DiffusionTrend
from supply_chain_mapping import data_cleaning_and_processing as dc
from supply_chain_mapping import network_analysis as na
from supply_chain_mapping import visualizations as vz
from scipy.integrate import odeint
from scipy.optimize import minimize
%matplotlib inline
```
**Import the data and create the necessary datasets**:
```
# Get the directed weighted network
Gd = na.get_networks(directed=True)
# Get the undirected weighted network because some things in the nx package haven't been implemented for directed networks
Gn = na.get_networks(directed=False)
# Get a sample of the network
# %%time
# nx.draw(Gd)
```
-----------
## <font color=maroon>Network analysis - Part 1</font>
Here we analyse the network structure of the livestock supply chain by computing key properties (number of nodes (N), number of links (K), average clustering coefficient (\<C>), average degree (\<K>), and average shortest path (\<L>)) and judging the type of network among the small-world, Barabasi-Albert, and Erdős-Rényi models.
Reference: [Assignment 1 Part1_Sample solution.ipynb](https://bcourses.berkeley.edu/courses/1509160/files/folder/Discussion/Discussion%205)
### <font color=amber>Get network properties!</font>
```
# network property ---------
# empirical network model
print("============empirical network model===================")
## number of nodes N
n_node = Gd.number_of_nodes()
## number of link K
n_edge = Gd.number_of_edges()
## ave. clustering coef <C>
c = nx.average_clustering(Gd)
## ave. degree <K>
k = np.mean(list(dict(Gd.degree()).values()))
## ave. shortest path <L>
# l = nx.average_shortest_path_length(largest_component)
#largest_component = sorted((GN.subgraph(c) for c in nx.connected_components(GN)), key = len, reverse=True)[0]
print("Number of nodes = ",n_node)
print("Number of links = ",n_edge)
print("Average clustering coefficient of the network = ",c)
print("Average degree of the network = ",k)
# print("Average shortest path = ",l)
```
*Project into a ****Watts-Strogatz**** model to check fit*:
```
%%time
# small-world model a.k.a. Watts-Strogatz model
print("============small-world network model===================")
k = int(n_edge/n_node)*2
C0 = nx.average_clustering(nx.watts_strogatz_graph(n_node,k,0))
## probability of rewiring each edge
p_s = 1-pow(c/C0,1/3)
print("p = ",p_s)
gs=nx.watts_strogatz_graph(n_node,k,p_s,seed=123)
## number of nodes N
n_node_s=gs.number_of_nodes()
## number of link K
n_edge_s=gs.number_of_edges()
## ave. clustering coef <C>
c_s = nx.average_clustering(gs)
## ave. degree <K>
k_s = np.mean(list(dict(gs.degree()).values()))
## ave. shortest path <L>
#l_s = nx.average_shortest_path_length(gs) # Left out because it takes too long
print("Number of nodes = ",n_node_s)
print("Number of links = ",n_edge_s)
print("Average clustering coefficient of the network = ",c_s)
print("Average degree of the network = ",k_s)
#print("Average shortest path = ",l_s)
#Barabasi-Albert network model
print("============Barabasi-Albert network model===================")
## Set m so that the number of edges in the BA model (approx. m*N) is close to the empirical network
m_b=6
print("m = ",m_b)
gb = nx.barabasi_albert_graph(n_node, m_b,seed=123)
## number of nodes N
n_node_b=gb.number_of_nodes()
## number of link K
n_edge_b=gb.number_of_edges()
## ave. clustering coef <C>
c_b = nx.average_clustering(gb)
## ave. degree <K>
k_b = np.mean(list(dict(gb.degree()).values()))
## ave. shortest path <L>
#l_b=nx.average_shortest_path_length(gb) # Left out because it takes too long
print("Number of nodes = ",n_node_b)
print("Number of links = ",n_edge_b)
print("Average clustering coefficient of the network = ",c_b)
print("Average degree of the network = ",k_b)
#print("Average shortest path = ",l_b)
# random graph, a.k.a Erdős-Rényi graph
print("============Erdős-Rényi network model===================")
## Probability for edge creation
p_e = n_edge/(n_node*(n_node-1)) # directed graph: expected K = p*N*(N-1)
print("p = ",p_e)
ge = nx.erdos_renyi_graph(n_node, p_e, seed=123, directed=True)
#CC= sorted((ge.subgraph(c) for c in nx.connected_components(ge)), key = len, reverse=True)[0]
## number of nodes N
n_node_e=ge.number_of_nodes()
## number of link K
n_edge_e=ge.number_of_edges()
## ave. clustering coef <C>
c_e = nx.average_clustering(ge)
## ave. degree <K>
k_e = np.mean(list(dict(ge.degree()).values()))
## ave. shortest path <L> choose either of below functions
#l_e=nx.average_shortest_path_length(ge)
#l_e=nx.average_shortest_path_length(CC) #when ge is too big
print("Number of nodes = ",n_node_e)
print("Number of links = ",n_edge_e)
print("Average clustering coefficient of the network = ",c_e)
print("Average degree of the network = ",k_e)
#print("Average shortest path = ",l_e)
# show properties in dataframe
list_data = [
[n_node, n_edge, c, k],
[n_node_s, n_edge_s, c_s, k_s],
[n_node_b, n_edge_b, c_b, k_b],
[n_node_e, n_edge_e, c_e, k_e],
]
df=pd.DataFrame(list_data)
df.index=["empirical network", f"small world (p={p_s})", f"Barabasi-Albert (m={m_b})", f"Erdős-Rényi (p={p_e})"]
df.columns=["# of nodes","# of link","<C>","<K>"]
df
```
### Plot Weighted Degrees-out and Weighted Degrees-in
```
vz.plot_degrees_out_in_directedG(Gd)
```
### Log-log Plot
```
#G1:empirical
degs0 = list(dict(nx.degree(Gd)).values())
n0, bins0 = np.histogram(degs0, bins = list(range(min(degs0), max(degs0)+1, 1)), density=True)
#Gs:Small World
degs1 = list(dict(nx.degree(gs)).values())
n1, bins1 = np.histogram(degs1, bins = list(range(min(degs1), max(degs1)+1, 1)), density=True)
#Gb:Barabasi Albert
degs2 = list(dict(nx.degree(gb)).values())
n2, bins2 = np.histogram(degs2, bins = list(range(min(degs2), max(degs2)+1, 1)), density=True)
#Ge:Erdos Renyi
degs3 = list(dict(nx.degree(ge)).values())
n3, bins3 = np.histogram(degs3, bins = list(range(min(degs3), max(degs3)+1, 1)), density=True)
plt.figure(figsize=(17,8)) #use once and set figure size
plt.loglog(bins0[:-1],n0,'b-', markersize=10, label="Empirical Data")
plt.loglog(bins1[:-1],n1,'bs--', markersize=10, label="Small World")
plt.loglog(bins2[:-1],n2,'go--', markersize=10, label="Barabasi Albert")
plt.loglog(bins3[:-1],n3,'r*--', markersize=10, label="Erdos Renyi")
plt.legend(loc='upper right',prop={'size': 30})
plt.title('Degree Distributions log-log plot',fontsize=30,y=1.1)
plt.xlabel('Degree, k',fontsize=30)
plt.ylabel('P(k)',fontsize=30)
plt.xticks(fontsize=30)
plt.yticks(fontsize=30)
plt.tight_layout()
plt.savefig("./networkplot.png")
plt.show()
```
## Judge network
Small world property: low \<L> (avg. shortest path) and high \<C> (clustering coefficient).
```
# Small world property is low avg. shortest path and high clustering coefficient.
#metabolic
#print("<L_nw> =", l)
print("ln(N) =", np.log(n_node))
print("<C_nw> =", c)
print("<C_rm> =", c_e)
#or the below might work
# print("<C_rm> =",nx.average_clustering(nx.watts_strogatz_graph(n_node, k, 1.0, seed=123))
```
-----------
# Network analysis - Part2
Here we analyze network metrics (degree centrality and betweenness centrality) to detect cluster hubs and simulate robustness.
Reference:
[Livestock Network Analysis for Rhodesiense Human African Trypanosomiasis Control in Uganda (especially Table1&2)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8273440/)
[Lecture18_CentralityTutorial.ipynb](https://bcourses.berkeley.edu/courses/1509160/files/folder/Lectures/Lecture%2018)
## Network centrality properties
Properties that represent centrality.
```
# advanced network metrics
t_n = 10
## degree centrality - The number of edges (links) a cattle stop point (node) has. Indicates whether a node can be a source of infection (high out-degree centrality) or receive most of the infection from other cattle stop points (high in-degree centrality)
### Ave. degree centrality - note degree centrality is degree/(N-1), so this is <K>/(N-1), not <K> itself
dc=nx.centrality.degree_centrality(Gd)
dc_sequence=list(dc.values())
degree_sequence = [Gd.degree(n) for n in Gd.nodes]
#assert dc_sequence==degree_sequence
adc=statistics.mean(dc_sequence)
#assert adc==k
print("Average degree centrality = ",adc)
print("Average degree of the network = ",k)
### Max degree centrality - Finding the node with max degree
max_degree_node = max(Gd.nodes, key=Gd.degree)
max_link=Gd.degree(max_degree_node)
top_n = list(pd.DataFrame(Gd.degree, columns=['node','degree']).sort_values('degree', ascending=False)['node'].iloc[:t_n].values)
for degree_node in top_n:
link=Gd.degree(degree_node)
print("the node", degree_node, "has", link, "links.")
%%time
## Degree betweenness - Measures the extent to which a cattle stop point (node) lies on the paths between other cattle stop points. Measures how frequently a given cattle stop point (node) can act as a bridge between other cattle stop points (nodes) in the network. The higher the degree betweenness, the higher the potential of a cattle stop point to transmit the infection from a source cattle stop point.
### Ave. betweenness centrality
bc=nx.centrality.betweenness_centrality(Gn)
%%time
bc_sequence = list(bc.values())
abc=statistics.mean(bc_sequence)
print('Average betweenness centrality = ', abc)
### Max betweenness centrality
max_bt_node = max(Gd.nodes, key=bc.get)
max_bt=bc[max_bt_node]
print("the node", max_bt_node, "has", max_bt, "betweenness.")
```
## Plot distribution
The plot should be similar to log-log plot in part1...
We can graphically represent the sequence of centrality values by using a *histogram*. In its basic form, a histogram plots the degree values on the x-axis, and the number of nodes having that degree on the y-axis. To do this counting, we can make use of Python's `collections.Counter`.
```
from collections import Counter
import plotly.graph_objects as go
degree_counts = Counter(degree_sequence)
degree_counts # dict format
min_degree, max_degree = min(degree_counts.keys()), max(degree_counts.keys())
plot_x = list(range(min_degree, max_degree + 1))
plot_y = [degree_counts.get(x, 0) for x in plot_x]
plt.bar(plot_x, plot_y)
plt.xlabel("Degree Values")
plt.ylabel("# of Nodes")
plt.show();
```
## Robustness Testing
We measure how much it would damage the network structure if particular nodes were to be removed.
Two types of network damage:
- Random failure: nodes are chosen randomly for removal
- Targeted attack: nodes are removed based on some criterion (e.g., in decreasing order of their degree centrality)
**In the real-world cow supply chain, it is not realistic to assume that some number of stop points is randomly damaged (infected) at every step, in either a geographical or an epidemiological sense (although it might be possible if the disease were widespread across the whole country). So we modify the provided model to simulate M nodes being (randomly or purposefully) attacked only at Day 0, with no further steps.**
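A minimal sketch of that Day-0-only attack, written on a plain adjacency dict so it is self-contained (the same idea applies to `Gn` with `nx.connected_components`):

```python
def largest_component_fraction(adj, removed):
    """Remove the given nodes once (Day 0), then return the share of all
    nodes that sit in the largest remaining connected component."""
    alive = set(adj) - set(removed)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:  # depth-first traversal of one component
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best / len(adj)

# toy path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(largest_component_fraction(adj, removed=[2]))  # 0.4: removing the middle node splits the path
```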
```
# Random failure
C = Gn.copy()
N = Gn.number_of_nodes()
number_of_bins = 50
M = N // number_of_bins
num_nodes_removed = range(0, N, M)
random_attack_core_proportions = []
for nodes_removed in num_nodes_removed:
    C = Gn.copy()
    if C.number_of_nodes() > nodes_removed:
        nodes_to_remove = random.sample(list(C.nodes), nodes_removed)
        C.remove_nodes_from(nodes_to_remove)
    # Measure the relative size of the network core (largest connected component)
    core = max(nx.connected_components(C), key=len)
    core_proportion = len(core) / N
    random_attack_core_proportions.append(core_proportion)
plt.title('Random failure at Day0')
plt.xlabel('Number of nodes removed')
plt.ylabel('Proportion of nodes in core')
plt.plot(num_nodes_removed, random_attack_core_proportions, marker='o');
# Targeted attack
C = Gn.copy()
N = Gn.number_of_nodes()
number_of_bins = 50
M = N // number_of_bins
num_nodes_removed = range(0, N, M)
targeted_attack_core_proportions = []
for nodes_removed in num_nodes_removed:
    C = Gn.copy()
    if C.number_of_nodes() > nodes_removed:
        nodes_sorted_by_degree = sorted(C.nodes, key=C.degree, reverse=True)
        nodes_to_remove = nodes_sorted_by_degree[:nodes_removed]
        C.remove_nodes_from(nodes_to_remove)
    # Measure the relative size of the network core (largest connected component)
    core = max(nx.connected_components(C), key=len)
    core_proportion = len(core) / N
    targeted_attack_core_proportions.append(core_proportion)
plt.title('Targeted Attack at Day0')
plt.xlabel('Number of nodes removed')
plt.ylabel('Proportion of nodes in core')
plt.plot(num_nodes_removed, targeted_attack_core_proportions, marker='o');
plt.title('Random Failure vs. Targeted Attack at Day0')
plt.xlabel('Number of nodes removed')
plt.ylabel('Proportion of nodes in core')
plt.plot(num_nodes_removed, random_attack_core_proportions, marker='o', label='Random')
plt.plot(num_nodes_removed, targeted_attack_core_proportions, marker='^', label='Targeted')
plt.legend();
```
---------
# SIR & SIS Model
Here we simulate the disease spread in the identified network with the two models below.
- SIR Model
- SIS Model
In our analysis, each node represents a cow stop point. Even after a stop point has been disinfected, some risk of another outbreak always remains, so the SIS model is the more realistic choice.
### Overview of SIR Model
In the SIR model, the (fixed) population of $N$ individuals is divided into three "compartments" whose sizes evolve as functions of time, $t$:
- $S_t$ : susceptible but not yet infected with the disease
- $I_t$ : infected
- $R_t$ : recovered from the disease and now have immunity to it. Will never be infected again
This model describes the change in the population of each compartment in terms of two parameters, $β$ and $γ$.
- $β$ : the effective contact rate of the disease: an infected individual comes into contact with $βN$ other individuals per unit time (of which the fraction that are susceptible to contracting the disease is $S/N$).
- $γ$ : the mean recovery rate. $1/γ$ is the mean period of time during which an infected individual can pass it on.
The basic reproductive number ($R_0$, "R naught") is the number of individuals to whom one infected individual passes the disease, and is defined as:
- $R_0$ = $βN/γ$
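For example, with illustrative values (not estimates for the cattle network):

```python
# Epidemic threshold check for the SIR model above.
# beta, gamma and N are illustrative values only.
beta = 0.012   # effective contact rate per individual
gamma = 0.2    # mean recovery rate (1/gamma = infectious period)
N = 100        # population size

r_0 = beta * N / gamma
print(r_0 > 1)   # True: on average each case causes more than one new case
```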
The equations of this model are:
$$
\begin{cases}
N = S & t = 0\\
N = S + I + R & t > 0
\end{cases}
$$
<img src="https://www.researchgate.net/profile/Claudio-Struchiner-2/publication/47676805/figure/fig2/AS:343729496969224@1458962906357/SIR-model-Schematic-representation-differential-equations-and-plot-for-the-basic-SIR.png" width="500"><figcaption>From Paula Luz, Claudio Struchiner, & Alison P Galvani. (2010). Modeling Transmission Dynamics and Control of Vector-Borne Neglected Tropical Diseases</figcaption>
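The curves in the figure above can be reproduced with a simple forward-Euler integration of the SIR rate equations ($dS/dt = -\beta SI$, $dI/dt = \beta SI - \gamma I$, $dR/dt = \gamma I$); this sketch uses illustrative $\beta$ and $\gamma$ and a population normalized to $N = 1$:

```python
# Forward-Euler integration of the SIR equations with N normalized to 1.
# beta and gamma are illustrative values (R0 = beta/gamma = 6 here).
beta, gamma = 1.2, 0.2
S, I, R = 0.99, 0.01, 0.0
dt, steps = 0.01, 10_000  # simulate 100 time units

history = []
for _ in range(steps):
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    history.append(I)

print(round(S + I + R, 6))  # the total population is conserved
print(max(history) > I)     # I peaks and then dies out, as in the figure
```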
### Overview of SIS Model
In the SIS model, the (fixed) population of $N$ individuals is divided into two "compartments" whose sizes evolve as functions of time, $t$:
- $S_t$ : susceptible either not yet infected with the disease or recovered without immunity
- $I_t$ : infected
This model describes the change in the population of each compartment in terms of the same two parameters, $β$ and $γ$, as the SIR model.
The equations of this model are:
$$
\begin{cases}
N = S & t = 0\\
N = S + I & t > 0
\end{cases}
$$
$$
\frac{dS}{dt}=γI - βSI
$$
$$
\frac{dI}{dt}= -γI + βSI
$$
<img src="https://sineadmorris.github.io/post/the-sis-model/SISsimple.png" width="500"><figcaption>From Sinead Morris. (2018). SIS model for malaria</figcaption>
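In contrast to SIR, these SIS equations settle into an endemic equilibrium whenever $R_0 = \beta/\gamma > 1$: setting $dI/dt = 0$ with $N$ normalized to 1 gives $I^* = 1 - \gamma/\beta$. A quick Euler check with illustrative values:

```python
# Forward-Euler integration of the SIS equations with N normalized to 1.
# beta and gamma are illustrative values (R0 = beta/gamma = 6 here).
beta, gamma = 1.2, 0.2
S, I = 0.99, 0.01
dt = 0.01

for _ in range(10_000):  # simulate 100 time units
    dS = gamma * I - beta * S * I
    dI = beta * S * I - gamma * I
    S, I = S + dS * dt, I + dI * dt

print(round(I, 4))  # settles near the endemic level I* = 1 - gamma/beta
```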
Reference:
[Learning Scientific Programming with Python The SIR epidemic model](https://scipython.com/book/chapter-8-scipy/additional-examples/the-sir-epidemic-model/)
[Valentina Alto. (2020). Visualizing dynamic phenomena with SIR model and Networks: An implementation with Python](https://python.plainenglish.io/visualizing-dynamic-phenomena-with-sir-model-and-networks-45a4e629e609)
[Epidemics on Networks (Python Package)](https://epidemicsonnetworks.readthedocs.io/en/latest/index.html)
```
# drawing network
plt.figure(figsize=(10,10))
nx.draw(Gd, with_labels = True)
import random
n_edge = Gd.number_of_edges()
# Calculate normalized weights.
# TODO: derive each weight from the number of edges between the pair of nodes;
# for now we create random weights.
w = []
for i in range(n_edge):
    w.append(random.random())
s = max(w)
# Normalize them by dividing by the maximum weight
w = [i/s for i in w]
len(w)
k = 0
for i, j in Gd.edges():
    Gd[i][j]['weight'] = w[k]
    k += 1
edgewidth = [d['weight'] for (u, v, d) in Gd.edges(data=True)]
# layout
pos = nx.spring_layout(Gd)
# rendering (pass a labels dict to draw_networkx_labels to customize node labels)
plt.figure(figsize=(40,40))
nx.draw_networkx_nodes(Gd, pos)
nx.draw_networkx_edges(Gd, pos, width=edgewidth, node_size=500)
nx.draw_networkx_labels(Gd, pos)
plt.axis('off')
```
## SIR Model
```
import EoN
gamma = 0.2 # hyperparameter
beta = 1.2 # hyperparameter
r_0 = beta/gamma
print("R_naught is", r_0)
N = n_node # population size - number of cow stop points
I0 = 1 # initial number of infected individuals
R0 = 0
S0 = N - I0 - R0
pos = nx.spring_layout(Gn)
nx_kwargs = {"pos": pos, "alpha": 0.7} # optional arguments passed on to the networkx plotting command
# disabled options: "with_labels": True, "width": edgewidth
print("doing SIR simulation")
sim_sir = EoN.fast_SIR(Gn, tau=beta, gamma=gamma, rho=I0/N, transmission_weight="weight", return_full_data=True)
print("done with simulation, now plotting")
for i in range(0, 5, 1):
    sim_sir.display(time=i, **nx_kwargs)
    plt.axis('off')
    plt.title("Iteration {}".format(i))
    plt.draw()
```
## SIS Model
```
pos = nx.spring_layout(Gd)
nx_kwargs = {"pos": pos, "alpha": 0.7} # optional arguments passed on to the networkx plotting command
# disabled options: "with_labels": True, "width": edgewidth
sim_sis = EoN.fast_SIS(Gd, tau=beta, gamma=gamma, rho=I0/N, transmission_weight="weight", return_full_data=True)
sim_sis.set_pos(pos)
for i in range(0, 5, 1):
    sim_sis.display(time=i, **nx_kwargs)
    plt.axis('off')
    plt.title("Iteration {}".format(i))
    plt.draw()
```
### Different combinations of gamma and beta
```
## CAUTION!! Several hundred outputs will be generated
# different gamma and beta
import itertools
per = np.arange(0.1, 1.0, 0.1) # start at 0.1 to avoid division by zero
comb = list(itertools.combinations_with_replacement(per, 2))
for g, b in comb:
    gamma = g
    beta = b
    r_0 = beta/gamma
    print("R_naught is", r_0)
    N = n_node # population size - number of cow stop points
    I0 = 1 # initial number of infected individuals
    R0 = 0
    S0 = N - I0 - R0
    pos = nx.spring_layout(G1)
    nx_kwargs = {"pos": pos, "alpha": 0.7} # optional arguments passed on to the networkx plotting command
    # disabled options: "with_labels": True, "width": edgewidth
    print("doing SIR simulation")
    sim_sir_iter = EoN.fast_SIR(G1, tau=beta, gamma=gamma, rho=I0/N, transmission_weight="weight", return_full_data=True)
    print("done with simulation, now plotting")
    for i in range(0, 5, 1):
        sim_sir_iter.display(time=i, **nx_kwargs)
        plt.axis('off')
        plt.title(f"Iteration {i} with gamma = {gamma}, beta = {beta}")
        plt.draw()
    print("doing SIS simulation")
    sim_sis_iter = EoN.fast_SIS(G1, tau=beta, gamma=gamma, rho=I0/N, transmission_weight="weight", return_full_data=True)
    sim_sis_iter.set_pos(pos)
    for i in range(0, 5, 1):
        sim_sis_iter.display(time=i, **nx_kwargs)
        plt.axis('off')
        plt.title(f"Iteration {i} with gamma = {gamma}, beta = {beta}")
        plt.draw()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_parent" href="https://github.com/giswqs/geemap/tree/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_parent" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_parent" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/tutorials/ImageCollection/03_filtering_image_collection.ipynb"><img width=26px src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
# Filtering an ImageCollection
As illustrated in the [Get Started section](https://developers.google.com/earth-engine/getstarted) and the [ImageCollection Information section](https://developers.google.com/earth-engine/ic_info), Earth Engine provides a variety of convenience methods for filtering image collections. Specifically, many common use cases are handled by `imageCollection.filterDate()`, and `imageCollection.filterBounds()`. For general purpose filtering, use `imageCollection.filter()` with an ee.Filter as an argument. The following example demonstrates both convenience methods and `filter()` to identify and remove images with bad registration from an `ImageCollection`:
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
### Simple cloud score
For scoring Landsat pixels by their relative cloudiness, Earth Engine provides a rudimentary cloud scoring algorithm in the `ee.Algorithms.Landsat.simpleCloudScore()` method. Also note that `simpleCloudScore()` adds a band called `cloud` to the input image. The cloud band contains the cloud score from 0 (not cloudy) to 100 (most cloudy).
```
# Load Landsat 5 data, filter by date and bounds.
collection = ee.ImageCollection('LANDSAT/LT05/C01/T2') \
.filterDate('1987-01-01', '1990-05-01') \
.filterBounds(ee.Geometry.Point(25.8544, -18.08874))
# Also filter the collection by the IMAGE_QUALITY property.
filtered = collection \
.filterMetadata('IMAGE_QUALITY', 'equals', 9)
# Create two composites to check the effect of filtering by IMAGE_QUALITY.
badComposite = ee.Algorithms.Landsat.simpleComposite(collection, 75, 3)
goodComposite = ee.Algorithms.Landsat.simpleComposite(filtered, 75, 3)
# Display the composites.
Map.setCenter(25.8544, -18.08874, 13)
Map.addLayer(badComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'bad composite')
Map.addLayer(goodComposite,
{'bands': ['B3', 'B2', 'B1'], 'gain': 3.5},
'good composite')
```
## Display Earth Engine data layers
```
Map.addLayerControl()
Map
```
<a href="https://colab.research.google.com/github/raqueeb/TensorFlow2/blob/master/scratch_model_weight_changes_affect_accuracy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
# Data point for our prediction; compare with the figure
input_data = np.array([2, 3])
# Our weights dictionary
weights = {'node_0': np.array([1, 1]),
           'node_1': np.array([-1, 1]),
           'output': np.array([2, -1])
           }
# Calculate the value of node 0: node_0_value
node_0_value = (input_data * weights['node_0']).sum()
# Calculate the value of node 1: node_1_value
node_1_value = (input_data * weights['node_1']).sum()
# Put the node values into an array: hidden_layer_outputs
hidden_layer_outputs = np.array([node_0_value, node_1_value])
# Calculate the output: output
output = (hidden_layer_outputs * weights['output']).sum()
# Print the outputs to check
print(hidden_layer_outputs)
print(output)

# New weights and input data
weights = np.array([1, 2])
input_data = np.array([3, 4])
# Calculate the prediction: preds
preds = (weights * input_data).sum()
# Assume our target is 6
target = 6
# Calculate the error: error
error = preds - target
# Calculate the slope: slope
slope = 2 * input_data * error
# Print the slope
print(slope)
# Set the learning rate: learning_rate
learning_rate = 0.01
# Calculate the slope/gradient: gradient
gradient = 2 * input_data * error
# Update the weights: weights_updated
weights_updated = weights - learning_rate * gradient
# Get the updated prediction: preds_updated
preds_updated = (weights_updated * input_data).sum()
# Get the updated error: error_updated
error_updated = preds_updated - target
# Print the initial error
print(error)
# Print the new error
print(error_updated)
import numpy as np
# Data point for our prediction; compare with the figure
input_data = np.array([0, 3])
# Sample weights that we have changed
weights_0 = {'node_0': [2, 1],
             'node_1': [1, 2],
             'output': [1, 1]
             }
# Actual target value, needed to calculate the error
target_actual = 3

# Define two methods
def relu(input):
    output = max(0, input)
    return output

def predict_with_network(input_data_row, weights):
    node_0_input = (input_data_row * weights['node_0']).sum()
    # print(node_0_input)
    node_0_output = relu(node_0_input)
    # print(node_0_output)
    node_1_input = (input_data_row * weights['node_1']).sum()
    node_1_output = relu(node_1_input)
    hidden_layer_outputs = np.array([node_0_output, node_1_output])
    input_to_final_layer = (hidden_layer_outputs * weights['output']).sum()
    model_output = relu(input_to_final_layer)
    return model_output

# Make a prediction with the initial weights
model_output_0 = predict_with_network(input_data, weights_0)
# Calculate the error: error_0
error_0 = model_output_0 - target_actual
# Set new weights so the prediction hits the target (3): weights_1
weights_1 = {'node_0': [2, 1],
             'node_1': [1, 2],
             'output': [1, 0]
             }
# Prediction with the new weights: model_output_1
model_output_1 = predict_with_network(input_data, weights_1)
# Calculate the error again: error_1
error_1 = model_output_1 - target_actual
# Print everything to check
print(model_output_0)
print(model_output_1)
print(error_0)
print(error_1)
import numpy as np
# Data point for our prediction; compare with the figure
input_data = np.array([-1, 2])
# Our weights dictionary
weights = {'node_0': np.array([3, 3]),
           'node_1': np.array([1, 5]),
           'output': np.array([2, -1])
           }

def relu(input):
    '''Define the ReLU function here'''
    # Take the maximum of the input and 0 (negative inputs become 0): output
    output = max(0, input)
    # Return the value just calculated
    return(output)

# Calculate the value of node 0: node_0_output
node_0_input = (input_data * weights['node_0']).sum()
node_0_output = relu(node_0_input)
# Calculate the value of node 1: node_1_output
node_1_input = (input_data * weights['node_1']).sum()
node_1_output = relu(node_1_input)
# Put the new values into an array: hidden_layer_outputs
hidden_layer_outputs = np.array([node_0_output, node_1_output])
# Calculate the model output (without applying ReLU to the final layer)
model_output = (hidden_layer_outputs * weights['output']).sum()
# Print the model output
print(node_0_output)
print(node_1_output)
print(hidden_layer_outputs)
print(model_output)
```
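The notebook above performs exactly one gradient update. Repeating that update in a loop is all gradient descent is; here is a short sketch with the same toy numbers (weights [1, 2], inputs [3, 4], target 6), showing the squared error shrinking each step:

```python
import numpy as np

weights = np.array([1.0, 2.0])
input_data = np.array([3, 4])
target = 6
learning_rate = 0.01

errors = []
for _ in range(20):
    preds = (weights * input_data).sum()
    error = preds - target
    errors.append(error ** 2)
    # Slope of the squared error with respect to each weight.
    gradient = 2 * input_data * error
    weights = weights - learning_rate * gradient

print(errors[0])          # 25.0 to start
print(errors[-1] < 1e-8)  # True: the error has all but vanished
```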
```
from IPython.core.display import HTML
HTML(open("custom.css", "r").read())
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#From-Transistors-to-ALUs-With-Skywater" data-toc-modified-id="From-Transistors-to-ALUs-With-Skywater-1">From Transistors to ALUs With Skywater</a></span><ul class="toc-item"><li><span><a href="#Installing-Some-Tools" data-toc-modified-id="Installing-Some-Tools-1.1">Installing Some Tools</a></span></li><li><span><a href="#Getting-the-Skywater-PDK" data-toc-modified-id="Getting-the-Skywater-PDK-1.2">Getting the Skywater PDK</a></span></li><li><span><a href="#Some-Infrastructure" data-toc-modified-id="Some-Infrastructure-1.3">Some Infrastructure</a></span></li><li><span><a href="#The-Simplest:-the-Inverter" data-toc-modified-id="The-Simplest:-the-Inverter-1.4">The Simplest: the Inverter</a></span></li><li><span><a href="#The-Universal:-the-NAND-Gate" data-toc-modified-id="The-Universal:-the-NAND-Gate-1.5">The Universal: the NAND Gate</a></span></li><li><span><a href="#One-or-the-Other:-the-XOR-Gate" data-toc-modified-id="One-or-the-Other:-the-XOR-Gate-1.6">One or the Other: the XOR Gate</a></span></li><li><span><a href="#No,-It's-Not-a-Snake:-The-Adder" data-toc-modified-id="No,-It's-Not-a-Snake:-The-Adder-1.7">No, It's Not a Snake: The Adder</a></span></li><li><span><a href="#Fragments-of-Memory:-Latches,-Flip-Flops-and-Registers" data-toc-modified-id="Fragments-of-Memory:-Latches,-Flip-Flops-and-Registers-1.8">Fragments of Memory: Latches, Flip-Flops and Registers</a></span></li><li><span><a href="#The-Simplest-State-Machine:-the-Counter" data-toc-modified-id="The-Simplest-State-Machine:-the-Counter-1.9">The Simplest State Machine: the Counter</a></span></li><li><span><a href="#Bonus:-an-ALU" data-toc-modified-id="Bonus:-an-ALU-1.10">Bonus: an ALU</a></span></li><li><span><a href="#Extra-Bonus:-a-Down-Counter" data-toc-modified-id="Extra-Bonus:-a-Down-Counter-1.11">Extra Bonus: a Down Counter</a></span></li><li><span><a href="#End-of-the-Line" data-toc-modified-id="End-of-the-Line-1.12">End of the 
Line</a></span></li></ul></li></ul></div>
# From Transistors to ALUs With Skywater
[Google and SkyWater Technology](https://woset-workshop.github.io/PDFs/2020/a03.pdf) are cooperating to provide open-source hardware designers with a way to build custom ASICs. Part of this effort is the release of the [SkyWater PDK](https://github.com/google/skywater-pdk) which describes the parameters for a 130nm CMOS process. I thought it might be fun to simulate a few logic gates in SPICE using this PDK.
## Installing Some Tools
[This repo](https://github.com/bluecmd/learn-sky130/blob/main/schematic/xschem/getting-started.md) provided some guidance when I first started investigating the PDK, but it uses external tools like [XSCHEM](http://repo.hu/projects/xschem/) (for schematic capture) and [GAW](https://gaw.tuxfamily.org/linux/gaw.php) (for displaying waveforms). To make my work easier to replicate and distribute, I wanted everything to be done in a Jupyter notebook. It wasn't immediately apparent how I would integrate XSCHEM/GAW into a notebook and [schematics shouldn't be used in polite company](https://www.youtube.com/watch?v=WErQYI2A36M&list=PLy2022BX6EsqLkQy1EmXjVnauOH3FSHTV&index=3&t=257s) any way, so I took a more Python-centric approach and used these tools:
* [ngspice](http://ngspice.sourceforge.net/): An open-source SPICE simulator.
* [SciPy bundle](https://www.scipy.org/index.html): General-purpose Python libraries for handling data.
* [SKiDL](https://github.com/xesscorp/skidl): Used to describe circuitry using Python code.
* [PySpice](https://github.com/FabriceSalvaire/PySpice): A Python interface between SKiDL and ngspice.
Pre-built versions of ngspice are available for Windows and MacOS. For Linux, I got the latest ngspice files (version 33) from [here](https://sourceforge.net/projects/ngspice/files/ng-spice-rework/33/), unpacked it into the `ngspice-33` directory, and built it using the instructions in the INSTALL file:
```bash
$ cd ngspice-33
$ mkdir release
$ cd release
$ ../configure --with-x --enable-xspice --disable-debug --enable-cider --with-readline=yes --enable-openmp
$ make 2>&1 | tee make.log
$ sudo make install
```
Installing the SciPy tools was also easy since I already had Python:
```bash
$ pip install matplotlib numpy pandas jupyter
```
I couldn't use the PyPi versions of PySpice and SKiDL because they had to be modified to make them work with the Skywater PDK. If you want to run this notebook, you'll need to install the development versions of those from GitHub:
```bash
$ pip install git+https://github.com/xesscorp/PySpice
$ pip install git+https://github.com/xesscorp/skidl@development
```
Once all these tools were installed, I imported them into this notebook:
```
import pandas as pd # For data frames.
import matplotlib.pyplot as plt # For plotting.
from skidl.pyspice import * # For describing circuits and interfacing to ngspice.
```
## Getting the Skywater PDK
With the tooling in place, it was time to get the Skywater PDK. If I wanted to wait a *long time*, I could install the entire PDK like this:
```bash
$ git clone --recurse-submodules https://github.com/google/skywater-pdk
```
But I don't need *everything*, just the latest SPICE models for the device primitives, so the following command is much quicker:
```bash
$ git clone --recurse-submodules=libraries/sky130_fd_pr/latest https://github.com/google/skywater-pdk
```
Even with a stripped-down repo, there's a lot of stuff in there. The [Skywater documentation](https://skywater-pdk.readthedocs.io/en/latest/index.html) provides some guidance, but there are a lot of sections that just contain **TODO**. Poking about in the PDK led me to these directories of device information files as described [here](https://skywater-pdk.readthedocs.io/en/latest/rules/device-details.html):
```
!ls -F ~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/
```
For my purposes, I only needed a simple NFET and PFET to build some logic gates. I figured 1.8V versions of these would be found in the `nfet_01v8` and `pfet_01v8` subdirectories, but I wasn't expecting all these files:
```
!ls -F ~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/nfet_01v8
```
After some further poking about and reading, I categorized the files as follows:
* Any file ending in `.cdl` seems to be associated with Cadence. Since I'll never have the money to run their software, I can ignore these and the associated `.tsv` files.
* Files with names containing `_tt_`, `_ff_`, `_fs_`, `_sf_`, or `_ss_` refer to *[process corners](http://anysilicon.com/understanding-process-corner-corner-lots/)* where variations in the semiconductor fabrication process lead to NMOS/PMOS transistors that have typical, fast or slow switching characteristics.
* Files ending with `.pm3.spice` contain most of the device parameters for a particular corner. These files are included inside an associated `corner.spice` file that makes a small adjustment to the parameters. Both these files are used for simulating transistors at a given process corner.
* Files ending with `leak.pm3.spice` and `leak.corner.spice` are similar to the previous files, but are probably used for simulating transistor leakage currents.
* Many of the `.spice` files contain multiple transistor models with differing parameters that are dependent on the length and width of the transistor gate. The file ending with `.bins.csv` contains a list of the supported transistor sizes. Some of the 1.8V NMOS and PMOS transistor dimensions are shown below:
```
import pandas as pd
nfet_sizes = pd.read_table("~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/nfet_01v8/sky130_fd_pr__nfet_01v8.bins.csv", delimiter=",")
pfet_sizes = pd.read_table("~/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/cells/pfet_01v8/sky130_fd_pr__pfet_01v8.bins.csv", delimiter=",")
pd.concat((nfet_sizes, pfet_sizes), axis=1)
```
Knowing where the device files are and what they contain is great, but how do I actually *use* them? It turns out there is a master library file: `skywater-pdk/libraries/sky130_fd_pr/latest/models/sky130.lib.spice`. Internally, this file is divided into nine *sections* that cover the five process corners (`tt`, `ff`, `fs`, `sf`, `ss`) as well as four additional corners for the low/high variations in resistor and capacitor values (`hh`, `hl`, `lh`, `ll`). I'm only interested in using some typical FETs, so I can load that section into a SKiDL library like so:
```
# Select a particular corner using tt, ff, fs, sf, ss, hh, hl, lh, ll.
corner = "tt" # Use typical transistor models.
# Create a SKiDL library for the Skywater devices at that process corner.
sky_lib = SchLib(
"/home/devb/tmp/skywater-pdk/libraries/sky130_fd_pr/latest/models/sky130.lib.spice",
lib_section=corner, # Load the transistor models for this corner.
recurse=True, # The master lib includes sublibraries, so recurse thru them to load everything.
)
print(sky_lib) # Print the list of devices in the library.
```
The progress messages indicate a few errors in the libraries (maybe they're corrected by now), but I'm not using those particular devices so I'm not going to worry about it. From the list above, I can pick out the 1.8V general-purpose transistors I want, but I also need to specify their gate dimensions so the right model gets loaded. I picked out a small NMOS FET, and then a PMOS FET that's 3x the width. (That's because I learned in my 1983 VLSI class that PMOS transistors have 3x the resistance-per-square of NMOS ones, so make them wider to make the current-driving capability about the same in each.)
```
nfet_wl = Parameters(W=0.42, L=0.15)
pfet_wl = Parameters(W=1.26, L=0.15) # 3x the width of the NMOS FET.
```
Now I can extract the NMOS and PMOS transistors from the library and use them to build logic gates:
```
nfet = Part(sky_lib, "sky130_fd_pr__nfet_01v8", params=nfet_wl)
pfet = Part(sky_lib, "sky130_fd_pr__pfet_01v8", params=pfet_wl)
```
## Some Infrastructure
Before I start building gates, there's some stuff that I'll use over and over to test the circuitry. The first of these is an *oscilloscope function* that takes a complete set of waveform data from a simulation and plots selected waveforms from it:
```
disp_vmin, disp_vmax = -0.4@u_V, 2.4@u_V
disp_imin, disp_imax = -10@u_mA, 10@u_mA
def oscope(waveforms, *nets_or_parts):
    """
    Plot selected waveforms as a stack of individual traces.

    Args:
        waveforms: Complete set of waveform data from ngspice simulation.
        nets_or_parts: SKiDL Net or Part objects that correspond to individual waveforms.

    The voltage/current limits of each trace come from the global disp_* values above.
    """

    # Determine if this is a time-series plot, or something else.
    try:
        x = waveforms.time  # Sample times are used for the data x coord.
    except AttributeError:
        # Use the first Net or Part data to supply the x coord.
        nets_or_parts = list(nets_or_parts)
        x_node = nets_or_parts.pop(0)
        x = waveforms[node(x_node)]

    # Create separate plot traces for each selected waveform.
    num_traces = len(nets_or_parts)
    trace_hgt = 1.0 / num_traces
    fig, axes = plt.subplots(nrows=num_traces, sharex=True, squeeze=False,
                             subplot_kw=None, gridspec_kw=None)
    traces = axes[:, 0]

    # Set the X axis label on the bottom-most trace.
    if isinstance(x.unit, SiUnits.Second):
        xlabel = 'Time (S)'
    elif isinstance(x.unit, SiUnits.Volt):
        xlabel = x_node.name + ' (V)'
    elif isinstance(x.unit, SiUnits.Ampere):
        xlabel = x_node.ref + ' (A)'
    traces[-1].set_xlabel(xlabel)

    # Set the Y axis label position for each plot trace.
    trace_ylbl_position = dict(rotation=0,
                               horizontalalignment='right',
                               verticalalignment='center',
                               x=-0.01)

    # Plot each Net/Part waveform in its own trace.
    for i, (net_or_part, trace) in enumerate(zip(nets_or_parts, traces), 1):
        y = waveforms[node(net_or_part)]  # Extract the waveform data.
        # Set the Y axis label depending upon whether data is voltage or current.
        if isinstance(y.unit, SiUnits.Volt):
            trace.set_ylim(float(disp_vmin), float(disp_vmax))
            trace.set_ylabel(net_or_part.name + ' (V)', trace_ylbl_position)
        elif isinstance(y.unit, SiUnits.Ampere):
            trace.set_ylim(float(disp_imin), float(disp_imax))
            trace.set_ylabel(net_or_part.ref + ' (A)', trace_ylbl_position)
        # Set position of trace within stacked traces.
        trace.set_position([0.1, (num_traces - i) * trace_hgt, 0.8, trace_hgt])
        # Place grid on X axis.
        trace.grid(axis='x', color='orange', alpha=1.0)
        # Plot the waveform data.
        trace.plot(x, y)
```
In addition to an oscilloscope, every electronics bench has a *signal generator*. For my purposes, I only need a simple function that generates one or more square waves whose frequencies decrease by a factor of two. (The collection of square waves looks like the output of a binary *counter*, hence the name.)
```
default_freq = 500@u_MHz  # Specify a default frequency so it doesn't need to be set every time.

def cntgen(*bits, freq=default_freq):
    """
    Generate one or more square waves varying in frequency by a factor of two.

    Args:
        bits: One or more Net objects, each of which will carry a square wave.
        freq: Frequency of the fastest square wave.
    """
    bit_period = 1.0/freq
    for bit in bits:
        # Create a square-wave pulse generator with the current period.
        pulse = PULSEV(initial_value=vdd_voltage, pulsed_value=0.0@u_V,
                       pulse_width=bit_period/2, period=bit_period)
        # Attach the pulse generator between ground and the net that carries the square wave.
        gnd & pulse["n, p"] & bit
        # Double the period (halve the frequency) for each successive bit.
        bit_period = 2 * bit_period
```
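The period-doubling that `cntgen` performs is easy to verify standalone; this sketch uses plain floats (no PySpice units) for the first three bits at the 500 MHz default:

```python
# Periods of the square waves cntgen would generate for 3 bits at 500 MHz.
freq_hz = 500e6
period_ns = 1.0 / freq_hz * 1e9  # fastest bit: 2 ns
periods_ns = []
for _ in range(3):
    periods_ns.append(period_ns)
    period_ns *= 2  # each successive bit toggles at half the previous rate

print(periods_ns)  # fastest to slowest: 2 ns, 4 ns, 8 ns
```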
All the circuits in this notebook will run from a 1.8V supply, so the following function instantiates a global power supply and a $V_{dd}$ voltage rail for them to use:
```
default_voltage = 1.8@u_V  # Specify a default supply voltage.

def pwr(voltage=default_voltage):
    """
    Create a global power supply and voltage rail.
    """
    # Clear any pre-existing circuitry. (Start with a clean slate.)
    reset()
    # Global variables for the power supply and voltage rail.
    global vdd_ps, vdd, vdd_voltage
    # Create a power supply and attach it between the Vdd rail and ground.
    vdd_voltage = voltage
    vdd_ps = V(ref="VDD_SUPPLY", dc_value=vdd_voltage)
    vdd = Net("Vdd")
    vdd & vdd_ps["p, n"] & gnd
```
Finally, here are some convenience functions that 1) generate a netlist from the SKiDL code and use that to create a PySpice simulator object, 2) use the simulator object to perform a DC-level analysis, 3) use the simulator to perform a transient analysis, and 4) count the number of transistors in a circuit.
```
get_sim = lambda : generate_netlist().simulator() # Compile netlist & create simulator.
do_dc = lambda **kwargs: get_sim().dc(**kwargs) # Run a DC-level analysis.
do_trans = lambda **kwargs: get_sim().transient(**kwargs) # Run a transient analysis.
def how_big(circuit=default_circuit):
from collections import defaultdict
parts = defaultdict(lambda: 0)
for p in circuit.parts:
parts[p.name] += 1
for part_name, num_parts in parts.items():
print(f"{part_name}: {num_parts}")
```
With the infrastructure in place, I can begin building logic gates, starting from the simplest one I know.
## The Simplest: the Inverter
Here's the gate-level schematic for a CMOS inverter:

And this is its SKiDL version:
```
@package
def inverter(a=Net(), out=Net()):
# Create the NFET and PFET transistors.
qp, qn = pfet(), nfet()
# Attach the NFET substrate to ground and the PFET substrate to Vdd.
gnd & qn.b
vdd & qp.b
# Connect Vdd through the PFET source-to-drain on to the output node.
# From the output node, connect through the NFET drain-to-source to ground.
vdd & qp["s,d"] & out & qn["d,s"] & gnd
# Attach the input to the NFET and PFET gate terminals.
a & qn.g & qp.g
```
First, I'll test the inverter's transfer function by attaching a voltage ramp to its input and see when the output transitions. (For those playing at home, you may notice the SPICE simulations take a minute or two to run. These transistor models are *complicated*.)
```
pwr() # Apply power to the circuitry.
inv = inverter() # Create an inverter.
# Attach a voltage source between ground and the inverter's input.
# Then attach the output to a net.
gnd & V(ref="VIN", dc_value=0.0@u_V)["n, p"] & Net("VIN") & inv["a, out"] & Net("VOUT")
# Do a DC-level simulation while ramping the voltage source from 0 to Vdd.
vio = do_dc(VIN=slice(0, vdd_voltage, 0.01))
# Plot the inverter's output against its input.
oscope(vio, inv.a, inv.out)
```
For a low-level input, the inverter's output is high and vice-versa as expected. From the shape of the transfer curve, I'd estimate the inverter's trigger point is around 0.8V.
It's also interesting to look at the current draw of the inverter as the input voltage ramps up:
```
# Add a trace for the Vdd power supply current.
disp_imin, disp_imax = -15@u_uA, 1@u_uA
oscope(vio, inv.a, inv.out, vdd_ps)
```
As we learned in our textbooks so long ago, the quiescent current for CMOS logic is near zero but surges as the input voltage goes through the transition zone where both transistors are ON. For this inverter, the current maxes out at about 13 $\mu$A at the trigger point.
It's equally easy to do a transient analysis of the inverter as it receives an input that varies over time:
```
pwr()
# Connect a 500 MHz square wave to net A.
a = Net("A")
cntgen(a)
# Pump the square wave through an inverter.
inv = inverter()
a & inv["a, out"] & Net("A_BAR")
# Do a transient analysis and look at the timing between input and output.
waveforms = do_trans(step_time=0.01@u_ns, end_time=3.5@u_ns)
oscope(waveforms, a, inv.out)
```
There is a bit of ringing on the inverter's output but no appreciable propagation delay, probably because there is no real load on the output. In order to get more delay, I'll cascade thirty inverters together and look at the output of the last one:
```
pwr()
a = Net("A")
cntgen(a)
# Create a list of 30 inverters.
invs = [inverter() for _ in range(30)]
# Attach the square wave to the first inverter in the list.
a & invs[0].a
# Go through the list, attaching the input of each inverter to the output of the previous one.
for i in range(1, len(invs)):
invs[i-1].out & invs[i].a
# Attach the output of the last inverter to the output net.
invs[-1].out & Net("A_DELAY")
# Do a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=3.5@u_ns)
oscope(waveforms, a, invs[-1].out)
```
Thirty cascaded inverters create a total delay of around 0.65 ns, so each inverter contributes about 20 ps. This simulation doesn't include things like wiring delays, so don't get your hopes up about running at 50 GHz.
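The per-inverter figure is just arithmetic on the numbers above. A quick sanity check (both values taken from the simulation results described in the text):

```python
# Sanity check of the per-inverter delay quoted above.
total_delay = 0.65e-9   # seconds through the 30-inverter chain
num_inverters = 30

per_inverter = total_delay / num_inverters
print(f"{per_inverter * 1e12:.1f} ps per inverter")  # → 21.7 ps per inverter
```

which agrees with the roughly 20 ps estimate.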
Finally, just to test the `how_big` function, let's see how many transistors are in 30 inverters:
```
how_big()
```
Thirty NMOS and thirty PMOS transistors. We're good to go.
## The Universal: the NAND Gate
They say if you have a NAND gate, you have it all (if all you want is combinational logic, which seems a bit limited). Here's the schematic for one:

And this is its SKiDL version:
```
@package
def nand(a=Net(), b=Net(), out=Net()):
# Create the PFET and NFET transistors.
q1, q2 = pfet(2)
q3, q4 = nfet(2)
# Connect the PFET/NFET substrates to Vdd/gnd, respectively.
vdd & q1.b & q2.b
gnd & q3.b & q4.b
# Go from Vdd through a parallel-pair of PFETs to the output and then
# through a series-pair of NFETs to ground.
vdd & (q1["s,d"] | q2["s,d"]) & out & q3["d,s"] & q4["d,s"] & gnd
# Connect the pair of inputs to the gates of the transistors.
a & q1.g & q3.g
b & q2.g & q4.g
```
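Before simulating, it's worth writing down what the gate *should* do. This is just a behavioral truth-table model in plain Python (not the transistor netlist above), for reference:

```python
from itertools import product

def nand_gate(a: int, b: int) -> int:
    """Behavioral NAND: output goes low only when both inputs are high."""
    return 0 if (a and b) else 1

# The truth table the transient analysis should reproduce.
for a, b in product((0, 1), repeat=2):
    print(a, b, "->", nand_gate(a, b))
```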
Like with the inverter, I'll do a transient analysis but using two square waves to drive both NAND inputs:
```
pwr()
a, b, out = Net("A"), Net("B"), Net("OUT")
# Create two square waves: a at 500 MHz and b at 250 MHz.
cntgen(a, b)
# Create a NAND gate and connect its I/O to the nets.
nand()["a, b, out"] += a, b, out
# Perform a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=10@u_ns)
oscope(waveforms, a, b, out)
```
The NAND gate output only goes low when both inputs are high, as expected. Ho hum.
## One or the Other: the XOR Gate
Continuing on, here is the last combinational gate I'll do: the exclusive-OR. There's nothing really new here that you haven't already seen with the NAND gate, just more of it.

```
@package
def xor(a=Net(), b=Net(), out=Net()):
# Create eight transistors: four NFETs and four PFETs.
qn_a, qn_ab, qn_b, qn_bb = nfet(4)
qp_a, qp_ab, qp_b, qp_bb = pfet(4)
# Connect the substrates of the transistors.
vdd & qp_a.b & qp_ab.b & qp_b.b & qp_bb.b
gnd & qn_a.b & qn_ab.b & qn_b.b & qn_bb.b
# Create the two parallel "legs" of series PFETs-NFETs with a
# common output node in the middle.
vdd & qp_ab["s,d"] & qp_b["s,d"] & out & qn_a["d,s"] & qn_b["d,s"] & gnd
vdd & qp_a["s,d"] & qp_bb["s,d"] & out & qn_ab["d,s"] & qn_bb["d,s"] & gnd
# Create two inverters to get the complements of both inputs.
ab, bb = inverter(), inverter()
ab.a += a
bb.a += b
# Attach the two inputs and their complements to the transistor gates.
a & qp_a.g & qn_a.g
ab.out & qp_ab.g & qn_ab.g
b & qp_b.g & qn_b.g
bb.out & qp_bb.g & qn_bb.g
pwr()
a, b, out = Net("A"), Net("B"), Net("OUT")
cntgen(a, b)
xor()["a, b, out"] += a, b, out
waveforms = do_trans(step_time=0.01@u_ns, end_time=10@u_ns)
oscope(waveforms, a, b, out)
```
The output only goes high when the inputs have opposite values, so the XOR gate is working correctly.
## No, It's Not a Snake: The Adder
Finally I've reached the level of abstraction where individual transistors aren't needed. I can use the gates I've already built to construct new stuff, like this [full-adder bit](https://www.geeksforgeeks.org/full-adder-in-digital-logic/):

```
@package
def full_adder(a=Net(), b=Net(), cin=Net(), s=Net(), cout=Net()):
# Use two XOR gates to compute the sum bit.
ab_sum = Net() # Net to carry the intermediate result of a+b.
xor()["a,b,out"] += a, b, ab_sum # Compute ab_sum=a+b
xor()["a,b,out"] += ab_sum, cin, s # Compute s=a+b+cin
# Through the magic of DeMorgan's Theorem, the AND-OR carry circuit
# can be done using three NAND gates.
nand1, nand2, nand3 = nand(), nand(), nand()
nand1["a,b"] += ab_sum, cin
nand2["a,b"] += a, b
nand3["a,b,out"] += nand1.out, nand2.out, cout
```
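The DeMorgan step in the comment above is easy to verify exhaustively with a behavioral model (plain Python, not SPICE): NAND(NAND(a⊕b, cin), NAND(a, b)) must equal the textbook AND-OR carry for all eight input combinations.

```python
from itertools import product

def nand(x: int, y: int) -> int:
    return int(not (x and y))

def carry_and_or(a, b, cin):
    # Textbook carry: cout = a·b + (a⊕b)·cin
    return int((a and b) or ((a ^ b) and cin))

def carry_three_nand(a, b, cin):
    # The three-NAND version used in full_adder() above.
    ab_sum = a ^ b  # output of the first XOR gate
    return nand(nand(ab_sum, cin), nand(a, b))

for a, b, cin in product((0, 1), repeat=3):
    assert carry_and_or(a, b, cin) == carry_three_nand(a, b, cin)
print("All 8 combinations agree.")
```

By DeMorgan, NOT(NOT(x)·NOT(y)) = x + y, so the outer NAND turns the two inner NANDs into the OR of two AND terms.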
I'll use a `cntgen()` with three outputs to apply all eight input combinations to the full-adder:
```
pwr()
# Generate nets for the inputs and outputs.
a, b, cin, s, cout = Net("A"), Net("B"), Net("CIN"), Net("S"), Net("COUT")
# Drive the A, B and CIN full-adder inputs with all eight combinations.
cntgen(a, b, cin)
# Connect the I/O nets to the full-adder.
full_adder()["a, b, cin, s, cout"] += a, b, cin, s, cout
# Do a transient analysis.
waveforms = do_trans(step_time=0.01@u_ns, end_time=8@u_ns)
oscope(waveforms, a, b, cin, s, cout)
```
The sum and carry-out bits of the full-adder match the truth-table for all the combinations of A, B and the carry input.
Now I'll combine multiple full-adders to build a multi-bit adder:
```
@subcircuit
def adder(a, b, cin, s, cout):
# a, b and s are multi-bit buses. The width of the adder will
# be determined by the length of the sum output.
width = len(s)
# Create a list of full-adders equal to the width of the sum output.
fadds = [full_adder() for _ in range(width)]
# Iteratively connect the full-adders to the input and output bits.
for i in range(width):
# Connect the i'th full adder to the i'th bit of a, b and s.
fadds[i]["a, b, s"] += a[i], b[i], s[i]
if i == 0:
# Connect the carry input to the first full-adder.
fadds[i].cin += cin
else:
# Connect the carry input of the rest of the full-adders
# to the carry output from the previous one.
fadds[i].cin += fadds[i-1].cout
# Connect the carry output to the carry output from the last bit of the adder.
cout += fadds[-1].cout
```
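Before simulating, it helps to have a pure-Python reference model of the same ripple-carry structure (LSB-first bit lists stand in for the buses; this is a behavioral sketch, not SKiDL code):

```python
def full_adder_bits(a: int, b: int, cin: int):
    """One full-adder stage: returns (sum bit, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

def ripple_add(a_bits, b_bits, cin):
    """Ripple-carry add of two LSB-first bit lists, mirroring adder() above."""
    s_bits = []
    for a, b in zip(a_bits, b_bits):
        s, cin = full_adder_bits(a, b, cin)  # carry ripples into the next stage
        s_bits.append(s)
    return s_bits, cin

# 2-bit example: 3 + 2 with no carry-in is 5, i.e. sum bits [1, 0] and carry-out 1.
s, cout = ripple_add([1, 1], [0, 1], 0)
print(s, cout)  # → [1, 0] 1
```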
I'll instantiate a two-bit adder and test it with all 32 input combinations of A$_0$, A$_1$, B$_0$, B$_1$, and C$_{in}$:
```
pwr()
# Create the two-bit input and output buses and the carry input & output nets.
w = 2
a, b, cin, s, cout = Bus("A",w), Bus("B",w), Net("CIN"), Bus("S",w), Net("COUT")
# Drive the A0, A1, B0, B1, and CIN inputs with a five-bit counter.
cntgen(*a, *b, cin)
# Connect the I/O to an adder.
adder(a, b, cin, s, cout)
# Do a transient analysis
waveforms = do_trans(step_time=0.01@u_ns, end_time=32@u_ns)
oscope(waveforms, *a, *b, cin, *s, cout)
```
The outputs *look* like they might be correct, but I'm not going to waste my time trying to eyeball it when Python can do that. The following code subsamples the waveforms and converts them into a table of integers for the adder's inputs and outputs:
```
def integerize(waveforms, *nets, threshold=0.9@u_V):
"""
Convert a set of N waveforms to a stream of N-bit integer values.
Args:
waveforms: Waveform data from ngspice.
nets: A set of nets comprising a digital word.
threshold: Voltage threshold for determining if a waveform value is 1 or 0.
Returns:
A list of integer values, one for each sample time in the waveform data.
"""
def binarize():
"""Convert multiple waveforms into streams of ones and zeros."""
binary_vals = []
for net in nets:
binary_vals.append([v > threshold for v in waveforms[node(net)]])
return binary_vals
# Convert the waveforms into streams of bits, then combine the bits into integers.
int_vals = []
for bin_vector in zip(*reversed(binarize())):
int_vals.append(int(bytes([ord('0')+b for b in bin_vector]), base=2))
return int_vals
def subsample(subsample_times, sample_times, *int_waveforms):
"""
Take a subset of samples from a set of integerized waveforms at a set of specific times.
Args:
subsample_times: A list of times (in ascending order) at which to take subsamples.
sample_times: A list of times (in ascending order) for when each integerized sample was taken.
int_waveforms: List of integerized waveform sample lists.
Returns:
A list of subsample lists.
"""
# Create a list of the empty lists to hold the subsamples from each integerized waveform.
subsamples = [[] for _ in int_waveforms]
# Get the first subsample time.
subsample_time = subsample_times.pop(0)
# Step through the sample times, looking for the time to take a subsample.
for sample_time, *samples in zip(sample_times, *int_waveforms):
# Take a subsample whenever the sample time is less than the current subsample time.
if sample_time > subsample_time:
# Store a subsample from each waveform.
for i, v in enumerate(samples):
subsamples[i].append(v)
# Get the next subsample time and break from loop if there isn't one.
try:
subsample_time = subsample_times.pop(0)
except IndexError:
break
return subsamples
# Convert the waveforms for A, B, Cin, S, and Cout into lists of integers.
a_ints = integerize(waveforms, *a)
b_ints = integerize(waveforms, *b)
cin_ints = integerize(waveforms, cin)
# Combine the N-bit sum and carry-out into a single N+1-bit integer.
s_ints = integerize(waveforms, *s, cout)
# Set the subsample times just before the adder's inputs change.
ts = [(i+0.9)@u_ns for i in range(32)]
# Subsample the integerized adder waveforms.
av, bv, cinv, sv = subsample(ts, waveforms.time, a_ints, b_ints, cin_ints, s_ints)
# Display a table of the adder's inputs and corresponding output.
pd.DataFrame({'A': av, 'B': bv, 'CIN': cinv, 'S': sv})
```
That's better, but even checking all the table entries is too much work, so I'll write a little code to do that:
```
error_flag = False
for a, b, cin, s in zip(av, bv, cinv, sv):
if a+b+cin != s:
print(f"ERROR: {a}+{b}+{cin} != {s}")
error_flag = True
if not error_flag:
print("No errors found.")
```
OK, at this point I'm convinced I have a working two-bit adder. And I can make any size adder I want just by changing the input and output bus widths.
Onward!
## Fragments of Memory: Latches, Flip-Flops and Registers
Cross-coupled logic gates like this [dynamic master-slave flip-flop](http://ece-research.unm.edu/jimp/vlsi/slides/chap5_2.html) are often used for storing bits:

A problem with this circuit is the use of NMOS FETs as pass gates for the input and the feedback latch. Because I'm using a 1.8V supply, any logic-high signal passing through an NMOS FET is reduced by its roughly 0.6V threshold voltage to around 1.2V. While this is workable, it does increase the propagation delay. Therefore, I built a transmission gate that places a PMOS FET in parallel with the NMOS FET, with their gates driven by complementary signals. This lets signals pass through without being degraded by the threshold voltage.
<a id="tx_gate"/>
```
@package
def tx_gate(i, g, g_b, o):
"""NMOS/PMOS transmission gate. When g is high and g_b is low, i and o are connected."""
# NMOS and PMOS transistors for passing input to output.
qn, qp = nfet(), pfet()
# Transistor substrate connections.
gnd & qn.b
vdd & qp.b
# Parallel NMOS/PMOS transistors between the input and output.
i & (qn["s,d"] | qp["s,d"]) & o
# Connect the gate input to the NMOS and the complement of the gate input
# to the PMOS. Both transistors will conduct when the gate input is high,
# and will block the input from the output when the gate input is low.
g & qn.g
g_b & qp.g
```
The SKiDL implementation for half of this flip-flop creates a latch that allows data to enter and pass through when the write-enable is active, and then latches the data bit with a feedback gate when the write-enable is not asserted:
```
@package
def latch_bit(wr=Net(), wr_b=Net(), d=Net(), out_b=Net()):
in_tx, fb_tx = tx_gate(), tx_gate()
in_inv, fb_inv = inverter(), inverter()
# Input data comes in through the input gate, goes through an inverter to the data output.
d & in_tx["i,o"] & in_inv["a, out"] & out_b
# The data output is fed back through another inverter and transmission gate to the input inverter.
out_b & fb_inv["a, out"] & fb_tx["i,o"] & in_inv.a # Feed output back to input.
# wr activates the input gate and deactivates the feedback gate, allowing data into the latch.
wr & in_tx.g & fb_tx.g_b
# Complement of wr deactivates the input gate and activates the feedback gate, latching the data.
wr_b & in_tx.g_b & fb_tx.g
```
By cascading two of these latches, I arrive at the complete flip-flop:
```
@package
def ms_ff(wr=Net(), d=Net(), out=Net()):
# Create the master and slave latches.
master, slave = latch_bit(), latch_bit()
# Data passes from the input through the master to the slave latch and then to the output.
d & master["d, out_b"] & slave["d, out_b"] & out
# Data continually enters the master latch when the write-enable is low, but gets
# latched when the write-enable goes high.
wr & inverter()["a, out"] & master.wr & slave.wr_b
# Data from the master passes through the slave when the write-enable goes high, and
# this data stays stable in the slave when the write-enable goes low and new data
# is entering the master.
wr & slave.wr & master.wr_b
```
A simple test shows the flip-flop retains data and the output only changes upon the rising edge of the write-enable (after a small propagation delay):
```
pwr()
wr, d, out = Net('WR'), Net('D'), Net('OUT')
cntgen(wr, d)
ms_ff()["wr, d, out"] += wr, d, out
waveforms = do_trans(step_time=0.01@u_ns, end_time=8@u_ns)
oscope(waveforms, wr, d, out)
```
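The behavior seen in the waveforms — the output changes only on a rising write-enable edge, taking the value the master tracked while the write-enable was low — can be summarized in a small behavioral model (a sketch of the intended semantics, not the transistor-level circuit):

```python
class MSFlipFlop:
    """Behavioral master-slave flip-flop: out updates on the rising edge of wr."""

    def __init__(self):
        self.master = 0     # value tracked while wr is low
        self.out = 0
        self.prev_wr = 0

    def tick(self, wr: int, d: int) -> int:
        if wr and not self.prev_wr:
            self.out = self.master  # rising edge: slave latches the master's value
        if not wr:
            self.master = d         # wr low: master transparently follows d
        self.prev_wr = wr
        return self.out

ff = MSFlipFlop()
print([ff.tick(wr, d) for wr, d in [(0, 1), (1, 0), (1, 0), (0, 0), (1, 1)]])
# → [0, 1, 1, 1, 0]
```

Note how the output holds steady while `wr` stays high or low, and only picks up a new value on the low-to-high transition.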
Once I have a basic flip-flop, it's easy to build multi-bit registers:
```
@subcircuit
def register(wr, d, out):
# Create a flip-flop for each bit in the output bus.
reg_bits = [ms_ff() for _ in out]
# Connect the inputs and outputs to the flip-flops.
for i, rb in enumerate(reg_bits):
rb["wr, d, out"] += wr, d[i], out[i]
```
## The Simplest State Machine: the Counter
With both an adder and a register in hand, a counter is the obvious next step:
```
@subcircuit
def cntr(clk, out):
# Create two buses: one for the next counter value, and one that's all zero bits.
width = len(out)
nxt, zero = Bus(width), Bus(width)
# Provide access to the global ground net.
global gnd
# Connect all the zero bus bits to ground (that's why it's zero).
gnd += zero
# The next counter value is the current counter value plus 1. Set the
# adder's carry input to 1 and the b input to zero to do this.
adder(a=out, b=zero, cin=vdd, s=nxt, cout=Net())
# Clock the next counter value into the register on the rising clock edge.
register(wr=clk, d=nxt, out=out)
```
Now just give it a clock and watch it go!
```
pwr()
# Generate a clock signal.
clk = Net('clk')
cntgen(clk)
# Create a three-bit counter.
cnt = Bus('CNT', 3)
cntr(clk, cnt)
# Simulate the counter.
waveforms = do_trans(step_time=0.01@u_ns, end_time=30@u_ns)
# In addition to the clock and counter value, also look at the power supply current.
disp_imin, disp_imax = -3@u_mA, 3@u_mA
oscope(waveforms, clk, *cnt, vdd_ps)
```
Looking at the counter bits shows it's obviously incrementing 0, 1, 2, ..., 7, 0, ... The bottom trace shows the pulses of supply current on every clock edge. (Remember that whole current-pulse-during-input-transition thing?) But how much energy is being used? Multiplying the supply current by the supply voltage and summing over time will answer that:
```
time_steps = waveforms.time[1:] - waveforms.time[0:-1]
ps_current = -waveforms[node(vdd_ps)][0:-1] # Mult by -1 to get current FROM the + terminal of the supply.
ps_voltage = waveforms[node(vdd)][0:-1]
energy = sum(ps_current * ps_voltage * time_steps)@u_J
print(f"Total energy = {energy}")
```
As for the total number of transistors in the counter ...
```
how_big()
```
## Bonus: an ALU
An adder is great and all, but that's all it does: adds. Having a module that adds, subtracts, shifts, and performs logical operations is much cooler! That's an *arithmetic logic unit* (ALU).
You might think building an ALU is a lot harder than building an adder, but it's not. It can all be done using an *8-to-1 multiplexer* (mux) as the basic building block:

Now, if you look *real hard* at the circuit above, you'll realize you can smash the sixteen legs of series NMOS/PMOS transistors into just eight legs of series transmission gates like the one I used [above](#tx_gate).
I can build a full-adder bit from a pair of 8-to-1 muxes by passing the A, B, and C$_{in}$ inputs as the selectors, and applying the eight-bit truth-table for the S and C$_{out}$ bits to the input of each mux, respectively. Then I'll combine the full-adder bits to build a complete $N$-bit adder as before.
But I can also build a subtractor, left-shifter, logical-AND, etc., just by changing the truth-table bits that go to each mux. (If you're familiar with [FPGAs](https://en.wikipedia.org/wiki/Field-programmable_gate_array), the mux is essentially the same as their [look-up tables](https://electronics.stackexchange.com/questions/169532/what-is-an-lut-in-fpga).)
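The mux-as-LUT idea is easy to check behaviorally before going to transistors. Here a Python function stands in for the 8-to-1 mux, with the selector bits indexing a truth table ordered by (cin, b, a) — that bit ordering is an assumption of this sketch, chosen to match the selector wiring i0=a, i1=b, i2=cin:

```python
from itertools import product

def mux8(table, i0, i1, i2):
    """8-to-1 mux as a look-up table: the selectors pick one table entry."""
    return table[(i2 << 2) | (i1 << 1) | i0]

# Truth tables for a full adder, indexed by (cin, b, a).
S_ADD = [0, 1, 1, 0, 1, 0, 0, 1]   # sum bit
C_ADD = [0, 0, 0, 1, 0, 1, 1, 1]   # carry-out bit

for cin, b, a in product((0, 1), repeat=3):
    assert mux8(S_ADD, a, b, cin) == a ^ b ^ cin
    assert mux8(C_ADD, a, b, cin) == (a & b) | ((a ^ b) & cin)
print("mux-as-LUT reproduces the full-adder truth table")
```

Swapping in different tables changes the operation without touching the circuit structure, which is exactly what the ALU does.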
The complete SKiDL code for an ALU is shown below. (Much easier to create, thankfully, than tediously drawing the circuit shown above.)
```
@package
def mux8(in_, i0=Net(), i1=Net(), i2=Net(), out=Net()):
# Create the complements of the selection inputs.
i0b, i1b, i2b = Net(), Net(), Net()
i0 & inverter()["a,out"] & i0b
i1 & inverter()["a,out"] & i1b
i2 & inverter()["a,out"] & i2b
out_ = Net() # Output from the eight legs of the mux.
i = 0 # Input bit index.
# Create the eight legs of the mux by nested iteration of the selection inputs
# and their complements. Each leg is turned on by a different combination of inputs.
for i2_g, i2_g_b in ((i2b, i2), (i2, i2b)):
for i1_g, i1_g_b in ((i1b, i1), (i1, i1b)):
for i0_g, i0_g_b in ((i0b, i0), (i0, i0b)):
# Place 3 transmission gates in series from input bit i to output.
i0_gate, i1_gate, i2_gate = tx_gate(), tx_gate(), tx_gate()
in_[i] & i0_gate["i,o"] & i1_gate["i,o"] & i2_gate["i,o"] & out_
# Attach the selection inputs and their complements to the transmission gates.
i0_gate["g, g_b"] += i0_g, i0_g_b
i1_gate["g, g_b"] += i1_g, i1_g_b
i2_gate["g, g_b"] += i2_g, i2_g_b
i = i+1 # Go to the next input bit.
# Run the output through two inverters to restore signal strength.
out_ & inverter()["a, out"] & inverter()["a, out"] & out
@subcircuit
def alu(a, b, cin, s, cout, s_opcode, c_opcode):
"""
Multi-bit ALU with the operation determined by the eight-bit codes
that determine the output from the sum and carry muxes.
"""
width = len(s)
s_bits = [mux8() for _ in range(width)]
c_bits = [mux8() for _ in range(width)]
# For each bit in the ALU...
for i in range(width):
# Connect truth-table bits to the sum and carry mux inputs.
s_bits[i].in_ += s_opcode
c_bits[i].in_ += c_opcode
# Connect inputs to the sum and carry mux selectors.
s_bits[i]["i0, i1"] += a[i], b[i]
c_bits[i]["i0, i1"] += a[i], b[i]
# Connect the carry input of each ALU bit to the carry output of the previous bit.
if i == 0:
s_bits[i].i2 & cin
c_bits[i].i2 & cin
else:
s_bits[i].i2 & c_bits[i-1].out
c_bits[i].i2 & c_bits[i-1].out
# Connect the output bit of each sum mux to the ALU sum output.
s[i] & s_bits[i].out
# Connect the carry output from the last ALU bit.
cout & c_bits[-1].out
```
By setting the sum and carry opcodes appropriately, I can build a subtractor from the ALU:
```
@subcircuit
def subtractor(a, b, cin, s, cout):
"""
Create a subtractor by applying the required opcodes to the ALU.
"""
# Set the opcodes to perform subtraction (a - b - c), so in reality the carry
# is actually a borrow.
# cin b a s cout
# ====================
# 0 0 0 0 0
# 0 0 1 1 0
# 0 1 0 1 1
# 0 1 1 0 0
# 1 0 0 1 1
# 1 0 1 0 0
# 1 1 0 0 1
# 1 1 1 1 1
one = vdd
zero = gnd
s_opcode = Bus(zero, one, one, zero, one, zero, zero, one)
c_opcode = Bus(zero, zero, one, zero, one, zero, one, one)
# Connect the I/O and opcodes to the ALU.
alu(a=a, b=b, cin=cin, s=s, cout=cout, s_opcode=s_opcode, c_opcode=c_opcode)
```
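The opcode bit patterns can be sanity-checked against a reference full-subtractor, again treating each mux as a look-up table indexed by (cin, b, a) as in the truth table above (the bit ordering of `Bus(...)` is assumed here):

```python
from itertools import product

def full_subtract(a: int, b: int, bin_: int):
    """Reference full-subtractor for a - b - bin: returns (difference, borrow-out)."""
    d = a ^ b ^ bin_
    bout = int(((not a) and b) or ((not (a ^ b)) and bin_))
    return d, bout

# Opcode bit i corresponds to selector value i = (cin << 2) | (b << 1) | a.
S_SUB = [0, 1, 1, 0, 1, 0, 0, 1]   # s_opcode bits from the code above
C_SUB = [0, 0, 1, 0, 1, 0, 1, 1]   # c_opcode bits from the code above

for cin, b, a in product((0, 1), repeat=3):
    idx = (cin << 2) | (b << 1) | a
    assert (S_SUB[idx], C_SUB[idx]) == full_subtract(a, b, cin)
print("subtractor opcodes match the full-subtractor truth table")
```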
Now I'll test the subtractor just as I did previously with the adder:
```
pwr()
# Create the two-bit input and output buses and the carry input & output nets.
w = 2
a, b, cin, s, cout = Bus("A",w), Bus("B",w), Net("CIN"), Bus("S",w), Net("COUT")
# Drive the A0, A1, B0, B1, and CIN inputs with a five-bit counter.
cntgen(*a, *b, cin)
# Connect the I/O to the subtractor.
subtractor(a=a, b=b, cin=cin, s=s, cout=cout)
# Do a transient analysis
disp_vmax = 4@u_V
waveforms = do_trans(step_time=0.01@u_ns, end_time=32@u_ns)
# Display the output waveforms.
oscope(waveforms, *a, *b, cin, *s, cout)
# Convert the waveforms for A, B, Cin, S, and Cout into lists of integers.
a_ints = integerize(waveforms, *a)
b_ints = integerize(waveforms, *b)
cin_ints = integerize(waveforms, cin)
# Combine the N-bit sum and carry-out into a single N+1-bit integer.
s_ints = integerize(waveforms, *s, cout)
# Set the subsample times right before the ALU's inputs change.
ts = [(i+0.9)@u_ns for i in range(32)]
# Subsample the integerized ALU waveforms.
av, bv, cinv, sv = subsample(ts, waveforms.time, a_ints, b_ints, cin_ints, s_ints)
# Display a table of the ALU's inputs and corresponding output.
pd.DataFrame({'A': av, 'B': bv, 'CIN': cinv, 'S': sv})
```
## Extra Bonus: a Down Counter
Since I went to the trouble to build a subtractor, it would be a waste if I didn't use it to make a down-counter:
```
@subcircuit
def down_cntr(clk, out):
# Provide access to the global ground net.
global gnd
width = len(out)
nxt, zero = Bus(width), Bus(width)
gnd += zero
# The next counter value is the current counter value minus 1. Set the
# subtractor's borrow input to 1 and the b input to zero to do this.
subtractor(a=out, b=zero, cin=vdd, s=nxt, cout=Net())
register(wr=clk, d=nxt, out=out)
pwr()
clk = Net('clk')
cntgen(clk)
# Create a three-bit down counter.
cnt = Bus('CNT', 3)
down_cntr(clk, cnt)
# Simulate it.
waveforms = do_trans(step_time=0.01@u_ns, end_time=30@u_ns)
oscope(waveforms, clk, *cnt, vdd_ps)
```
From the waveforms, it's obvious the counter is decrementing: 7, 6, 5, ..., 0, 7, ..., so chalk this one up as a win. But how does this compare to the counter I previously built using just an adder?
With regards to energy consumption, this ALU-based counter is about 2x worse (7 pJ compared to 3.3 pJ):
```
time_steps = waveforms.time[1:] - waveforms.time[0:-1]
ps_current = -waveforms[node(vdd_ps)][0:-1] # Mult by -1 to get current FROM the + terminal of the supply.
ps_voltage = waveforms[node(vdd)][0:-1]
energy = sum(ps_current * ps_voltage * time_steps)@u_J
print(f"Total energy = {energy}")
```
And it uses about 2.5x the number of transistors (402 versus 162):
```
how_big()
```
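The comparison figures quoted above reduce to two ratios (numbers taken from the energy printouts and `how_big()` outputs described in the text):

```python
# Ratios for the ALU-based down-counter vs. the adder-based up-counter.
energy_ratio = 7.0 / 3.3        # pJ per run, from the energy printouts
transistor_ratio = 402 / 162    # from the how_big() outputs

print(f"{energy_ratio:.1f}x the energy, {transistor_ratio:.1f}x the transistors")
# → 2.1x the energy, 2.5x the transistors
```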
## End of the Line
OK, well that was fun.
I've gone from extracting a few transistors from the Skywater PDK all the way to constructing an ALU. And I've encapsulated that journey in this notebook so you can do it, too.
Is this the way to go about using the PDK? Possibly, if you need some detailed performance measurements on some portion of your design. But for big digital designs, definitely not. Running this entire notebook takes 8 minutes and all of the designs are under 500 transistors. Using an HDL and a digital simulator would be much faster.
Now for analog designs ... maybe. SPICE lends itself to the analog domain. And I can envision using some of the functions in [`scipy.optimize`](https://docs.scipy.org/doc/scipy/reference/optimize.html) to fine-tune analog circuits for certain performance criteria. But I haven't produced a concrete example of doing this so it remains in the realm of the imaginary. For now.
# Setup
```
import os
from google.colab import drive as gdrive
# @markdown Setup output directory for the models
OUTPUT_DIR = 'Colab/varname/' # @param {type:'string'}
SAVE_ON_GDRIVE = False # @param {type:'boolean'}
if SAVE_ON_GDRIVE:
GDRIVE_ROOT = os.path.abspath('gdrive')
GDRIVE_OUT = os.path.join(GDRIVE_ROOT, 'My Drive', OUTPUT_DIR)
print('[INFO] Mounting Google Drive in {}'.format(GDRIVE_ROOT))
gdrive.mount(GDRIVE_ROOT, force_remount = True)
OUT_PATH = GDRIVE_OUT
else:
OUT_PATH = os.path.abspath(OUTPUT_DIR)
os.makedirs(OUT_PATH, exist_ok = True)
# @markdown Machine setup
# Install java 11
!sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq git openjdk-11-jdk > /dev/null
# Install python 3.7 and pip
!sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq python3.7 python3.7-dev python3.7-venv python3-pip > /dev/null
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 > /dev/null
!python3 -m pip install -q --upgrade pip > /dev/null
# Install pipenv (i.e. a better python package manager).
!pip3 install pipenv -qq > /dev/null
%env PIPENV_QUIET 1
%env PIPENV_VENV_IN_PROJECT 1
%env PIPENV_SKIP_LOCK 1
from IPython.display import clear_output
clear_output()
# @markdown Download code
# Clone the project and cd into it
!git clone --branch master https://github.com/simonepri/varname-seq2seq code
%cd -q code
# Install dependencies
!pipenv install > /dev/null
# @markdown Download the dataset
DATASET = "java-corpora-dataset-obfuscated.tgz" # @param ["java-corpora-dataset-obfuscated.tgz", "java-corpora-dataset.tgz"]
!pipenv run bin src/bin/download_data.py \
--file-name "$DATASET" \
--data-path "data/dataset"
```
# Model training
```
# @markdown Model configs
BATCH_SIZE = 256 # @param {type:'number'}
RNN_CELL = "lstm" # @param ['lstm', 'gru']
RNN_BIDIRECTIONAL = False # @param {type:'boolean'}
RNN_NUL_LAYERS = 1 # @param {type:'number'}
RNN_HIDDEN_SIZE = 256 # @param {type:'number'}
RNN_EMBEDDING_SIZE = 256 # @param {type:'number'}
RNN_TF_RATIO = "auto" # @param {type:'raw'}
INPUT_SEQ_MAX_LEN = 256 # @param {type:'number'}
OUTPUT_SEQ_MAX_LEN = 32 # @param {type:'number'}
# @markdown Run training
RUN_TRAIN = True # @param {type:'boolean'}
TRAIN_RUN_ID = "lstm-256-256-dtf-obf" # @param {type:'string'}
TRAIN_EPOCHS = 35 # @param {type:'number'}
if RUN_TRAIN:
!pipenv run bin src/bin/run_seq2seq.py \
--do-train \
--run-id "$TRAIN_RUN_ID" \
--epochs "$TRAIN_EPOCHS" \
--batch-size "$BATCH_SIZE" \
--rnn-cell "$RNN_CELL" \
--rnn-num-layers "$RNN_NUL_LAYERS" \
--rnn-hidden-size "$RNN_HIDDEN_SIZE" \
--rnn-embedding-size "$RNN_EMBEDDING_SIZE" \
--rnn-tf-ratio "$RNN_TF_RATIO" \
--rnn-bidirectional "$RNN_BIDIRECTIONAL" \
--input-seq-max-length "$INPUT_SEQ_MAX_LEN" \
--output-seq-max-length "$OUTPUT_SEQ_MAX_LEN" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--train-file data/dataset/train.mk.tsv \
--valid-file data/dataset/dev.mk.tsv
```
# Model testing
```
# @markdown Print available models
!ls -Ral "$OUT_PATH"/models
# @markdown Run tests
RUN_TEST = True # @param {type:'boolean'}
TEST_RUN_ID = "lstm-256-256-dtf-obf" # @param {type:'string'}
if RUN_TEST:
!pipenv run bin src/bin/run_seq2seq.py \
--do-test \
--run-id "$TEST_RUN_ID" \
--batch-size "$BATCH_SIZE" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--test-file data/dataset/test.mk.tsv
!pipenv run bin src/bin/run_seq2seq.py \
--do-test \
--run-id "$TEST_RUN_ID" \
--batch-size "$BATCH_SIZE" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--test-file data/dataset/unseen.all.mk.tsv
```
```
%matplotlib inline
import pandas as pd
import geopandas as gpd
import libpysal as lp
import esda
import numpy as np
import matplotlib.pyplot as plt
```
# Case Study: *Gini in a bottle: Income Inequality and the Trump Vote*
#### Read in the table and show the first three rows
```
# %load _solved/solutions/case-trump-vote01.py
pres = gpd.read_file("zip://./data/uspres.zip")
pres.head(3)
```
#### Set the CRS and reproject it into a suitable projection for mapping the contiguous US
*hint: the epsg code useful here is 5070, for Albers equal area conic*
```
# %load _solved/solutions/case-trump-vote02.py
pres = pres.set_crs(epsg=4269)  # NAD83 lat/lon, the CRS the data ships in
pres = pres.to_crs(epsg=5070)   # Albers equal-area conic for the contiguous US
```
#### Plot each year's vote against each other year's vote
In this instance, it also helps to include the line ($y=x$) on each plot, so that it is clearer the directions the aggregate votes moved.
```
# %load _solved/solutions/case-trump-vote03.py
import seaborn as sns
facets = sns.pairplot(data=pres.filter(like='dem_'))
facets.map_offdiag(lambda *arg, **kw: plt.plot((0,1),(0,1), color='k'))
```
#### Show the relationship between the dem two-party vote and the Gini coefficient by county.
```
# %load _solved/solutions/case-trump-vote04.py
import seaborn as sns
facets = sns.pairplot(x_vars=pres.filter(like='dem_').columns,
y_vars=['gini_2015'], data=pres)
```
#### Compute the swings (change in vote from year to year)
```
# %load _solved/solutions/case-trump-vote05.py
pres['swing_2012'] = pres.eval("dem_2012 - dem_2008")
pres['swing_2016'] = pres.eval("dem_2016 - dem_2012")
pres['swing_full'] = pres.eval("dem_2016 - dem_2008")
```
Negative swing means the Democrat voteshare in 2016 (what Clinton won) is lower than Democrat voteshare in 2008 (what Obama won).
So, counties where swing is negative mean that Obama "outperformed" Clinton.
Equivalently, these would be counties where McCain (in 2008) "beat" Trump's electoral performance in 2016.
Positive swing in a county means that Clinton (in 2016) outperformed Obama (in 2008), or, equivalently, that Trump (in 2016) did worse than McCain (in 2008).
The national average swing was around -9% from 2008 to 2016. Further, swing does not directly record who "won" the county, only which direction the county "moved."
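As a quick sanity check of the sign convention on a single made-up county (the numbers are illustrative only, not from the table):

```python
# Democratic two-party voteshares for one hypothetical county
dem_2008, dem_2016 = 0.55, 0.47
swing_full = dem_2016 - dem_2008

# negative swing: Obama (2008) outperformed Clinton (2016) here
print(round(swing_full, 2))  # -0.08
```

Note the county still "moved" toward the Republicans even though nothing here says who actually won it.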
#### Map the swing from 2008 to 2016 alongside the votes in 2008 and 2016:
```
# %load _solved/solutions/case-trump-vote06.py
f,ax = plt.subplots(3,1,
subplot_kw=dict(aspect='equal',
frameon=False),
figsize=(60,15))
pres.plot('dem_2008', ax=ax[0], cmap='RdYlBu')
pres.plot('swing_full', ax=ax[1], cmap='bwr_r')
pres.plot('dem_2016', ax=ax[2], cmap='RdYlBu')
for i, ax_ in enumerate(ax):
    ax_.set_xticks([])
    ax_.set_yticks([])
```
#### Build a spatial weights object to model the spatial relationships between US counties
```
# %load _solved/solutions/case-trump-vote07.py
# libpysal was already imported as lp at the top of the notebook
w = lp.weights.Rook.from_dataframe(pres)
```
Note that this is just one of many valid solutions. But, all the remaining exercises are predicated on using this weight. If you choose a different weight structure, your results may differ.
#### Is swing "contagious"? Do nearby counties tend to swing together?
```
# %load _solved/solutions/case-trump-vote08.py
np.random.seed(1)
moran = esda.moran.Moran(pres.swing_full, w)
print(moran.I)
```
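For intuition, the statistic `esda` reports here can be computed by hand. With mean-centred values $z$ and spatial weights $w_{ij}$, Moran's $I = \frac{n}{S_0}\frac{\sum_{ij} w_{ij} z_i z_j}{\sum_i z_i^2}$, where $S_0$ is the sum of all weights. A minimal, library-free sketch on a made-up four-county "map" (the values and weights are illustrative only):

```python
def morans_i(values, weights):
    """values: list of floats; weights: {i: {j: w_ij}} row-standardized."""
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    s0 = sum(w_ij for row in weights.values() for w_ij in row.values())
    num = sum(weights[i][j] * z[i] * z[j] for i in weights for j in weights[i])
    den = sum(z_i * z_i for z_i in z)
    return (n / s0) * num / den

# four counties in a row; interior counties split weight over two neighbours
w_toy = {0: {1: 1.0},
         1: {0: 0.5, 2: 0.5},
         2: {1: 0.5, 3: 0.5},
         3: {2: 1.0}}

print(morans_i([1.0, 1.0, -1.0, -1.0], w_toy))  # 0.5: neighbours move together
print(morans_i([1.0, -1.0, 1.0, -1.0], w_toy))  # -1.0: neighbours move oppositely
```

Positive values mean nearby counties tend to swing together; negative values mean they move in opposite directions.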
#### Visually show the relationship between places' swing and their surrounding swing, like in a scatterplot.
```
# %load _solved/solutions/case-trump-vote09.py
f = plt.figure(figsize=(6,6))
plt.scatter(pres.swing_full, lp.weights.lag_spatial(w, pres.swing_full))
plt.plot((-.3,.1),(-.3,.1), color='k')
plt.title('$I = {:.3f} \ \ (p < {:.3f})$'.format(moran.I,moran.p_sim))
```
#### Are there any outliers or clusters in swing using a Local Moran's $I$?
```
# %load _solved/solutions/case-trump-vote10.py
np.random.seed(11)
lmos = esda.moran.Moran_Local(pres.swing_full, w,
permutations=70000) #min for a bonf. bound
(lmos.p_sim <= (.05/len(pres))).sum()
```
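The `permutations=70000` figure above is tied to the Bonferroni bound: with $k$ permutations, the smallest achievable pseudo p-value is $1/(k+1)$, and it must not exceed the per-test cutoff $\alpha/n$. A quick check (the county count of 3,500 is an assumption for illustration, not read from the table):

```python
import math

def min_permutations(alpha, n_tests):
    # need 1/(k + 1) <= alpha / n_tests  =>  k >= n_tests / alpha - 1
    return math.ceil(n_tests / alpha) - 1

print(min_permutations(0.05, 3500))  # 69999, i.e. the ~70,000 used above
```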
#### Where are these outliers or clusters?
```
# %load _solved/solutions/case-trump-vote11.py
f = plt.figure(figsize=(10,4))
ax = plt.gca()
ax.set_aspect('equal')
is_weird = lmos.p_sim <= (.05/len(pres))
pres.plot(color='lightgrey', ax=ax)
pres.assign(quads=lmos.q)[is_weird].plot('quads',
legend=True,
k=4, categorical=True,
cmap='bwr_r', ax=ax)
```
#### Can you focus in on the regions which are outliers?
```
# %load _solved/solutions/case-trump-vote12.py
f = plt.figure(figsize=(10,4))
ax = plt.gca()
ax.set_aspect('equal')
is_weird = lmos.p_sim <= (.05/len(pres))
pres.assign(quads=lmos.q)[is_weird].plot('quads',
legend=True,
k=4, categorical=True,
cmap='bwr_r', ax=ax)
bounds = ax.axis()
pres.plot(color='lightgrey', ax=ax, zorder=-1)
ax.axis(bounds)
```
Group 3 moves surprisingly strongly from Obama to Trump relative to its surroundings, and group 1 moves strongly from Obama to Hillary relative to its surroundings.
Group 4 moves surprisingly away from Trump while its surroundings move towards Trump. Group 2 moves surprisingly towards Trump while its surroundings move towards Hillary.
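The `quads` values plotted above follow PySAL's Moran-scatterplot convention: 1 = high-high, 2 = low-high, 3 = low-low, 4 = high-low, determined by the signs of the centred value and its spatial lag. A minimal sketch of that labelling rule:

```python
def moran_quadrant(z, lag_z):
    """z and lag_z are the mean-centred value and its spatial lag."""
    if z > 0:
        return 1 if lag_z > 0 else 4  # HH cluster, or HL outlier
    return 2 if lag_z > 0 else 3      # LH outlier, or LL cluster

print(moran_quadrant(0.1, 0.2), moran_quadrant(-0.1, -0.2))  # 1 3
```

Quadrants 2 and 4 are the spatial *outliers*: places moving against their surroundings.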
#### Relaxing the significance a bit, where do we see significant spatial outliers?
```
# %load _solved/solutions/case-trump-vote13.py
pres.assign(local_score = lmos.Is,
pval = lmos.p_sim,
quad = lmos.q)\
.sort_values('local_score')\
.query('pval < 1e-3 & local_score < 0')[['name','state_name','dem_2008','dem_2016',
'local_score','pval', 'quad']]
```
Mainly in Ohio, Indiana, and West Virginia.
#### What about when comparing the voting behavior from 2012 to 2016?
```
# %load _solved/solutions/case-trump-vote14.py
np.random.seed(21)
lmos16 = esda.moran.Moran_Local(pres.swing_2016, w,
permutations=70000) #min for a bonf. bound
(lmos16.p_sim <= (.05/len(pres))).sum()
pres.assign(local_score = lmos16.Is,
pval = lmos16.p_sim,
quad = lmos16.q)\
.sort_values('local_score')\
.query('pval < 1e-3 & local_score < 0')[['name','state_name','dem_2008','dem_2016',
'local_score','pval', 'quad']]
```
##### What is the relationship between the Gini coefficient and partisan swing?
```
# %load _solved/solutions/case-trump-vote15.py
#% load _solved/solutions/case-trump-vote14.py
sns.regplot(pres.gini_2015,
pres.swing_full)
```
Hillary tended to do better than Obama in counties with higher income inequality.
In contrast, Trump fared better in counties with lower income inequality.
If you're further interested in the sometimes-counterintuitive relationship between income, voting, & geographic context, check out Gelman's [Red State, Blue State](https://www.amazon.com/Red-State-Blue-Rich-Poor/dp/0691143935).
This notebook is part of the *orix* documentation https://orix.readthedocs.io. Links to the documentation won’t work from the notebook.
# Visualizing Crystal Poles in the Pole Density Function
This notebook demonstrates how to quantify the distribution of crystallographic poles,
which is useful, for example, in texture analysis, using the pole density function
(PDF).
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from orix import plot
from orix.crystal_map import Phase
from orix.data import ti_orientations
from orix.sampling import sample_S2
from orix.vector import Miller, Vector3d
# We'll want our plots to look a bit larger than the default size
plt.rcParams.update(
{
"figure.figsize": (10, 5),
"lines.markersize": 2,
"font.size": 15,
"axes.grid": False,
}
)
w, h = plt.rcParams["figure.figsize"]
```
First, we load some sample orientations from a Titanium sample dataset which represent
crystal orientations in the sample reference frame. These orientations have a defined
$622$ point group symmetry:
<div class="alert alert-info">
Note
If not previously downloaded, running this cell will download some example data from an
online repository to a local cache, see the docstring of
[ti_orientations](reference.rst#orix.data.ti_orientations) for more details.
</div>
```
ori = ti_orientations(allow_download=True)
ori
```
Let's look at the sample's $\{01\bar{1}1\}$ texture plotted in the stereographic projection.
First we must define the crystal's point group and generate the set of symmetrically
unique $(01\bar{1}1)$ poles:
```
m = Miller(hkil=(0, 1, -1, 1), phase=Phase(point_group=ori.symmetry))
m = m.symmetrise(unique=True)
m
```
Now let's compute the direction of these poles in the sample reference frame.
This is done using the [Orientation](reference.rst#orix.quaternion.Orientation)-[Vector3d](reference.rst#orix.vector.Vector3d)
`outer` product. We can pass the `lazy=True` parameter to perform the computation in
chunks using `Dask`, which helps to reduce memory usage when there are many
computations to perform.
```
poles = (~ori).outer(m, lazy=True, progressbar=True, chunk_size=2000)
poles.shape
```
We can plot these poles in the stereographic projection:
```
poles.scatter(
hemisphere="both",
alpha=0.02,
figure_kwargs=dict(figsize=(2 * h, h)),
axes_labels=["X", "Y"],
)
```
In this case there are many individual data points, which makes it difficult to
interpret whether regions contain higher or lower pole density.
Instead, we can use [Vector3d.pole_density_function()](reference.rst#orix.vector.Vector3d.pole_density_function)
to measure the pole density on the unit sphere $S_2$. Internally this uses the equal
area parameterization to calculate cells on $S_2$ with the same solid angle. In this
representation randomly oriented vectors have the same probability of intercepting each
cell, thus we can represent our sample's PDF as Multiples of Random Density (MRD). This
follows the work of <cite data-cite="rohrer2004distribution">Rohrer et al.(2004)</cite>.
Below is the equal area sampling representation on $S_2$ in both the stereographic
projection and 3D with a resolution of 10°:
```
fig = plt.figure(figsize=(2 * h, h))
ax0 = fig.add_subplot(121, projection="stereographic")
ax1 = fig.add_subplot(122, projection="3d")
v_mesh = sample_S2(resolution=10, method="equal_area")
ax0.hemisphere = "upper"
ax0.scatter(v_mesh)
ax0.show_hemisphere_label()
ax0.set_labels("X", "Y", None)
ax1.scatter(*v_mesh.data.T)
lim = 1
ax1.set_xlim(-lim, lim)
ax1.set_ylim(-lim, lim)
ax1.set_zlim(-lim, lim)
ax1.set_xticks((-1, 0, 1))
ax1.set_yticks((-1, 0, 1))
ax1.set_zticks((-1, 0, 1))
ax1.set_xlabel("X")
ax1.set_ylabel("Y")
ax1.set_zlabel("Z")
ax1.set_box_aspect((1, 1, 1))
```
For randomly distributed vectors on $S_2$, we can see that MRD tends to 1 with an increasing number of vectors:
NB. PDF plots are displayed on the same color scale.
```
num = (10_000, 100_000, 1_000_000, 10_000_000)
fig, ax = plt.subplots(
nrows=2,
ncols=2,
figsize=(2 * h, 2 * h),
subplot_kw=dict(projection="stereographic"),
)
ax = ax.ravel()
for i, n in enumerate(num):
    v = Vector3d(np.random.randn(n, 3)).unit
    ax[i].pole_density_function(v, log=False, vmin=0.8, vmax=1.2)
    ax[i].set_labels("X", "Y", None)
    ax[i].set_title(str(n))
```
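The convergence of MRD to 1 can also be checked without `orix`. By Archimedes' hat-box theorem, horizontal bands of equal height on the unit sphere have equal area, so they serve as simple equal-area cells here (a library-free sketch, not how `orix` actually partitions $S_2$):

```python
import random

random.seed(0)
n_vectors, n_bands = 100_000, 20
counts = [0] * n_bands
for _ in range(n_vectors):
    # the z-coordinate of a uniform random direction is uniform on [-1, 1]
    z = random.uniform(-1.0, 1.0)
    band = min(int((z + 1.0) / 2.0 * n_bands), n_bands - 1)
    counts[band] += 1

expected = n_vectors / n_bands          # count each cell would get if random
mrd = [c / expected for c in counts]    # multiples of random density
print(min(mrd), max(mrd))               # both close to 1 for large n_vectors
```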
We can also change the sampling angular `resolution` on $S_2$, the colormap with the
`cmap` parameter, and broadening of the density distribution with `sigma`:
```
fig, ax = plt.subplots(
nrows=2,
ncols=2,
figsize=(2 * h, 2 * h),
subplot_kw=dict(projection="stereographic"),
)
ax = ax.ravel()
v = Vector3d(np.random.randn(1_000_000, 3)).unit
ax[0].pole_density_function(v, log=False, resolution=1)
ax[0].set_title("Sampling resolution: 1$\degree$")
# change sampling resolution on S2
ax[1].pole_density_function(v, log=False, resolution=5)
ax[1].set_title("Sampling resolution: 5$\degree$")
# increase peak broadening
ax[2].pole_density_function(v, log=False, resolution=1, sigma=15)
ax[2].set_title("Sampling resolution: 1$\degree$\n$\sigma$: 15$\degree$")
# change colormap
ax[3].pole_density_function(v, log=False, resolution=1, cmap="gray_r")
ax[3].set_title('Sampling resolution: 1$\degree$\ncmap: "gray_r"')
for a in ax:
    a.set_labels("X", "Y", None)
```
Poles from real samples tend not to be randomly oriented, as the material microstructure
is arranged into regions of similar crystal orientation, known as grains.
The PDF for the measured $\{01\bar{1}1\}$ poles from the Titanium sample loaded at the beginning
of the notebook:
```
poles.pole_density_function(
hemisphere="both", log=False, figure_kwargs=dict(figsize=(2 * h, h))
)
```
We can also plot these densities on a `log` scale to reduce the contrast between high
and low density regions.
By comparing the point data shown at the top of the notebook with the calculated pole
densities from PDF, we can see that not all regions in the point data representation
have the same density and that PDF is needed for better quantification:
```
fig, ax = plt.subplots(
ncols=2, subplot_kw=dict(projection="stereographic"), figsize=(2 * h, h)
)
ax[0].hemisphere = "upper"
ax[1].hemisphere = "upper"
ax[0].scatter(poles, s=2, alpha=0.02)
ax[1].pole_density_function(poles, log=True)
for a in ax:
    a.set_labels("X", "Y", None)
```
A clear example of this can be shown by combining the PDF and point data onto the same
plot:
```
fig = poles.scatter(
alpha=0.01,
c="w",
return_figure=True,
axes_labels=["X", "Y"],
show_hemisphere_label=True,
)
poles.pole_density_function(log=True, figure=fig)
```
```
from qiskit.ml.datasets import *
from qiskit import QuantumCircuit
from qiskit.aqua.components.optimizers import COBYLA, ADAM, SPSA, SLSQP, POWELL, L_BFGS_B, TNC, AQGD
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.quantum_info import Statevector
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
# constants
n = 4
RANDOM_STATE = 42
LR = 1e-3
class_labels = ['yes', 'no']
def normalizeData(DATA_PATH="../../Data/Processed/winedata.csv"):
    """
    Normalizes the data
    """
    # Reads the data
    data = pd.read_csv(DATA_PATH)
    data = shuffle(data, random_state=RANDOM_STATE)
    X, Y = data[['alcohol', 'flavanoids', 'color_intensity', 'proline']].values, data['target'].values
    # normalize the data
    scaler = MinMaxScaler(feature_range=(-2 * np.pi, 2 * np.pi))
    X = scaler.fit_transform(X)
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=RANDOM_STATE)
    return X_train, X_test, Y_train, Y_test
X_train, X_test, Y_train, Y_test = normalizeData()
sv = Statevector.from_label('0' * n)
feature_map = ZZFeatureMap(n, reps=1)
var_form = RealAmplitudes(n, reps=1)
circuit = feature_map.combine(var_form)
circuit.draw(output='mpl')
def get_data_dict(params, x):
    parameters = {}
    for i, p in enumerate(feature_map.ordered_parameters):
        parameters[p] = x[i]
    for i, p in enumerate(var_form.ordered_parameters):
        parameters[p] = params[i]
    return parameters
def assign_label(bit_string, class_labels):
    hamming_weight = sum([int(k) for k in list(bit_string)])
    is_odd_parity = hamming_weight & 1
    if is_odd_parity:
        return class_labels[1]
    else:
        return class_labels[0]
def return_probabilities(counts, class_labels):
    shots = sum(counts.values())
    result = {class_labels[0]: 0,
              class_labels[1]: 0}
    for key, item in counts.items():
        label = assign_label(key, class_labels)
        result[label] += counts[key] / shots
    return result
def classify(x_list, params, class_labels):
    qc_list = []
    for x in x_list:
        circ_ = circuit.assign_parameters(get_data_dict(params, x))
        qc = sv.evolve(circ_)
        qc_list += [qc]
    probs = []
    for qc in qc_list:
        counts = qc.to_counts()
        prob = return_probabilities(counts, class_labels)
        probs += [prob]
    return probs
def mse_cost(probs, expected_label):
    p = probs.get(expected_label)
    actual, pred = np.array(1), np.array(p)
    return np.square(np.subtract(actual, pred)).mean()
cost_list = []
def cost_function(X, Y, class_labels, params, shots=100, print_value=False):
    # map training input to list of labels and list of samples
    cost = 0
    training_labels = []
    training_samples = []
    for sample in X:
        training_samples += [sample]
    for label in Y:
        if label == 0:
            training_labels += [class_labels[0]]
        elif label == 1:
            training_labels += [class_labels[1]]
    probs = classify(training_samples, params, class_labels)
    # evaluate costs for all classified samples
    for i, prob in enumerate(probs):
        cost += mse_cost(prob, training_labels[i])
    cost /= len(training_samples)
    # print resulting objective function
    if print_value:
        print('%.4f' % cost)
    # return objective value
    cost_list.append(cost)
    return cost
cost_list = []
optimizer = SPSA(maxiter=100)
# define objective function for training
objective_function = lambda params: cost_function(X_train, Y_train, class_labels, params, print_value=True)
# randomly initialize the parameters
np.random.seed(RANDOM_STATE)
init_params = 2*np.pi*np.random.rand(n*(1)*2)
# train classifier
opt_params, value, _ = optimizer.optimize(len(init_params), objective_function, initial_point=init_params)
# print results
print()
print('opt_params:', opt_params)
print('opt_value: ', value)
fig = plt.figure()
plt.plot(range(0,len(cost_list),1), cost_list)
plt.xlabel('Steps')
plt.ylabel('Cost value')
plt.title("SPSA cost value against steps")
plt.show()
def test_model(X, Y, class_labels, params):
    accuracy = 0
    test_samples = []
    for sample in X:
        test_samples += [sample]
    probs = classify(test_samples, params, class_labels)
    for i, prob in enumerate(probs):
        if (prob.get('yes') >= prob.get('no')) and (Y[i] == 0):
            accuracy += 1
        elif (prob.get('no') >= prob.get('yes')) and (Y[i] == 1):
            accuracy += 1
    accuracy /= len(Y)
    print("Test accuracy: {}\n".format(accuracy))
test_model(X_test, Y_test, class_labels, opt_params)
```
### Prerequisites
You should have completed steps 1-3 of this tutorial before beginning this exercise. The files required for this notebook are generated by those previous steps.
This notebook takes approximately 3 hours to run on an AWS `p3.8xlarge` instance.
```
# # Optional: you can set what GPU you want to use in a notebook like this.
# # Useful if you want to run concurrent experiments at the same time on different GPUs.
# import os
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"]="2"
from pathlib import Path
import numpy as np
from seq2seq_utils import extract_encoder_model, load_encoder_inputs
from keras.layers import Input, Dense, BatchNormalization, Dropout, Lambda
from keras.models import load_model, Model
from seq2seq_utils import load_text_processor
#where you will save artifacts from this step
OUTPUT_PATH = Path('./data/code2emb/')
OUTPUT_PATH.mkdir(exist_ok=True)
# These are where the artifacts are stored from steps 2 and 3, respectively.
seq2seq_path = Path('./data/seq2seq/')
langemb_path = Path('./data/lang_model_emb/')
# set seeds
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
```
# Train Model That Maps Code To Sentence Embedding Space
In step 2, we trained a seq2seq model that can summarize function code using `(code, docstring)` pairs as the training data.
In this step, we will fine tune the encoder from the seq2seq model to generate code embeddings in the docstring space by using `(code, docstring-embeddings)` as the training data. Therefore, this notebook will go through the following steps:
1. Load the seq2seq model and extract the encoder (remember seq2seq models have an encoder and a decoder).
2. Freeze the weights of the encoder.
3. Add some dense layers on top of the encoder.
4. Train this new model by supplying `(code, docstring-embeddings)` pairs. We will call this model `code2emb_model`.
5. Unfreeze the entire model, and resume training. This helps fine tune the model a little more towards this task.
6. Encode all of the code, including code that does not contain a docstring and save that into a search index for future use.
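The freeze-then-unfreeze schedule in steps 2–5 can be illustrated on a toy one-dimensional model fit by gradient descent (plain Python, no Keras; the data and learning rates are made up). Keras does the same thing via each layer's `trainable` flag followed by a re-compile:

```python
# toy data from y = 2x + 1; we fit y = a*x + b with squared-error loss
data = [(x, 2.0 * x + 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

def loss(a, b):
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

def step(a, b, lr, train_a, train_b):
    ga = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    return (a - lr * ga if train_a else a,
            b - lr * gb if train_b else b)

a, b = 0.0, 0.0
for _ in range(200):                       # stage 1: freeze 'a', train 'b'
    a, b = step(a, b, 0.1, False, True)
stage1 = loss(a, b)
for _ in range(200):                       # stage 2: unfreeze, train both
    a, b = step(a, b, 0.05, True, True)
stage2 = loss(a, b)
print(stage1, stage2)                      # stage2 is far smaller than stage1
```

The frozen stage lets the new parameter settle before the shared one is disturbed, which is the same motivation as freezing the pre-trained encoder first.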
### Load seq2seq model from Step 2 and extract the encoder
First load the seq2seq model from Step2, then extract the encoder (we do not need the decoder).
```
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
tokens_encoder_input_data, tokens_doc_length = load_encoder_inputs(seq2seq_path/'train.tokens.npy')
tokens_seq2seq_Model = load_model(seq2seq_path/'code_summary_seq2seq_model.h5')
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
apiseq_encoder_input_data, apiseq_doc_length = load_encoder_inputs(seq2seq_path/'train.apiseq.npy')
apiseq_seq2seq_Model = load_model(seq2seq_path/'api_seq_seq2seq_model.h5')
# load the pre-processed data for the encoder (we don't care about the decoder in this step)
methname_encoder_input_data, methname_doc_length = load_encoder_inputs(seq2seq_path/'train.methname.npy')
methname_seq2seq_Model = load_model(seq2seq_path/'methname_seq2seq_model.h5')
# Extract Encoder from seq2seq model
token_encoder_model = extract_encoder_model(tokens_seq2seq_Model)
# Get a summary of the encoder and its layers
token_encoder_model.name = 'Token-Encoder-Model'
token_encoder_model.summary()
# Extract Encoder from seq2seq model
apiseq_encoder_model = extract_encoder_model(apiseq_seq2seq_Model)
# Get a summary of the encoder and its layers
apiseq_encoder_model.name = 'ApiSeq-Encoder-Model'
apiseq_encoder_model.summary()
# Extract Encoder from seq2seq model
methname_encoder_model = extract_encoder_model(methname_seq2seq_Model)
# Get a summary of the encoder and its layers
methname_encoder_model.name = 'Methname-Encoder-Model'
methname_encoder_model.summary()
```
Freeze the encoder
```
# Freeze Encoder Model
for encoder_model in [token_encoder_model, apiseq_encoder_model, methname_encoder_model]:
    for l in encoder_model.layers:
        l.trainable = False
        print(l, l.trainable)
```
### Load Docstring Embeddings From Step 3
The target for our `code2emb` model will be docstring-embeddings instead of docstrings. Therefore, we will use the embeddings for docstrings that we computed in step 3. For this tutorial, we will use the average over all hidden states, which is saved in the file `avg_emb_dim500_v2.npy`.
Note that in our experiments, a concatenation of the average, max, and last hidden state worked better than using the average alone. However, in the interest of simplicity we demonstrate just using the average hidden state. We leave it as an exercise to the reader to experiment with other approaches.
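For concreteness, here is what those pooling strategies look like on a tiny made-up sequence of hidden states (plain Python, no framework):

```python
# three hidden states of dimension d = 2, purely illustrative values
hidden_states = [
    [1.0, 0.0],
    [3.0, 2.0],
    [2.0, 4.0],
]

d = len(hidden_states[0])
avg  = [sum(h[i] for h in hidden_states) / len(hidden_states) for i in range(d)]
mx   = [max(h[i] for h in hidden_states) for i in range(d)]
last = hidden_states[-1]

# the concatenated embedding has dimension 3 * d
concat = avg + mx + last
print(concat)  # [2.0, 2.0, 3.0, 4.0, 2.0, 4.0]
```

In this notebook only `avg` (the 500-dimensional average) is used as the target.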
```
# Load the fastai language-model docstring embeddings
fastailm_emb = np.load(langemb_path/'avg_emb_dim500_v2.npy')
# check that the encoder inputs have the same number of rows as the docstring embeddings
assert tokens_encoder_input_data.shape[0] == fastailm_emb.shape[0]
assert methname_encoder_input_data.shape[0] == fastailm_emb.shape[0]
assert apiseq_encoder_input_data.shape[0] == fastailm_emb.shape[0]
fastailm_emb.shape
tokens_encoder_input_data.shape[0]
fastailm_emb.shape[0]
```
### Construct `codeFusion` Model Architecture
The `codeFusion` model is the fusion of the tokens, API-sequence, and method-name encoders followed by a dense layer. This model feeds into the `code2emb` model.
```
from keras.layers import Concatenate
token_input = Input(shape=(tokens_doc_length,), name='Token-Input')
apiseq_input = Input(shape=(apiseq_doc_length,), name='API-Input')
methname_input = Input(shape=(methname_doc_length,), name='Methname-Input')
token_out = token_encoder_model(token_input)
apiseq_out = apiseq_encoder_model(apiseq_input)
methname_out = methname_encoder_model(methname_input)
concatenation_layer = Concatenate(name="Concatenate-Token-API-Methname")\
([token_out, apiseq_out, methname_out])
codeFusion_layer = Dense(1000, activation='relu')(concatenation_layer)
```
### Construct `code2emb` Model Architecture
The `code2emb` model is the encoder from the seq2seq model with some dense layers added on top. The output of the last dense layer of this model needs to match the dimensionality of the docstring embedding, which is 500 in this case.
```
# first dense layer with batch norm
x = Dense(500, activation='relu')(codeFusion_layer)
x = BatchNormalization(name='bn-1')(x)
out = Dense(500)(x)
code2emb_model = Model([token_input, apiseq_input, methname_input], out)
code2emb_model.summary()
```
### Train the `code2emb` Model
The model we are training is relatively simple - with two dense layers on top of the pre-trained encoder. We are leaving the encoder frozen at first, then will unfreeze the encoder in a later step.
```
from keras.callbacks import CSVLogger, ModelCheckpoint
from keras import optimizers
code2emb_model.compile(optimizer=optimizers.Nadam(lr=0.002), loss='cosine_proximity')
script_name_base = 'code2emb_model_'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
save_best_only=True)
batch_size = 20000
epochs = 15
history = code2emb_model.fit([tokens_encoder_input_data,
                              apiseq_encoder_input_data,
                              methname_encoder_input_data],
                             fastailm_emb,
                             batch_size=batch_size,
                             epochs=epochs,
                             validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
```
`.7453`
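For reference, the `cosine_proximity` loss used to compile the model is the negative cosine similarity between the predicted vector and the target docstring embedding, so more-negative values are better (a loss of about -0.75 corresponds to an average cosine similarity of about 0.75). A minimal sketch:

```python
import math

def cosine_proximity(y_true, y_pred):
    dot = sum(a * b for a, b in zip(y_true, y_pred))
    norm = (math.sqrt(sum(a * a for a in y_true))
            * math.sqrt(sum(b * b for b in y_pred)))
    return -dot / norm  # negated so that "lower loss" means "more similar"

print(cosine_proximity([1.0, 0.0], [2.0, 0.0]))  # -1.0: same direction
print(cosine_proximity([1.0, 0.0], [0.0, 3.0]))  # 0.0: orthogonal
print(cosine_proximity([1.0, 2.0], [2.0, 1.0]))  # about -0.8
```

Note the magnitude of the vectors cancels out; only their direction matters, which is why this loss suits embedding-space targets.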
### Unfreeze all Layers of Model and Resume Training
In the previous step, we left the encoder frozen. Now that the dense layers are trained, we will unfreeze the entire model and let it train some more. This will hopefully allow this model to specialize on this task a bit more.
```
for l in code2emb_model.layers:
    l.trainable = True
    print(l, l.trainable)
code2emb_model.compile(optimizer=optimizers.Nadam(lr=0.0001), loss='cosine_proximity')
script_name_base = 'code2emb_model_unfreeze_'
csv_logger = CSVLogger('{:}.log'.format(script_name_base))
model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base),
save_best_only=True)
batch_size = 2000
epochs = 20
history = code2emb_model.fit([tokens_encoder_input_data,
                              apiseq_encoder_input_data,
                              methname_encoder_input_data],
                             fastailm_emb,
                             batch_size=batch_size,
                             epochs=epochs,
                             initial_epoch=16,
                             validation_split=0.12, callbacks=[csv_logger, model_checkpoint])
```
### Save `code2emb` model
```
code2emb_model.save(OUTPUT_PATH/'code2emb_model.hdf5')
```
This file has been cached and is also available for download here:
[`code2emb_model.hdf5`](https://storage.googleapis.com/kubeflow-examples/code_search/data/code2emb/code2emb_model.hdf5)
# Vectorize all of the code without docstrings
We want to vectorize all of the code without docstrings so we can test the efficacy of the search on the code that was never seen by the model.
```
from keras.models import load_model
from pathlib import Path
import numpy as np
from seq2seq_utils import load_text_processor
code2emb_path = Path('./data/code2emb/')
seq2seq_path = Path('./data/seq2seq/')
data_path = Path('./data/processed_data/')
code2emb_model = load_model(code2emb_path/'code2emb_model.hdf5')
num_encoder_tokens, enc_pp = load_text_processor(seq2seq_path/'py_code_proc_v2.dpkl')
with open(data_path/'without_docstrings.function', 'r') as f:
no_docstring_funcs = f.readlines()
```
### Pre-process code without docstrings for input into `code2emb` model
We use the same transformer we used to train the original model.
```
# tokenized functions that did not contain docstrings
no_docstring_funcs[:5]
encinp = enc_pp.transform_parallel(no_docstring_funcs)
np.save(code2emb_path/'nodoc_encinp.npy', encinp)
```
### Extract code vectors
```
from keras.models import load_model
from pathlib import Path
import numpy as np
code2emb_path = Path('./data/code2emb/')
encinp = np.load(code2emb_path/'nodoc_encinp.npy')
code2emb_model = load_model(code2emb_path/'code2emb_model.hdf5')
```
Use the `code2emb` model to map the code into the same vector space as natural language
```
# NB: the fused model defined earlier takes three inputs; this cell follows the
# single-input flow, so adapt accordingly if you trained the fusion model.
nodoc_vecs = code2emb_model.predict(encinp, batch_size=20000)
# make sure the number of output rows equal the number of input rows
assert nodoc_vecs.shape[0] == encinp.shape[0]
```
Save the vectorized code
```
np.save(code2emb_path/'nodoc_vecs.npy', nodoc_vecs)
```
# Regular Expressions
Regular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions is very similar in Python. We will be using the <code>re</code> module with Python for this lecture.
Let's get started!
## Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
```
import re
# List of patterns to search for
patterns = ['term1', 'term2']
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
    print('Searching for "%s" in:\n "%s"\n' % (pattern, text))

    # Check for match
    if re.search(pattern, text):
        print('Match was found. \n')
    else:
        print('No Match was found.\n')
```
Now we've seen that <code>re.search()</code> will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:
```
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern,text)
type(match)
```
This **Match** object returned by the search() method is more than just a Boolean or None; it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
```
# Show start of match
match.start()
# Show end
match.end()
```
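A couple of other commonly used Match accessors (all standard <code>re</code> behaviour):

```python
import re

match = re.search(r'term\d', 'This is a string with term1.')
print(match.group())  # 'term1' — the matched text itself
print(match.span())   # (22, 27) — (start, end) as a single tuple
```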
## Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
```
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: hello@gmail.com'
# Split the phrase
re.split(split_term,phrase)
```
Note how <code>re.split()</code> returns a list with the term to split on removed and the terms in the list are a split up version of the string. Create a couple more examples for yourself to make sure you understand!
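For instance, <code>re.split()</code> also accepts a character set as the pattern, and a <code>maxsplit</code> argument mirroring <code>str.split</code>:

```python
import re

# split on either a comma or a semicolon, plus any trailing whitespace
print(re.split(r'[,;]\s*', 'one, two; three,four'))  # ['one', 'two', 'three', 'four']

# maxsplit limits the number of splits performed
print(re.split(r'@', 'a@b@c', maxsplit=1))           # ['a', 'b@c']
```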
## Finding all instances of a pattern
You can use <code>re.findall()</code> to find all the instances of a pattern in a string. For example:
```
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
```
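When you also need *where* each match occurred, <code>re.finditer()</code> yields full Match objects instead of plain strings:

```python
import re

text = 'match the word match twice'
for m in re.finditer('match', text):
    print(m.start(), m.group())
# 0 match
# 15 match
```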
## re Pattern Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred.
We can use *metacharacters* along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
```
def multi_re_find(patterns, phrase):
    '''
    Takes in a list of regex patterns
    Prints a list of all matches
    '''
    for pattern in patterns:
        print('Searching the phrase using the re check: %r' % (pattern))
        print(re.findall(pattern, phrase))
        print('\n')
```
### Repetition Syntax
There are five ways to express repetition in a pattern:
1. A pattern followed by the meta-character <code>*</code> is repeated zero or more times.
2. Replace the <code>*</code> with <code>+</code> and the pattern must appear at least once.
3. Using <code>?</code> means the pattern appears zero or one time.
4. For a specific number of occurrences, use <code>{m}</code> after the pattern, where **m** is replaced with the number of times the pattern should repeat.
5. Use <code>{m,n}</code> where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** <code>{m,}</code> means the value appears at least **m** times, with no maximum.
Now we will see an example of each of these using our multi_re_find function:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
```
## Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input <code>[ab]</code> searches for occurrences of either **a** or **b**.
Let's see some examples:
```
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = ['[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
```
It makes sense that the first input <code>[sd]</code> returns every instance of s or d. Also, the second input <code>s[sd]+</code> returns any full strings that begin with an s and continue with s or d characters until another character is reached.
## Exclusion
We can use <code>^</code> to exclude terms by incorporating it into the bracket syntax notation. For example: <code>[^...]</code> will match any single character not in the brackets. Let's see some examples:
```
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
```
Use <code>[^!.? ]</code> to match any character that is not a <code>!</code>, <code>.</code>, <code>?</code>, or space. Add a <code>+</code> so the match appears at least once. This effectively translates into finding the words.
```
re.findall('[^!.? ]+',test_phrase)
```
## Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is <code>[start-end]</code>.
Common use cases are to search for a specific range of letters in the alphabet. For instance, <code>[a-f]</code> would return matches with any occurrence of letters between a and f.
Let's walk through some examples:
```
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=['[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
```
## Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
Escapes are indicated by prefixing the character with a backslash <code>\</code>. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with <code>r</code>, eliminates this problem and maintains readability.
Personally, I think this use of <code>r</code> to avoid escaping backslashes is probably one of the things that blocks someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully, after seeing these examples, this syntax will become clear.
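As a quick illustration, the escaped and raw forms denote exactly the same pattern string:
```
import re

# Both literals are the same two-character pattern: backslash, then d.
print('\\d+' == r'\d+')                        # True
print(re.findall(r'\d+', 'abc 123 def 45'))    # ['123', '45']
```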
```
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
```
## Conclusion
You should now have a solid understanding of how to use the regular expression module in Python. There are a ton of more special character instances, but it would be unreasonable to go through every single use case. Instead take a look at the full [documentation](https://docs.python.org/3/library/re.html#regular-expression-syntax) if you ever need to look up a particular pattern.
You can also check out the nice summary tables at this [source](http://www.tutorialspoint.com/python/python_reg_expressions.htm).
Good job!
# Encoding of categorical variables
In this notebook, we will present typical ways of dealing with
**categorical variables** by encoding them, namely **ordinal encoding** and
**one-hot encoding**.
Let's first load the entire adult dataset containing both numerical and
categorical data.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
# drop the duplicated column `"education-num"` as stated in the first notebook
adult_census = adult_census.drop(columns="education-num")
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name])
```
## Identify categorical variables
As we saw in the previous section, a numerical variable is a
quantity represented by a real or integer number. These variables can be
naturally handled by machine learning algorithms that are typically composed
of a sequence of arithmetic instructions such as additions and
multiplications.
In contrast, categorical variables have discrete values, typically
represented by string labels (but not only) taken from a finite list of
possible choices. For instance, the variable `native-country` in our dataset
is a categorical variable because it encodes the data using a finite list of
possible countries (along with the `?` symbol when this information is
missing):
```
data["native-country"].value_counts().sort_index()
```
How can we easily recognize categorical columns among the dataset? Part of
the answer lies in the columns' data type:
```
data.dtypes
```
If we look at the `"native-country"` column, we observe its data type is
`object`, meaning it contains string values.
## Select features based on their data type
In the previous notebook, we manually defined the numerical columns. We could
follow a similar approach here. Instead, we will use the scikit-learn helper function
`make_column_selector`, which allows us to select columns based on
their data type. We will illustrate how to use this helper.
```
from sklearn.compose import make_column_selector as selector
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(data)
categorical_columns
```
Here, we created the selector by passing the data type to include; we then
passed the input dataset to the selector object, which returned a list of
column names that have the requested data type. We can now filter out the
unwanted columns:
```
data_categorical = data[categorical_columns]
data_categorical.head()
print(f"The dataset is composed of {data_categorical.shape[1]} features")
```
In the remainder of this section, we will present different strategies to
encode categorical data into numerical data which can be used by a
machine-learning algorithm.
## Strategies to encode categories
### Encoding ordinal categories
The most intuitive strategy is to encode each category with a different
number. The `OrdinalEncoder` will transform the data in such a manner.
We will start by encoding a single column to understand how the encoding
works.
```
from sklearn.preprocessing import OrdinalEncoder
education_column = data_categorical[["education"]]
encoder = OrdinalEncoder()
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
We see that each category in `"education"` has been replaced by a numeric
value. We could check the mapping between the categories and the numerical
values by checking the fitted attribute `categories_`.
```
encoder.categories_
```
Now, we can check the encoding applied on all categorical features.
```
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
encoder.categories_
print(
f"The dataset encoded contains {data_encoded.shape[1]} features")
```
We see that the categories have been encoded for each feature (column)
independently. We also note that the number of features before and after the
encoding is the same.
However, be careful when applying this encoding strategy:
using this integer representation leads downstream predictive models
to assume that the values are ordered (0 < 1 < 2 < 3... for instance).
By default, `OrdinalEncoder` uses a lexicographical strategy to map string
category labels to integers. This strategy is arbitrary and often
meaningless. For instance, suppose the dataset has a categorical variable
named `"size"` with categories such as "S", "M", "L", "XL". We would like the
integer representation to respect the meaning of the sizes by mapping them to
increasing integers such as `0, 1, 2, 3`.
However, the lexicographical strategy used by default would map the labels
"S", "M", "L", "XL" to 2, 1, 0, 3, by following the alphabetical order.
The `OrdinalEncoder` class accepts a `categories` constructor argument to
pass categories in the expected ordering explicitly. You can find more
information in the
[scikit-learn documentation](https://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features)
if needed.
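As a small sketch on toy data (the `"size"` example above), we can compare the default lexicographical mapping with an explicitly ordered one:
```
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

sizes = pd.DataFrame({"size": ["S", "M", "L", "XL", "M"]})

# Default: categories sorted lexicographically, i.e. L=0, M=1, S=2, XL=3
default_encoder = OrdinalEncoder()
print(default_encoder.fit_transform(sizes).ravel())

# Explicit ordering: S=0, M=1, L=2, XL=3
ordered_encoder = OrdinalEncoder(categories=[["S", "M", "L", "XL"]])
print(ordered_encoder.fit_transform(sizes).ravel())
```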
If a categorical variable does not carry any meaningful order information
then this encoding might be misleading to downstream statistical models and
you might consider using one-hot encoding instead (see below).
### Encoding nominal categories (without assuming any order)
`OneHotEncoder` is an alternative encoder that prevents the downstream
models from making a false assumption about the ordering of categories. For a
given feature, it will create as many new columns as there are possible
categories. For a given sample, the value of the column corresponding to the
category will be set to `1` while all the columns of the other categories
will be set to `0`.
We will start by encoding a single feature (e.g. `"education"`) to illustrate
how the encoding works.
```
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False)
education_encoded = encoder.fit_transform(education_column)
education_encoded
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p><tt class="docutils literal">sparse=False</tt> is used in the <tt class="docutils literal">OneHotEncoder</tt> for didactic purposes, namely
easier visualization of the data.</p>
<p class="last">Sparse matrices are efficient data structures when most of your matrix
elements are zero. They won't be covered in detail in this course. If you
want more details about them, you can look at
<a class="reference external" href="https://scipy-lectures.org/advanced/scipy_sparse/introduction.html#why-sparse-matrices">this</a>.</p>
</div>
We see that encoding a single feature will give a NumPy array full of zeros
and ones. We can get a better understanding using the associated feature
names resulting from the transformation.
```
feature_names = encoder.get_feature_names_out(input_features=["education"])
education_encoded = pd.DataFrame(education_encoded, columns=feature_names)
education_encoded
```
As we can see, each category (unique value) became a column; the encoding
returned, for each sample, a 1 to specify which category it belongs to.
Let's apply this encoding on the full dataset.
```
print(
f"The dataset is composed of {data_categorical.shape[1]} features")
data_categorical.head()
data_encoded = encoder.fit_transform(data_categorical)
data_encoded[:5]
print(
f"The encoded dataset contains {data_encoded.shape[1]} features")
```
Let's wrap this NumPy array in a dataframe with informative column names as
provided by the encoder object:
```
columns_encoded = encoder.get_feature_names_out(data_categorical.columns)
pd.DataFrame(data_encoded, columns=columns_encoded).head()
```
Look at how the `"workclass"` variable of the first 3 records has been
encoded and compare this to the original string representation.
The number of features after the encoding is more than 10 times larger than
in the original data because some variables such as `occupation` and
`native-country` have many possible categories.
### Choosing an encoding strategy
Choosing an encoding strategy will depend on the underlying models and the
type of categories (i.e. ordinal vs. nominal).
Indeed, using an `OrdinalEncoder` will output ordinal categories. It means
that there is an order in the resulting categories (e.g. `0 < 1 < 2`). The
impact of violating this ordering assumption is really dependent on the
downstream models. Linear models will be impacted by misordered categories
while tree-based models will not be.
Thus, in general `OneHotEncoder` is the encoding strategy used when the
downstream models are **linear models** while `OrdinalEncoder` is used with
**tree-based models**.
You can still use an `OrdinalEncoder` with linear models but you need to be
sure that:
- the original categories (before encoding) have an ordering;
- the encoded categories follow the same ordering as the original
categories.
The next exercise highlights the issue of misusing `OrdinalEncoder` with a
linear model.
Also, with tree-based models there is no need to use a `OneHotEncoder` even
if the original categories do not have a given order. This will be
the purpose of the final exercise of this sequence.
## Evaluate our predictive pipeline
We can now integrate this encoder inside a machine learning pipeline like we
did with numerical data: let's train a linear classifier on the encoded data
and check the generalization performance of this machine learning pipeline using
cross-validation.
Before we create the pipeline, we have to take a closer look at `native-country`.
Let's recall some statistics regarding this column.
```
data["native-country"].value_counts()
```
We see that the `Holand-Netherlands` category occurs rarely. This will
be a problem during cross-validation: if the sample ends up in the test set
during splitting then the classifier would not have seen the category during
training and will not be able to encode it.
In scikit-learn, there are two solutions to bypass this issue:
* list all the possible categories and provide them to the encoder via the
keyword argument `categories`;
* use the parameter `handle_unknown`.
Here, we will use the latter solution for simplicity.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p class="last">Be aware that <tt class="docutils literal">OrdinalEncoder</tt> also exposes a
<tt class="docutils literal">handle_unknown</tt> parameter. It can be set to <tt class="docutils literal">use_encoded_value</tt>, together with the
<tt class="docutils literal">unknown_value</tt> parameter, to handle rare categories. You are going to use these
parameters in the next exercise.</p>
</div>
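A minimal sketch of that `OrdinalEncoder` behaviour on toy data (the column and category values here are invented for illustration):
```
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

train = pd.DataFrame({"country": ["France", "Portugal", "France"]})
new_data = pd.DataFrame({"country": ["Portugal", "Holand-Netherlands"]})

# Categories unseen during fit are mapped to unknown_value instead of
# raising an error at transform time.
encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
encoder.fit(train)
print(encoder.transform(new_data).ravel())
```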
We can now create our machine learning pipeline.
```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
model = make_pipeline(
OneHotEncoder(handle_unknown="ignore"), LogisticRegression(max_iter=500)
)
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">Here, we need to increase the maximum number of iterations to obtain a fully
converged <tt class="docutils literal">LogisticRegression</tt> and silence a <tt class="docutils literal">ConvergenceWarning</tt>. Contrary
to the numerical features, the one-hot encoded categorical features are all
on the same scale (values are 0 or 1), so they would not benefit from
scaling. In this case, increasing <tt class="docutils literal">max_iter</tt> is the right thing to do.</p>
</div>
Finally, we can check the model's generalization performance only using the
categorical columns.
```
from sklearn.model_selection import cross_validate
cv_results = cross_validate(model, data_categorical, target)
cv_results
scores = cv_results["test_score"]
print(f"The accuracy is: {scores.mean():.3f} +/- {scores.std():.3f}")
```
As you can see, this representation of the categorical variables is
slightly more predictive of the revenue than the numerical variables
that we used previously.
In this notebook we have:
* seen two common strategies for encoding categorical features: **ordinal
encoding** and **one-hot encoding**;
* used a **pipeline** to use a **one-hot encoder** before fitting a logistic
regression.
```
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import sys, collections, os, argparse
%matplotlib inline
```
# Download the 10x Dataset `1k Brain Cells from an E18 Mouse (v3 chemistry)`
10x datasets are available at
https://support.10xgenomics.com/single-cell-gene-expression/datasets
The page for the `1k Brain Cells from an E18 Mouse (v3 chemistry)` dataset is
https://support.10xgenomics.com/single-cell-gene-expression/datasets/3.0.0/neuron_1k_v3
But the FASTQ files (7.4GB) can be downloaded with `wget` directly (without giving them email info) from http://cf.10xgenomics.com/samples/cell-exp/3.0.0/neuron_1k_v3/neuron_1k_v3_fastqs.tar
In the cell below we check if the dataset file `neuron_1k_v3_fastqs.tar` already exists. If not we download the dataset to the same directory as this notebook
```
#Check if the file was downloaded already before doing wget:
if not (os.path.isfile('./neuron_1k_v3_fastqs.tar')):
# the `!` means we're running a command line statement (rather than python)
!wget http://cf.10xgenomics.com/samples/cell-exp/3.0.0/neuron_1k_v3/neuron_1k_v3_fastqs.tar
else: print('Dataset already downloaded!')
```
Because this dataset was run on two lanes, we need to extract the per-lane FASTQ files from the tar archive before running kallisto bus; kallisto can read the gzipped FASTQ files directly.
```
# now we untar the fastq files on neuron_1k_v3_fastqs folder
!tar -xvf ./neuron_1k_v3_fastqs.tar
```
# Building the kallisto index
First make sure that kallisto is installed and the version is greater than 0.45
If it's not installed, see instructions at https://pachterlab.github.io/kallisto/download
```
!kallisto version
```
First we build the kallisto index for the dataset.
The index is built from the published reference transcriptome for each organism.
Building the index takes a few minutes and needs to be done only once for each organism.
### Download reference transcriptome from ensembl
In order to do that, we first download the mouse transcriptome from Ensembl; you can see the reference genomes they have at https://uswest.ensembl.org/info/data/ftp/index.html
```
#Check if the file was downloaded already before doing wget:
if not (os.path.isfile('Mus_musculus.GRCm38.cdna.all.fa.gz')):
# the `!` means we're running a command line statement (rather than python)
!wget ftp://ftp.ensembl.org/pub/release-94/fasta/mus_musculus/cdna/Mus_musculus.GRCm38.cdna.all.fa.gz
else: print('Mouse transcriptome already downloaded!')
```
### Now we can build the index
```
if not (os.path.isfile('mouse_transcripts.idx')):
!kallisto index -i mouse_transcripts.idx Mus_musculus.GRCm38.cdna.all.fa.gz
else: print ('Mouse transcript index already exist!')
```
# Preparing the transcript_to_gene.tsv file to process the single-cell data with kallisto bus
Depending on which transcriptome you used, you will need to create a file translating transcripts to genes. This notebook assumes the file is in `transcript_to_gene.tsv`, for ensembl transcriptomes these can be generated using biomart.
The general format of `transcript_to_gene.tsv` is
```
ENST00000632684.1 ENSG00000282431.1
ENST00000434970.2 ENSG00000237235.2
ENST00000448914.1 ENSG00000228985.1
ENST00000415118.1 ENSG00000223997.1
ENST00000631435.1 ENSG00000282253.1
...
```
To create the `transcript_to_gene.tsv` we fetch and parse the mouse GTF file from ensembl.
The reference GTF files are available at https://uswest.ensembl.org/info/data/ftp/index.html
The mouse ones which we use are at ftp://ftp.ensembl.org/pub/release-94/gtf/mus_musculus
```
#Check if the file was downloaded already before doing wget:
if not (os.path.isfile('Mus_musculus.GRCm38.94.gtf.gz') or os.path.isfile('Mus_musculus.GRCm38.94.gtf')):
# the `!` means we're running a command line statement (rather than python)
!wget ftp://ftp.ensembl.org/pub/release-94/gtf/mus_musculus/Mus_musculus.GRCm38.94.gtf.gz
else: print('Mouse GTF file already downloaded!')
# Unzip the file
!gunzip ./Mus_musculus.GRCm38.94.gtf.gz
```
## Create transcript_to_gene.tsv
Now we can use the cells below to parse the GTF file and keep only the transcript mapping as a tsv file in the format below.
```
ENST00000632684.1 ENSG00000282431.1
ENST00000434970.2 ENSG00000237235.2
ENST00000448914.1 ENSG00000228985.1
```
```
def create_transcript_list(input, use_name = False, use_version = True):
r = {}
for line in input:
if len(line) == 0 or line[0] == '#':
continue
l = line.strip().split('\t')
if l[2] == 'transcript':
info = l[8]
d = {}
for x in info.split('; '):
x = x.strip()
p = x.find(' ')
if p == -1:
continue
k = x[:p]
p = x.find('"',p)
p2 = x.find('"',p+1)
v = x[p+1:p2]
d[k] = v
if 'transcript_id' not in d or 'gene_id' not in d:
continue
tid = d['transcript_id']
gid = d['gene_id']
if use_version:
if 'transcript_version' not in d or 'gene_version' not in d:
continue
tid += '.' + d['transcript_version']
gid += '.' + d['gene_version']
gname = None
if use_name:
if 'gene_name' not in d:
continue
gname = d['gene_name']
if tid in r:
continue
r[tid] = (gid, gname)
return r
def print_output(output, r, use_name = True):
for tid in r:
if use_name:
output.write("%s\t%s\t%s\n"%(tid, r[tid][0], r[tid][1]))
else:
output.write("%s\t%s\n"%(tid, r[tid][0]))
with open('./Mus_musculus.GRCm38.94.gtf') as file:
r = create_transcript_list(file, use_name = False, use_version = True)
with open('transcript_to_gene.tsv', "w+") as output:
print_output(output, r, use_name = False)
print('Created transcript_to_gene.tsv file')
```
# Run kallisto bus
kallisto bus supports several single cell sequencing technologies, as you can see below. We'll be using 10xv3
```
!kallisto bus --list
!kallisto bus -i mouse_transcripts.idx -o out_1k_mouse_brain_v3 -x 10xv3 -t 4 \
./neuron_1k_v3_fastqs/neuron_1k_v3_S1_L001_R1_001.fastq.gz \
./neuron_1k_v3_fastqs/neuron_1k_v3_S1_L001_R2_001.fastq.gz \
./neuron_1k_v3_fastqs/neuron_1k_v3_S1_L002_R1_001.fastq.gz \
./neuron_1k_v3_fastqs/neuron_1k_v3_S1_L002_R2_001.fastq.gz
```
### The `matrix.ec` file
The `matrix.ec` file is generated by kallisto and connects the equivalence class ids to sets of transcripts. The format looks like
~~~
0 0
1 1
2 2
3 3
4 4
...
884398 26558,53383,53384,69915,69931,85319,109252,125730
884399 7750,35941,114698,119265
884400 9585,70083,92571,138545,138546
884401 90512,90513,134202,159456
~~~
```
#load transcript to gene file
tr2g = {}
trlist = []
with open('./transcript_to_gene.tsv') as f:
for line in f:
l = line.split()
tr2g[l[0]] = l[1]
trlist.append(l[0])
genes = list(set(tr2g[t] for t in tr2g))
# load equivalence classes
ecs = {}
with open('./out_1k_mouse_brain_v3/matrix.ec') as f:
for line in f:
l = line.split()
ec = int(l[0])
trs = [int(x) for x in l[1].split(',')]
ecs[ec] = trs
def ec2g(ec):
if ec in ecs:
return list(set(tr2g[trlist[t]] for t in ecs[ec]))
else:
return []
```
### Processing the BUS file
For these notebooks we will work with the text file that `BUStools` produces, rather than the raw `BUS` file.
To install `BUStools` see https://github.com/BUStools/bustools
We discard any barcodes that don't have more than 10 UMIs.
To produce the text file, starting with the `output.bus` file produced by kallisto, we first sort it with bustools:
```
bustools sort -o output.sorted output.bus
```
Then we convert it to txt:
```
bustools text -o output.sorted.txt output.sorted
```
```
#sort bus file
!bustools sort -o ./out_1k_mouse_brain_v3/output.sorted ./out_1k_mouse_brain_v3/output.bus
# convert the sorted busfile to txt
!bustools text -o ./out_1k_mouse_brain_v3/output.sorted.txt ./out_1k_mouse_brain_v3/output.sorted
```
# Plot the bus file results
```
import csv
from collections import defaultdict
# precompute because this is constant per ec
ec2g = {ec:frozenset(tr2g[trlist[t]] for t in ecs[ec]) for ec in ecs}
# first pass: collect gene sets
bcu_gs = dict()
with open('./out_1k_mouse_brain_v3/output.sorted.txt') as f:
rdr = csv.reader(f, delimiter='\t')
for bar,umi,ec,_ in rdr:
gs = ec2g[int(ec)]
if (bar,umi) in bcu_gs:
bcu_gs[bar,umi].intersection_update(gs)
else:
bcu_gs[bar,umi] = set(gs)
# second pass: compute gene counts
cell_gene = defaultdict(lambda: defaultdict(float))
for (bar,umi),gs in bcu_gs.items():
for g in gs:
cell_gene[bar][g] += 1.0 / len(gs)
# finally: filter out barcodes below threshold
cell_gene = {bar:cell_gene[bar] for bar in cell_gene
if sum(cell_gene[bar].values()) >= 10.0}
barcode_hist = collections.defaultdict(int)
for barcode in cell_gene:
cg = cell_gene[barcode]
s = len([cg[g] for g in cg])
barcode_hist[barcode] += s
```
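To make the fractional counting above concrete, here is a toy example (barcode and gene names invented): a UMI whose reads are compatible with two genes contributes 1/2 of a count to each.
```
from collections import defaultdict

toy_bcu_gs = {("AAAC", "umi1"): {"geneA"},
              ("AAAC", "umi2"): {"geneA", "geneB"}}

toy_cell_gene = defaultdict(lambda: defaultdict(float))
for (bar, umi), gs in toy_bcu_gs.items():
    for g in gs:
        toy_cell_gene[bar][g] += 1.0 / len(gs)

print(sorted(toy_cell_gene["AAAC"].items()))  # [('geneA', 1.5), ('geneB', 0.5)]
```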
### Download the 10x whitelist
```
# use the raw file URL (the GitHub `blob` page is HTML, not the barcode list)
!wget https://raw.githubusercontent.com/10XGenomics/supernova/master/tenkit/lib/python/tenkit/barcodes/737K-august-2016.txt
whitelist = set(x.strip() for x in open('737K-august-2016.txt'))
```
### Plot counts
```
bcv = [x for b,x in barcode_hist.items() if x > 600 and x < 12000]
_ = plt.hist(bcv,bins=100)
print(len(bcv))
```
This is a technical description of banded ridge regression (see [Nunez-Elizalde, et al., 2019](https://doi.org/10.1016/j.neuroimage.2019.04.012))
```
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = [7., 7.]
matplotlib.rcParams['font.size'] = 15
import os
import time
import numpy as np
np.random.seed(1337)
np.set_printoptions(precision=4, suppress=True)
from scipy.stats import zscore
from matplotlib import pyplot as plt
```
# Banded ridge regression with two feature spaces
When estimating a joint encoding model that consists of two feature spaces, banded ridge regression can be used to fit the model and assign each feature space a different regularization parameter.
$$Y = X_1 \beta_1 + X_2 \beta_2 + \epsilon$$
$$
\begin{align*}
\beta_1 \sim \mathcal{N}\left(0, \lambda_1^{-2} I_p\right)\\
\beta_2 \sim \mathcal{N}\left(0, \lambda_2^{-2} I_q\right)\\
\end{align*}
$$
However, estimating this model requires cross-validating two regularization parameters ($\lambda_1$ and $\lambda_2$), which can be computationally expensive. In this notebook, we describe a trick that can reduce this computational cost.
### Cartesian grid search
Suppose we have two feature spaces $X_1$ and $X_2$, each with a corresponding regularization parameter: $\lambda_1$ and $\lambda_2$. In order to find the optimal regularization parameters given the data, we can use cross-validation. This requires us to test many combinations of $\lambda_1$ and $\lambda_2$. For 10 values of $\lambda_1$ and 10 values of $\lambda_2$, a grid search requires a total of $10^2$ evaluations. In general, for $N$ hyperparameter values and $M$ feature spaces, a grid search requires the evaluation of $N^M$ points. This is expensive and can quickly become computationally intractable.
```
import itertools
lambda_one_candidates = np.logspace(0,3,10)
lambda_two_candidates = np.logspace(0,3,10)
all_pairs = np.asarray(list(itertools.product(lambda_one_candidates, lambda_two_candidates)))
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.loglog(all_pairs[:,0], all_pairs[:,1], 'o')
ax.set_xlabel(r'$\lambda_1$ [log-scale]')
ax.set_ylabel(r'$\lambda_2$ [log-scale]')
__ = fig.suptitle('banded-ridge\nhyperparameter cartesian search')
```
### Polar grid search
Luckily, we can gain some computational efficiency by searching for the hyperparameters $\lambda_1$ and $\lambda_2$ using polar coordinates instead of cartesian coordinates.
NB: This computational trick is nothing more than a change in coordinate systems. The hyperparameters $\lambda_1$ and $\lambda_2$ can always be converted to and from cartesian and polar coordinates without loss of generality.
```
import matplotlib.patches as patches
fig = plt.figure(figsize=(15, 6))
ax = plt.subplot2grid((1, 7), (0, 0), colspan=3, fig=fig)
ax2 = plt.subplot2grid((1, 7), (0, 3), colspan=3, fig=fig)
ax3 = plt.subplot2grid((1, 7), (0, 6), fig=fig)
# Polar sampling
radii = np.logspace(0,4,11)
angles = np.deg2rad(np.linspace(1, 89,11))
for iangle, angle in enumerate(angles):
ypos = np.sin(angle)*np.log10(radii) # lambda2 values [log10-scale]
xpos = np.cos(angle)*np.log10(radii) # lambda1 values [log10-scale]
# plot
ax.plot(xpos, ypos, 'o-', label=r'$\theta=%0.1f$'%np.rad2deg(angle))
ax2.plot(xpos, ypos, color='grey',
marker='o', markerfacecolor='none', markersize=10, alpha=1.0)
# scaling radii
for cdx, radius in enumerate(radii):
radius_color = plt.cm.plasma((float(cdx+1)/len(radii)))
circle = plt.Circle((0,0), np.log10(radius), color=radius_color, fill=False, lw=3.)
ax2.add_artist(circle)
# angle arrow
style="Simple,tail_width=1.0,head_width=10,head_length=8"
kw = dict(arrowstyle=style, color="k")
arrow = patches.FancyArrowPatch((4,2),(2,4), connectionstyle="arc3,rad=0.2", **kw)
ax2.add_patch(arrow)
ax2.text(3,3, r'$\theta$ angle', rotation=-45)
# Add colorbar for the radii
cbar = matplotlib.colorbar.ColorbarBase(ax3,
cmap=plt.cm.plasma,
norm=matplotlib.colors.Normalize(0,10**4),
orientation='vertical')
cbar.set_label(r'scaling factor $\alpha$')
# labels
for axx in [ax, ax2]:
# Set cartesian sampling as grid
axx.set_yticks(np.log10(all_pairs[:,0]), minor=True)
axx.set_xticks(np.log10(all_pairs[:,0]), minor=True)
axx.set_xlabel(r'$\lambda_1$ [log-scale]')
axx.set_ylabel(r'$\lambda_2$ [log-scale]')
axx.set_xticks([0, 1, 2, 3], minor=False)
axx.set_yticks([0, 1, 2, 3], minor=False)
__ = axx.set_xticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
__ = axx.set_yticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
ax.grid(True, which='minor')
__ = ax.set_title('banded ridge hyperparameters: polar grid')
__ = ax2.set_title(r'angle ($\theta$) and scaling ($\alpha$) factors')
plt.tight_layout()
```
Each ray above corresponds to a set of hyperparameter combinations at a fixed ratio.
$$r = \left(\frac{\lambda_2}{\lambda_1}\right)$$
The ratio between $\lambda_1$ and $\lambda_2$ defines an angle:
$$\theta = \text{tan}^{-1}\left(\frac{\lambda_2}{\lambda_1}\right)$$
And the angle defines a ratio:
$$\left(\frac{\lambda_2}{\lambda_1}\right) = \text{tan}(\theta)$$
For example, the angle $\theta=45^{\circ}$ defines a set of solutions where the ratio between $\lambda_1$ and $\lambda_2$ is constant and equal to one:
$$\frac{\lambda_2=1}{\lambda_1=1} = \frac{\lambda_2=10}{\lambda_1=10} = \frac{\lambda_2=100}{\lambda_1=100}$$
These solutions can be expressed as the pairs $(\lambda_1 \alpha, \lambda_2 \alpha)$, where the scaling $\alpha$ varies while the ratio stays fixed:
$$\frac{\lambda_2 \alpha}{\lambda_1 \alpha} = \frac{\lambda_2}{\lambda_1} = r$$
We can define the polar hyperparameter search in terms of ratios and scalings along the unit circle:
```
# Sampling in terms of ratios and scalings
alphas = np.logspace(0,4,11)
ratios = np.logspace(-2,2,25)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
for ratio in ratios:
angle = np.arctan(ratio)
ypos = np.sin(angle)*np.log10(alphas)
xpos = np.cos(angle)*np.log10(alphas)
label = None
if np.allclose(angle, np.deg2rad(45)):
label = r'$\theta = 45^\circ$'
ax.plot(xpos, ypos, 'o-', label=label)
ax.set_xticks([0, 1, 2, 3], minor=False)
ax.set_yticks([0, 1, 2, 3], minor=False)
__ = ax.set_xticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
__ = ax.set_yticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
# Labels
ax.set_xlabel(r'$\lambda_1$ [log-scale]')
ax.set_ylabel(r'$\lambda_2$ [log-scale]')
ax.legend(loc='best')
__ = ax.set_title('banded ridge hyperparameters\npolar grid search')
```
Note that the ridge regression solution corresponds to the 1:1 ratio or equivalently the 45 degree angle ($\theta = 45^\circ$). This means that the banded ridge solution includes the ridge regression solution as a special case. Therefore, banded ridge regression will perform at least as well as ridge regression.
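A quick numerical check of this change of coordinates (a sketch in linear rather than log coordinates): a 1:1 ratio corresponds to $\theta = 45^\circ$, and scaling both hyperparameters leaves the angle unchanged.
```
import numpy as np

lam1, lam2 = 10.0, 10.0
theta = np.arctan2(lam2, lam1)   # angle from the ratio lam2/lam1
alpha = np.hypot(lam1, lam2)     # scaling along the ray

print(np.isclose(np.rad2deg(theta), 45.0))                     # True
print(np.isclose(alpha * np.cos(theta), lam1))                 # True
print(np.isclose(np.arctan2(100 * lam2, 100 * lam1), theta))   # True
```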
## Banded ridge with polar search: efficiency
Recall the Tikhonov regression solution is
$$\hat{\beta}_{Tikhonov} = (X^\top X + C^\top C)^{-1} X^\top Y$$
For banded ridge regression, $C$ consists of a diagonal matrix where the first $p$ entries have a value of $\lambda_1$ and the last $q$ entries have a value of $\lambda_2$.
The solution to the banded ridge regression problem can be expressed as:
$$\hat{\beta}_{banded\_ridge} =
\begin{bmatrix}
\hat{\beta}_1 \\
\hat{\beta}_2
\end{bmatrix} =
\left(\begin{bmatrix}
X_1^\top X_1 & X_1^\top X_2 \\
X_2^\top X_1 & X_2^\top X_2 \\
\end{bmatrix}
+
\begin{bmatrix}
\lambda_1^2 I_p & 0 \\
0 & \lambda_2^2 I_q \\
\end{bmatrix} \right)^{-1}
\begin{bmatrix}
X_1^\top \\ X_2^\top
\end{bmatrix} Y
$$
In order to select the optimal regularization parameters for each feature space $\lambda^\ast_1$ and $\lambda^\ast_2$, we have to perform cross-validation and compute the solution for each candidate $\lambda_1$ and $\lambda_2$:
$$\hat{\beta}_{banded\_ridge}^{\lambda_1', \lambda_2'}$$
Cross-validating $N$ hyperparameter combinations (i.e. $N$ $\lambda_1$ and $\lambda_2$ pairs) will take $N$ times the amount of time it takes to compute the solution for one. This is a costly endeavour because the solution requires computing the inverse:
$$\left(X^\top X + C^\top C\right)^{-1}$$
It turns out that we can achieve better computational performance by using the standard transform and a polar grid search over the hyperparameters $\lambda_1$ and $\lambda_2$. This trick allows us to solve multiple solutions for a given $\frac{\lambda_2}{\lambda_1}$ ratio at the cost of only one singular value decomposition (SVD) and many matrix multiplies. Because the computational cost of a matrix multiplication is lower than the cost of computing an inverse or an SVD, this approach is more efficient than performing $N$ SVD decompositions. And so, performing the hyperparameter search using polar coordinates is faster than using cartesian coordinates.
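As a rough illustration of the cost argument (a sketch only; exact timings depend on your BLAS/LAPACK build and matrix sizes), a matrix multiplication is typically much cheaper than an SVD of the same size:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))

# Time 20 SVD decompositions
start = time.time()
for _ in range(20):
    np.linalg.svd(M)
svd_dur = time.time() - start

# Time 20 matrix multiplications of the same size
start = time.time()
for _ in range(20):
    M @ M
matmul_dur = time.time() - start

print('20 SVDs: %0.4fs, 20 matmuls: %0.4fs' % (svd_dur, matmul_dur))
```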
## Banded ridge with polar search: math
In what follows, we show how to solve the banded ridge regression problem using the standard transform. We show the numerical and mathematical steps required to go from the standard form solution to the Tikhonov solution.
We first generate some random data.
```
nsamples = 1000
npfeatures = 100
nqfeatures = 200
nresponses = 100
X1 = np.random.randn(nsamples, npfeatures)
X2 = np.random.randn(nsamples, nqfeatures)
Ytrain = np.random.randn(nsamples, nresponses)
```
We next compute the banded ridge regression solution directly
$$\hat{\beta}_{banded\_ridge} = (X^\top X + C^\top C)^{-1} X^\top Y$$
where
$$C = \begin{bmatrix}
\lambda_1 I_p & 0 \\
0 & \lambda_2 I_q \\
\end{bmatrix}
$$
In the following examples, we set $\lambda_1 = 30$ and $\lambda_2 = 20$.
```
# Direct banded ridge solution
lambda_one = 30.0
lambda_two = 20.0
bands = np.asarray([lambda_one]*X1.shape[1] + [lambda_two]*X2.shape[1])
C = np.diag(bands)
Xjoint = np.hstack([X1, X2])
LH = np.linalg.inv(np.dot(Xjoint.T, Xjoint) + np.dot(C.T, C))
XTY = np.dot(Xjoint.T, Ytrain)
solution_direct_solution = np.dot(LH, XTY)
```
Using the standard transform, we see that the banded ridge regression problem has a simple structure:
$$A = X C^{-1} = \left[\frac{X_1}{\lambda_1}\ \frac{X_2}{\lambda_2}\right]$$
The standard form solution to the banded ridge regression problem can be expressed as
$$\hat{\beta}_{banded\_standard} = (A^\top A + I_{p+q})^{-1} A^\top Y$$
Finally, the solution to the banded ridge regression problem is obtained by multiplying the standard form solution by $C^{-1}$:
$$\hat{\beta}_{banded\_ridge} = C^{-1}\hat{\beta}_{banded\_standard}$$
In what follows, we validate this result numerically.
```
# Standard form solution
lambda_one = 30.0
lambda_two = 20.0
alpha = 1.0
Xjoint = np.hstack([X1/lambda_one, X2/lambda_two])
LH = np.linalg.inv(np.dot(Xjoint.T, Xjoint) + (alpha**2)*np.eye(Xjoint.shape[1]))
RH = np.dot(Xjoint.T, Ytrain)
bands = np.asarray([lambda_one]*X1.shape[1] + [lambda_two]*X2.shape[1])
Cinv = np.diag(bands**-1)
solution_banded_standard = np.dot(LH, RH)
solution_banded_stand2tik = np.dot(Cinv, solution_banded_standard)
print(np.corrcoef(solution_banded_stand2tik.ravel(), solution_direct_solution.ravel()))
print(np.allclose(solution_banded_stand2tik, solution_direct_solution))
```
Thus far, we have been using the raw regularization parameters $\lambda_1$ and $\lambda_2$. However, note that the ratio of the regularization parameters is the same for both $\lambda_1=30, \lambda_2 = 20$ and $\lambda_1=3, \lambda_2 = 2$:
$$\frac{\lambda_2}{\lambda_1} = \frac{20}{30} = \frac{2}{3}, \qquad (\lambda_1, \lambda_2) = (30, 20) = 10\times(3, 2)$$
The standard form solution to the banded ridge regression problem can be modified to accommodate this fact
$$A = \left[\frac{X_1}{\lambda_1 / 10} \frac{X_2}{\lambda_2/10}\right]$$
However, the factor $10$ needs to be applied back to the solution in order to obtain an exact result
$$\hat{\beta}_{banded\_standard} = 10\times (A^\top A + 10^2 I_{p+q})^{-1} A^\top Y$$
In this example, the scaling factor takes a value of 10. More generally, we refer to the scaling factor as $\alpha$.
```
# Scaling the standard form solution with alpha
lambda_one = 3.0
lambda_two = 2.0
alpha = 10.0
Xjoint = np.hstack([X1/lambda_one, X2/lambda_two])
LH = np.linalg.inv(np.dot(Xjoint.T, Xjoint) + (alpha**2.0)*np.eye(Xjoint.shape[1]))
RH = np.dot(Xjoint.T, Ytrain)
solution_standard_scaled = np.dot(LH, RH)*alpha
# Check the standard form solution
print(np.corrcoef(solution_standard_scaled.ravel(), solution_banded_standard.ravel()))
print(np.allclose(solution_standard_scaled, solution_banded_standard))
# Check the tikhonov solution
solution_bandstd2tik = np.dot(Cinv, solution_standard_scaled)
print(np.corrcoef(solution_bandstd2tik.ravel(), solution_direct_solution.ravel()))
print(np.allclose(solution_bandstd2tik, solution_direct_solution))
```
For any given ratio $r = \frac{\lambda_2}{\lambda_1}$, multiple regularization parameters can be obtained by a simple scaling with a constant $\alpha$:
$$\left(\frac{\lambda_2}{\lambda_1}\right)\alpha = r\alpha$$
And for a given $\alpha$, the standard transform for banded ridge can be expressed as
$$A = \left[\frac{X_1}{\lambda_1 / \alpha} \frac{X_2}{\lambda_2/\alpha}\right]$$
The banded ridge solution for a given $\alpha$ is obtained from the standard transform as:
$$\hat{\beta}_{banded\_ridge} = \alpha C^{-1} (A^\top A + \alpha^2 I_{p+q})^{-1} A^\top Y$$
Expanding $\alpha C^{-1}$:
$$ \hat{\beta}_{banded\_ridge} =
\begin{bmatrix}
\frac{\alpha}{\lambda_1} I_p & 0 \\
0 & \frac{\alpha}{\lambda_2} I_q \\
\end{bmatrix}
(A^\top A + \alpha^2 I_{p+q})^{-1} A^\top Y $$
### Why would you ever want to do that?
It turns out that we can solve this problem very efficiently for multiple values of $\alpha$ by using the fact that the matrices
$$\left(A^\top A + \alpha^2 I \right)^{-1}$$
are simultaneously diagonalizable across values of $\alpha$. This fact allows us to compute solutions for $n$ values of $\alpha$ using only one singular value decomposition (SVD) and $n$ matrix multiplies.
To illustrate, recall the SVD of $A$:
$$U S V^\top = A $$
We substitute $A$ with its SVD decomposition inside the inverse term:
$$\left(A^\top A + \alpha^2 I \right)^{-1} = \left(V S^2 V^\top + \alpha^2 I \right)^{- 1}$$
Because $V$ is an orthonormal matrix and $S^2$ and $\alpha^2 I$ are diagonal matrices, the inverse can be computed elementwise in the basis defined by $V$:
$$
\begin{align*}
\left(A^\top A + \alpha^2 I \right)^{-1} &= \left(V \left(S^2 + \alpha^2 I\right) V^\top \right)^{- 1}\\
\left(A^\top A + \alpha^2 I \right)^{-1} &= V \left(\frac{1}{S^2 + \alpha^2 I}\right) V^\top
\end{align*}
$$
And because
$$A^\top Y = V S U^\top Y,$$
the expression for the standard form solution can be further simplified:
$$\left(A^\top A + \alpha^2 I \right)^{-1} A^\top Y = V \left(\frac{S}{S^2 + \alpha^2 I}\right) U^\top Y$$
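The identity above is easy to verify numerically (a small sketch using random data with arbitrary shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))   # tall matrix, generically full column rank
Y = rng.standard_normal((50, 3))
alpha = 2.0

U, S, VT = np.linalg.svd(A, full_matrices=False)
# Direct computation of the standard form solution
direct = np.linalg.inv(A.T @ A + alpha**2 * np.eye(A.shape[1])) @ A.T @ Y
# Same quantity via the SVD identity
via_svd = VT.T @ np.diag(S / (S**2 + alpha**2)) @ (U.T @ Y)
assert np.allclose(direct, via_svd)
```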
And so, for a given ratio $r = \frac{\lambda_2'}{\lambda_1'}$, the solution becomes:
$$\hat{\beta}^{\lambda_1', \lambda_2'}_{banded\_standard} = V D U^\top Y$$
where
$$D \equiv \left(\frac{S}{S^2 + \alpha^2 I}\right)$$
Because $U^\top Y$ can be cached, computing the solutions for multiple $\alpha$ scalings requires only matrix multiplies and we only have to compute the SVD once per ratio.
```
# Simultaneous diagonalizability trick: one alpha
import time  # used for the timing comparisons
lambda_one = 3.0
lambda_two = 2.0
alpha = 10.0
A = np.hstack([X1/lambda_one, X2/lambda_two])
start_time = time.time()
U, S, VT = np.linalg.svd(A, full_matrices=False)
V = VT.T
UTY = np.dot(U.T, Ytrain)
D = np.diag(S / (S**2 + alpha**2))
solution_svd_standard = np.linalg.multi_dot([V, D, UTY])*alpha
solution_svd_bandstd2tik = np.dot(Cinv, solution_svd_standard)
one_dur = time.time() - start_time
print('Duration: %0.04f'%one_dur)
# Check the standard form solution
print(np.corrcoef(solution_svd_standard.ravel(), solution_banded_standard.ravel()))
print(np.allclose(solution_svd_standard, solution_banded_standard))
# Check the tikhonov solution
print(np.corrcoef(solution_svd_bandstd2tik.ravel(), solution_direct_solution.ravel()))
print(np.allclose(solution_svd_bandstd2tik, solution_direct_solution))
```
We now show that this formulation allows us to compute the solution for multiple values of $\alpha$ much faster.
```
# Simultaneous diagonalizability trick: multiple alphas
lambda_one = 3.0
lambda_two = 2.0
alphas = np.logspace(0,4,10)
A = np.hstack([X1/lambda_one, X2/lambda_two])
start_time = time.time()
U, S, VT = np.linalg.svd(A, full_matrices=False)
V = VT.T
for alpha in alphas:
UTY = np.dot(U.T, Ytrain)
D = np.diag(S / (S**2 + alpha**2))
solution_svd_standard = np.linalg.multi_dot([V, D, UTY])*alpha
solution_svd_bandstd2tik = np.dot(Cinv, solution_svd_standard)
multiple_dur = time.time() - start_time
factor = one_dur*len(alphas) / multiple_dur
print('Total duration for %i alphas: %0.04f'%(len(alphas), multiple_dur))
print('Trick is %0.01f times faster'%factor)
```
## tikreg: banded ridge with polar search
`tikreg` ([github.com/gallantlab/tikreg](http://github.com/gallantlab/tikreg)) is capable of performing a hyperparameter search in polar coordinates. To see it in action, we first generate some fake data.
```
from tikreg import models, utils as tikutils
from tikreg import spatial_priors, temporal_priors
# Generate some data
# Use integer division so each half gets an integer number of responses
B1, (X1, X1tst), (Y1trn, Y1tst) = tikutils.generate_data(n=nsamples, p=npfeatures, v=nresponses//2, testsize=100)
B2, (X2, X2tst), (Y2trn, Y2tst) = tikutils.generate_data(n=nsamples, p=nqfeatures, v=nresponses//2, testsize=100)
Ytrain = np.c_[Y1trn, Y2trn]
Ytest = np.c_[Y1tst, Y2tst]
```
### tikreg example: solving for one set of hyperparameters
We first solve this problem directly so we can check the answer given by `tikreg`. To begin, we use one value for each of $\lambda_1$, $\lambda_2$ and $\alpha$.
```
## DIRECT SOLUTION
# Sampling in terms of ratios and scalings
alphas = np.logspace(0,4,11)
ratios = np.logspace(-2,2,25)
# Solve for one hyperparameter set only
# We will use this solution to test the tikreg implementation
ratio = ratios[16]
alpha = alphas[1]
angle = np.arctan(ratio)
lambda_one = np.cos(angle)*alpha
lambda_two = np.sin(angle)*alpha
bands = np.asarray([lambda_one]*X1.shape[1] + [lambda_two]*X2.shape[1])
Cinv = np.diag(bands**-1)
A = np.hstack([X1/lambda_one, X2/lambda_two])
U, S, VT = np.linalg.svd(A, full_matrices=False)
V = VT.T
UTY = np.dot(U.T, Ytrain)
D = np.diag(S / (S**2 + alpha**2))
solution_svd_standard = np.linalg.multi_dot([V, D, UTY])*alpha
solution_svd_bandstd2tik = np.dot(Cinv, solution_svd_standard)
print(np.rad2deg(angle), ratio, alpha, lambda_one, lambda_two)
```
Next, we solve the problem using `tikreg`. The relevant function is `tikreg.models.estimate_stem_wmvnp()`.
```
# Use tikreg to find the solution
X1_prior = spatial_priors.SphericalPrior(X1, hyparams=[lambda_one])
X2_prior = spatial_priors.SphericalPrior(X2, hyparams=[lambda_two])
# A temporal prior is unnecessary, so we specify no delays
temporal_prior = temporal_priors.SphericalPrior(delays=[0]) # no delays
fit_banded_polar = models.estimate_stem_wmvnp([X1, X2], Ytrain,
[X1tst, X2tst],Ytest,
feature_priors=[X1_prior, X2_prior],
temporal_prior=temporal_prior,
ridges=[alpha],
folds=(1,5), # 1x 5-fold cross-validation
performance=True,
weights=True,
verbosity=False)
```
`tikreg.models.estimate_stem_wmvnp()` solves the regression problem in the dual space using kernel regression. Therefore, the weights given by `tikreg` are the kernel weights ($\hat{\omega} \in \mathbb{R}^{n \times v}$), not the primal weights (i.e. $\hat{\beta}_{banded\_ridge} \in \mathbb{R}^{(p + q) \times v}$). This means that we have to project the kernel weights onto the feature spaces in order to obtain the primal weights:
$$\hat{\beta}_{banded\_ridge} = C^{-1}
\begin{bmatrix}
X_1^\top\\
X_2^\top
\end{bmatrix}
\hat{\omega} \alpha,$$
where
$$C^{-1} = \begin{bmatrix}
\lambda_1^{-1} I_p & 0 \\
0 & \lambda_2^{-1} I_q \\
\end{bmatrix}
$$
We can check the results for numerical accuracy.
```
## Verify the results numerically
lambda_one_scaled, lambda_two_scaled = fit_banded_polar['spatial'].squeeze()
ridge_scaled = fit_banded_polar['ridges'].squeeze()
print(lambda_one_scaled, lambda_two_scaled, ridge_scaled)
kernel_weights = fit_banded_polar['weights']
Xtmp = np.c_[X1/lambda_one_scaled, X2/lambda_two_scaled]
weights_standard = np.dot(Xtmp.T, kernel_weights*alpha)
# Standard form solutions
weights_x1 = weights_standard[:X1.shape[1],:]
weights_x2 = weights_standard[X1.shape[1]:,:]
sweights_x1 = solution_svd_standard[:X1.shape[1],:]
sweights_x2 = solution_svd_standard[X1.shape[1]:,:]
print('Standard transform weights for X1:')
print(weights_x1[:1,:5])
print(sweights_x1[:1,:5])
print(np.corrcoef(weights_x1.ravel(), sweights_x1.ravel())[0,1])
print(np.allclose(weights_x1, sweights_x1))
print('Standard transform weights for X2:')
print(weights_x2[:1,:5])
print(sweights_x2[:1,:5])
print(np.corrcoef(weights_x2.ravel(), sweights_x2.ravel())[0,1])
print(np.allclose(weights_x2, sweights_x2))
assert np.allclose(weights_standard, solution_svd_standard)
# Tikhonov solutions
bands = np.asarray([lambda_one_scaled]*X1.shape[1] + [lambda_two_scaled]*X2.shape[1])
Cinv = np.diag(bands**-1.0)
weights = np.dot(Cinv, weights_standard)
# full eq: np.dot(np.hstack([X1/(lambda_one_scaled**2.), X2/(lambda_two_scaled**2.)]).T, kernel_weights*alpha)
weights_x1t = weights[:X1.shape[1],:]
weights_x2t = weights[X1.shape[1]:,:]
tweights_x1 = solution_svd_bandstd2tik[:X1.shape[1],:]
tweights_x2 = solution_svd_bandstd2tik[X1.shape[1]:,:]
print('Tikhonov weights for joint model')
print(weights_x1t[:1,:5])
print(tweights_x1[:1,:5])
print(np.corrcoef(weights_x1t.ravel(), tweights_x1.ravel())[0,1])
print(weights_x2t[:1,:5])
print(tweights_x2[:1,:5])
print(np.corrcoef(weights_x2t.ravel(), tweights_x2.ravel())[0,1])
print('Full model weights')
print(np.corrcoef(weights.ravel(), solution_svd_bandstd2tik.ravel())[0,1])
assert np.allclose(weights, solution_svd_bandstd2tik)
print(weights.shape)
```
### tikreg: solving for multiple sets of hyperparameters using polar search
We next show how to use `tikreg` to solve for multiple ratios and scalings using polar coordinates.
First, we show the polar grid defined by the range of ratios and scalings that we wish to test:
```
# Sampling in terms of ratios and scalings
alphas = np.logspace(0,4,11)
ratios = np.logspace(-2,2,25)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
for ratio in ratios:
angle = np.arctan(ratio)
ypos = np.sin(angle)*np.log10(alphas)
xpos = np.cos(angle)*np.log10(alphas)
ax.plot(xpos, ypos, 'o-')
ax.set_xticks([0, 1, 2, 3], minor=False)
ax.set_yticks([0, 1, 2, 3], minor=False)
__ = ax.set_xticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
__ = ax.set_yticklabels(['$10^0$', '$10^1$', '$10^2$', '$10^3$'], fontsize=15)
# Labels
ax.set_xlabel(r'$\lambda_1$ [log-scale]')
ax.set_ylabel(r'$\lambda_2$ [log-scale]')
__ = fig.suptitle('banded-ridge\nhyperparameter polar search')
ax.grid(True)
```
The function `tikreg.models.estimate_stem_wmvnp()` already implements polar grid search. In order to use it, a couple of things must be specified.
#### Define the priors
First, one of the feature spaces must serve as the reference feature space. To achieve this, the MVN prior for this feature space must only contain one hyperparameter to test. In this example, we use $X_1$ as our reference feature space and set $\lambda_1=1.0$:
`>>> X1_prior = spatial_priors.SphericalPrior(X1, hyparams=[1.0])`
The other feature spaces will contain the ratios that we wish to test. In this case, the only other feature space is $X_2$:
`>>> X2_prior = spatial_priors.SphericalPrior(X2, hyparams=ratios)`
If more than two feature spaces are used, then all other feature spaces must also specify all the ratios to test.
#### Force hyperparameter normalization
Second, the keyword argument `normalize_hyparams=True` must be set. This forces `tikreg` to normalize the hyperparameters such that they lie on the unit circle. This is achieved by dividing the hyperparameters by their norm. Because $\lambda_1$ is always 1:
$$[\lambda^{\text{normalized}}_1,\lambda^{\text{normalized}}_2] = \frac{[1.0, \lambda_2]}{||[1.0, \lambda_2]||_2} $$
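For example, normalizing a hypothetical ratio $\lambda_2 = 2$ against the fixed reference $\lambda_1 = 1$ (a sketch of the normalization formula above, not of `tikreg` internals):

```python
import numpy as np

hyparams = np.array([1.0, 2.0])  # [lambda_1 (reference), lambda_2 (ratio)]
normalized = hyparams / np.linalg.norm(hyparams)
# The normalized pair lies on the unit circle, and the ratio is unchanged
assert np.isclose(np.linalg.norm(normalized), 1.0)
assert np.isclose(normalized[1] / normalized[0], 2.0)
```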
```
# The first feature space is the reference for the ratio (l2/l1) and so l1 is fixed as 1.
X1_prior = spatial_priors.SphericalPrior(X1, hyparams=[1.0])
# The second feature space contains the ratios to try
# These will be normalized internally in order to sample from the unit circle
X2_prior = spatial_priors.SphericalPrior(X2, hyparams=ratios)
temporal_prior = temporal_priors.SphericalPrior(delays=[0]) # no delays
fit_banded_polar = models.estimate_stem_wmvnp([X1, X2], Ytrain,
[X1tst, X2tst],Ytest,
feature_priors=[X1_prior, X2_prior],
temporal_prior=temporal_prior,
ridges=alphas, # Solution for all alphas
normalize_hyparams=True, # Normalizes the ratios
folds=(1,5),
performance=True,
weights=True,
verbosity=False)
```
Next, we compute the model weights for each voxel separately. To achieve this, we first find the optimal set of hyperparameters ($\lambda_1$, $\lambda_2$, $\alpha$) for each voxel
```
voxelwise_optimal_hyperparameters = fit_banded_polar['optima']
print(voxelwise_optimal_hyperparameters.shape)
```
We then iterate through each voxel and convert the kernel weights ($\omega$) into the primal weights ($\beta$).
```
kernel_weights = fit_banded_polar['weights']
primal_weights = []
for voxid, (temporal_lambda, lambda_one, lambda_two, alpha) in enumerate(voxelwise_optimal_hyperparameters):
ws = np.dot(np.hstack([X1/lambda_one**2, X2/lambda_two**2]).T, kernel_weights[:,voxid]*alpha)
primal_weights.append(ws)
primal_weights = np.asarray(primal_weights).T
print(primal_weights.shape)
```
The code above requires iterating through each individual voxel, which can be very slow. However, because the model is linear, we can compute the individual voxel solutions much faster using matrix multiplication. To achieve this, we first store the optimal hyperparameters ($\alpha$, $\lambda_1$, $\lambda_2$) for each voxel $i$ into vectors:
$$
\begin{align*}
\vec{\alpha} &= [\alpha_1, \alpha_2, \ldots,\alpha_i, \ldots \alpha_v]\\
\vec{\lambda_1} &= [\lambda_{1,1}, \lambda_{1,2}, \ldots,\lambda_{1,i}, \ldots \lambda_{1,v}]\\
\vec{\lambda_2} &= [\lambda_{2,1}, \lambda_{2,2}, \ldots,\lambda_{2,i}, \ldots \lambda_{2,v}]
\end{align*}
$$
where the subscript $i$ in $\alpha_i$, $\lambda_{1,i}$ and $\lambda_{2,i}$ corresponds to the voxel index.
Each vector contains $v$ entries, one for each of the responses (e.g. voxels, neurons, etc).
```
alphas = voxelwise_optimal_hyperparameters[:,-1]
lambda_ones = voxelwise_optimal_hyperparameters[:,1]
lambda_twos = voxelwise_optimal_hyperparameters[:,2]
```
Once we have stored the hyperparameters into separate vectors, we can
use matrix multiplication to convert the estimated kernel weights
($\hat{\omega} \in \mathbb{R}^{n \times v}$) into primal weights ($\hat{\beta}_{banded\_ridge} \in \mathbb{R}^{(p + q) \times v}$) for each feature space separately for each voxel:
$$
\hat{\beta}_1 = X_1^\top \hat{\omega}
\left[\begin{array}{ccc}
\alpha_1 & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & \alpha_v \\
\end{array}\right]
\left[\begin{array}{ccc}
\left(\lambda_{1,1}\right)^{-2} & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & \left(\lambda_{1,v}\right)^{-2} \\
\end{array}\right]
$$
where $\hat{\beta}_1 \in \mathbb{R}^{p \times v}$ is the matrix containing all the $p$ weights for feature space $X_1$ for all $v$ voxels.
$$
\hat{\beta}_2 = X_2^\top \hat{\omega}
\left[\begin{array}{ccc}
\alpha_1 & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & \alpha_v \\
\end{array}\right]
\left[\begin{array}{ccc}
\left(\lambda_{2,1}\right)^{-2} & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & \left(\lambda_{2,v}\right)^{-2} \\
\end{array}\right]
$$
where $\hat{\beta}_2 \in \mathbb{R}^{q \times v}$ is the matrix containing all the $q$ weights for feature space $X_2$ for all $v$ voxels.
To obtain the banded ridge weight estimate, we simply concatenate both weight matrices:
$$\hat{\beta}_{banded\_ridge} =
\begin{bmatrix}
\hat{\beta}_1\\
\hat{\beta}_2
\end{bmatrix}
$$
```
kernel_weights = fit_banded_polar['weights']
weights_x1 = np.linalg.multi_dot([X1.T, kernel_weights, np.diag(alphas), np.diag(lambda_ones**-2)])
weights_x2 = np.linalg.multi_dot([X2.T, kernel_weights, np.diag(alphas), np.diag(lambda_twos**-2)])
weights_joint = np.vstack([weights_x1, weights_x2])
print(weights_joint.shape)
assert np.allclose(weights_joint, primal_weights)
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Science/HeatAndTemperature/heat-and-temperature.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# Heat and Temperature
## Instructions before you start:
### Click the fast forward button ">>" in the menu bar above. Click "Yes" to restart and run.
```
%%html
<button onclick="run_all()">Run All Cells</button>
<script>
function run_all(){
Jupyter.actions.call('jupyter-notebook:run-all-cells-below');
Jupyter.actions.call('jupyter-notebook:save-notebook');
}
</script>
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){ code_shown=false; $('div.input').hide() });
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
```
## Heat and Temperature: How Human Needs Led to the Technologies for Obtaining and Controlling Thermal Energy
## Introduction
In this notebook we will give a brief overview of thermal energy and then move on to the uses of thermal energy in society and how our uses of it have changed throughout history. We will start by identifying and explaining common devices and systems used to generate, transfer, or control thermal energy. Then we will look at how human purposes have led to the development of heat-related materials and technologies.
### Thermal Energy
First, we begin by giving a brief definition of what thermal energy is. A more complete and involved definition will be given in following notebooks. In the most basic sense, thermal energy is the energy we associate with temperature. At a microscopic level it is made up of the energy of vibration, rotation, and motion of the particles and molecules that make up matter. As the particles and molecules move faster they contain more thermal energy and the temperature of the matter is higher.
<img src="images/Matter_Temperature.jpg" alt="MatterTemp" width=500 align=middle>
As the temperature increases, the thermal energy also increases. It's important to note that the thermal energy of an object is given by its internal energy and not by its temperature. We can increase the thermal energy of an object by placing it next to an object warmer than itself. The warmer object will heat the cooler object through the transfer of thermal energy. As the thermal energy of the cooler object increases, the thermal energy of the warmer object will decrease.
Before we move on, let's discuss a few of the ways thermal energy can be generated. It can be generated by chemical reactions, such as when you light a fire: thermal energy is generated by the chemical reactions occurring as the wood burns. Thermal energy can also be generated mechanically by rubbing two objects together. For example, when you rub your hands together, the energy from the motion of your hands is converted to an increase in thermal energy at a microscopic level. The energy in an electrical current can also generate an increase in thermal energy; an electrical cord, for instance, will warm to the touch as electrical energy is converted in part to the thermal energy of the wire. Finally, light energy can be converted to thermal energy, as anyone who has stood in the sunshine can affirm.
This will be as far as we go into a definition about thermal energy as a more precise and complete one will be given in follow up notebooks.
## Devices and Systems used to Generate, Transfer, or Control Thermal Energy
In this section we are going to cover multiple common devices and systems that are used to generate, transfer, or control thermal energy in some way. Most of these devices and systems are seen every day, but we might not be fully aware of them. We will start off by considering devices and systems that we have in our homes. Note that we will explain what the devices do, but we won't go into detail about how they function, since that will be covered in a later notebook. This section is to get the reader familiar with the devices and what they accomplish.
### Exercise
Try to think of and list as many devices and systems as you can that are used to generate, transfer or control thermal energy. If you want, you can add them to the list below. You can work with a partner if you are running out of ideas. The **Add** button adds what you type in the box to the list, the **Remove** button removes the last item in the list, and the **Clear List** button clears the list.
```
import ipywidgets as widgets
from IPython.display import display, Math, Latex
import traitlets
from IPython.display import Markdown
import random
output_list = []
list_output = widgets.HTML('')
text_box = widgets.Text(
value='',
placeholder='Enter list item',
description='',
disabled=False
)
add_item_button = widgets.Button(
value=False,
description='Add',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Add to list',
continuous_update=True
)
remove_item_button = widgets.Button(
value=False,
description='Remove',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Remove from list',
continuous_update=True
)
clear_list_button = widgets.Button(
value=False,
description='Clear List',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Clear List',
continuous_update=True
)
add_item_button.layout.width = '100px'
remove_item_button.layout.width = '100px'
clear_list_button.layout.width = '100px'
clear_list_button.layout.margin = '0px 0px 10px 600px'
list_output.layout.margin = '20px 0px 0px 0px'
list_widget = widgets.HBox(children=[text_box, add_item_button, remove_item_button])
display_widget = widgets.VBox(children=[clear_list_button, list_widget, list_output])
def update_Add(change):
if(not (text_box.value == '')):
output_list.append(text_box.value)
list_length = len(output_list)
text_box.value = ''
list_output.value = "<ul style='list-style-type:circle'>"
for i in range(list_length):
list_output.value = list_output.value + "<li>" + output_list[i] + "</li>"
list_output.value = list_output.value + "</ul>"
def update_Remove(change):
list_length = len(output_list)
if(not(list_length == 0)):
del output_list[list_length-1]
list_output.value = "<ul style='list-style-type:circle'>"
for i in range(len(output_list)):
list_output.value = list_output.value + "<li>" + output_list[i] + "</li>"
list_output.value = list_output.value + "</ul>"
def update_Clear(change):
del output_list[:]
list_output.value = ''
add_item_button.on_click(update_Add)
remove_item_button.on_click(update_Remove)
clear_list_button.on_click(update_Clear)
display_widget
```
Once you have completed the exercise above click the button below to open up the next section. In the section we will cover various devices that may be on your list and explain how they relate to generating, transferring or controlling thermal energy.
```
button_click = False
show_button = widgets.Button(
value=False,
description='Show Next Section',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Show Next Section',
continuous_update=True
)
def update_Show(change):
global button_click
if(not button_click):
display(Markdown(
"""
### Air Conditioner/Furnace & Thermostat
The first devices we cover are common to most homes and buildings: the air conditioner, furnace and thermostat. An air conditioner is used to remove thermal energy and moisture from a home or building, cooling it in turn. A furnace, on the other hand, is used to add thermal energy to a home or building by heating the air in it. The thermostat is used to control the temperature in a home or building by controlling the furnace and air conditioner. Thermostats have advanced enough that they can automatically adjust the temperature based on the time of day according to the preference of the building owner.
All of these devices together create a system that generates, transfers and controls the thermal energy in a home or building. Some devices not yet mentioned, like windows, insulation and building materials, also contribute to this system, since they maintain the thermal energy of a home or building by not allowing it to transfer outside.
### Refrigerator/Freezer
Other common devices found in almost every home are the refrigerator and freezer. A refrigerator or freezer keeps the air inside it at a constant cold temperature. It does this by constantly cycling and cooling the air, similar to an air conditioner. Just as a house is made with insulation to keep the transfer of thermal energy low, a refrigerator or freezer is designed the same way, to keep the cold air inside from escaping. Refrigerators and freezers are much smaller than a house, so keeping the thermal energy low is easier, since it doesn't take as much energy to keep the air cold.
<center><img src="images/refrigerator.png" alt="fridge" width="350"></center>
### Stove/Oven
A device that is the opposite of a refrigerator or freezer is an oven or stove. An electrical oven generates thermal energy by heating up elements inside it, which in turn heat up the air. It is also insulated to keep the thermal energy inside from escaping. A stove generates thermal energy the same way, by heating up elements, but it is not insulated, so pots or pans can transfer the heat from the elements to the food. The amount of thermal energy generated by the elements is controlled with the dials on the stove/oven.
### Barbecue
A barbecue is another device that is used to generate and control thermal energy. This is done by burning natural gas or propane through the burners on the barbecue, or by burning charcoal. The dials control how much propane or natural gas is burned, and the amount of charcoal determines how much thermal energy is generated.
### Water Heater
Another very common device in homes is a hot water heater. A hot water heater uses electricity or natural gas to increase the thermal energy and temperature of the water in its tank. The hot water is then distributed throughout the house when required. Where the hot water ends up is controlled by a person turning on a hot water tap.
### Insulation/Building Materials
Insulation and building materials are both used to control the transfer of thermal energy from one object or space to another. Insulation can be as simple as a layer of air enclosed between two materials, like in a thermos, or it could be a specialized material similar to that used in a house. The insulating material acts as a barrier that stops the thermal energy on one side from transferring to the other. In winter, you usually don't want the inside of the house to be the same temperature as the outside, so we use insulation to prevent this. That said, even with good insulation some thermal energy will constantly be lost from your house in the winter, and your furnace is used to add this thermal energy back to keep the temperature of the house constant.
The building materials used also act as an insulator since they form the shell of the building or object. In North America wood is typically used when building the shell of a house since it's cheap and it's a better insulator than concrete, brick or steel. Structures and objects made of concrete, brick, steel, or some type of metal are typically stronger than wood but are usually a lot more expensive which is why houses are generally made of wood or similar material.
<center><img src="images/thermos.jpg" alt="thermos" width="350"></center>
### Doors and Windows
The other devices common to every home or building are the doors and windows, which also contribute to the insulation of a building. Single-pane windows insulate poorly since glass is not a good insulator, but double-pane windows have been developed with a layer of air or some type of gas between the panes that insulates much better than a single pane.
There are also types of doors that insulate a house better than others. A thin door doesn't insulate well, which is why thicker doors are usually used on the outside of homes. Doors and windows also need to be sealed well, otherwise the outside and inside air will mix and change the thermal energy inside. If they aren't sealed well, the furnace or air conditioner has to use more energy to keep the thermal energy in the house or building constant.
### Fans
A device that is a component of many different appliances around the home is a fan. Fans are used to transfer the thermal energy generated throughout the appliance. A convection oven has a fan that distributes the thermal energy generated by the elements around the oven to heat the food evenly. A fridge, air conditioner, and freezer all have fans to circulate the cooled air around the appliance or home. Fans are commonly used to transfer thermal energy from one space to another, and along with vents or ducts they also control where that thermal energy goes.
### Hair Dryer
A hair dryer is another device that generates and transfers thermal energy. An element inside the hair dryer will generate the heat and thermal energy and then a fan will blow and transfer the heat and thermal energy out.
### Washing Machine and Dryer
The last devices we will look at are a washing machine and dryer. When the washing machine is running a warm or hot cycle it typically heats the water it needs with an internal element but if it is an older version it will get the hot water it needs from the hot water heater. A dryer is used to dry the clothes that are wet from the washing machine. The dryer uses an element to generate thermal energy and a fan transfers the thermal energy throughout the dryer to the clothes.
"""))
button_click = True
show_button.on_click(update_Show)
show_button
```
## Heat-Related Materials and Technologies Developed for Human Purposes
To understand why we developed heat-related materials and technologies for our own purposes, we only need to understand the function of the material or technology. Looking back throughout history, a lot of heat-related materials and technologies were developed to make survival easier. Today we are improving upon those materials and technologies, making them more efficient and easier for people to acquire. There are also heat-related materials and technologies for making our lives more convenient or for generating energy.
We will explain the purposes of the devices and systems mentioned in the section above and then move on to heat-related materials and some other technologies that haven't been listed. The devices and systems above serve a few main purposes. The first is shelter, a place to live in for survival; the second is subsistence, meaning food and water; and the third is convenience.
### Exercise
Before you move on to the next section, go through the devices and technologies listed above, plus any others on your list, and try to determine their purposes. These purposes will have to do with their function, but there are also broader purposes relating to survival, convenience, and others.
- Air Conditioner
- Furnace
- Thermostat
- Refrigerator
- Freezer
- Stove
- Oven
- Barbecue
- Water Heater
- Insulation
- Building Materials
- Doors
- Windows
- Fans
- Hair Dryer
- Washing Machine & Dryer
```
button_click2 = False
show_button2 = widgets.Button(
value=False,
description='Show Next Section',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Show Next Section',
continuous_update=True
)
def update_Show2(change):
global button_click2
if(not button_click2):
display(Markdown("""
#### Shelter
- Air Conditioner/Furnace & Thermostat
- Insulation/Building Materials
- Doors and Windows
- Fans
The devices listed above all have to do with keeping a house or building at a certain temperature. The air conditioner and furnace cool down or heat up the building based on the thermostat setting, while the insulation, building materials, doors, and windows keep the temperature outside from affecting the temperature inside. The fans on the air conditioner or furnace keep the air circulating in a home to maintain a constant temperature over the whole building.
These devices don't necessarily determine your survival, but as most people are aware, we live on a planet with a wide variety of climates. If you lived in a location where temperatures dropped well below zero degrees Celsius, a furnace could determine your survival, and if you lived in a location that reached very hot temperatures, your survival could depend on an air conditioner.
#### Food and Water
- Refrigerator/Freezer
- Stove/Oven
- Barbecue
The devices above have to do with food or water, and they serve both survival and convenience. A refrigerator or freezer allows you to store food and drinks a lot longer than if they were left out in the open. This doesn't necessarily have to do with survival, but without being able to store food you could potentially run out of it. The stove, oven, and barbecue are used to cook food or boil water. Without being able to cook food or boil water you could be exposing yourself to dangerous bacteria or viruses. This is why raw meat needs to be cooked and unclean water needs to be boiled, otherwise you could become quite sick.
#### Convenience
- Water Heater
- Hair Dryer
- Washing Machine and Dryer
These are heat-related devices that are more for convenience than survival. You can argue that the hot water from the water heater, used in the washing machine or for washing hands and dishes, ensures they are cleaned better, but your survival would not be put in jeopardy without it. A hair dryer or clothes dryer is for convenience since they make drying your hair or clothes easier and faster. The washing machine is mostly for convenience since it reduces the amount of work and time it takes to wash clothes.
"""))
button_click2 = True
show_button2.on_click(update_Show2)
show_button2
```
### Purposes
Now let's focus on some of the main purposes for which heat-related materials and technologies have been developed. As mentioned above, these purposes can be reduced to the broad categories of survival and convenience. There is also electrical energy generation, which will be looked at last.
#### Survival
We have already touched on survival in a couple of the sections above. These days there are many heat-related materials and technologies that people do not realize they depend on. In the past, heat-related materials and technologies would have been more obvious since people could understand them, but they have now advanced enough that people might not understand them and take them for granted. We've addressed some heat-related materials and technologies above that relate to our survival, and we will go through a few more to get a decent understanding of how much of an impact materials and technologies have on our survival.
> **House (Shelter)**
>
> An easy example to understand how heat-related materials and technologies have enabled our survival is to examine our shelter or house. This has been looked at above but here we will examine how the heat-related materials and technologies in a house have developed and advanced through history. In particular we will look at the material the home is made of and the technologies used with adjusting the temperature of it.
>
> The first heat-related technology we look at is burning wood for thermal energy, which has been used throughout history and is still used today. Burning coal came next and became popular in the 19th century due to its abundance, availability, and higher heat output than wood. Along with better materials to burn came advances in the ventilation and delivery systems for the thermal energy generated in houses. With natural gas becoming popular in the late 19th century and the discovery of electricity, the modern heating system started to take shape, and roughly a hundred years later the modern furnace was invented; it can now be found in most houses. Through the use of electricity the air conditioner was invented in the early 20th century and has continually increased in popularity since then, advancing in efficiency and technology into the modern unit we know and use in the majority of homes today.
>
> The material a home is made of can have an effect on survival since it affects how much thermal energy escapes from a house and thus how much the temperature changes. Throughout history houses were usually made out of stone, brick, or concrete-like materials, while in North America homes are typically made out of wood or similar material. These materials are still in use today but have advanced to better keep the temperature and thermal energy of a home constant. The biggest advancement would be the development of insulating material placed between the walls of a house to limit the transfer of thermal energy to the outside of the home.
>
> In moderate climates these materials, devices, and technologies would not determine survival on a normal day, but even moderate climates can see extreme changes during which someone's life would be in danger without shelter. This is even more evident at northern latitudes where winter brings extremely low temperatures. Throughout the past a house or shelter has been linked with survival, and it is even more so today since we have spread out to regions with extreme climates.
> **Food/Water**
>
> No matter what climate we live in, the food and water we eat and drink are necessary for our survival. When the water we drink is not clean of bacteria or viruses, we can become quite sick. To rid water of bacteria and viruses it needs to be boiled for a few minutes to ensure they are killed. In the past, burning wood and coal would have been the primary ways to generate the thermal energy needed to boil water. With the discovery of electricity, plenty of technologies and devices, like the stove, have been developed to boil water.
>
> The food we eat can contain harmful bacteria and viruses if it is not properly cleaned or cooked. When food is grown in the ground it needs to be properly cleaned to stop any harmful bacteria or viruses from being eaten; the simplest method of cleaning food is to wash it thoroughly with clean water. When cooking raw meat we need to be sure it is cooked fully, otherwise we could become quite sick from eating it. In the past food would have been cooked with the thermal energy generated by burning wood or coal, but these days we have multiple devices that generate thermal energy to cook food, such as the stove, oven, microwave, and barbecue.
>
> Without having clean water to drink or food that has been properly cleaned and cooked we could ingest some pretty harmful bacteria and viruses that could be life threatening.
<table>
<tr>
<td><img src="images/OldStove.jpg" alt="Old" width="300"/></td>
<td><img src="images/modernstove2.jpg" alt="New" width="300"/></td>
</tr>
</table>
> **Clothing**
>
> The most common example of a heat-related material that is pertinent to our survival would be the clothing we wear. Similar to one's house, clothing is most useful to our survival in harsh climates. During the summer, if it is hot out we can remove clothing to become cooler, but during the winter we need warmer clothing that allows us to survive if we have to go outside. In the past clothing would have been made of animal hides, but we have come a long way in being able to make our own clothing. Through the discovery of new materials and technologies we are able to create material that is thinner and more efficient at retaining or releasing thermal energy. Winter would be considered the harshest season of the year, and it is the reason we have developed specific clothing like jackets, pants, gloves, and headwear to retain your thermal energy and allow you to survive outdoors. During the other seasons clothing is more for comfort and convenience, since you could survive outdoors without it.
<table>
<tr>
<td><img src="images/OldClothing.jpg" alt="Old" width="250"/></td>
<td><img src="images/modernwinterjacket.jpg" alt="New" width="250"/></td>
</tr>
</table>
#### Convenience and Comfort
When you have a house to live in and clean food and water most of the other heat-related materials and technologies are for making your life easier. These materials and devices have advanced in efficiency and technology throughout history to provide you with more convenience, time and comfort. We will discuss a few examples below and look at how they have changed over time.
> **Hot Water**
>
> Hot water can have an effect on survival but these days we use it more for convenience and comfort. If you have ever lost hot water in your home you quickly realize a shower with cold water is not very comfortable. When cleaning clothes, dishes and anything else hot water is more effective and efficient than using cold water.
>
> To obtain hot water you only need to heat up water. In the past water would have been heated using thermal energy from burning wood or some other fuel source. These days with the use of electricity a hot water heater uses an element to heat up the water.
> **Washing Machine and Dryer**
>
> A washing machine and dryer are used for convenience and saving time. In the past clothes and sheets would have been washed by hand using a washboard or similar device and would have been hung on a washing line to dry. Washing by hand is hard work and time consuming and clothes take a long time to air dry. A washing machine takes out the hard work and time spent hand washing clothes, and a dryer reduces the time it takes for the clothes to dry.
<table>
<tr>
<td><img src="images/oldwashingboard.png" alt="Old" width="200"/></td>
<td><img src="images/washingmachine.jpg" alt="New" width="200"/></td>
</tr>
</table>
> **Transportation**
>
> One of the more convenient pieces of technology you likely use every day would be some kind of vehicle for transportation. Without a vehicle it would take you a lot more time to travel. Modern vehicles are enclosed, with air conditioners and heaters making a drive much more comfortable than being exposed to the outside temperature.
>
> In the past, vehicles would have used a steam engine to move. The steam engine worked by burning some fuel source to generate thermal energy, which was transferred to water; the water would boil and generate steam used to move the vehicle. As history moved forward, the modern combustion engine was invented, which ignites a fuel like gasoline and uses the resulting combustion to move the vehicle. The modern combustion engine is much more efficient than the steam engine at moving a vehicle. From its invention until now there have been great advancements in its design and efficiency, allowing it to use less fuel while traveling the same distance.
<table>
<tr>
<td><img src="images/oldcar.jpg" alt="Old" width="300"/></td>
<td><img src="images/moderncar.jpg" alt="New" width="300"/></td>
</tr>
</table>
#### Electrical Energy Generation
Heat-related materials and technologies have long been used to generate mechanical, electrical, and thermal energy. Without electrical energy, all of the devices that we have become so accustomed to and use every day would not work. Electricity is typically generated by an electrical generator that converts mechanical energy into electrical energy. The mechanical energy comes from a rotating turbine, and that rotation comes from generated thermal energy. The thermal energy can be generated using the various methods outlined below.
> **Steam**
>
> A turbine is rotated by the steam generated from heating up water. The thermal energy used in heating the water can come from burning coal or some other material. Another alternative is a nuclear reactor that generates thermal energy from nuclear fission, which is then used to heat the water.
>
> **Combustion**
>
> A combustion turbine uses the combustion generated from igniting some type of gas or fuel to rotate the turbine. A gas like natural gas is commonly used, as well as gasoline or diesel fuel. Smaller generators that are similar to turbines can generate electricity in the same manner through the combustion of gasoline or diesel.
>
> **Geothermal Energy**
>
> Geothermal energy is the thermal energy we find deep within the ground. As seen in the image below the hot water found deep underground is pumped up to the surface and is then used to generate steam which then turns a turbine to generate electrical energy.
<img src="images/geothermal3.png" alt="" width="400" align="middle"/>
> Another use of the geothermal energy is using it to heat and cool your home using a heat pump. The heat pump uses the thermal energy from the water pumped up to either heat or cool your house.
<img src="images/geothermal2.gif" alt="" width="400" align="middle"/>
### Exercise
We have touched on some of the broader purposes of heat-related materials and technologies above: survival, convenience, and electrical energy generation. As an exercise, try to think of any other heat-related devices, materials, or technologies that we haven't discussed and determine their purpose and function. If you are having trouble, you can always ask your teacher or do a web search.
## Conclusion
In this notebook we have addressed what thermal energy is and how heat and temperature are related to it. We discussed multiple devices, technologies, and materials that are used to generate, transfer, or control thermal energy and heat in some fashion. Their purposes were discussed in detail, along with the broader purposes of survival, convenience, and energy generation. We also looked at how houses, food and water, and clothing, and the devices and technology associated with them, developed throughout history. This notebook gives a lot of information about the various heat-related devices, materials, and technologies used in our everyday lives and how much of an impact they have. A more in-depth look into thermal energy and how these devices, materials, and technologies function will be given in later notebooks.
## Image Sites
0. https://chem.libretexts.org/LibreTexts/Mount_Royal_University/Chem_1202/Unit_5%3A_Fundamentals_of_Thermochemistry/5.2%3A_Heat
1. http://outside-in.me/vintage-cook-stoves-for-sale/vintage-cook-stoves-for-sale-old-kitchen-wood-stoves-for-sale-old-cook-stove-yahoo-image-search-results-old-kitchen-cook-vintage-gas-cook-stoves-for-sale/
2. https://pixabay.com/en/kids-stove-children-toys-tin-stove-434916/
3. http://angels4peace.com/modern-kitchen-stove.html/modern-kitchen-stove-simple-popular
4. https://pixabay.com/en/ginsburgconstruction-kitchen-3-330737/
5. http://collections.musee-mccord.qc.ca/scripts/printtour.php?tourID=CW_InuitClothing_EN&Lang=2
6. https://pixabay.com/en/washboard-wash-tub-old-formerly-982990/
7. https://pixabay.com/en/auto-renault-juvaquatre-pkw-old-1661009/
8. https://pixabay.com/en/car-audi-auto-automotive-vehicle-604019/
9. http://photonicswiki.org/index.php?title=Survey_of_Renewables
10. https://sintonair.com/geothermal-heat-pump/
11. http://ca.audubon.org/conservation/geothermal-power
12. https://de.wikipedia.org/wiki/Waschmaschine
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# Loads the mechanical Turk data
Run this script to load the data. Your job after loading the data is to make a 20 questions style game (see www.20q.net )
## Read in the list of movies
There were 250 movies in the list, but we only used the 149 movies that were made in 1980 or later
```
# Read in the list of 250 movies, making sure to remove commas from their names
# (actually, if it has commas, it will be read in as different fields)
import csv
movies = []
with open('movies.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for index, row in enumerate(myreader):
movies.append( ' '.join(row) ) # the join() call merges all fields
# We might like to split this into two tasks, one for movies pre-1980 and one for post-1980
import re # used for "regular-expressions", a method of searching strings
cutoffYear = 1980
oldMovies = []
newMovies = []
for mv in movies:
sp = re.split(r'[()]',mv)
#print(sp)  # output looks like: ['Kill Bill: Vol. 2 ', '2004', '']
year = int(sp[1])
if year < cutoffYear:
oldMovies.append( mv )
else:
newMovies.append( mv )
print("Found", len(newMovies), "new movies (after 1980) and", len(oldMovies), "old movies")
# and for simplicity, let's just rename "newMovies" to "movies"
movies = newMovies
# Make a dictionary that will help us convert movie titles to numbers
Movie2index = {}
for ind, mv in enumerate(movies):
Movie2index[mv] = ind
# sample usage:
print('The movie ', movies[3],' has index', Movie2index[movies[3]])
```
## Read in the list of questions
There were 60 questions but due to a copy-paste error, there were some duplicates, so we only have 44 unique questions
```
# Read in the list of 60 questions
AllQuestions = []
with open('questions60.csv', 'r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
# the rstrip() removes blanks
AllQuestions.append( row[0].rstrip() )
print('Found', len(AllQuestions), 'questions')
questions = list(set(AllQuestions))
print('Found', len(questions), 'unique questions')
# As we did for movies, make a dictionary to convert questions to numbers
Question2index = {}
for index,quest in enumerate( questions ):
Question2index[quest] = index
# sample usage:
print('The question ', questions[40],' has index', Question2index[questions[40]])
```
## Read in the training data
The columns of `X` correspond to questions, and the rows correspond to responses. The entries of `y` are the movie indices. The values of `X` are 1, -1, or 0 (see `YesNoDict` for the encoding)
```
YesNoDict = { "Yes": 1, "No": -1, "Unsure": 0, "": 0 }
# load from csv files
X = []
y = []
with open('MechanicalTurkResults_149movies_X.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
X.append( list(map(int,row)) )
with open('MechanicalTurkResults_149movies_y.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
y = list(map(int,row))
```
# Your turn: train a decision tree classifier
```
from sklearn import tree
# the rest is up to you
```
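One way the empty cell above might be completed — a sketch rather than the official solution; the tiny `X_demo`/`y_demo` stand-ins are assumptions so the snippet runs on its own, and with the real data you would fit on the `X` and `y` loaded earlier:

```python
from sklearn import tree

# Answer vectors use the YesNoDict encoding: 1 = Yes, -1 = No, 0 = Unsure.
# Tiny stand-in data so the sketch is self-contained; replace with X, y above.
X_demo = [[1, -1, 0], [-1, 1, 0], [1, 1, -1]]
y_demo = [0, 1, 2]  # movie indices

clf = tree.DecisionTreeClassifier()
clf.fit(X_demo, y_demo)

# An unpruned tree memorizes the training answers exactly:
print(clf.predict([[1, -1, 0]]))  # → [0]
```

An unpruned tree will fit the Mechanical Turk answers perfectly; you may want to limit `max_depth` to keep the game around 20 questions.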
# Use the trained classifier to play a 20 questions game
You can see the list of movies we trained on here: https://docs.google.com/spreadsheets/d/1-849aPzi8Su_c5HwwDFERrogXjvSaZFfp_y9MHeO1IA/edit?usp=sharing
You may want to use `from sklearn.tree import _tree` and `tree.DecisionTreeClassifier` with commands like `tree_.children_left[node]`, `tree_.value[node]`, `tree_.feature[node]`, and `tree_.threshold[node]`.
```
# up to you
```
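The tree attributes listed above can be turned into a question-asking loop. The sketch below is one possible shape (the `play` helper, the prompt wording, and the tiny stand-in model are assumptions, not the assignment's solution):

```python
from sklearn import tree
from sklearn.tree import _tree

# Tiny stand-in model; with the real data, fit on X/y and pass the full
# `questions` and `movies` lists loaded earlier.
questions = ["Is it animated?", "Was it released after 2000?"]
movies = ["Movie A", "Movie B", "Movie C"]
clf = tree.DecisionTreeClassifier().fit([[1, 1], [1, -1], [-1, 0]], [0, 1, 2])

def play(clf, questions, movies, answer_fn=None):
    """Walk the fitted tree, asking one question per internal node."""
    if answer_fn is None:
        answer_fn = lambda q: input(q + " (Yes/No/Unsure): ")
    tree_ = clf.tree_
    node = 0
    while tree_.feature[node] != _tree.TREE_UNDEFINED:  # stop at a leaf
        ans = answer_fn(questions[tree_.feature[node]])
        value = {"Yes": 1, "No": -1}.get(ans, 0)
        # sklearn convention: go left when the feature value <= threshold
        if value <= tree_.threshold[node]:
            node = tree_.children_left[node]
        else:
            node = tree_.children_right[node]
    return movies[int(tree_.value[node].argmax())]

# Non-interactive usage with canned answers instead of input():
canned = iter(["Yes", "No", "Unsure"])
print(play(clf, questions, movies, answer_fn=lambda q: next(canned)))
```

Passing `answer_fn` makes the loop testable without interactive input; dropping it falls back to `input()` for an actual game.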
# Conditional Statements
## If
if "condition":
> "do something"
print()
```
num = int(input("Please enter a number: "))
if num < 0:
num *= -1 #num = num*-1
print("Result: ", num)
```
## If-Else
if "condition":
> "do something"
else:
> "do something another"
print()
## If-Elif-Else
if "condition":
> "do something"
elif "condition-2":
> "do something another"
elif "condition-3":
> "do something another"
else:
> "do something different"
print()
```
score = int(input("Please enter your score: "))
if score <= 40:
print("Very bad, you should work hard..")
elif score <= 60:
print("Nice but you should work more..")
elif score <= 100:
print("Congratulation!")
else:
print("Invalid score!!!")
x = 8
if x > 4:
x = x+1
elif x > 5:
x = x+2
elif x > 7:
x = x+3
print ("x, ", x)
x = 5
if x > 4:
x = x+1
if x > 5:
x = x+2
if x > 7:
x = x+3
print ("x, ", x)
print("**************ATM Login Panel**************")
kullanici_adi = "Omer"
parola = "hello"
kullanici_adi1 = input("Please enter your username: ")
parola1 = input("Please enter your password: ")
if (kullanici_adi != kullanici_adi1 and parola == parola1):
    print("Your username is incorrect")
elif (kullanici_adi == kullanici_adi1 and parola != parola1):
    print("Your password is incorrect")
elif (kullanici_adi != kullanici_adi1 and parola != parola1):
    print("Your username and password are both incorrect.")
else:
    print("Congratulations, you logged in successfully")
x = 10
if x > 5:
if x >7:
print("İlker and Eylül")
```
# Question
At a particular company, employees’ salaries are raised progressively, calculated using the following formula:
salary = salary + salary x (raise percentage)
Raises are predefined as given below, according to the current salary of the worker. For instance, if the worker’s current salary is less than or equal to 1000 TL, then its salary is increased 15%.
Range | Percentage
--- | ---
0 < salary ≤ 1000 | 15%
1000 < salary ≤ 2000 | 10%
2000 < salary ≤ 3000| 5%
3000 < salary | 2.5%
Write a program that asks the user to enter his/her salary. Then your program should calculate and print the raised salary of the user.
Some example program runs:
Please enter your salary: 1000
Your raised salary is 1150.0.
Please enter your salary: 2500
Your raised salary is 2625.0.
# Answer
```
salary = float(input("Please enter your salary: "))
if salary < 0:
print("Invalid value")
else:
if 0 < salary <= 1000:
salary = salary + salary * 0.15
elif salary <= 2000:
salary = salary + salary * 0.1
elif salary <= 3000:
salary = salary + salary * 0.05
else:
salary = salary + salary * 0.025
print("Your raised salary is", salary)
```
# While Loop Structure
Condition Intro
while <condition>:
> "true statement"
> "true statement"
> condition update
The loop body is repeated as long as the condition evaluates to a logical true value. For the loop to end, the condition must become false.
```
num = 0
while num < 9:
print("Value:{}".format(num))
num = num +1
num2 = 0
while num2 != 9:
print("Value:", num2)
num2 +=2
# infinite loop: the condition variable num3 is never updated inside the loop
# (sayi3 is assigned instead), so the condition can never become false
num3 = int(input("Please enter an integer between 1 and 10: "))
while num3 < 1 or num3 > 10:
    print("Invalid value!!!!")
    sayi3 = int(input("Please enter an integer between 1 and 10: "))
print("Congrats...")
num3 = int(input("Please enter an integer between 1 and 10: "))
while num3 < 1 or num3 > 10:
print("Invalid value!!!!")
num3 = int(input("Please enter an integer between 1 and 10: "))
print("Congrats!!...")
```
## Print items of a list
```
t = [1,2,3,4,5,6]
len(t)
i = 0 #counter value
while (i < len(t)):
#i+=1
print(i, "th item: ", t[i])
i+=1
while True:
a = input("Enter a value: ")
if a == "Exit":
break
```
# For Loop Structure
for "repeated value" in "list":
> "true statement"
It takes the items of the sequence one by one and processes each of them in the loop body.
```
"""
a[0]
a[1]
a[2]
"""
a = [2,45,57]
for i in a:
print(i)
i = 5
i in range(5)
list1 = list(range(5))
list1
list2 = list("Python")
list2
for i in range(5):
print(i)
for i in "Python Course":
print(i)
for i in "Python Course".split():
print(i)
for i in range(10):
if i == 5:
break
print(i)
for i in range(10):
if i == 5:
continue
print(i)
nums = list(range(8))
squares = []
for i in nums:
squares.append(i**2)
print(squares)
nums = list(range(8))
print(nums)
squares = [i**2 for i in nums]
print(squares)
nums = list(range(8))
even_squares = [i**2 for i in nums if i % 2 == 0]
odd_squares = [i**3 for i in nums if i% 2 == 1]
print(even_squares)
print(odd_squares)
mylist = [3,5,12,7,65,35]
sum1 = 0
for i in mylist:
sum1 = sum1 + i #sum += i
print(sum1)
print(sum1)
#print(sum1)
```
### Find the max item of a list
```
mylist = [3,5,12,7,65,35]
print(mylist[0])
max1 = mylist[0]
for i in mylist:
if i > max1:
max1 = i
print("max1: ", max1)
print("i: ", i)
print(max1)
```
### Element-wise summation of two lists
```
list1 = [1,2,3]
list2 = [4,5,6]
list(zip(list1,list2))
list1 = [1,2,3]
list2 = [4,5,6]
list3 = [a + b for a,b in zip(list1,list2)]
print(list3)
```
## Identifiability Test of Linear VAE on KittiMask Dataset
```
%load_ext autoreload
%autoreload 2
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
import ltcl
import numpy as np
import scipy
from ltcl.datasets.kitti import KittiMasks, KittiMasksTwoSample
from ltcl.modules.srnn_cnn_kitti import SRNNConv
from ltcl.modules.linear_vae import AfflineVAECNN
from ltcl.modules.metrics.correlation import correlation
import random
import seaborn as sns
from torchvision import transforms
from torchvision.utils import save_image, make_grid
import matplotlib.pyplot as plt
%matplotlib inline
def show(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest')
```
### Load KittiMask dataset
```
use_cuda = True
device = torch.device("cuda" if use_cuda else "cpu")
latent_size = 10
nc = 1
data = KittiMasksTwoSample(path = '/srv/data/ltcl/data/kitti/',
transform = None,
max_delta_t = 5)
num_validation_samples = 2500
train_data, val_data = random_split(data, [len(data)-num_validation_samples, num_validation_samples])
train_loader = DataLoader(train_data, batch_size=2560, shuffle=True, pin_memory=True)
val_loader = DataLoader(val_data, batch_size=16, shuffle=False, pin_memory=True)
```
### Load model
```
model = SRNNConv.load_from_checkpoint('/srv/data/ltcl/log/weiran/kitti_10_g25_linear/lightning_logs/version_45/checkpoints/epoch=42-step=54565.ckpt',
nc=1, length=1, z_dim=10, z_dim_trans=10, lag=1, hidden_dim=512, bias=False)
for batch in train_loader:
break
batch_size = batch['s1']['xt'].shape[0]
diag_ckp = "/data/datasets/logs/cmu_wyao/linear_vae_kitti_1lag_10_gamma_10_diag/lightning_logs/version_21/checkpoints/epoch=249-step=302999.ckpt"
lin_ckp = "/data/datasets/logs/cmu_wyao/kitti_10_g25_linear/lightning_logs/version_10/checkpoints/epoch=99-step=126899.ckpt"
model = model.load_from_checkpoint(lin_ckp,
z_dim = 10, nc=1, lag=1, diagonal=False, hidden_dim=512)
model.eval()
model.to(device)
```
### Visualization of MCC and causal matrix
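The `correlation` helper imported from `ltcl` is not shown here; for reference, an MCC of this kind can be sketched with SciPy alone, pairing true and estimated latents through the Hungarian algorithm on absolute Spearman correlations (a minimal sketch; the function name and the synthetic sanity check are assumptions):

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.optimize import linear_sum_assignment

def mean_corr_coef(z_true, z_est):
    """z_true, z_est: (n_samples, n_latents) arrays. Returns (MCC, permutation)."""
    d = z_true.shape[1]
    # spearmanr on stacked inputs yields a (2d, 2d) matrix; take the cross block
    rho = spearmanr(z_true, z_est)[0][:d, d:]
    row, col = linear_sum_assignment(-np.abs(rho))  # maximize total |corr|
    return np.abs(rho[row, col]).mean(), col

# Sanity check: a permuted, sign-flipped copy of the latents should give MCC ≈ 1
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 3))
z_hat = z[:, [2, 0, 1]] * np.array([1.0, -1.0, 1.0])
mcc, perm = mean_corr_coef(z, z_hat)
print(mcc)  # ≈ 1.0
```

Spearman correlation is rank-based, so it is invariant to any monotone (not just linear) transform of each latent, which is why it is a common choice for identifiability checks.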
```
for batch in train_loader:
break
batch_size = batch['s1']['yt'].shape[0]
fig = plt.figure(figsize=(2,2))
eps = model.sample(batch["xt"])
eps = eps.detach().cpu().numpy()
component_idx = 7
sns.distplot(eps[:,component_idx], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2});
plt.title("Learned noise prior")
batch = next(iter(train_loader))
batch_size = batch['s1']['yt'].shape[0]
z, mu, logvar = model.forward(batch['s1'])
mu = mu.view(batch_size, -1, latent_size)
A = mu[:,0,:].detach().cpu().numpy().T
B = batch['s1']['yt'][:,0,:].detach().cpu().numpy().T
result = np.zeros(A.shape)
result[:B.shape[0],:B.shape[1]] = B
for i in range(len(A) - len(B)):
result[B.shape[0] + i, :] = np.random.normal(size=B.shape[1])
corr_sorted, sort_idx, mu_sorted = correlation(A, result, method='Spearman')
mu.shape
figure_path = '/home/weiran/figs/'
from matplotlib.backends.backend_pdf import PdfPages
with PdfPages(figure_path + '/mcc_kitti.pdf') as pdf:
fig = plt.figure(figsize=(1.5,1.5))
sns.heatmap(np.abs(corr_sorted[:3,:3]), vmin=0, vmax=1, annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.xlabel("Estimated Latents ")
plt.ylabel("True Latents ")
plt.title("MCC=%.3f"%np.abs(np.diag(corr_sorted)[:3]).mean())
pdf.savefig(fig, bbox_inches="tight")
from scipy.optimize import linear_sum_assignment
# Assumed cost matrix (C is otherwise undefined in this cell): negate the
# absolute correlations so the assignment maximises total |correlation|
C = -np.abs(corr_sorted)
row_ind, col_ind = linear_sum_assignment(C)
A = A[:, col_ind]
mask = np.ones(latent_size)
for i in range(latent_size):
if np.corrcoef(B, A, rowvar=False)[i,latent_size:][i] > 0:
mask[i] = -1
print("Permutation:",col_ind)
print("Sign Flip:", mask)
x_recon, mu, logvar, z = model.forward(batch)
mu = mu.view(batch_size, -1, latent_size)
A = mu[:,0,:].detach().cpu().numpy().T
B = batch['yt'][:,0,:].detach().cpu().numpy().T
result = np.zeros(A.shape)
result[:B.shape[0],:B.shape[1]] = B
for i in range(len(A) - len(B)):
result[B.shape[0] + i, :] = np.random.normal(size=B.shape[1])
corr_sorted, sort_idx, mu_sorted = correlation(A, result, method='Spearman')
fig = plt.figure(figsize=(1.5,4))
sns.heatmap(np.abs(corr_sorted[:,:3]), vmin=0, vmax=1, annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.xlabel("True latents ")
plt.ylabel("Estimated latents ")
plt.title("MCC=%.3f"%np.abs(np.diag(corr_sorted)[:3]).mean())
# Model is not fully accurate (nonlinear)
# May violate the no-instantaneous-relations assumption
col_ind = sort_idx[:3].astype('int')
print('Permutation:', col_ind)
with PdfPages(figure_path + '/xy_kitti.pdf') as pdf:
fig, axs = plt.subplots(3,1, figsize=(1,1.5))
for i in range(3):
ax = axs[i]
ax.scatter(B.T[:,i], A.T[:,col_ind[i]], s=5, color='b', alpha=0.25)
ax.axis('off')
pdf.savefig(fig, bbox_inches="tight")
fig, axs = plt.subplots(1,3, figsize=(7,2))
for i in range(3):
ax = axs[i]
ax.scatter(B.T[:,i], A.T[:,col_ind[i]], s=5, color='b', alpha=0.25)
ax.set_xlabel('Ground truth latent')
ax.set_ylabel('Estimated latent')
ax.grid(True, linestyle=':')
fig.tight_layout()
col_ind = sort_idx.astype('int')
with PdfPages(figure_path + '/B1_kitti.pdf') as pdf:
fig = plt.figure(figsize=(1.5,1.5))
sns.heatmap(np.abs(model.transition_prior.transition.w[0][col_ind][:, col_ind][:3,:3].detach().cpu().numpy()),
annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.title(r'Entries of $\mathbf{B}_1$')
plt.xlabel('Effect index')
plt.ylabel('Cause index')
pdf.savefig(fig, bbox_inches="tight")
fig = plt.figure(figsize=(4.5,4.5))
sns.heatmap(np.abs(model.transition_prior.transition.w[0][col_ind][:, col_ind].detach().cpu().numpy()),
annot=True, fmt=".2f", linewidths=.5, cbar=False, cmap='Greens')
plt.title(r'State transition matrix $B_\tau$')
plt.xlabel('Latent index')
plt.ylabel('Latent index')
```
### Latent traversal
```
fig, axs = plt.subplots(1, 10, figsize=(20,2))
for idx in range(10):
sns.distplot(mu[:,-1,idx].detach().cpu().numpy(), hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
ax = axs[idx])
fig.tight_layout()
fixed_img = torch.tensor(data.__getitem__(205)['s1']['xt'], device='cpu')
x_recon_img, mu_img, logvar_img, z_img = model.net(fixed_img)
fig = plt.figure(figsize=(3,3))
show(make_grid(torch.cat((fixed_img[1:2], x_recon_img[1:2]), 0).detach().cpu(), pad_value=1))
plt.axis('off')
from torch.distributions.normal import Normal
lowest_prob = 0.05
n_steps = 6
normals = [Normal(z[:, i].mean(), z[:, i].std()) for i in range(10)]
interpolation = torch.linspace(lowest_prob, 1 - lowest_prob, steps=n_steps)
mu_img = mu_img.detach()[1:2]
from torch.distributions.normal import Normal
traverse_idx = col_ind[2]
with PdfPages(figure_path + '/traversal_kitti_%d.pdf'%traverse_idx) as pdf:
mus = normals[traverse_idx].icdf(interpolation)
samples = [ ]
for step in range(n_steps):
z_trav = mu_img.clone()
z_trav[0, traverse_idx] = mus[step]
sample = F.sigmoid(model.net._decode(z_trav)).data
samples.append(sample[0].detach().cpu())
fig = plt.figure(figsize=(n_steps*1,1))
show(make_grid(samples, pad_value=1, nrow=n_steps))
plt.axis('off');
pdf.savefig(fig, bbox_inches="tight")
```
#### Vertical position
```
traverse_idx = 6 #[7,6,2,8]
```
#### Horizontal position
```
traverse_idx = 1  # horizontal position; candidate latents: [7, 6, 2, 8]
mus = normals[traverse_idx].icdf(interpolation)  # recompute quantiles for this latent
samples = []
for step in range(n_steps):
    z_trav = mu_img.detach().clone()
    z_trav[0, traverse_idx] = mus[step]
    sample = F.sigmoid(model.net._decode(z_trav)).data
    samples.append(sample[0].detach().cpu())
fig = plt.figure(figsize=(n_steps*2, 2))
show(make_grid(samples, pad_value=1, nrow=n_steps))
```
#### Mask size
```
traverse_idx = 2  # mask size; candidate latents: [7, 6, 2, 8]
mus = normals[traverse_idx].icdf(interpolation)  # recompute quantiles for this latent
samples = []
for step in range(n_steps):
    z_trav = mu_img.detach().clone()
    z_trav[0, traverse_idx] = mus[step]
    sample = F.sigmoid(model.net._decode(z_trav)).data
    samples.append(sample[0].detach().cpu())
fig = plt.figure(figsize=(n_steps*2, 2))
show(make_grid(samples, pad_value=1, nrow=n_steps))
```
#### Mask rotation
This is an extra latent not found by SlowVAE. It has causal relations with the mask size and the x and y positions.
```
traverse_idx = 8  # mask rotation; candidate latents: [7, 6, 2, 8]
mus = normals[traverse_idx].icdf(interpolation)  # recompute quantiles for this latent
samples = []
for step in range(n_steps):
    z_trav = mu_img.detach().clone()
    z_trav[0, traverse_idx] = mus[step]
    sample = F.sigmoid(model.net._decode(z_trav)).data
    samples.append(sample[0].detach().cpu())
fig = plt.figure(figsize=(n_steps*2, 2))
show(make_grid(samples, pad_value=1, nrow=n_steps))
```
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Part of Speech Basics
The challenge of correctly identifying parts of speech is summed up nicely in the [spaCy docs](https://spacy.io/usage/linguistic-features):
<div class="alert alert-info" style="margin: 20px">Processing raw text intelligently is difficult: most words are rare, and it's common for words that look completely different to mean almost the same thing. The same words in a different order can mean something completely different. Even splitting text into useful word-like units can be difficult in many languages. While it's possible to solve some problems starting from only the raw characters, it's usually better to use linguistic knowledge to add useful information. That's exactly what spaCy is designed to do: you put in raw text, and get back a **Doc** object, that comes with a variety of annotations.</div>
In this section we'll take a closer look at coarse POS tags (noun, verb, adjective) and fine-grained tags (plural noun, past-tense verb, superlative adjective).
```
# Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# Create a simple Doc object
doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")
```
## View token tags
Recall that you can obtain a particular token by its index position.
* To view the coarse POS tag use `token.pos_`
* To view the fine-grained tag use `token.tag_`
* To view the description of either type of tag use `spacy.explain(tag)`
<div class="alert alert-success">Note that `token.pos` and `token.tag` return integer hash values; by adding the underscores we get the text equivalent that lives in **doc.vocab**.</div>
```
# Print the full text:
print(doc.text)
# Print the fifth word and associated tags:
print(doc[4].text, doc[4].pos_, doc[4].tag_, spacy.explain(doc[4].tag_))
```
We can apply this technique to the entire Doc object:
```
for token in doc:
print(f'{token.text:{10}} {token.pos_:{8}} {token.tag_:{6}} {spacy.explain(token.tag_)}')
```
## Coarse-grained Part-of-speech Tags
Every token is assigned a POS Tag from the following list:
<table><tr><th>POS</th><th>DESCRIPTION</th><th>EXAMPLES</th></tr>
<tr><td>ADJ</td><td>adjective</td><td>*big, old, green, incomprehensible, first*</td></tr>
<tr><td>ADP</td><td>adposition</td><td>*in, to, during*</td></tr>
<tr><td>ADV</td><td>adverb</td><td>*very, tomorrow, down, where, there*</td></tr>
<tr><td>AUX</td><td>auxiliary</td><td>*is, has (done), will (do), should (do)*</td></tr>
<tr><td>CONJ</td><td>conjunction</td><td>*and, or, but*</td></tr>
<tr><td>CCONJ</td><td>coordinating conjunction</td><td>*and, or, but*</td></tr>
<tr><td>DET</td><td>determiner</td><td>*a, an, the*</td></tr>
<tr><td>INTJ</td><td>interjection</td><td>*psst, ouch, bravo, hello*</td></tr>
<tr><td>NOUN</td><td>noun</td><td>*girl, cat, tree, air, beauty*</td></tr>
<tr><td>NUM</td><td>numeral</td><td>*1, 2017, one, seventy-seven, IV, MMXIV*</td></tr>
<tr><td>PART</td><td>particle</td><td>*'s, not,*</td></tr>
<tr><td>PRON</td><td>pronoun</td><td>*I, you, he, she, myself, themselves, somebody*</td></tr>
<tr><td>PROPN</td><td>proper noun</td><td>*Mary, John, London, NATO, HBO*</td></tr>
<tr><td>PUNCT</td><td>punctuation</td><td>*., (, ), ?*</td></tr>
<tr><td>SCONJ</td><td>subordinating conjunction</td><td>*if, while, that*</td></tr>
<tr><td>SYM</td><td>symbol</td><td>*$, %, §, ©, +, −, ×, ÷, =, :), 😝*</td></tr>
<tr><td>VERB</td><td>verb</td><td>*run, runs, running, eat, ate, eating*</td></tr>
<tr><td>X</td><td>other</td><td>*sfpksdpsxmsa*</td></tr>
<tr><td>SPACE</td><td>space</td><td></td></tr>
</table>
___
## Fine-grained Part-of-speech Tags
Tokens are subsequently given a fine-grained tag as determined by morphology:
<table>
<tr><th>POS</th><th>Description</th><th>Fine-grained Tag</th><th>Description</th><th>Morphology</th></tr>
<tr><td>ADJ</td><td>adjective</td><td>AFX</td><td>affix</td><td>Hyph=yes</td></tr>
<tr><td>ADJ</td><td></td><td>JJ</td><td>adjective</td><td>Degree=pos</td></tr>
<tr><td>ADJ</td><td></td><td>JJR</td><td>adjective, comparative</td><td>Degree=comp</td></tr>
<tr><td>ADJ</td><td></td><td>JJS</td><td>adjective, superlative</td><td>Degree=sup</td></tr>
<tr><td>ADJ</td><td></td><td>PDT</td><td>predeterminer</td><td>AdjType=pdt PronType=prn</td></tr>
<tr><td>ADJ</td><td></td><td>PRP\$</td><td>pronoun, possessive</td><td>PronType=prs Poss=yes</td></tr>
<tr><td>ADJ</td><td></td><td>WDT</td><td>wh-determiner</td><td>PronType=int rel</td></tr>
<tr><td>ADJ</td><td></td><td>WP\$</td><td>wh-pronoun, possessive</td><td>Poss=yes PronType=int rel</td></tr>
<tr><td>ADP</td><td>adposition</td><td>IN</td><td>conjunction, subordinating or preposition</td><td></td></tr>
<tr><td>ADV</td><td>adverb</td><td>EX</td><td>existential there</td><td>AdvType=ex</td></tr>
<tr><td>ADV</td><td></td><td>RB</td><td>adverb</td><td>Degree=pos</td></tr>
<tr><td>ADV</td><td></td><td>RBR</td><td>adverb, comparative</td><td>Degree=comp</td></tr>
<tr><td>ADV</td><td></td><td>RBS</td><td>adverb, superlative</td><td>Degree=sup</td></tr>
<tr><td>ADV</td><td></td><td>WRB</td><td>wh-adverb</td><td>PronType=int rel</td></tr>
<tr><td>CONJ</td><td>conjunction</td><td>CC</td><td>conjunction, coordinating</td><td>ConjType=coor</td></tr>
<tr><td>DET</td><td>determiner</td><td>DT</td><td>determiner</td><td></td></tr>
<tr><td>INTJ</td><td>interjection</td><td>UH</td><td>interjection</td><td></td></tr>
<tr><td>NOUN</td><td>noun</td><td>NN</td><td>noun, singular or mass</td><td>Number=sing</td></tr>
<tr><td>NOUN</td><td></td><td>NNS</td><td>noun, plural</td><td>Number=plur</td></tr>
<tr><td>NOUN</td><td></td><td>WP</td><td>wh-pronoun, personal</td><td>PronType=int rel</td></tr>
<tr><td>NUM</td><td>numeral</td><td>CD</td><td>cardinal number</td><td>NumType=card</td></tr>
<tr><td>PART</td><td>particle</td><td>POS</td><td>possessive ending</td><td>Poss=yes</td></tr>
<tr><td>PART</td><td></td><td>RP</td><td>adverb, particle</td><td></td></tr>
<tr><td>PART</td><td></td><td>TO</td><td>infinitival to</td><td>PartType=inf VerbForm=inf</td></tr>
<tr><td>PRON</td><td>pronoun</td><td>PRP</td><td>pronoun, personal</td><td>PronType=prs</td></tr>
<tr><td>PROPN</td><td>proper noun</td><td>NNP</td><td>noun, proper singular</td><td>NounType=prop Number=sing</td></tr>
<tr><td>PROPN</td><td></td><td>NNPS</td><td>noun, proper plural</td><td>NounType=prop Number=plur</td></tr>
<tr><td>PUNCT</td><td>punctuation</td><td>-LRB-</td><td>left round bracket</td><td>PunctType=brck PunctSide=ini</td></tr>
<tr><td>PUNCT</td><td></td><td>-RRB-</td><td>right round bracket</td><td>PunctType=brck PunctSide=fin</td></tr>
<tr><td>PUNCT</td><td></td><td>,</td><td>punctuation mark, comma</td><td>PunctType=comm</td></tr>
<tr><td>PUNCT</td><td></td><td>:</td><td>punctuation mark, colon or ellipsis</td><td></td></tr>
<tr><td>PUNCT</td><td></td><td>.</td><td>punctuation mark, sentence closer</td><td>PunctType=peri</td></tr>
<tr><td>PUNCT</td><td></td><td>''</td><td>closing quotation mark</td><td>PunctType=quot PunctSide=fin</td></tr>
<tr><td>PUNCT</td><td></td><td>""</td><td>closing quotation mark</td><td>PunctType=quot PunctSide=fin</td></tr>
<tr><td>PUNCT</td><td></td><td>``</td><td>opening quotation mark</td><td>PunctType=quot PunctSide=ini</td></tr>
<tr><td>PUNCT</td><td></td><td>HYPH</td><td>punctuation mark, hyphen</td><td>PunctType=dash</td></tr>
<tr><td>PUNCT</td><td></td><td>LS</td><td>list item marker</td><td>NumType=ord</td></tr>
<tr><td>PUNCT</td><td></td><td>NFP</td><td>superfluous punctuation</td><td></td></tr>
<tr><td>SYM</td><td>symbol</td><td>#</td><td>symbol, number sign</td><td>SymType=numbersign</td></tr>
<tr><td>SYM</td><td></td><td>\$</td><td>symbol, currency</td><td>SymType=currency</td></tr>
<tr><td>SYM</td><td></td><td>SYM</td><td>symbol</td><td></td></tr>
<tr><td>VERB</td><td>verb</td><td>BES</td><td>auxiliary "be"</td><td></td></tr>
<tr><td>VERB</td><td></td><td>HVS</td><td>forms of "have"</td><td></td></tr>
<tr><td>VERB</td><td></td><td>MD</td><td>verb, modal auxiliary</td><td>VerbType=mod</td></tr>
<tr><td>VERB</td><td></td><td>VB</td><td>verb, base form</td><td>VerbForm=inf</td></tr>
<tr><td>VERB</td><td></td><td>VBD</td><td>verb, past tense</td><td>VerbForm=fin Tense=past</td></tr>
<tr><td>VERB</td><td></td><td>VBG</td><td>verb, gerund or present participle</td><td>VerbForm=part Tense=pres Aspect=prog</td></tr>
<tr><td>VERB</td><td></td><td>VBN</td><td>verb, past participle</td><td>VerbForm=part Tense=past Aspect=perf</td></tr>
<tr><td>VERB</td><td></td><td>VBP</td><td>verb, non-3rd person singular present</td><td>VerbForm=fin Tense=pres</td></tr>
<tr><td>VERB</td><td></td><td>VBZ</td><td>verb, 3rd person singular present</td><td>VerbForm=fin Tense=pres Number=sing Person=3</td></tr>
<tr><td>X</td><td>other</td><td>ADD</td><td>email</td><td></td></tr>
<tr><td>X</td><td></td><td>FW</td><td>foreign word</td><td>Foreign=yes</td></tr>
<tr><td>X</td><td></td><td>GW</td><td>additional word in multi-word expression</td><td></td></tr>
<tr><td>X</td><td></td><td>XX</td><td>unknown</td><td></td></tr>
<tr><td>SPACE</td><td>space</td><td>_SP</td><td>space</td><td></td></tr>
<tr><td></td><td></td><td>NIL</td><td>missing tag</td><td></td></tr>
</table>
For a current list of tags for all languages visit https://spacy.io/api/annotation#pos-tagging
## Working with POS Tags
In the English language, the same string of characters can have different meanings, even within the same sentence. For this reason, morphology is important. **spaCy** uses machine learning algorithms to best predict the use of a token in a sentence. Is *"I read books on NLP"* present or past tense? Is *wind* a verb or a noun?
```
doc = nlp(u'I read books on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')
doc = nlp(u'I read a book on NLP.')
r = doc[1]
print(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')
```
In the first example, with no other cues to work from, spaCy assumed that ***read*** was present tense.<br>In the second example the present tense form would be ***I am reading a book***, so spaCy assigned the past tense.
## Counting POS Tags
The `Doc.count_by()` method accepts a specific token attribute as its argument, and returns a frequency count of the given attribute as a dictionary object. Keys in the dictionary are the integer values of the given attribute ID, and values are the frequency. Counts of zero are not included.
```
doc = nlp(u"The quick brown fox jumped over the lazy dog's back.")
# Count the frequencies of different coarse-grained POS tags:
POS_counts = doc.count_by(spacy.attrs.POS)
POS_counts
```
This isn't very helpful until you decode the attribute ID:
```
doc.vocab[83].text
```
### Create a frequency list of POS tags from the entire document
Since `POS_counts` returns a dictionary, we can obtain its key-value pairs with `POS_counts.items()`.<br>By sorting these pairs we get each tag ID and its count, in order.
```
for k,v in sorted(POS_counts.items()):
print(f'{k}. {doc.vocab[k].text:{5}}: {v}')
# Count the different fine-grained tags:
TAG_counts = doc.count_by(spacy.attrs.TAG)
for k,v in sorted(TAG_counts.items()):
print(f'{k}. {doc.vocab[k].text:{4}}: {v}')
```
<div class="alert alert-success">**Why did the ID numbers get so big?** In spaCy, certain text values are hardcoded into `Doc.vocab` and take up the first several hundred ID numbers. Strings like 'NOUN' and 'VERB' are used frequently by internal operations. Others, like fine-grained tags, are assigned hash values as needed.</div>
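As a quick illustration of this two-way mapping (a sketch added here, not part of the original lesson; `my_custom_tag` is an invented name), a blank pipeline is enough to inspect the shared string store:

```python
import spacy

# A blank pipeline is enough to inspect the shared string store
nlp = spacy.blank('en')
strings = nlp.vocab.strings

# Common symbols like 'NOUN' have small fixed IDs; other strings get hash values
noun_id = strings['NOUN']
print(noun_id, strings[noun_id])          # the ID round-trips back to 'NOUN'

custom_id = strings.add('my_custom_tag')  # adding stores the string for reverse lookup
print(strings[custom_id])                 # 'my_custom_tag'
```

`token.pos` and `token.tag` are exactly these integer IDs, which is why adding the underscore gives you back the text form.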
<div class="alert alert-success">**Why don't SPACE tags appear?** In spaCy, only strings of spaces (two or more) are assigned tokens. Single spaces are not.</div>
```
# Count the different dependencies:
DEP_counts = doc.count_by(spacy.attrs.DEP)
for k,v in sorted(DEP_counts.items()):
print(f'{k}. {doc.vocab[k].text:{4}}: {v}')
```
Here we've shown `spacy.attrs.POS`, `spacy.attrs.TAG` and `spacy.attrs.DEP`.<br>Refer back to the **Vocabulary and Matching** lecture from the previous section for a table of **Other token attributes**.
___
## Fine-grained POS Tag Examples
These are some grammatical examples (shown in **bold**) of specific fine-grained tags. We've removed punctuation and rarely used tags:
<table>
<tr><th>POS</th><th>TAG</th><th>DESCRIPTION</th><th>EXAMPLE</th></tr>
<tr><td>ADJ</td><td>AFX</td><td>affix</td><td>The Flintstones were a **pre**-historic family.</td></tr>
<tr><td>ADJ</td><td>JJ</td><td>adjective</td><td>This is a **good** sentence.</td></tr>
<tr><td>ADJ</td><td>JJR</td><td>adjective, comparative</td><td>This is a **better** sentence.</td></tr>
<tr><td>ADJ</td><td>JJS</td><td>adjective, superlative</td><td>This is the **best** sentence.</td></tr>
<tr><td>ADJ</td><td>PDT</td><td>predeterminer</td><td>Waking up is **half** the battle.</td></tr>
<tr><td>ADJ</td><td>PRP\$</td><td>pronoun, possessive</td><td>**His** arm hurts.</td></tr>
<tr><td>ADJ</td><td>WDT</td><td>wh-determiner</td><td>It's blue, **which** is odd.</td></tr>
<tr><td>ADJ</td><td>WP\$</td><td>wh-pronoun, possessive</td><td>We don't know **whose** it is.</td></tr>
<tr><td>ADP</td><td>IN</td><td>conjunction, subordinating or preposition</td><td>It arrived **in** a box.</td></tr>
<tr><td>ADV</td><td>EX</td><td>existential there</td><td>**There** is cake.</td></tr>
<tr><td>ADV</td><td>RB</td><td>adverb</td><td>He ran **quickly**.</td></tr>
<tr><td>ADV</td><td>RBR</td><td>adverb, comparative</td><td>He ran **quicker**.</td></tr>
<tr><td>ADV</td><td>RBS</td><td>adverb, superlative</td><td>He ran **fastest**.</td></tr>
<tr><td>ADV</td><td>WRB</td><td>wh-adverb</td><td>**When** was that?</td></tr>
<tr><td>CONJ</td><td>CC</td><td>conjunction, coordinating</td><td>The balloon popped **and** everyone jumped.</td></tr>
<tr><td>DET</td><td>DT</td><td>determiner</td><td>**This** is **a** sentence.</td></tr>
<tr><td>INTJ</td><td>UH</td><td>interjection</td><td>**Um**, I don't know.</td></tr>
<tr><td>NOUN</td><td>NN</td><td>noun, singular or mass</td><td>This is a **sentence**.</td></tr>
<tr><td>NOUN</td><td>NNS</td><td>noun, plural</td><td>These are **words**.</td></tr>
<tr><td>NOUN</td><td>WP</td><td>wh-pronoun, personal</td><td>**Who** was that?</td></tr>
<tr><td>NUM</td><td>CD</td><td>cardinal number</td><td>I want **three** things.</td></tr>
<tr><td>PART</td><td>POS</td><td>possessive ending</td><td>Fred**'s** name is short.</td></tr>
<tr><td>PART</td><td>RP</td><td>adverb, particle</td><td>Put it **back**!</td></tr>
<tr><td>PART</td><td>TO</td><td>infinitival to</td><td>I want **to** go.</td></tr>
<tr><td>PRON</td><td>PRP</td><td>pronoun, personal</td><td>**I** want **you** to go.</td></tr>
<tr><td>PROPN</td><td>NNP</td><td>noun, proper singular</td><td>**Kilroy** was here.</td></tr>
<tr><td>PROPN</td><td>NNPS</td><td>noun, proper plural</td><td>The **Flintstones** were a pre-historic family.</td></tr>
<tr><td>VERB</td><td>MD</td><td>verb, modal auxiliary</td><td>This **could** work.</td></tr>
<tr><td>VERB</td><td>VB</td><td>verb, base form</td><td>I want to **go**.</td></tr>
<tr><td>VERB</td><td>VBD</td><td>verb, past tense</td><td>This **was** a sentence.</td></tr>
<tr><td>VERB</td><td>VBG</td><td>verb, gerund or present participle</td><td>I am **going**.</td></tr>
<tr><td>VERB</td><td>VBN</td><td>verb, past participle</td><td>The treasure was **lost**.</td></tr>
<tr><td>VERB</td><td>VBP</td><td>verb, non-3rd person singular present</td><td>I **want** to go.</td></tr>
<tr><td>VERB</td><td>VBZ</td><td>verb, 3rd person singular present</td><td>He **wants** to go.</td></tr>
</table>
### Up Next: Visualizing POS
# Load data
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(precision=3, linewidth=120)
import sys
sys.path.append("..")
from scem import ebm, stein, kernel, util, gen
from scem.datasets import *
import matplotlib.pyplot as plt
from tqdm import notebook as tqdm
dname = "banana"
p = load_data(dname, D=2, noise_std = 0.0, seed=0, itanh=False, whiten=False )
x = p.sample(1000)
x_eval = p.sample(100)
import torch
import torch.nn as nn
import numpy as np
import torch.distributions as td
class EBM(nn.Module):
    '''
    Latent-variable EBM: the energy E(x, z) is a two-layer MLP
    applied to the concatenated observation and latent.
    '''
    def __init__(self, Dx, Dz, Dh):
        super().__init__()
        self.layer_1 = nn.Linear(Dz + Dx, Dh)
        self.layer_2 = nn.Linear(Dh, 1)

    def forward(self, X, Z):
        elu = nn.ELU()
        XZ = torch.cat([X, Z], dim=-1)
        h = elu(self.layer_1(XZ))
        E = self.layer_2(h)
        return E[:, 0]
# dimensionality of model
Dx = 2
Dz = 2
Dh = 100
lebm = ebm.LatentEBMAdapter(EBM(Dx, Dz, Dh), var_type_obs='continuous', var_type_latent='continuous')
def weight_reset(m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
m.reset_parameters()
X = torch.as_tensor(x, dtype=torch.float32)
# define kernel
med2 = util.pt_meddistance(X)**2/8
#kx = kernel.KIMQ(b=-0.5, c=1, s2=med2)
kx = kernel.KGauss(torch.tensor([med2]))
# q(z|x)
# cs = gen.CSFactorisedGaussian(Dx, Dz, Dh)
cs = gen.Implicit(Dx, Dz, Dh)
# optimizer settings
learning_rate_q = 1e-3
weight_decay_q = 0
optimizer_q = torch.optim.Adam(cs.parameters(), lr=learning_rate_q,
weight_decay=weight_decay_q)
approx_score = stein.ApproximateScore(
lebm.score_joint_obs, cs)
approx_score.n_sample = 100
# optimizer settings for p(x)
learning_rate_p = 1e-3 # !!!
weight_decay_p = 0
optimizer_p = torch.optim.Adam(lebm.parameters(), lr=learning_rate_p,
weight_decay=weight_decay_p)
iter_p = 3000
iter_q = 1
batch_size = 100
def inner_loop(niter, X, cs, opt):
for i in range(niter):
Z = cs.sample(1, X)
Z = Z.squeeze(0)
zmed2 = util.pt_meddistance(Z)**2/8
kz = kernel.KIMQ(b=-0.5, c=1, s2=zmed2)
loss = stein.kcsd_ustat(
X, Z, lebm.score_joint_latent, kx, kz)
opt.zero_grad()
loss.backward(retain_graph=False)
opt.step()
# print('kcsd', loss.item())
#inner_loop(400, X)
losses = []
with tqdm.tqdm(range(iter_p)) as ts:
for t in ts:
# reset q(z|x)'s weight
# cs.apply(weight_reset)
# optimizer_q = torch.optim.Adam(cs.parameters(), lr=learning_rate_q,
# weight_decay=weight_decay_q)
perm = torch.randperm(X.shape[0]).detach()
idx = perm[:batch_size]
X_ = X[idx].detach()
inner_loop(iter_q, X_, cs, optimizer_q)
loss = stein.ksd_ustat(X_, approx_score, kx)
losses += [loss.item()]
# if (t%10 == 0):
# loss_ = stein.ksd_ustat(X, approx_score, kx).item()
# print(loss.item(), loss_)
ts.set_postfix(loss=loss.item())
optimizer_p.zero_grad()
loss.backward(retain_graph=False)
optimizer_p.step()
plt.plot(losses)
# form a grid for numerical normalisation
from itertools import product
ngrid = 50
grid = torch.linspace(-10, 10, ngrid)
xz_eval = torch.tensor(list(product(*[grid]*4)))
x_eval = xz_eval[:,:2]
z_eval = xz_eval[:,2:]
# true log density
E_true = p.logpdf_multiple(torch.tensor(list(product(*[grid]*2))))
E_true -= E_true.max()
# EBM log density
E_eval = lebm(x_eval, z_eval).reshape(ngrid,ngrid,ngrid,ngrid).exp().detach()
E_eval /= E_eval.sum()
E_eval = E_eval.sum(-1).sum(-1)
E_eval.log_()
E_eval -= E_eval.max()
# E_eval = E_eval.sum(-1).sum(-1)
def normalise(E):
if isinstance(E, np.ndarray):
E = np.exp(E)
else:
E = E.exp()
E /= E.sum()
return E
fig, axes = plt.subplots(2,2,figsize=(6,6), sharex=True, sharey=True)
ax = axes[0,0]
ax.pcolor(grid, grid,E_true.reshape(ngrid,ngrid), shading='auto', vmin=-10, vmax=0)
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.05)
ax = axes[1,0]
ax.pcolor(grid, grid,normalise(E_true).reshape(ngrid,ngrid), shading='auto')
ax = axes[0,1]
ax.pcolor(grid, grid,E_eval,shading='auto', vmin=-10, vmax=0, )
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.05)
ax = axes[1,1]
ax.pcolor(grid, grid,normalise(E_eval),shading='auto' )
ax.scatter(x[:,1], x[:,0], c="r", s=1, alpha=0.0)
axes[0,0].set_ylabel("logp")
axes[1,0].set_ylabel("p")
axes[0,0].set_title("data")
axes[0,1].set_title("KSD")
axes[0,0].set_xlim(-10,10)
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Data transformation (collaborative filtering)
It is commonly observed in real-world datasets that users have different types of interactions with items. In addition, the same type of interaction (e.g., clicking an item on a website, viewing a movie, etc.) may appear more than once in a user's history. Given that this is a typical problem in practical recommendation system design, this notebook shares data transformation techniques that can be used for such scenarios.
Specifically, the discussion in this notebook is only applicable to collaborative filtering algorithms.
## 0 Global settings
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import pandas as pd
import numpy as np
import datetime
import math
print("System version: {}".format(sys.version))
```
## 1 Data creation
Two dummy datasets are created to illustrate the ideas in the notebook.
### 1.1 Explicit feedback
In the "explicit feedback" scenario, interactions between users and items are numerical/ordinal **ratings** or binary preferences such as **like** or **dislike**. These types of interactions are termed *explicit feedback*.
The following shows a dummy data for the explicit rating type of feedback. In the data,
* There are 3 users whose IDs are 1, 2, 3.
* There are 3 items whose IDs are 1, 2, 3.
* Each item is rated by a user only once, so even when users interact with items at multiple timestamps, the ratings stay the same. This is seen in use cases such as movie recommendation, where users' ratings do not change dramatically over a short period of time.
* Timestamps of when the ratings are given are also recorded.
```
data1 = pd.DataFrame({
"UserId": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3],
"ItemId": [1, 1, 2, 2, 2, 1, 2, 1, 2, 3, 3, 3, 3, 3, 1],
"Rating": [4, 4, 3, 3, 3, 4, 5, 4, 5, 5, 5, 5, 5, 5, 4],
"Timestamp": [
'2000-01-01', '2000-01-01', '2000-01-02', '2000-01-02', '2000-01-02',
'2000-01-01', '2000-01-01', '2000-01-03', '2000-01-03', '2000-01-03',
'2000-01-01', '2000-01-03', '2000-01-03', '2000-01-03', '2000-01-04'
]
})
data1
```
### 1.2 Implicit feedback
Often there are no explicit ratings or preferences given by users; that is, the interactions are usually implicit. For example, a user may purchase something on a website, click an item in a mobile app, or order food from a restaurant. Such information may reflect users' preferences towards the items in an **implicit** manner.
The following dataset is created to illustrate the implicit feedback scenario.
In the data,
* There are 3 users whose IDs are 1, 2, 3.
* There are 3 items whose IDs are 1, 2, 3.
* There are no ratings or other explicit feedback given by the users; often only the types of events are recorded. In this dummy dataset, for illustration purposes, there are three types of interactions between users and items: **click**, **add** and **purchase**, meaning "click on the item", "add the item into the cart" and "purchase the item", respectively.
* Sometimes there is other contextual or associative information available for the types of interactions. E.g., "time-spent on visiting a site before clicking" etc. For simplicity, only the type of interactions is considered in this notebook.
* The timestamp of each interaction is also given.
```
data2 = pd.DataFrame({
"UserId": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3],
"ItemId": [1, 1, 2, 2, 2, 1, 2, 1, 2, 3, 3, 3, 3, 3, 1],
"Type": [
'click', 'click', 'click', 'click', 'purchase',
'click', 'purchase', 'add', 'purchase', 'purchase',
'click', 'click', 'add', 'purchase', 'click'
],
"Timestamp": [
'2000-01-01', '2000-01-01', '2000-01-02', '2000-01-02', '2000-01-02',
'2000-01-01', '2000-01-01', '2000-01-03', '2000-01-03', '2000-01-03',
'2000-01-01', '2000-01-03', '2000-01-03', '2000-01-03', '2000-01-04'
]
})
data2
```
## 2 Data transformation
Many collaborative filtering algorithms are built on a user-item sparse matrix. This requires that the input data used to build the recommender contain unique user-item pairs.
For explicit feedback datasets, this can simply be done by deduplicating the repeated user-item-rating tuples.
```
data1 = data1.drop_duplicates()
data1
```
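To make the requirement concrete, the deduplicated frame can be turned into the user-item sparse matrix most collaborative filtering algorithms consume. The following is a sketch using `scipy.sparse` (the zero-based index mapping via category codes is an illustrative choice, not something this notebook prescribes); note that `csr_matrix` *sums* duplicate `(row, col)` entries, which is exactly why the deduplication above matters:

```python
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix

# The deduplicated explicit-feedback pairs from above
data1 = pd.DataFrame({
    "UserId": [1, 1, 2, 2, 2, 3, 3],
    "ItemId": [1, 2, 1, 2, 3, 1, 3],
    "Rating": [4, 3, 4, 5, 5, 4, 5],
})

# Map raw IDs to zero-based row/column indices
user_idx = data1["UserId"].astype("category").cat.codes
item_idx = data1["ItemId"].astype("category").cat.codes

# Duplicated (user, item) pairs would be summed here -- hence the deduplication
matrix = csr_matrix((data1["Rating"], (user_idx, item_idx)))
print(matrix.toarray())
```

Each row is a user, each column an item, and missing interactions stay as implicit zeros in the sparse structure.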
In implicit feedback use cases, there are several ways to perform the deduplication, depending on the requirements of the actual business use case.
### 2.1 Data aggregation
Usually, data is aggregated by user-item pair to generate a score that represents the strength of preference (in some algorithms, like SAR, the score is called an *affinity score*; for simplicity, such scores are hereafter termed *affinity*).
It is worth mentioning that in this case the affinity scores differ from the ratings in the explicit dataset in terms of value distribution. This is usually framed as an [ordinal regression](https://en.wikipedia.org/wiki/Ordinal_regression) problem, which has been studied in [Koren's paper](https://pdfs.semanticscholar.org/934a/729409d6fbd9894a94d4af66bd82222b5515.pdf). In this case, the algorithm used for training a recommender should be chosen carefully to account for the distribution of the affinity scores rather than discrete integer values.
#### 2.1.1 Count
The simplest technique is to count the interactions between each user and item to produce an affinity score. The following aggregates the counts of user-item interactions in `data2`, regardless of interaction type.
```
data2_count = data2.groupby(['UserId', 'ItemId']).agg({'Timestamp': 'count'}).reset_index()
data2_count.columns = ['UserId', 'ItemId', 'Affinity']
data2_count
```
#### 2.1.2 Weighted count
It can be useful to treat the different interaction types as weights in the count aggregation. For example, assume the weights of the three types "click", "add", and "purchase" are 1, 2, and 3, respectively. A weighted count can then be computed as follows.
```
# Add column of weights
data2_w = data2.copy()
conditions = [
data2_w['Type'] == 'click',
data2_w['Type'] == 'add',
data2_w['Type'] == 'purchase'
]
choices = [1, 2, 3]
# Map interaction types to numeric weights (0 for any unrecognised type)
data2_w['Weight'] = np.select(conditions, choices, default=0)
# Do count with weight.
data2_wcount = data2_w.groupby(['UserId', 'ItemId'])['Weight'].sum().reset_index()
data2_wcount.columns = ['UserId', 'ItemId', 'Affinity']
data2_wcount
```
#### 2.1.3 Time dependent count
In many scenarios, time dependency plays a critical role in preparing a dataset for building a collaborative filtering model that captures the drift of user interests over time. A common technique for achieving a time-dependent count is to add a time-decay factor to the counting. This technique is used in [SAR](https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/sar_deep_dive.ipynb). The formula for the affinity score of each user-item pair is
$$a_{ij}=\sum_k w_k \left(\frac{1}{2}\right)^{\frac{t_0-t_k}{T}} $$
where $a_{ij}$ is the affinity score, $w_k$ is the interaction weight, $t_0$ is a reference time, $t_k$ is the timestamp for the $k$-th interaction, and $T$ is a hyperparameter that controls the speed of decay.
The following shows how SAR applies time decay in aggregating counts for the implicit feedback scenario.
In this case, we use 5 days as the half-life parameter, and use the latest time in the dataset as the time reference.
```
T = 5
t_ref = pd.to_datetime(data2_w['Timestamp']).max()
# Calculate the weighted count with time decay.
data2_w['Timedecay'] = data2_w.apply(
lambda x: x['Weight'] * np.power(0.5, (t_ref - pd.to_datetime(x['Timestamp'])).days / T),
axis=1
)
data2_w
```
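The row-wise `apply` above works but can be slow on large frames. Below is a vectorized sketch of the same half-life decay, assuming a frame with the same `Weight` and `Timestamp` columns as `data2_w`; the helper name and the tiny example frame are hypothetical.

```python
import numpy as np
import pandas as pd

def add_time_decay(df, half_life_days=5, time_col='Timestamp', weight_col='Weight'):
    """Return weight * 0.5 ** (age_in_days / half_life), computed column-wise."""
    ts = pd.to_datetime(df[time_col])
    t_ref = ts.max()                      # latest time as the reference
    age_days = (t_ref - ts).dt.days       # age of each interaction in days
    return df[weight_col] * np.power(0.5, age_days / half_life_days)

# Tiny hypothetical example: a 5-day-old interaction is halved.
df = pd.DataFrame({'Weight': [3, 1],
                   'Timestamp': ['2020-01-06', '2020-01-01']})
df['Timedecay'] = add_time_decay(df)
```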
Affinity scores of user-item pairs can then be calculated by summing the 'Timedecay' column values.
```
data2_wt = data2_w.groupby(['UserId', 'ItemId'])['Timedecay'].sum().reset_index()
data2_wt.columns = ['UserId', 'ItemId', 'Affinity']
data2_wt
```
### 2.3 Negative sampling
The above aggregations are based on the assumption that user-item interactions can be interpreted as preferences by taking into account factors like the number of interactions, interaction weights, time decay, etc. Sometimes these assumptions are biased, and only the interactions themselves matter. That is, the original dataset with implicit interaction records can be binarized into one that contains only 1 or 0, indicating whether or not a user has interacted with an item.
For example, the following generates data that contains existing interactions between users and items.
```
data2_b = data2[['UserId', 'ItemId']].copy()
data2_b['Feedback'] = 1
data2_b = data2_b.drop_duplicates()
data2_b
```
"Negative sampling" is a technique that samples negative feedback. As with the aggregation techniques, negative feedback can be defined differently in different scenarios. In this case, for example, we can regard the items that a user has not interacted with as those that the user does not like. This may be a strong assumption in many use cases, but it is a reasonable way to build a model when the number of interactions between users and items is small.
The following shows that, on top of `data2_b`, two additional negative samples are generated, tagged with "0" in the "Feedback" column.
```
users = data2['UserId'].unique()
items = data2['ItemId'].unique()
interaction_lst = []
for user in users:
for item in items:
interaction_lst.append([user, item, 0])
data_all = pd.DataFrame(data=interaction_lst, columns=["UserId", "ItemId", "FeedbackAll"])
data_all
data2_ns = pd.merge(data_all, data2_b, on=['UserId', 'ItemId'], how='outer').fillna(0).drop('FeedbackAll', axis=1)
data2_ns
```
Also note that negative sampling may affect the count-based aggregation scheme as well. That is, counts may start from 0 instead of 1, where 0 means there is no interaction between the user and the item.
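A minimal sketch of that count-from-zero variant, with a tiny hypothetical frame standing in for `data2` (and requiring pandas >= 1.2 for `merge(how='cross')`): the full user-item cross join is merged with the observed counts, so unobserved pairs receive a count of 0.

```python
import pandas as pd

# Hypothetical stand-in for data2.
data2 = pd.DataFrame({'UserId': [1, 1, 2],
                      'ItemId': ['a', 'a', 'b'],
                      'Timestamp': ['t1', 't2', 't3']})

# Observed interaction counts per user-item pair.
counts = (data2.groupby(['UserId', 'ItemId'])
               .size().reset_index(name='Affinity'))

# Cross join of all users with all items.
users = data2[['UserId']].drop_duplicates()
items = data2[['ItemId']].drop_duplicates()
all_pairs = users.merge(items, how='cross')

# Unobserved pairs get an affinity count of 0.
affinity = all_pairs.merge(counts, on=['UserId', 'ItemId'], how='left')
affinity['Affinity'] = affinity['Affinity'].fillna(0)
```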
# References
1. X. He *et al*, Neural Collaborative Filtering, WWW 2017.
2. Y. Hu *et al*, Collaborative filtering for implicit feedback datasets, ICDM 2008.
3. Simple Algorithm for Recommendation (SAR), url: https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/sar_deep_dive.ipynb
4. Y. Koren and J. Sill, OrdRec: an ordinal model for predicting personalized item rating distributions, RecSys 2011.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from sklearn.model_selection import train_test_split
dataset = pd.read_csv('Churn_Modelling.csv')
d_X = dataset.iloc[:, 3:13]
d_y = dataset.iloc[:, 13]
d_X = pd.get_dummies(d_X)
d_X.drop(['Geography_France', 'Gender_Female'], axis=1, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(d_X, d_y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
EPOCH = 100
BATCH_SIZE = 10
LR = 0.001
torch_net = torch.nn.Sequential(
torch.nn.Linear(11, 6),
torch.nn.ReLU(),
torch.nn.Linear(6, 2),
torch.nn.ReLU(),
)
t_x_train = torch.from_numpy(X_train).float()
t_y_train = torch.from_numpy(y_train.to_numpy()).long()
t_x_test = torch.from_numpy(X_test).float()
t_y_test = torch.from_numpy(y_test.to_numpy()).long()
torch_dataset = Data.TensorDataset(t_x_train, t_y_train)
train_loader = Data.DataLoader(
dataset=torch_dataset,
batch_size=BATCH_SIZE,
shuffle=True)
# To use GPU
v_x_test = Variable(t_x_test).cuda()
t_y_test = t_y_test.cuda()
torch_net.cuda()
optimizer = torch.optim.Adam(torch_net.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()
losses_his = []
for epoch in range(EPOCH):
for step, (x, y) in enumerate(train_loader):
# Move the mini-batch tensors to the GPU
b_x = Variable(x).cuda() # Tensor on GPU
b_y = Variable(y).cuda() # Tensor on GPU
output = torch_net(b_x)
loss = loss_func(output, b_y)
losses_his.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % 10 == 0 and step % 500 == 0:
test_output = torch_net(v_x_test)
pred_y = torch.max(F.softmax(test_output, dim=1), 1)[1].cuda().data.squeeze() # run the prediction on the GPU
accuracy = (pred_y == t_y_test).float().mean().item()
print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.4f' % accuracy)
test_output = torch_net(v_x_test)
test_output_pro = F.softmax(test_output, dim=1)[:,1].data.cpu().numpy()
from sklearn.metrics import accuracy_score, roc_auc_score
accuracy_score(t_y_test.cpu().numpy(), test_output_pro>0.5)
roc_auc_score(y_test, test_output_pro, average='macro')
```
## Use the model as if we have a new customer
```
new_customer = np.array([[600, 40, 3, 6000, 2, 1, 1, 50000,0, 0, 1]], dtype='float')
new_customer = sc.transform(new_customer)
new_customer = torch.from_numpy(new_customer).float()
new_customer = Variable(new_customer).cuda()
new_customer_pre = torch_net(new_customer)
F.softmax(new_customer_pre, dim=1).data.cpu().numpy()
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
model = XGBClassifier(max_depth=6, learning_rate=0.12, n_estimators=14, objective="binary:logistic", subsample=0.6, seed=0)
scores = cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy")
print(scores.mean())
print(np.std(scores))
scores
```
```
import heapq
class Solution:
def minEatingSpeed(self, piles, H: int) -> int:
total_sum = sum(piles)
left, right = 1, max(piles) + 1
min_speed = float('inf')
while left < right:
mid = left + (right - left) // 2
eat_sum = mid * H  # mid is the candidate eating speed
if eat_sum >= total_sum: # eating fast enough
right = mid
min_speed = min(min_speed, mid)
elif eat_sum < total_sum: # eating too slowly
left = mid + 1
return min_speed
# def helper(self, speed, piles, H):
# heap_piles = [-x for x in piles]
# heapq.heapify(heap_piles)
# sum_val = 0
# for i in range(H):
# if heap_piles:
# pile = -heapq.heappop(heap_piles)
# if pile > speed:
# sum_val += speed
# pile -= speed
# heapq.heappush(heap_piles, -pile)
# else:
# sum_val += pile
# else:
# break
# return sum_val
import heapq
class Solution:
def minEatingSpeed(self, piles, H: int) -> int:
total_sum = sum(piles)
left, right = 1, max(piles) + 1
min_speed = float('inf')
while left < right:
mid = left + (right - left) // 2
eat_sum = self.helper(mid, piles, H)
if eat_sum >= total_sum: # eating fast enough
right = mid
min_speed = min(min_speed, mid)
elif eat_sum < total_sum: # eating too slowly
left = mid + 1
return min_speed
def helper(self, speed, piles, H):
sum_val = speed * H
pile_sum = sum(piles)
if sum_val > pile_sum:
return pile_sum
else:
return sum_val
class Solution:
def minEatingSpeed(self, piles, H: int) -> int:
total_sum = sum(piles)
left, right = 1, max(piles) + 1
min_speed = float('inf')
while left < right:
mid = left + (right - left) // 2
eat_sum = self.helper(mid, piles, H)
# print(left, right, mid, eat_sum, total_sum)
if eat_sum > total_sum: # eating fast enough
right = mid
min_speed = min(min_speed, mid)
elif eat_sum < total_sum: # eating too slowly
left = mid + 1
return min_speed
def helper(self, speed, piles, H):
total_sum = 0
count = 0
idx = 0
while idx < len(piles):
p = piles[idx]
if p < speed:
total_sum += p
count += 1
if count == H:
return total_sum
else:
div_val = p // speed
mod_val = p % speed
div_val = min(div_val, H - count)
total_sum += div_val * speed
count += div_val
if count == H:
return total_sum
total_sum += mod_val
count += 1
if count == H:
print(speed, p, idx, count, total_sum, H)
return total_sum
idx += 1
return total_sum
class Solution:
def minEatingSpeed(self, piles, H: int) -> int:
total_sum = sum(piles)
left, right = 1, max(piles) + 1
min_speed = float('inf')
while left < right:
mid = left + (right - left) // 2
if self.helper(mid, piles, H): # eating fast enough
right = mid
min_speed = min(min_speed, mid)
else: # eating too slowly
left = mid + 1
return min_speed
def helper(self, speed, piles, H):
cost_time = 0
for p in piles:
# Hours needed for this pile: ceiling of p / speed.
cost_time += (p + speed - 1) // speed
return cost_time <= H
solution = Solution()
solution.minEatingSpeed(piles = [312884470], H = 312884469)
a = [1,2,3,4]
a[:3]
# [312884470]
# 312884469
print(312884470 // 312884469)
print(30%23)
```
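For reference, the search-over-speed idea above can be packaged as a standalone sketch (the function names are mine, not the notebook's) with an exact ceiling division per pile, counting one hour for every started pile:

```python
def can_finish(piles, speed, H):
    """Total hours needed at `speed`, one sitting per pile (ceiling division)."""
    hours = sum((p + speed - 1) // speed for p in piles)
    return hours <= H

def min_eating_speed(piles, H):
    # Binary search over the answer: feasibility is monotone in speed.
    left, right = 1, max(piles)
    while left < right:
        mid = (left + right) // 2
        if can_finish(piles, mid, H):
            right = mid          # feasible: try a slower speed
        else:
            left = mid + 1       # infeasible: must eat faster
    return left
```

`min_eating_speed([3, 6, 7, 11], 8)` returns 4, and the large single-pile case tested above, `min_eating_speed([312884470], 312884469)`, returns 2.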
```
# Cloning the github branch in the '/content/' directory.
# installing the highway-env package.
!pip install git+https://github.com/eleurent/highway-env.git --quiet
# Cloning the github branch in the '/content/' directory.
# installing the finite-mdp package.
!pip install git+https://github.com/eleurent/finite-mdp.git --quiet
# Uninstalling the highway-env package in case an issue has happened
# while coding your new environment.
# Uncomment the command below and execute it once.
# !pip uninstall -y highway-env
# general package imports
import os
import time
# RL specific package imports
import gym
import highway_env
# plotting specific import statements
# corresponding version outputs
import numpy as np
print('numpy: '+np.version.full_version)
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 20})
import matplotlib.image as mpimg
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
import matplotlib
print('matplotlib: '+matplotlib.__version__)
# creating an instance of the highway environment
env_h = gym.make("highway-v0")
# converting the highway environment into a finite MDP
mdp_h = env_h.unwrapped.to_finite_mdp()
print("Lane change task MDP shape: "+str(mdp_h.transition.shape))
# generic function implementation for MDP data plotting
def plot_3d_fig(data, img_name, x_deg=-20, y_deg=-40, show_flag=False):
if not os.path.exists('output'):
os.makedirs('output')
fig = plt.figure(figsize=(10,10), dpi=100)
ax = plt.axes(projection='3d')
X = np.arange(0, mdp_h.transition.shape[0], 1)
Y = np.arange(0, mdp_h.transition.shape[1], 1)
Y, X = np.meshgrid(Y, X)
Z = data
ax.plot_surface(X, Y, Z, cmap='magma', rstride=1, cstride=1, linewidth=0, alpha=0.7)
ax.view_init(x_deg, y_deg)
plt.xlabel("States")
plt.ylabel("Actions")
plt.savefig('output/'+img_name)
# To switch off the display output of plot.
if show_flag == False:
plt.close(fig)
# plotting the deterministic MDP's transition matrix outputs for all states
plot_3d_fig(mdp_h.transition, 'lane_change_task_transition_matrix.png', -25, -45)
# plotting the deterministic MDP's reward matrix outputs for all states
plot_3d_fig(mdp_h.reward, 'lane_change_task_reward_matrix.png', -25, -45)
# storing the value function calculated w/ value iteration algorithm
val_func_array = np.zeros((mdp_h.transition.shape[0], 5))
val_cumu_array = np.zeros((mdp_h.transition.shape[0], 5))
# this evaluates the deterministic policy
# for the deterministic version of the highway environment
def determine_policy(mdp, v, gamma=1.0):
policy = np.zeros(mdp.transition.shape[0])
for s in range(mdp.transition.shape[0]):
q_sa = np.zeros(env.action_space.n)
for a in range(env.action_space.n):
s_ = mdp.transition[s][a]
r = mdp.reward[s][a]
q_sa[a] += (1 * (r + gamma * v[s_]))
policy[s] = np.argmax(q_sa)
return policy
# value iteration algorithm's baseline implementation
def value_iteration(mdp, env, gamma=0.99):
value = np.zeros(mdp.transition.shape[0])
max_iterations = 10000
eps = 1e-10
for i in range(max_iterations):
prev_v = np.copy(value)
for s in range(mdp.transition.shape[0]):
q_sa = np.zeros(env.action_space.n)
for a in range(env.action_space.n):
s_ = mdp.transition[s][a]
r = mdp.reward[s][a]
q_sa[a] += (1 * (r + gamma * prev_v[s_]))
value[s] = max(q_sa)
ind_ = np.argmax(q_sa)
val_func_array[s,ind_] = max(q_sa)
val_cumu_array[s,:] = q_sa
if (np.sum(np.fabs(prev_v - value)) <= eps):
print('Problem converged at iteration %d.' % (i + 1))
break
return value
# inline code execution for value iteration
# and policy determination functions
gamma = 0.99
env = gym.make('highway-v0')
mdp = env.unwrapped.to_finite_mdp()
optimal_value_func = value_iteration(mdp, env, gamma)
start_time = time.time()
policy = determine_policy(mdp, optimal_value_func, gamma)
print("Best Policy Values Determined for the MDP.\n")
print(policy)
# plotting the value function as output
plot_3d_fig(val_func_array, 'value_func_array.png', 40, -45)
plot_3d_fig(val_cumu_array, 'value_cumu_array.png', 40, 145)
# downloading the zip files from the output directory
!zip -r /content/output.zip /content/output/
from google.colab import files
files.download("/content/output.zip")
```
# Emission AI
#### Microsoft AI for Earth Project
AI Monitoring Coal-fired Power Plant Emission from Space
#### Team Members
Ziheng Sun, Ahmed Alnaim, Zack Chester, Daniel Tong
#### Date
4/30/2020-10/30/2021
#### Abstract
The goal is to build a reusable machine learning model that estimates the emissions of coal-fired power plants from satellite observations. The model will be trained on power plant monitoring data collected from EPA eGRID, remotely sensed datasets from TROPOMI on Sentinel-5 Precursor, and meteorological observations from MERRA.
The model takes remote sensing records as inputs and outputs an estimated daily NOx emission volume.
### Step 1: Read CSV
The demo CSV files are located in the folder `data`. The CSV initially contains six columns: Facility ID (EPA code of the power plant), Latitude, Longitude, Date, EPA daily NO2 divided by 1e+05, and TROPOMI NO2_column_number_density (total vertical column of NO2, the ratio of the slant column density of NO2 to the total air mass factor). The [EPA](https://www.epa.gov/egrid), [TROPOMI](http://www.tropomi.eu/), and [MERRA](https://gmao.gsfc.nasa.gov/reanalysis/MERRA/) datasets can all be accessed and retrieved free of charge.
One preprocessing step is to split the date column into three separate columns, as the machine learning model cannot parse date strings as input; dates need to be converted into numeric values. We transform the date column into dayofweek, dayofmonth, and dayofyear. The original date column is excluded from the training dataset to pass the data type checker.
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # Plotting and Visualizing data
from sklearn.model_selection import train_test_split
import os
print(os.listdir("data"))
# Describe the data and get an overview
data = pd.read_csv('data/tropomi_epa_kvps_NO2_2019_56.csv',parse_dates=["Date"])
print("==================>")
print(data.describe())
data['dayofyear'] = data['Date'].dt.dayofyear
data['dayofweek'] = data['Date'].dt.dayofweek
data['dayofmonth'] = data['Date'].dt.day
data = data.drop(columns=["Date"])
print("==================>")
print(data.columns)
# Separating dependent & independent variables
x = data.iloc[:, data.columns != 'EPA_NO2/100000'].values
y = data.iloc[:, data.columns == 'EPA_NO2/100000']
# show the shape of x and y to make sure they have the same length
# Train Test Split at ratio 0.33
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.33)
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
y_train = y_train.ravel()
y_test = y_test.ravel()
print("===================>")
print("X_train's shape: ", x_train.shape)
print("y_train's shape: ", y_train.shape)
print("x_test's shape: ", x_test.shape)
print("y_test's shape: ", y_test.shape)
# print(y_test)
# print(y_train)
```
### Step 2: Train Deep Learning model
Hyperparameter tuning is a laborious task. Here we use a Keras (TensorFlow) model.
```
# Model Import and Build
import tensorflow as tf
# from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import SGD, Adagrad, Adadelta, RMSprop, Adam
model = tf.keras.Sequential(
[
tf.keras.Input(shape=(11,)),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(500, activation="relu"),
layers.Dense(1, activation="sigmoid"),
]
) # No weights at this stage!
# Call the model on a test input
# x = tf.ones((1, 4))
# y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
# lr_schedule = keras.optimizers.schedules.ExponentialDecay(
# initial_learning_rate=1e-2,
# decay_steps=10000,
# decay_rate=0.9)
# sgd = SGD(lr=lr_schedule)
model.summary()
model.compile(optimizer="adadelta", loss="mse", metrics=[tf.keras.metrics.mean_squared_error])
model.fit(x_train, y_train, batch_size=8, validation_split = 0.15, epochs=100)
```
### Step 3: Test ML model
Predict on the test dataset using the trained models
```
# Use the trained models to make predictions
y_test_pred = model.predict(x_test)
```
### Step 4: Visualize the Results
Visualization of the ML results could facilitate the intercomparison of machine learning models and identify the pros and cons of various models in different groups of data samples.
Blue dots are the true observation of EPA. Black dots are the predicted values of machine learning models.
```
def visualizeResults(modelname, x_test, y_test, pred):
# Visualization
## Check the fitting on training set
plt.scatter(x_test[:,3], y_test, color='blue')
plt.scatter(x_test[:,3], pred, color='black')
# plt.scatter(y_test, pred, color='black')
plt.title(modelname + ' Fit on testing set')
plt.xlabel('TROPOMI-Test')
plt.ylabel('EPA-Test')
plt.show()
visualizeResults("Neural Network", x_test, y_test, y_test_pred)
```
### Step 5: Calculate quantitative metrics
For a regression task, the accuracy metrics are normally mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2).
```
from sklearn.metrics import accuracy_score
from sklearn import metrics
def showAccuracyMetrics(mlmethod, model, y_test, y_pred):
print("Model ", mlmethod, " Performance:")
# print(y_test.shape, y_pred.shape)
mae = metrics.mean_absolute_error(y_test, y_pred)
mse = metrics.mean_squared_error(y_test, y_pred)
r2 = metrics.r2_score(y_test, y_pred)
print(" MAE: ", mae)
print(" MSE: ", mse)
print(" R2: ", r2)
# print(y_test, linear_pred)
showAccuracyMetrics("Neural Network: ", model, y_test, y_test_pred)
```
### Step 6: Feature Importance
0 - 'FID',
1 - 'Latitude',
2 - 'Longitude',
3 - 'TROPOMI*1000',
4 - 'Wind (Monthly)',
5 - 'Temp (Monthly)',
6 - 'Precip (Monthly)',
7 - 'Cloud Fraction (Monthly)',
8 - 'dayofyear',
9 - 'dayofweek',
10 - 'dayofmonth'
```
def showImportance(model):
labels = ['FID', 'Latitude', 'Longitude', 'TROPOMI*1000',\
'Wind (Monthly)', 'Temp (Monthly)', 'Precip (Monthly)',\
'Cloud Fraction (Monthly)', 'dayofyear', 'dayofweek', 'dayofmonth']
# get importance
importance = model.best_estimator_.feature_importances_
print(len(labels))
print(importance)
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %s, Score: %.5f' % (labels[i],v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance)
plt.show()
# showImportance(rf_regressor)
```
### Conclusion
This notebook shows how to use machine learning models to predict the emission of coal-fired power plants using satellite observations like TROPOMI and meteorology observations from MERRA.
The results show that the random forest and voting ensemble models perform similarly, with the random forest model slightly better in this case. This is partly because the ensemble is built from the trained linear regression and random forest models, so its results are essentially an average of the two.
The linear regression model outputs values in a narrow range regardless of the variance in the TROPOMI observations, and never produces values greater than 0.3 or less than 0.1. It is not suitable for this prediction task.
##### Final Remarks
Using machine learning to predict ground emission from remote sensed data is possible. More improvements are needed to ensure the accuracy, generality, and stability of the trained models in a long-term operational run. The demonstrated power plant site is in the rural area in Alabama and there is less NO2 emission sources other than the power plant itself. More research is required to make it work for those power plants located in or nearby urban regions, where other emission sources may dominate the NOX in the atmosphere.
### Citation
Please cite this work as:
`Sun, Ziheng, Zack Chester, and Daniel Tong. 2021. "EmissionAI: Ai Monitoring Coal-Fired Power Plant Emission from Space." https://github.com/ZihengSun/EmissionAI `
```
pip list
```
```
import tree.ctutils as ctu
from tree import treeutils
import numpy as np
import pickle
# Calculate merger event parameters
def find_merger(atree, idx=None, aexp_min=0.0):
"""
find indices of merger event from a tree.
(Full tree or main progenitor trunk)
"""
if idx is None:
idx = atree['id'][0]
nprg = 1
merger_list=[]
i = 0
while nprg > 0:
idx = ctu.get_progenitors(atree, idx, main=True)[0]
ind = np.where(atree['id'] == idx)[0]
if atree['aexp'][ind] < aexp_min:
break
nprg = ctu.get_npr(atree, idx)
if nprg > 1:
merger_list.append(i)
i +=1
return merger_list
def merger_mass_ratio(atree, idx=None):
"""
return mass ratio of the given merger event
"""
if idx is None:
idx = atree['id'][0]
prgs = ctu.get_progenitors(atree, idx)
# only for mergers
if len(prgs) > 1:
i_prgs = [np.where(atree['id'] == i)[0] for i in prgs]
mass = []
for iprg in i_prgs:
mass.append(atree['m'][iprg])
return mass
else:
print("This is not a merger")
return 0
def merger_properties_main_prg(atree, idx):
"""
Calculate merger mass ratio for "one" merger event.
if idx == None:
if nout == None:
print("Both idx and nout are missing")
return
else:
if nout == None:
nout = np.where(atree['id'] == idx)[0]
idx = atree['id'][ind]
"""
#prgs = get_progenitors(atree, idx)
#if len(prgs) > 1:
# i_prgs = [np.where(atree['id'] == i)[0] for i in prgs]
i_prgs = np.where(atree['desc_id'] == idx)[0]
print(i_prgs)
id_prgs = atree['id'][i_prgs]
mass_prgs = atree['m'][i_prgs]
#mass_prgs_norm = mass_prgs / sum(mass_prgs)
return mass_prgs
def distance_to(xc, xx):
import numpy as np
return np.sqrt([(xc[0] - xx[0])**2 + (xc[1] - xx[1])**2 + (xc[2] - xx[2])**2])[0]
def extract_halos_within(halos, i_center, info, dist_in_mpc=1.0):
xc = halos['x'][i_center]
yc = halos['y'][i_center]
zc = halos['z'][i_center]
xx = halos['x']
yy = halos['y']
zz = halos['z']
dd = np.multiply(distance_to([xc,yc,zc], [xx,yy,zz]), info.pboxsize)
return (dd < (dist_in_mpc))
import utils.match as mtc
import matplotlib.pyplot as plt
import pandas as pd
import tree.ctutils as ctu
import load
import tree.halomodule as hmo
r_cluster_scale = 2.5
mstar_min = 2e9
is_gal = True
nout_fi = 187
#Last merger
import matplotlib.pyplot as plt
nout_ini = 87 # recent merger after z =1.
# Load tree
is_gal = True
# all catalogs
verbose=False
#
most_recent_only = False
#clusters = ['39990', '36415', '10002', '05427', '36413', '01605']
clusters=['28928']
# final result arrays
gal_list=[]
mr_list=[]
nout_list=[]
#for cluster in clusters:
wdir = '/home/hoseung/Work/data/' + clusters[0] + '/'
#wdir = './'
alltrees = ctu.load_tree(wdir, is_gal=is_gal)
ft = alltrees.data[alltrees.data['nout'] == nout_fi]
#allgals = ft['id'][ft['m'] > 5e9]
info = load.info.Info(nout=nout_fi, base=wdir, load=True)
hhal = hmo.Halo(base=wdir, nout=nout_fi, halofinder='HM', info=info, load=True, is_gal=False)
i_center = np.where(hhal.data['np'] == max(hhal.data['np']))[0]
r_cluster = hhal.data['rvir'][i_center] * info.pboxsize
hh = hmo.Halo(base=wdir, nout=nout_fi, halofinder='HM', info=info, load=True, is_gal=is_gal)
i_center = np.where(hh.data['np'] == max(hh.data['np']))[0]
i_satellites = extract_halos_within(hh.data, i_center, info, dist_in_mpc = r_cluster * r_cluster_scale)
print("Total {0} galaxies \n{1} galaxies are selected".format(
len(i_satellites),sum(i_satellites)))
# halos found inside the cluster and have complete tree back to nout_ini
large_enough = hh.data['m'] > mstar_min
halo_list = hh.data['id'][i_satellites * large_enough]
final_ids = ctu.check_tree_complete(alltrees.data, 87, nout_fi, halo_list, idx=False) # 87: z = 1
final_gals_idx = [ft['id'][ft['Orig_halo_id'] == final_gal] for final_gal in final_ids]
#print(len(final_gals_idx), "halos left")
#ngals = len(final_gals_idx)
# Search for all galaxies that listed in the trees of final_gals
#all_gals_in_trees = all_gals(tt, final_gals_idx)
i_center = np.where(hhal.data['np'] == max(hhal.data['np']))[0]
print(hhal.data[['x', 'y', 'z']][i_center])
print(hh.data[['x', 'y', 'z']][i_center])
print(i_center)
final_ids
final_gals_idx
for idx in final_gals_idx:
#gal = cat['id']
if verbose: print("analyzing merger events of galaxy ", idx)
# Convert halo id to tree id
#idx = id2idx(alltrees.data, gal, 187)
#idx = cat['idx']
# full tree of a galaxy
atree = ctu.extract_a_tree(alltrees.data, idx)
# main progenitor tree
mtree = ctu.extract_main_tree(alltrees.data, idx)
x_nout = mtree['nout'].flatten()
x_nout = x_nout[x_nout > nout_ini]
mass_ratios_single = np.zeros(len(x_nout))
for i, nout in enumerate(x_nout):
# merger ratio
i_prgs = np.where(atree['desc_id'] == mtree['id'][i])[0]
# multiple prgs = merger
if len(i_prgs) > 1:
if verbose: print(" {} mergers at nout = {}".format(len(i_prgs), nout))
id_prgs = atree['id'][i_prgs]
mass_prgs = atree['m'][i_prgs]
m_r = mass_prgs / max(mass_prgs)
if verbose:
print(" Mass ratios : ", m_r)
mass_ratios_single[i] = max([mass_prgs[1:] / max(mass_prgs)][0])
else:
mass_ratios_single[i] = 0
ind_ok = np.where(mass_ratios_single > 0.1)[0]
#print("all ind_ok", ind_ok)
if len(ind_ok) > 0:
# if a satellite oscillates around the host,
# it could be identified as multiple mergers with short time interval.
# leave only the first passage / merger.
good =[]
for i in range(len(ind_ok)-1):
if ind_ok[i+1] > ind_ok[i] + 2:
good.append(ind_ok[i])
good.append(ind_ok[-1])
ind_ok = good
# if most_recent_only:
# ind_ok = max(ind_ok) # most recent
# print(" galaxy {}, Last nout {}, Merger ratio 1:{:.1f}".format(idx,
# x_nout[ind_ok],
# 1./mass_ratios_single[ind_ok]))
mr = 1./mass_ratios_single[ind_ok]
gal_list.append(idx)
mr_list.append(mr)
nout_list.append(x_nout[ind_ok])
"""
fig, ax = plt.subplots(1)
ax.scatter(nout_list, mr_list)
ax.set_title("last merger Vs final lambda")
ax.set_ylabel(r"$\lambda _R$")
ax.set_xlabel("Last merger")
for i,gal_name in enumerate(gal_list):
ax.text(nout_list[i]+0.5, mr_list[i]+0.1, str(gal_name))
plt.show()
"""
with open(wdir + 'merger_list.txt', 'w') as f:
# print("Major mergers in this cluster")
for gal, nout, mr in zip(gal_list, mr_list, nout_list):
for ni, mi in zip(nout, mr):
f.write("{} {} {} \n".format(gal, ni, mi))
idx
```
```
from google.colab import drive
drive.mount('/content/drive')
!pip install thop
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.backends.cudnn as cudnn
import math
import time
import torch.nn.init as init
import csv
import shutil
import pathlib
from os import remove
from os.path import isfile
from collections import OrderedDict
from google.colab import files
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
#config
dataset = 'cifar100'
workers = 8
batch_size = 128
epochs = 200
learning_rate = 0.1
momentum = 0.9
weight_decay = 5e-4
print_freq = 50
ckpt = '/content/drive/My Drive/Coding/nowdoing2/SQUEEZENET_result/checkpoint/'
def load_model(model, ckpt_file):
checkpoint = torch.load(ckpt_file, map_location=lambda storage, loc: storage)
try:
model.load_state_dict(checkpoint['model'])
except:
# create new OrderedDict that does not contain `module.`
new_state_dict = OrderedDict()
for k, v in checkpoint['model'].items():
if k[:7] == 'module.':
name = k[7:] # remove `module.`
else:
name = k[:]
new_state_dict[name] = v
model.load_state_dict(new_state_dict)
return checkpoint
def save_model(state, epoch, is_best):
dir_ckpt = pathlib.Path('/content/drive/My Drive/Coding/nowdoing2/SQUEEZENET_result/checkpoint/')
dir_path = dir_ckpt / dataset
dir_path.mkdir(parents=True, exist_ok=True)
model_file = dir_path / 'ckpt_epoch_{}.pth'.format(epoch)
torch.save(state, model_file)
if is_best:
shutil.copyfile(model_file, dir_path / 'ckpt_best.pth')
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, name, fmt=':f'):
self.name = name
self.fmt = fmt
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
class ProgressMeter(object):
def __init__(self, num_batches, *meters, prefix=""):
self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
self.meters = meters
self.prefix = prefix
def print(self, batch):
entries = [self.prefix + self.batch_fmtstr.format(batch)]
entries += [str(meter) for meter in self.meters]
print('\t'.join(entries))
def _get_batch_fmtstr(self, num_batches):
num_digits = len(str(num_batches // 1))
fmt = '{:' + str(num_digits) + 'd}'
return '[' + fmt + '/' + fmt.format(num_batches) + ']'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Pad(4),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32),
transforms.ToTensor(),
#normalize,
])
# CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR100(root='../../data/',
train=True,
transform=transform,
download=True)
test_dataset = torchvision.datasets.CIFAR100(root='../../data/',
train=False,
#transform=transforms.Compose([
transform=transforms.ToTensor(),
#normalize,
#])
)
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
val_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
return nn.Conv2d(in_channels, out_channels, kernel_size=3,
stride=stride, padding=1, bias=False)
__all__ = ['SqueezeNet', 'squeezenet1_0', 'squeezenet1_1']
model_urls = {
'squeezenet1_0': 'https://download.pytorch.org/models/squeezenet1_0-a815701f.pth',
'squeezenet1_1': 'https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth',
}
class Fire(nn.Module):
def __init__(self, inplanes, squeeze_planes,
expand1x1_planes, expand3x3_planes):
super(Fire, self).__init__()
self.inplanes = inplanes
self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1)
self.squeeze_activation = nn.ReLU(inplace=True)
self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes,
kernel_size=1)
self.expand1x1_activation = nn.ReLU(inplace=True)
self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes,
kernel_size=3, padding=1)
self.expand3x3_activation = nn.ReLU(inplace=True)
def forward(self, x):
x = self.squeeze_activation(self.squeeze(x))
out = torch.cat([
self.expand1x1_activation(self.expand1x1(x)),
self.expand3x3_activation(self.expand3x3(x))
], 1)
#print("Output Size: {}".format(out))
return out
class SqueezeNet(nn.Module):
def __init__(self, version='1_0', num_classes=100):
super(SqueezeNet, self).__init__()
self.num_classes = num_classes
if version == '1_0':
self.features = nn.Sequential(
nn.Conv2d(3, 96, kernel_size=3, stride=1),
nn.ReLU(inplace=True),
#nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(96, 16, 64, 64),
Fire(128, 16, 64, 64),
Fire(128, 32, 128, 128),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(256, 32, 128, 128),
Fire(256, 48, 192, 192),
Fire(384, 48, 192, 192),
Fire(384, 64, 256, 256),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(512, 64, 256, 256),
)
elif version == '1_1':
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(64, 16, 64, 64),
Fire(128, 16, 64, 64),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(128, 32, 128, 128),
Fire(256, 32, 128, 128),
nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True),
Fire(256, 48, 192, 192),
Fire(384, 48, 192, 192),
Fire(384, 64, 256, 256),
Fire(512, 64, 256, 256),
)
else:
# FIXME: Is this needed? SqueezeNet should only be called from the
# FIXME: squeezenet1_x() functions
# FIXME: This checking is not done for the other models
raise ValueError("Unsupported SqueezeNet version {version}:"
"1_0 or 1_1 expected".format(version=version))
# Final convolution is initialized differently from the rest
final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1)
self.classifier = nn.Sequential(
nn.Dropout(p=0.5),
final_conv,
nn.ReLU(inplace=True),
#nn.AdaptiveAvgPool2d((1, 1))
)
for m in self.modules():
if isinstance(m, nn.Conv2d):
if m is final_conv:
init.normal_(m.weight, mean=0.0, std=0.01)
else:
init.kaiming_uniform_(m.weight)
if m.bias is not None:
init.constant_(m.bias, 0)
def forward(self, x):
x = self.features(x)
x = self.classifier(x)
return torch.flatten(x, 1)
def _squeezenet(version, pretrained, progress, **kwargs):
model = SqueezeNet(version, **kwargs)
if pretrained:
arch = 'squeezenet' + version
state_dict = load_state_dict_from_url(model_urls[arch],
progress=progress)
model.load_state_dict(state_dict)
return model
def squeezenet1_0(pretrained=False, progress=True, **kwargs):
r"""SqueezeNet model architecture from the `"SqueezeNet: AlexNet-level
accuracy with 50x fewer parameters and <0.5MB model size"
<https://arxiv.org/abs/1602.07360>`_ paper.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _squeezenet('1_0', pretrained, progress, **kwargs)
def squeezenet1_1(pretrained=False, progress=True, **kwargs):
r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
return _squeezenet('1_1', pretrained, progress, **kwargs)
def adjust_learning_rate(optimizer, epoch, lr):
"""Sets the learning rate, decayed rate of 0.1 every epoch"""
#if epoch >= 150:
# lr = 0.01
#if epoch >=80:
# lr = 0.001
#if epoch >=120:
# lr = 0.0001
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)  # reshape (not view): the slice is non-contiguous for k > 1
res.append(correct_k.mul_(100.0 / batch_size))
return res
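# --- Aside (illustrative only, not part of the original training script):
# --- the same top-k logic as accuracy() above, sketched in plain NumPy.
import numpy as np
def topk_accuracy_np(scores, targets, topk=(1, 5)):
    """scores: (batch, n_classes) array of raw class scores;
    targets: (batch,) integer labels.
    Returns the top-k accuracies in percent, mirroring accuracy() above."""
    order = np.argsort(-scores, axis=1)  # class indices, best score first
    res = []
    for k in topk:
        # a sample is correct if its true label is among the k best guesses
        hits = (order[:, :k] == np.asarray(targets)[:, None]).any(axis=1)
        res.append(100.0 * hits.mean())
    return res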
best_acc1 = 0
model = squeezenet1_0()
print(model)
from thop import profile
input = torch.randn(1, 3, 32, 32)
flops, params = profile(model, inputs=(input, ))
from thop import clever_format
macs, params = clever_format([flops, params], "%.3f")
print("MACS: {}".format(macs))
print("Params: {}".format(params))
print(model)
criterion = nn.CrossEntropyLoss()
#optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
weight_decay=weight_decay, momentum=0.4,
nesterov=True)
start_epoch = 0
model = model.to(device)
criterion = criterion.to(device)
ckpt_dir = pathlib.Path('checkpoint')
ckpt_file = ckpt_dir / dataset / ckpt
train_time = 0.0
validate_time = 0.0
lr = learning_rate
max_grad_norm = 2.0
def train(train_loader, **kwargs):
epoch = kwargs.get('epoch')
model = kwargs.get('model')
criterion = kwargs.get('criterion')
optimizer = kwargs.get('optimizer')
batch_time = AverageMeter('Time', ':6.3f')
data_time = AverageMeter('Data', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(train_loader), batch_time, data_time,
losses, top1, top5, prefix="Epoch: [{}]".format(epoch))
# switch to train mode
model.train()
end = time.time()
running_loss = 0.0
for i, (input, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
input = input.to(device)
target = target.to(device)
#if args.cuda:
# target = target.cuda(non_blocking=True)
# compute output
output = model(input)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# compute gradient and do SGD step.
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
# measure elapsed time
batch_time.update(time.time() - end)
running_loss += loss.item()
if i % print_freq == 0:
progress.print(i)
end = time.time()
print('====> Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
epoch_loss = running_loss / len(train_loader)
print('====> Epoch loss {:.3f}'.format(epoch_loss))
return top1.avg, top5.avg, epoch_loss
def validate(val_loader, model, criterion):
batch_time = AverageMeter('Time', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(val_loader), batch_time, losses, top1, top5,
prefix='Test: ')
# switch to evaluate mode
model.eval()
total_loss = 0.0
with torch.no_grad():
end = time.time()
for i, (input, target) in enumerate(val_loader):
#if args.cuda:
# target = target.cuda(non_blocking=True)
input = input.to(device)
target = target.to(device)
# compute output
output = model(input)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
total_loss += loss.item()
if i % print_freq == 0:
progress.print(i)
end = time.time()
print('====> Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
total_loss = total_loss / len(val_loader)
return top1.avg, top5.avg, total_loss
start_time = time.time()
result_epoch = []
result_lr = []
result_train_avgtime = []
result_train_avgloss = []
result_train_avgtop1acc = []
result_train_avgtop5acc = []
result_test_avgtime = []
result_test_avgtop1acc = []
result_test_avgtop5acc = []
"""
Results
Epoch
learning rate
Training average loss per epoch
Training average time per epoch
Training average top1 accuracy per epoch
Training average top5 accuracy per epoch
Training average top1 error per epoch
Training average top5 error per epoch
Validation average loss per epoch
Validation average time per epoch
Validation average top1 accuracy per epoch
Validation average top5 accuracy per epoch
Validation average top1 error per epoch
Validation average top5 error per epoch
Best test accuracy
"""
for epoch in range(start_epoch, epochs):
adjust_learning_rate(optimizer, epoch, lr)
print('\n==> Epoch: {}, lr = {}'.format(
epoch, optimizer.param_groups[0]["lr"]))
result_epoch.append(epoch)
result_lr.append(optimizer.param_groups[0]["lr"])
# train for one epoch
print('===> [ Training ]')
start_time = time.time()
acc1_train, acc5_train, avgloss_train = train(train_loader,
epoch=epoch, model=model,
criterion=criterion, optimizer=optimizer)
result_train_avgloss.append(avgloss_train)
result_train_avgtop1acc.append(round(acc1_train.item(),2))
result_train_avgtop5acc.append(round(acc5_train.item(),2))
elapsed_time = time.time() - start_time
train_time += elapsed_time
print('====> {:.2f} seconds to train this epoch\n'.format(
elapsed_time))
result_train_avgtime.append(elapsed_time)
# evaluate on validation set
print('===> [ Validation ]')
start_time = time.time()
acc1_valid, acc5_valid, avgloss_valid = validate(val_loader, model, criterion)
result_test_avgtop1acc.append(acc1_valid.item())
result_test_avgtop5acc.append(acc5_valid.item())
elapsed_time = time.time() - start_time
validate_time += elapsed_time
print('====> {:.2f} seconds to validate this epoch\n'.format(
elapsed_time))
result_test_avgtime.append(elapsed_time)
import pandas as pd
"""
print(result_epoch)
print(result_lr)
print(result_train_avgloss)
print(result_train_avgtop1acc)
print(result_train_avgtop5acc)
print(result_test_avgtop1acc)
print(result_test_avgtop5acc)
"""
df = pd.DataFrame({
'Epoch': result_epoch,
'Learning rate': result_lr,
'Training avg loss': result_train_avgloss,
'Training avg top1 acc': result_train_avgtop1acc,
'Training avg top5 acc': result_train_avgtop5acc,
'Test avg top1 acc': result_test_avgtop1acc,
'Test avg top5 acc': result_test_avgtop5acc,
})
#df.to_csv('resnet_result.csv')
# remember best Acc@1 and save checkpoint
is_best = acc1_valid > best_acc1
best_acc1 = max(acc1_valid, best_acc1)
state = {'epoch': epoch + 1,
'model': model.state_dict(),
'optimizer': optimizer.state_dict()}
if (epoch + 1) % 20 == 0:
save_model(state, epoch, is_best)
df.to_csv('/content/drive/My Drive/Coding/nowdoing2/SQUEEZENET_result/squeeze_result.csv')
avg_train_time = train_time / (epochs-start_epoch)
avg_valid_time = validate_time / (epochs-start_epoch)
total_train_time = train_time + validate_time
print('====> average training time per epoch: {:,}m {:.2f}s'.format(
int(avg_train_time//60), avg_train_time%60))
print('====> average validation time per epoch: {:,}m {:.2f}s'.format(
int(avg_valid_time//60), avg_valid_time%60))
print('====> training time: {}h {}m {:.2f}s'.format(
int(train_time//3600), int((train_time%3600)//60), train_time%60))
print('====> validation time: {}h {}m {:.2f}s'.format(
int(validate_time//3600), int((validate_time%3600)//60), validate_time%60))
print('====> total training time: {}h {}m {:.2f}s'.format(
int(total_train_time//3600), int((total_train_time%3600)//60), total_train_time%60))
elapsed_time = time.time() - start_time  # NOTE: start_time is reset each epoch above, so this only covers the last validation; see total_train_time for the full run
print('====> total time: {}h {}m {:.2f}s'.format(
int(elapsed_time//3600), int((elapsed_time%3600)//60), elapsed_time%60))
# Test the model
model.eval()
result_test_acc = 0.0
with torch.no_grad():
correct = 0
total = 0
for images, labels in val_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
result_test_acc = 100 * correct / total
print('Accuracy of the model on the test images: {} %'.format(100 * correct / total))
acc = []
acc.append(result_test_acc)
df2 = pd.DataFrame({
'Accuracy': acc,
})
df2.to_csv('resnet_accuracy.csv')
#files.download('resnet_result.csv')
#files.download('resnet_accuracy.csv')
```
# CGC API Quickstart
This guide leads you through a simple RNA sequencing analysis using the CGC API, paralleling the GUI Quickstart. We have written this example in Python, but the concepts can be adapted to your preferred programming language. We encourage you to try this analysis yourself.
## Set project name, application, and AUTH_TOKEN
In the code below, please replace the AUTH_TOKEN string with your actual authentication token; otherwise the code will only mock you. The authentication token is associated with your account, and you can get it from the [Developer Dashboard](https://cgc.sbgenomics.com/account/#developer) after logging in. Remember to **keep your AUTH_TOKEN secure!**
```
# IMPORTS
import time as timer
from requests import request
import json
from urllib2 import urlopen
import os
# GLOBALS
FLAGS = {'targetFound': False, # target project exists in CGC project
'taskRunning': False, # task is still running
'startTasks': True # (False) create, but do NOT start tasks
}
TARGET_PROJECT = 'Quickstart_API' # project we will create in CGC (Settings > Project name in GUI)
TARGET_APP = 'RNA-seq Alignment - STAR for TCGA PE tar' # app to use
INPUT_EXT = 'tar.gz'
AUTH_TOKEN = 'AUTH_TOKEN' # TODO: replace 'AUTH_TOKEN' with yours here
```
## Functions & Classes
Since we are going to write the functions that interact with the API in Python, we'll prepare a function that converts the information we send and receive into JSON.
We will not only create things but also need to interact with them, so this demo also uses object-oriented programming; the class definition is below. Generally, the API calls either return a **list of things** (e.g. *myFiles* is plural) or a very **detailed description of one thing** (e.g. *myFile* is singular). The appropriate structure is created automatically in the response_to_fields() method.
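To make the list/detail distinction concrete, here are two illustrative response payloads (the field values are made up, not actual CGC data) and the test that `response_to_fields()` uses to tell them apart:

```python
# Illustrative payloads only; values are made up, not real CGC responses.

# A *list* response: many objects under 'items', with 'links' for pagination.
list_response = {
    'items': [
        {'id': 'p1', 'name': 'Quickstart_API'},
        {'id': 'p2', 'name': 'Other_project'},
    ],
    'links': [{'rel': 'next', 'href': 'https://example.org/projects?offset=2'}],
}

# A *detail* response: a single object with its fields at the top level.
detail_response = {'id': 'p1', 'name': 'Quickstart_API',
                   'description': 'A project created by the API Quickstart'}

def is_list_response(rd):
    # the same check response_to_fields() performs on the parsed JSON
    return 'items' in rd.keys()
```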
```
# FUNCTIONS
def api_call(path, method='GET', query=None, data=None, flagFullPath=False):
""" Translates all the HTTP calls to interface with the CGC
code adapted from the Seven Bridges platform API example
https://docs.sbgenomics.com/display/developerhub/Quickstart
flagFullPath is novel, added to smoothly resolve API pagination issues"""
data = json.dumps(data) if isinstance(data, dict) or isinstance(data,list) else None
base_url = 'https://cgc-api.sbgenomics.com/v2/'
headers = {
'X-SBG-Auth-Token': AUTH_TOKEN,
'Accept': 'application/json',
'Content-type': 'application/json',
}
if flagFullPath:
response = request(method, path, params=query, data=data, headers=headers)
else:
response = request(method, base_url + path, params=query, data=data, headers=headers)
response_dict = json.loads(response.content) if response.content else {}
if response.status_code / 100 != 2:
print response_dict['message']
raise Exception('Server responded with status code %s.' % response.status_code)
return response_dict
def print_project_details(proj, flag_new):
#Output details of the project
if flag_new:
print "Congratulations, you have made a new project. Details: \n"
else:
print "Your project exists. Details: \n"
print u'\u2022' + ("Name: %s \n") % (proj.name)
print u'\u2022' + ("ID: %s \n") % (proj.id)
print u'\u2022' + ("Description: %s \n") % (proj.description)
return None
def download_files(fileList):
# download a list of files from URLs (adapted from a few stackoverflow threads)
dl_dir = 'files/downloads/'
try: # make sure we have the download directory
os.stat(dl_dir)
except:
hello()
a = dl_dir.split('/')[:-1]
b = ''
for a_dir in a:
b = b + a_dir
os.mkdir(b)
b = b + '/'
del a, b, a_dir
for ii in range(1, len(fileList)): # skip first [0] entry, it is a text header
url = fileList[ii]
file_name = url.split('/')[-1]
file_name = file_name.split('?')[0]
file_name = file_name.split('%2B')[1]
u = urlopen(url)
f = open((dl_dir + file_name), 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 1024*1024
prior_percent = 0
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
if (file_size_dl * 100. / file_size) > (prior_percent+20):
print status + '\n'
prior_percent = (file_size_dl * 100. / file_size)
f.close()
def hello():
print("Is it me you're looking for?")
return True
#%% CLASSES
class API(object):
# making a class out of the api() function, adding other methods
def __init__(self, path, method='GET', query=None, data=None, flagFullPath=False):
self.flag = {'longList': False}
response_dict = api_call(path, method, query, data, flagFullPath)
self.response_to_fields(response_dict)
if self.flag['longList']:
self.long_list(response_dict, path, method, query, data)
def response_to_fields(self,rd):
if 'items' in rd.keys(): # get * {files, projects, tasks, apps} (object name plural)
if len(rd['items']) > 0:
self.list_read(rd)
else:
self.empty_read(rd)
else: # get details about ONE {file, project, task, app} (object name singular)
self.detail_read(rd)
def list_read(self,rd):
n = len(rd['items'])
keys = rd['items'][0].keys()
m = len(keys)
for jj in range(m):
temp = [None]*n
for ii in range(n):
temp[ii] = rd['items'][ii][keys[jj]]
setattr(self, keys[jj], temp)
if ('links' in rd.keys()) & (len(rd['links']) > 0):
self.flag['longList'] = True
def empty_read(self,rd): # in case an empty project is queried
self.href = []
self.id = []
self.name = []
self.project = []
def detail_read(self,rd):
keys = rd.keys()
m = len(keys)
for jj in range(m):
setattr(self, keys[jj], rd[keys[jj]])
def long_list(self, rd, path, method, query, data):
prior = rd['links'][0]['rel']
# Normally .rel[0] is the next, and .rel[1] is prior. If .rel[0] = prior, then you are at END_OF_LIST
keys = rd['items'][0].keys()
m = len(keys)
while prior == 'next':
rd = api_call(rd['links'][0]['href'], method, query, data, flagFullPath=True)
prior = rd['links'][0]['rel']
n = len(rd['items'])
for jj in range(m):
temp = getattr(self, keys[jj]) # possible speed bottleneck next three ops (allocating memory)
for ii in range(n):
temp.append(rd['items'][ii][keys[jj]])
setattr(self, keys[jj], temp)
if __name__ != "__main__":
exit() # prevent accidentally running script if loading file
# Did you remember to change the AUTH_TOKEN
if AUTH_TOKEN == 'AUTH_TOKEN':
print "You need to replace 'AUTH_TOKEN' string with your actual token. Please fix it."
exit()
```
## Create a project
Projects are the foundation of any analysis on the CGC. We can either use a project that has already been created, or we can use the API to create one. Here we will create a new project, but first check that it doesn't exist to show both methods. The *project name*, Pilot Fund *billing group*, and a project *description* will be sent in our API call.
```
# list all billing groups on your account
billingGroups = API('billing/groups')
# Select the first billing group, this is "Pilot_funds(USER_NAME)"
print billingGroups.name[0], \
'will be charged for this computation. Approximate price is $4 for example STAR RNA seq (n=1) \n'
# list all projects you are part of
existingProjects = API(path='projects') # make sure your project doesn't already exist
# set up the information for your new project
NewProject = {
'billing_group': billingGroups.id[0],
'description': "A project created by the API Quickstart",
'name': TARGET_PROJECT,
'tags': ['tcga']
}
# Check to make sure your project doesn't already exist on the platform
for ii,p_name in enumerate(existingProjects.name):
if TARGET_PROJECT == p_name:
FLAGS['targetFound'] = True
break
# Make a shiny, new project
if FLAGS['targetFound']:
myProject = API(path=('projects/' + existingProjects.id[ii])) # GET existing project details (we need them later)
else:
myProject = API(method='POST', data=NewProject, path='projects') # POST new project
# (re)list all projects, to check that new project posted
existingProjects = API(path='projects')
# GET new project details (we will need them later)
myProject = API(path=('projects/' + existingProjects.id[0])) # GET new project details (we need them later)
```
## Add files
Here we have multiple options for adding data to a project, but will only present:
* Copy files from existing project (API)
Here we will take advantage of the already created Quickstart project from the GUI tutorial. This code will look for our three input files from that project and copy them over.
Note: other options are available in docs (TODO: link)
```
for ii,p_id in enumerate(existingProjects.id):
if existingProjects.name[ii] == 'QuickStart':
filesToCopy = API(('files?limit=100&project=' + p_id))
break
# Don't make extra copies of files (loop through all files because we don't know what we want)
myFiles = API(('files?limit=100&project=' + myProject.id)) # files currently in project
for jj,f_name in enumerate(filesToCopy.name):
# Conditional is HARDCODED for RNA Seq STAR workflow
if f_name[-len(INPUT_EXT):] == INPUT_EXT or f_name[-len('sta'):] == 'sta' or \
f_name[-len('gtf'):] == 'gtf':
if f_name not in myFiles.name: # file currently not in project
api_call(path=(filesToCopy.href[jj] + '/actions/copy'), method='POST', \
data={'project': myProject.id, 'name': f_name}, flagFullPath=True)
```
## Add Applications or Workflows
There are more than 150 public apps available on the Seven Bridges CGC. Here we query all of them, then copy the target workflow to our project.
```
myFiles = API(('files?limit=100&project=' + myProject.id)) # GET files LIST, regardless of upload method
# Add workflow (copy from other project or GUI, not looping through all apps, we know exactly what we want)
allApps = API(path='apps?limit=100&visibility=public') # long function call, currently 183
myApps = API(path=('apps?limit=100&project=' + myProject.id))
if TARGET_APP not in allApps.name:
print "Target app (%s) does not exist in the public repository. Please double-check the spelling" % (TARGET_APP)
else:
ii = allApps.name.index(TARGET_APP)
if TARGET_APP not in myApps.name: # app not already in project
temp_name = allApps.href[ii].split('/')[-2] # copy app from public repository
api_call(path=('apps/' + allApps.project[ii] + '/' + temp_name + '/actions/copy'), \
method='POST', data={'project': myProject.id, 'name': TARGET_APP})
myApps = API(path=('apps?limit=100&project=' + myProject.id)) # update project app list
del allApps
```
## Build a file processing list
Most likely, we will only have one input file and two reference files in the project. However, if multiple input files were imported, this will create a batch of *single-input-single-output tasks*, one for each file. The code below builds the list of files.
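A note on the suffix test used in the cell below: `f_name[-len(ext):] == ext` is a manual version of Python's `str.endswith()`. The filename here is just an example taken from the upload list later in this notebook:

```python
f_name = 'G17498.TCGA-02-2483-01A-01R-1849-01.2.tar.gz'
INPUT_EXT = 'tar.gz'

# both tests flag the file as an input archive
manual = f_name[-len(INPUT_EXT):] == INPUT_EXT
builtin = f_name.endswith(INPUT_EXT)
```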
```
# Build .fileProcessing (inputs) and .fileIndex (references) lists [for workflow]
FileProcList = ['Files to Process']
Ind_GtfFile = None
Ind_FastaFile = None
for ii,f_name in enumerate(myFiles.name):
# this conditional is for 'RNA seq STAR alignment' in Quickstart_API. _Adapt_ appropriately for other workflows
if f_name[-len(INPUT_EXT):] == INPUT_EXT: # input file
FileProcList.append(ii)
elif f_name[-len('gtf'):] == 'gtf':
Ind_GtfFile = ii
elif f_name[-len('sta'):] == 'sta':
Ind_FastaFile = ii
```
## Build & Start tasks
Next we will iterate through the File Processing List (FileProcList) to generate one task for each input file. If the Flag *startTasks* is true, the tasks will start running immediately.
```
myTaskList = [None]
for ii,f_ind in enumerate(FileProcList[1:]): # Start at 1 because FileProcList[0] is a header
NewTask = {'description': 'APIs are awesome',
'name': ('batch_task_' + str(ii)),
'app': (myApps.id[0]), # ASSUMES only single workflow in project
'project': myProject.id,
'inputs': {
'genomeFastaFiles': { # .fasta reference file
'class': 'File',
'path': myFiles.id[Ind_FastaFile],
'name': myFiles.name[Ind_FastaFile]
},
'input_archive_file': { # File Processing List
'class': 'File',
'path': myFiles.id[f_ind],
'name': myFiles.name[f_ind]
},
# .gtf reference file, !NOTE: this workflow expects a _list_ for this input
'sjdbGTFfile': [
{
'class': 'File',
'path': myFiles.id[Ind_GtfFile],
'name': myFiles.name[Ind_GtfFile]
}
]
}
}
# Create the tasks, run if FLAGS['startTasks']
if FLAGS['startTasks']:
myTask = api_call(method='POST', data=NewTask, path='tasks/', query={'action': 'run'}) # task created and run
myTaskList.append(myTask['href'])
else:
myTask = api_call(method='POST', data=NewTask, path='tasks/') # task created and run
myTaskList.pop(0)
print "%i tasks have been created. \n" % (ii+1)
print "Enjoy a break, come back to us once you've gotten an email that tasks are done"
```
## Check task completion
These tasks may take a long time to complete; here are two ways to check on them:
* Wait for email confirmation. No additional code is needed; emails arrive whether the task was started from the GUI or the API.
* Poll the task status via the API, as in the cell below.
```
# if tasks were started, check if they've finished
for href in myTaskList:
# check on one task at a time, if any running, can not continue (no sense to query others)
print "Pinging CGC for task completion, will download summary files once all tasks completed."
FLAGS['taskRunning'] = True
while FLAGS['taskRunning']:
task = api_call(path=href, flagFullPath=True)
if task['status'] == 'COMPLETED':
FLAGS['taskRunning'] = False
elif task['status'] == 'FAILED': # NOTE: also leaving loop on "FAILED" statuses
print "Task failed, can not continue"
exit()
timer.sleep(600)
```
## EXTRA CELLS
From the Quickstart, these are the cells for:
* downloading files
* uploading local files
* setting file metadata
```
from urllib2 import urlopen
import os
def download_files(fileList):
# download a list of files from URLs (adapted from a few stackoverflow threads)
dl_dir = 'downloads/'
try: # make sure we have the download directory
os.stat(dl_dir)
except:
os.mkdir(dl_dir)
for ii in range(1, len(fileList)): # skip first [0] entry, it is a text header
url = fileList[ii]
file_name = url.split('/')[-1]
file_name = file_name.split('?')[0]
file_name = file_name.split('%2B')[1]
u = urlopen(url)
f = open((dl_dir + file_name), 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 1024*1024
prior_percent = 0
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
if (file_size_dl * 100. / file_size) > (prior_percent+20):
print status + '\n'
prior_percent = (file_size_dl * 100. / file_size)
f.close()
# Check which files have been generated (only taking small files to avoid long times)
myNewFiles = API(('files?project=' + myProject.id)) # calling again to see what was generated
dlList = ["links to file downloads"]
for ii, f_name in enumerate(myNewFiles.name):
# downloading only the summary files. Adapt for whichever files you need
if (f_name[-4:] == '.out'):
dlList.append(api_call(path=('files/' + myNewFiles.id[ii] + '/download_info'))['url'])
T0 = timer.time()
download_files(dlList)
print timer.time() - T0, "seconds download time"
# TODO: validate this
print "You need to install the command line uploader before proceeding"
ToUpload = ['G17498.TCGA-02-2483-01A-01R-1849-01.2.tar.gz','ucsc.hg19.fasta','human_hg19_genes_2014.gtf']
for ii in range(len(ToUpload)):
cmds = "cd ~/cgc-uploader; bin/cgc-uploader.sh -p 0f90eae7-2a76-4332-a233-6d20990189b7 " + \
"/Users/digi/PycharmProjects/cgc_API/toUpload/" + ToUpload[ii]
os.system(cmds) # TODO, uncomment
del cmds
myFiles = API(('files?project=' + myProject.id)) # GET files LIST
metadata = {
"name": "readme.md",
"library":"TEST",
"file_type": "fastq",
"sample": "example_human_Illumina",
"seq_tech": "Illumina",
"paired_end": "1",
'gender': "male",
"data_format": "awesome"
}
print myFiles.href[3] + '/metadata'
api_call(path=(myFiles.href[3] + '/metadata'), method='PUT', data = metadata, flagFullPath=True)
```
# Flow analysis
This notebook reproduces plots that appear in Figs 6 and 8 of the paper:
- Gilson M, Zamora-López G, Pallarés V, Adhikari MH, Senden M, Tauste Campo A, Mantini D, Corbetta M, Deco G, Insabato A (submitted) "Model-based whole-brain effective connectivity to study distributed cognition in health and disease", bioRxiv; https://doi.org/10.1101/531830.
The goal of this analysis is a network-oriented interpretation of the whole-brain dynamic model that is fitted to fMRI data (see the *MOU_EC_Estimation* notebook). By this we mean understanding how all the connections between regions of interest (effective connectivity) interact to generate patterns of activity propagation. We use the amount of activity that propagates from one ROI to another to define interactions between ROIs that take indirect paths in the network into account. In particular, we use our network measures to compare the two conditions: rest (with eyes open) and movie viewing/listening.
It uses the library *NetDynFlow* to calculate the flow, which is presented in:
- Gilson M, Kouvaris NE, Deco G, Zamora-López G (2018) "Framework based on communicability and flow to analyze complex network dynamics", Phys Rev E 97: 052301; https://doi.org/10.1103/PhysRevE.97.052301.
See also for an application to resting-state fMRI data of the related network analysis using dynamic communicability:
- Gilson M, Kouvaris NE, Deco G, Mangin J-F, Poupon C, Lefranc S, Rivière D, Zamora-López G (2019) "Network analysis of whole-brain fMRI dynamics: A new framework based on dynamic communicability", *Neuroimage* 201, 116007 https://doi.org/10.1016/j.neuroimage.2019.116007.
```
# Toggle to True to create directory and store results there
save_outputs = False
if save_outputs:
import os
res_dir = 'flow/'
if not os.path.exists(res_dir):
os.mkdir(res_dir)
# Import dependencies
from __future__ import division
import numpy as np
import scipy.stats as stt
import matplotlib.pyplot as plt
%matplotlib inline
## Check whether NetDynFlow is installed, otherwise install using pip
try:
import netdynflow
except:
! pip install git+https://github.com/mb-BCA/NetDynFlow.git@master
# Import NetDynFlow
import netdynflow as ndf
```
The following code loads the estimated parameters of the model, the Jacobian $J$ and the input covariance $\Sigma$ (a diagonal matrix), respectively `J` and `Sigma` in the code. The output of *DynFlow* is the flow tensor, a cube indexed by time x ROI x ROI: it describes the interaction from the first indexed ROI to the second indexed ROI after the corresponding integration time. These cubes are stored in a 5-dimensional array, indexed by subject x condition x time x ROI x ROI.
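As a reminder of the indexing convention, here is a minimal sketch (the shapes are made up for illustration; the real dimensions are set in the next cell):

```python
import numpy as np

# Illustrative shapes only: 2 subjects, 2 runs, 5 time points, 4 ROIs.
n_sub, n_run, nT, N = 2, 2, 5, 4
flow = np.zeros((n_sub, n_run, nT, N, N))

# flow[i_sub, i_run, t, i, j]: interaction from source ROI i to target
# ROI j at integration time t, for a given subject and run.
one_matrix = flow[0, 1, 3]           # the N x N flow matrix at one time point
time_course = flow[0, 1, :, 2, 0]    # flow from ROI 2 to ROI 0 across time
```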
```
# Set the network properties
param_dir = 'model_param_movie/'
n_sub = 22 # number of subjects
n_run = 5 # first 2 rest + last 3 movie
N = 66 # number of ROIs
# Load the ROI labels
ROI_labels = np.load('ROI_labels.npy')
# Load the Jacobian and the noise covariance.
# Created in the notebook 'MOU_EC_Estimation.ipynb'
J = np.load(param_dir + 'J_mod.npy')
Sigma = np.load(param_dir + 'Sigma_mod.npy')
# Create binary masks
mask_diag = np.eye(N, dtype=bool)  # np.bool is deprecated in recent NumPy
mask_offdiag = np.logical_not(mask_diag)
# Set the simulation properties
T = 40.0 # duration
dt = 1.0 # time step
vT = np.arange(0, T+dt*0.5, dt) # discrete simulation steps
nT = vT.size
# Calculate the dynamic flow, for every subject in all five sessions
flow = np.zeros([n_sub,n_run,nT,N,N]) # dynamic flow matrix
for i_sub in range(n_sub):
for i_run in range(n_run):
C_tmp = np.copy(J[i_sub,i_run,:,:])
C_tmp[mask_diag] = 0
tau_tmp = -1. / J[i_sub,i_run,:,:].diagonal().mean()
flow[i_sub,i_run,:,:,:] = ndf.DynFlow(C_tmp, tau_tmp,
Sigma[i_sub,i_run,:,:],
tmax=T,
timestep=dt)
if save_outputs:
np.save(res_dir + 'flow.npy', flow)
```
## Global analysis of flow
Firstly, we evaluate the total flow, which is the sum of flow interactions between all pairs of ROIs. To do so we use the function `TotalEvolution()` on the flow tensor, for each subject and condition.
This quantifies the total activity propagation in the network, as a proxy for global communication. Importantly, this measure is time dependent, as the interactions measure the activity propagation following a perturbation whose amplitude at each node is described by $\Sigma$. The results show a higher level of interactions in the network for movie than for rest.
In the following plots, each curve corresponds to the average over all subjects and the bars indicate the standard error of the mean over the subjects.
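As a rough sketch of what such a total-flow summary computes (assuming `TotalEvolution()` sums all pairwise entries at each time step; toy data, not the NetDynFlow implementation):

```python
import numpy as np

# Hedged sketch: sum all pairwise interactions at each time step of a
# flow tensor indexed time x ROI x ROI.
def total_evolution(flow_tensor):
    return flow_tensor.sum(axis=(1, 2))

flow = np.ones((3, 4, 4))  # 3 time points, 4 ROIs
print(total_evolution(flow))  # each time point sums 4*4 = 16 entries
```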
```
# Calculate total communicability, for each subject in all the five sessions
tot_tmp = np.zeros([n_sub,n_run,nT])
for i_sub in range(n_sub):
for i_run in range(n_run):
tot_tmp[i_sub,i_run,:] = ndf.TotalEvolution(flow[i_sub,i_run,:,:,:])
# Plot the results
plt.figure()
plt.errorbar(vT, tot_tmp[:,:2,:].mean(axis=(0,1)),
yerr=tot_tmp[:,:2,:].std(axis=(0,1)) / np.sqrt(n_sub),
color='k')
plt.errorbar(vT, tot_tmp[:,2:,:].mean(axis=(0,1)),
yerr=tot_tmp[:,2:,:].std(axis=(0,1)) / np.sqrt(n_sub),
color=[0.7,0.3,0])
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('Total flow', fontsize=14)
plt.legend(['Rest','Movie'], fontsize=14)
if save_outputs:
plt.savefig(res_dir + 'total_flow.png', format='png')
plt.show()
```
Secondly, we use the function `Diversity()` to evaluate the flow diversity, which measures the heterogeneity of the flow interactions in the network. In practice, it is calculated as a coefficient of variation: the standard deviation divided by the mean across all matrix elements. The diversity can be interpreted as a degree of inhomogeneity of the communication between ROIs.
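A minimal sketch of this coefficient-of-variation computation (toy matrix; the actual `Diversity()` may differ in which elements it includes):

```python
import numpy as np

# Coefficient of variation: standard deviation divided by the mean,
# computed across all elements of the flow matrix.
def diversity(flow_matrix):
    vals = flow_matrix.ravel()
    return vals.std() / vals.mean()

M = np.array([[1.0, 2.0], [3.0, 4.0]])
print(diversity(M))
```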
```
# Calculate the diversity, for every subject in all the five sessions
div_tmp = np.zeros([n_sub,n_run,nT])
for i_sub in range(n_sub):
for i_run in range(n_run):
div_tmp[i_sub,i_run,:] = ndf.Diversity(flow[i_sub,i_run,:,:,:])
# Visualise the results
plt.figure()
plt.errorbar(vT, div_tmp[:,:2,:].mean(axis=(0,1)),
yerr=div_tmp[:,:2,:].std(axis=(0,1)) / np.sqrt(n_sub),
color='k')
plt.errorbar(vT, div_tmp[:,2:,:].mean(axis=(0,1)),
yerr=div_tmp[:,2:,:].std(axis=(0,1)) / np.sqrt(n_sub),
color=[0.7,0.3,0])
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('Flow diversity', fontsize=14)
plt.legend(['Rest','Movie'], fontsize=12)
if save_outputs:
plt.savefig(res_dir + 'diversity_flow.png', format='png')
plt.show()
```
## ROI-based analysis of flow
Now we turn to the more refined scale of ROIs to examine the flow.
We also reorganize the list of ROIs to group them by anatomical regions, as done in the paper.
```
# Organize the ROIs into groups
# occipital ROIs
# CUN, PCAL, LING, LOCC, FUS
ind_occ = [3, 20, 12, 10, 6]
# temporal ROIs
# IT, MT, ST, TT, TP
ind_tmp = [8, 14, 29, 32, 31]
# parietal ROIs
# IP, SP, SMAR, BSTS, PCUN
ind_par = [7, 28, 30, 0, 24]
# central ROIs
# PARC, PREC, PSTC
ind_cnt = [15, 23, 21]
# frontal ROIs:
# FP, SF, RMF, CMF, LOF, MOF, POPE, PORB, PTRI
ind_frnt = [5, 27, 26, 2, 11, 13, 17, 18, 19]
# cingulate ROIs
# ENT, PARH, RAC, CAC, PC, ISTC
ind_cing = [4, 16, 25, 1, 22, 9]
# rearranged list of ROIs for right hemisphere
ind_aff = np.array(ind_occ + ind_tmp + ind_par + ind_cnt + ind_frnt + ind_cing,
dtype=int)
# get labels for homotopic regions
ROI_labels_sym = np.array(ROI_labels[:int(N/2)], dtype=str)
for i in range(int(N/2)):
# remove white space
ROI_labels_sym[i] = ROI_labels_sym[i].replace(' ', '')
# remove first letter (left or right)
ROI_labels_sym[i] = ROI_labels_sym[i].replace('r', '')
ROI_labels_sym = ROI_labels_sym[ind_aff]
```
We calculate the input and output flow for each ROI, using the function `NodeEvolution()` on the flow tensor.
```
# Calculate input and output flow for each node, in each subject for all 5 sessions
in_flow = np.zeros([n_sub,n_run,nT,N])
out_flow = np.zeros([n_sub,n_run,nT,N])
for i_sub in range(n_sub):
for i_run in range(n_run):
in_flow[i_sub,i_run,:,:], out_flow[i_sub,i_run,:,:] = \
ndf.NodeEvolution(flow[i_sub,i_run,:,:,:])
```
We first plot the input and output flow for the rest condition. The interpretation is as follows:
- strong input flow means that the ROI listens to the rest of the network (its activity is strongly affected by that of others);
- strong output flow means that the ROI broadcasts to the rest of the network (its activity strongly affects that of others).
The third plot can be used to classify ROIs as listeners or broadcasters (or both).
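A hypothetical sketch of such a classification at a fixed integration time (the threshold and role names are illustrative, not part of NetDynFlow):

```python
def classify_roles(in_flow, out_flow, tol=0.01):
    """Label each ROI by comparing its input and output flow."""
    roles = []
    for i, o in zip(in_flow, out_flow):
        if o - i > tol:
            roles.append("broadcaster")  # output dominates: affects others
        elif i - o > tol:
            roles.append("listener")     # input dominates: affected by others
        else:
            roles.append("both")
    return roles

print(classify_roles([0.08, 0.02, 0.05], [0.03, 0.07, 0.05]))
# → ['listener', 'broadcaster', 'both']
```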
```
# Rest averaged over subjects and sessions
rest_in_flow = in_flow[:,:2,:,:].mean(axis=(0,1))
rest_out_flow = out_flow[:,:2,:,:].mean(axis=(0,1))
# Group homotopic ROIs
sym_rest_in_flow = rest_in_flow[:,:N//2] + rest_in_flow[:,N//2:][:,::-1]
sym_rest_out_flow = rest_out_flow[:,:N//2] + rest_out_flow[:,N//2:][:,::-1]
plt.figure(figsize=(8,6))
plt.imshow(sym_rest_in_flow[:,ind_aff].T, origin='lower', cmap='Reds')
plt.clim(0, 0.1)
plt.colorbar(ticks=[0,0.05,0.1])
plt.yticks(range(int(N/2)), ROI_labels_sym, fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Input flow (rest)', fontsize=14)
plt.figure(figsize=(8,6))
plt.imshow(sym_rest_out_flow[:,ind_aff].T, origin='lower', cmap='Reds')
plt.clim(0, 0.1)
plt.colorbar(ticks=[0,0.05,0.1])
plt.yticks(range(int(N/2)), ROI_labels_sym, fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Output flow (rest)', fontsize=14)
plt.figure()
plt.errorbar(in_flow[:,:2,5,:].mean(axis=(0,1)),
out_flow[:,:2,5,:].mean(axis=(0,1)),
xerr = in_flow[:,:2,5,:].std(axis=(0,1)) / np.sqrt(2*n_sub),
yerr = out_flow[:,:2,5,:].std(axis=(0,1)) / np.sqrt(2*n_sub),
color='r',
linestyle='')
plt.plot([0,0.1], [0,0.1], '--k')
plt.xlabel('input flow (rest)', fontsize=14)
plt.ylabel('output flow (rest)', fontsize=14)
plt.show()
```
<br>
We then do the same plots for the movie condition.
```
# Movie averaged over subjects and sessions
movie_in_flow = in_flow[:,2:,:,:].mean(axis=(0,1))
movie_out_flow = out_flow[:,2:,:,:].mean(axis=(0,1))
# Group homotopic ROIs
sym_movie_in_flow = movie_in_flow[:,:N//2] + movie_in_flow[:,N//2:][:,::-1]
sym_movie_out_flow = movie_out_flow[:,:N//2] + movie_out_flow[:,N//2:][:,::-1]
plt.figure(figsize=(8,6))
plt.imshow(sym_movie_in_flow[:,ind_aff].T, origin='lower', cmap='Reds')
plt.clim(0, 0.1)
plt.colorbar(ticks=[0,0.05,0.1])
plt.yticks(range(int(N/2)), ROI_labels_sym ,fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Input flow (movie)', fontsize=14)
plt.figure(figsize=(8,6))
plt.imshow(sym_movie_out_flow[:,ind_aff].T, origin='lower', cmap='Reds')
plt.clim(0, 0.1)
plt.colorbar(ticks=[0,0.05,0.1])
plt.yticks(range(int(N/2)), ROI_labels_sym ,fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Output flow (movie)', fontsize=14)
plt.figure()
plt.errorbar(in_flow[:,2:,5,:].mean(axis=(0,1)),
out_flow[:,2:,5,:].mean(axis=(0,1)),
xerr = in_flow[:,2:,5,:].std(axis=(0,1)) / np.sqrt(3*n_sub),
yerr = out_flow[:,2:,5,:].std(axis=(0,1)) / np.sqrt(3*n_sub),
color='r',
linestyle='')
plt.plot([0,0.1], [0,0.1], '--k')
plt.xlabel('Input flow (movie)', fontsize=14)
plt.ylabel('Output flow (movie)', fontsize=14)
plt.show()
```
<br>
Finally, we plot **the difference** between the flow in the two conditions (average over the subjects), to see which ROIs change their listening/broadcasting roles from rest to movie.
The third plot shows that the main changes in movie compared to rest are ROIs increasing their broadcasting roles.
```
# Difference (movie minus rest) averaged over subjects and sessions
diff_in_flow = in_flow[:,2:,:,:].mean(axis=(0,1)) - \
in_flow[:,:2,:,:].mean(axis=(0,1))
diff_out_flow = out_flow[:,2:,:,:].mean(axis=(0,1)) - \
out_flow[:,:2,:,:].mean(axis=(0,1))
# Group homotopic ROIs
sym_diff_in_flow = diff_in_flow[:,:int(N/2)] + \
diff_in_flow[:,int(N/2):][:,::-1]
sym_diff_out_flow = diff_out_flow[:,:int(N/2)] + \
diff_out_flow[:,int(N/2):][:,::-1]
plt.figure(figsize=(8,6))
plt.imshow(sym_diff_in_flow[:,ind_aff].T, origin='lower', cmap='bwr')
plt.clim(-0.06, 0.06)
plt.colorbar(ticks=[-0.05,0,0.05])
plt.yticks(range(int(N/2)), ROI_labels_sym, fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Input flow (movie - rest)', fontsize=14)
if save_outputs:
plt.savefig(res_dir + 'input_flow_diff.png', format='png')
plt.figure(figsize=(8,6))
plt.imshow(sym_diff_out_flow[:,ind_aff].T, origin='lower', cmap='bwr')
plt.clim(-0.06, 0.06)
plt.colorbar(ticks=[-0.05,0,0.05])
plt.yticks(range(int(N/2)), ROI_labels_sym, fontsize=8)
plt.xlabel('Integration time (TR)', fontsize=14)
plt.ylabel('ROI label', fontsize=14)
plt.title('Output flow (movie - rest)', fontsize=14)
if save_outputs:
plt.savefig(res_dir + 'output_flow_diff.png', format='png')
plt.figure()
diff_in_flow_tmp = in_flow[:,2:,5,:].mean(axis=(1)) - \
in_flow[:,:2,5,:].mean(axis=(1))
diff_out_flow_tmp = out_flow[:,2:,5,:].mean(axis=(1)) - \
out_flow[:,:2,5,:].mean(axis=(1))
plt.errorbar(diff_in_flow_tmp.mean(0),
diff_out_flow_tmp.mean(0),
xerr = diff_in_flow_tmp.std(0) / np.sqrt(n_sub),
yerr = diff_out_flow_tmp.std(0) / np.sqrt(n_sub),
color='r',
linestyle='')
plt.plot([-0.05,0.05], [-0.05,0.05], '--k')
plt.xlabel('Input flow diff. (movie - rest)', fontsize=14)
plt.ylabel('Output flow diff. (movie - rest)', fontsize=14)
plt.show()
```
| github_jupyter |
In this kernel, I will help you speed up your preprocessing in two places:
1. replacing text in `question_text`:
replacing the entries of a dict, e.g.: x = x.replace("?", " ? ")
2. loading embeddings
Here is a quick summary of the results. If you want to see the details, follow the code :)
| No | Category | Type | DictLength | Run time (total) | Run time(s/w) |
|-----|-----------------------|--------|----------------|---------------------| ----------------- |
|1 | replace text | slow | 130 | 43.4 s | 0.3338 |
|2 | replace text | fast | 130 | 8.8 s | 0.0677 |
|3 | replace text | slow | 65 | 23.9 s | 0.3677 |
|4 | replace text | fast | 65 | 6.05 s | 0.0931 |
|5 | load embedding | slow | --- | 51.6 s | --- |
|6 | load embedding | fast | --- | 19.1 s | --- |
First, let's import the packages and load the data.
```
import numpy as np
import pandas as pd
!ls ../input/
train_org = pd.read_csv("../input/train.csv")
print("train shape:", train_org.shape)
train_org.head()
```
## 1. replace text in question_text
In this section, I will use `clean_text_fast` to speed things up. The original replace function is `clean_text_slow`.
The two functions are defined below:
```
def clean_text_slow(x, maxlen=None):
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
x = x.lower()
for punct in puncts[:maxlen]:
x = x.replace(punct, f' {punct} ')
return x
def clean_text_fast(x, maxlen=None):
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
x = x.lower()
for punct in puncts[:maxlen]:
if punct in x: # add this line
x = x.replace(punct, f' {punct} ')
return x
```
The `puncts` list contains 130 words and characters.
The `fast` function adds only one line: `if punct in x:`
Let's look at the run time first.
```
%%time
_ = train_org.question_text.apply(lambda x: clean_text_slow(x, maxlen=None))
%%time
_ = train_org.question_text.apply(lambda x: clean_text_fast(x, maxlen=None))
```
This is because the `in` membership test is faster than creating a new str object in Python.
In the `slow` function, we create a new str in every iteration, even when there is nothing to replace.
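A minimal sketch of the guarded pattern (the membership test skips the `str.replace` call when nothing needs replacing):

```python
def replace_guarded(x, puncts):
    # Only call str.replace (which may build a new string) when the
    # punctuation actually occurs in x.
    for punct in puncts:
        if punct in x:
            x = x.replace(punct, f' {punct} ')
    return x

print(replace_guarded("is this fast?", [',', '.', '?']))
```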
Next, let's use half the `puncts` list to measure the run time.
```
%%time
_ = train_org.question_text.apply(lambda x: clean_text_slow(x, maxlen=65))
%%time
_ = train_org.question_text.apply(lambda x: clean_text_fast(x, maxlen=65))
```
As we can see, the run time of the `slow` function **doubles** when the `puncts` list **doubles**.
The `fast` function only needs an extra **1/3** of its time when `puncts` **doubles**.
As our `puncts` list grows longer and longer, the gap keeps widening.
> **Do not create a new string object if you can use the `in` operation** in Python.
## 2. load embeddings.
Use hard-coded values to speed up loading.
```
def load_glove_slow(word_index, max_words=200000, embed_size=300):
EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE) if o.split(" ")[0] in word_index)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_words, embed_size))
for word, i in word_index.items():
if i >= max_words: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_glove_fast(word_index, max_words=200000, embed_size=300):
EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
emb_mean, emb_std = -0.005838499, 0.48782197
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_words, embed_size))
with open(EMBEDDING_FILE, 'r', encoding="utf8") as f:
for line in f:
word, vec = line.split(' ', 1)
if word not in word_index:
continue
i = word_index[word]
if i >= max_words:
continue
embedding_vector = np.asarray(vec.split(' '), dtype='float32')[:300]
if len(embedding_vector) == 300:
embedding_matrix[i] = embedding_vector
return embedding_matrix
```
In `load_glove_slow`, we recompute `emb_mean` and `emb_std` on every load.
In `load_glove_fast`, we hard-code `emb_mean` and `emb_std`, and avoid creating the intermediate `dict`.
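A sketch of the per-line parsing the fast loader does (toy line; the real GloVe file has 300 floats per word):

```python
import numpy as np

def parse_embedding_line(line):
    # Split off the word, then convert the rest to a float vector,
    # without first materialising a word -> vector dict.
    word, vec = line.rstrip("\n").split(" ", 1)
    return word, np.asarray(vec.split(" "), dtype="float32")

word, vec = parse_embedding_line("hello 0.1 0.2 0.3\n")
print(word, vec.shape)
```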
Let's create the `word_index`:
```
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_org.question_text.values)
```
run the functions:
```
%%time
_ = load_glove_slow(tokenizer.word_index, len(tokenizer.word_index) + 1)
%%time
_ = load_glove_fast(tokenizer.word_index, len(tokenizer.word_index) + 1)
```
The `fast` function takes **32.5** fewer seconds than the `slow` one.
**In my full code, the slow version runs for about 5 minutes in total, while the `fast` version takes only 40~50 seconds.**
> Try hard-coding values when it is possible.
# Get Data
```
# Get data from Github
import numpy as np
from math import sqrt
from sklearn.metrics import mean_squared_error
import pandas as pd
url = 'https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/confirmed28Aug.csv?token=ALQ7JJRH3WVGQHDY3LCIWVK7KT2DO'
confirmed = pd.read_csv(url, error_bad_lines=False)
url = 'https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/deceased28Aug.csv?token=ALQ7JJVH5ILEJ7WS3HPV4P27KT2F6'
death = pd.read_csv(url, error_bad_lines=False)
url = 'https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/recovered28Aug.csv?token=ALQ7JJT6AVYXZ5BL75Z7OYC7KT2IO'
recover = pd.read_csv(url, error_bad_lines=False)
```
## Population
```
population=pd.read_csv('https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/population.csv?token=ALQ7JJRNMBZ6KNF4B3NLSLS7KT2LE', sep=',', encoding='latin1')
confirmed=pd.merge(confirmed, population,how='left' ,on=['Province/State','Country/Region'])
death=pd.merge(death, population,how='left' ,on=['Province/State','Country/Region'])
recover=pd.merge(recover, population,how='left' ,on=['Province/State','Country/Region'])
# merge region
confirmed['region']=confirmed['Country/Region'].map(str)+'_'+confirmed['Province/State'].map(str)
death['region']=death['Country/Region'].map(str)+'_'+death['Province/State'].map(str)
recover['region']=recover['Country/Region'].map(str)+'_'+recover['Province/State'].map(str)
confirmed.iloc[:,:]
```
## Create Time Series + Plots
```
def create_ts(df):
ts=df
ts=ts.drop(['Province/State', 'Country/Region','Lat', 'Long','Population'], axis=1)
ts.set_index('region')
ts=ts.T
ts.columns=ts.loc['region']
ts=ts.drop('region')
ts=ts.fillna(0)
ts=ts.reindex(sorted(ts.columns), axis=1)
return (ts)
ts=create_ts(confirmed)
ts_d=create_ts(death)
ts_rec=create_ts(recover)
import matplotlib.pyplot as plt
p=ts.reindex(ts.max().sort_values(ascending=False).index, axis=1)
p.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Confirmed - Maharashtra',fontdict={'fontsize': 22}).figure.savefig('img1.png')
p.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Confirmed - Major areas',fontdict={'fontsize': 22}).figure.savefig('img2.png')
p_d=ts_d.reindex(ts.mean().sort_values(ascending=False).index, axis=1)
p_d.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Death - Maharashtra',fontdict={'fontsize': 22}).figure.savefig('img3.png')
p_d.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Death - Major areas',fontdict={'fontsize': 22}).figure.savefig('img4.png')
p_r=ts_rec.reindex(ts.mean().sort_values(ascending=False).index, axis=1)
p_r.iloc[:,:1].plot(marker='*',figsize=(10,4)).set_title('Daily Total Recoverd - Maharashtra',fontdict={'fontsize': 22}).figure.savefig('img5.png')
p_r.iloc[:,2:10].plot(marker='*',figsize=(10,4)).set_title('Daily Total Recoverd - Major areas',fontdict={'fontsize': 22}).figure.savefig('img6.png')
```
## Kalman Filter With R
```
conda install -m rpy2
import rpy2
%load_ext rpy2.ipython
%%R
install.packages('pracma', repos='http://cran.us.r-project.org')
install.packages('reshape', repos='http://cran.us.r-project.org')
install.packages('readr', repos='http://cran.us.r-project.org')
%%R
require(pracma)
require(Metrics)
require(readr)
all<- read_csv("https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/ts_C28.csv?token=ALQ7JJRHXS5QM4RYNPPYT4S7KT3YU")
all$X1<-NULL
date<-all[,1]
date[nrow(date) + 1,1] <-all[nrow(all),1]+1
pred_all<-NULL
for (n in 2:ncol(all)-1) {
Y<-ts(data = all[n+1], start = 1, end =nrow(all)+1)
sig_w<-0.01
w<-sig_w*randn(1,100) # acceleration which denotes the fluctuation (Q/R) rnorm(100, mean = 0, sd = 1)
sig_v<-0.01
v<-sig_v*randn(1,100)
t<-0.45
phi<-matrix(c(1,0,t,1),2,2)
gama<-matrix(c(0.5*t^2,t),2,1)
H<-matrix(c(1,0),1,2)
#Kalman
x0_0<-p0_0<-matrix(c(0,0),2,1)
p0_0<-matrix(c(1,0,0,1),2,2)
Q<-0.01
R<-0.01
X<-NULL
X2<-NULL
pred<-NULL
for (i in 0:nrow(all)) {
namp <-paste("p", i+1,"_",i, sep = "")
assign(namp, phi%*%(get(paste("p", i,"_",i, sep = "")))%*%t(phi)+gama%*%Q%*%t(gama))
namk <- paste("k", i+1, sep = "")
assign(namk,get(paste("p", i+1,"_",i, sep = ""))%*%t(H)%*%(1/(H%*%get(paste("p", i+1,"_",i, sep = ""))%*%t(H)+R)))
namx <- paste("x", i+1,"_",i, sep = "")
assign(namx,phi%*%get(paste("x", i,"_",i, sep = "")))
namE <- paste("E", i+1, sep = "")
assign(namE,Y[i+1]-H%*%get(paste("x", i+1,"_",i, sep = "")))
namx2 <- paste("x", i+1,"_",i+1, sep = "")
assign(namx2,get(paste("x", i+1,"_",i, sep = ""))+get(paste("k", i+1, sep = ""))%*%get(paste("E", i+1, sep = "")))
namp2 <- paste("p", i+1,"_",i+1, sep = "")
assign(namp2,(p0_0-get(paste("k", i+1, sep = ""))%*%H)%*%get(paste("p", i+1,"_",i, sep = "")))
X<-rbind(X,get(paste("x", i+1,"_",i,sep = ""))[1])
X2<-rbind(X2,get(paste("x", i+1,"_",i,sep = ""))[2])
if(i>2){
remove(list=(paste("p", i-1,"_",i-2, sep = "")))
remove(list=(paste("k", i-1, sep = "")))
remove(list=(paste("E", i-1, sep = "")))
remove(list=(paste("p", i-2,"_",i-2, sep = "")))
remove(list=(paste("x", i-1,"_",i-2, sep = "")))
remove(list=(paste("x", i-2,"_",i-2, sep = "")))}
}
pred<-NULL
pred<-cbind(Y,X,round(X2,4))
pred<-as.data.frame(pred)
pred$region<-colnames(all[,n+1])
pred$date<-date$date
pred$actual<-rbind(0,(cbind(pred[2:nrow(pred),1])/pred[1:nrow(pred)-1,1]-1)*100)
pred$predict<-rbind(0,(cbind(pred[2:nrow(pred),2])/pred[1:nrow(pred)-1,2]-1)*100)
pred$pred_rate<-(pred$X/pred$Y-1)*100
pred$X2_change<-rbind(0,(cbind(pred[2:nrow(pred),3]-pred[1:nrow(pred)-1,3])))
pred_all<-rbind(pred_all,pred)
}
pred_all<-cbind(pred_all[,4:5],pred_all[,1:3])
names(pred_all)[5]<-"X2"
pred_all=pred_all[with( pred_all, order(region, date)), ]
pred_all<-pred_all[,3:5]
p=%R pred_all
############ Merge R output due to package problem
t=ts
t=t.stack().reset_index(name='confirmed')
t.columns=['date', 'region','confirmed']
t['date']=pd.to_datetime(t['date'] ,errors ='coerce')
t=t.sort_values(['region', 'date'])
temp=t.iloc[:,:3]
temp=temp.reset_index(drop=True)
for i in range(1,len(t)+1):
if(temp.iloc[i,1] != temp.iloc[i-1,1]):  # compare region labels by value, not identity
temp.loc[len(temp)+1] = [temp.iloc[i-1,0]+ pd.DateOffset(1),temp.iloc[i-1,1], 0]
temp=temp.sort_values(['region', 'date'])
p.set_index(temp.index,inplace=True)
#temp=temp.reset_index(drop=True)
temp['Y']=p['Y']
temp['X']=p['X']
temp['X2']=p['X2']
```
## Pre-Processing Data for ML Model
```
w=pd.read_csv('https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/w_hist.csv?token=ALQ7JJWOZSPOIZUDSGTNFMC7KT35C', sep=',', encoding='latin1')
w['date']=pd.to_datetime(w['date'])
#w['date']=pd.to_datetime(w['date'],errors ='coerce')
w_forecast=pd.read_csv('https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/w_fore.csv?token=ALQ7JJWNHNWK4HVJ3YLZGW27KT4AS', sep=',', encoding='latin1')
w_forecast['date']=pd.to_datetime(w_forecast['date'])
```
## Build Train Set Data Structure
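The cell below builds the lag features row by row with `iloc`; as a hedged illustration (toy values, same column names as the notebook), the 1-day features can also be sketched with vectorized pandas operations:

```python
import pandas as pd

# Vectorized sketch of the 1-day lag features built below.
df = pd.DataFrame({"confirmed": [10.0, 15.0, 30.0, 60.0]})
df["last_day"] = df["confirmed"].shift(1)
df["1_day_change"] = df["confirmed"].shift(1) - df["confirmed"].shift(2)
df["1_day_change_rate"] = (df["confirmed"].shift(1) / df["confirmed"].shift(2) - 1) * 100
print(df.fillna(0))
```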
```
t=ts
t=t.stack().reset_index(name='confirmed')
t.columns=['date', 'region','confirmed']
t['date']=pd.to_datetime(t['date'] ,errors ='coerce')
t=t.sort_values(['region', 'date'])
# Add 1 Future day for prediction
t=t.reset_index(drop=True)
for i in range(1,len(t)+1):
if(t.iloc[i,1] != t.iloc[i-1,1]):  # compare region labels by value, not identity
t.loc[len(t)+1] = [t.iloc[i-1,0]+ pd.DateOffset(1),t.iloc[i-1,1], 0]
t=t.sort_values(['region', 'date'])
t=t.reset_index(drop=True)
t['1_day_change']=t['3_day_change']=t['7_day_change']=t['1_day_change_rate']=t['3_day_change_rate']=t['7_day_change_rate']=t['last_day']=0
for i in range(1,len(t)):
if(t.iloc[i,1] == t.iloc[i-2,1]):  # compare region labels by value, not identity
t.iloc[i,3]=t.iloc[i-1,2]-t.iloc[i-2,2]
t.iloc[i,6]=(t.iloc[i-1,2]/t.iloc[i-2,2]-1)*100
t.iloc[i,9]=t.iloc[i-1,2]
if(t.iloc[i,1] == t.iloc[i-4,1]):
t.iloc[i,4]=t.iloc[i-1,2]-t.iloc[i-4,2]
t.iloc[i,7]=(t.iloc[i-1,2]/t.iloc[i-4,2]-1)*100
if(t.iloc[i,1] == t.iloc[i-8,1]):
t.iloc[i,5]=t.iloc[i-1,2]-t.iloc[i-8,2]
t.iloc[i,8]=(t.iloc[i-1,2]/t.iloc[i-8,2]-1)*100
t=t.fillna(0)
t=t.merge(temp[['date','region', 'X']],how='left',on=['date','region'])
t=t.rename(columns = {'X':'kalman_prediction'})
t=t.replace([np.inf, -np.inf], 0)
t['kalman_prediction']=round(t['kalman_prediction'])
train=t.merge(confirmed[['region','Population']],how='left',on='region')
train=train.rename(columns = {'Population':'population'})
# train['population']=train['population'].str.replace(r" ", '')
# train['population']=train['population'].str.replace(r",", '')
train['population']=train['population'].fillna(1)
train['population']=train['population'].astype('int32')
train['infected_rate'] =train['last_day']/train['population']*10000
train=train.merge(w,how='left',on=['date','region'])
train=train.sort_values(['region', 'date'])
### fill missing weather
for i in range(0,len(train)):
if(np.isnan(train.iloc[i,13])):
if(train.iloc[i,1] == train.iloc[i-1,1]):
train.iloc[i,13]=train.iloc[i-1,13]
train.iloc[i,14]=train.iloc[i-1,14]
```
## Kalman 1 day Prediction with Evaluation
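The evaluation below reports three standard error metrics (MSE, RMSE, MAE). A quick toy-number reminder of the formulas used in the evaluation loop:

```python
import numpy as np
from math import sqrt

# Toy actual vs. predicted counts for one region (illustrative values).
actual = np.array([100.0, 120.0, 150.0])
pred = np.array([90.0, 125.0, 160.0])

mse = np.power(actual - pred, 2).mean()   # mean squared error
rmse = sqrt(mse)                          # root mean squared error
mae = np.abs(actual - pred).mean()        # mean absolute error
print(mse, rmse, mae)
```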
```
# Select region
region='India_Maharashtra'
evaluation=pd.DataFrame(columns=['region','mse','rmse','mae'])
place=0
for i in range(1,len(t)):
if(t.iloc[i,1] != t.iloc[i-1,1]):  # compare region labels by value, not identity
ex=np.array(t.iloc[i-len(ts):i,10])
pred=np.array(t.iloc[i-len(ts):i,2])
evaluation=evaluation.append({'region': t.iloc[i-1,1], 'mse': np.power((ex - pred),2).mean(),'rmse':sqrt(mean_squared_error(ex,pred)),'mae': (abs(ex - pred)).mean()}, ignore_index=True)
p=t[t['region']==region][['date','region','confirmed','kalman_prediction']]
# p=p.rename(columns = {'confirmed':'recoverd'})
p.iloc[len(p)-1,2]=None
p=p.set_index(['date'])
p.iloc[:,1:].plot(marker='o',figsize=(16,8)).set_title('Kalman Prediction - Select Region to Change - {}'.format(p.iloc[0,0])).figure.savefig('img7.png')
print(evaluation[evaluation['region']==p.iloc[0,0]])
# print(evaluation)
p=t[t['region']==region][['date','region','confirmed','kalman_prediction']]
p.tail(10)
```
## Correlation Matrix And Temperature
```
from string import ascii_letters
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
# Compute the correlation matrix
corr = train.iloc[:,2:].corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.9, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5}).figure.savefig('img8.png')
print ('Correlation Matrix')
print('Correlation To Confirmed')
print (corr.confirmed)
import matplotlib.pyplot as plt
p=train[['date','region','min','max']].set_index('date')
p=p[p['region']=='India_Maharashtra']
p.iloc[:,:].plot(marker='*',figsize=(12,4),color=['#19303f','#cccc00']).set_title('Daily Min/Max Temperature - Maharashtra',fontdict={'fontsize': 20}).figure.savefig('img9.png')
avg_temp=train[['region','confirmed','min','max']] # from 17-02-20
avg_temp=avg_temp.groupby(by='region').mean()
avg_temp=avg_temp.sort_values('confirmed',ascending=False)
print( 'Most infected Areas Avg Temperature')
print(avg_temp.iloc[:10,1:])
```
## Kalman X Days Ahead Prediction
```
%%R
install.packages('reshape', repos='http://cran.us.r-project.org')
%%R
require(pracma)
require(Metrics)
require(readr)
library(reshape)
all<- read_csv("https://raw.githubusercontent.com/raghav7203/covid19-raw-data/master/28AugCSVs/ts_C28.csv?token=ALQ7JJXV3ONARHJ3QQO6MES7KT6AG")
all$X1<-NULL
for (i in 1:30) { # Set i days prediction
if( i>1) {all<-all_new}
date<-all[,1]
date[nrow(date) + 1,1] <-all[nrow(all),1]+1
pred_all<-NULL
for (n in 2:ncol(all)-1) {
Y<-ts(data = all[n+1], start = 1, end =nrow(all)+1)
sig_w<-0.01
w<-sig_w*randn(1,100) # acceleration which denotes the fluctuation (Q/R) rnorm(100, mean = 0, sd = 1)
sig_v<-0.01
v<-sig_v*randn(1,100)
t<-0.45
phi<-matrix(c(1,0,t,1),2,2)
gama<-matrix(c(0.5*t^2,t),2,1)
H<-matrix(c(1,0),1,2)
#Kalman
x0_0<-p0_0<-matrix(c(0,0),2,1)
p0_0<-matrix(c(1,0,0,1),2,2)
Q<-0.01
R<-0.01
X<-NULL
X2<-NULL
pred<-NULL
for (i in 0:nrow(all)) {
namp <-paste("p", i+1,"_",i, sep = "")
assign(namp, phi%*%(get(paste("p", i,"_",i, sep = "")))%*%t(phi)+gama%*%Q%*%t(gama))
namk <- paste("k", i+1, sep = "")
assign(namk,get(paste("p", i+1,"_",i, sep = ""))%*%t(H)%*%(1/(H%*%get(paste("p", i+1,"_",i, sep = ""))%*%t(H)+R)))
namx <- paste("x", i+1,"_",i, sep = "")
assign(namx,phi%*%get(paste("x", i,"_",i, sep = "")))
namE <- paste("E", i+1, sep = "")
assign(namE,Y[i+1]-H%*%get(paste("x", i+1,"_",i, sep = "")))
namx2 <- paste("x", i+1,"_",i+1, sep = "")
assign(namx2,get(paste("x", i+1,"_",i, sep = ""))+get(paste("k", i+1, sep = ""))%*%get(paste("E", i+1, sep = "")))
namp2 <- paste("p", i+1,"_",i+1, sep = "")
assign(namp2,(p0_0-get(paste("k", i+1, sep = ""))%*%H)%*%get(paste("p", i+1,"_",i, sep = "")))
X<-rbind(X,get(paste("x", i+1,"_",i,sep = ""))[1])
X2<-rbind(X2,get(paste("x", i+1,"_",i,sep = ""))[2])
if(i>2){
remove(list=(paste("p", i-1,"_",i-2, sep = "")))
remove(list=(paste("k", i-1, sep = "")))
remove(list=(paste("E", i-1, sep = "")))
remove(list=(paste("p", i-2,"_",i-2, sep = "")))
remove(list=(paste("x", i-1,"_",i-2, sep = "")))
remove(list=(paste("x", i-2,"_",i-2, sep = "")))}
}
pred<-NULL
pred<-cbind(Y,X,round(X2,4))
pred<-as.data.frame(pred)
pred$region<-colnames(all[,n+1])
pred$date<-date$date
pred$actual<-rbind(0,(cbind(pred[2:nrow(pred),1])/pred[1:nrow(pred)-1,1]-1)*100)
pred$predict<-rbind(0,(cbind(pred[2:nrow(pred),2])/pred[1:nrow(pred)-1,2]-1)*100)
pred$pred_rate<-(pred$X/pred$Y-1)*100
pred$X2_change<-rbind(0,(cbind(pred[2:nrow(pred),3]-pred[1:nrow(pred)-1,3])))
pred_all<-rbind(pred_all,pred)
}
pred_all<-cbind(pred_all[,4:5],pred_all[,1:3])
names(pred_all)[5]<-"X2"
pred_all<-pred_all[,1:5]
pred_all_today=pred_all[with( pred_all, order(region, date)), ]
all_new=all
#all_new[nrow(all_new),1]<-all_new[nrow(all),1]+1
temp<-with(pred_all_today, pred_all_today[date == all[nrow(all),1]+1, ])
temp<-cbind(temp[,1:2],temp[,4])
temp2<-reshape(temp, direction = "wide", idvar = "date", timevar = "region")
rand_num<-runif(ncol(temp2)-1, 0.9, 1.05)
temp2[,2:ncol(temp2)]<-temp2[,2:ncol(temp2)]*rand_num
colnames(temp2)=colnames(all_new)
all_new<-rbind(all_new,temp2)
all_new[,2:ncol(all_new)]<-round(all_new[,2:ncol(all_new)])
for (i in 2:ncol(all_new)) {
all_new[nrow(all_new),i]=max(all_new[nrow(all_new)-1,i],all_new[nrow(all_new),i])}
}
all_new=%R all_new
all_new['date']=pd.to_datetime(all_new['date'],unit='d')
# Select region
region = ['date', "India_Maharashtra"]
p_kalman=all_new[region]
#p=all_new
#p.iloc[len(p)-1,2]=None
p_kalman=p_kalman.set_index(['date'])
p_kalman.iloc[:,:].plot(marker='o',figsize=(24,14)).set_title('Kalman Prediction {}'.format(region[1]), fontdict={'fontsize': 22})
prediction_one_month = p_kalman.tail(30)
prediction_two_weeks = prediction_one_month.head(15)
prediction_two_weeks
```
```
# Adds link to the scripts folder
import sys
import os
sys.path.append("../../scripts/")
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
from trajectory import Trajectory, load_trajectory_dict
from hivevo.patients import Patient
import filenames
import copy
from activity import get_average_activity
from proba_fix import get_proba_fix
```
# Activity plots
## Functions
Format of the dictionaries: `trajectories[region][rev/non_rev/syn/non_syn]`
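As a hypothetical illustration of this layout (empty lists stand in for the actual `Trajectory` objects; the region and key names follow the functions below):

```python
# Nested dictionary: region -> mutation class -> list of trajectories.
trajectories = {
    region: {"rev": [], "non_rev": [], "syn": [], "non_syn": []}
    for region in ["env", "pol", "gag", "all"]
}
print(list(trajectories["env"].keys()))  # → ['rev', 'non_rev', 'syn', 'non_syn']
```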
```
def get_mean_in_time(trajectories, nb_bins=20, freq_range=[0.4, 0.6]):
    """
    Computes the mean frequency in time of a set of trajectories from the point they are seen in the freq_range window.
    Returns the middle of the time bins and the computed frequency mean.
    """
    # Create bins and select trajectories going through the freq_range
    time_bins = np.linspace(-677, 3000, nb_bins)
    trajectories = [traj for traj in trajectories if np.sum(np.logical_and(
        traj.frequencies >= freq_range[0], traj.frequencies < freq_range[1]), dtype=bool)]

    # Offset trajectories to set t=0 at the point they are seen in the freq_range and add all the frequencies / times
    # to arrays for later computation of the mean
    t_traj = np.array([])
    f_traj = np.array([])
    for traj in trajectories:
        idx = np.where(np.logical_and(traj.frequencies >= freq_range[0],
                                      traj.frequencies < freq_range[1]))[0][0]
        traj.t = traj.t - traj.t[idx]
        t_traj = np.concatenate((t_traj, traj.t))
        f_traj = np.concatenate((f_traj, traj.frequencies))

    # Binning of all the data in the time bins
    filtered_fixed = [traj for traj in trajectories if traj.fixation == "fixed"]
    filtered_lost = [traj for traj in trajectories if traj.fixation == "lost"]
    freqs, fixed, lost = [], [], []
    for ii in range(len(time_bins) - 1):
        freqs = freqs + [f_traj[np.logical_and(t_traj >= time_bins[ii], t_traj < time_bins[ii + 1])]]
        fixed = fixed + [len([traj for traj in filtered_fixed if traj.t[-1] < time_bins[ii]])]
        lost = lost + [len([traj for traj in filtered_lost if traj.t[-1] < time_bins[ii]])]

    # Computation of the mean in each bin: active trajectories contribute their current frequency,
    # fixed contribute 1 and lost contribute 0
    mean = []
    for ii in range(len(freqs)):
        mean = mean + [np.sum(freqs[ii]) + fixed[ii]]
        mean[-1] /= (len(freqs[ii]) + fixed[ii] + lost[ii])

    nb_active = [len(freq) for freq in freqs]
    nb_dead = [fixed[ii] + lost[ii] for ii in range(len(fixed))]
    return 0.5 * (time_bins[1:] + time_bins[:-1]), mean, nb_active, nb_dead
def make_mean_in_time_dict(trajectories):
    regions = ["env", "pol", "gag", "all"]
    means = {}
    freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
    times = []
    for freq_range in freq_ranges:
        means[str(freq_range)] = {}
        for region in regions:
            means[str(freq_range)][region] = {}
            for key in trajectories[region].keys():
                times, means[str(freq_range)][region][key], _, _ = get_mean_in_time(
                    trajectories[region][key], freq_range=freq_range)
    return times, means, freq_ranges
def make_pfix(nb_bin=8):
    # Note: relies on the global `trajectories` dict loaded in a later cell
    regions = ["env", "pol", "gag", "all"]
    pfix = {}
    for region in regions:
        pfix[region] = {}
        for key in trajectories[region].keys():
            tmp_freq_bin, tmp_proba, tmp_err = get_proba_fix(trajectories[region][key], nb_bin=nb_bin)
            pfix[region][key] = {"freq_bin": tmp_freq_bin, "proba": tmp_proba, "error": tmp_err}
    return pfix
```
## Mean in time
```
trajectories = load_trajectory_dict("../../trajectory_dict")
times, means, freq_ranges = make_mean_in_time_dict(trajectories)
pfix = make_pfix(nb_bin=8)
def plot_mean(times, means, savefig=False, fontsize=16):
    freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
    colors = ["r", "b", "g"]
    plt.figure(figsize=(14, 10))
    for ii, freq_range in enumerate(freq_ranges):
        plt.plot(times, means[str(freq_range)]["all"]["rev"], "-", color=colors[ii], label="rev")
        plt.plot(times, means[str(freq_range)]["all"]["non_rev"], "--", color=colors[ii], label="non_rev")
    plt.xlabel("Time [days]", fontsize=fontsize)
    plt.ylabel("Frequency", fontsize=fontsize)
    plt.ylim([-0.03, 1.03])
    plt.grid()
    plt.legend(fontsize=fontsize)
    if savefig:
        plt.savefig(savefig + ".pdf", format="pdf")
    plt.show()
plot_mean(times, means)
```
# Plot 2
```
fontsize=16
grid_alpha = 0.5
colors = ["C0","C1","C2","C4"]
markersize=12
freq_ranges = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
regions = ["env","pol","gag"]
lines = ["-","--",":"]
fig, axs = plt.subplots(ncols=2, nrows=1, figsize=(14,7), sharey=True)
# Plot left
for ii, freq_range in enumerate(freq_ranges):
    for jj, region in enumerate(regions):
        axs[0].plot(times, means[str(freq_range)][region]["non_syn"], lines[jj], color=colors[ii])
line1, = axs[0].plot([0], [0], "k-")
line2, = axs[0].plot([0], [0], "k--")
line3, = axs[0].plot([0], [0], "k:")
line4, = axs[0].plot([0], [0], "-", color=colors[0])
line5, = axs[0].plot([0], [0], "-", color=colors[1])
line6, = axs[0].plot([0], [0], "-", color=colors[2])
axs[0].set_xlabel("Time [days]", fontsize=fontsize)
axs[0].set_ylabel("Frequency", fontsize=fontsize)
axs[0].set_ylim([-0.03, 1.03])
axs[0].grid(alpha=grid_alpha)
axs[0].legend([line1, line2, line3, line4, line5, line6], regions + ["[0.2, 0.4]", "[0.4, 0.6]", "[0.6, 0.8]"], fontsize=fontsize, ncol=2)
# Plot right
for ii, region in enumerate(regions):
    axs[1].plot(pfix[region]["non_syn"]["freq_bin"], pfix[region]["non_syn"]["proba"], lines[ii], color=colors[3])
axs[1].plot([0,1], [0,1], "k--")
axs[1].set_xlabel("Initial frequency", fontsize=fontsize)
axs[1].set_ylabel("Fixation probability", fontsize=fontsize)
axs[1].set_ylim([-0.03, 1.03])
axs[1].set_xlim([-0.03, 1.03])
axs[1].grid(alpha=grid_alpha)
plt.legend(["env", "pol", "gag", "neutral expectation"], fontsize=fontsize, loc="lower right")
plt.tight_layout()
plt.show()
print(pfix["pol"]["non_syn"]["proba"])
print(pfix["pol"]["non_syn"]["freq_bin"])
```
# Objects and Data Structures Assessment Test
## Test your knowledge.
**Answer the following questions**
Write a brief description of all the following Object Types and Data Structures we've learned about:
**For the full answers, review the Jupyter notebook introductions of each topic!**
[Numbers](http://nbviewer.ipython.org/github/jmportilla/Complete-Python-Bootcamp/blob/master/Numbers.ipynb)
[Strings](http://nbviewer.ipython.org/github/jmportilla/Complete-Python-Bootcamp/blob/master/Strings.ipynb)
[Lists](http://nbviewer.ipython.org/github/jmportilla/Complete-Python-Bootcamp/blob/master/Lists.ipynb)
[Tuples](http://nbviewer.ipython.org/github/jmportilla/Complete-Python-Bootcamp/blob/master/Tuples.ipynb)
[Dictionaries](http://nbviewer.ipython.org/github/jmportilla/Complete-Python-Bootcamp/blob/master/Dictionaries.ipynb)
## Numbers
Write an equation that uses multiplication, division, an exponent, addition, and subtraction and that is equal to 100.25.
Hint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25
```
# Your answer is probably different
(60 + (10 ** 2) / 4 * 7) - 134.75
```
Answer these 3 questions without typing code. Then type code to check your answer.
What is the value of the expression 4 * (6 + 5)
What is the value of the expression 4 * 6 + 5
What is the value of the expression 4 + 6 * 5
```
4 * (6 + 5)
4 * 6 + 5
4 + 6 * 5
```
What is the *type* of the result of the expression 3 + 1.5 + 4?
**Answer: Floating Point Number**
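You can confirm this in a cell with the built-in `type` function:

```python
# Mixing an int with a float promotes the result to float
result = 3 + 1.5 + 4
print(result)        # 8.5
print(type(result))  # <class 'float'>
```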
What would you use to find a number’s square root, as well as its square?
```
# Square root:
100 ** 0.5
# Square:
10 ** 2
```
## Strings
Given the string 'hello' give an index command that returns 'e'. Enter your code in the cell below:
```
s = 'hello'
# Print out 'e' using indexing
s[1]
```
Reverse the string 'hello' using slicing:
```
s ='hello'
# Reverse the string using slicing
s[::-1]
```
Given the string 'hello', give two methods of producing the letter 'o' using indexing.
```
s ='hello'
# Print out the 'o'
# Method 1:
s[-1]
# Method 2:
s[4]
```
## Lists
Build this list [0,0,0] two separate ways.
```
# Method 1:
[0]*3
# Method 2:
list2 = [0,0,0]
list2
```
Reassign 'hello' in this nested list to say 'goodbye' instead:
```
list3 = [1,2,[3,4,'hello']]
list3[2][2] = 'goodbye'
list3
```
Sort the list below:
```
list4 = [5,3,4,6,1]
# Method 1:
sorted(list4)
# Method 2:
list4.sort()
list4
```
## Dictionaries
Using keys and indexing, grab the 'hello' from the following dictionaries:
```
d = {'simple_key':'hello'}
# Grab 'hello'
d['simple_key']
d = {'k1':{'k2':'hello'}}
# Grab 'hello'
d['k1']['k2']
# Getting a little trickier
d = {'k1':[{'nest_key':['this is deep',['hello']]}]}
# This was harder than I expected...
d['k1'][0]['nest_key'][1][0]
# This will be hard and annoying!
d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}
# Phew!
d['k1'][2]['k2'][1]['tough'][2][0]
```
Can you sort a dictionary? Why or why not?
**Answer: No! Normal dictionaries are *mappings*, not sequences.**
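That said, you can sort a dictionary's *keys* or *items* and build a new dictionary from them; since Python 3.7, plain dictionaries preserve insertion order, so the rebuilt dictionary stays in sorted order:

```python
d = {'b': 2, 'c': 3, 'a': 1}

# sorted() iterates over the keys and returns a sorted list
print(sorted(d))  # ['a', 'b', 'c']

# Rebuild the dict from sorted items (insertion order is preserved in 3.7+)
d_sorted = dict(sorted(d.items()))
print(d_sorted)   # {'a': 1, 'b': 2, 'c': 3}
```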
## Tuples
What is the major difference between tuples and lists?
**Tuples are immutable!**
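You can see the immutability directly: trying to reassign an element of a tuple raises a `TypeError`:

```python
t = (1, 2, 3)
try:
    t[0] = 99  # tuples do not support item assignment
except TypeError as err:
    print("TypeError:", err)
```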
How do you create a tuple?
```
t = (1,2,3)
```
## Sets
What is unique about a set?
**Answer: They don't allow for duplicate items!**
Use a set to find the unique values of the list below:
```
list5 = [1,2,2,33,4,4,11,22,3,3,2]
set(list5)
```
## Booleans
For the following quiz questions, we will get a preview of comparison operators. In the table below, a=3 and b=4.
<table class="table table-bordered">
<tr>
<th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th>
</tr>
<tr>
<td>==</td>
<td>If the values of two operands are equal, then the condition becomes true.</td>
<td> (a == b) is not true.</td>
</tr>
<tr>
<td>!=</td>
<td>If values of two operands are not equal, then condition becomes true.</td>
<td> (a != b) is true.</td>
</tr>
<tr>
<td>></td>
<td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td>
<td> (a > b) is not true.</td>
</tr>
<tr>
<td><</td>
<td>If the value of left operand is less than the value of right operand, then condition becomes true.</td>
<td> (a < b) is true.</td>
</tr>
<tr>
<td>>=</td>
<td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td>
<td> (a >= b) is not true. </td>
</tr>
<tr>
<td><=</td>
<td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td>
<td> (a <= b) is true. </td>
</tr>
</table>
What will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)
```
# Answer before running cell
2 > 3
# Answer before running cell
3 <= 2
# Answer before running cell
3 == 2.0
# Answer before running cell
3.0 == 3
# Answer before running cell
4**0.5 != 2
```
Final Question: What is the boolean output of the cell block below?
```
# two nested lists
l_one = [1,2,[3,4]]
l_two = [1,2,{'k1':4}]
# True or False?
l_one[2][0] >= l_two[2]['k1']
```
## Great Job on your first assessment!
# What is Quantum?
```
# This code is to create the interactive figure
from bokeh.events import ButtonClick
from bokeh.layouts import row, column
from bokeh.models import ColumnDataSource, CustomJS, Button, Slider, DataRange1d, LabelSet, RadioButtonGroup, Div
from bokeh.plotting import figure
from bokeh.embed import file_html
from bokeh.resources import CDN
from bokeh.io import output_notebook, show, curdoc
import IPython
```
‘Quantum physics’ is a term widely used but much less understood. It is a mathematical model first used to describe the behavior of small things in a laboratory, which exposed gaps in the preceding theory of ‘classical’ physics. Quantum theory explains this behavior and gives us a more complete picture of our universe. We have realized we can use this previously unexplained behavior to perform certain computations that we previously did not believe possible. We call this quantum computing.
Quantum computing is the perfect way to dip your toes into quantum physics. It distills the core concepts from quantum physics into their simplest forms, stripping away the complications of the physical world. This page will take you on a short journey to discover (and explain!) some strange quantum phenomena, and give you a taste for what ‘quantum’ is.
## Review of Classical Probability
To cover quantum phenomena, we need to first remind ourselves of 'classical' probabilities. In this sense, 'classical' just means pre-quantum, i.e. the normal probability trees you should have seen in school. If you're already familiar with this material, you should move through it quickly. If you're not so hot on this, don't worry: we'll only cover some of the simplest probability problems possible.
### Probability Trees
You will hopefully remember probability trees from school. The idea is simple: we use a drawing to map out every possible eventuality, and from this we can calculate the chance of each outcome.
Say we have a coin, and to start, we place it in the state Heads. If we then toss this fair coin and look at it, there is a 50% chance we will see Heads again, and a 50% chance of seeing Tails instead. We can plot this on a probability tree like so:

We draw the outcomes on the end of each branch, and the probabilities of each occurrence on the branches. Similarly, if we started in the state Tails and tossed the coin, we would have a 50% chance of seeing Heads and a 50% chance of seeing Tails.

We can test that this works by trying it. You can physically get a coin out, flip it many times, and record each result; you will eventually see roughly 50% of your results are Heads and 50% are Tails. Around 500 to 1000 tosses should be enough to get reliable results.
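If you'd rather not flip a real coin a thousand times, a few lines of Python with the standard `random` module simulate the same experiment (the exact percentages will vary from run to run):

```python
import random

# Simulate 1000 fair coin tosses
tosses = [random.choice(['Heads', 'Tails']) for _ in range(1000)]
heads_frac = tosses.count('Heads') / len(tosses)
print(f"Heads: {heads_frac:.1%}  Tails: {1 - heads_frac:.1%}")  # both close to 50%
```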
### Experiment #1: Single Coin Toss
Too lazy to try this? Don’t worry! You can simulate the coin-tossing experiment by pressing the `Toss Coin` button below to simulate a coin toss and store the results. You can change the initial state to 'Heads' or 'Tails', or increase the `No. of Coins` slider to get many results quickly. Click `Reset` to discard your results and start again.
```
data_labels = ["Heads", "Tails"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Coin")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=0)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = Math.floor(Math.random() * 2);
source.data.top[result] += 1;
resultsHTML.text += [' H ', ' T '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Coin";
}
else {
tossbtn.label = "Toss " + repeats + " Coins";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
### Going Further
It looks like our probability tree model correctly predicts the results of our experiments. We can go further and chain our probability trees together to predict the outcomes of chains of events. For example, let’s say we start in the state Heads, toss the coin, then _toss the coin again._ What would we see? We can work it out using the trees:

You may remember from school that we multiply along the branches to calculate the probability of each combination of events:

We then add the results together to calculate the probability of each outcome:

And we can see the probability of seeing Heads after these two tosses is 50%, and the probability of seeing Tails after these two tosses is also 50%.
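The multiply-then-add rule is easy to verify numerically: each path's probability is the product of its branch probabilities, and each final outcome's total is the sum over the paths that end there:

```python
p = 0.5  # probability of each branch of a fair coin toss

# Starting from Heads, two paths end in Heads (H->H->H and H->T->H)
# and two paths end in Tails (H->H->T and H->T->T)
p_heads = p * p + p * p
p_tails = p * p + p * p
print(p_heads, p_tails)  # 0.5 0.5
```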
### Experiment #2: Double Coin Toss
As before, you can simulate the coin-tossing experiment by pressing the `Toss Coin Twice` button to simulate two coin tosses and store the final result. You can change the initial state, or change the `No. of Coins` to get many results quickly. Click `Reset` to discard your results and start again.
```
data_labels = ["Heads", "Tails"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Coin Twice")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=0)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = Math.floor(Math.random() * 2);
source.data.top[result] += 1;
resultsHTML.text += [' H ', ' T '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " Double Tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Coin Twice";
}
else {
tossbtn.label = "Toss " + repeats + " Coins Twice";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
With enough tosses, our results are as expected: an equal chance of measuring Heads or Tails.
## The Quantum Coin
Now we have a complete description of the classical coin, it’s time to introduce a quantum ‘coin’. Our quantum coin is called a ‘qubit’.
A qubit is something you can only play with in a lab, as they are very difficult to manipulate. Many years of scientific and technological advancements have gone into creating the qubits we have today, but the beauty of learning through quantum computing is that we can ignore the physical complications and just remember one thing: when we measure a qubit, it will be in one of two states. Instead of the two states Heads and Tails, we call our qubit’s two states 0 and 1.
### Experiment #3: The Quantum Coin Toss
Let’s experiment with our quantum coin and see how it behaves. We’re going to do a quantum toss, measure the state of our coin, and record it. This is just like the classical coin toss in the section above.
```
data_labels = ["0", "1"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Quantum Coin")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=0)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = Math.floor(Math.random() * 2);
source.data.top[result] += 1;
resultsHTML.text += [' 0 ', ' 1 '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " Quantum Tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Quantum Coin";
}
else {
tossbtn.label = "Toss " + repeats + " Quantum Coins";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
We’re going to try to describe our quantum coin using probability trees. It looks like, from a 0 state, the coin toss gives us a 50-50 chance of measuring 0 or 1. Let’s plot this on a tree as we did with the classical coin:

```
data_labels = ["0", "1"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Quantum Coin")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=1)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = Math.floor(Math.random() * 2);
source.data.top[result] += 1;
resultsHTML.text += [' 0 ', ' 1 '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " Quantum Tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Quantum Coin";
}
else {
tossbtn.label = "Toss " + repeats + " Quantum Coins";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
And similarly, it looks like from a 1 state, the coin toss gives us a 50-50 chance of measuring 0 or 1. The probability tree looks like this:

### Experiment #4: The Double Quantum Coin Toss
We now have a model that predicts the behaviour of the quantum coin. Like good scientists, we now want to test it on new scenarios and see if it holds up. Let’s try the double coin toss as we did before. Just like the classical coins, our model of the quantum coin predicts a 50-50 chance of measuring 0 or 1, regardless of which state we start in:

So let’s try it! We’re going to toss the quantum coin twice:
```
data_labels = ["0", "1"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Quantum Coin Twice")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=0)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label,
init_state_radio=init_state_radio),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = init_state_radio.active;
source.data.top[result] += 1;
resultsHTML.text += [' 0 ', ' 1 '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " Double Quantum Tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Quantum Coin Twice";
}
else {
tossbtn.label = "Toss " + repeats + " Quantum Coins Twice";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
Hmm… this is an unexpected result. Let's see what happens when we set the initial state to 1:
```
data_labels = ["0", "1"]
initial_counts = [0]*2
initial_labels = ["0.00%"]*2
source = ColumnDataSource(data=dict(x=data_labels, top=initial_counts, top_percent=initial_labels))
plot = figure(
plot_height=500,
plot_width=500,
sizing_mode="scale_width",
tools="reset",
x_range=data_labels,
y_range=DataRange1d(start=0, range_padding = 20, range_padding_units='absolute'),
y_axis_label="Counts")
plot.vbar(top='top',x='x',width=0.5,source=source, color="#BE95FF")
labels = LabelSet(x='x', y='top', text='top_percent', level='glyph',
x_offset=-20, y_offset=12, source=source, render_mode='canvas')
plot.add_layout(labels)
# Define buttons and sliders
tossbtn = Button(label="Toss Quantum Coin Twice")
repeat_slider = Slider(title="No. of Coins", start=1, end=50, step=1, value=0)
radio_label = Div(text="Initial State:")
init_state_radio = RadioButtonGroup(name="Initial State", labels=data_labels, active=1)
resetbtn = Button(label="Reset")
results_label = Div(text="Results:")
resultsHTML = Div(text="", style={'overflow-y': 'auto', 'max-height': '250px'})
# Define callbacks
toss_callback = CustomJS(args=dict(source=source, p=plot,
s=repeat_slider,
resultsHTML=resultsHTML,
results_label=results_label,
init_state_radio=init_state_radio),
code="""
for (var i = 0; i < Math.max(s.value, 1); ++i){  // toss exactly the number of coins the slider shows
const result = init_state_radio.active;
source.data.top[result] += 1;
resultsHTML.text += [' 0 ', ' 1 '][result]
}
const total_tosses = source.data.top[0] + source.data.top[1];
results_label.text = "Results (" + total_tosses + " Double Quantum Tosses):";
for (var i = 0; i<2; ++i){
const frac = source.data.top[i] / source.data.top.reduce((a, b) => a + b, 0);
source.data.top_percent[i] = (frac*100).toFixed(2) + "%";
}
if (Math.max(...source.data.top) > 22) {
p.y_range.range_padding_units = 'percent';
p.y_range.range_padding = 1;
} else {
p.y_range.range_padding_units = 'absolute';
p.y_range.range_padding = 30 - Math.max(...source.data.top);
};
source.change.emit();
""")
slider_callback = CustomJS(args=dict(tossbtn=tossbtn),
code="""
const repeats = cb_obj.value;
if (repeats == 1) {
tossbtn.label = "Toss Quantum Coin Twice";
}
else {
tossbtn.label = "Toss " + repeats + " Quantum Coins Twice";
};
""")
reset_callback = CustomJS(args=dict(source=source, resultsHTML=resultsHTML, results_label=results_label), code="""
source.data.top = [0,0];
source.data.top_percent = ['0.00%', '0.00%'];
source.change.emit();
resultsHTML.text = "";
results_label.text = "Results:";
""")
# Link callbacks to buttons / sliders
tossbtn.js_on_event(ButtonClick, toss_callback)
repeat_slider.js_on_change('value', slider_callback)
resetbtn.js_on_event(ButtonClick, reset_callback)
# Compose Layout
control_panel = column(tossbtn,
repeat_slider,
radio_label,
init_state_radio,
resetbtn,
results_label,
resultsHTML)
layout = row(plot, control_panel)
# Output HTML
html_repr = file_html(layout, CDN)
IPython.display.HTML(html_repr)
```
This doesn't match our prediction at all! Our model has failed us! This is the same problem that physicists encountered in the early 20th century. Searching for the answer led to the development of quantum physics, which is what we will use to describe our quantum coin toss.
## The Quantum Model
In short, quantum theory is probability theory with negative numbers.
What does this mean? We can’t have negative probabilities; that doesn’t make sense. Instead, we work with a new quantity we call _amplitudes_ and plot these on our trees. To get around the fact that probabilities cannot be negative and must add up to 1, we use a mathematical trick: we square the amplitudes to calculate the probabilities.
Let’s see an example. The amplitude trees for our single quantum coin toss look like this:

We can see that starting in the state 0, the quantum coin toss assigns equal amplitudes to both outcomes. When we square these amplitudes, they give us the correct probability of measuring 0 or 1 (50-50 chance). How did we know that the amplitudes were $\sqrt{\tfrac{1}{2}}$? Because they're the values that give us the right answers!
Starting in the state 1, the amplitude tree is different:

Here we can see our first negative number appearing in the amplitude of the 1 outcome. When we square our amplitudes to calculate the probabilities, this negative sign disappears (remember that a negative times a negative is a positive), and we see the 50-50 chance we measured above. The interesting result is when we chain these probabilities together.
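As a quick sanity check, here is a minimal Python sketch; the amplitude values are the ones read off the trees above:

```
import math

# Amplitudes read off the single-toss trees above
s = 1 / math.sqrt(2)
from_zero = [s, s]   # starting in state 0
from_one = [s, -s]   # starting in state 1: note the negative amplitude

# Squaring each amplitude gives the probability of that outcome;
# the minus sign disappears, leaving a 50-50 chance in both cases.
for amps in (from_zero, from_one):
    print([round(a ** 2, 10) for a in amps])  # prints [0.5, 0.5] both times
```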
## Explaining the Double Quantum Coin Toss
Just like with classical probability, we multiply our amplitudes along the branches to calculate the amplitude of each outcome:

And to work out the probability of measuring each outcome, we add the amplitudes leading to that outcome, then square the sum:

We can see the amplitudes of finding the coin (qubit) in the state 1 cancel each other out, and we call this effect interference. You should verify for yourself that this model works when the initial state is 1.
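The multiply-then-add bookkeeping can be checked with a short Python sketch; the toss amplitudes below are the ones from the trees above:

```
import math

# Amplitude of moving from state i to state j in one quantum coin toss
s = 1 / math.sqrt(2)
toss = {(0, 0): s, (0, 1): s,
        (1, 0): s, (1, 1): -s}

def double_toss_probs(initial):
    # Multiply amplitudes along each branch, add over the intermediate state,
    # then square to get the probability of each final outcome.
    amp = {0: 0.0, 1: 0.0}
    for mid in (0, 1):
        for final in (0, 1):
            amp[final] += toss[(initial, mid)] * toss[(mid, final)]
    return {out: round(a ** 2, 10) for out, a in amp.items()}

print(double_toss_probs(0))  # amplitudes for outcome 1 cancel: {0: 1.0, 1: 0.0}
print(double_toss_probs(1))  # starting from state 1: {0: 0.0, 1: 1.0}
```

The coin always ends up back in the state it started in, exactly as the interference argument predicts.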
## What is Quantum Computing?
This is cool, but how is it useful? It turns out that these interference effects can be used to our advantage; we can combine operations such as the quantum coin toss to build more efficient algorithms. These algorithms can use interference effects to make the wrong answers cancel out quickly and give us a high probability of measuring the right answer. This is the idea behind quantum computing.
# 1 - Predicting Salaries from Stack Overflow Surveys
Stack Overflow has been conducting [annual user surveys](https://insights.stackoverflow.com/survey/?utm_source=so-owned&utm_medium=blog&utm_campaign=dev-survey-2017&utm_content=blog-link&utm_term=data) since 2011. Yes, this is the same survey that (re)started the whole tabs vs spaces [debate](https://stackoverflow.blog/2017/06/15/developers-use-spaces-make-money-use-tabs/) in 2017. The results for the 2018 survey have been released, and I wanted **to use the 2017 results to try and predict salaries in the 2018 results**.
For anyone who has worked on a dataset not from Kaggle or the UCI repository, you might have experienced the 80/20 rule, where 80% of your time is spent cleaning data and 20% on modeling. Despite knowing the rule, it still surprised me how much time I spent cleaning the data, which is detailed below.
Broadly, I will be going through:
- Downcasting data
- Identifying and renaming common columns
- Pre-processing data
#### 1.1 - Importing Libraries
Importing all the standard libraries because it's just habit now. I've also set an option to view up to 50 columns without truncation.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.max_columns', 50)
```
# 2 - Reading and Downcasting Data
Downcasting data means to optimize the datatype of each column to reduce memory usage. For 2018, the dataset was more than 500 MB, which unfortunately is reaching the upper computational limits of my computer. If you are interested in a more detailed explanation, check out my [kernel](https://www.kaggle.com/yscyang1/microsoft-malware-1-loading-a-large-data-set) for the Microsoft malware competition.
Both 2017 and 2018 had the same treatment. First, I printed the breakdown of each datatype's memory usage, including the total memory usage. Then I downcasted each column and checked to see that the downcasting occurred.
Note: I changed the column "Respondent" from int32 to float32 because saving to feather format raised an error with the int32 dtype.
### 2.1 - 2017 Data
- Memory usage before downcasting: 405.03 MB
- Memory usage after downcasting: 15.56 MB
- About a 95% reduction in memory usage
```
df_2017 = pd.read_csv('2017/survey_results_public.csv')
df_2017.info(memory_usage='deep')
def get_memoryUsage(df):
    dtype_lst = list(df.dtypes.value_counts().index)  # df.get_dtype_counts() was removed in newer pandas
for dtype in dtype_lst:
print('Total memory usage for {}: {} MB'.format(dtype, format(df.select_dtypes([dtype]).memory_usage(deep = True).sum()/1024**2,'.5f')))
print('\n' + 'Total Memory Usage: {} MB'.format(format(df.memory_usage(deep=True).sum()/1024**2, '.2f')))
get_memoryUsage(df_2017)
def downcast(df):
for col in df.select_dtypes(['int64']):
df[col] = pd.to_numeric(df[col], downcast = 'signed')
for col in df.select_dtypes(['float64']):
df[col] = pd.to_numeric(df[col], downcast = 'float')
for col in df.select_dtypes(['object']):
df[col] = df[col].astype('category')
downcast(df_2017)
get_memoryUsage(df_2017)
df_2017['Respondent'] = df_2017['Respondent'].astype('float32')
```
### 2.2 - 2018 Data
- Memory usage before downcasting: 619.4 MB
- Memory usage after downcasting: 45.08 MB
- About a 90% reduction in memory usage
```
df_2018 = pd.read_csv('2018/survey_results_public.csv', low_memory=False)
get_memoryUsage(df_2018)
downcast(df_2018)
get_memoryUsage(df_2018)
df_2018['Respondent'] = df_2018['Respondent'].astype('float32')
```
### 2.3 - A Brief Glance at the Columns
There are 154 columns in 2017 and 129 in 2018. Yet, there are only 17 columns with the same name. Surely there are more common columns between the two years?
```
pd.set_option('display.max_columns', 155)
df_2017.head(3)
df_2018.head(3)
print('Number of common columns: {} \n'.format(len(set(df_2017.columns).intersection(set(df_2018)))))
print(set(df_2017.columns).intersection(set(df_2018)))
```
### 2.4 - Identifying and Renaming Columns
From the documentation, I identified 49 columns in common between 2017 and 2018, including the 17 identified above. I isolate these columns and rename them so that both years share the same column names.
```
# Identifying columns
df_2017_keep = df_2017[['Respondent', 'ProgramHobby', 'Country', 'University', 'EmploymentStatus', 'FormalEducation',
'MajorUndergrad', 'CompanySize', 'YearsProgram', 'YearsCodedJob', 'DeveloperType', 'CareerSatisfaction',
'JobSatisfaction', 'KinshipDevelopers', 'CompetePeers', 'LastNewJob', 'AssessJobIndustry', 'AssessJobDept',
'AssessJobTech', 'AssessJobCompensation', 'AssessJobOffice', 'AssessJobRemote', 'AssessJobProfDevel',
'AssessJobDiversity', 'AssessJobProduct', 'AssessJobFinances', 'ResumePrompted', 'Currency',
'EducationTypes', 'SelfTaughtTypes', 'TimeAfterBootcamp', 'HaveWorkedLanguage', 'WantWorkLanguage',
'HaveWorkedFramework','WantWorkFramework', 'HaveWorkedDatabase', 'WantWorkDatabase', 'HaveWorkedPlatform',
'WantWorkPlatform', 'IDE', 'Methodology', 'VersionControl', 'CheckInCode', 'StackOverflowJobListing',
'Gender', 'HighestEducationParents', 'Race', 'SurveyLong', 'Salary']]
df_2018_keep = df_2018[['Respondent', 'Hobby', 'Country', 'Student', 'Employment', 'FormalEducation', 'UndergradMajor',
'CompanySize', 'DevType', 'YearsCoding', 'YearsCodingProf', 'JobSatisfaction', 'CareerSatisfaction',
'LastNewJob', 'AssessJob1', 'AssessJob2', 'AssessJob3', 'AssessJob4', 'AssessJob5', 'AssessJob6',
'AssessJob7', 'AssessJob8', 'AssessJob9', 'AssessJob10', 'UpdateCV', 'Currency', 'ConvertedSalary',
'EducationTypes', 'SelfTaughtTypes', 'TimeAfterBootcamp', 'AgreeDisagree1', 'AgreeDisagree2',
'LanguageWorkedWith', 'LanguageDesireNextYear', 'DatabaseWorkedWith', 'DatabaseDesireNextYear',
'PlatformWorkedWith', 'PlatformDesireNextYear', 'FrameworkWorkedWith', 'FrameworkDesireNextYear',
'IDE', 'Methodology', 'VersionControl', 'CheckInCode', 'StackOverflowJobs', 'Gender',
'EducationParents', 'RaceEthnicity', 'SurveyTooLong']]
# Renaming columns
df_2017_keep.rename(columns = {'Respondent': 'ID', 'ProgramHobby': 'Hobby', 'University': 'Student', 'EmploymentStatus': 'Employment',
'FormalEducation': 'Education', 'MajorUndergrad': 'UndergradMajor', 'YearsProgram': 'YearsCoding',
'YearsCodedJob': 'YearsCodingProf', 'DeveloperType': 'DevType', 'ResumePrompted': 'UpdateCV',
'HaveWorkedLanguage': 'LanguageWorkedWith', 'WantWorkLanguage': 'LanguageDesireNextYear',
'HaveWorkedFramework': 'FrameworkWorkedWith', 'WantWorkFramework': 'FrameworkDesireNextYear',
'HaveWorkedDatabase': 'DatabaseWorkedWith', 'WantWorkDatabase': 'DatabaseDesireNextYear',
'HaveWorkedPlatform': 'PlatformWorkedWith', 'WantWorkPlatform': 'PlatformDesireNextYear',
'StackOverflowJobListing': "StackOverflowJobs", 'HighestEducationParents': 'EducationParents'},
inplace = True)
df_2018_keep.rename(columns = {'Respondent': 'ID', 'FormalEducation': 'Education', 'AssessJob1': 'AssessJobIndustry',
'AssessJob2': 'AssessJobFinances', 'AssessJob3': 'AssessJobDept', 'AssessJob4': 'AssessJobTech',
'AssessJob5': 'AssessJobCompensation', 'AssessJob6': 'AssessJobOffice',
'AssessJob7': 'AssessJobRemote', 'AssessJob8': 'AssessJobProfDevel', 'AssessJob9': 'AssessJobDiversity',
'AssessJob10': 'AssessJobProduct', 'AgreeDisagree1': 'KinshipDevelopers', 'AgreeDisagree2': 'CompetePeers',
'RaceEthnicity': 'Race', 'SurveyTooLong': 'SurveyLong', 'ConvertedSalary': 'Salary'},
inplace = True)
```
### 2.5 - Save to Feather
At this point, I would like to save my condensed raw data so I have something to go back to before I start manipulating things.
```
import os
os.makedirs('tmp', exist_ok=True)
df_2017_keep.to_feather('tmp/df_2017_1keep')
df_2018_keep.to_feather('tmp/df_2018_1keep')
```
# 3 - Processing Each Column
This is the last, but arguably the most important part of this post.
### 3.1 - Missing Data
Some respondents didn't fill out much of the survey. For example, one person filled out the hobby section and left the rest blank. Such answers are useless for analysis, so I will drop all rows that have more than 50% of the answers blank. This results in a ~30% reduction in rows.
```
df_2017_keep.dropna(thresh=len(df_2017_keep.columns)/2, inplace=True)
df_2018_keep.dropna(thresh=len(df_2018_keep.columns)/2, inplace=True)
```
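To see how `thresh` behaves, here is a toy frame (not the survey data): `dropna(thresh=n)` keeps only the rows with at least `n` non-null values.

```
import numpy as np
import pandas as pd

# Toy frame: the second row has only 1 of its 4 answers filled in
df = pd.DataFrame({'a': [1, np.nan], 'b': [2, np.nan],
                   'c': [3, 7], 'd': [4, np.nan]})

# Keep rows with at least half the columns non-null (2 of 4 here)
kept = df.dropna(thresh=len(df.columns) // 2)
print(len(kept))  # only the fully answered first row survives: 1
```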
### 3.2 - Salary
Since the main goal is to predict salary, any row without a salary or currency is removed. This removes about 66% and 35% of the rows in 2017 and 2018 respectively. I haven't found in the documentation whether the 2017 salaries are already converted to US dollars, but working with the data, it seems like they are. The 2018 documentation clearly states salaries have been converted.
Since I want to know how much of each column is missing, I've written a function, getMissingPercent(), which takes a column name as a string and prints what percent of that column is missing in each year.
```
def getMissingPercent(col):
print('{} - percent missing in 2017: {}%'.format(col, df_2017_keep[col].isnull().sum()/len(df_2017_keep)*100))
print('{} - percent missing in 2018: {}%'.format(col, df_2018_keep[col].isnull().sum()/len(df_2018_keep)*100))
getMissingPercent('Salary')
df_2017_keep = df_2017_keep[(df_2017_keep['Currency'].notnull()) & (df_2017_keep['Salary'].notnull())]
df_2018_keep = df_2018_keep[(df_2018_keep['Currency'].notnull()) & (df_2018_keep['Salary'].notnull())]
# Commented out in case need to convert 2017 currencies to USD
# currency_dict = {'British pounds sterling (£)': 1.27386, 'U.S. dollars ($)': 1, 'Euros (€)': 1.14630, 'Brazilian reais (R$)': 0.269293,
# 'Indian rupees (?)': 0.0142103, 'Polish zloty (zl)': 0.266836, 'Canadian dollars (C$)': 0.755728,
# 'Russian rubles (?)': 0.0148888, 'Swiss francs': 1.01940, 'Swedish kroner (SEK)': 0.112174,
# 'Mexican pesos (MXN$)': 0.0517878, 'Australian dollars (A$)': 0.715379, 'Japanese yen (¥)': 0.00917943,
# 'Chinese yuan renminbi (¥)': 0.146269, 'Singapore dollars (S$)': 0.736965, 'South African rands (R)': 0.0721070,
# 'Bitcoin (btc)': 4019.77}
# def convert_salary(col):
# currency = col[0]
# salary = col[1]
# return currency_dict[currency] * salary
# df_2017_keep['Salary'] = df_2017_keep[['Currency','Salary']].apply(convert_salary, axis = 1)
```
### 3.3 - Hobby
Surprisingly, everyone filled out whether they program as a hobby or not. However, in 2017 you could also answer that you contributed to open source projects, whereas in 2018 answers were constrained to yes or no. I've simplified the 2017 answers so that anything aside from "No" becomes "Yes".
```
getMissingPercent('Hobby')
df_2017_keep['Hobby'].unique()
df_2018_keep['Hobby'].unique()
def hobby(answer):
    # Collapse 2017's open-source variants: anything other than 'No' counts as 'Yes'
    return 'No' if answer == 'No' else 'Yes'
df_2017_keep['Hobby'] = df_2017_keep['Hobby'].apply(hobby)
```
### 3.4 - Country
Respondents state what country they reside in. Again, no missing data. But respondents had to type in the country name, so watch out for typos later.
```
getMissingPercent('Country')
```
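One way to guard against those typos later is a small normalization pass. This is only a sketch: the variant spellings in the mapping are hypothetical examples, not values taken from the survey.

```
import pandas as pd

# Hypothetical variant spellings -> canonical country names (assumed, not from the survey)
fixes = {'usa': 'United States', 'u.s.a.': 'United States', 'uk': 'United Kingdom'}

def clean_country(name):
    # Strip stray whitespace, then map known variants to a canonical name
    key = name.strip().lower()
    return fixes.get(key, name.strip())

countries = pd.Series(['USA', ' United Kingdom ', 'Germany'])
print(countries.apply(clean_country).tolist())  # ['United States', 'United Kingdom', 'Germany']
```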
### 3.5 - Student
For both years, this asks if the respondent is currently enrolled in a college or university program. They have the same answer choices: full time, part time, no, or prefer not to say. However, in the 2018 dataset, answers of "I prefer not to say" all come out as null values; this seems to be true across the 2018 data. Null values are filled in with "I prefer not to say".
```
getMissingPercent('Student')
df_2017_keep['Student'].unique()
df_2018_keep['Student'].unique()
df_2018_keep['Student'] = df_2018_keep.Student.cat.add_categories('I prefer not to say').fillna('I prefer not to say')
```
### 3.6 - Employment
After removing null salaries, 2017 only has two employment statuses: employed full time or part time. In the 2017 documentation, it states that salary information was only collected if respondents stated they were employed.
There was no such filter for the 2018 data, so I've filtered out anyone not employed full- or part-time: independent contractors/freelancers/the self-employed, those unemployed but looking for work, those unemployed and not looking for work, and retirees.
```
getMissingPercent('Employment')
sorted(df_2017_keep['Employment'].unique())
df_2018_keep['Employment'].unique()
df_2018_keep = df_2018_keep[(df_2018_keep['Employment']=='Employed full-time') | (df_2018_keep['Employment']=='Employed part-time')]
```
### 3.7 - Education
Education refers to the highest level of formal education the respondent has completed. In 2018, a category was added for an associate degree. I've added that category to 2017 and converted null values in 2018 to "I prefer not to answer".
```
getMissingPercent('Education')
list(df_2017_keep['Education'].unique())
list(df_2018_keep['Education'].unique())
df_2017_keep['Education'] = df_2017_keep.Education.cat.add_categories('Associate degree')
df_2018_keep['Education'] = df_2018_keep.Education.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
```
### 3.8 - Undergraduate Major
As one would expect, this column asks what is/was the respondent's undergraduate major. The two years have the same options to choose from, encompassing a wide variety of majors, with heavy emphasis on different types of computer science majors.
```
getMissingPercent('UndergradMajor')
list(df_2017_keep['UndergradMajor'].unique())==list(df_2018_keep['UndergradMajor'].unique())
df_2017_keep['UndergradMajor'] = df_2017_keep.UndergradMajor.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['UndergradMajor'] = df_2018_keep.UndergradMajor.cat.add_categories('NaN').fillna('NaN')
```
### 3.9 - Company Size
Company size options range from fewer than 10 employees to greater than 10k employees.
```
getMissingPercent('CompanySize')
df_2017_keep['CompanySize'] = df_2017_keep.CompanySize.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CompanySize'] = df_2018_keep.CompanySize.cat.add_categories('NaN').fillna('NaN')
sorted(df_2017_keep['CompanySize'].unique())==sorted(df_2018_keep['CompanySize'].unique())
```
### 3.10 - Years Coded
This section asks how many years the respondent has been coding, including for school, for fun, or for work (professionally). The answer choices for 2017 were a little confusing; for example, they included 1 to 2 years, 2 to 3 years, and so forth. The 2018 choices were less ambiguous, with choices such as 0-2 years and 3-5 years.
For the 2017 choices, I've reworked the answers so that the first number is included and the second is excluded. To clarify, if the respondent chose "1 to 2 years", I treat it as anywhere between 1 and 1.99 years. With this method, I can make the answer choices match between the two datasets.
```
getMissingPercent('YearsCoding')
df_2017_keep['YearsCoding'] = df_2017_keep.YearsCoding.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['YearsCoding'] = df_2018_keep.YearsCoding.cat.add_categories('NaN').fillna('NaN')
YearsCoding2017_dict ={'Less than a year': '0-2 years', '1 to 2 years': '0-2 years', '2 to 3 years': '3-5 years',
'3 to 4 years': '3-5 years', '4 to 5 years': '3-5 years', '5 to 6 years': '6-8 years',
'6 to 7 years': '6-8 years', '7 to 8 years': '6-8 years', '8 to 9 years': '9-11 years',
'9 to 10 years': '9-11 years', '10 to 11 years': '9-11 years', '11 to 12 years': '12-14 years',
'12 to 13 years': '12-14 years', '13 to 14 years': '12-14 years', '14 to 15 years': '15-17 years',
'15 to 16 years': '15-17 years', '16 to 17 years': '15-17 years', '17 to 18 years': '18-20 years',
'18 to 19 years': '18-20 years', '19 to 20 years': '18-20 years', '20 or more years': '20 or more years',
'NaN': 'NaN'}
def convert_YearsCoding2017(col):
return YearsCoding2017_dict[col]
df_2017_keep['YearsCoding'] = df_2017_keep['YearsCoding'].apply(convert_YearsCoding2017)
YearsCoding2018_dict = {'21-23 years': '20 or more years', '24-26 years': '20 or more years', '27-29 years': '20 or more years',
'30 or more years': '20 or more years', 'NaN': 'NaN'}
def convert_YearsCoding2018(col):
try:
return YearsCoding2018_dict[col]
except:
return col
df_2018_keep['YearsCoding'] = df_2018_keep['YearsCoding'].apply(convert_YearsCoding2018)
```
### 3.11 - Years Coded Professionally
Similar to section 3.10's Years Coded, but counting only the years the respondent has coded for work. I was able to reuse the years-coding dictionaries.
```
getMissingPercent('YearsCodingProf')
df_2017_keep['YearsCodingProf'] = df_2017_keep.YearsCodingProf.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['YearsCodingProf'] = df_2018_keep.YearsCodingProf.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['YearsCodingProf'] = df_2017_keep['YearsCodingProf'].apply(convert_YearsCoding2017)
df_2018_keep['YearsCodingProf'] = df_2018_keep['YearsCodingProf'].apply(convert_YearsCoding2018)
```
### 3.12 - Software Developer Type
This question asks the respondent what type of software developer they are. Multiple responses are allowed, which has resulted in ~900 and ~4800 unique responses for 2017 and 2018 respectively. For now, I fill in the null values as "NaN" and create a new column that indicates how many options the respondent chose, as computed in get_count().
```
getMissingPercent('DevType')
df_2017_keep['DevType']= df_2017_keep.DevType.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DevType']= df_2018_keep.DevType.cat.add_categories('NaN').fillna('NaN')
len(df_2017_keep['DevType'][36].split(";"))
def get_count(col):
count = len(col.split(';'))
if col == 'NaN':
count = 0
return count
df_2017_keep['DevType_Count'] = df_2017_keep['DevType'].apply(get_count)
df_2018_keep['DevType_Count'] = df_2018_keep['DevType'].apply(get_count)
```
### 3.13 - Career Satisfaction
This question asks respondents how satisfied they are with their career so far. Again, the two years use completely different answer systems: 2017 uses a 0 to 10 scale (where 0 is most dissatisfied and 10 is most satisfied), while 2018's answers range from extremely dissatisfied to extremely satisfied. To combine the two, I've anchored the scale so that 0 corresponds to extremely dissatisfied, 5 to neither satisfied nor dissatisfied, and 10 to extremely satisfied.
```
getMissingPercent('CareerSatisfaction')
list(df_2017_keep['CareerSatisfaction'].unique())
list(df_2018_keep['CareerSatisfaction'].unique())
satisfaction_dict = {0.0: 'Extremely dissatisfied', 1.0: 'Moderately dissatisfied', 2.0: 'Moderately dissatisfied',
3.0: 'Slightly dissatisfied', 4.0: 'Slightly dissatisfied', 5.0: 'Neither satisfied nor dissatisfied',
6.0: 'Slightly satisfied', 7.0: 'Slightly satisfied', 8.0: 'Moderately satisfied', 9.0: 'Moderately satisfied',
10.0: 'Extremely satisfied', 'NaN': 'NaN'}
def convert_satisfaction(col):
return satisfaction_dict[col]
df_2017_keep['CareerSatisfaction'] = df_2017_keep['CareerSatisfaction'].astype('category')
df_2017_keep['CareerSatisfaction'] = df_2017_keep.CareerSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CareerSatisfaction'] = df_2018_keep.CareerSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['CareerSatisfaction'] = df_2017_keep['CareerSatisfaction'].apply(convert_satisfaction)
```
### 3.14 - Job Satisfaction
This question is similar to section 3.13, but specific to the respondent's current job. Data processing is also similar to career satisfaction.
```
getMissingPercent('JobSatisfaction')
df_2017_keep['JobSatisfaction'].unique()
df_2018_keep['JobSatisfaction'].unique()
df_2017_keep['JobSatisfaction'] = df_2017_keep['JobSatisfaction'].astype('category')
df_2017_keep['JobSatisfaction'] = df_2017_keep.JobSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['JobSatisfaction'] = df_2018_keep.JobSatisfaction.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['JobSatisfaction'] = df_2017_keep['JobSatisfaction'].apply(convert_satisfaction)
```
### 3.15 - Kinship with Developers
Kinship refers to the sense of connection the respondent feels with other developers. The two years use the same 5-point scale, ranging from strongly disagree to strongly agree. A minor difference is that 2017's "Somewhat agree" sits where 2018's "Neither agree nor disagree" does. I've converted both to a numerical scale where strongly disagree is a 1 and strongly agree is a 5.
```
getMissingPercent('KinshipDevelopers')
list(df_2017_keep['KinshipDevelopers'].unique())
list(df_2018_keep['KinshipDevelopers'].unique())
agree_dict = {'Strongly disagree': 1, 'Disagree': 2, 'Somewhat agree': 3, 'Neither Agree nor Disagree': 3, 'Agree': 4,
'Strongly agree': 5}
def convert_agreement(col):
return agree_dict[col]
df_2017_keep['KinshipDevelopers'] = df_2017_keep['KinshipDevelopers'].apply(convert_agreement)
df_2018_keep['KinshipDevelopers'] = df_2018_keep['KinshipDevelopers'].apply(convert_agreement)
df_2017_keep['KinshipDevelopers'] = pd.to_numeric(df_2017_keep['KinshipDevelopers'], downcast='unsigned')
df_2018_keep['KinshipDevelopers'] = pd.to_numeric(df_2018_keep['KinshipDevelopers'], downcast='unsigned')
df_2017_keep['KinshipDevelopers'] = df_2017_keep['KinshipDevelopers'].fillna(0)
df_2018_keep['KinshipDevelopers'] = df_2018_keep['KinshipDevelopers'].fillna(0)
```
### 3.16 - Compete with Peers
For the philosophers: is this the opposite of, or akin to, the kinship with other developers from section 3.15? As the title suggests, this question asks whether respondents think of themselves as being in competition with their peers. It uses the same 5-point scale as section 3.15.
```
getMissingPercent('CompetePeers')
df_2017_keep['CompetePeers'].unique()
df_2018_keep['CompetePeers'].unique()
df_2017_keep['CompetePeers'] = df_2017_keep['CompetePeers'].apply(convert_agreement)
df_2018_keep['CompetePeers'] = df_2018_keep['CompetePeers'].apply(convert_agreement)
df_2017_keep['CompetePeers'] = pd.to_numeric(df_2017_keep['CompetePeers'], downcast='unsigned')
df_2018_keep['CompetePeers'] = pd.to_numeric(df_2018_keep['CompetePeers'], downcast='unsigned')
df_2017_keep['CompetePeers'] = df_2017_keep['CompetePeers'].fillna(0)
df_2018_keep['CompetePeers'] = df_2018_keep['CompetePeers'].fillna(0)
```
### 3.17 - Last New Job
This section asks when the respondent last took a job with a new employer. Responses range from never to more than four years ago. I changed 2018's "I've never had a job" to "Not applicable/ never" to match 2017's response.
```
getMissingPercent('LastNewJob')
list(df_2017_keep['LastNewJob'].unique())
list(df_2018_keep['LastNewJob'].unique())
df_2017_keep['LastNewJob'] = df_2017_keep.LastNewJob.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LastNewJob'] = df_2018_keep.LastNewJob.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LastNewJob'] = df_2018_keep['LastNewJob'].replace("I've never had a job", 'Not applicable/ never')
```
### 3.18 - Assessing Jobs: Industry
The subsequent assessing-jobs sections ask: if the respondent were assessing a potential job to apply to, how important would each category be? In this section, the category is the industry.
For all of the assessing-jobs columns, 2017's potential responses range from not at all important to very important, whereas 2018's responses range from 1 to 10. I've anchored the scale so that a 1 corresponds to not at all important, 5 to somewhat important, and 10 to very important.
```
getMissingPercent('AssessJobIndustry')
list(df_2017_keep['AssessJobIndustry'].unique())
list(df_2018_keep['AssessJobIndustry'].unique())
df_2018_keep['AssessJobIndustry'] = df_2018_keep['AssessJobIndustry'].astype('category')
df_2018_keep['AssessJobIndustry'] = df_2018_keep.AssessJobIndustry.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['AssessJobIndustry'] = df_2017_keep.AssessJobIndustry.cat.add_categories('NaN').fillna('NaN')
importance_dict = {1: 'Not at all important', 2: 'Not at all important', 3: 'Not very important', 4: 'Not very important',
5: 'Somewhat important', 6: 'Somewhat important', 7: 'Important', 8: 'Important', 9: 'Very important',
10: 'Very important', 'NaN': 'NaN'}
def convert_importance(col):
return importance_dict[col]
df_2018_keep['AssessJobIndustry'] = df_2018_keep['AssessJobIndustry'].apply(convert_importance)
```
### 3.19 - Assessing Jobs: Department
How important is the specific team or department when assessing potential jobs?
```
getMissingPercent('AssessJobDept')
list(df_2017_keep['AssessJobDept'].unique())
df_2018_keep['AssessJobDept'].unique()
df_2018_keep['AssessJobDept'] = df_2018_keep['AssessJobDept'].astype('category')
df_2017_keep['AssessJobDept'] = df_2017_keep.AssessJobDept.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDept'] = df_2018_keep.AssessJobDept.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDept'] = df_2018_keep['AssessJobDept'].apply(convert_importance)
```
### 3.20 - Assessing Jobs: Technology
How important is the language, frameworks, and/or other technologies when assessing a potential job?
```
getMissingPercent('AssessJobTech')
df_2018_keep['AssessJobTech'] = df_2018_keep['AssessJobTech'].astype('category')
df_2017_keep['AssessJobTech'] = df_2017_keep.AssessJobTech.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobTech'] = df_2018_keep.AssessJobTech.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobTech'] = df_2018_keep['AssessJobTech'].apply(convert_importance)
```
### 3.21 - Assessing Jobs: Compensation
How important are the benefits and compensation when assessing a potential job?
```
getMissingPercent('AssessJobCompensation')
df_2018_keep['AssessJobCompensation'] = df_2018_keep['AssessJobCompensation'].astype('category')
df_2017_keep['AssessJobCompensation'] = df_2017_keep.AssessJobCompensation.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobCompensation'] = df_2018_keep.AssessJobCompensation.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobCompensation'] = df_2018_keep['AssessJobCompensation'].apply(convert_importance)
```
### 3.22 - Assessing Jobs: Office
How important is the office environment/company culture when assessing a potential job?
```
getMissingPercent('AssessJobOffice')
df_2018_keep['AssessJobOffice'] = df_2018_keep['AssessJobOffice'].astype('category')
df_2017_keep['AssessJobOffice'] = df_2017_keep.AssessJobOffice.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobOffice'] = df_2018_keep.AssessJobOffice.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobOffice'] = df_2018_keep['AssessJobOffice'].apply(convert_importance)
```
### 3.23 - Assessing Jobs: Work Remotely
How important is working from home/remotely when assessing a potential job?
```
getMissingPercent("AssessJobRemote")
df_2018_keep['AssessJobRemote'] = df_2018_keep['AssessJobRemote'].astype('category')
df_2017_keep['AssessJobRemote'] = df_2017_keep.AssessJobRemote.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobRemote'] = df_2018_keep.AssessJobRemote.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobRemote'] = df_2018_keep['AssessJobRemote'].apply(convert_importance)
```
### 3.24 - Assessing Jobs: Professional Development
How important are opportunities for professional development when assessing a potential job?
```
getMissingPercent('AssessJobProfDevel')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep['AssessJobProfDevel'].astype('category')
df_2017_keep['AssessJobProfDevel'] = df_2017_keep.AssessJobProfDevel.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep.AssessJobProfDevel.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProfDevel'] = df_2018_keep['AssessJobProfDevel'].apply(convert_importance)
```
### 3.25 - Assessing Jobs: Diversity
How important is the diversity of the company or organization when assessing a potential job?
```
getMissingPercent('AssessJobDiversity')
df_2018_keep['AssessJobDiversity'] = df_2018_keep['AssessJobDiversity'].astype('category')
df_2017_keep['AssessJobDiversity'] = df_2017_keep.AssessJobDiversity.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDiversity'] = df_2018_keep.AssessJobDiversity.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobDiversity'] = df_2018_keep['AssessJobDiversity'].apply(convert_importance)
```
### 3.26 - Assessing Jobs: Product Impact
How important is the impact of the product or service the respondent would be working on when assessing a potential job?
```
getMissingPercent('AssessJobProduct')
df_2018_keep['AssessJobProduct'] = df_2018_keep['AssessJobProduct'].astype('category')
df_2017_keep['AssessJobProduct'] = df_2017_keep.AssessJobProduct.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProduct'] = df_2018_keep.AssessJobProduct.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobProduct'] = df_2018_keep['AssessJobProduct'].apply(convert_importance)
```
### 3.27 - Assessing Jobs: Finances
How important is the financial performance or funding status of the company when assessing a potential job?
```
getMissingPercent('AssessJobFinances')
df_2018_keep['AssessJobFinances'] = df_2018_keep['AssessJobFinances'].astype('category')
df_2017_keep['AssessJobFinances'] = df_2017_keep.AssessJobFinances.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobFinances'] = df_2018_keep.AssessJobFinances.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['AssessJobFinances'] = df_2018_keep['AssessJobFinances'].apply(convert_importance)
```
### 3.28 - Reason for Updated CV
What was the reason the respondent last updated their resume/CV? They could only pick one response, but between the two years, the responses were vastly different. I added categories as appropriate to each year.
```
getMissingPercent('UpdateCV')
list(df_2017_keep['UpdateCV'].unique())
list(df_2018_keep['UpdateCV'].unique())
df_2017_keep['UpdateCV'] = df_2017_keep.UpdateCV.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['UpdateCV'] = df_2018_keep.UpdateCV.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['UpdateCV'] = df_2017_keep.UpdateCV.cat.add_categories(['My job status or other personal status changed',
'I did not receive an expected change in compensation',
'I had a negative experience or interaction at work',
'I received bad news about the future of my company or department'])
df_2018_keep['UpdateCV'] = df_2018_keep.UpdateCV.cat.add_categories(['I completed a major project, assignment, or contract',
'Something else',
'I was just giving it a regular update'])
```
### 3.29 - Informal Schooling Education Types
The respondent is asked what types of activities they have participated in outside of their formal schooling. Multiple answers are allowed, resulting in ~425 unique responses for each year. Answers range from taking an online programming course, to participating in coding competitions or hackathons, to contributing to open-source software.
```
getMissingPercent('EducationTypes')
df_2017_keep['EducationTypes'].unique()
df_2017_keep['EducationTypes']= df_2017_keep.EducationTypes.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['EducationTypes']= df_2018_keep.EducationTypes.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['EducationTypes_Count'] = df_2017_keep['EducationTypes'].apply(get_count)
df_2018_keep['EducationTypes_Count'] = df_2018_keep['EducationTypes'].apply(get_count)
```
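The `get_count` helper applied above is defined earlier in the notebook and isn't shown in this excerpt. Based on the scratch cell at the end of the notebook (which tests `len(col.split(';'))`), it is presumably equivalent to this sketch; note that the `'NaN'` placeholder counts as one answer:

```python
def get_count(col):
    """Count semicolon-separated answers in a multi-select survey response.
    Sketch of the notebook's helper; the 'NaN' placeholder counts as one."""
    return len(col.split(';'))

assert get_count('Online course;Hackathon;Open source') == 3
assert get_count('NaN') == 1
```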
### 3.30 - Resources for the Self Taught
Respondents who indicated they taught themselves a programming technology without taking a course are asked what resources they went to. Sources include books, Stack Overflow, and official documentation.
```
getMissingPercent('SelfTaughtTypes')
df_2017_keep['SelfTaughtTypes']= df_2017_keep.SelfTaughtTypes.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['SelfTaughtTypes']= df_2018_keep.SelfTaughtTypes.cat.add_categories('NaN').fillna('NaN')
```
### 3.31 - Time After Bootcamp to get Hired
For respondents who indicated they went to a bootcamp, this question asks how long it took each person to get hired after the camp. Both years have essentially the same options, but the wording is slightly different.
Also note that there is an extremely high proportion of missing data for both years.
```
getMissingPercent('TimeAfterBootcamp')
list(df_2017_keep['TimeAfterBootcamp'].unique())
list(df_2018_keep['TimeAfterBootcamp'].unique())
df_2017_keep['TimeAfterBootcamp']= df_2017_keep.TimeAfterBootcamp.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['TimeAfterBootcamp']= df_2018_keep.TimeAfterBootcamp.cat.add_categories('NaN').fillna('NaN')
def convert_TimeAfterBootcamp(col):
    if col == 'I already had a full-time job as a developer when I began the program':
        return 'I already had a job as a developer when I started the program'
    elif col == 'Immediately after graduating':
        return 'Immediately upon graduating'
    elif col == 'I haven’t gotten a developer job':
        return "I haven't gotten a job as a developer yet"
    else:
        return col
# Add the 2017 wording as a category before converting: .apply returns a plain
# object Series, so the .cat accessor would fail if called after the conversion
df_2018_keep['TimeAfterBootcamp'] = df_2018_keep.TimeAfterBootcamp.cat.add_categories('I got a job as a developer before completing the program')
df_2018_keep['TimeAfterBootcamp'] = df_2018_keep['TimeAfterBootcamp'].apply(convert_TimeAfterBootcamp)
```
### 3.32 - Languages Worked With
What programming languages has the respondent worked with extensively in the past year? Multiple languages are allowed, which yields many unique response combinations.
```
getMissingPercent('LanguageWorkedWith')
df_2017_keep['LanguageWorkedWith']= df_2017_keep.LanguageWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LanguageWorkedWith']= df_2018_keep.LanguageWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['LanguageWorkedWith_Count'] = df_2017_keep['LanguageWorkedWith'].apply(get_count)
df_2018_keep['LanguageWorkedWith_Count'] = df_2018_keep['LanguageWorkedWith'].apply(get_count)
```
### 3.33 - Languages Want to Work With
Similar to section 3.32, but with languages the respondent would like to learn in the next year.
```
getMissingPercent('LanguageDesireNextYear')
df_2017_keep['LanguageDesireNextYear']= df_2017_keep.LanguageDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['LanguageDesireNextYear']= df_2018_keep.LanguageDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['LanguageDesireNextYear_Count'] = df_2017_keep['LanguageDesireNextYear'].apply(get_count)
df_2018_keep['LanguageDesireNextYear_Count'] = df_2018_keep['LanguageDesireNextYear'].apply(get_count)
```
### 3.34 - Frameworks Worked With
Similar to section 3.32, but with frameworks (ex: Django, TensorFlow, Angular, etc)
```
getMissingPercent('FrameworkWorkedWith')
df_2017_keep['FrameworkWorkedWith']= df_2017_keep.FrameworkWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['FrameworkWorkedWith']= df_2018_keep.FrameworkWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['FrameworkWorkedWith_Count'] = df_2017_keep['FrameworkWorkedWith'].apply(get_count)
df_2018_keep['FrameworkWorkedWith_Count'] = df_2018_keep['FrameworkWorkedWith'].apply(get_count)
```
### 3.35 - Frameworks Want to Work With
Similar to section 3.33 and 3.34, but with frameworks the respondent would like to learn next year.
```
getMissingPercent('FrameworkDesireNextYear')
df_2017_keep['FrameworkDesireNextYear']= df_2017_keep.FrameworkDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['FrameworkDesireNextYear']= df_2018_keep.FrameworkDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['FrameworkDesireNextYear_Count'] = df_2017_keep['FrameworkDesireNextYear'].apply(get_count)
df_2018_keep['FrameworkDesireNextYear_Count'] = df_2018_keep['FrameworkDesireNextYear'].apply(get_count)
```
### 3.36 - Databases Worked With
Similar to section 3.32, but with databases (ex: Microsoft Azure, MySQL, MongoDB, etc.)
```
getMissingPercent('DatabaseWorkedWith')
df_2017_keep['DatabaseWorkedWith']= df_2017_keep.DatabaseWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DatabaseWorkedWith']= df_2018_keep.DatabaseWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['DatabaseWorkedWith_Count'] = df_2017_keep['DatabaseWorkedWith'].apply(get_count)
df_2018_keep['DatabaseWorkedWith_Count'] = df_2018_keep['DatabaseWorkedWith'].apply(get_count)
```
### 3.37 - Databases Want to Work With
Similar to section 3.33, but with databases.
```
getMissingPercent('DatabaseDesireNextYear')
df_2017_keep['DatabaseDesireNextYear']= df_2017_keep.DatabaseDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['DatabaseDesireNextYear']= df_2018_keep.DatabaseDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['DatabaseDesireNextYear_Count'] = df_2017_keep['DatabaseDesireNextYear'].apply(get_count)
df_2018_keep['DatabaseDesireNextYear_Count'] = df_2018_keep['DatabaseDesireNextYear'].apply(get_count)
```
### 3.38 - Platforms Worked With
Similar to section 3.32 but with platforms (ex: Linux, Microsoft Azure, AWS, etc.)
```
getMissingPercent('PlatformWorkedWith')
df_2017_keep['PlatformWorkedWith']= df_2017_keep.PlatformWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['PlatformWorkedWith']= df_2018_keep.PlatformWorkedWith.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['PlatformWorkedWith_Count'] = df_2017_keep['PlatformWorkedWith'].apply(get_count)
df_2018_keep['PlatformWorkedWith_Count'] = df_2018_keep['PlatformWorkedWith'].apply(get_count)
```
### 3.39 - Platforms Want to Work With
Similar to section 3.33, but with platforms.
```
getMissingPercent('PlatformDesireNextYear')
df_2017_keep['PlatformDesireNextYear']= df_2017_keep.PlatformDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['PlatformDesireNextYear']= df_2018_keep.PlatformDesireNextYear.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['PlatformDesireNextYear_Count'] = df_2017_keep['PlatformDesireNextYear'].apply(get_count)
df_2018_keep['PlatformDesireNextYear_Count'] = df_2018_keep['PlatformDesireNextYear'].apply(get_count)
```
### 3.40 - IDE
What development environment does the respondent use on a regular basis? Examples include Sublime, RStudio, PyCharm, etc. Multiple answers allowed.
```
getMissingPercent('IDE')
df_2017_keep['IDE']= df_2017_keep.IDE.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['IDE']= df_2018_keep.IDE.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['IDE_Count'] = df_2017_keep['IDE'].apply(get_count)
df_2018_keep['IDE_Count'] = df_2018_keep['IDE'].apply(get_count)
```
### 3.41 - Methodology
Asks the respondent what types of methodology they are familiar with. Examples include pair programming, lean, and scrum. Multiple answers allowed.
```
getMissingPercent('Methodology')
df_2017_keep['Methodology']= df_2017_keep.Methodology.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['Methodology']= df_2018_keep.Methodology.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['Methodology_Count'] = df_2017_keep['Methodology'].apply(get_count)
df_2018_keep['Methodology_Count'] = df_2018_keep['Methodology'].apply(get_count)
```
### 3.42 - Version Control
Asks the respondent which version control system (if any) they use. Multiple answers allowed.
```
getMissingPercent('VersionControl')
df_2017_keep['VersionControl']= df_2017_keep.VersionControl.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['VersionControl']= df_2018_keep.VersionControl.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['VersionControl_Count'] = df_2017_keep['VersionControl'].apply(get_count)
df_2018_keep['VersionControl_Count'] = df_2018_keep['VersionControl'].apply(get_count)
```
### 3.43 - Frequency of Checking in Code
Asks how often the respondent checked in or committed code over the past year. Answers are similarly worded for the two years.
```
getMissingPercent('CheckInCode')
list(df_2017_keep['CheckInCode'].unique())
list(df_2018_keep['CheckInCode'].unique())
df_2017_keep['CheckInCode']= df_2017_keep.CheckInCode.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['CheckInCode']= df_2018_keep.CheckInCode.cat.add_categories('NaN').fillna('NaN')
checkInCode_dict = { 'Just a few times over the year': 'Less than once per month', 'A few times a month': 'Weekly or a few times per month',
'Multiple times a day': 'Multiple times per day', 'Once a day': 'Once a day', 'A few times a week': 'A few times per week',
'Never': 'Never', 'NaN': 'NaN'}
def convert_checkInCode(col):
    return checkInCode_dict[col]
df_2017_keep['CheckInCode'] = df_2017_keep['CheckInCode'].apply(convert_checkInCode)
```
### 3.44 - Stack Overflow Jobs
Respondents are asked if they have ever used or visited the Stack Overflow Jobs webpage. It was difficult to combine responses between the two years, so I simplified answers to yes or no.
```
getMissingPercent('StackOverflowJobs')
list(df_2017_keep['StackOverflowJobs'].unique())
list(df_2018_keep['StackOverflowJobs'].unique())
df_2017_keep['StackOverflowJobs']= df_2017_keep.StackOverflowJobs.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['StackOverflowJobs']= df_2018_keep.StackOverflowJobs.cat.add_categories('NaN').fillna('NaN')
SOJobs_dict = {'Yes': 'Yes', 'No, I knew that Stack Overflow had a jobs board but have never used or visited it': 'No',
"No, I didn't know that Stack Overflow had a jobs board": 'No', "Haven't done at all": 'No', 'Several times': 'Yes',
'Once or twice': 'Yes', 'At least once each week': 'Yes', 'At least once each day': 'Yes', 'NaN': 'NaN'}
def convert_SOJobs(col):
    return SOJobs_dict[col]
df_2017_keep['StackOverflowJobs'] = df_2017_keep['StackOverflowJobs'].apply(convert_SOJobs)
df_2018_keep['StackOverflowJobs'] = df_2018_keep['StackOverflowJobs'].apply(convert_SOJobs)
```
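As a side note, the dictionary-plus-`apply` pattern used throughout these cells can also be written with `Series.map`, which performs the dictionary lookup directly. One behavioral difference worth knowing: `map` returns `NaN` for unmapped values, while the `apply`-based lookup raises a `KeyError`. A small sketch with toy values:

```python
import pandas as pd

s = pd.Series(['Yes', 'Several times', "Haven't done at all"])
lookup = {'Yes': 'Yes', 'Several times': 'Yes', "Haven't done at all": 'No'}

# Series.map(dict) replaces the one-line lookup function + apply
assert s.map(lookup).tolist() == ['Yes', 'Yes', 'No']
```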
### 3.45 - Gender
Respondents are asked what gender(s) they identify with. Multiple answers allowed, so there are more unique answers than just your typical male/female binary.
```
getMissingPercent('Gender')
df_2017_keep['Gender'].unique()
df_2018_keep['Gender'].unique()
df_2017_keep['Gender']= df_2017_keep.Gender.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
df_2018_keep['Gender']= df_2018_keep.Gender.cat.add_categories('I prefer not to answer').fillna('I prefer not to answer')
df_2017_keep['Gender_Count'] = df_2017_keep['Gender'].apply(get_count)
df_2018_keep['Gender_Count'] = df_2018_keep['Gender'].apply(get_count)
```
### 3.46 - Parents' Highest Education
Asks for the highest level of education attained by the respondent's parents. Both years had similar answers but different wording.
```
getMissingPercent('EducationParents')
list(df_2017_keep['EducationParents'].unique())
list(df_2018_keep['EducationParents'].unique())
df_2017_keep['EducationParents']= df_2017_keep.EducationParents.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['EducationParents']= df_2018_keep.EducationParents.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['EducationParents']= df_2017_keep.EducationParents.cat.add_categories('Associate degree')
df_2018_keep['EducationParents']= df_2018_keep.EducationParents.cat.add_categories(['I don\'t know/not sure', 'I prefer not to answer'])
educationParents_dict = {'NaN': 'NaN', 'A professional degree': 'A professional degree',
'Professional degree (JD, MD, etc.)':'A professional degree',
"A bachelor's degree": "A bachelor's degree",
'Bachelor’s degree (BA, BS, B.Eng., etc.)': "A bachelor's degree",
"A master's degree": "A master's degree",
'Master’s degree (MA, MS, M.Eng., MBA, etc.)': "A master's degree",
'High school': 'High school',
'Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)': 'High school',
'A doctoral degree': 'A doctoral degree',
'Other doctoral degree (Ph.D, Ed.D., etc.)': 'A doctoral degree',
'Some college/university study, no bachelor\'s degree': 'Some college/university',
'Some college/university study without earning a degree': 'Some college/university',
'Primary/elementary school': 'Primary/elementary school',
'No education': 'No education', 'They never completed any formal education': 'No education',
'I prefer not to answer': 'I prefer not to answer', "I don't know/not sure": "I don't know/not sure",
'Associate degree': 'Associate degree'}
def convert_educationParents(col):
    return educationParents_dict[col]
df_2017_keep['EducationParents'] = df_2017_keep['EducationParents'].apply(convert_educationParents)
df_2018_keep['EducationParents'] = df_2018_keep['EducationParents'].apply(convert_educationParents)
```
### 3.47 - Race
Respondents are asked what race(s) they identify with. Multiple answers allowed.
```
getMissingPercent('Race')
df_2017_keep['Race']= df_2017_keep.Race.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['Race']= df_2018_keep.Race.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['Race_Count'] = df_2017_keep['Race'].apply(get_count)
df_2018_keep['Race_Count'] = df_2018_keep['Race'].apply(get_count)
```
### 3.48 - Survey too Long
Lastly, respondents are asked if the survey was too long.
```
getMissingPercent('SurveyLong')
list(df_2017_keep['SurveyLong'].unique())
list(df_2018_keep['SurveyLong'].unique())
surveyLength_dict = {'Strongly disagree': 'The survey was too short', 'Disagree': 'The survey was too short',
'Somewhat agree': 'The survey was an appropriate length', 'Agree': 'The survey was too long',
'Strongly agree': 'The survey was too long', 'NaN': 'NaN'}
def convert_survey(col):
    return surveyLength_dict[col]
df_2017_keep['SurveyLong']= df_2017_keep.SurveyLong.cat.add_categories('NaN').fillna('NaN')
df_2018_keep['SurveyLong']= df_2018_keep.SurveyLong.cat.add_categories('NaN').fillna('NaN')
df_2017_keep['SurveyLong'] = df_2017_keep['SurveyLong'].apply(convert_survey)
```
# 4 - Saving the Data
Before saving the data, let's check that each column has the data type we want.
### 4.1 - Checking Datatypes
```
df_2017_keep.info()
df_2018_keep.info()
```
I see lots of object columns, and I'm fairly certain the 'Count' columns don't need to be 64-bit. Let's downcast both dataframes again.
```
downcast(df_2017_keep)
downcast(df_2018_keep)
get_memoryUsage(df_2017_keep)
get_memoryUsage(df_2018_keep)
```
As expected, objects were converted to categories, and the 'count' columns were converted to int8.
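The `downcast` helper is defined earlier in the notebook and isn't shown in this excerpt. A hypothetical sketch of what such a helper can look like, built on `pd.to_numeric` and `astype('category')` (consistent with the object-to-category and int64-to-int8 behavior observed above):

```python
import pandas as pd

def downcast(df):
    """Sketch of a downcast helper: shrink numeric columns to the smallest
    safe subtype and convert object columns to categories."""
    for col in df.select_dtypes('integer'):
        df[col] = pd.to_numeric(df[col], downcast='integer')
    for col in df.select_dtypes('float'):
        df[col] = pd.to_numeric(df[col], downcast='float')
    for col in df.select_dtypes('object'):
        df[col] = df[col].astype('category')
    return df

df = pd.DataFrame({'count': [1, 2, 3], 'answer': ['Yes', 'No', 'Yes']})
df = downcast(df)
assert df['count'].dtype == 'int8'              # int64 shrunk to int8
assert str(df['answer'].dtype) == 'category'    # object became category
```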
### 4.2 - Saving to Feather
Finally, we can save the updated columns to feather format and it should be able to run through a random forest model without a problem. Since I deleted some rows, the index is not in sequential order. The index must be reset before saving to feather.
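A minimal illustration, on a toy dataframe, of why the reset is needed: feather requires a default `RangeIndex`, and deleting rows leaves gaps.

```python
import pandas as pd

df = pd.DataFrame({'a': range(5)})
df = df[df['a'] % 2 == 0]        # deleting rows leaves index gaps: 0, 2, 4
assert list(df.index) == [0, 2, 4]

df.reset_index(inplace=True)     # feather requires a default RangeIndex
assert list(df.index) == [0, 1, 2]
# df.to_feather('tmp/toy') would now succeed (requires pyarrow)
```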
```
df_2017_keep.reset_index(inplace = True)
df_2018_keep.reset_index(inplace = True)
df_2017_keep.to_feather('tmp/df_2017_2keep')
df_2018_keep.to_feather('tmp/df_2018_2keep')
df_2017_keep.columns
```
### test data
```
df_2017_keep['DevType'][:5]
test = pd.DataFrame({'A':['adlkfslkfd', 'Nan', 'NaN', 'joke;asdlfk;asdf', 'adsf;dsf;asdf;dsa;fds;;fd;faf;ds'],
'B': [np.nan, 'No', 'Yes, fdas', 'Yes', 'No'], 'C':[45, 65,23,45,74]})
test
test['A'].apply(get_count)
def test_func(col):
    return len(col.split(';'))
# col_suffix = '_count'
# for row in df[col]:
# df[col + col_suffix] = row.split(';')
test['A_Count'] = test['A'].apply(test_func)
test
len('web;asdf'.split(';'))
df_2018[df_2018['Respondent']==21]
```
```
import pandas as pd
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ztest
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display, Markdown
df = pd.read_csv("../data/raw/train.csv")
```
### Dataset inspection
- Should we worry about computational complexity? (No, small dataset and small number of features)
- Should we use sampling techniques to reduce the size of the dataset? (No)
```
def display_df_memory_usage(df):
    """
    Display the memory usage of a dataframe.
    """
    md_table_str = '|Column Name|Size (MB)|\n|---|---|\n'
    mem_mb_total = 0
    for col_name, mem_bytes in df.memory_usage(deep=True).items():
        mem_mb = mem_bytes / 1024**2
        mem_mb_total += mem_mb
        md_table_str += '|{}|{:.2f}|\n'.format(col_name, mem_mb)
    md_table_str += '|Total|{:.2f}|\n'.format(mem_mb_total)
    display(Markdown(md_table_str))
display_df_memory_usage(df)
```
### Conclusion:
- We're working with a small dataset. Thus we can use all the data without worrying about computational resources or sampling the data.
## Data Quality Checks
- Are there too many missing values? (Just in some columns)
- Are there any columns with many missing values? (Yes, cabin)
- Should we drop any columns? (Maybe, cabin)
- Are there duplicate values? (No)
- Is there any strange behavior or correlation in the data? (No, it seems to be ok, but we should investigate with more sophisticated methods)
- At first glance, we might think that the port of embarkation affects the survival rate, but the initial analysis showed that maybe it's not the case.
- Survival rate seems correlated with `Pclass`
- Should we stop the analysis? (No, we should continue)
```
df.info()
# create a series with the percentage of missing values for each column
missing_values = df.isnull().sum() / len(df)*100
missing_values = missing_values.sort_values(ascending=False)
missing_values.rename("% missing values", inplace=True)
display(Markdown('**Missing values**'))
display(Markdown(missing_values.to_markdown()))
del missing_values
# print a markdown table with the col , the number of unique values and the unique values list
def unique_values_table(df):
    """Print a markdown table
    with the col, the number of unique values and the unique values
    list if there are more than 4 unique values.
    """
    md_table_str = '|Column Name|Unique Values||\n|---|---|---|\n'
    for col_name, unique_values in df.nunique().items():
        if unique_values > 3:
            md_table_str += '|{}|{}|\n'.format(col_name, unique_values)
        else:
            md_unique_str = ' '.join([
                f'{name}: {value*100:.1f}\%'
                for name, value in
                df[col_name].value_counts(normalize=True).items()
            ])
            md_table_str += '|{}|{}|{}\n'.format(
                col_name, unique_values, md_unique_str)
    display(Markdown(md_table_str))
unique_values_table(df)
# drop PassengerId column
df.drop(columns=['PassengerId'], inplace=True)
df.describe()
# check for duplicate rows
display(Markdown('**Duplicate rows**'))
display(Markdown(f'{df.duplicated().sum()} duplicate rows'))
df.hist('Age', bins=100)
plt.show()
```
- The `Age` feature distribution seems to be skewed. We should take this into account if we perform any kind of missing-value replacement.
- The values are between 0 and 80 which seems to be a reasonable range.
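Given the skew, the median is the safer fill value: unlike the mean, it is not pulled toward the long tail. A toy sketch (hypothetical ages, not the Titanic data):

```python
import pandas as pd
import numpy as np

age = pd.Series([2, 4, 22, 24, 26, 28, 30, 35, 80, np.nan])
# the long right tail pulls the mean above the median
assert age.median() < age.mean()
filled = age.fillna(age.median())   # robust imputation for skewed features
assert filled.isna().sum() == 0
```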
```
fig, axes = plt.subplots(nrows=1, ncols=3)
for a, col in zip(axes, ['Pclass', 'Sex', 'Embarked']):
    sns.countplot(x=col, hue='Survived', data=df, ax=a)
plt.show()
```
- The `Pclass` seems to affect the survival rate, which seems reasonable.
- The discrepancy between female/male rates can be related to the code of conduct
"*Women and children first*". However, we must investigate this further, because the discrepancy could be caused by other factors.
- At first glance it seems that passengers who embarked at point `S` are more likely to die. Obviously, it is unrealistic that where a passenger chose to embark affects their chance of survival.
- Almost $72\%$ of the passengers embarked at the S point.
```
fig, axes = plt.subplots(nrows=1, ncols=2)
for a, col in zip(axes, ['Pclass', 'Survived']):
    sns.countplot(x=col, hue='Sex', data=df, ax=a)
plt.show()
```
- We can notice that the third class is composed mostly of male passengers, so perhaps the discrepancy in survival rates between male and female passengers could also be related to this. We must investigate this more carefully.
```
fig, axes = plt.subplots(nrows=1, ncols=2)
for a, col in zip(axes, ['Pclass', 'Sex']):
    sns.countplot(x=col, hue='Embarked', data=df, ax=a)
plt.show()
def show_dist_table(df, col_a='Embarked', col_b='Pclass',
                    col_by='Pclass', how='count'):
    sce = df[[col_a, col_b]].groupby([col_a, col_b]).agg({col_by: how})
    sce['Percentage'] = sce.groupby(level=0).apply(
        lambda x: 100 * x / float(x.sum()))
    sce['Percentage'] = sce['Percentage'].map(lambda x: f'{x:.1f}%')
    return sce
show_dist_table(df)
```
- We can see that most of the passengers who embarked at point `S` came from the third class.
- Point `Q` also has a high proportion of third-class passengers, but unlike point `S`, the number of passengers who embarked there is much lower.
## More EDA and statistics
Let's take a look at the `Age` feature distribution.
```
# plot histogram of age by Pclass
plt.figure()
for col in [1, 2, 3]:
    df_age = df[df['Pclass'] == col]['Age']
    sns.distplot(df_age, label=f'Pclass {col}')
plt.legend()
plt.show()
(df[df['Pclass'] == 1]['Age'].describe(), df[df['Pclass'] == 2]['Age'].describe())
```
- The first-class passengers are older than the second and third class. We know that the first-class passengers have a higher chance of survival than the second and third class.
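The class-conditional survival rates behind that statement come from a simple groupby; a sketch with toy values (the real rates are computed from the full dataframe and differ):

```python
import pandas as pd

toy = pd.DataFrame({'Pclass':   [1, 1, 2, 2, 3, 3, 3, 3],
                    'Survived': [1, 1, 1, 0, 0, 0, 1, 0]})
rates = toy.groupby('Pclass')['Survived'].mean()  # mean of 0/1 = survival rate
assert rates[1] == 1.0 and rates[3] == 0.25
```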
```
def z_test(df, col='Age'):
    df_survivors = df[df['Survived'] == 1][col].dropna()
    df_nonsurvivors = df[df['Survived'] == 0][col].dropna()
    z_stat, p_value = ztest(df_survivors, df_nonsurvivors)
    print("Z Test")
    print(20*'-')
    print(f"Z stat. = {z_stat:.3f}")
    print(f"P value = {p_value:.3f}\n")
    print(20*'=')
z_test(df)
sns.histplot(df[df['Survived'] == 0]['Age'], kde=True)
```
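For reference, the statistic `ztest` reports can be computed by hand. This is a large-sample sketch using per-group variances; with equal group sizes and variances it coincides with statsmodels' pooled default, but it is an illustration rather than a reimplementation of the library:

```python
import numpy as np

def z_stat(a, b):
    """Two-sample z statistic: difference of means over its standard error."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

assert z_stat([1, 2, 3], [1, 2, 3]) == 0.0            # identical groups
assert abs(z_stat([2, 3, 4], [1, 2, 3]) - 1.2247) < 1e-3
```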
## EDA through SHAP
<a href="https://colab.research.google.com/github/Edudeiko/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/Evgenii_Dudeiko_DSPT3_123_Make_Explanatory_Visualizations_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ASSIGNMENT
### 1) Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit).
Get caught up to where we got our example in class and then try and take things further. How close to "pixel perfect" can you make the lecture graph?
Once you have something that you're proud of, share your graph in the cohort channel and move on to the second exercise.
### 2) Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).
**WARNING**: There are a lot of very custom graphs and tables at the above link. I **highly** recommend not trying to reproduce any that look like a table of values or something really different from the graph types that we are already familiar with. Search through the posts until you find a graph type that you are more or less familiar with: histogram, bar chart, stacked bar chart, line chart, [seaborn relplot](https://seaborn.pydata.org/generated/seaborn.relplot.html), etc. Recreating some of the graphics that 538 uses would be a lot easier in Adobe photoshop/illustrator than with matplotlib.
- If you put in some time to find a graph that looks "easy" to replicate you'll probably find that it's not as easy as you thought.
- If you start with a graph that looks hard to replicate you'll probably run up against a brick wall and be disappointed with your afternoon.
```
# Your Work Here
```
# STRETCH OPTIONS
### 1) Reproduce one of the following using the matplotlib or seaborn libraries:
- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/)
- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/)
- or another example of your choice!
### 2) Make more charts!
Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary).
Find the chart in an example gallery of a Python data visualization library:
- [Seaborn](http://seaborn.pydata.org/examples/index.html)
- [Altair](https://altair-viz.github.io/gallery/index.html)
- [Matplotlib](https://matplotlib.org/gallery.html)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes.
Take notes. Consider sharing your work with your cohort!
```
# More Work Here
from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
fake.plot.bar(color='#ed713a', width=0.9);
style_list = ['default', 'classic'] + sorted(
style for style in plt.style.available if style != 'classic')
style_list
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.head()
plt.style.use('fivethirtyeight')
fake2.value_counts().sort_index().plot.bar(color='#ed713a', width=0.9);
display(example)
fig = plt.figure(facecolor='black')
ax = fake2.value_counts().sort_index().plot.bar(color='#ed713a', width=0.9);
ax.set(facecolor='black')
plt.xlabel('Rating', color='white')
plt.ylabel('Percent of total votes', color='white')
display(example)
list(range(0, 50,10))
fig = plt.figure(facecolor='white', figsize=(5, 4))
ax = fake.plot.bar(color='#ed713a', width=0.9)
ax.set(facecolor='white')
ax.patch.set_alpha(0.1)
plt.xlabel('Rating', fontweight='bold')
plt.ylabel('Percent of total votes', fontweight='bold')
plt.title('`An Inconvenient Sequel: Truth To Power` is divisive',
fontsize=12,
loc='left',
x=-0.1,
y=1.1,
fontweight='bold')
#ax.text(x=-1.7, y=42, s='IMDb ratings for the film as of Aug. 29')
plt.text(x=-1.7, y=fake.max() + 4, s='IMDb ratings for the film as of Aug. 29', fontsize=10)
plt.xticks(rotation=0, color='#a7a7a7')
plt.yticks(range(0, 50, 10), labels=[f'{i}' if i!=40 else f'{i}%' for i in range(0, 50, 10)], color='#a7a7a7');
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
df.head()
df.dtypes
df['timestamp'] = pd.to_datetime(df['timestamp'])
df.describe()
df.dtypes
df['timestamp'].min()
df['timestamp'].max()
df = df.set_index('timestamp')
df.head()
df['2017-08-29']
lastday = df['2017-08-29']
lastday_filtered = lastday[lastday['category'] == 'IMDb users']
lastday_filtered.head()
lastday_filtered.tail()
lastday_filtered['respondents'].plot();
lastday_filtered['category'].value_counts()
pct_columns = [f'{i}_pct' for i in range(1, 11)]
pct_columns
final = lastday_filtered.tail(1)
final
final[pct_columns].T
plot_data = final[pct_columns].T
plot_data.index = range(1, 11)
plot_data
display(example)
plt.style.use('fivethirtyeight')
ax = plot_data.plot.bar(color='#ed713a', width=0.9)
plt.xlabel('Rating', fontsize=10, fontweight='bold')
plt.ylabel('Percent of total votes', fontsize=10, fontweight='bold')
plt.title('`An Inconvenient Sequel: Truth To Power` is divisive',
fontsize=12,
x=-0.1, y=1.1, loc='left',
fontweight='bold')
plt.text(x=-1.7, y=plot_data.values.max() + 4, s='IMDb ratings for the film as of Aug. 29')
plt.xticks(rotation=0, color='#a7a7a7')
plt.yticks(range(0, 50, 10), labels=[f'{i}' if i!=40 else f'{i}%' for i in range(0, 50, 10)], color='#a7a7a7');
legend = ax.legend()
legend.remove()
```
https://fivethirtyeight.com/features/some-people-are-too-superstitious-to-have-a-baby-on-friday-the-13th/
```
df1 = pd.read_csv('https://github.com/fivethirtyeight/data/raw/master/births/US_births_1994-2003_CDC_NCHS.csv')
df1.head()
df1.dtypes
df1.shape
df2 = pd.read_csv('https://github.com/fivethirtyeight/data/raw/master/births/US_births_2000-2014_SSA.csv')
df2.head()
df2.shape
df_all = [df1, df2]
df_man = pd.concat(df_all)
df_man.head()
df_man.shape
df_man['day_of_week'].unique()
weeks = {
1 : 'MON',
2 : 'TUES',
3 : 'WED',
4 : 'THURS',
5 : 'FRI',
6 : 'SAT',
7 : 'SUN',
}
df_man['day_of_week'] = df_man['day_of_week'].map(weeks)
df_man.head()
# df_man['year']
df_man.describe()
df_man['day_of_week'].min()
df_man['day_of_week'].max()
df_man['day_of_week'].value_counts()
df_man.loc[df_man['day_of_week'] == 'FRI']
import seaborn as sns
sns.violinplot(x='births', y='day_of_week', data=df_man)
plt.xticks(rotation=90);
# df_man = df_man.set_index('day_of_week')
# df_man.head()
```
## Torch Core
This module contains all the basic functions we need in other modules of the fastai library (split with [`core`](/core.html#core) that contains the ones not requiring pytorch). Its documentation can easily be skipped at a first read, unless you want to know what a given function does.
```
from fastai.imports import *
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
from fastai.torch_core import *
```
## Global constants
`AdamW = partial(optim.Adam, betas=(0.9,0.99))` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L43">[source]</a></div>
`bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L41">[source]</a></div>
`defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')` <div style="text-align: right"><a href="https://github.com/fastai/fastai/blob/master/fastai/torch_core.py#L62">[source]</a></div>
If you are trying to make fastai run on the CPU, simply change the default device: `defaults.device = 'cpu'`.
Alternatively, if not using wildcard imports: `fastai.torch_core.defaults.device = 'cpu'`.
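The `AdamW` constant above is simply a `functools.partial` that pre-binds the `betas` argument. As a generic standard-library illustration of that pattern (the `optimizer` function here is a stand-in, not PyTorch's):

```python
from functools import partial

def optimizer(params, lr=0.001, betas=(0.9, 0.999)):
    """Stand-in with an Adam-like signature."""
    return {'params': params, 'lr': lr, 'betas': betas}

# Pre-bind betas, exactly the pattern used for AdamW above.
adam_w_like = partial(optimizer, betas=(0.9, 0.99))
print(adam_w_like(['w'])['betas'])  # (0.9, 0.99)
```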
## Functions that operate conversions
```
show_doc(batch_to_half)
show_doc(flatten_model, full_name='flatten_model')
```
Flattens all the layers of `m` into an array. This allows for easy access to the layers of the model and allows you to manipulate the model as if it was an array.
```
m = simple_cnn([3,6,12])
m
flatten_model(m)
show_doc(model2half)
```
Converting model parameters to half precision allows us to leverage fast `FP16` arithmetic which can speed up the computations by 2-8 times. It also reduces memory consumption allowing us to train deeper models.
**Note**: Batchnorm layers are not converted to half precision as that may lead to instability in training.
```
m = simple_cnn([3,6,12], bn=True)
def show_params_dtype(state_dict):
"""Simple function to pretty print the dtype of the model params"""
for wt_name, param in state_dict.items():
print("{:<30}: {}".format(wt_name, str(param.dtype)))
print()
print("dtypes of model parameters before model2half: ")
show_params_dtype(m.state_dict())
# Converting model to half precision
m_half = model2half(m)
print("dtypes of model parameters after model2half: ")
show_params_dtype(m_half.state_dict())
show_doc(np2model_tensor)
```
It is a wrapper on top of Pytorch's `torch.as_tensor` which converts a numpy array to a torch tensor, additionally mapping all floats to `torch.float32` and all integers to `torch.int64` for consistency in model data. Below is an example demonstrating its functionality for floating-point numbers; the same applies to integers.
```
a1 = np.ones((2, 3)).astype(np.float16)
a2 = np.ones((2, 3)).astype(np.float32)
a3 = np.ones((2, 3)).astype(np.float64)
b1 = np2model_tensor(a1) # Maps to torch.float32
b2 = np2model_tensor(a2) # Maps to torch.float32
b3 = np2model_tensor(a3) # Maps to torch.float32
print(f"Datatype of as': {a1.dtype}, {a2.dtype}, {a3.dtype}")
print(f"Datatype of bs': {b1.dtype}, {b2.dtype}, {b3.dtype}")
show_doc(requires_grad)
```
Performs both getting and setting of the [`requires_grad`](/torch_core.html#requires_grad) parameter of the tensors, which decides whether to accumulate gradients or not.
* If `b` is `None`: the function **gets** the [`requires_grad`](/torch_core.html#requires_grad) for the model parameters; to be more specific, it returns the [`requires_grad`](/torch_core.html#requires_grad) of the first parameter in the model.
* Else if `b` is passed (a boolean value), [`requires_grad`](/torch_core.html#requires_grad) of all parameters of the model is **set** to `b`.
```
# Any Pytorch model
m = simple_cnn([3, 6, 12], bn=True)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
# Set requires_grad of all params in model to false
requires_grad(m, False)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
show_doc(tensor)
```
Handy function when you want to convert any list-type object to a tensor, initialize your weights manually, and in other similar cases.
**NB**: When passing multiple vectors, all vectors must have the same dimensions. (Obvious, but can be forgotten sometimes.)
```
# Conversion from any numpy array
b = tensor(np.array([1, 2, 3]))
print(b, type(b))
# Passing as multiple parameters
b = tensor(1, 2, 3)
print(b, type(b))
# Passing a single list
b = tensor([1, 2, 3])
print(b, type(b))
# Can work with multiple vectors / lists
b = tensor([1, 2], [3, 4])
print(b, type(b))
show_doc(to_cpu)
```
A wrapper on top of Pytorch's `torch.Tensor.cpu()` function, which creates and returns a copy of a tensor, or even a **list** of tensors, on the CPU. As described in Pytorch's docs, if the tensor or list of tensors is already on the CPU, the exact data is returned and no copy is made.
Useful for converting a whole list of model parameters to CPU in a single call.
```
if torch.cuda.is_available():
a = [torch.randn((1, 1)).cuda() for i in range(3)]
print(a)
print("Id of tensors in a: ")
for i in a: print(id(i))
# Getting a CPU version of the tensors in GPU
b = to_cpu(a)
print(b)
print("Id of tensors in b:")
for i in b: print(id(i))
# Trying to perform to_cpu on a list of tensor already in CPU
c = to_cpu(b)
print(c)
# The tensors in c have the same ids as those in b. No copy performed.
print("Id of tensors in c:")
for i in c: print(id(i))
show_doc(to_data)
```
Returns the data attribute from the object or collection of objects that inherit from the [`ItemBase`](/core.html#ItemBase) class. Useful to examine the exact values of the data; could also be used to work with the data outside of `fastai` classes.
```
# Default example examined
from fastai import *
from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
# Examine the labels
ys = list(data.y)
print("Category display names: ", [ys[0], ys[-1]])
print("Unique classes internally represented as: ", to_data([ys[0], ys[-1]]))
show_doc(to_detach)
show_doc(to_device)
show_doc(to_half)
```
Converts the tensor or list of tensors to `FP16`, resulting in less memory consumption and faster computations with the tensor. It does not convert `torch.int` types to half precision.
```
a1 = torch.tensor([1, 2], dtype=torch.int64)
a2 = torch.tensor([1, 2], dtype=torch.int32)
a3 = torch.tensor([1, 2], dtype=torch.int16)
a4 = torch.tensor([1, 2], dtype=torch.float64)
a5 = torch.tensor([1, 2], dtype=torch.float32)
a6 = torch.tensor([1, 2], dtype=torch.float16)
print("dtype of as: ", a1.dtype, a2.dtype, a3.dtype, a4.dtype, a5.dtype, a6.dtype, sep="\t")
b1, b2, b3, b4, b5, b6 = to_half([a1, a2, a3, a4, a5, a6])
print("dtype of bs: ", b1.dtype, b2.dtype, b3.dtype, b4.dtype, b5.dtype, b6.dtype, sep="\t")
show_doc(to_np)
```
Internally moves the data to the CPU, and converts it to the `numpy.ndarray` equivalent of the `torch.tensor` by calling `torch.Tensor.numpy()`.
```
a = torch.tensor([1, 2], dtype=torch.float64)
if torch.cuda.is_available():
a = a.cuda()
print(a, type(a), a.device)
b = to_np(a)
print(b, type(b))
show_doc(try_int)
# Converts floating point numbers to integer
print(try_int(12.5), type(try_int(12.5)))
# This is a Rank-1 ndarray, which ideally should not be converted to int
print(try_int(np.array([1.5])), try_int(np.array([1.5])).dtype)
# Numpy arrays with a single element are converted to int
print(try_int(np.array(1.5)), type(try_int(np.array(1.5))))
print(try_int(torch.tensor(2.5)), type(try_int(torch.tensor(2.5))))
# Strings are not converted to int (of course)
print(try_int("12.5"), type(try_int("12.5")))
```
## Functions to deal with model initialization
```
show_doc(apply_init)
show_doc(apply_leaf)
show_doc(cond_init)
show_doc(in_channels)
show_doc(init_default)
```
## Functions to get information of a model
```
show_doc(children)
show_doc(children_and_parameters)
show_doc(first_layer)
show_doc(last_layer)
show_doc(num_children)
show_doc(one_param)
show_doc(range_children)
show_doc(trainable_params)
```
## Functions to deal with BatchNorm layers
```
show_doc(bn2float)
show_doc(set_bn_eval)
show_doc(split_no_wd_params)
```
This is used by the optimizer to determine which parameters weight decay should be applied to when the option `bn_wd=False` is used in a [`Learner`](/basic_train.html#Learner).
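A generic sketch of the idea — partitioning parameters into weight-decay and no-weight-decay groups by layer type (illustrative only, not the fastai source; the pair-based representation of parameters is an assumption made for brevity):

```python
# Layer types whose parameters typically get no weight decay (cf. bn_types above).
BN_TYPES = ('BatchNorm1d', 'BatchNorm2d', 'BatchNorm3d')

def split_no_wd_sketch(named_params):
    """named_params: list of (layer_type_name, param_name) pairs."""
    wd, no_wd = [], []
    for layer_type, name in named_params:
        if layer_type in BN_TYPES or name.endswith('bias'):
            no_wd.append(name)
        else:
            wd.append(name)
    return wd, no_wd

wd, no_wd = split_no_wd_sketch([('Conv2d', 'conv.weight'), ('Conv2d', 'conv.bias'),
                                ('BatchNorm2d', 'bn.weight')])
print(wd, no_wd)  # ['conv.weight'] ['conv.bias', 'bn.weight']
```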
## Functions to get random tensors
```
show_doc(log_uniform)
log_uniform(0.5,2,(8,))
show_doc(rand_bool)
rand_bool(0.5, 8)
show_doc(uniform)
uniform(0,1,(8,))
show_doc(uniform_int)
uniform_int(0,2,(8,))
```
## Other functions
```
show_doc(ModelOnCPU, title_level=3)
show_doc(NoneReduceOnCPU, title_level=3)
show_doc(ParameterModule, title_level=3)
show_doc(data_collate)
show_doc(get_model)
show_doc(grab_idx)
show_doc(logit)
show_doc(logit_)
show_doc(model_type)
show_doc(np_address)
show_doc(split_model)
```
If `splits` are layers, the model is split sequentially at those layers (the split layers themselves are not included in the preceding group). If `want_idxs` is `True`, the corresponding indexes are returned. If `splits` are lists of layers, the model is split according to those.
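As a plain-Python sketch of the index-based variant of this splitting (`split_model_idx`-style, each split index starting a new group; illustrative, not the fastai implementation):

```python
def split_at_idxs(layers, idxs):
    """Split `layers` into consecutive groups, each index in `idxs` starting a new group."""
    bounds = [0] + list(idxs) + [len(layers)]
    return [layers[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

groups = split_at_idxs(['conv1', 'relu1', 'conv2', 'relu2', 'head'], [2, 4])
print(groups)  # [['conv1', 'relu1'], ['conv2', 'relu2'], ['head']]
```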
```
show_doc(split_model_idx)
show_doc(trange_of)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(tensor__array__)
show_doc(ParameterModule.forward)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(to_float)
show_doc(flatten_check)
```
| github_jupyter |
# Demonstration of basic image manipulation with SIRF/CIL
This demonstration shows how to create image data objects for MR, CT and PET and how to work with them.
This demo is a jupyter notebook, i.e. intended to be run step by step.
Author: Kris Thielemans, Richard Brown, Christoph Kolbitsch
First version: 8th of September 2016
Second Version: 17th of May 2018
Third Version: 23rd of October 2019
Fourth Version: 23rd of April 2021
CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF).
Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
Copyright 2015 - 2019, 2021 University College London.
Copyright 2021 Physikalisch-Technische Bundesanstalt.
This is software developed for the Collaborative Computational
Project in Synergistic Reconstruction for Biomedical Imaging
(http://www.ccpsynerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```
# Make sure figures appears inline and animations works
%matplotlib notebook
# We have placed a file in this directory, notebook_setup.py, which will allow us to import the sirf_exercises library
import notebook_setup
# The sirf_exercises defines some handy tools for these notebooks
from sirf_exercises import cd_to_working_dir
# Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import os
import sys
import shutil
import brainweb
from tqdm.auto import tqdm
from sirf.Utilities import examples_data_path
```
## make sure that your installation knows where to read and write data
Later scripts will first have to download data. In addition, the SIRF exercises are set up to write output in a separate "working directory" to avoid cluttering/overwriting your SIRF files. We need to tell Python where that will be. To do that, you have to run the `download_data.sh` script. You can do that from a terminal, or from this notebook.
The following cell will run the script to simply print a usage message.
```
%%bash
bash ../../scripts/download_data.sh -h
```
Let's now run the script again. The line below will actually not download anything (see further notebooks) but configure the destination directory, which is also used for the "working directory" set-up.
Note that you might want to use the `-d` option to write files somewhere other than the default location. (If you're running this as part of a training session, follow the advice given by your instructors, of course!)
```
%%bash
bash ../../scripts/download_data.sh
```
We can now move to a working directory for this notebook.
```
cd_to_working_dir('Introductory', 'introduction')
```
Let's check where we are by using the ipython "magic" command to print the current working directory
```
%pwd
```
# Utilities
First define some handy function definitions to make subsequent code cleaner. You can ignore them when you first see this demo.
They have (minimal) documentation using Python docstrings such that you can do for instance `help(plot_2d_image)`
```
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar(shrink=.6)
plt.title(title)
plt.axis("off")
def crop_and_fill(templ_im, vol):
"""Crop volumetric image data and replace image content in template image object"""
# Get size of template image and crop
idim = templ_im.as_array().shape
# Let's make sure everything is centered.
# Because offset is used to index an array it has to be of type integer, so we do an integer division using '//'
offset = (numpy.array(vol.shape) - numpy.array(idim)) // 2
vol = vol[offset[0]:offset[0]+idim[0], offset[1]:offset[1]+idim[1], offset[2]:offset[2]+idim[2]]
# Make a copy of the template to ensure we do not overwrite it
templ_im_out = templ_im.clone()
# Fill image content
templ_im_out.fill(numpy.reshape(vol, idim))
return(templ_im_out)
```
Note that SIRF and CIL have their own `show*` functions which will be used on other demos.
# Get brainweb data
We will download and use Brainweb data, which is made more convenient by using the Python brainweb module. We will use an FDG image for PET. MR usually provides qualitative images with an image contrast proportional to differences in T1, T2 or T2* depending on the sequence parameters. Nevertheless, we will make our life easy by directly using the T1 map provided by brainweb for MR.
```
fname, url= sorted(brainweb.utils.LINKS.items())[0]
files = brainweb.get_file(fname, url, ".")
data = brainweb.load_file(fname)
brainweb.seed(1337)
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
FDG_arr = vol['PET']
T1_arr = vol['T1']
uMap_arr = vol['uMap']
```
## Display it
The convention for the image dimensions in the brainweb images is [z, y, x]. If we want to
display the central slice (i.e. z), we therefore have to use the 0th dimension of the array.
We use integer division ('//') to ensure we get an integer value that can be used to index the array.
```
plt.figure();
slice_show = FDG_arr.shape[0]//2
# The images are very large, so we only want to visualise the central part of the image. In Python this can be
# achieved by using e.g. 100:-100 as indices. This will "crop" the first 100 and last 100 voxels of the array.
plot_2d_image([1,3,1], FDG_arr[slice_show, 100:-100, 100:-100], 'FDG', cmap="hot")
plot_2d_image([1,3,2], T1_arr[slice_show, 100:-100, 100:-100], 'T1', cmap="Greys_r")
plot_2d_image([1,3,3], uMap_arr[slice_show, 100:-100, 100:-100], 'uMap', cmap="bone")
```
More than likely, this image came out a bit small for your set-up. You can check the default image size as follows (note: units are inches)
```
plt.rcParams['figure.figsize']
```
You can then change them to a size more suitable for your situation, e.g.
```
plt.rcParams['figure.figsize']=[10,7]
```
Now execute the cell above that plots the images again to see if that helped.
You can make this change permanent by changing your `matplotlibrc` file (this might be non-trivial when running on Docker or JupyterHub instance!). You will need to search for `figure.figsize` in that file. Its location can be found as follows:
```
import matplotlib
matplotlib.matplotlib_fname()
```
# SIRF/CIL ImageData based on Brainweb
In order to create an __MR__, __PET__ or __CT__ `ImageData` object, we need some information about the modality, the hardware used for scanning and, to some extent, also the acquisition and reconstruction process. Most of this information is contained in the raw data files which can be exported from the __MR__ and __PET__ scanners. For __CT__ the parameters can be defined manually.
In the following we will now go through each modality separately and show how a simple `ImageData` object can be created. In the last part of the notebook we will then show examples about how to display the image data with python or how to manipulate the image data (e.g. multiply it with a constant or calculate its norm).
In order to make our life easier, we will assume that the voxel size and image orientation for __MR__, __PET__ and __CT__ are all the same, and that they are the same as those of the brainweb data. This is of course not true; in real-life applications and/or synergistic image reconstruction we would need to resample the brainweb images before using them as input to the `ImageData` objects.
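As a numpy-only sketch of what such a resampling could look like (nearest-neighbour interpolation onto a new grid; purely illustrative, not what SIRF/CIL do internally, and ignoring voxel-size/orientation metadata):

```python
import numpy as np

def resample_nn(vol, new_shape):
    """Nearest-neighbour resample of a 3D array onto `new_shape` voxels."""
    idx = [np.minimum(np.arange(n) * vol.shape[d] // n, vol.shape[d] - 1)
           for d, n in enumerate(new_shape)]
    return vol[np.ix_(*idx)]

vol = np.arange(27.0).reshape(3, 3, 3)
small = resample_nn(vol, (2, 2, 2))
print(small.shape)  # (2, 2, 2)
```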
# MR
Use the 'mr' prefix for all Gadgetron-based SIRF functions.
This is done here to explicitly differentiate between SIRF mr functions and
anything else.
```
import sirf.Gadgetron as mr
```
We'll need a template MR acquisition data object
```
templ_mr = mr.AcquisitionData(os.path.join(examples_data_path('MR'), 'simulated_MR_2D_cartesian.h5'))
```
In MR the dimensions of the image data depend of course on the data acquisition but they are also influenced by the reconstruction process. Therefore, we need to carry out an example reconstruction, in order to have all the information about the image.
```
# Simple reconstruction
preprocessed_data = mr.preprocess_acquisition_data(templ_mr)
recon = mr.FullySampledReconstructor()
recon.set_input(preprocessed_data)
recon.process()
im_mr = recon.get_output()
```
If the above failed with an error 'Server running Gadgetron not accessible', you probably still have to start a Gadgetron server. Check the [DocForParticipants](https://github.com/SyneRBI/SIRF-Exercises/blob/master/DocForParticipants.md#start-a-Gadgetron-server).
Now we have got an MR image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_mr = crop_and_fill(im_mr, T1_arr)
# im_mr is an MR image object. In order to visualise it we need access to the underlying data array. This is
# provided by the function as_array(). This yields a numpy array which can then be easily displayed. More
# information on this is also provided at the end of the notebook.
plt.figure();
plot_2d_image([1,1,1], numpy.abs(im_mr.as_array())[im_mr.as_array().shape[0]//2, :, :], 'MR', cmap="Greys_r")
```
# CT
Use the 'ct' prefix for all CIL-based functions.
This is done here to explicitly differentiate between CIL ct functions and
anything else.
```
import cil.framework as ct
```
Create a template Cone Beam CT acquisition geometry
```
N = 120
angles = numpy.linspace(0, 360, 50, True, dtype=numpy.float32)
offset = 0.4
channels = 1
ag = ct.AcquisitionGeometry.create_Cone3D((offset,-100, 0), (offset,100,0))
ag.set_panel((N,N-2))
ag.set_channels(channels)
ag.set_angles(angles, angle_unit=ct.AcquisitionGeometry.DEGREE);
```
Now we can create a template CT image object
```
ig = ag.get_ImageGeometry()
im_ct = ig.allocate(None)
```
Now we have got a CT image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_ct = crop_and_fill(im_ct, uMap_arr)
plt.figure();
plot_2d_image([1,1,1], im_ct.as_array()[im_ct.as_array().shape[0]//2, :, :], 'CT', cmap="bone")
```
# PET
Use the 'pet' prefix for all STIR-based SIRF functions.
This is done here to explicitly differentiate between SIRF pet functions and
anything else.
```
import sirf.STIR as pet
```
We'll need a template sinogram
```
templ_sino = pet.AcquisitionData(os.path.join(examples_data_path('PET'), 'mMR','mMR_template_span11.hs'))
```
Now we can create a template PET image object that would fit dimensions for that sinogram
```
im_pet = pet.ImageData(templ_sino)
```
Now we have got a PET image object and can fill it with the brainweb data. The dimensions won't fit, but we will simply crop the image.
```
im_pet = crop_and_fill(im_pet, FDG_arr)
plt.figure();
plot_2d_image([1,1,1], im_pet.as_array()[im_pet.as_array().shape[0]//2, :, :], 'PET', cmap="hot")
```
# Basic image manipulations
Images (like most other things in SIRF and CIL) are represented as *objects*, in this case of type `ImageData`.
In practice, this means that you can only manipulate its data via *methods*.
Image objects contain the actual voxel values, but also information on the number of voxels,
voxel size, etc. There are methods to get this information.
There are additional methods for other manipulations, such as basic image arithmetic (e.g.,
you can add image objects).
Because we created an `ImageData` object for each modality we can now simply select which modality we want to look at. Because SIRF is implemented to make the transition from one modality to the next very easy, many of the *methods* and *attributes* are exactly the same between __MR__, __PET__ and __CT__. There are of course *methods* and *attributes* which are modality-specific, but the basic handling of the `ImageData` objects is very similar between __MR__, __PET__ and __CT__.
```
# Make a copy of the image of a specific modality
image_data_object = im_ct.clone()
```
What is an ImageData?
Images are represented by objects with several methods. The most important method
is `as_array()` which we'll use below.
```
# Let's see what all the methods are.
help(pet.ImageData)
# Use as_array to extract an array of voxel values
# The resulting array is a `numpy` array, as standard in Python.
image_array=image_data_object.as_array()
# We can use the standard `numpy` methods on this array, such as getting its `shape` (i.e. dimensions).
print(image_array.shape)
# Whenever we want to do something with the image-values, we have to do it via this array.
# Let's print a voxel-value roughly in the centre of the object.
# We will not use the centre because the intensity here happens to be 0.
centre = numpy.array(image_array.shape)//2
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Manipulate the image data for illustration
```
# Multiply the data with a factor
image_array *= 0.01
# Stick this new data into the original image object.
# (This will not modify the file content, only the variable in memory.)
image_data_object.fill(image_array)
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
You can do basic math manipulations with ImageData objects
So the above lines can be done directly on the `image` object
```
image_data_object *= 0.01
# Let's check
image_array=image_data_object.as_array()
print(image_array[centre[0], centre[1]+20, centre[2]+20])
```
Display the middle slice of the image (which is really a 3D volume)
We will use our own `plot_2d_image` function (which was defined above) for brevity.
```
# Create a new figure
plt.figure()
# Display the slice (numpy.absolute is only necessary for MR but doesn't matter for PET or CT)
plot_2d_image([1,1,1], numpy.absolute(image_array[centre[0], :, :]), 'image data', cmap="viridis")
```
Some other things to do with ImageData objects
```
print(image_data_object.norm())
another_image=image_data_object*3+8.3
and_another=another_image+image_data_object
```
| github_jupyter |
```
import gif
import seaborn as sns; sns.set();
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
from mpl_toolkits.mplot3d import Axes3D
# Function to plot our input data for classification tasks.
def plot_2D_input_datapoints(X_inp, y_inp):
"""Method to plot 2D datapoints for classification tasks.
Parameters
------------
X_inp: ndarray (num_examples(rows) vs num_features(columns))
Input data which would be plotted.
y_inp: ndarray (num_examples(rows) vs num_outputs(columns))
Corresponding labels of X_inp
"""
sns.set();
X_inp = X_inp[:, :2]
inp_data = np.hstack((X_inp, y_inp.reshape(-1, 1)))
df = pd.DataFrame(data=inp_data, columns=["$X_0$", "$X_1$", "$y$"])
sns.scatterplot(x="$X_0$", y="$X_1$", hue="$y$", legend='full', palette='dark', data=df)
plt.title('Input data')
plt.legend(loc='lower right');
# Sourced from https://github.com/eriklindernoren/ML-From-Scratch
# Method to normalize the data
def normalize(X, axis=-1, order=2):
""" Normalize the dataset X """
l2 = np.atleast_1d(np.linalg.norm(X, order, axis))
l2[l2 == 0] = 1
return X / np.expand_dims(l2, axis)
# Defining our sigmoid activation function
def sigmoid(vec_w_x, predict='no'):
""" Sigmoid activation for binary classification.
Parameters
------------
vec_w_x: ndarray
Weighted inputs
predict: str ('yes' or 'no')
'no' corresponds to training
'yes' corresponds to prediction
"""
pred_prob = (1 / (1 + np.exp(-1 * vec_w_x)))
if predict == 'yes':
pred_prob[pred_prob >= 0.5] = 1
pred_prob[pred_prob < 0.5] = 0
return pred_prob
# Defining our softmax function for classification
def softmax(X_input_set, vec_w_x, predict='no'):
""" softmax activation for multiclass classification.
Parameters
------------
vec_w_x: ndarray
Weighted inputs
X_input_set: ndarray
Input dataset whose examples have to be classified
predict: str ('yes' or 'no')
'no' corresponds to training
'yes' corresponds to prediction
"""
num_datapoints = np.shape(X_input_set)[0]
# Computing predicted probabilites using softmax function
exp_units = np.exp(vec_w_x)
summation_exp_units = np.sum(exp_units, axis=-1, keepdims=True)
pred_prob = exp_units/summation_exp_units
# Predicting classes
if predict == 'yes':
maxvals = np.amax(pred_prob, axis=1)
for i in range(num_datapoints):
idx = np.argwhere(pred_prob == maxvals[i])[0]
pred_prob[idx[0], idx[1]] = 1
non_maxvals_idxs = np.argwhere(pred_prob != 1)
pred_prob[non_maxvals_idxs[:, 0], non_maxvals_idxs[:, 1]] = 0
return pred_prob
# Defining signum activation function
def signum(vec_w_x):
""" signum activation for perceptron
Parameters
------------
vec_w_x: ndarray
Weighted inputs
"""
vec_w_x[vec_w_x >= 0] = 1
vec_w_x[vec_w_x < 0] = -1
return vec_w_x
# multi-class signum
def multi_class_signum(vec_w_x):
""" Multiclass signum activation.
Parameters
------------
vec_w_x: ndarray
Weighted inputs
"""
flag = np.all(vec_w_x == 0)
if flag:
return vec_w_x
else:
num_examples, num_outputs = np.shape(vec_w_x)
range_examples = np.array(range(0, num_examples))
zero_idxs = np.argwhere(np.all(vec_w_x == 0, axis=1))
non_zero_examples = np.delete(range_examples, zero_idxs[:, 0])
signum_vec_w_x = vec_w_x[non_zero_examples]
maxvals = np.amax(signum_vec_w_x, axis=1)
for i in range(num_examples):
idx = np.argwhere(signum_vec_w_x == maxvals[i])[0]
signum_vec_w_x[idx[0], idx[1]] = 1
non_maxvals_idxs = np.argwhere(signum_vec_w_x != 1)
signum_vec_w_x[non_maxvals_idxs[:, 0], non_maxvals_idxs[:, 1]] = -1
vec_w_x[non_zero_examples] = signum_vec_w_x
return vec_w_x
# Evaluation for train, val, and test set.
def get_accuracy(y_predicted, Y_input_set, num_datapoints):
miscls_points = np.argwhere(np.any(y_predicted != Y_input_set, axis=1))
miscls_points = np.unique(miscls_points)
accuracy = (1-len(miscls_points)/num_datapoints)*100
return accuracy
def get_prediction(X_input_set, Y_input_set, weights, get_acc=False, model_type='perceptron', predict='no'):
if len(Y_input_set) != 0:
num_datapoints, num_categories = np.shape(Y_input_set)
vec_w_transpose_x = np.dot(X_input_set, weights)
if num_categories > 1: # Multi-class
if model_type == 'perceptron':
y_pred_out = multi_class_signum(vec_w_transpose_x)
elif model_type == 'logreg':
y_pred_out = softmax(X_input_set, vec_w_transpose_x, predict=predict)
else: # Binary class
if model_type == 'perceptron' or model_type == 'LinearDA':
y_pred_out = signum(vec_w_transpose_x)
elif model_type == 'logreg':
y_pred_out = sigmoid(vec_w_transpose_x, predict=predict)
# Both prediction and evaluation
if get_acc:
cls_acc = get_accuracy(y_pred_out, Y_input_set, num_datapoints)
return cls_acc, y_pred_out
# Only prediction
return y_pred_out
# Method to generate gifs of decision boundary
def generate_gifs(X, y, trained_weights, dataset_type, path='/path/filename.gif', bias='on', class_label_01_form='off', model_type='perceptron', predict='no'):
if dataset_type == 'train':
frames = []
for i in range(len(trained_weights)):
frame = plot_decision_boundary(X, y, trained_weights, i, dataset_type=dataset_type, bias=bias,
class_label_01_form=class_label_01_form, model_type=model_type, predict=predict)
frames.append(frame)
gif.save(frames, path, duration=500)
print('Gif/image generated')
@gif.frame
def plot_decision_boundary(X_input, Y_input, weights, i=0, dataset_type='test', bias='on', class_label_01_form='off', model_type='perceptron', predict='no'):
""" Plotting decision boundary for train/test set. This method can
also be used to plot a single decision boundary instead of a gif.
Parameters
-----------
X_input: ndarray (num_examples(rows) vs num_features(columns))
Input data which you would like to plot
Y_input: ndarray (num_examples(rows) vs num_outputs(columns))
Class labels
weights: ndarray (num_features(rows) vs num_outputs(columns))
Trained weights
dataset_type: str
Depending on the dataset_type, different plot/gifs will be generated.
bias: str
This is used to portray the effect of bias for the perceptron model
class_label_01_form: str
Signum activation results output labels in the form {-1, +1}.
If class_label_01_form is True, {-1, +1} will change to {0, 1}
model_type: str
Depending on the model_type, different plot/gifs will be generated.
predict: str
If predict is true, thresholding in activation function will be done.
"""
plt.figure(figsize=(8,8));
sns.set();
h = 0.008
input_data = X_input[:,:2]
x0_min, x0_max = input_data[:, 0].min() - 1, input_data[:, 0].max() + 1
x1_min, x1_max = input_data[:, 1].min() - 1, input_data[:, 1].max() + 1
xx0, xx1 = np.mgrid[x0_min:x0_max:h, x1_min:x1_max:h]
# making prediction over the datapoint space
xx0_xx1 = np.c_[xx0.ravel(), xx1.ravel()]
if bias == 'on':
b_ones = np.ones((len(xx0_xx1), 1))
xx0_xx1 = np.hstack((xx0_xx1, b_ones))
# Getting prediction over the datapoint space
if dataset_type == 'train':
#Z = get_prediction(xx0_xx1, np.array([]), weights[i], get_acc=False)
Z = np.dot(xx0_xx1, weights[i])
Z = Z.reshape(xx0.shape)
y_pred = get_prediction(X_input, Y_input, weights[i], get_acc=False, model_type=model_type, predict=predict)
elif dataset_type == 'test':
#Z = get_prediction(xx0_xx1, np.array([]), weights, get_acc=False)
Z = np.dot(xx0_xx1, weights)
Z = Z.reshape(xx0.shape)
y_pred = get_prediction(X_input, Y_input, weights, get_acc=False, model_type=model_type, predict=predict)
# Converting labels from {-1, 1} to {0, 1}
if class_label_01_form == 'on':
if np.shape(Y_input)[1] > 1:
neg_one_class_idxs = np.argwhere(y_pred == -1)
y_pred[neg_one_class_idxs[:, 0], neg_one_class_idxs[:, 1]] = 0
y_pred = y_pred.reshape((-1, np.shape(Y_input)[1]))
else:
neg_one_class_idxs = np.argwhere(y_pred == -1)[:, 0]
y_pred[neg_one_class_idxs] = 0
y_pred = y_pred.reshape((-1, 1))
# Getting misclassified points on train/val/test set
miscls_points = np.unique(np.argwhere(y_pred != Y_input)[:, 0])
# Put the result into a color plot
plt.contour(xx0, xx1, Z, levels=[0], cmap='gray')
pred_data = np.hstack((X_input[:,:2], y_pred))
df = pd.DataFrame(data=pred_data, columns=["$X_0$", "$X_1$", "Y_"+dataset_type+"_pred"])
sns.scatterplot(x="$X_0$", y="$X_1$", hue="Y_"+dataset_type+"_pred", legend='full', palette='dark', data=df)
plt.scatter(X_input[miscls_points, 0], X_input[miscls_points, 1], s=150, cmap="Greens", marker='x')
plt.title(dataset_type+' data')
plt.tight_layout();
plt.legend(loc='lower right');
# Mean squared error
def mse_error(X_train, Y_train, THETA):
Y_PRED = np.dot(X_train, THETA)
m_sq_error = np.mean(0.5 * (Y_train - Y_PRED)**2)
return m_sq_error
@gif.frame
def plot_convex_loss_and_predict_line(training_mse, weights, X_train, Y_train, Y_train_pred, i):
""" Method to plot Gradient Descent.
Parameters
-----------
training_mse: ndarray
List of training errors used to create decision boundary for every epoch/iteration.
weights: ndarray
List of trained weights. This is used to create decision boundary for every epoch/iteration
and to see how the decision boundary changes.
X_train: ndarray (num_examples(rows) vs num_features(columns))
Input data which you would like to plot
Y_train: ndarray (num_examples(rows) vs num_outputs(columns))
Class labels
Y_train_pred: ndarray (num_examples(rows) vs num_outputs(columns))
Predicted labels for each epoch/iteration
"""
fig = plt.figure(figsize=(15,10))
cmap = plt.get_cmap('cividis')
ax = fig.add_subplot(2, 1, 1, projection='3d')
sns.set();
weights = np.array(weights)
training_mse = np.array(training_mse)
weight_b_grid = np.linspace(weights[-1][0] - 20, weights[-1][0] + 20, 20) #intercept
weight_m_grid = np.linspace(weights[-1][1] - 40, weights[-1][1] + 40, 40) #slope
# Generating convex loss surfaces
B, M = np.meshgrid(weight_b_grid, weight_m_grid)
Y_train = Y_train.reshape(-1,)
zs = np.array([mse_error(X_train, Y_train, theta)
for theta in zip(np.ravel(B), np.ravel(M))])
Z = zs.reshape(M.shape)
ax.plot_surface(B, M, Z, rstride=1, cstride=1, color='blue', alpha=0.25)
ax.plot(weights[:i+1, 0], weights[:i+1, 1], training_mse[:i+1], markerfacecolor='r', markeredgecolor='r', marker='*', markersize=10, color='black')
# Plotting loss vs prediction
ax.set_title("Loss vs weights")
ax.set_xlabel('weight(b)', labelpad=20)
ax.set_ylabel('weight(m)', labelpad=20)
ax.set_zlabel('Training loss', labelpad=10)
ax.view_init(elev=10, azim=25)
ax = fig.add_subplot(2, 1, 2)
ax.scatter(X_train[:, 1], Y_train, color=cmap(1), s=10)
ax.plot(X_train[:, 1], Y_train_pred[i], color='red', linewidth=0.2, label="Training-Prediction")
ax.set_title("Linear Regression")
ax.set_xlabel('Input (X_train)')
ax.set_ylabel('Y_train')
plt.tight_layout()
```
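As a quick sanity check of `mse_error` above, here is a standalone sketch (restating the function, with hypothetical values not taken from the dataset):

```python
import numpy as np

def mse_error(X_train, Y_train, THETA):
    # same definition as above: mean of 0.5 * squared residuals
    Y_PRED = np.dot(X_train, THETA)
    return np.mean(0.5 * (Y_train - Y_PRED) ** 2)

X_demo = np.array([[1.0, 1.0], [1.0, 2.0]])   # bias column + one feature
Y_demo = np.array([2.0, 3.0])
theta = np.array([0.0, 1.0])                  # intercept 0, slope 1
err = mse_error(X_demo, Y_demo, theta)        # residuals are both 1, so err = 0.5
```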
# A Simple Introduction to Orbital-Optimized MP2 (OO-MP2)
> Created: 2021-01-09
This document gives a brief introduction to the basic concepts of the orbital-optimized MP2 method (Orbital-Optimized Second-Order Møller–Plesset Perturbation, OO-MP2 or OMP2), together with its implementation and interpretation in PySCF.
This document was written without surveying much of the literature and makes no benchmarking claims. The few references and resources used are
> Sun, Chan, et al. [^Sun-Chan.JCP.2020] (the PySCF progress article)
>
> PySCF does not have a complete or standalone OO-MP2 module. OO-MP2 can instead be implemented by mimicking CASSCF. The `MP2AsFCISolver` class used below is taken directly from the demonstration code in that article.
> The Psi4NumPy demo notebook [10a_orbital-optimized-mp2.ipynb](https://github.com/psi4/psi4numpy/blob/master/Tutorials/10_Orbital_Optimized_Methods/10a_orbital-optimized-mp2.ipynb)
>
> This is a fairly good, concise program-oriented document based on Psi4; the algorithms and tricks it uses are not complicated.
It should be pointed out that the OO-MP2 implementation here is based entirely on closed-shell, no-frozen-orbital post-HF MP2. More complicated cases (open shell, frozen orbitals, double-hybrid functionals) are not considered.
```
import numpy as np
import scipy
from pyscf import gto, mcscf, fci, mp, scf
from functools import partial
np.random.seed(0)
np.einsum = partial(np.einsum, optimize=True)
np.set_printoptions(precision=4, linewidth=150, suppress=True)
```
Throughout the program-walkthrough part of this document we will use the water molecule below. At the end of the document, however, we will use a hydrogen molecule to show that the OO-MP2 energy is not necessarily lower than the MP2 energy.
```
mol = gto.Mole()
mol.atom = """
O 0. 0. 0.
H 0. 0. 1.
H 0. 1. 0.
"""
mol.basis = "6-31G"
mol.verbose = 0
mol.build()
```
## PySCF Implementation: the Efficient Way
The `MP2AsFCISolver` class below is taken directly from Sun's JCP article. OO-MP2 can be realized within CASSCF by setting the active space to all orbitals, replacing the construction of the reduced density matrices (1-RDM, 2-RDM) with the MP2 reduced density matrices, and allowing orbital rotations within the active space.
```
class MP2AsFCISolver:
def kernel(self, h1, h2, norb, nelec, ci0=None, ecore=0, **kwargs):
# Kernel takes the set of integrals from the current set of orbitals
fakemol = gto.M(verbose=0)
fakemol.nelectron = sum(nelec)
fake_hf = fakemol.RHF()
fake_hf._eri = h2
fake_hf.get_hcore = lambda *args: h1
fake_hf.get_ovlp = lambda *args: np.eye(norb)
# Build an SCF object fake_hf without SCF iterations to perform MP2
fake_hf.mo_coeff = np.eye(norb)
fake_hf.mo_occ = np.zeros(norb)
fake_hf.mo_occ[:fakemol.nelectron//2] = 2
self.mp2 = fake_hf.MP2().run()
return self.mp2.e_tot + ecore, self.mp2.t2
def make_rdm12(self, t2, norb, nelec):
dm1 = self.mp2.make_rdm1(t2)
dm2 = self.mp2.make_rdm2(t2)
return dm1, dm2
```
`mf_rhf` is the RHF instance:
```
mf_rhf = mol.RHF().run()
mf_rhf.e_tot
```
`mf_mp2` is the MP2 instance:
```
mf_mp2 = mp.MP2(mf_rhf).run()
mf_mp2.e_tot
mf_mp2.e_corr
```
`mf_cas` here is the OO-MP2 instance:
```
mf_cas = mcscf.CASSCF(mf_rhf, mol.nao, mol.nelectron)
mf_cas.fcisolver = MP2AsFCISolver()
mf_cas.internal_rotation = True
cas_result = mf_cas.kernel()
cas_result[0]
```
## PySCF Implementation: a Rough Breakdown
In this part we will not use PySCF's `CASSCF` class; instead, starting from the RHF and MP2 results, we walk through the general idea of OO-MP2.
In terms of results, this implementation agrees with PySCF. However, PySCF's `CASSCF` class normally uses a second-order method (i.e., the orbital Hessian) to accelerate convergence, whereas here we only use a first-order gradient-descent method (the orbital gradient). First-order convergence is obviously slower, but the formulas and code are simpler.
First we declare some basic variables:
- `nocc` $n_\mathrm{occ}$, the number of occupied orbitals; `nvir` $n_\mathrm{vir}$, the number of virtual orbitals;
- `nmo` $n_\mathrm{MO}$, the number of molecular orbitals, usually equal to the number of atomic orbitals;
- `so` $[0:n_\mathrm{occ}]$, the occupied-orbital slice; `sv` $[n_\mathrm{occ}:n_\mathrm{MO}]$, the virtual-orbital slice;
- `mo_occ`, the PySCF variable for the orbital occupation numbers.
```
nocc, nmo = mol.nelec[0], mol.nao
nvir = nmo - nocc
so, sv = slice(0, nocc), slice(nocc, nmo)
mo_occ = mf_rhf.mo_occ
```
The OO-MP2 procedure can be broken down into the following loop:
1. Given the MO coefficients $C_{\mu i}$, compute the MP2 excitation amplitudes $t_{ij}^{ab}$ for these coefficients;
2. From these, build the 1-RDM $\gamma_{pq}$ and 2-RDM $\Gamma_{pr}^{qs}$;
3. From these, build the generalized Fock matrix $F_{pq}$ and the orbital gradient $x_{pq} = F_{pq} - F_{qp}$;
4. Finally, update the MO coefficients $C_{\mu i}$.
The convergence criterion is $F_{pq} - F_{qp} = 0$, i.e. the generalized Fock matrix $F_{pq}$ is symmetric.
```
def oomp2_cycle(C):
# Generate Psuedo objects, and therefore t_iajb
mf_prhf = scf.RHF(mol)
mf_prhf.mo_occ, mf_prhf.mo_coeff = mo_occ, C
mf_pmp2 = mp.MP2(mf_prhf).run() # Step 1
# Generate 1-RDM, 2-RDM and orbital gradient from generalized Fock matrix
rdm1, rdm2 = mf_pmp2.make_rdm1(), mf_pmp2.make_rdm2() # Step 2
gfock_grad = mf_cas.unpack_uniq_var(mf_cas.get_grad(C, (rdm1, rdm2), mf_cas.ao2mo(C))) # Step 3
# Returned value: Updated MO Coefficient; OO-MP2 Energy (in current cycle); orbital gradient error
return update_C(C, gfock_grad), mf_pmp2.e_tot, np.linalg.norm(gfock_grad) # Step 4
```
The orbital coefficients are updated as follows:
$$
\begin{gather}
X_{ai} = - X_{ia} = \frac{x_{ai}}{- \varepsilon_a + \varepsilon_i} = \frac{F_{ai} - F_{ia}}{- \varepsilon_a + \varepsilon_i} \\
X_{ij} = X_{ab} = 0 \\
\mathbf{C} \leftarrow \mathbf{C} \exp(\lambda \mathbf{X})
\end{gather}
$$
where $\lambda$ is the descent rate; it plays the same role as the learning rate in gradient-descent algorithms in machine learning. Here we take $\lambda = 0.5$.
```
def update_C(C, gfock_grad):
# Generate anti-symmetrized rotation matrix
D = mf_rhf.make_rdm1(C, mo_occ)
e = (C.T @ mf_rhf.get_fock(dm=D) @ C).diagonal()
X = np.zeros_like(C)
X[sv, so] = gfock_grad[sv, so] / (- e[sv, None] + e[None, so])
X[so, sv] = gfock_grad[so, sv] / (- e[None, sv] + e[so, None])
# Control rotation by factor
X *= 0.5
# Generate rotated MO coefficient
C_new = C @ scipy.linalg.expm(X)
return C_new
```
If we take the RHF MO coefficients `mf_rhf.mo_coeff` as the initial guess, the convergence process can be produced by the iteration code below:
```
C_oo = np.copy(mf_rhf.mo_coeff)
print("Cycle | OO-MP2 Energy | G-Fock Gradient Norm")
for i in range(15):
C_oo, eng, err = oomp2_cycle(C_oo)
print("{:5d} | {:<13.8f} | {:<16.8e}".format(i, eng, err))
```
:::{admonition} Notation
In this document the RHF Fock matrix is denoted $f_{pq}$, while the post-HF (generalized) Fock matrix is denoted $F_{pq}$. The two are not the same; for non-orbital-optimized methods, the generalized Fock matrix $F_{pq}$ is generally not symmetric.
:::
## PySCF Implementation: Understanding the Pieces
We now explain the important steps of the program above.
### Atomic-Orbital Integrals
- `h` $h_{\mu \nu}$, shape $(\mu, \nu)$, the core Hamiltonian matrix in the AO basis, i.e. the kinetic-energy plus nucleus-electron attraction integrals;
- `S` $S_{\mu \nu}$, shape $(\mu, \nu)$, the AO overlap integrals;
- `eri` $(\mu \nu | \kappa \lambda)$, shape $(\mu, \nu, \kappa, \lambda)$, the AO two-electron integrals.
```
h = mol.intor("int1e_kin") + mol.intor("int1e_nuc")
S = mol.intor("int1e_ovlp")
eri = mol.intor("int2e")
```
### Canonical MP2
We first briefly review how the MP2 excitation amplitudes $t_{ij}^{ab}$ and correlation energy $E_\mathrm{corr}^\mathsf{MP2}$ are derived for canonical RHF. Note that PySCF's self-consistent field procedure yields the canonical case, i.e. the MO-basis Fock matrix $f_{pq}$ is diagonal.
- `C` $C_{\mu p}$ are the MO coefficients, and `e` $e_p$ are the orbital energies;
- `D_iajb` $D_{ij}^{ab}$ is the MP2 denominator, shape $(i, a, j, b)$:
$$
D_{ij}^{ab} = \varepsilon_i - \varepsilon_a + \varepsilon_j - \varepsilon_b
$$
- `eri_mo` $(pq|rs)$, the MO-basis two-electron integrals, shape $(p, q, r, s)$:
$$
(pq|rs) = C_{\mu p} C_{\nu q} (\mu \nu | \kappa \lambda) C_{\kappa r} C_{\lambda s}
$$
- `t_iajb` $t_{ij}^{ab}$, the MP2 excitation amplitudes:
$$
t_{ij}^{ab} = \frac{(ia|jb)}{D_{ij}^{ab}}
$$
```
C, e = mf_rhf.mo_coeff, mf_rhf.mo_energy
D_iajb = e[so, None, None, None] - e[None, sv, None, None] + e[None, None, so, None] - e[None, None, None, sv]
eri_mo = np.einsum("up, vq, uvkl, kr, ls -> pqrs", C, C, eri, C, C)
t_iajb = eri_mo[so, sv, so, sv] / D_iajb
```
The MP2 correlation energy can therefore be written as (reference value: -0.134335 a.u.)
$$
E_\mathrm{corr}^\mathsf{MP2} = \big( 2 t_{ij}^{ab} - t_{ij}^{ba} \big) (ia|jb)
$$
```
((2 * t_iajb - t_iajb.swapaxes(-1, -3)) * eri_mo[so, sv, so, sv]).sum()
```
### Non-Canonical MP2: the PySCF Program
For OO-MP2, however, the orbitals are rotated, so we must consider MP2 on a non-canonical RHF reference.
"Non-canonical" means the RHF Fock matrix $f_{pq}$ is block-diagonal: the occupied-virtual and virtual-occupied blocks $f_{ia}$, $f_{ai}$ vanish, while the occupied-occupied and virtual-virtual blocks $f_{ij}$, $f_{ab}$ are not diagonal.
To construct such a non-canonical RHF reference, we can transform the canonical RHF MO coefficient matrix `C_rhf` as below, obtaining the non-canonical coefficient matrix `C_rot`:
$$
\mathbf{C} \leftarrow \mathbf{C} \mathbf{U}
$$
where $\mathbf{U}$ is a block-diagonal orthogonal matrix. To construct it, we generate a block-diagonal, antisymmetric matrix `X` $\mathbf{X}$ and set $\mathbf{U} = \exp(\mathbf{X})$.
```
C_rhf = mf_rhf.mo_coeff
X = np.random.randn(nmo, nmo)
X[sv, so] = X[so, sv] = 0
X -= X.T
X *= 0.02
C_rot = C_rhf @ scipy.linalg.expm(X)
```
The non-canonical MO-basis Fock matrix $f_{pq}$ constructed this way is block-diagonal, i.e. we no longer require $f_{ij} = \delta_{ij} \varepsilon_i$ and $f_{ab} = \delta_{ab} \varepsilon_a$:
```
fock_rot = np.einsum("up, uv, vq -> pq", C_rot, mf_rhf.get_fock(), C_rot)
fock_rot
```
For such a MO coefficient matrix `C_rot`, PySCF still gives the correct MP2 correlation energy of -0.134335 a.u. (here `mf_prhf` is a pseudo RHF instance):
```
mf_prhf = scf.RHF(mol)
mf_prhf.mo_occ, mf_prhf.mo_coeff = mo_occ, C_rot
mf_pmp2 = mp.MP2(mf_prhf).run()
mf_pmp2.e_corr
```
### Non-Canonical MP2: Iterative Update of the Excitation Amplitudes $t_{ij}^{ab}$
First, some definitions for the code and formulas below:
- the occupied diagonal of the RHF Fock matrix is `eo` $\varepsilon_i = f_{ii}$;
- the virtual diagonal of the RHF Fock matrix is `ev` $\varepsilon_a = f_{aa}$;
- the occupied block of the RHF Fock matrix with its diagonal removed is `fock_oop` $f'_{ij} = (1 - \delta_{ij}) f_{ij}$;
- the virtual block of the RHF Fock matrix with its diagonal removed is `fock_vvp` $f'_{ab} = (1 - \delta_{ab}) f_{ab}$;
- the MO-basis two-electron integrals `eri_mo` $(pq|rs)$;
- the occupied-virtual block of the two-electron integrals `eri_iajb` $(ia|jb)$;
- the MP2 denominator `D_iajb` $D_{ij}^{ab}$.
```
eo, ev = fock_rot.diagonal()[so], fock_rot.diagonal()[sv]
fock_oo, fock_vv = fock_rot[so, so], fock_rot[sv, sv]
fock_oop, fock_vvp = fock_oo - np.diag(eo), fock_vv - np.diag(ev)
eri_mo = np.einsum("up, vq, uvkl, kr, ls -> pqrs", C_rot, C_rot, eri, C_rot, C_rot)
eri_iajb = eri_mo[so, sv, so, sv]
D_iajb = eo[:, None, None, None] - ev[None, :, None, None] + eo[None, None, :, None] - ev[None, None, None, :]
```
:::{caution}
**Variable redefinition**
In the code block above, `eo`, `ev`, `eri_mo`, `D_iajb` are given in terms of the non-canonical coefficient matrix `C_rot`; we previously defined similar variables for the canonical coefficient matrix.
Since we frequently switch between different rotations of the coefficient matrix (unrotated, non-canonical, non-HF), some variables are reused and overwritten, and we do not distinguish MO indices before and after rotation. This may make the document confusing to read.
:::
Depending on how the perturbation theory is defined, the non-canonical RHF MP2 correlation energy may differ from the canonical RHF one. Here we adopt the definition under which the two correlation energies coincide. The excitation amplitudes $t_{ij}^{ab}$ then satisfy
$$
(ia|jb) = t_{kj}^{ab} f_{ki} + t_{ik}^{ab} f_{kj} - t_{ij}^{cb} f_{ca} - t_{ij}^{ac} f_{cb}
$$
The right-hand side is summed over the repeated indices $k$ and $c$. If we now expand with $f_{ij} = f'_{ij} + \delta_{ij} \varepsilon_i$ and $f_{ab} = f'_{ab} + \delta_{ab} \varepsilon_a$, the equation becomes
$$
(ia|jb) = t_{ij}^{ab} D_{ij}^{ab} + t_{kj}^{ab} f'_{ki} + t_{ik}^{ab} f'_{kj} - t_{ij}^{cb} f'_{ca} - t_{ij}^{ac} f'_{cb}
$$
Rearranging gives the iterative relation
$$
t_{ij}^{ab} \leftarrow \frac{(ia|jb) - t_{kj}^{ab} f'_{ki} - t_{ik}^{ab} f'_{kj} + t_{ij}^{cb} f'_{ca} + t_{ij}^{ac} f'_{cb}}{D_{ij}^{ab}}
$$
Generally, if the orbital rotation is not drastic, the contributions of $f'_{ij}$ and $f'_{ab}$ are small, so $t_{ij}^{ab} \simeq (ia|jb) / D_{ij}^{ab}$ is a good approximation.
In this case, the non-canonical MP2 correlation energy is computed as follows:
$$
E_\mathrm{corr}^\mathsf{MP2} = \big( 2 t_{ij}^{ab} - t_{ij}^{ba} \big) (ia|jb)
$$
The program below implements the non-canonical MP2 procedure.
- `update_t_iajb` updates $t_{ij}^{ab}$ using the iterative relation;
- `calculate_noncan_mp2` computes the non-canonical MP2 correlation energy.
```
def update_t_iajb(t_iajb):
t_iajb_new = np.zeros_like(t_iajb)
t_iajb_new += np.einsum("icjb, ca -> iajb", t_iajb, fock_vvp)
t_iajb_new += np.einsum("iajc, cb -> iajb", t_iajb, fock_vvp)
t_iajb_new -= np.einsum("iakb, kj -> iajb", t_iajb, fock_oop)
t_iajb_new -= np.einsum("kajb, ki -> iajb", t_iajb, fock_oop)
t_iajb_new += eri_iajb
t_iajb_new /= D_iajb
return t_iajb_new
def calculate_noncan_mp2(t_iajb):
return ((2 * t_iajb - t_iajb.swapaxes(-1, -3)) * eri_iajb).sum()
```
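The update in `update_t_iajb` is a Jacobi-style fixed-point iteration: the dominant denominator $D_{ij}^{ab}$ is inverted exactly, while the small couplings $f'$ are fed back iteratively. A minimal standalone analogue (hypothetical numbers, plain NumPy) solving the element-wise problem $(D + f)\,t = b$:

```python
import numpy as np

D = np.array([4.0, 5.0])      # dominant "denominator" part, inverted exactly
f = np.array([0.1, -0.2])     # small coupling, treated iteratively
b = np.array([1.0, 2.0])      # right-hand side, playing the role of (ia|jb)

t = b / D                     # initial guess that neglects the coupling
for _ in range(50):
    t = (b - f * t) / D       # Jacobi-style update, as in update_t_iajb

# t converges to the exact solution b / (D + f) because |f/D| < 1
```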
We then declare the initial guess $t_{ij}^{ab} \simeq (ia|jb) / D_{ij}^{ab}$ and iterate from it, verifying against the canonical MP2 correlation energy. After about 5 cycles the energy is essentially converged to the correct value.
```
t_iajb = eri_mo[so, sv, so, sv] / D_iajb
for i in range(10):
print("Error: {:16.8e}".format(calculate_noncan_mp2(t_iajb) - mf_mp2.e_corr))
t_iajb = update_t_iajb(t_iajb)
```
In fact, this is also how PySCF computes MP2 for non-RHF references whose orbital rotations include the occupied-virtual block.
### MP2 1-RDM
The one-particle reduced density matrix (1-RDM) $\gamma_{pq}$ needs to be generated block by block:
$$
\begin{align}
\gamma_{ij}^\mathsf{RHF} &= 2 \delta_{ij} \\
\gamma_{ab}^\mathsf{RHF} &= \gamma_{ia}^\mathsf{RHF} = \gamma_{ai}^\mathsf{RHF} = 0 \\
\gamma_{ij}^\mathrm{corr} &= - 4 t_{ik}^{ab} t_{jk}^{ab} + 2 t_{ik}^{ba} t_{jk}^{ab} \\
\gamma_{ab}^\mathrm{corr} &= 4 t_{ij}^{ac} t_{ij}^{bc} - 2 t_{ij}^{ca} t_{ij}^{bc} \\
\gamma_{ia}^\mathrm{corr} &= \gamma_{ai}^\mathrm{corr} = 0 \\
\gamma_{pq} &= \gamma_{pq}^\mathsf{RHF} + \gamma_{pq}^\mathrm{corr}
\end{align}
$$
This construction is independent of whether the reference is canonical.
First we generate the RHF 1-RDM `rdm1_rhf` $\gamma_{pq}^\mathsf{RHF}$:
```
rdm1_rhf = np.zeros((nmo, nmo))
np.fill_diagonal(rdm1_rhf[so, so], 2)
```
Then the MP2 correlation contribution to the 1-RDM, `rdm1_corr` $\gamma_{pq}^\mathrm{corr}$:
```
rdm1_corr = np.zeros((nmo, nmo))
rdm1_corr[so, so] = - 4 * np.einsum("iakb, jakb -> ij", t_iajb, t_iajb) + 2 * np.einsum("ibka, jakb -> ij", t_iajb, t_iajb)
rdm1_corr[sv, sv] = 4 * np.einsum("iajc, ibjc -> ab", t_iajb, t_iajb) - 2 * np.einsum("icja, ibjc -> ab", t_iajb, t_iajb)
```
The total 1-RDM `rdm1` $\gamma_{pq}$ is obtained by simple addition:
```
rdm1 = rdm1_rhf + rdm1_corr
np.allclose(rdm1, mf_pmp2.make_rdm1())
```
### MP2 2-RDM
The two-particle reduced density matrix (2-RDM) `rdm2` $\Gamma_{pr}^{qs}$ (shape $(p, q, r, s)$) also needs to be generated block by block. First the $\Gamma_{ia}^{jb}$, $\Gamma_{ai}^{bj}$, $\Gamma_{ik}^{jl}$, $\Gamma_{ac}^{bd}$ parts:
$$
\Gamma_{pr}^{qs} = \left( \gamma_{pq} \gamma_{rs} - \frac{1}{2} \gamma_{ps} \gamma_{rq} \right) - \left( \gamma_{pq}^\mathrm{corr} \gamma_{rs}^\mathrm{corr} - \frac{1}{2} \gamma_{ps}^\mathrm{corr} \gamma_{rq}^\mathrm{corr} \right)
$$
The remaining parts are $\Gamma_{ij}^{ab}$ and $\Gamma_{ab}^{ij}$:
$$
\Gamma_{ij}^{ab} = \Gamma_{ab}^{ij} = 4 t_{ij}^{ab} - 2 t_{ij}^{ba}
$$
```
rdm2 = np.zeros((nmo, nmo, nmo, nmo))
rdm2 = np.einsum("pq, rs -> pqrs", rdm1, rdm1) - 0.5 * np.einsum("ps, rq -> pqrs", rdm1, rdm1)
rdm2 -= np.einsum("pq, rs -> pqrs", rdm1_corr, rdm1_corr) - 0.5 * np.einsum("ps, rq -> pqrs", rdm1_corr, rdm1_corr)
rdm2[so, sv, so, sv] = 4 * np.einsum("iajb -> iajb", t_iajb) - 2 * np.einsum("ibja -> iajb", t_iajb)
rdm2[sv, so, sv, so] = 4 * np.einsum("iajb -> aibj", t_iajb) - 2 * np.einsum("ibja -> aibj", t_iajb)
np.allclose(rdm2, mf_pmp2.make_rdm2(), atol=1e-7)
```
With the 1-RDM $\gamma_{pq}$ and 2-RDM $\Gamma_{pr}^{qs}$ we can now verify the MP2 total energy of -76.104036 a.u.:
$$
E_\mathrm{tot}^\mathsf{MP2} = h_{pq} \gamma_{pq} + \frac{1}{2} (pq|rs) \Gamma_{pr}^{qs} + E_\mathrm{nuc}
$$
However, the one-electron integrals $h_{pq}$ and two-electron integrals $(pq|rs)$ here must be expressed in the basis of the rotated coefficient matrix `C_rot` $\mathbf{C}$, so they need to be regenerated.
```
h_mo = np.einsum("up, uv, vq -> pq", C_rot, h, C_rot)
eri_mo = np.einsum("up, vq, uvkl, kr, ls -> pqrs", C_rot, C_rot, eri, C_rot, C_rot)
(
+ np.einsum("pq, pq ->", h_mo, rdm1)
+ 0.5 * np.einsum("pqrs, pqrs ->", eri_mo, rdm2)
+ mol.energy_nuc()
)
```
### Building the Generalized Fock Matrix
The generalized Fock matrix `gfock` $F_{pq}$ differs from the RHF Fock matrix $f_{pq}$. It is defined as
$$
F_{pq} = h_{pm} \gamma_{mq} + (pm|rs) \Gamma_{mr}^{qs}
$$
```
gfock = np.einsum("pr, rq -> pq", h_mo, rdm1) + np.einsum("pmrs, mqrs -> pq", eri_mo, rdm2)
```
In fact, the occupied-orbital block of the RHF Fock matrix can be defined in a similar way:
$$
\begin{align}
2 f_{ij} &= h_{im} \gamma_{mj}^\mathsf{RHF} + (im|rs) \Gamma_{mr}^{js, \mathsf{RHF}} \\
\Gamma_{pr}^{qs, \mathsf{RHF}} &= \gamma_{pq}^\mathsf{RHF} \gamma_{rs}^\mathsf{RHF} - \frac{1}{2} \gamma_{ps}^\mathsf{RHF} \gamma_{rq}^\mathsf{RHF}
\end{align}
$$
```
rdm2_rhf = np.einsum("pq, rs -> pqrs", rdm1_rhf, rdm1_rhf) - 0.5 * np.einsum("ps, rq -> pqrs", rdm1_rhf, rdm1_rhf)
np.allclose(
(np.einsum("pr, rq -> pq", h_mo, rdm1_rhf) + np.einsum("pmrs, mqrs -> pq", eri_mo, rdm2_rhf))[so, so],
2 * fock_rot[so, so],
)
```
PySCF's CASSCF module does not appear to expose the generalized Fock matrix directly, but it does provide a derived quantity called the orbital gradient, `gfock_grad` $x_{pq}$:
$$
x_{pq} = F_{pq} - F_{qp}
$$
```
gfock_grad = gfock - gfock.T
np.allclose(
mf_cas.unpack_uniq_var(mf_cas.get_grad(C_rot, (rdm1, rdm2), mf_cas.ao2mo(C_rot))),
gfock_grad
)
```
At this point, all the nontrivial single-step computations needed for OO-MP2 have been covered.
## The Meaning of the Orbital Rotation
Up to now we have only seen how OO-MP2 is implemented; only here do we begin to address why it is reasonable.
For generality, we now consider non-HF orbital coefficients, i.e. coefficients already rotated to some extent relative to the RHF ones. These non-HF coefficients are denoted `C_base` $C_{\mu p}^\mathrm{base}$; all subsequent discussion starts from them.
```
X = np.random.randn(nmo, nmo)
X = (X - X.T) * 0.02
C_base = C_rhf @ scipy.linalg.expm(X)
```
First, it should be noted that the orbital rotation matrix must be orthogonal (unitary), because the orbital coefficients must satisfy
$$
\mathbf{C}^\dagger \mathbf{S} \mathbf{C} = \mathbf{I}
$$
The rotation matrix $\mathbf{U}$ is defined through $\mathbf{C} = \mathbf{C}^\mathrm{base} \mathbf{U}$. Therefore,
$$
\mathbf{C}^\dagger \mathbf{S} \mathbf{C} = \mathbf{U}^\dagger \mathbf{C}^\dagger \mathbf{S} \mathbf{C} \mathbf{U} = \mathbf{U}^\dagger \mathbf{I} \mathbf{U} = \mathbf{U}^\dagger \mathbf{U} = \mathbf{I}
$$
Moreover, any orthogonal matrix can be written as the exponential of an antisymmetric matrix $\mathbf{X} = -\mathbf{X}^\dagger$, via $\mathbf{U} = \exp(\mathbf{X})$.
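This fact can be checked numerically in a few lines (a standalone sketch, independent of the molecule above):

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = A - A.T                     # antisymmetric: X = -X^T
U = scipy.linalg.expm(X)        # matrix exponential of an antisymmetric matrix

print(np.allclose(U.T @ U, np.eye(5)))   # True: U is orthogonal
```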
We now examine how the energy varies with the orbital coefficients under a perturbation. Let the orbital coefficients $C_{\mu p}$ in the general case be a function of an antisymmetric matrix $X_{pq}$:
$$
\mathbf{C} = \mathbf{C}^\mathrm{base} \exp (\mathbf{X})
$$
The MP2 energy corresponding to $C_{\mu p}$ is then written as a function $E_\mathrm{tot}^\mathsf{MP2} (\mathbf{X})$ of $X_{pq}$. The function `eng_mp2_pert` below takes an antisymmetric matrix $X_{pq}$ and returns the MP2 energy.
```
def eng_mp2_pert(X):
C_rot = C_base @ scipy.linalg.expm(X)
mf_prhf = scf.RHF(mol)
mf_prhf.mo_occ, mf_prhf.mo_coeff = mo_occ, C_rot
mf_pmp2 = mp.MP2(mf_prhf).run()
return mf_pmp2.e_tot
```
The derivative of the energy with respect to the rotation matrix can then be written as a matrix `dX` ${\mathrm{d} \mathbf{X}}$ of shape $(p, q)$:
$$
{\mathrm{d}X}_{pq} = \frac{\partial E_\mathrm{tot}^\mathsf{MP2}}{\partial X_{pq}}
$$
This derivative can be written as a three-point central-difference numerical derivative:
$$
{\mathrm{d}X}_{pq} \simeq \frac{E_\mathrm{tot}^\mathsf{MP2} (d_{pq}) - E_\mathrm{tot}^\mathsf{MP2} (- d_{pq})}{2 d_{pq}}
$$
$E_\mathrm{tot}^\mathsf{MP2} (d_{pq})$ means the MP2 energy for the antisymmetric matrix $\mathbf{X}$ with $X_{pq} = d_{pq}$ at row $p$, column $q$; $X_{qp} = -d_{pq}$ at row $q$, column $p$; and zeros everywhere else. If $p = q$, then $\mathbf{X} = \mathbf{0}$. The function `gen_pert_X` below generates such an antisymmetric matrix:
```
def gen_pert_X(p, q, interval):
X = np.zeros((nmo, nmo))
X[p, q] = interval
X -= X.T
return X
```
The function below generates the numerical derivative ${\mathrm{d}X}_{pq}$ of the MP2 energy with respect to $X_{pq}$, using the antisymmetric matrices above:
```
def eng_mp2_numdiff(p, q, interval):
X_positive = gen_pert_X(p, q, interval)
X_negative = gen_pert_X(p, q, -interval)
return (eng_mp2_pert(X_positive) - eng_mp2_pert(X_negative)) / (2 * interval)
```
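As a quick standalone sanity check of the central-difference formula itself (a hypothetical test function, unrelated to the MP2 energy):

```python
def central_diff(f, x, h):
    # three-point (central) difference: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

d = central_diff(lambda x: x ** 2, 3.0, 1e-4)   # exact derivative of x**2 at 3 is 6
```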
Looping over the indices $p, q$, we obtain the full derivative matrix `dX` ${\mathrm{d} \mathbf{X}}$ (here the numerical-differentiation `interval` is taken to be $10^{-4}$ a.u.):
```
dX = np.zeros((nmo, nmo))
for a in range(nmo):
for i in range(nmo):
dX[a, i] = eng_mp2_numdiff(a, i, 1e-4)
dX
```
Note that this is an antisymmetric, block-structured matrix: it vanishes in the occupied-occupied and virtual-virtual blocks, and the only nonzero entries satisfy $\mathrm{d} X_{ai} = - \mathrm{d} X_{ia}$. It is in fact nearly equal to twice the orbital-gradient matrix `2 * gfock_grad`:
$$
\mathrm{d} X_{pq} = 2 x_{pq} = 2 (F_{pq} - F_{qp})
$$
```
mf_prhf = scf.RHF(mol)
mf_prhf.mo_occ, mf_prhf.mo_coeff = mo_occ, C_base
mf_pmp2 = mp.MP2(mf_prhf).run()
rdm1, rdm2 = mf_pmp2.make_rdm1(), mf_pmp2.make_rdm2()
gfock_grad = mf_cas.unpack_uniq_var(mf_cas.get_grad(C_base, (rdm1, rdm2), mf_cas.ao2mo(C_base)))
np.allclose(2 * gfock_grad, dX, atol=5e-6)
```
Therefore, one can say that the point of OO-MP2 is to find a suitable $\mathbf{C}^\mathrm{base}$ such that, for any small antisymmetric rotation matrix $\mathbf{X}$, the energy $E_\mathrm{tot}^\mathsf{MP2} (\mathbf{X})$ does not change.
## The OO-MP2 Energy Is Not Necessarily Lower Than MP2
To close the document, we point out that the OO-MP2 energy is not a lower bound to the MP2 energy. Although OO-MP2 appears to optimize the orbitals variationally, the quantity being varied should be understood as the Hylleraas functional, not the total MP2 energy.
The stretched hydrogen molecule below is an example where the OO-MP2 energy is higher than the MP2 energy.
```
mol = gto.Mole()
mol.atom = """
H 0. 0. 0.
H 0. 0. 15.
"""
mol.basis = "6-31G"
mol.verbose = 0
mol.build()
```
Its MP2 energy is
```
mol.RHF().run().MP2().run().e_tot
```
while its OO-MP2 energy is
```
mf_cas = mcscf.CASSCF(mol.RHF().run(), mol.nao, mol.nelectron)
mf_cas.fcisolver = MP2AsFCISolver()
mf_cas.internal_rotation = True
cas_result = mf_cas.kernel()
cas_result[0]
```
Even though the OO-MP2 energy is higher than the MP2 energy here, OO-MP2 still cannot cure the sizable dissociation error that MP2 makes when dissociating the two hydrogen atoms.
[^Sun-Chan.JCP.2020]: Recent Developments in the PySCF Program Package. *J. Chem. Phys.* **2020**, *153* (2), 24109. doi: [10.1063/5.0006074](https://doi.org/10.1063/5.0006074).
# Output from another step
* **Difficulty level**: intermediate
* **Time needed to learn**: 10 minutes or less
* **Key points**:
* Function `output_from(step)` refers to output from another `step`
* `output_from(step)[name]` can be used to refer to named output from `step`
## Referring to named output from another step
As shown in the example from tutorial [How to use named output in data-flow style workflows](named_output.html), function `named_output` can be used to refer to named output from another step:
```
!rm -f data/DEG.csv
%run plot -v1
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'data/DEG.csv'
figure_file = 'output.pdf'
[convert]
input: excel_file
output: csv = _input.with_suffix('.csv')
run: expand=True
xlsx2csv {_input} > {_output}
[plot]
input: named_output('csv')
output: figure_file
R: expand=True
data <- read.csv('{_input}')
pdf('{_output}')
plot(data$log2FoldChange, data$stat)
dev.off()
```
One obvious limitation of `named_output()` is that the name has to be unique in the workflow. For example, in the following script where another step `test_csv` also gives its output a name `csv`, the workflow would fail due to ambiguity. This is usually not a concern with small workflows. However, when workflows get more and more complex, it is sometimes desired to anchor named output more precisely.
```
%env --expect-error
!rm -f data/DEG.csv
%run plot -v1
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'data/DEG.csv'
figure_file = 'output.pdf'
[convert]
input: excel_file
output: csv = _input.with_suffix('.csv')
run: expand=True
xlsx2csv {_input} > {_output}
[test_csv]
input: excel_file
output: csv = f'{_input:n}_test.csv'
run: expand=True
xlsx2csv {_input} | head -10 > {_output}
[plot]
input: named_output('csv')
output: figure_file
R: expand=True
data <- read.csv('{_input}')
pdf('{_output}')
plot(data$log2FoldChange, data$stat)
dev.off()
```
## Function `output_from` <a id="output_from"></a>
<div class="bs-callout bs-callout-primary" role="alert">
<h4>Function <code>output_from(steps, group_by, ...)</code></h4>
<p>Function <code>output_from</code> refers to the output of <code>step</code>. The returned object is the complete output from <code>step</code>, with its own sources and groups. Therefore,</p>
<ul>
<li>More than one step can be specified as a list of step names</li>
<li>Option <code>group_by</code> can be used to regroup the returned files</li>
<li><code>output_from(step)[name]</code> refers to all output with source <code>name</code></li>
</ul>
</div>
Function `output_from` imports the output from one or more other steps. For example, in the following workflow `output_from(['step_10', 'step_20'])` takes the output from steps `step_10` and `step_20` as input.
```
%run -v0
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: output_from(['step_10', 'step_20'])
print(_input)
```
The above example is a simple forward workflow with numerically numbered steps. In this case the parameters of `output_from` can be simplified to just the step indexes (integers), so the workflow can be written as
```
%run -v0
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: output_from([10, 20])
print(_input)
```
The source `steps` of `output_from(steps)` does not have to be limited to numerically-indexed steps. For example, the above example can be written as:
```
%run -v0
[A]
output: 'a.txt'
_output.touch()
[B]
output: 'b.txt'
_output.touch()
[default]
input: output_from(['A', 'B'])
print(_input)
```
### `labels` of outputs returned from `output_from`
The `sources` of the files returned from `output_from()` are by default the names of the steps, so you can refer to these files separately using the `_input[name]` syntax:
```
%run -v0
[A]
output: 'a.txt'
_output.touch()
[B]
output: 'b.txt'
_output.touch()
[default]
input: output_from(['A', 'B'])
print(_input)
print(f'Output from A is {_input["A"]}')
print(f'Output from B is {_input["B"]}')
```
If the output has its own sources (names), the sources will be kept.
```
%run -v0
[A]
output: A_out = 'a.txt'
_output.touch()
[B]
output: B_out = 'b.txt'
_output.touch()
[default]
input: output_from(['A', 'B'])
print(_input)
print(f'Output from A is {_input["A_out"]}')
print(f'Output from B is {_input["B_out"]}')
```
As usual, keyword arguments of the input statement override the `sources` of input files:
```
%run -v0
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: s10=output_from(10), s20=output_from(20)
print(f'Output from step_10 is {_input["s10"]}')
print(f'Output from step_20 is {_input["s20"]}')
```
### Groups of output returned from `output_from`
Similar to the case with `named_output`, the returned object from `output_from()` keeps its original groups. For example,
```
%run B -v0
[A]
input: for_each=dict(i=range(4))
output: f'a_{i}.txt'
_output.touch()
[B]
input: output_from('A')
output: _input.with_suffix('.bak')
print(f'Converting {_input} to {_output}')
_output.touch()
```
You can override the groups using the `group_by` option of `output_from`.
```
%run B -v0
[A]
input: for_each=dict(i=range(4))
output: f'a_{i}.txt'
_output.touch()
[B]
input: output_from('A', group_by=2)
output: [x.with_suffix('.bak') for x in _input]
print(f'Converting {_input} to {_output}')
_output.touch()
```
Note that we used
```
_input.with_suffix('.bak')
```
when `_input` contains only one filename; the above statement is equivalent to
```
_input[0].with_suffix('.bak')
```
However, when `_input` contains more than one file, you will have to deal with them one by one as follows:
```
[x.with_suffix('.bak') for x in _input]
```
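The same distinction can be illustrated with plain `pathlib`, whose path objects behave similarly to SoS targets for this operation (a standalone sketch):

```python
from pathlib import Path

single = Path('a.txt')
# a single path supports with_suffix directly
print(single.with_suffix('.bak'))               # a.bak

many = [Path('a.txt'), Path('b.txt')]
# a list of paths must be handled element by element
renamed = [p.with_suffix('.bak') for p in many]
print([str(p) for p in renamed])                # ['a.bak', 'b.bak']
```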
## Using `output_from` in place of `named_output`
Going back to our `convert`/`plot` example. When another step is added that has the same named output, it is no longer possible to use `named_output(name)`. In this case you can explicitly specify the step in which the named output is defined, and use
```
output_from(step)[name]
```
instead of
```
named_output(name)
```
as shown in the following example:
```
!rm -f data/DEG.csv
%run plot -v1
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'data/DEG.csv'
figure_file = 'output.pdf'
[convert]
input: excel_file
output: csv = _input.with_suffix('.csv')
run: expand=True
xlsx2csv {_input} > {_output}
[test_csv]
input: excel_file
output: csv = f'{_input:n}_test.csv'
run: expand=True
xlsx2csv {_input} | head -10 > {_output}
[plot]
input: output_from('convert')['csv']
output: figure_file
R: expand=True
data <- read.csv('{_input}')
pdf('{_output}')
plot(data$log2FoldChange, data$stat)
dev.off()
```
Note that `output_from` is better than `named_output` in its ability to refer to a specific step, but it is also worse for the same reason, because it makes the workflow more difficult to maintain. We generally recommend `named_output` for its simplicity.
## `output_from()` with skipped substeps
Function `output_from()` obtains the output, more precisely the substep outputs, from another step. There is, however, a case when a substep is skipped and leaves no output. In this case, the substep output is discarded.
For example, when a substep in the step `A` of the following workflow is skipped, the result from `output_from('A')` contains only the output of valid substeps.
```
%run -v0
[A]
input: for_each=dict(i=range(4))
output: f'output_{i}.txt'
skip_if(i == 2, 'Skip substep 2')
_output.touch()
[default]
input: output_from('A')
print(_input)
```
However, if you would like to keep a consistent number of substeps across steps, you can get output from all substeps by using option `remove_empty_groups=False`.
```
%run -v0
[A]
input: for_each=dict(i=range(4))
output: f'output_{i}.txt'
skip_if(i == 2, 'Skip substep 2')
_output.touch()
[default]
input: output_from('A', remove_empty_groups=False)
print(_input)
```
## Output from a workflow
<div class="bs-callout bs-callout-info" role="alert">
<h4>Function <code>output_from(workflow_name)</code></h4>
<p><code>output_from(workflow_name)</code> is equivalent to <code>output_from(workflow_name_index)</code> where <code>index</code> is the largest index of the workflow <code>workflow_name</code></p>
</div>
Function `output_from` is usually used to refer the output of a specific step. However, similar to [target `sos_step` that can refer to a numerically indexed workflow](target_sos_step.html), `output_from` can also accept the name of the workflow and returns the output of the last step of the workflow.
For example, in the following workflow, `output_from('A')` is used to obtain the output of step `A_2`, which is the last step of the workflow `A`. Although `output_from('A')` is identical to `output_from('A_2')`, it frees you from specifying the index of the last step of the workflow, and it is more intuitive to think of `output_from('A')` as the output of the workflow.
```
%run -v0
[A_1]
print(f'Running {step_name} to analyze data')
[A_2]
output: 'result.txt'
print(f'Running {step_name} to generate result')
_output.touch()
[default]
input: output_from('A')
print(f'Input of the {step_name} is {_input}')
```
## **GRIP - TSF | Data Science & Business Analytics Internship**
### **Task 2 : K-Means Clustering**
### Author : AYOUB EL AAMRI.
# 1. Set up the environment
Pandas and NumPy for data manipulation.
Matplotlib and seaborn for data visualisation.
scikit-learn for modelling.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="white", color_codes=True)
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import sklearn.metrics as metrics
from mpl_toolkits.mplot3d import Axes3D
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings('ignore')
```
# 2. Importing data
```
iris = pd.read_csv('Iris.csv')
print(iris.head())
print('Data shape -->', iris.shape)
iris["Species"].value_counts()
```
# 3. Data preprocessing
```
data = iris.drop(['Species'], axis=1)
y = iris['Species']
```
### (i). Missing values
```
data.isnull().sum()
```
### (ii) Data Visualisation
```
f,ax=plt.subplots(1,2,figsize=(8,5))
iris['Species'].value_counts().plot.pie(explode=[0.1,0.1,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True)
ax[0].set_title('Iris Species Count')
sns.countplot('Species',data=iris,ax=ax[1])
ax[1].set_title('Iris Species Count')
plt.show()
```
We can see that there are 50 samples each of all the Iris Species in the data set.
### FacetGrid Plot
```
# Plotting species for Sepal
sns.FacetGrid(iris, hue="Species", size=4) \
.map(plt.scatter, "SepalLengthCm", "SepalWidthCm") \
.add_legend()
# Plotting species for petals
sns.FacetGrid(iris, hue="Species", size=4) \
.map(plt.scatter, "PetalLengthCm", "PetalWidthCm") \
.add_legend()
```
We observe that the species are nearly linearly separable by petal size, while the sepal sizes are more mixed. This is an indication that the petals can give better, more accurate predictions than the sepals.
### Boxplot
```
fig=plt.gcf()
fig.set_size_inches(10,7)
fig=sns.boxplot(x='Species',y='SepalLengthCm',data=iris)
fig=sns.stripplot(x='Species',y='SepalLengthCm',data=iris,jitter=True,edgecolor='gray')
```
From the box plot we can observe that Iris-virginica has some outliers.
```
tmp = iris.drop('Id', axis=1)
tmp.hist(edgecolor='black', linewidth=1.2)
fig=plt.gcf()
fig.set_size_inches(12,6)
plt.show()
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
sns.violinplot(x='Species',y='PetalLengthCm',data=iris)
plt.subplot(2,2,2)
sns.violinplot(x='Species',y='PetalWidthCm',data=iris)
plt.subplot(2,2,3)
sns.violinplot(x='Species',y='SepalLengthCm',data=iris)
plt.subplot(2,2,4)
sns.violinplot(x='Species',y='SepalWidthCm',data=iris)
sns.pairplot(tmp, hue="Species", diag_kind="hist", size=1.6)
```
This shows how similar versicolor and virginica are, at least with the given features. There could be features that were not measured that would more clearly separate the species. The same holds for any unsupervised learning: you need the right features to separate the groups in the best way.
### Converting Species to numeric
```
def y_label (invalue):
if invalue == 'Iris-setosa' :
return 1
elif invalue == 'Iris-virginica' :
return 0
else :
return 2
df1 = pd.DataFrame(data=y.values, columns=['species'])
df1['index']=df1['species'].apply(y_label)
```
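An equivalent, arguably more idiomatic, way to encode the labels is a dictionary with `Series.map` (a sketch on hypothetical values):

```python
import pandas as pd

species = pd.Series(['Iris-setosa', 'Iris-virginica', 'Iris-versicolor'])
# same encoding as y_label above, expressed as a lookup table
mapping = {'Iris-setosa': 1, 'Iris-virginica': 0, 'Iris-versicolor': 2}
codes = species.map(mapping)
print(codes.tolist())   # [1, 0, 2]
```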
# 4 Data Preparation
The data we use for clustering should:
1. always be numeric, and
2. always be on the same scale.
### (i) Data Type
```
data.dtypes
```
The features we are using for clustering are numeric
### (ii) Scaling the data
```
std_scale = StandardScaler().fit(data)
data_scaled = std_scale.transform(data)
X_scaled = pd.DataFrame(data_scaled, columns = data.columns)
X_scaled.sample(5)
```
Hence, before we feed data to a clustering algorithm, it is imperative to bring it onto the same scale, which we do here with StandardScaler.
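What `StandardScaler` does to each column can be replicated by hand. A minimal pure-Python sketch (note that, like `StandardScaler`, it uses the population standard deviation, i.e. ddof=0):

```python
def standardize(column):
    """Center a column to mean 0 and scale it to unit (population) std."""
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n  # population variance
    std = var ** 0.5
    return [(x - mean) / std for x in column]
```

After standardizing, the column has mean 0 and unit variance, which keeps any one feature from dominating the distance computations in K-means.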
# 4 (a) K-Means algorithm
Let's try to visualize the data to see whether we can segregate it into clusters.
### (i) Scatter plot to visualize the scaled data and initial centroids for a given K (K-Means)
```
def plot_kmeans_scale(k):
    kmeans_model = KMeans(n_clusters=k, random_state=123)
    kmeans_model.fit(data_scaled)
    # Make predictions
    labels = kmeans_model.predict(data_scaled)
    # Get centroids
    centroid = kmeans_model.cluster_centers_
    fig = plt.figure(1, figsize=(3, 3))
    kx = Axes3D(fig, rect=[0, 0, 1, 1], elev=50, azim=120)
    for i in range(k):
        # Plot only the points assigned to cluster i
        points = np.array([data_scaled[j] for j in range(len(data_scaled)) if labels[j] == i])
        kx.scatter(points[:, 3], points[:, 0], points[:, 2], s=5, cmap='jet')
    # Plot the centroids on the same feature axes (petal width, sepal length, petal length)
    kx.scatter(centroid[:, 3], centroid[:, 0], centroid[:, 2], marker='*', s=200, c='red')
    plt.show()

k = 5
for i in range(2, k + 1):
    plot_kmeans_scale(i)
```
Initial centroids are indicated as red stars.
We ran the code for k = 2 through 5.
Squared Euclidean distance measures the distance between each data point and its centroid; the centroids are then re-calculated until the stopping criterion is met, and the plots above show the results.
A good choice for the number of clusters leads to compact, well-separated clusters.
That is, it maximizes intra-cluster similarity and minimizes inter-cluster similarity.
To measure the compactness of clusters (intra-cluster similarity), we can compute the "Within Sum of Squares" (WSS) for each cluster, or take an average.
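The WSS quantity described above (reported by scikit-learn as `inertia_`) can be written out directly. A small sketch, assuming points and centroids are plain tuples and `labels` gives each point's cluster index:

```python
def wss(points, centroids, labels):
    """Within-cluster sum of squared distances to each point's assigned centroid."""
    total = 0.0
    for p, lab in zip(points, labels):
        c = centroids[lab]
        total += sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return total
```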
### (ii) Finding the optimal number of clusters - Scree Plot/Elbow Plot
The technique we use to determine optimum K, the number of clusters, is called the elbow method.
```
WSS = []
for k in range(1, 9):
    kmeans_model = KMeans(n_clusters=k, random_state=123)
    kmeans_model.fit(data_scaled)
    WSS.append(kmeans_model.inertia_)
plt.plot(range(1, 9), WSS, marker='o')
plt.xlabel("Number of clusters")
plt.ylabel("Within-cluster WSS")
plt.title("Scree Plot")
plt.plot([3]*6000, range(1,6001), ",")
plt.text(3.1, 5001, "optimal number of clusters = 3")
```
By plotting the number of centroids against the within-cluster sum of squared distances we arrive at the above graph.
`inertia_` is the sum of squared distances (WSS) from the points in a cluster to their centroid.
Higher `inertia_` means the data points are spread further from their centroid; lower `inertia_` means they are concentrated around it.
From the Scree plot, `inertia_` decreases as the number of clusters grows; however, the decrease flattens out from 4 clusters onwards.
To finalize the optimal number of clusters, we also need to compare how similar data points are to their own cluster versus other clusters. This can be measured using the Silhouette Score.
The Silhouette score is maximal for 3 clusters, and the Scree curve shows `inertia_` flattening from 4 clusters onwards.
Hence, based on the Silhouette score and the Scree plot, 3 clusters were chosen as optimal.
```
for i in range(2, 8):
    labels = KMeans(n_clusters=i, random_state=123).fit(data_scaled).labels_
    print("Silhouette score for k= " + str(i) + " is " + str(metrics.silhouette_score(data_scaled, labels, metric="euclidean", random_state=123)))
```
The silhouette score measures how similar an object is to its own cluster compared to other clusters (separation). It ranges from -1 to 1, and higher values indicate greater similarity of the object to its own cluster.
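That definition can be checked by hand on a tiny 1-D example with two tight, far-apart clusters. A generic sketch (not using scikit-learn; assumes every cluster has at least two points):

```python
def silhouette(point_idx, points, labels):
    """s(i) = (b - a) / max(a, b), where a is the mean distance to the
    point's own cluster and b the lowest mean distance to another cluster."""
    own = labels[point_idx]

    def mean_dist(cluster):
        ds = [abs(points[point_idx] - p)
              for j, p in enumerate(points)
              if labels[j] == cluster and j != point_idx]
        return sum(ds) / len(ds)

    a = mean_dist(own)
    b = min(mean_dist(c) for c in set(labels) if c != own)
    return (b - a) / max(a, b)
```

For a point in a tight cluster far from the other cluster, the score is close to 1.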
```
scores = metrics.silhouette_samples(data_scaled, labels)
sns.distplot(scores)
df_scores = pd.DataFrame()
df_scores['SilhouetteScore'] = scores
df_scores['Species'] = iris['Species']
df_scores.hist(by='Species', column='SilhouetteScore', range=(0,1.0), bins=20)
sns.pairplot(df_scores, hue="Species", size=3)
```
### (iii). K-means clustering with 3 optimal Clusters
```
km = KMeans(n_clusters=3, random_state=123)
km.fit(data_scaled)
print('inertia with clusters=3 -->' ,km.inertia_)
km.cluster_centers_
```
### (iv) Make predictions on the labels using K=3
```
predicted_cluster = km.predict(data_scaled)
predicted_labels = km.labels_
```
### (v). Plot the scaled data partitioned into optimal clusters K=3
```
fig = plt.figure(1, figsize=(7,7))
ax = Axes3D(fig, rect=[0, 0, 1, 1], elev=50, azim=120)
ax.scatter(data_scaled[:, 3], data_scaled[:, 0], data_scaled[:, 2],
           c=predicted_labels.astype(float), cmap='jet', edgecolor="k", s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("K Means", fontsize=14)
```
### (vi) Comparing Clustered data with original data for defining boundaries of 3 clusters(k-means)
```
from matplotlib import cm
fig = plt.figure(figsize=plt.figaspect(0.25))
ax = fig.add_subplot(1, 2, 1, projection='3d')
surf =ax.scatter(data_scaled[:, 3], data_scaled[:, 0],data_scaled[:, 2],
c=df1['index'], cmap='gist_rainbow',edgecolor="k", s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("Original data-IRIS", fontsize=14)
fig.colorbar(surf, shrink=0.5, aspect=10)
ax = fig.add_subplot(1, 2, 2, projection='3d')
ax.scatter(data_scaled[:, 3], data_scaled[:, 0], data_scaled[:, 2],
           c=predicted_labels.astype(float), cmap='jet', edgecolor='k', s=150)
ax.set_xlabel("Petal width")
ax.set_ylabel("Sepal length")
ax.set_zlabel("Petal length")
plt.title("K Means Clustering -IRIS", fontsize=14)
plt.show()
```
### Observations
In conclusion, from the Silhouette score and the Scree plot, the data can be clustered into 3 main groups of species.
We can infer that the sepal sizes are more mixed, so the clustering algorithm cannot distinguish well between the versicolor and virginica species. Also note that the species are nearly linearly separable by petal size.
We can compare this range of values with the original (unscaled) data.
### (vii) Create cluster profiles and compare with original data labels
```
def predict_species(invalue):
    if invalue == 1:
        return 'Iris-setosa'
    elif invalue == 0:
        return 'Iris-virginica'
    else:
        return 'Iris-versicolor'
df1['predict_label']= pd.DataFrame(data=predicted_labels, columns=['predict_label'])
df1['predict_species']=df1['predict_label'].apply(predict_species)
sum(np.where((df1['species']!=df1['predict_species']),1,0))
df1[df1['species']!=df1['predict_species']]
```
With K-means clustering and 3 clusters, we cluster 143 of the 150 samples correctly; the misclustered samples fall between Iris-versicolor and Iris-virginica.
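The per-pair mismatch tally can also be done without pandas. A minimal sketch with made-up labels standing in for `species` and `predict_species`:

```python
from collections import Counter

def confusion_counts(true_labels, pred_labels):
    """Count (true, predicted) pairs, e.g. to spot which classes get mixed up."""
    return Counter(zip(true_labels, pred_labels))

# Hypothetical labels, just to exercise the helper.
true_y = ["setosa", "versicolor", "versicolor", "virginica"]
pred_y = ["setosa", "versicolor", "virginica", "virginica"]
counts = confusion_counts(true_y, pred_y)
mismatches = sum(n for (t, p), n in counts.items() if t != p)
```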
```
from results import *
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import os
from matplotlib.transforms import Affine2D
sns.set(style='whitegrid')
def extract_data(label, data_dict, denominator_dict, normalize):
    mean_list = list()
    std_list = list()
    if normalize:
        factor = denominator_dict[label]
    else:
        factor = 1
    for iter_nb in range(5):
        mean_list.append(data_dict[iter_nb][label]['mean']/factor)
        std_list.append(data_dict[iter_nb][label]['sem']/factor)
    return mean_list, std_list

def extract_data_new(label, data_dict, denominator_dict, normalize):
    mean_list = list()
    std_list = list()
    if normalize:
        factor = denominator_dict[label]
    else:
        factor = 1
    for iter_nb in range(4):
        mean_list.append(data_dict[iter_nb][label]['numerator']/factor)
        std_list.append(data_dict[iter_nb][label]['sem']/factor)
    if 4 in data_dict.keys():
        mean_list.append(data_dict[4][label]['numerator']/factor)
        std_list.append(data_dict[4][label]['sem']/factor)
    return mean_list, std_list
labels=['is_hired_1mo', 'is_unemployed', 'job_offer', 'job_search', 'lost_job_1mo']
output_path = '/home/manuto/Documents/world_bank/bert_twitter_labor/twitter-labor-data/data/fig/expansion'
iter_nb = range(5)
```
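The dictionary layout that `extract_data` expects (iteration number, then label, then a stats dict) can be exercised on a toy input. A self-contained sketch that repeats the function so it runs on its own:

```python
def extract_data(label, data_dict, denominator_dict, normalize):
    """Per-iteration means and SEMs for one label, optionally normalized."""
    mean_list, std_list = [], []
    factor = denominator_dict[label] if normalize else 1
    for iter_nb in range(5):
        mean_list.append(data_dict[iter_nb][label]['mean'] / factor)
        std_list.append(data_dict[iter_nb][label]['sem'] / factor)
    return mean_list, std_list

# Toy data: 5 iterations of one label, with a denominator of 2.
toy = {i: {'job_search': {'mean': float(i + 1), 'sem': 0.5}} for i in range(5)}
means, sems = extract_data('job_search', toy, {'job_search': 2.0}, normalize=True)
```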
## Old
```
for label in labels:
    fig, ax = plt.subplots(figsize=(4, 4))
    mean_list_our_method, std_list_our_method = extract_data(label, our_method_dict, denominator_dict, normalize=False)
    mean_list_adaptive, std_list_adaptive = extract_data(label, adaptive_dict, denominator_dict, normalize=False)
    ax.plot(iter_nb, mean_list_our_method, 'b-', label='our method')
    ax.fill_between(iter_nb, list(np.array(mean_list_our_method) - np.array(std_list_our_method)), list(np.array(mean_list_our_method) + np.array(std_list_our_method)), color='b', alpha=0.2)
    ax.plot(iter_nb, mean_list_adaptive, 'r-', label='adaptive retrieval')
    ax.fill_between(iter_nb, list(np.array(mean_list_adaptive) - np.array(std_list_adaptive)), list(np.array(mean_list_adaptive) + np.array(std_list_adaptive)), color='r', alpha=0.2)
    ax.set_yscale('log')
    ax.tick_params(which='both', direction='in', pad=3)
    # ax.locator_params(axis='y', nbins=6)
    ax.set_ylabel('Expansion rate numerator', fontweight='bold')
    ax.set_xlabel('Iteration number', fontweight='bold')
    ax.set_title(label.replace('_', ' ').replace('1mo', '').title(), fontweight='bold')
    ax.legend(loc='best', fontsize=9)
    plt.savefig(os.path.join(output_path, f'numerator_{label}.png'), bbox_inches='tight', format='png', dpi=1200, transparent=False)

for label in labels:
    fig, ax = plt.subplots(figsize=(4, 4))
    mean_list_our_method, std_list_our_method = extract_data(label, our_method_dict, denominator_dict, normalize=True)
    mean_list_adaptive, std_list_adaptive = extract_data(label, adaptive_dict, denominator_dict, normalize=True)
    ax.plot(iter_nb, mean_list_our_method, 'b-', label='our method')
    ax.fill_between(iter_nb, list(np.array(mean_list_our_method) - np.array(std_list_our_method)), list(np.array(mean_list_our_method) + np.array(std_list_our_method)), color='b', alpha=0.2)
    ax.plot(iter_nb, mean_list_adaptive, 'r-', label='adaptive retrieval')
    ax.fill_between(iter_nb, list(np.array(mean_list_adaptive) - np.array(std_list_adaptive)), list(np.array(mean_list_adaptive) + np.array(std_list_adaptive)), color='r', alpha=0.2)
    ax.set_yscale('log')
    ax.tick_params(which='both', direction='in', pad=3)
    # ax.locator_params(axis='y', nbins=6)
    ax.set_ylabel('Expansion rate', fontweight='bold')
    ax.set_xlabel('Iteration number', fontweight='bold')
    ax.set_title(label.replace('_', ' ').replace('1mo', '').title(), fontweight='bold')
    ax.legend(loc='best', fontsize=9)
    plt.savefig(os.path.join(output_path, f'{label}.png'), bbox_inches='tight', format='png', dpi=1200, transparent=False)
```
## New
```
for num, label in enumerate(labels):
    fig, ax = plt.subplots(figsize=(4, 4))
    mean_list_our_method, std_list_our_method = extract_data_new(label, our_method_new_dict, denominator_dict, normalize=True)
    mean_list_adaptive, std_list_adaptive = extract_data_new(label, adaptive_new_dict, denominator_dict, normalize=True)
    mean_list_uncertainty, std_list_uncertainty = extract_data_new(label, uncertainty_new_dict, denominator_dict, normalize=True)
    mean_list_uncertainty_uncalibrated, std_list_uncertainty_uncalibrated = extract_data_new(label, uncertainty_uncalibrated_new_dict, denominator_dict, normalize=True)
    trans1 = Affine2D().translate(-0.2, 0.0) + ax.transData
    trans2 = Affine2D().translate(-0.1, 0.0) + ax.transData
    trans3 = Affine2D().translate(+0.1, 0.0) + ax.transData
    if label == 'is_unemployed':
        ax.errorbar(range(1, 5), mean_list_our_method[1:], std_list_our_method[1:], linestyle='None', marker='o', transform=trans1, label='our method')
        ax.errorbar(range(1, 5), mean_list_adaptive[1:], std_list_adaptive[1:], linestyle='None', marker='v', label='adaptive retrieval', c='r', transform=trans2)
        ax.errorbar(range(1, 5), mean_list_uncertainty[1:], std_list_uncertainty[1:], linestyle='None', marker='s', label='uncertainty sampling', c='g')
        ax.errorbar([2, 3, 4], [mean_list_uncertainty_uncalibrated[2], mean_list_uncertainty_uncalibrated[3], mean_list_uncertainty_uncalibrated[4]], [std_list_uncertainty_uncalibrated[2], std_list_uncertainty_uncalibrated[3], std_list_uncertainty_uncalibrated[4]], linestyle='None', marker='P', label='uncertainty sampling (uncalibrated)', c='m', transform=trans3)
    elif label == 'lost_job_1mo':
        ax.errorbar([1, 2, 4], [mean_list_our_method[1], mean_list_our_method[2], mean_list_our_method[4]], [std_list_our_method[1], std_list_our_method[2], std_list_our_method[4]], linestyle='None', marker='o', transform=trans1, label='our method', c='b')
        ax.errorbar([3, 4], [mean_list_adaptive[3], mean_list_adaptive[4]], [std_list_adaptive[3], std_list_adaptive[4]], linestyle='None', marker='v', label='adaptive retrieval', c='r', transform=trans2)
        ax.errorbar([2, 3, 4], [mean_list_uncertainty[2], mean_list_uncertainty[3], mean_list_uncertainty[4]], [std_list_uncertainty[2], std_list_uncertainty[3], std_list_uncertainty[4]], linestyle='None', marker='s', label='uncertainty sampling', c='g')
        ax.errorbar([1, 2, 3], [mean_list_uncertainty_uncalibrated[1], mean_list_uncertainty_uncalibrated[2], mean_list_uncertainty_uncalibrated[3]], [std_list_uncertainty_uncalibrated[1], std_list_uncertainty_uncalibrated[2], std_list_uncertainty_uncalibrated[3]], linestyle='None', marker='P', label='uncertainty sampling (uncalibrated)', c='m', transform=trans3)
    else:
        ax.errorbar(iter_nb, mean_list_our_method, std_list_our_method, linestyle='None', marker='o', transform=trans1, label='our method', c='b')
        ax.errorbar(iter_nb, mean_list_adaptive, std_list_adaptive, linestyle='None', marker='v', label='adaptive retrieval', c='r', transform=trans2)
        ax.errorbar(iter_nb, mean_list_uncertainty, std_list_uncertainty, linestyle='None', marker='s', label='uncertainty sampling', c='g')
        ax.errorbar(iter_nb, mean_list_uncertainty_uncalibrated, std_list_uncertainty_uncalibrated, linestyle='None', marker='P', transform=trans3, label='uncertainty sampling (uncalibrated)', c='m')
    ax.set_yscale('log')
    ax.tick_params(which='both', direction='in', pad=3)
    # ax.locator_params(axis='y', nbins=6)
    ax.set_ylabel('Expansion rate', fontweight='bold')
    ax.set_xlabel('Iteration number', fontweight='bold')
    ax.set_title(label.replace('_', ' ').replace('1mo', '').title(), fontweight='bold')
    ax.set_xlim([-0.5, 4.5])
    # ax.set_ylim([0.1, 300])
    if num == 4:
        ax.legend(loc='best', fontsize=9)
    sns.despine()
    plt.savefig(os.path.join(output_path, f'{label}_extra.png'), bbox_inches='tight', format='png', dpi=1200, transparent=False)
import pandas as pd
path_data = '/home/manuto/Documents/world_bank/bert_twitter_labor/twitter-labor-data/data/active_learning/evaluation_inference/US'
paths = [path for path in os.listdir(path_data) if 'iter_' in path]
for path in paths:
    for label in labels:
        df = pd.read_csv(os.path.join(path_data, path, f'{label}.csv'))
        extra_df = pd.read_csv(os.path.join(path_data, path, f'extra_{label}.csv'))
        df = pd.concat([df, extra_df])
        nb_pos = df.loc[df['class'] == 1].shape[0]
        print(path, label, nb_pos)
```
### Base: Numberworks' questions, plus things I noticed while studying
#### Statistics
- How would you explain a p-value to a client so that it is easy to understand?
- Is the p-value still valid in this day and age? When does a p-value tend to misrepresent reality?
- For analyses and experiment designs such as A/B tests, what methods are there for deciding whether a result is statistically significant?
- What does R-squared mean? How do you plan to explain it to a client?
- In which cases should you use the mean, and in which the median?
- Why is the Central Limit Theorem useful?
- Explain entropy. If possible, explain Information Gain as well.
- On the other hand, some argue that in today's big-data era normality tests are meaningless. Is that right?
- When can you use parametric methods, and when non-parametric methods?
- What is the difference between "likelihood" and "probability"?
- What does "bootstrap" mean in statistics?
- With very few samples (a few dozen or fewer), how could you build a predictive model?
- Can you explain the difference in perspective between Bayesians and frequentists?
#### Machine Learning
- Explain local minima and global minima.
- Explain the curse of dimensionality.
- What are the usual dimensionality-reduction techniques?
- PCA is a dimensionality-reduction technique, a data-compression technique, and a noise-removal technique all at once. Can you explain why?
- What do the acronyms LSA, LDA, and SVD stand for, and how do they relate to each other?
- What would be the best way to explain a Markov chain to a high-school student?
- You need to extract topics from a pile of text. How would you approach it?
- Why does SVM instead work by expanding the dimensionality? What advantage comes from that?
- Explain eigenvalues and eigenvectors. Why are they important?
- Compared with other, better machine-learning methods, make the case for the old technique of naive Bayes.
- Explain support, confidence, and lift for association rules.
- Among optimization techniques, do you know Newton's method and gradient descent?
- Do you have a view on the difference between the machine-learning approach and the statistical approach?
- What are the typical problems with (traditional, pre-deep-learning) artificial neural networks?
- What do you think is the foundation of the current wave of deep-learning innovation?
- Can you explain the ROC curve?
- You have 100 servers. Why should you use a random forest rather than a neural network?
- What metrics and methods can be used to compare the performance of two recommendation engines? Can the methods used for search engines be reused as-is, or not?
- What is the main semantic drawback of K-means? (other than heavy computation)
#### Visualization
- Recommend the best way to visualize four or more pieces of information at once, such as "visitor count and conversion rate by region (or by day) for new versus returning visitors" or "customer count and average purchase amount by customer tier and recency of last visit".
- From the perspective of discovering factors that influence purchases, how can an individual's time series of shopping-site web activity be visualized effectively? Which features should be represented? In practice, what would be the biggest concern?
- Why are pie charts bad? When are they bad? When are they not so bad?
- What is the biggest problem with histograms?
- Word clouds look pretty but have a weakness. What is it?
- You have a 1-D variable whose values are so densely packed that plotting them on a line is hard to read. What should you do?
#### Analytics in General
- What makes a good feature? What methods are there for judging a feature's usefulness?
- There is a saying that "correlation does not imply causation". Can you explain it?
- What are the pros and cons of A/B testing, and for the cons, what are the ways to address them?
- Assuming we can interact with each customer's web behavior in real time, let's review the theories of customer behavior and the models applicable here.
- You built two kinds of prediction models for a client: a random forest model that predicts well but is hard to explain, and a sequential Bayesian model that predicts somewhat worse but can clearly explain why. Which model would you recommend to the client?
- If you had to build a model predicting which product a customer will buy tomorrow, decide which technique you would use (e.g., SVM, random forest, logistic regression) and explain it to a practitioner with no knowledge of statistics or machine learning.
- Describe your own approach to feature selection.
- When computing similarity between data points, if the number of features is large (e.g., 100 or more), how should such high-dimensional clustering be tackled?
#### Systems Engineering
- You just bought your first server. What security measures would you take first?
- What measures would you take to block brute-force attacks on SSH?
- A report comes in that MySQL has been slow lately. What would you check and tune first?
- Explain why you must not run ALTER TABLE on a live MySQL server, and describe the alternative.
- What is needed to take a backup of a MySQL server that is working hard?
- You ran top to look at a process's CPU state. Among user, system, and iowait, which do you care about most? For an ideal program, what should those values look like?
- If iowait comes out high, what should you do? (give both the answer that costs money and the answer that uses software)
- You frequently need to install libraries on 10 machines at the same time. What solutions are there?
- Which do you like better, screen or tmux?
- vim or emacs? Declare your allegiance.
- What is your favorite Linux distribution, and why?
- You now manage more than 10 machines. What are the important monitoring metrics, and what would you monitor them with?
- The source is in Git, and there are more than 10 web servers serving production traffic. How would you deploy?
#### Distributed Processing
- What does a well-written MapReduce job look like? Can you explain it in terms of how the data size changes?
- When the final result comes from a chain of several MR jobs, a job in the middle can fail. How would you monitor job failures? How would you handle dependencies between jobs?
- In a distributed environment, does a JOIN usually bottleneck on disk, CPU, or network? What should be done to resolve it?
- Explain Amdahl's law, and from there, why systems should be built shared-nothing.
- Shared-nothing architectures have drawbacks too. What are they?
- Think about why Spark is faster than Hadoop from the perspective of I/O optimization.
- Cassandra seems to have flopped. Why do you think so? Where would it still be useful?
- Design a system architecture for a service with terabytes of existing data plus gigabytes of new logs per hour that must provide every subscriber an individually computed real-time (web) service.
- You need fast lookups over a large dataset (100 GB or more, finding a specific record in under 100 ms). Which backend would you use? If you had to use a slower backend, how could you compensate?
- To gather data from many machines there are several options (Flume, Fluentd, etc.), or you could write from the source directly to a messaging system such as Kafka. Which do you prefer, and why?
#### Web Architecture
- Traffic is surging. For an AWS ELB setup, what properties should the web servers have so they can autoscale easily?
- Why does Nginx perform better than Apache? Can you explain it together with why node.js performs well?
- node.js is generally fast, but in which cases should it not be used?
- Can you run HTTPS servers for multiple domains on a single IP? If not, why not? There is a way to solve this; what is it?
- The service must keep running even while development is in full swing. What production deploy environment makes this possible? Sketch the lowest-cost scenario that still works even when major changes land in each of the WEB/WAS/DB/Cluster layers.
#### Service Implementation (Python, JavaScript, ...)
- When building a crawler in Python, what are the pros and cons of BeautifulSoup and Selenium from a scraping point of view?
- What is the remedy when our IP gets blocked for overly frequent access? (other than resolving it through conversation)
- If you wanted to A/B test a site within the next 10 minutes, how would you do it? Using a third-party service is fine.
- How would you run an A/B test that distinguishes new visitors from returning visitors?
- If you want to put R output into a dashboard built with Python, what approaches are possible?
- How can you easily collect impression counts and click counts per product for a shopping mall?
- You want to link together a user who moves across several websites. Assuming we can insert our code into each site's pages, is this possible? If so, how?
- Sometimes data must be exchanged with a client company or an external server. For security, sending it as plain text is obviously not acceptable. What options are there?
#### Client-Facing Side
- The client says one element is what they are curious about, but you think another part is more important. How should you steer the conversation?
- Between meeting the business counterpart often and sharing even failed analyses, versus taking time and sharing only polished results, which would you choose?
- The client hands you a list of 10 questions. By what criteria should you prioritize them?
- Offline data has to be joined in, so the data feedback cycle is very slow and even its consistency is questionable. What actions or course corrections can we take?
- There is not enough traffic to run several A/B tests at once. What should you do?
- If the client company keeps demanding only informational dashboards, how should you respond?
- You have been delivering a weekly report to a client, but this week there was nothing noteworthy. What do you do?
- How could we pull data from hosted platforms such as Cafe24 or MakeShop?
- There is an existing team that was doing work with the same goal. How should you build the relationship, or what needs to be resolved for the work to get done?
- To what extent should back data be generalized before being used in interviews or lectures?
- A client company wants to work with us, but we currently lack the capacity. How should you respond?
- Client companies doubt existing recommendation services, mainly whether revenue actually goes up. What methods are there to verify this?
- From that viewpoint, think of ways to make the performance of our service clearly visible to clients.
#### Understanding Personal Data
- What kinds of information count as personal data? Is a user ID personal data? If you want to identify users in a legal way that does not violate this, what should you do?
- What is your view on the current state of personal-data protection in Korea? If it acts as an obstacle to the business, what solutions are there?
- Why are third-party cookies a problem?
```
import random
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import MultiGrid
from mesa.datacollection import DataCollector
def compute_gini(model):
    agent_wealths = [agent.wealth for agent in model.schedule.agents]
    x = sorted(agent_wealths)
    N = model.num_agents
    B = sum(xi * (N - i) for i, xi in enumerate(x)) / (N * sum(x))
    return (1 + (1 / N) - 2 * B)


class MoneyModel(Model):
    """A simple model of an economy where agents exchange currency at random.

    All the agents begin with one unit of currency, and each time step can give
    a unit of currency to another agent. Note how, over time, this produces a
    highly skewed distribution of wealth.
    """

    def __init__(self, N, width, height):
        self.num_agents = N
        self.grid = MultiGrid(height, width, True)
        self.schedule = RandomActivation(self)
        self.datacollector = DataCollector(
            model_reporters={"Gini": compute_gini},
            agent_reporters={"Wealth": "wealth"}
        )
        # Create agents
        for i in range(self.num_agents):
            a = MoneyAgent(i, self)
            self.schedule.add(a)
            # Add the agent to a random grid cell
            x = random.randrange(self.grid.width)
            y = random.randrange(self.grid.height)
            self.grid.place_agent(a, (x, y))
        self.running = True
        self.datacollector.collect(self)

    def step(self):
        self.schedule.step()
        # collect data
        self.datacollector.collect(self)

    def run_model(self, n):
        for i in range(n):
            self.step()


class MoneyAgent(Agent):
    """An agent with fixed initial wealth."""

    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.wealth = 1

    def move(self):
        possible_steps = self.model.grid.get_neighborhood(
            self.pos, moore=True, include_center=False
        )
        new_position = random.choice(possible_steps)
        self.model.grid.move_agent(self, new_position)

    def give_money(self):
        cellmates = self.model.grid.get_cell_list_contents([self.pos])
        if len(cellmates) > 1:
            other = random.choice(cellmates)
            other.wealth += 1
            self.wealth -= 1

    def step(self):
        self.move()
        if self.wealth > 0:
            self.give_money()
```
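The `compute_gini` formula above can be sanity-checked without running Mesa at all. Here it is rewritten to take a plain list of wealths; perfect equality should give 0, and one agent holding everything should give (n-1)/n:

```python
def gini(wealths):
    """Same formula as compute_gini, applied to a plain list of wealths."""
    x = sorted(wealths)
    n = len(x)
    b = sum(xi * (n - i) for i, xi in enumerate(x)) / (n * sum(x))
    return 1 + (1 / n) - 2 * b
```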
## Process for adding interactive display
```
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.modules import CanvasGrid
from mesa.visualization.modules import ChartModule
from mesa.visualization.UserParam import UserSettableParameter
def agent_portrayal(agent):
    portrayal = {"Shape": "circle",
                 "Filled": "true",
                 "r": 0.5}
    if agent.wealth > 0:
        portrayal["Color"] = "red"
        portrayal["Layer"] = 0
    else:
        portrayal["Color"] = "grey"
        portrayal["Layer"] = 1
        portrayal["r"] = 0.2
    return portrayal
grid = CanvasGrid(agent_portrayal, 10, 10, 500, 500)
chart = ChartModule([
{"Label": "Gini", "Color": "#0000FF"}],
data_collector_name='datacollector')
model_params = {
"N": UserSettableParameter('slider', "Number of agents", 100, 2, 200, 1,
description="Choose how many agents to include in the model"),
"width": 10,
"height": 10
}
server = ModularServer(MoneyModel, [grid, chart], "Money Model", model_params)
server.port = 8521
server.launch()
```
## 3. Sequence Pattern Mining
```
# ! pip install gsppy
```
### a)
```
def _mystrip(x):
    return x[:-1].split(',')
import pandas as pd
df = pd.read_csv('Sequence.csv', sep='\n', header=None)
contains = df[df[0].str.contains(",Bread,Sweet")]
df[0]=df[0].apply(_mystrip)
df.head()
# trans = df.values.tolist()
trans = list(df[0])
trans
from csv import reader
list_of_rows=[]
with open('Sequence.csv', 'r') as read_obj:
    csv_reader = reader(read_obj)
    list_of_rows = list(csv_reader)
display(list_of_rows)
```
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style="direction:ltr;text-align:left;">
First we read the file as a dataset with a single column; then, using commas as separators, we convert each row into a list so that it matches the input format of the function.
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
### b)
```
from gsppy.gsp import GSP
_gsp = GSP(trans)
result = _gsp.search(0.3)
display(result)
```
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style="direction:ltr;text-align:left;">
We run the function with a support threshold of 0.3. As we can see, it does not tell us much: the number of frequent-set records is large, so the ratio of a pattern's occurrences to them can become negligible and fall below this support threshold.
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
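The support filtering described above can be illustrated without gsppy. A rough sketch that counts, for a candidate sequence, the fraction of transactions containing it as a contiguous run (a simplification; GSP's own candidate generation and counting are more general):

```python
def support(pattern, transactions):
    """Fraction of transactions containing `pattern` as a contiguous run."""
    def contains(seq, pat):
        k = len(pat)
        return any(seq[i:i + k] == pat for i in range(len(seq) - k + 1))
    hits = sum(contains(t, list(pattern)) for t in transactions)
    return hits / len(transactions)

# Hypothetical toy transactions, just to exercise the helper.
txns = [['Milk', 'Bread', 'Sweet'],
        ['Bread', 'Sweet', 'Tea Powder'],
        ['Milk', 'Sweet', 'Bread'],
        ['Cheese', 'Bread', 'Sweet']]
```

A pattern is kept only if its support clears the chosen minimum-support threshold; lowering the threshold keeps more (rarer) patterns at the cost of longer runtimes.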
### c)
```
_gsp2 = GSP(trans)
result2 = _gsp2.search(0.001)
result2
```
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style="direction:ltr;text-align:left;">
To find that sequence we lowered the support threshold. Searching the output, we find the following values:
('Panner', 'Bread', 'Sweet'): 10, ('Cheese', 'Bread', 'Sweet'): 9
</p>
<p style="direction:ltr;text-align:left;">
By lowering this threshold further we would reach even more sequences, but the runtime becomes extremely long.
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
```
contains.head(60)
print("Lassi :",contains[0].str.count("Lassi,Bread,Sweet,").sum())
print("Ghee :",contains[0].str.count("Ghee,Bread,Sweet,").sum())
print("Panner :",contains[0].str.count("Panner,Bread,Sweet,").sum())
print("Butter :",contains[0].str.count("Butter,Bread,Sweet,").sum())
print("Coffee Powder :",contains[0].str.count("Coffee Powder,Bread,Sweet,").sum())
print("Milk :",contains[0].str.count("Milk,Bread,Sweet,").sum())
print("Sugar :",contains[0].str.count("Sugar,Bread,Sweet,").sum())
print("Cheese :",contains[0].str.count("Cheese,Bread,Sweet,").sum())
print("Tea Powder :",contains[0].str.count("Tea Powder,Bread,Sweet,").sum())
print("Yougurt :",contains[0].str.count("Yougurt,Bread,Sweet,").sum())
```
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
<p style="direction:ltr;text-align:left;">
Now, if we also inspect the dataset itself, separate the rows containing this pattern, and count how often the sequence occurs in its different variants, we get the numbers above. As we can see, the most frequent are Panner and Cheese, which we also found in the previous part.
The most frequent:
Panner -> Cheese -> Sugar -> Coffee Powder -> Ghee -> Butter
</p>
<hr style = "border-top: 3px solid #000000 ; border-radius: 3px;">
# Climate Data
```
import matplotlib.pyplot as plt
import pydaymet as daymet
from pynhd import NLDI
```
The Daymet database provides climatology data at 1-km resolution. First, we use [PyNHD](https://github.com/cheginit/pynhd) to get the contributing watershed geometry of a NWIS station with the ID of `USGS-01031500`:
```
geometry = NLDI().get_basins("01031500").geometry[0]
```
[PyDaymet](https://github.com/cheginit/pydaymet) allows us to get the data for a single pixel or for a region as gridded data. The function for a single pixel is called `pydaymet.get_bycoords` and the one for gridded data is called `pydaymet.get_bygeom`. The arguments of these functions are identical except the first: the latter takes a polygon and the former a coordinate (a tuple of length two, as in `(x, y)`).
The input geometry or coordinate can be in any valid CRS (defaults to EPSG:4326). The `date`
argument can be either a tuple of length two like `(start_str, end_str)` or a list of years
like `[2000, 2005]`.
Additionally, we can pass `time_scale` to get daily, monthly or annual summaries. This flag
by default is set to daily. We can pass `time_scale` as `daily`, `monthly`, or `annual`
to `get_bygeom` or `get_bycoords` functions to download the respective summaries.
`PyDaymet` can also compute Potential EvapoTranspiration (PET) at daily time scale using three methods: ``penman_monteith``, ``priestley_taylor``, and ``hargreaves_samani``. Let's get the data for six days and plot PET.
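For reference, Hargreaves-Samani is a simple temperature-based PET method. A hedged sketch of the textbook formula (this is the standard 1985 equation, not PyDaymet's internal implementation; `ra` is extraterrestrial radiation expressed as equivalent evaporation in mm/day):

```python
def pet_hargreaves_samani(t_min, t_max, ra):
    """Hargreaves-Samani (1985) reference evapotranspiration in mm/day.

    t_min, t_max: daily minimum/maximum air temperature in degrees C.
    ra: extraterrestrial radiation expressed as equivalent evaporation (mm/day).
    """
    t_mean = (t_min + t_max) / 2
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5
```

Its appeal is that it needs only temperature extremes plus latitude and day of year (which determine `ra`), whereas Penman-Monteith also requires humidity, wind, and radiation inputs.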
```
dates = ("2000-01-01", "2000-01-06")
daily = daymet.get_bygeom(geometry, dates, variables="prcp", pet="hargreaves_samani")
ax = daily.pet.plot(x="x", y="y", row="time", col_wrap=3)
ax.fig.savefig("_static/daymet_grid.png", facecolor="w", bbox_inches="tight")
```
Now, let's get the monthly summaries for six months.
```
var = ["prcp", "tmin"]
dates = ("2000-01-01", "2000-06-30")
monthly = daymet.get_bygeom(geometry, dates, variables=var, time_scale="monthly")
ax = monthly.prcp.plot(x="x", y="y", row="time", col_wrap=3)
```
Note that the default CRS is EPSG:4326. If the input geometry (or coordinate) is in a different CRS we can pass it to the function. The gridded data are automatically masked to the input geometry. Now, let's get the data for a coordinate in the EPSG:3542 CRS.
```
coords = (-1431147.7928, 318483.4618)
crs = "epsg:3542"
dates = ("2000-01-01", "2006-12-31")
annual = daymet.get_bycoords(
coords, dates, variables=["prcp", "tmin"], crs=crs, time_scale="annual"
)
fig = plt.figure(figsize=(6, 4), facecolor="w")
gs = fig.add_gridspec(1, 2)
axes = gs[:].subgridspec(2, 1, hspace=0).subplots(sharex=True)
annual["tmin (degrees C)"].plot(ax=axes[0], color="r")
axes[0].set_ylabel(r"$T_{min}$ ($^\circ$C)")
axes[0].xaxis.set_ticks_position("none")
annual["prcp (mm/day)"].plot(ax=axes[1])
axes[1].set_ylabel("$P$ (mm/day)")
plt.tight_layout()
fig.savefig("_static/daymet_loc.png", facecolor="w", bbox_inches="tight")
```
Next, let's get annual total precipitation for Hawaii and Puerto Rico for 2010.
```
hi_ext = (-160.3055, 17.9539, -154.7715, 23.5186)
pr_ext = (-67.9927, 16.8443, -64.1195, 19.9381)
hi = daymet.get_bygeom(hi_ext, 2010, variables="prcp", region="hi", time_scale="annual")
pr = daymet.get_bygeom(pr_ext, 2010, variables="prcp", region="pr", time_scale="annual")
ax = hi.prcp.plot(size=4)
plt.title("Hawaii, 2010")
ax.figure.savefig("_static/hi.png", facecolor="w", bbox_inches="tight")
ax = pr.prcp.plot(size=4)
plt.title("Puerto Rico, 2010")
ax.figure.savefig("_static/pr.png", facecolor="w", bbox_inches="tight")
```
```
import numpy as np
import tensorflow as tf
from TextSum import helpers
tf.reset_default_graph()
sess = tf.InteractiveSession()
PAD = 0
EOS = 1
vocab_size = 10
input_embedding_size = 20
encoder_hidden_units = 25
decoder_hidden_units = encoder_hidden_units
# encoder_inputs:[max_time, batch_size]
encoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32, name='encoder_inputs')
# decoder_targets: [max_time, batch_size]
decoder_targets = tf.placeholder(shape=(None, None), dtype=tf.int32, name='decoder_targets')
# decoder_inputs: [max_time, batch_size]
decoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32, name='decoder_inputs')
embeddings = tf.Variable(tf.random_uniform([vocab_size, input_embedding_size], -1.0, 1.0), dtype=tf.float32)
# encoder_inputs_embeded: [max_time, batch_size, input_embedding_size]
encoder_inputs_embeded = tf.nn.embedding_lookup(embeddings, encoder_inputs)
# decoder_inputs_embeded: [max_time, batch_size, input_embedding_size]
decoder_inputs_embeded = tf.nn.embedding_lookup(embeddings, decoder_inputs)
decoder_inputs_embeded
encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(encoder_cell,
encoder_inputs_embeded,
dtype=tf.float32, time_major=True)
del encoder_outputs
encoder_final_state
decoder_cell = tf.contrib.rnn.LSTMCell(decoder_hidden_units)
decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn(decoder_cell,
decoder_inputs_embeded,
initial_state=encoder_final_state,
dtype=tf.float32, time_major=True,scope='plain_decoder')
decoder_logits = tf.contrib.layers.linear(decoder_outputs, vocab_size)
decoder_prediction = tf.argmax(decoder_logits, 2)
stepwise_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
labels=tf.one_hot(decoder_targets,
depth=vocab_size,
dtype=tf.float32),
logits=decoder_logits)
loss = tf.reduce_mean(stepwise_cross_entropy)
train_op = tf.train.AdamOptimizer().minimize(loss)
sess.run(tf.global_variables_initializer())
batch_ = [[6], [3, 4], [9, 8, 7]]
batch_, batch_length_ = helpers.batch(batch_)
print('batch_encoded:\n' + str(batch_))
din_, dlen_ = helpers.batch(np.ones(shape=(3, 1), dtype=np.int32),
max_sequence_length=4)
print('decoder inputs:\n' + str(din_))
pred_ = sess.run(decoder_prediction,
feed_dict={encoder_inputs: batch_,
decoder_inputs: din_,})
print('decoder predictions:\n' + str(pred_))
batch_size = 10
batches = helpers.random_sequences(length_from=3, length_to=8, vocab_lower=2,
vocab_upper=10, batch_size=batch_size)
# print('head of the batch')
# for seq in next(batches):
# print(seq)
def next_feed():
batch = next(batches)
    # (debug leftover) uncomment to print each raw batch; printing on every
    # call floods the training loop below:
    # for seq in batch:
    #     print(seq)
encoder_inputs_, _ = helpers.batch(batch)
decoder_targets_, _ = helpers.batch(
[(sequence) + [EOS] for sequence in batch]
)
decoder_inputs_, _ = helpers.batch(
[[EOS] + (sequence) for sequence in batch]
)
return {
encoder_inputs: encoder_inputs_,
decoder_inputs: decoder_inputs_,
decoder_targets: decoder_targets_,
}
a = next_feed()
a.get(encoder_inputs)
a.get(decoder_inputs)
a.get(decoder_targets)
a[encoder_inputs].T
for i, (enc, dec) in enumerate(zip(a[encoder_inputs].T, a[decoder_inputs].T)):
    print('encoder input {}: {}'.format(i, enc))
    print('decoder input {}: {}'.format(i, dec))
loss_track = []
max_batches = 3001
batches_in_epoch = 1000
try:
for batch in range(max_batches):
fd = next_feed()
_, l = sess.run([train_op, loss], fd)
loss_track.append(l)
if batch == 0 or batch % batches_in_epoch == 0:
print('batch {}'.format(batch))
print(' minibatch loss: {}'.format(sess.run(loss, fd)))
predict_ = sess.run(decoder_prediction, fd)
for i, (inp, pred) in enumerate(zip(fd[encoder_inputs].T, predict_.T)):
print(' sample {}:'.format(i + 1))
print(' input > {}'.format(inp))
print(' predicted > {}'.format(pred))
if i >= 2:
break
print()
except KeyboardInterrupt:
print('training interrupted')
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(loss_track)
print('loss {:.4f} after {} examples (batch_size={})'.format(loss_track[-1],
len(loss_track)*batch_size,
batch_size))
```
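The cells above rely on `TextSum.helpers`, which isn't shown in the notebook. A minimal stand-in for `helpers.batch`, together with the EOS shifting done in `next_feed`, might look like the sketch below; the real helper in the repo may differ in details such as dtype and the returned lengths.

```python
PAD, EOS = 0, 1


def batch(sequences, max_sequence_length=None):
    """Pad `sequences` with PAD into a time-major list of lists
    ([max_time][batch_size]) and return it with the original lengths.
    A stand-in for TextSum.helpers.batch; the real helper may differ."""
    lengths = [len(s) for s in sequences]
    max_time = max_sequence_length or max(lengths)
    # time-major: row t holds element t of every sequence (or PAD)
    major = [[s[t] if t < len(s) else PAD for s in sequences]
             for t in range(max_time)]
    return major, lengths


# The same shifting next_feed applies: targets get EOS appended,
# decoder inputs get EOS prepended (teacher forcing).
seqs = [[6], [3, 4], [9, 8, 7]]
encoder_inputs_, _ = batch(seqs)
decoder_targets_, _ = batch([s + [EOS] for s in seqs])
decoder_inputs_, _ = batch([[EOS] + s for s in seqs])
```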

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import LabelEncoder
from nltk.stem import WordNetLemmatizer
from imblearn.over_sampling import SMOTE
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import re
import warnings
warnings.filterwarnings("ignore")
# read the datafile
train_data = pd.read_csv("./archive/Train.csv")
test_data = pd.read_csv("./archive/Test.csv")
print(train_data.shape)
print(test_data.shape)
train_data.head(2)
train_data.tail(2)
train_data.Label.value_counts().plot(kind='bar');
chichewa = ['i', 'ine', 'wanga', 'inenso', 'ife', 'athu', 'athu', 'tokha', 'inu', 'ndinu','iwe ukhoza', 'wako','wekha','nokha','iye','wake','iyemwini','icho','ndi','zake','lokha','iwo','awo','iwowo','chiyani','amene', 'uyu', 'uyo', 'awa', "ndili", 'ndi', 'ali','anali','khalani','akhala','kukhala',' Khalani nawo','wakhala','anali','chitani','amachita','kuchita', 'a', 'an', 'pulogalamu ya', 'ndi', 'koma', 'ngati', 'kapena', 'chifukwa', 'monga', 'mpaka', 'pamene', 'wa', 'pa ',' by','chifukwa', 'ndi','pafupi','kutsutsana','pakati','kupyola','nthawi', 'nthawi','kale','pambuyo','pamwamba', 'pansipa', 'kuti', 'kuchokera', 'mmwamba', 'pansi', 'mu', 'kunja', 'kuyatsa', 'kuchoka', 'kutha', 'kachiwiri', 'kupitilira','kenako',' kamodzi','apa','apo','liti','pati','bwanji','onse','aliyense','onse','aliyense', 'ochepa', 'zambiri', 'ambiri', 'ena', 'otero', 'ayi', 'kapena', 'osati', 'okha', 'eni', 'omwewo', 'kotero',' kuposa','nawonso',' kwambiri','angathe','ndidzatero','basi','musatero', 'musachite',' muyenera', 'muyenera kukhala','tsopano', 'sali', 'sindinathe','sanachite','satero','analibe', 'sanatero','sanachite','sindinatero','ayi','si', 'ma', 'sizingatheke','mwina','sayenera', 'osowa','osafunikira', 'shan' , 'nenani', 'sayenera', 'sanali', 'anapambana', 'sangachite', 'sanakonde', 'sangatero']
#cleaning texts
wn = WordNetLemmatizer()
def text_preprocessing(review):
review = re.sub('[^a-zA-Z]', ' ', review)
review = review.lower()
review = review.split()
review = [wn.lemmatize(word) for word in review if not word in chichewa]
review = ' '.join(review)
return review
import nltk
nltk.download('wordnet')
train_data['Text'] = train_data['Text'].apply(text_preprocessing)
test_data['Text'] = test_data['Text'].apply(text_preprocessing)
print(train_data.head())
print(test_data.head())
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_data['Text']).toarray()
training = pd.DataFrame(X, columns=vectorizer.get_feature_names())
print(training.shape)
X_test_final = vectorizer.transform(test_data['Text']).toarray()
test_new = pd.DataFrame(X_test_final, columns=vectorizer.get_feature_names())
print(test_new.shape)
training.head() #check first five rows
X = training
y = train_data['Label']
label_encoder = LabelEncoder()
y_label = label_encoder.fit_transform(y)
smote = SMOTE()
X, y_label = smote.fit_resample(X,y_label)
np.bincount(y_label)
X_train, X_test, y_train, y_test = train_test_split(X, y_label, test_size=0.1, random_state=0)
model = SGDClassifier(loss='hinge',
alpha=4e-4,
max_iter=20,
verbose=False)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("Train Accuracy Score:",round(model.score(X_train, y_train),2))
print("Test Accuracy Score:",round(accuracy_score(y_test, pred),2))
print(classification_report(y_test, pred))
test_pred = label_encoder.inverse_transform(pred)
test_label = label_encoder.inverse_transform(y_test)
cf_matrix = confusion_matrix(test_label, test_pred)  # (y_true, y_pred) order
sns.heatmap(cf_matrix, annot=True);
# Preparing submission
sub_pred = model.predict(test_new)
submission = pd.DataFrame()
submission['ID'] = test_data['ID']
submission['Label'] = label_encoder.inverse_transform(sub_pred)
submission.to_csv('submission.csv', index=False)
```
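The regex-clean, lowercase, and stopword-filter steps in `text_preprocessing` can be illustrated without NLTK. The sketch below omits the lemmatizer and uses a tiny illustrative stopword subset rather than the full `chichewa` list.

```python
import re

STOPWORDS = {"ndi", "kapena", "koma"}  # illustrative subset, not the full list


def clean_text(text, stopwords=STOPWORDS):
    """Mirror the notebook's regex clean + lowercase + stopword filter
    (lemmatization omitted for brevity)."""
    text = re.sub("[^a-zA-Z]", " ", text)  # keep letters only
    tokens = text.lower().split()          # lowercase + tokenize
    return " ".join(w for w in tokens if w not in stopwords)


cleaned = clean_text("Mpira ndi masewera, koma 2021!")  # "mpira masewera"
```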
```
import matplotlib.pyplot as plt
import numpy as np
from qctrlvisualizer import get_qctrl_style, plot_controls
from qctrl import Qctrl
qctrl = Qctrl()
# Define standard matrices.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
# Define control parameters.
duration = 1e-6 # s
# Define standard deviation of the errors in the experimental results.
sigma = 0.01
# Create a random unknown operator.
rng = np.random.default_rng(seed=10)
phi = rng.uniform(-np.pi, np.pi)
u = rng.uniform(-1, 1)
Q_unknown = (
u * sigma_z + np.sqrt(1 - u ** 2) * (np.cos(phi) * sigma_x + np.sin(phi) * sigma_y)
) / 4
def run_experiments(omegas):
"""
Simulates a series of experiments where controls `omegas` attempt to apply
an X gate to a system. The result of each experiment is the infidelity plus
a Gaussian error.
In your actual implementation, this function would run the experiment with
the parameters passed. Note that the simulation handles multiple test points,
while your experimental implementation might need to queue the test point
requests to obtain one at a time from the apparatus.
"""
# Create the graph with the dynamics of the system.
with qctrl.create_graph() as graph:
signal = qctrl.operations.pwc_signal(values=omegas, duration=duration)
hamiltonian = qctrl.operations.pwc_operator(
signal=signal,
operator=0.5 * (sigma_x + Q_unknown),
)
qctrl.operations.infidelity_pwc(
hamiltonian=hamiltonian,
target_operator=qctrl.operations.target(operator=sigma_x),
name="infidelities",
)
# Run the simulation.
result = qctrl.functions.calculate_graph(
graph=graph,
output_node_names=["infidelities"],
)
# Add error to the measurement.
error_values = rng.normal(loc=0, scale=sigma, size=len(omegas))
infidelities = result.output["infidelities"]["value"] + error_values
# Return only infidelities between 0 and 1.
return np.clip(infidelities, 0, 1)
# Define the number of test points obtained per run.
test_point_count = 20
# Define number of segments in the control.
segment_count = 10
# Define parameters as a set of controls with piecewise constant segments.
parameter_set = (
np.pi
/ duration
* (np.linspace(-1, 1, test_point_count)[:, None])
* np.ones((test_point_count, segment_count))
)
# Obtain a set of initial experimental results.
experiment_results = run_experiments(parameter_set)
# Define initialization object for the automated closed-loop optimization.
length_scale_bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
lower_bound=1e-5,
upper_bound=1e5,
)
bound = qctrl.types.closed_loop_optimization_step.BoxConstraint(
lower_bound=-5 * np.pi / duration,
upper_bound=5 * np.pi / duration,
)
initializer = qctrl.types.closed_loop_optimization_step.GaussianProcessInitializer(
length_scale_bounds=[length_scale_bound] * segment_count,
bounds=[bound] * segment_count,
rng_seed=0,
)
# Define state object for the closed-loop optimization.
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
gaussian_process_initializer=initializer,
)
c_controls = []
best_cost, best_controls = min(
zip(experiment_results, parameter_set), key=lambda params: params[0]
)
optimization_count = 0
# Run the optimization loop until the cost (infidelity) is sufficiently small.
while best_cost > 3 * sigma:
# Print the current best cost.
optimization_steps = (
"optimization step" if optimization_count == 1 else "optimization steps"
)
print(
f"Best infidelity after {optimization_count} BOULDER OPAL {optimization_steps}: {best_cost}"
)
# Organize the experiment results into the proper input format.
results = [
qctrl.types.closed_loop_optimization_step.CostFunctionResult(
parameters=list(parameters),
cost=cost,
cost_uncertainty=sigma,
)
for parameters, cost in zip(parameter_set, experiment_results)
]
# Call the automated closed-loop optimizer and obtain the next set of test points.
optimization_result = qctrl.functions.calculate_closed_loop_optimization_step(
optimizer=optimizer,
results=results,
test_point_count=test_point_count,
)
optimization_count += 1
# Organize the data returned by the automated closed-loop optimizer.
parameter_set = np.array(
[test_point.parameters for test_point in optimization_result.test_points]
)
optimizer = qctrl.types.closed_loop_optimization_step.Optimizer(
state=optimization_result.state
)
# Obtain experiment results that the automated closed-loop optimizer requested.
experiment_results = run_experiments(parameter_set)
# Record the best results after this round of experiments.
cost, controls = min(
zip(experiment_results, parameter_set), key=lambda params: params[0]
)
if cost < best_cost:
best_cost = cost
best_controls = controls
c_controls.append({"duration": duration, "values": best_controls})
# Print final best cost.
print(f"Infidelity: {best_cost}")
# Plot controls that correspond to the best cost.
plot_controls(
figure=plt.figure(),
controls={
r"$\Omega(t)$": [
{"duration": duration / len(best_controls), "value": value}
for value in best_controls
]
},
)
c_controls
# Copy the control dictionaries so the normalization below doesn't
# mutate c_controls in place.
b_c = [dict(c) for c in c_controls]
for i in range(len(b_c)):
    b_c[i]['values'] = b_c[i]['values'] / sum(b_c[i]['values'])
c_controls
shot_count = 1024
# Obtain the results of the experiment.
experiment_results = qctrl.functions.calculate_qchack_measurements(
controls=b_c,
shot_count=shot_count,
)
measurements = experiment_results.measurements
for k, measurement_counts in enumerate(measurements):
print(f"control #{k}: {measurement_counts}")
for k, measurement_counts in enumerate(measurements):
p0 = measurement_counts.count(0) / shot_count
p1 = measurement_counts.count(1) / shot_count
p2 = measurement_counts.count(2) / shot_count
print(f"control #{k}: P(|0>) = {p0:.2f}, P(|1>) = {p1:.2f}, P(|2>) = {p2:.2f}")
repetitions = [1, 4, 16, 32, 64]
controls = []
# Create a random string of complex numbers for all control,
# but set a different repetition_count for each control.
real_part = np.random.random(size=[segment_count])
imag_part = np.random.random(size=[segment_count])
values = 0.5 * (real_part + 1j * imag_part)
for repetition_count in repetitions:
controls.append(
{"duration": duration, "values": values, "repetition_count": repetition_count}
)
experiment_results = qctrl.functions.calculate_qchack_measurements(
controls=controls,
shot_count=shot_count,
)
for repetition_count, measurement_counts in zip(
repetitions, experiment_results.measurements
):
p0 = measurement_counts.count(0) / shot_count
p1 = measurement_counts.count(1) / shot_count
p2 = measurement_counts.count(2) / shot_count
print(
f"With {repetition_count:2d} repetitions: P(|0>) = {p0:.2f}, P(|1>) = {p1:.2f}, P(|2>) = {p2:.2f}"
)
import logging
from qctrlmloop import QctrlController
```
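The cost being minimized above is a gate infidelity. As a rough, Boulder-Opal-free illustration of what that quantity measures, the sketch below compares a single-qubit rotation exp(-i theta sigma_x / 2) against a target X gate via 1 - |Tr(X^dagger U)|^2 / 4; the conventions Boulder Opal uses internally may differ.

```python
import math


def mat_mult(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]


def rx(theta):
    """Single-qubit rotation exp(-i * theta * sigma_x / 2)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]


def x_gate_infidelity(theta):
    """1 - |Tr(X^dagger U)|^2 / 4 for U = rx(theta); zero at theta = pi."""
    x = [[0, 1], [1, 0]]  # sigma_x is Hermitian, so X^dagger = X
    m = mat_mult(x, rx(theta))
    tr = m[0][0] + m[1][1]
    return 1 - abs(tr) ** 2 / 4


# A pi rotation about x implements X exactly (up to global phase),
# so its infidelity is ~0; smaller rotations cost more.
perfect = x_gate_infidelity(math.pi)
off = x_gate_infidelity(0.9 * math.pi)
```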
# House Prices: Advanced Regression Techniques
## Table of Contents
- <b>Introduction</b>
- <b>Data Processing</b>
- Outliers
- Target variable
- <b>Feature engineering</b>
- Missing data
- <i>Exploration</i>
- <i>Imputation</i>
- Converting features
- <b>Machine Learning</b>
- Set up
- Initiating algorithms
- <i>Generalized linear models</i>
- <i>Ensemble methods (Gradient tree boosting)</i>
- Fitting algorithms
- <i>Fit all models</i>
- <i>Rank model performance</i>
- Stacking algorithms
- <b>Final predictions</b>
## Introduction
Hello Kagglers! In this kernel I'll be taking on the Kaggle competition 'House Prices: Advanced Regression Techniques'. The competition uses the Ames Housing dataset, which contains roughly 1,460 observations in each of the training and test sets, and 80 features to boot. The challenge is to predict property sale price, hence this is a regression problem.
Throughout this kernel I will explain the code so you can understand the logic behind each action. While I'll conduct some feature engineering, my main focus will be to explore the predictive models and hopefully build an effective stacked model for the final prediction.
At the time of posting, this kernel scored within the top 12% of the leaderboard, achieved through a simple approach to stacking.
Well, that's enough from me. Enjoy the read, and please feel free to share any feedback regarding my code or overall approach! I'm always looking to improve :).
```
# All project packages imported at the start
# Project packages
import pandas as pd
import numpy as np
# Visualisations
import matplotlib.pyplot as plt
import seaborn as sns
# Statistics
from scipy import stats
from scipy.stats import norm, skew
from statistics import mode
from scipy.special import boxcox1p
# Machine Learning
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import Lasso, Ridge, RidgeCV, ElasticNet
import xgboost as xgb
import lightgbm as lgb
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from catboost import Pool, CatBoostRegressor, cv
import sys
import warnings
if not sys.warnoptions:
warnings.simplefilter("ignore")
# Reading in the data
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
# Inspecting the train dataset
train.info()
# And now the test data
test.info()
```
There are a lot of object dtypes and a lot of missing values within this dataset. We'll need to consider these during data processing.
In addition, a lot of feature names have been abbreviated. For reference, here are their full names along with a brief explanation:
- SalePrice - the property's sale price in dollars. This is the target variable that you're trying to predict.
- MSSubClass: The building class
- MSZoning: The general zoning classification
- LotFrontage: Linear feet of street connected to property
- LotArea: Lot size in square feet
- Street: Type of road access
- Alley: Type of alley access
- LotShape: General shape of property
- LandContour: Flatness of the property
- Utilities: Type of utilities available
- LotConfig: Lot configuration
- LandSlope: Slope of property
- Neighborhood: Physical locations within Ames city limits
- Condition1: Proximity to main road or railroad
- Condition2: Proximity to main road or railroad (if a second is present)
- BldgType: Type of dwelling
- HouseStyle: Style of dwelling
- OverallQual: Overall material and finish quality
- OverallCond: Overall condition rating
- YearBuilt: Original construction date
- YearRemodAdd: Remodel date
- RoofStyle: Type of roof
- RoofMatl: Roof material
- Exterior1st: Exterior covering on house
- Exterior2nd: Exterior covering on house (if more than one material)
- MasVnrType: Masonry veneer type
- MasVnrArea: Masonry veneer area in square feet
- ExterQual: Exterior material quality
- ExterCond: Present condition of the material on the exterior
- Foundation: Type of foundation
- BsmtQual: Height of the basement
- BsmtCond: General condition of the basement
- BsmtExposure: Walkout or garden level basement walls
- BsmtFinType1: Quality of basement finished area
- BsmtFinSF1: Type 1 finished square feet
- BsmtFinType2: Quality of second finished area (if present)
- BsmtFinSF2: Type 2 finished square feet
- BsmtUnfSF: Unfinished square feet of basement area
- TotalBsmtSF: Total square feet of basement area
- Heating: Type of heating
- HeatingQC: Heating quality and condition
- CentralAir: Central air conditioning
- Electrical: Electrical system
- 1stFlrSF: First Floor square feet
- 2ndFlrSF: Second floor square feet
- LowQualFinSF: Low quality finished square feet (all floors)
- GrLivArea: Above grade (ground) living area square feet
- BsmtFullBath: Basement full bathrooms
- BsmtHalfBath: Basement half bathrooms
- FullBath: Full bathrooms above grade
- HalfBath: Half baths above grade
- Bedroom: Number of bedrooms above basement level
- Kitchen: Number of kitchens
- KitchenQual: Kitchen quality
- TotRmsAbvGrd: Total rooms above grade (does not include bathrooms)
- Functional: Home functionality rating
- Fireplaces: Number of fireplaces
- FireplaceQu: Fireplace quality
- GarageType: Garage location
- GarageYrBlt: Year garage was built
- GarageFinish: Interior finish of the garage
- GarageCars: Size of garage in car capacity
- GarageArea: Size of garage in square feet
- GarageQual: Garage quality
- GarageCond: Garage condition
- PavedDrive: Paved driveway
- WoodDeckSF: Wood deck area in square feet
- OpenPorchSF: Open porch area in square feet
- EnclosedPorch: Enclosed porch area in square feet
- 3SsnPorch: Three season porch area in square feet
- ScreenPorch: Screen porch area in square feet
- PoolArea: Pool area in square feet
- PoolQC: Pool quality
- Fence: Fence quality
- MiscFeature: Miscellaneous feature not covered in other categories
- MiscVal: $Value of miscellaneous feature
- MoSold: Month Sold
- YrSold: Year Sold
- SaleType: Type of sale
- SaleCondition: Condition of sale
```
# Viewing the first 10 observations
train.head(10)
# Let's get confirmation on the dataframe shapes
print("\nThe train data size is: {} ".format(train.shape))
print("The test data size is: {} ".format(test.shape))
```
That gives a better feel for what we are initially working with. As one final step before data processing, I'm going to take a copy of the ID column and remove it from both dataframes, since it is only needed when submitting final predictions to the Kaggle leaderboard and is not helpful within any predictive model.
```
#Save the 'Id' column
train_ID = train['Id']
test_ID = test['Id']
# Now drop the 'Id' column since it's unnecessary for the prediction process
train.drop("Id", axis = 1, inplace = True)
test.drop("Id", axis = 1, inplace = True)
```
# Data Processing
## Outliers
The Ames dataset documentation reveals two outliers in the feature GrLivArea (Above grade (ground) living area square feet) - let's inspect these with a quick graph:
```
# Checking for outliers in GrLivArea as indicated in dataset documentation
sns.regplot(x=train['GrLivArea'], y=train['SalePrice'], fit_reg=True)
plt.show()
```
Yep, two pretty clear outliers in the bottom right-hand corner. It's not always appropriate to delete outliers; removing too many can actually harm the model's quality. These two, however, look relatively safe, and with backing from the documentation I'm going to go ahead and clear them.
```
# Removing two very extreme outliers in the bottom right hand corner
train = train.drop(train[(train['GrLivArea']>4000) & (train['SalePrice']<300000)].index)
# Re-check graph
sns.regplot(x=train['GrLivArea'], y=train['SalePrice'], fit_reg=True)
plt.show()
```
The updated graph is looking better now. Praise to the documentation!
## Target Variable
Let's now learn more about the Target Variable - Sale Price. I'm particularly interested in detecting any skew which would become problematic during the modelling phase.
```
(mu, sigma) = norm.fit(train['SalePrice'])
# 1. Plot Sale Price
sns.distplot(train['SalePrice'] , fit=norm);
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
loc='best')
# Get the fitted parameters used by the function
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# 2. Plot SalePrice as a QQPlot
fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()
```
We can see here the Target Variable is right skewed. A log transformation should help bring it back to normality. The code below will complete this.
```
# Applying a log(1+x) transformation to SalePrice
train["SalePrice"] = np.log1p(train["SalePrice"])
# 1. Plot Sale Price
sns.distplot(train['SalePrice'] , fit=norm);
plt.ylabel('Frequency')
plt.title('SalePrice distribution')
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)],
loc='best')
# Get the fitted parameters used by the function
(mu, sigma) = norm.fit(train['SalePrice'])
print( '\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# 2. Plot SalePrice as a QQPlot
fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()
```
A thing of beauty - the target variable now looks far more amenable for modelling. Let's move on now to some feature engineering.
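The improvement visible in the plots can also be checked numerically. Below is a quick, library-free sketch of sample skewness applied to a synthetic right-skewed, price-like sample; the distribution parameters are made up for illustration only.

```python
import math
import random


def sample_skew(xs):
    """Fisher-Pearson skewness: E[(x - mu)^3] / sigma^3 (population form)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    return m3 / var ** 1.5


# A right-skewed, price-like sample: log(1 + x) pulls the long tail in.
random.seed(0)
prices = [math.exp(random.gauss(12, 0.4)) for _ in range(5000)]
before = sample_skew(prices)                           # clearly positive
after = sample_skew([math.log1p(p) for p in prices])   # near zero
```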
# Feature Engineering
Firstly, I will compile all data into a single dataset to save code duplication across both train & test sets:
```
# Saving train & test shapes
ntrain = train.shape[0]
ntest = test.shape[0]
# Creating y_train variable
y_train = train.SalePrice.values
# New all encompassing dataset
all_data = pd.concat((train, test)).reset_index(drop=True)
# Dropping the target
all_data.drop(['SalePrice'], axis=1, inplace=True)
# Printing all_data shape
print("all_data size is: {}".format(all_data.shape))
```
## Missing data
### Exploration
As was evident when initially inspecting the data, many feature variables are missing values. To get a better sense of this, I will compile a table of missing values ranked by the percentage of data missing.
```
# Getting a missing % count
all_data_missing = (all_data.isnull().sum() / len(all_data)) * 100
all_data_missing = all_data_missing.drop(all_data_missing[all_data_missing == 0].index).sort_values(ascending=False)
missing_data = pd.DataFrame({'Missing Percentage':all_data_missing})
missing_data.head(30)
```
Let's now make this data clearer by plotting it in a graph - enter barplot:
```
# Visualising missing data
f, ax = plt.subplots(figsize=(10, 6))
plt.xticks(rotation='90')
sns.barplot(x=missing_data.index, y=missing_data['Missing Percentage'])
plt.xlabel('Features', fontsize=15)
plt.ylabel('Percent of missing values', fontsize=15)
plt.title('Percent missing data by feature', fontsize=15)
```
A couple of features look severely depleted, but the rest only suffer a few omissions, which means imputing these blank values is certainly an option. To get a better sense of how each feature correlates with the target variable, I'll draw up a correlation matrix before tackling the missing data. See below!
```
# Initiate correlation matrix
corr = train.corr()
# Set-up mask
mask = np.zeros_like(corr, dtype=bool)  # np.bool was removed in recent NumPy
mask[np.triu_indices_from(mask)] = True
# Set-up figure
plt.figure(figsize=(14, 8))
# Title
plt.title('Overall Correlation of House Prices', fontsize=18)
# Correlation matrix
sns.heatmap(corr, mask=mask, annot=False,cmap='RdYlGn', linewidths=0.2, annot_kws={'size':20})
plt.show()
```
Lots of strong correlations on show, especially Overall Quality (not surprising)! Features regarding the Garage are also relating strongly. Right, let's impute the missing values ready for modelling.
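Each cell of that heatmap is a Pearson correlation coefficient, cov(x, y) / (std(x) * std(y)). A quick stdlib sketch with made-up quality/price numbers:

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Quality-like score vs. a price that mostly tracks it (toy numbers).
quality = [3, 4, 5, 6, 7, 8, 9]
price = [110, 135, 160, 150, 210, 240, 255]
r = pearson_r(quality, price)  # strongly positive
```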
### Imputation
I have bundled features into a few different operations depending on what best fits their structure, whether that is replacing missing values with a string or integer to denote zero, or imputing a specific value. I have spared you most of the trial and error; the final code below achieves 0 missing values across both datasets.
```
# All columns where missing values can be replaced with 'None'
for col in ('PoolQC', 'MiscFeature', 'Alley', 'Fence', 'FireplaceQu', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'MasVnrType', 'MSSubClass'):
all_data[col] = all_data[col].fillna('None')
# All columns where missing values can be replaced with 0
for col in ('GarageYrBlt', 'GarageArea', 'GarageCars', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF','TotalBsmtSF', 'BsmtFullBath', 'BsmtHalfBath', 'MasVnrArea'):
all_data[col] = all_data[col].fillna(0)
# All columns where missing values can be replaced with the mode (most frequently occurring value)
for col in ('MSZoning', 'Electrical', 'KitchenQual', 'Exterior1st', 'Exterior2nd', 'SaleType', 'Functional', 'Utilities'):
all_data[col] = all_data[col].fillna(all_data[col].mode()[0])
# Imputing LotFrontage with the median (middle) value
all_data['LotFrontage'] = all_data.groupby('Neighborhood')['LotFrontage'].apply(lambda x: x.fillna(x.median()))
# Checking the new missing % count
all_data_missing = (all_data.isnull().sum() / len(all_data)) * 100
all_data_missing = all_data_missing.drop(all_data_missing[all_data_missing == 0].index).sort_values(ascending=False)
missing_data = pd.DataFrame({'Missing Ratio':all_data_missing})
missing_data.head(30)
```
Another check on the Missing data table reveals exactly the desired outcome - nothing.
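Of the strategies above, the neighborhood-wise median used for `LotFrontage` is the least obvious. Below is a pandas-free sketch of the same idea; the column names mirror the dataset, but the rows are made up.

```python
from statistics import median


def impute_by_group_median(rows, group_key, value_key):
    """Fill None values of `value_key` with the median of non-missing
    values sharing the same `group_key` (a pandas-free sketch of the
    groupby-median fill used for LotFrontage)."""
    groups = {}
    for row in rows:
        v = row[value_key]
        if v is not None:
            groups.setdefault(row[group_key], []).append(v)
    medians = {g: median(vs) for g, vs in groups.items()}
    for row in rows:
        if row[value_key] is None:
            row[value_key] = medians[row[group_key]]
    return rows


rows = [
    {"Neighborhood": "A", "LotFrontage": 60},
    {"Neighborhood": "A", "LotFrontage": 80},
    {"Neighborhood": "A", "LotFrontage": None},  # filled with median of 60, 80
    {"Neighborhood": "B", "LotFrontage": 100},
    {"Neighborhood": "B", "LotFrontage": None},  # filled with 100
]
impute_by_group_median(rows, "Neighborhood", "LotFrontage")
```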
## Converting variables
### Amending dtypes
I am going to perform a few further actions before modelling the data. This will not be an exhaustive engineering process, but instead some simple steps that will hopefully support more powerful models later on.
Firstly, some variables should in fact be categorical rather than numeric, so I'll convert them below.
```
# Converting those variables which should be categorical, rather than numeric
for col in ('MSSubClass', 'OverallCond', 'YrSold', 'MoSold'):
    all_data[col] = all_data[col].astype(str)
all_data.info()
```
### Transforming skewed feature variables
Ok, the dataset is starting to look better. I considered and corrected skew within the Target variable earlier on; let's now do the same for all remaining numeric Feature variables.
```
# Applying a log(1+x)-style transformation to all skewed numeric features
from scipy.stats import skew  # in case it wasn't imported earlier in the notebook

numeric_feats = all_data.dtypes[all_data.dtypes != "object"].index
# Compute skewness of each numeric feature
skewed_feats = all_data[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew' :skewed_feats})
skewness.head(15)
```
<b>Box Cox Transformation of (highly) skewed features</b>
Skewed features are commonplace when dealing with real-world data. Transformation techniques can help to stabilize variance, make data more normal distribution-like and improve the validity of measures of association.
The problem with the Box-Cox Transformation is estimating lambda. This value depends on the data at hand, and as such lambda should strictly be re-estimated within each fold when performing cross-validation on out-of-sample datasets.
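As a hedged illustration of estimating lambda rather than fixing it, scipy's `boxcox_normmax` is one option. This is a sketch on synthetic data, not part of the original kernel:

```python
import numpy as np
from scipy.stats import boxcox_normmax
from scipy.special import boxcox1p

# Synthetic right-skewed feature standing in for a housing variable
rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Estimate lambda for the shifted (1 + x) data instead of hard-coding 0.15
lam_est = boxcox_normmax(skewed + 1)
transformed = boxcox1p(skewed, lam_est)
print(f"estimated lambda: {lam_est:.3f}")
```

Because lambda is estimated from the data at hand, it should strictly be re-estimated inside each cross-validation fold, as noted above.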
```
# Check how many features exceed the 0.75 skewness threshold
skewness = skewness[abs(skewness['Skew']) > 0.75]
print("Total number of features requiring a fix for skewness is: {}".format(skewness.shape[0]))
# Now let's apply the box-cox transformation to correct for skewness
from scipy.special import boxcox1p  # in case it wasn't imported earlier in the notebook

skewed_features = skewness.index
lam = 0.15
for feature in skewed_features:
    all_data[feature] = boxcox1p(all_data[feature], lam)
```
### New feature
I'm also going to create a new feature to bring together a few similar Features, into an overall 'Total Square Footage'.
```
# Creating a new feature: Total Square Footage
all_data['TotalSF'] = all_data['TotalBsmtSF'] + all_data['1stFlrSF'] + all_data['2ndFlrSF']
```
### Class imbalance
Lastly, a test for any significant class imbalance. Any variable where a single class accounts for greater than 97% of values will be removed from the datasets. I also explored the same strategy at the 95% level, but found that model performance decreased ever so slightly with the removal of two further features - LandSlope & MiscFeature. Thus, I will stick with the 97% level.
```
# Identifying features where a class is over 97% represented
low_var_cat = [col for col in all_data.select_dtypes(exclude=['number']) if 1 - sum(all_data[col] == mode(all_data[col]))/len(all_data) < 0.03]
low_var_cat
# Dropping these columns from both datasets
all_data = all_data.drop(['Street', 'Utilities', 'Condition2', 'RoofMatl', 'Heating', 'PoolQC'], axis=1)
```
### Label encoding
This step builds on the previous one: all text data will become numeric, a requirement for Machine Learning, since only numerical data can be fed into a predictive model. There are many other encoding techniques available, some more powerful than Label Encoding, which incurs the risk of falsely ranking variables; e.g. coding three locations as 0, 1 and 2 might imply that 2 is a higher value than 0, which is incorrect, as the numbers just represent different categories (locations). It is a simple approach, however, and therefore I'm going to stick with it for the current kernel.
Check out this link for more on encoding data:
https://www.kdnuggets.com/2015/12/beyond-one-hot-exploration-categorical-variables.html
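To see the ranking caveat in miniature (a standalone sketch, not part of the kernel's pipeline):

```python
from sklearn.preprocessing import LabelEncoder

# LabelEncoder assigns integers alphabetically, implying an order
# that doesn't exist for nominal categories like locations
le = LabelEncoder()
codes = le.fit_transform(["Suburb", "City", "Rural"])
print(dict(zip(le.classes_, range(len(le.classes_)))))  # → {'City': 0, 'Rural': 1, 'Suburb': 2}
```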
```
# List of columns to Label Encode
cols = ('FireplaceQu', 'BsmtQual', 'BsmtCond', 'GarageQual', 'GarageCond',
'ExterQual', 'ExterCond','HeatingQC', 'KitchenQual', 'BsmtFinType1',
'BsmtFinType2', 'Functional', 'Fence', 'BsmtExposure', 'GarageFinish', 'LandSlope',
'LotShape', 'PavedDrive', 'Alley', 'CentralAir', 'MSSubClass', 'OverallCond',
'YrSold', 'MoSold')
# Process columns, apply LabelEncoder to categorical features
from sklearn.preprocessing import LabelEncoder  # in case it wasn't imported earlier in the notebook

for c in cols:
    lbl = LabelEncoder()
    lbl.fit(list(all_data[c].values))
    all_data[c] = lbl.transform(list(all_data[c].values))
# Check on data shape
print('Shape all_data: {}'.format(all_data.shape))
```
### Get dummies
I will now round up the feature engineering stage of this project by creating dummy variables ready for model building.
```
# Get dummies
all_data = pd.get_dummies(all_data)
all_data.shape
# Now to return to separate train/test sets for Machine Learning
train = all_data[:ntrain]
test = all_data[ntrain:]
```
# Machine Learning
## Set-up
Before modelling I am going to define a function that returns the cross-validated RMSE across 10 folds. This ensures that all RMSE scores have been smoothed out across the entire dataset and are not the result of any irregularities, which would otherwise give a misleading representation of model performance. And that, we do not want.
```
# Set up variables
X_train = train
X_test = test
# Defining the rmse_cv function
def rmse_cv(model):
    rmse = np.sqrt(-cross_val_score(model, X_train, y_train, scoring="neg_mean_squared_error", cv=10))
    return rmse
```
With the rmse_cv function in place, I am going to tackle modelling in three phases - hopefully making it easy to follow:
1. Initiating algorithms
2. Fitting algorithms
3. Stacking algorithms
## 1. Initiating algorithms
I'm going to be working with two broad sets of algorithms within this kernel:
1. Generalized linear models
2. Ensemble methods (specifically Gradient Tree Boosting)
### A. Generalized linear models
I'm going to specifically focus on 'regularised' regression models within this section. <b>Regularisation</b> is a form of regression that shrinks (or 'regularises') the coefficient estimates towards zero. In other words, this technique discourages learning a more complex or flexible model, so as to avoid the risk of overfitting. This will be particularly helpful for the current dataset where the model needs to account for ~80 features.
There are different types of regularised regressions - I will now explore each of them.
#### 1. Ridge Regression (<i>L2 Regularisation</i>)
Ridge regression shrinks the regression coefficients, so that variables, with minor contribution to the outcome, have their coefficients <b>close to zero.</b>
The shrinkage of the coefficients is achieved by penalizing the regression model with a penalty term called L2-norm, which is the sum of the squared coefficients.
For regularised regression models, the key tuning parameter is <b>alpha</b> - a regularization parameter that controls how flexible our model is. The higher the regularization, the less prone our model will be to overfitting. However, it will also lose flexibility and might not capture all of the signal in the data. Thus I will define multiple alphas, iterate over them and plot the results so we can easily see the optimal alpha level.
```
# Setting up list of alpha's
alphas = [0.05, 0.1, 0.3, 1, 3, 5, 10, 15, 30]
# Iterate over alpha's
cv_ridge = [rmse_cv(Ridge(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_ridge = pd.Series(cv_ridge, index = alphas)
cv_ridge.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
# 5 looks like the optimal alpha level, so let's fit the Ridge model with this value
model_ridge = Ridge(alpha = 5)
```
#### 2. Lasso Regression <i>(L1 regularisation)</i>
Lasso stands for Least Absolute Shrinkage and Selection Operator. It shrinks the regression coefficients toward zero by penalizing the regression model with a penalty term called L1-norm, which is the sum of the absolute coefficients.
In the case of lasso regression, the penalty has the effect of forcing some of the coefficient estimates, with a minor contribution to the model, to be <b>exactly equal to zero</b>. This means that lasso can also be seen as an alternative to subset-selection methods for performing variable selection, reducing the complexity of the model. For this reason, I usually prefer working with the Lasso algorithm over Ridge.
Let's take the same approach to alpha selection, before initiating the Lasso model.
```
# Setting up list of alpha's
alphas = [0.01, 0.005, 0.001, 0.0005, 0.0001]
# Iterate over alpha's
cv_lasso = [rmse_cv(Lasso(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_lasso = pd.Series(cv_lasso, index = alphas)
cv_lasso.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
An addition to the Lasso model - I will use a Pipeline to scale features. For the L1 norm to work properly, it's essential this step is taken before fitting the model.
```
# Initiating Lasso model
model_lasso = make_pipeline(RobustScaler(), Lasso(alpha = 0.0005))
```
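A small sketch (synthetic data, not the housing set) of why scaling matters for the L1 penalty: an informative feature measured in tiny units can be zeroed out entirely unless the features are first brought onto a comparable scale.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

rng = np.random.default_rng(0)
informative = rng.normal(size=200) * 0.001   # informative feature in tiny units
distractor = rng.normal(size=200)            # uninformative, unit-scale feature
X = np.column_stack([informative, distractor])
y = 1000 * informative + 0.01 * rng.normal(size=200)

raw = Lasso(alpha=0.1).fit(X, y)
scaled = make_pipeline(RobustScaler(), Lasso(alpha=0.1)).fit(X, y)
print("unscaled coefs:", raw.coef_)          # informative feature is shrunk to exactly 0
print("scaled coefs:  ", scaled.named_steps['lasso'].coef_)
```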
#### 3. ElasticNet Regression
Elastic Net produces a regression model that is penalized with both the L1-norm and L2-norm. The consequence of this is to effectively shrink coefficients (like in ridge regression) and to set some coefficients to zero (as in LASSO).
```
# Setting up list of alpha's
alphas = [0.01, 0.005, 0.001, 0.0005, 0.0001]
# Iterate over alpha's
cv_elastic = [rmse_cv(ElasticNet(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_elastic = pd.Series(cv_elastic, index = alphas)
cv_elastic.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
Again, I'll be using RobustScaler to scale all features before initiating the ElasticNet model.
```
# Initiating ElasticNet model
model_elastic = make_pipeline(RobustScaler(), ElasticNet(alpha = 0.0005))
```
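One parameter the kernel leaves at its default is `l1_ratio`, which sets the L1/L2 mix. A quick sketch on synthetic data confirms the boundary case: at `l1_ratio=1.0`, ElasticNet reduces to the Lasso.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] + rng.normal(size=100)

# l1_ratio=1.0 is a pure L1 penalty; l1_ratio=0.0 would be a pure L2 (Ridge-like) penalty
enet = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print(np.allclose(enet.coef_, lasso.coef_))  # → True
```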
#### 4. Kernel ridge regression
OK, this is not strictly a generalized linear model. Kernel ridge regression (KRR) combines Ridge Regression (linear least squares with l2-norm regularization) with the 'kernel trick'. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space.
```
# Setting up list of alpha's
alphas = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
# Iterate over alpha's
cv_krr = [rmse_cv(KernelRidge(alpha = alpha)).mean() for alpha in alphas]
# Plot findings
cv_krr = pd.Series(cv_krr, index = alphas)
cv_krr.plot(title = "Validation")
plt.xlabel("Alpha")
plt.ylabel("Rmse")
```
As well as scaling features again for the Kernel ridge regression, I've defined a few more parameters within this algorithm:
- Kernel: Polynomial
- <i>This means that the algorithm will not just consider similarity between features, but also similarity between combinations of features.</i>
- Degree & Coef0:
- <i>These are used to define the precise structure of the Polynomial kernel. I arrived at the below numbers through a bit of trial and error. Implementing a GridSearchCV would probably yield a better overall fit.</i>
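The GridSearchCV mentioned above might look something like this (a sketch on synthetic data; the grid values are illustrative, not tuned for the housing set):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=80)

# Search jointly over alpha and the polynomial-kernel shape parameters
param_grid = {"alpha": [1, 3, 6, 10], "degree": [2, 2.65, 3], "coef0": [1, 6.9, 10]}
search = GridSearchCV(KernelRidge(kernel="polynomial"), param_grid,
                      scoring="neg_mean_squared_error", cv=3)
search.fit(X, y)
print(search.best_params_)
```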
```
# Initiatiing KernelRidge model
model_krr = make_pipeline(RobustScaler(), KernelRidge(alpha=6, kernel='polynomial', degree=2.65, coef0=6.9))
```
### B. Ensemble methods (Gradient tree boosting)
Boosting is an ensemble technique in which the predictors are not made independently, but sequentially.
This technique employs the logic that subsequent predictors learn from the mistakes of the previous predictors. Therefore, the observations have an unequal probability of appearing in subsequent models, and those with the highest error appear most often. The predictors can be chosen from a range of models like decision trees, regressors, classifiers etc. Because new predictors learn from the mistakes of previous predictors, it takes fewer iterations to get close to the actual values. However, the stopping criteria must be chosen carefully, or boosting can overfit the training data. Gradient Boosting is an example of a boosting algorithm, and gradient boosting algorithms are what I'll be applying to the current data next.
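The sequential residual-fitting logic described above can be sketched in a few lines: a toy squared-error booster on synthetic data, not the production algorithms used below.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy illustration of boosting: each new tree fits the residuals of the
# ensemble built so far (squared-error gradient boosting in miniature)
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())        # start from the mean
for _ in range(100):
    residuals = y - prediction                # negative gradient of squared error
    stump = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * stump.predict(X)

rmse = np.sqrt(np.mean((y - prediction) ** 2))
print(f"train RMSE after boosting: {rmse:.3f}")
```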
#### 5. Gradient Boosting
For the Gradient Boosting algorithm I will use 'huber' as the loss function as this is robust to outliers. The other parameters on display originate from other kernels tackling this challenge, followed by trial and error to refine them to this specific dataset. Again, applying GridSearchCV will help to define a better set of parameters than those currently on display.
```
# Initiating Gradient Boosting Regressor
model_gbr = GradientBoostingRegressor(n_estimators=1200,
                                      learning_rate=0.05,
                                      max_depth=4,
                                      max_features='sqrt',
                                      min_samples_leaf=15,
                                      min_samples_split=10,
                                      loss='huber',
                                      random_state=5)
```
#### 6. XGBoost
Another gradient boosting algorithm; one that's well documented as being the key to many winning solutions on Kaggle.
```
# Initiating XGBRegressor
model_xgb = xgb.XGBRegressor(colsample_bytree=0.2,
                             learning_rate=0.06,
                             max_depth=3,
                             n_estimators=1150)
```
#### 7. LightGBM
A more recent gradient boosting algorithm which boasts significantly faster runtime than XGBoost, while still offering best-in-class predictive power.
```
# Initiating LGBMRegressor model
model_lgb = lgb.LGBMRegressor(objective='regression',
                              num_leaves=4,
                              learning_rate=0.05,
                              n_estimators=1080,
                              max_bin=75,
                              bagging_fraction=0.80,
                              bagging_freq=5,
                              feature_fraction=0.232,
                              feature_fraction_seed=9,
                              bagging_seed=9,
                              min_data_in_leaf=6,
                              min_sum_hessian_in_leaf=11)
```
#### 8. CatBoost
All the way from Russia, CatBoost is a new gradient boosting algorithm able to work with categorical features <b>without</b> any prior processing needed. I am still finding my feet with implementing the CatBoostRegressor - thus this section of the kernel is very much a work in progress. Any guidance on working with this algorithm would be greatly appreciated - especially with regards to performing cross-validation and hyperparameter tuning. The below parameters again came from my own trial & error.
```
# Initiating CatBoost Regressor model
model_cat = CatBoostRegressor(iterations=2000,
                              learning_rate=0.10,
                              depth=3,
                              l2_leaf_reg=4,
                              border_count=15,
                              loss_function='RMSE',
                              verbose=200)
# Initiating parameters ready for CatBoost's CV function, which I will use below
params = {'iterations': 2000,
          'learning_rate': 0.10,
          'depth': 3,
          'l2_leaf_reg': 4,
          'border_count': 15,
          'loss_function': 'RMSE',
          'verbose': 200}
```
## 2. Fitting algorithms
### Fit all models
I'll now run the custom rmse_cv function on each algorithm to understand each model's performance. This function doesn't work for the CatBoost algorithm, so for CatBoost I will use its own CV function instead.
```
# Fitting all models with rmse_cv function, apart from CatBoost
cv_ridge = rmse_cv(model_ridge).mean()
cv_lasso = rmse_cv(model_lasso).mean()
cv_elastic = rmse_cv(model_elastic).mean()
cv_krr = rmse_cv(model_krr).mean()
cv_gbr = rmse_cv(model_gbr).mean()
cv_xgb = rmse_cv(model_xgb).mean()
cv_lgb = rmse_cv(model_lgb).mean()
# Define pool
pool = Pool(X_train, y_train)
# CV Catboost algorithm
cv_cat = cv(pool=pool, params=params, fold_count=10, shuffle=True)
# Take the mean train RMSE at the final iteration for comparison
cv_cat = cv_cat.at[1999, 'train-RMSE-mean']
```
### Rank model performance
The moment of truth - let's see how each algorithm has performed, and which one tops the pile.
```
# Creating a table of results, ranked highest to lowest
results = pd.DataFrame({
'Model': ['Ridge',
'Lasso',
'ElasticNet',
'Kernel Ridge',
'Gradient Boosting Regressor',
'XGBoost Regressor',
'Light Gradient Boosting Regressor',
'CatBoost'],
'Score': [cv_ridge,
cv_lasso,
cv_elastic,
cv_krr,
cv_gbr,
cv_xgb,
cv_lgb,
cv_cat]})
# Build dataframe of values
result_df = results.sort_values(by='Score', ascending=True).reset_index(drop=True)
result_df.head(8)
# Plotting model performance
f, ax = plt.subplots(figsize=(10, 6))
plt.xticks(rotation='90')
sns.barplot(x=result_df['Model'], y=result_df['Score'])
plt.xlabel('Models', fontsize=15)
plt.ylabel('Model performance', fontsize=15)
plt.ylim(0.10, 0.116)
plt.title('RMSE', fontsize=15)
```
We can see from the above graph that LASSO and ElasticNet are the best cross-validated models, scoring very closely to one another. Gradient boosting hasn't fared quite as well, although each algorithm still obtains a very respectable RMSE. The CatBoost model has not been cross-validated with the same rmse_cv function (its score above is a training RMSE), so I am not going to consider this algorithm for the time being.
## 3. Stacking algorithms
I've run eight models thus far, and they've all performed pretty well. I'm now quite keen to explore stacking as a means of achieving an even higher score. In a nutshell, stacking uses the predictions of a few base models as a first level, and then uses another model at the second level to predict the output from those first-level predictions. Stacking can be beneficial because combining models pools the best elements of their predictive power on the given challenge, smoothing over the gaps left by any individual model and increasing the likelihood of stronger overall performance.
Ok, let's get model predictions and then stack the results!
```
# Fit and predict all models
model_lasso.fit(X_train, y_train)
lasso_pred = np.expm1(model_lasso.predict(X_test))
model_elastic.fit(X_train, y_train)
elastic_pred = np.expm1(model_elastic.predict(X_test))
model_ridge.fit(X_train, y_train)
ridge_pred = np.expm1(model_ridge.predict(X_test))
model_xgb.fit(X_train, y_train)
xgb_pred = np.expm1(model_xgb.predict(X_test))
model_gbr.fit(X_train, y_train)
gbr_pred = np.expm1(model_gbr.predict(X_test))
model_lgb.fit(X_train, y_train)
lgb_pred = np.expm1(model_lgb.predict(X_test))
model_krr.fit(X_train, y_train)
krr_pred = np.expm1(model_krr.predict(X_test))
model_cat.fit(X_train, y_train)
cat_pred = np.expm1(model_cat.predict(X_test))
```
## Final predictions
Now to create the stacked model! I'm going to keep this very simple by weighting every model equally: summing the predictions and dividing by the model count. Weighted averages could be a means of gaining slightly better final predictions, whereby the best-performing models take a bigger cut of the stacked model. One of the more important considerations when undertaking any kind of model stacking is model independence. Stacking models that draw similar conclusions from the data is unlikely to beat a single model, because no additional insight is drawn out. Models that tackle the dataset in different ways, and that are able to detect unique aspects within it, stand a better chance of contributing to a more powerful overall stacked model, since as a whole, more of the nuances within the data are recognised and accounted for.
Please note, I am not going to include the CatBoost model, as I found the final prediction declined when it was included - looking at the output, it appears to be overfitting the data (visible through the differing learn/test scores). I will return to this model later with a view to improving its application to the current dataset.
```
# Create stacked model
stacked = (lasso_pred + elastic_pred + ridge_pred + xgb_pred + lgb_pred + krr_pred + gbr_pred) / 7
# Setting up competition submission
sub = pd.DataFrame()
sub['Id'] = test_ID
sub['SalePrice'] = stacked
sub.to_csv('house_price_predictions.csv',index=False)
```
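For reference, the weighted-average variant mentioned above might look like this (hypothetical predictions and weights, purely illustrative):

```python
import numpy as np

# Hypothetical predictions from three of the models above; better
# cross-validated models are given a bigger share of the final prediction
preds = {
    "lasso":   np.array([120000., 210000., 95000.]),
    "elastic": np.array([121500., 208000., 96500.]),
    "xgb":     np.array([118000., 215000., 93000.]),
}
weights = {"lasso": 0.4, "elastic": 0.4, "xgb": 0.2}  # must sum to 1

stacked = sum(weights[name] * p for name, p in preds.items())
print(stacked)  # → [120200. 210200.  95200.]
```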
And there you have it! Within this kernel I have performed simple data preparation techniques before applying several models, and then combining their performance into a single stacked model. This achieved a final RMSE that pitched me within the top 12% of the leaderboard.
I hope the approach and techniques on display in this kernel have been helpful in terms of not just solving the current challenges, but other regression and broader machine learning challenges.
If this kernel has indeed helped you - I'd very much like to hear it :). Please also share with me any suggestions that could improve my final model; I'm always looking to learn more. In terms of future versions, I aim to tackle the following:
- Perfecting the CatBoost model
- Performing a more rigorous GridSearchCV
- Exploring more complex methods of model stacking for better final prediction.
Thank you for reading :).
# Getting started
## Install and import `sme`
```
!pip install -q sme
import sme
from matplotlib import pyplot as plt
import numpy as np
print("sme version:", sme.__version__)
```
## Importing a model
- to load an existing sme or xml file: `sme.open_file('model_filename.xml')`
- to load a built-in example model: `sme.open_example_model()`
```
my_model = sme.open_example_model()
```
## Getting help
- to see the type of an object: `type(object)`
- to print a one line description of an object: `repr(object)`
- to print a multi-line description of an object: `print(object)`
- to get help on an object, its methods and properties: `help(object)`
```
type(my_model)
repr(my_model)
print(my_model)
help(my_model)
```
## Viewing model contents
- the compartments in a model can be accessed as a list: `model.compartments`
- the list can be iterated over, or an item looked up by index or name
- other lists of objects, such as species in a compartment, or parameters in a reaction, behave in the same way
### Iterating over compartments
```
for compartment in my_model.compartments:
    print(repr(compartment))
```
### Get compartment by name
```
cell_compartment = my_model.compartments["Cell"]
print(repr(cell_compartment))
```
### Get compartment by list index
```
last_compartment = my_model.compartments[-1]
print(repr(last_compartment))
```
### Display geometry of compartments
```
fig, axs = plt.subplots(nrows=1, ncols=len(my_model.compartments), figsize=(18, 12))
for (ax, compartment) in zip(axs, my_model.compartments):
    ax.imshow(compartment.geometry_mask, interpolation="none")
    ax.set_title(f"{compartment.name}")
    ax.set_xlabel("x")
    ax.set_ylabel("y")
plt.show()
```
### Display parameter names and values
```
my_reac = my_model.compartments["Nucleus"].reactions["A to B conversion"]
print(my_reac)
for param in my_reac.parameters:
    print(param)
```
## Editing model contents
- Parameter values and object names can be changed by assigning new values to them
### Names
```
print(f"Model name: {my_model.name}")
my_model.name = "New model name!"
print(f"Model name: {my_model.name}")
```
### Model Parameters
```
param = my_model.parameters[0]
print(f"{param.name} = {param.value}")
param.value = "2.5"
print(f"{param.name} = {param.value}")
```
### Reaction Parameters
```
k1 = my_model.compartments["Nucleus"].reactions["A to B conversion"].parameters["k1"]
print(f"{k1.name} = {k1.value}")
k1.value = 0.72
print(f"{k1.name} = {k1.value}")
```
### Species Initial Concentrations
- can be Uniform (`float`), Analytic (`str`) or Image (`np.ndarray`)
```
species = my_model.compartments["Cell"].species[0]
print(
f"Species '{species.name}' has initial concentration of type '{species.concentration_type}', value '{species.uniform_concentration}'"
)
species.uniform_concentration = 1.3
print(
f"Species '{species.name}' has initial concentration of type '{species.concentration_type}', value '{species.uniform_concentration}'"
)
plt.imshow(species.concentration_image)
plt.colorbar()
plt.show()
species.analytic_concentration = "3 + 2*cos(x/2)+sin(y/3)"
print(
f"Species '{species.name}' has initial concentration of type '{species.concentration_type}', expression '{species.analytic_concentration}'"
)
plt.imshow(species.concentration_image)
plt.colorbar()
plt.show()
# generate concentration image with concentration = x + y
new_concentration_image = np.zeros(species.concentration_image.shape)
for index, _ in np.ndenumerate(new_concentration_image):
    new_concentration_image[index] = index[0] + index[1]
species.concentration_image = new_concentration_image
print(
f"Species '{species.name}' has initial concentration of type '{species.concentration_type}'"
)
plt.imshow(species.concentration_image)
plt.colorbar()
plt.show()
```
## Exporting a model
- to save the model, including any simulation results: `model.export_sme_file('model_filename.sme')`
- to export the model as an SBML file (no simulation results): `model.export_sbml_file('model_filename.xml')`
```
my_model.export_sme_file("model.sme")
```
# Mount Drive & Login to Wandb
```
from google.colab import drive
from getpass import getpass
import urllib
import os
# Mount drive
drive.mount('/content/drive')
!pip install wandb -qqq
!wandb login
```
# Install dependencies
```
!rm -r pearl
!git clone https://github.com/PAL-ML/PEARL_v1.git pearl
%cd pearl
!pip install -r requirements.txt
%cd ..
!pip install git+https://github.com/ankeshanand/pytorch-a2c-ppo-acktr-gail
!pip install git+https://github.com/mila-iqia/atari-representation-learning.git
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
!pip install ftfy regex tqdm
!pip install git+https://github.com/openai/CLIP.git
!pip install git+https://github.com/openai/baselines
! wget http://www.atarimania.com/roms/Roms.rar
! unrar x Roms.rar
! unzip ROMS.zip
! python -m atari_py.import_roms /content/ROMS
```
# Imports
```
# ML libraries
import torch.nn as nn
import torch
import pearl.src.benchmark.colab_data_preprocess as data_utils
from pearl.src.benchmark.probe_training_wrapper import run_probe_training
from pearl.src.benchmark.utils import appendabledict
# Models
import clip
# Data processing
from PIL import Image
from torchvision.transforms import Compose, Resize, Normalize
# Misc
import numpy as np
import wandb
import os
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
```
# Helper functions
```
class ClipEncoder(nn.Module):
    def __init__(self, input_channels, feature_size):
        super().__init__()
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.clip_model, _ = clip.load("ViT-B/32", device=self.device, jit=False)
        self.preprocess = Compose([
            Resize((224, 224), interpolation=Image.BICUBIC),
            Normalize(
                (0.48145466, 0.4578275, 0.40821073),
                (0.26862954, 0.26130258, 0.27577711)
            )
        ])
        self.feature_size = feature_size
        self.input_channels = input_channels

    def forward(self, inputs):
        x = self.get_clip_features(inputs)
        x = x.view(x.size(0), -1)
        return x

    def get_clip_features(self, image):
        with torch.no_grad():
            image_features = self.clip_model.encode_image(self.preprocess(image)).float()
        return image_features
def concat_multiple_patch_embeddings_with_full_img(patch_eps1, patch_eps2, full_eps, num_patches1=16, num_patches2=4):
    processed_eps = []
    for i, ep in enumerate(patch_eps1):
        processed_ep = []
        for j, s in enumerate(ep):
            if j % num_patches1 == 0:
                processed_emb_4x4 = torch.cat(patch_eps1[i][j:j+num_patches1], dim=0)
                index_full = j // num_patches1
                index_2x2 = j // num_patches2
                processed_emb_2x2 = torch.cat(patch_eps2[i][index_2x2:index_2x2+num_patches2], dim=0)
                processed_emb = torch.cat([processed_emb_4x4, processed_emb_2x2, full_eps[i][index_full]], dim=0)
                processed_ep.append(processed_emb)
        processed_eps.append(processed_ep)
    return processed_eps
```
# Initialization & constants
General
```
env_name = "BreakoutNoFrameskip-v4"
collect_mode = "random_agent" # random_agent or ppo_agent
steps = 50000
training_input = "embeddings" # embeddings or images
probe_type = "linear"
use_encoder = False
input_resolution1 = "4x4patch" # full-image, 4x4patch, 2x2patch
input_resolution2 = "2x2patch" # full-image, 4x4patch, 2x2patch
input_resolution3 = "full-image" # full-image, 4x4patch, 2x2patch
num_patches1 = 16 # 4 = 2x2 patches
num_patches2 = 4
```
Encoder Params
```
feature_size = 512 * (num_patches1 + num_patches2 + 1) # + 1 for full image
input_channels = 3
```
Probe Params
```
lr = 3e-4
batch_size = 64
num_epochs = 100
patience = 15
probe_name = input_resolution1 + "&" + input_resolution2 + "&" + input_resolution3 + "-" + probe_type
```
Paths
```
data_path_suffix = "-latest-04-05-2021"
models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/probes", probe_name, env_name)
data_dir = os.path.join("/content/drive/MyDrive/PAL_HILL_2021/Atari-RL/Images_Labels_Clip_embeddings", env_name + data_path_suffix)
if not os.path.exists(models_dir):
    os.makedirs(models_dir)
```
Wandb
```
wandb.init(project='atari-clip')
config = wandb.config
config.game = "{}-4x4patch&2x2patch&FullImage-Linear".format(env_name.replace("NoFrameskip-v4", ""))
wandb.run.name = "{}_Linear_4x4patch&2x2patch&FullImage_may_08".format(env_name.replace("NoFrameskip-v4", ""))
```
# Get episode data
```
tr_episodes_patches, val_episodes_patches,\
tr_labels, val_labels,\
test_episodes_patches, test_labels = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution1)
tr_episodes_patches2, val_episodes_patches2,\
tr_labels, val_labels,\
test_episodes_patches2, test_labels = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution2)
tr_episodes_full, val_episodes_full,\
tr_labels, val_labels,\
test_episodes_full, test_labels = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution3)
```
# Load model
```
encoder = ClipEncoder(input_channels=input_channels,feature_size=feature_size)
```
# Run probe training
```
# Combine the patch and full-image embeddings for each split before training;
# presumably the concat helper defined above was intended for this step
tr_episodes = concat_multiple_patch_embeddings_with_full_img(tr_episodes_patches, tr_episodes_patches2, tr_episodes_full, num_patches1, num_patches2)
val_episodes = concat_multiple_patch_embeddings_with_full_img(val_episodes_patches, val_episodes_patches2, val_episodes_full, num_patches1, num_patches2)
test_episodes = concat_multiple_patch_embeddings_with_full_img(test_episodes_patches, test_episodes_patches2, test_episodes_full, num_patches1, num_patches2)

run_probe_training(training_input, encoder, probe_type, num_epochs, lr, patience, wandb, models_dir, batch_size,
                   tr_episodes, val_episodes, tr_labels, val_labels, test_episodes, test_labels, use_encoder=use_encoder)
```
### Functions
Functions represent reusable blocks of code that you can reference by name, pass information into to customize the execution of the function, and receive a response from representing the outcome of the defined code. *When would you want to define a function?* You should consider defining a function when you find yourself entering very similar code to execute variations of the same process. The dataset used for the following example is part of the supplementary materials ([Data S1 - Egg Shape by Species](https://science.sciencemag.org/highwire/filestream/695981/field_highwire_adjunct_files/2/aaj1945_DataS1_Egg_shape_by_species_v2.xlsx)) for Stoddard et al. (2017).
Mary Caswell Stoddard, Ee Hou Yong, Derya Akkaynak, Catherine Sheard, Joseph A. Tobias, L. Mahadevan. 2017. "Avian egg shape: Form, function, and evolution". Science. 23 June 2017. Vol. 356, Issue 6344. pp. 1249-1254. DOI: 10.1126/science.aaj1945. [https://science.sciencemag.org/content/356/6344/1249](https://science.sciencemag.org/content/356/6344/1249)
A sample workflow without functions:
```
# read data into a list of dictionaries
import csv
# create an empty list that will be filled with the rows of data from the CSV as dictionaries
csv_content = []
# open and loop through each line of the csv file to populate our data file
with open('aaj1945_DataS1_Egg_shape_by_species_v2.csv') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    for row in csv_reader:  # process each row of the csv file
        csv_content.append(row)
print(csv_content[0].keys())
#print()
#print(csv_content[0])
# extract content of each "column" individually
order = []
for item in csv_content:
    try:
        order.append(item['Order'])
    except:
        order.append(None)

family = []
for item in csv_content:
    try:
        family.append(item['Family'])
    except:
        family.append(None)

species = []
for item in csv_content:
    try:
        species.append(item['Species'])
    except:
        species.append(None)

asymmetry = []
for item in csv_content:
    try:
        asymmetry.append(item['Asymmetry'])
    except:
        asymmetry.append(None)

ellipticity = []
for item in csv_content:
    try:
        ellipticity.append(item['Ellipticity'])
    except:
        ellipticity.append(None)

avgLength = []
for item in csv_content:
    try:
        avgLength.append(item['AvgLength (cm)'])
    except:
        avgLength.append(None)

noImages = []
for item in csv_content:
    try:
        noImages.append(item['Number of images'])
    except:
        noImages.append(None)

noEggs = []
for item in csv_content:
    try:
        noEggs.append(item['Number of eggs'])
    except:
        noEggs.append(None)
print(order[0:3])
print(family[0:3])
print(species[0:3])
print(asymmetry[0:3])
print(ellipticity[0:3])
print(avgLength[0:3])
print(noImages[0:3])
print(noEggs[0:3])
# define a function that can extract a named column from a named list of dictionaries
def extract_column(source_list, source_column):
new_list = []
for item in source_list:
try:
new_list.append(item[source_column])
except:
new_list.append(None)
print(source_column + ": " + ", ".join(new_list[0:3]))
return(new_list)
order = extract_column(csv_content, 'Order')
family = extract_column(csv_content, 'Family')
species = extract_column(csv_content, 'Species')
asymmetry = extract_column(csv_content, 'Asymmetry')
ellipticity = extract_column(csv_content, 'Ellipticity')
avgLength = extract_column(csv_content, 'AvgLength (cm)')
noImages = extract_column(csv_content, 'Number of images')
noEggs = extract_column(csv_content, 'Number of eggs')
print()
print(order[0:3])
print(family[0:3])
print(species[0:3])
print(asymmetry[0:3])
print(ellipticity[0:3])
print(avgLength[0:3])
print(noImages[0:3])
print(noEggs[0:3])
# use the extract_column function in a loop to automatically extract all of the columns from a from the list
# of dictionaries to create a dictionary representing each column of values
columns = {}
for column in csv_content[0].keys():
columns[column] = extract_column(csv_content, column)
columns
```
### Putting it all together
An example of reading a data file and doing basic work with it illustrates all of these concepts. This also illustrates the concept of writing a script that combines all of your commands into a file that can be run. [eggs.py](eggs.py) in this case.
```
#!/usr/bin/env python
import csv

# create an empty list that will be filled with the rows of data from the CSV as dictionaries
csv_content = []

# open and loop through each line of the csv file to populate our data file
with open('aaj1945_DataS1_Egg_shape_by_species_v2.csv') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    for row in csv_reader:  # process each row of the csv file
        csv_content.append(row)

print("keys: " + ", ".join(csv_content[0].keys()))
print()
print()

# define a function that can extract a named column from a named list of dictionaries
def extract_column(source_list, source_column):
    new_list = []
    for item in source_list:
        try:
            new_list.append(item[source_column])
        except:
            new_list.append(None)
    print(source_column + ": " + ", ".join(new_list[0:3]))
    return new_list

order = extract_column(csv_content, 'Order')
family = extract_column(csv_content, 'Family')
species = extract_column(csv_content, 'Species')
asymmetry = extract_column(csv_content, 'Asymmetry')
ellipticity = extract_column(csv_content, 'Ellipticity')
avgLength = extract_column(csv_content, 'AvgLength (cm)')
noImages = extract_column(csv_content, 'Number of images')
noEggs = extract_column(csv_content, 'Number of eggs')

print()
print(order[0:3])
print(family[0:3])
print(species[0:3])
print(asymmetry[0:3])
print(ellipticity[0:3])
print(avgLength[0:3])
print(noImages[0:3])
print(noEggs[0:3])

# Calculate and print some statistics
print()
mean_asymmetry = sum(map(float, asymmetry))/len(asymmetry)
print("Mean Asymmetry: ", str(mean_asymmetry))
mean_ellipticity = sum(map(float, ellipticity))/len(ellipticity)
print("Mean Ellipticity: ", str(mean_ellipticity))
mean_avglength = sum(map(float, avgLength))/len(avgLength)
print("Mean Average Length: ", str(mean_avglength))
```
To execute this script you can use a couple of strategies:
1. Run it using the python interpreter of your choice using the `python eggs.py` command at the command line
2. Run it using the python interpreter referenced in the `#!` line at the beginning of the script by making sure that the script is executable (`ls -l` can provide information about whether a file is executable, `chmod u+x eggs.py` can make your script executable for the user that owns the file), and entering the name of the script on the command line: `./eggs.py` if the script is in the current directory.
Walk-through
============
This walk-through guides users through several key concepts for using the nervana graph. The corresponding jupyter notebook is found [here](https://github.com/NervanaSystems/ngraph-neon/blob/master/examples/walk_through/Graph_Introduction.ipynb).
Let's begin with a very simple example: computing ``x+1`` for several values of ``x`` using the ``ngraph``
API. We should think of the computation as being invoked from the *host*, but possibly taking place
somewhere else, which we will refer to as *the device.*
The nervana graph currently uses a compilation model. Users first define the computations by building a graph of operations, then they are compiled and run. In the future, we plan an even more compiler-like approach, where an executable is produced that can later be run on various platforms, in addition to an interactive version.
Our first program will use ngraph to compute ``x+1`` for each ``x`` provided.
The x+1 program
---------------
The complete program, which we will walk through, is:
```
from __future__ import print_function
from contextlib import closing
import neon as ng
import neon.transformers as ngt
# Build the graph
x = ng.placeholder(axes=())
x_plus_one = x + 1
# Select a transformer
with closing(ngt.make_transformer()) as transformer:
# Define a computation
plus_one = transformer.computation(x_plus_one, x)
# Run the computation
for i in range(5):
print(plus_one(i))
```
We begin by importing ``ngraph``, the Python module for graph construction, and ``ngraph.transformers``, the module for transformer operations.
```
import neon as ng
import neon.transformers as ngt
```
Next, we create a computational graph, which we refer to as ngraph, for the computation. Following TensorFlow terminology, we use ``placeholder`` to define a port for transferring tensors between the host and the device. ``Axes`` are used to tell the graph the tensor shape. In this example, ``x`` is a scalar so the axes are empty.
```
x = ng.placeholder(axes=())
```
``x`` can be thought of as a dummy node of the ngraph, providing an entry point for data into the computational graph. The ``ngraph`` graph construction API uses functions to build a graph of ``Op`` objects, the ngraph. Each function may add operations to the ngraph, and will return an ``Op`` that represents the computation. Below, implicitly using ngraph (as will become evident in the next step), we add an ``Op`` to the ngraph that takes as input the tensor ``x`` just defined and the constant number 1.
```
x_plus_one = x + 1
```
A bit of behind the scenes magic occurs with the Python number ``1`` in the expression above, which is not an ``Op``. When an argument to a graph constructor is not an ``Op``, nervana graph will attempt to convert it to an ``Op`` using ``ng.constant``, the graph function for creating a constant.
Thus, what is really happening when we define ``x_plus_one`` as above is:
```
x_plus_one = ng.add(x, ng.constant(1))
```
For more information about the Op hierarchy please visit: https://ngraph.nervanasys.com/docs/latest/building_graphs.html <br>
<br>At this point, our computational graph has been defined, with only one function to compute, represented by ``x_plus_one``. Once the ngraph is defined, we can compile it with a *transformer*. Here we use ``make_transformer`` to make a default transformer. We tell the transformer the function to compute, ``x_plus_one``, and the associated input parameters, only ``x`` in our example. The constant need not be repeated here, as it is part of the definition of the function to compute. The current default transformer uses NumPy for execution.
```
# Select a transformer
with closing(ngt.make_transformer()) as transformer:
# Define a computation
plus_one = transformer.computation(x_plus_one, x)
# Run the computation
for i in range(5):
print(plus_one(i))
```
The first time the transformer executes a computation, the ngraph is analyzed and compiled, and storage is allocated and initialized on the device. Once compiled, the computations are callable Python objects residing on the host. On each call to ``plus_one`` the value of ``x`` is copied to the device, 1 is added, and then the result is copied
back from the device to the host.
### The Compiled x + 1 Program
The compiled code, to be executed on the device, can be examined (it is currently written to the ``/tmp`` folder) to view the runtime device model. Here we show the code with some clarifying comments.
```
class Model(object):
def __init__(self):
self.a_AssignableTensorOp_0_0 = None
self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_ = None
self.a_AssignableTensorOp_1_0 = None
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_ = None
self.a_AddZeroDim_0_0 = None
self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_ = None
self.be = NervanaObject.be
def alloc_a_AssignableTensorOp_0_0(self):
self.update_a_AssignableTensorOp_0_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AssignableTensorOp_0_0(self, buffer):
self.a_AssignableTensorOp_0_0 = buffer
self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def alloc_a_AssignableTensorOp_1_0(self):
self.update_a_AssignableTensorOp_1_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AssignableTensorOp_1_0(self, buffer):
self.a_AssignableTensorOp_1_0 = buffer
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def alloc_a_AddZeroDim_0_0(self):
self.update_a_AddZeroDim_0_0(np.empty(1, dtype=np.dtype('float32')))
def update_a_AddZeroDim_0_0(self, buffer):
self.a_AddZeroDim_0_0 = buffer
self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_ = np.ndarray(shape=(), dtype=np.float32,
buffer=buffer, offset=0, strides=())
def allocate(self):
self.alloc_a_AssignableTensorOp_0_0()
self.alloc_a_AssignableTensorOp_1_0()
self.alloc_a_AddZeroDim_0_0()
def Computation_0(self):
np.add(self.a_AssignableTensorOp_0_0_v_AssignableTensorOp_0_0_,
self.a_AssignableTensorOp_1_0_v_AssignableTensorOp_1_0_,
out=self.a_AddZeroDim_0_0_v_AddZeroDim_0_0_)
def init(self):
pass
```
Tensors have two components:
- storage for their elements (using the convention ``a_`` for the allocated storage of a tensor) and
- views of that storage (denoted as ``a_...v_``).
The ``alloc_`` methods allocate storage and then create the views of the storage that will be needed. The view creation is separated from the allocation because storage may be allocated in multiple ways.
Each allocated storage can also be initialized to, for example, random Gaussian variables. In this example, there are no initializations, so the method ``init``, which performs the one-time device
initialization, is empty. Constants, such as 1, are copied to the device as part of the allocation process.
The method ``Computation_0`` handles the ``plus_one`` computation. Clearly this is not the optimal way to add 1 to a scalar,
so let's look at a more complex example next in the Logistic Regression walk-through.
# Geography as Feature
```
import pandas as pd
import geopandas as gpd
import libpysal as lp
import matplotlib.pyplot as plt
import rasterio as rio
import numpy as np
import contextily as ctx
import shapely.geometry as geom
%matplotlib inline
```
Today, we'll talk about representing spatial relationships in Python using PySAL's *spatial weights* functionality. This provides a unified way to express the spatial relationships between observations.
First, though, we'll need to read in our data built in the `relations.ipynb` notebook: Airbnb listings & nightly prices for neighbourhoods in Austin.
```
listings = gpd.read_file('../data/listings.gpkg').to_crs(epsg=3857)
neighborhoods = gpd.read_file('../data/neighborhoods.gpkg').to_crs(epsg=3857)
listings.head()
listings.hood
```
Further, we'll grab a basemap for our study area using `contextily`. Contextily is a package designed to provide basemaps for spatial data. It's best used for data in Web Mercator or raw WGS84 longitude-latitude coordinates.
Below, we are going to grab the basemap images for the `total_bounds` of our study area at a given zoom level. Further, we specify a different tile server from the default, the [Stamen Maps `toner-lite` tiles](http://maps.stamen.com/m2i/#toner-lite/1500:1000/12/47.5462/7.6196), since we like its aesthetics.
```
basemap, bounds = ctx.bounds2img(*listings.total_bounds, zoom=10,
url=ctx.tile_providers.ST_TONER_LITE)
```
Spatial plotting has come a long way since we first started in spatial data science. But, a few tricks for `geopandas` are still somewhat arcane, so it's useful to know them.
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# TRICK 1: when you only want to plot the boundaries, not the polygons themselves:
neighborhoods.boundary.plot(color='k', ax=ax)
ax.imshow(basemap, extent=bounds, interpolation='bilinear')
ax.axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
# TRICK 2: Sorting the data before plotting it will ensure that
# the highest (or lowest) categories are prioritized in the plot.
# Use this to mimick blending or control the order in which alpha blending might occur.
listings.sort_values('price').plot('price', ax=ax, marker='o', cmap='plasma', alpha=.5)
```
# Spatial Weights: expressing spatial relationships mathematically
Spatial weights matrices are mathematical objects that are designed to express the inter-relationships between sites in a given geolocated frame of analysis.
This means that the relationships between each site (of which there are usually $N$) to every other site is *represented* by the weights matrix, which is some $N \times N$ matrix of "weights," which are scalar numerical representations of these relationships.
In a similar fashion to *affinity matrices* in machine learning, spatial weights matrices are used in a wide variety of problems and models in quantitative geography and spatial data science to express the spatial relationships present in our data.
In Python, PySAL's `W` class is the main method by which people construct & represent spatial weights. This means that arbitrary inter-site linkages can be expressed using one dictionary, plus another *optional* dictionary:
- **a `neighbors` dictionary,** which encodes a *focal observation*'s "name" and the "names" of the other observations to which the focal is linked.
- **a `weights` dictionary,** which encodes how strongly each of the neighbors are linked to the focal observation.
Usually, these are one-to-many mappings: dictionaries keyed by the "focal" observation, whose values are lists of the names to which the key is attached.
An example below shows three observations, `a`,`b`, and `c`, arranged in a straight line:
```
neighbors = dict(a = ['b'],
b = ['a','c'],
c = ['b']
)
```
Connectivity strength is recorded in a separate dictionary whose keys should align with the `neighbors`:
```
weights = dict(a = [1],
b = [.2, .8],
c = [.3]
)
```
To construct the most generic spatial weights object, only the `neighbors` dictionary is required; the `weights` will be assumed to be one everywhere.
```
binary = lp.weights.W(neighbors) # assumes all weights are one
binary.weights
weighted = lp.weights.W(neighbors, weights=weights)
weighted.weights
```
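Before moving on, it may help to see the dense matrix these two dictionaries imply. A minimal sketch in plain NumPy (no PySAL), assuming the rows and columns are ordered `a`, `b`, `c`:

```python
import numpy as np

# The same toy neighbors/weights dictionaries as above.
neighbors = dict(a=['b'], b=['a', 'c'], c=['b'])
weights = dict(a=[1], b=[.2, .8], c=[.3])

# Build the dense N x N matrix: row = focal, column = neighbor.
ids = sorted(neighbors)                          # ['a', 'b', 'c']
index = {name: i for i, name in enumerate(ids)}
W = np.zeros((len(ids), len(ids)))
for focal, nbrs in neighbors.items():
    for nbr, w in zip(nbrs, weights[focal]):
        W[index[focal], index[nbr]] = w
print(W)
```

Each row sums to the total weight attached to that focal observation, and zeros mark pairs that are not linked at all.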
# Constructing different types of weights
By itself, this is not really useful; the hardest part of *using* these representations is constructing them from your original spatial data. Thus, we show below how this can be done. First, we cover *contiguity* weights, which are analogues of adjacency matrices. These are nearly always used for polygonal "lattice" data, but can also be used for points by examining their Voronoi diagram.
Second, we cover *distance* weights, which usually pertain to point data only. These tend to embed notions of distance decay, and are incredibly flexible for multiple forms of spatial data.
# Contiguity
Contiguity weights, or "adjacency matrices," are one common representation of spatial relationships that springs to mind when modeling how polygons relate to one another. In this representation, objects are considered "near" when they touch and "far" when they don't. Adjacency is considered a "binary" relationship, so all polygons that are near to one another are *as near as they are to any other near polygon*.
We've got fast algorithms to build these kinds of relationships from `shapely`/`geopandas`, as well as directly from files (without having to read all the data in at once).
```
Qneighbs = lp.weights.Queen.from_dataframe(neighborhoods)
```
The `pysal` library has gone under a bit of restructuring.
The main components of the package are migrated to `libpysal`, which forms the base of a constellation of spatial data science packages.
Given this, you can plot the adjacency graph for the polygons shown above as another layer in the plot. We will remove some of the layers to make the view simpler to examine:
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# when you only want to plot the boundaries:
neighborhoods.boundary.plot(color='k', ax=ax, alpha=.4)
Qneighbs.plot(neighborhoods, edge_kws=dict(linewidth=1.5, color='orangered'),
node_kws=dict(marker='*'), ax=ax)
plt.show()
```
We can check if individual observations are disconnected using the weights object's `islands` argument:
```
Qneighbs.islands
```
This is good news, as each polygon has at least one neighbor, and our graph has a single connected component.
PySAL weights can be used in other packages by converting them into their equivalent matrix representations. Sparse and dense array versions are offered, with `.sparse` providing the sparse matrix representation, and `.full()` providing the ids and dense matrix representing the graphs.
```
spqneighbs = Qneighbs.sparse
spqneighbs.eliminate_zeros()
```
Visualizing the matrix, you can see that the adjacency matrix is very sparse indeed:
```
plt.matshow(spqneighbs.toarray())
```
We can get the number of links as a percentage of all possible $N^2$ links from:
```
Qneighbs.pct_nonzero
```
This means that around 12.3% of all the possible connections between any two observations actually make it into the adjacency graph.
For contiguity matrices, this only has binary elements, recording 1 where two observations are linked. Everywhere else, the array is empty (zero, in a dense representation).
```
np.unique(spqneighbs.data)
```
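Under the hood, the percentage reported by `pct_nonzero` is simply the count of stored links divided by the $N^2$ possible links, times 100. A quick sketch with a toy 4x4 adjacency matrix (illustrative values, not the Austin data):

```python
import numpy as np
from scipy import sparse

# A toy symmetric adjacency matrix for 4 observations on a line.
A = sparse.csr_matrix(np.array([[0, 1, 0, 0],
                                [1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0]]))

# Share of stored (nonzero) links among all N*N possible directed links.
pct_nonzero = 100.0 * A.nnz / (A.shape[0] * A.shape[1])
print(pct_nonzero)  # 6 links out of 16 cells -> 37.5
```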
Fortunately for us, PySAL plays really well with SciPy & other things built on top of it. So, the [compressed sparse graph (`csgraph`)](https://docs.scipy.org/doc/scipy/reference/sparse.csgraph.html) module in SciPy works wonders with the PySAL sparse weights representations. Thus, we often jump back and forth between PySAL weights and SciPy tools when working with these spatial representations of data.
```
import scipy.sparse.csgraph as csgraph
```
Now, in `csgraph`, there are a ton of tools to work with graphs. For example, we could use `csgraph.connected_components`:
```
number_connected, labels = csgraph.connected_components(spqneighbs)
```
And verify that we have a single connected component:
```
print(number_connected, labels)
Qconnected = lp.weights.Queen.from_dataframe(neighborhoods)
Qconnected.plot(neighborhoods, node_kws=dict(marker='*'), edge_kws=dict(linewidth=.4))
neighborhoods.boundary.plot(color='r', ax=plt.gca())
```
In addition, we could use the `lp.weights.w_subset` function, which would avoid re-constructing the weights again. This might help if they are truly massive, but it's often just as expensive to discover the subset as it is to construct a new weights object from it.
```
Qconnected2 = lp.weights.w_subset(Qneighbs, ids=[i for i in range(Qneighbs.n) if labels[i] == 0])
```
Sometimes, if `pandas` rearranges the dataframes, these will appear to be different weights since the ordering is different. To check if two weights objects are identical, a simple test is to check the sparse matrices for **in**equality:
```
(Qconnected2.sparse != Qconnected.sparse).sum()
```
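As a toy sketch of that check (illustrative matrices, not the neighborhood weights): comparing two sparse matrices with `!=` yields a sparse boolean matrix, and summing it counts the cells that disagree, so a sum of zero means the two objects encode identical graphs.

```python
import numpy as np
from scipy import sparse

X = sparse.csr_matrix(np.array([[0, 1], [1, 0]]))
Y = sparse.csr_matrix(np.array([[0, 1], [1, 0]]))   # same graph as X
Z = sparse.csr_matrix(np.array([[0, 1], [0, 0]]))   # one link missing

# Zero disagreeing cells -> identical; nonzero -> different.
print((X != Y).sum())
print((X != Z).sum())
```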
### Alternative Representations
PySAL, by default, tends to focus on a single `W` object, which provides easy tools to construct & work with the accompanying sparse matrix representations.
However, it's often the case we want alternative representations of the same relationships.
One handy one is the weights list. This is an alternative form of expressing a weights matrix, and provides a copy of the underlying `W.sparse.data`, made more regular and put into a pandas dataframe.
```
adjlist = Qconnected.to_adjlist()
adjlist.head()
```
This is handy if you'd rather work with the representation in terms of individual edges, rather than in sets of edges.
Also, it is exceptionally handy when you want to ask questions about the data used to generate the spatial weights, since it lets you attach this data to each of the focal pairs and ask questions about the associated data at that level.
For example, say we get the median price of airbnbs within a given neighbourhood:
```
listings.price.dtype
listings.price
price = listings[['price']].replace('[\$,]', '', regex=True).astype(float)
price.mean(), price.max(), price.median(), price.min()
listings['price'] = price
```
Now, we are going to attach that back to the dataframe containing the neighbourhood information.
```
median_prices = gpd.sjoin(listings[['price', 'geometry']], neighborhoods, op='within')\
.groupby('index_right').price.median()
median_prices.head()
neighborhoods = neighborhoods.merge(median_prices.to_frame('median_price'),
left_index=True, right_index=True, how='left')
```
Then, we can map this information at the neighbourhood level, computed from the individual listings within each neighbourhood:
```
f = plt.figure(figsize=(8,8))
ax = plt.gca()
# when you only want to plot the boundaries:
neighborhoods.plot('median_price', cmap='plasma', alpha=.7, ax=ax)
#basemap of the area
ax.imshow(basemap, extent=bounds, interpolation='gaussian')
ax.axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
#if you want the highest values to show on top of lower ones
plt.show()
```
Then, to examine the local relationships in price between nearby places, we could merge this information back up with the weights list and get the difference in price between every adjacent neighbourhood.
Usually, these joins involve building links between both the focal and neighbor observation IDs. You can do this simply by piping together two merges: one that focuses on the "focal" index and one that focuses on the "neighbor" index.
Using a suffix in the latter merge will give the data joined on the focal index a distinct name from that joined on the neighbor index.
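Before doing this on the real data, the two-merge pattern can be sketched with a toy adjacency list and a made-up attribute table (the names and prices below are illustrative):

```python
import pandas as pd

# Two observations linked to each other, plus an attribute per observation.
adjlist = pd.DataFrame({'focal': [0, 1], 'neighbor': [1, 0]})
attrs = pd.DataFrame({'median_price': [100.0, 150.0]}, index=[0, 1])

# Merge once on the focal id, once on the neighbor id; the suffixes in the
# second merge disambiguate the colliding 'median_price' columns.
merged = adjlist.merge(attrs, left_on='focal', right_index=True, how='left')\
                .merge(attrs, left_on='neighbor', right_index=True, how='left',
                       suffixes=('_focal', '_neighbor'))
print(merged)
```

Each edge row now carries the attribute for both of its endpoints, which is exactly what we need to compare adjacent neighbourhoods.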
```
adjlist = adjlist.merge(neighborhoods[['hood_id',
'median_price']],
left_on='focal', right_index=True, how='left')\
.merge(neighborhoods[['hood_id',
'median_price']],
left_on='neighbor', right_index=True ,how='left',
suffixes=('_focal', '_neighbor'))
adjlist.head()
adjlist.median_price_neighbor
```
Then, we can group by the `focal` index and take the difference of the prices.
```
pricediff = adjlist[['median_price_focal',
'median_price_neighbor']].diff(axis=1)
pricediff.head()
```
We can link this back up to the original adjacency list, but first let's rename the column we want to `price_difference` and only keep that column:
```
pricediff['price_difference'] = pricediff['median_price_neighbor']
adjlist['price_difference'] = pricediff['price_difference']
```
And, if we wanted to find the pair of adjacent neighbourhoods with the greatest price difference:
```
adjlist.head()
```
Now, we can group by *both* the focal and neighbor name to get a meaningful list of all the neighborhood boundaries & their difference in median listing price.
```
contrasts = adjlist.groupby(['hood_id_focal', 'hood_id_neighbor'])\
.price_difference.median().abs()\
.sort_values().to_frame().reset_index()
```
For about six neighbourhood pairs (since these will be duplicate `(A,B) & (B,A)` links), the median listing price is the same:
```
contrasts.query('price_difference == 0').sort_values(['hood_id_focal','hood_id_neighbor'])
```
On the other end, the 20 largest paired differences in median price between adjacent neighbourhoods is shown below:
```
contrasts.sort_values(['price_difference',
'hood_id_focal'],
ascending=[False,True]).head(40)
```
## Contiguity for points
Contiguity can also make sense for point objects, if you think about the corresponding Voronoi diagram and the Thiessen polygons' adjacency graph.
Effectively, this connects each point to the set of its nearest neighbouring points, without pre-specifying the number of neighbours.
We can use it to define relationships between airbnb listings in our dataset.
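The idea can be sketched with SciPy alone: two points are Voronoi-contiguous when their cells share an edge, which `scipy.spatial.Voronoi` records in `ridge_points` (toy coordinates below, not the listings data):

```python
import numpy as np
from scipy.spatial import Voronoi

# Five toy points: four on the convex hull and one interior point.
points = np.array([[0, 0], [2, 0], [1, 1], [3, 2], [2, 4]])
vor = Voronoi(points)

# Each ridge separates the cells of two input points -> those two are neighbors.
neighbor_pairs = {tuple(sorted(int(i) for i in pair)) for pair in vor.ridge_points}
print(sorted(neighbor_pairs))
```

These pairs are exactly the edges of the Delaunay triangulation of the points, so every point ends up linked to a data-driven number of neighbors.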
```
listings.sort_values('price').plot('price', cmap='plasma', alpha=.5)
from libpysal.cg.voronoi import voronoi_frames
from libpysal.weights import Voronoi
lp.cg.voronoi_frames
lp.weights.Voronoi?
coordinates = np.vstack((listings.centroid.x, listings.centroid.y)).T
thiessens, points = voronoi_frames(coordinates)
```
However, the "natural" polygons generated by the `scipy.distance.voronoi` object may be excessively big, since some of the nearly-parallel lines in the voronoi diagram may take a long time to intersect.
```
f,ax = plt.subplots(1,2,figsize=(2.16*4,4))
thiessens.plot(ax=ax[0], edgecolor='k')
neighborhoods.plot(ax=ax[0], color='w', edgecolor='k')
ax[0].axis(neighborhoods.total_bounds[np.asarray([0,2,1,3])])
ax[0].set_title("Where we want to work")
thiessens.plot(ax=ax[1])
neighborhoods.plot(ax=ax[1], color='w', edgecolor='k')
ax[1].set_title("The outer limit of the voronoi diagram from SciPy")
ax[0].axis('off')
ax[1].axis('off')
plt.show()
```
Fortunately, PySAL can work with this many observations to build weights really quickly. But the `geopandas` overlay operation is very slow for this many polygons, so even with a spatial index, clipping these polygons to the bounding box can take a bit...
```
thiessens.shape
listings.shape
neighborhoods['dummy']=1
```
So, we've precomputed the clipped version of the thiessen polygons and stored them, so that we can move forward without waiting too long
```
clipper = neighborhoods.dissolve(by='dummy')
clipper.plot()
thiessens.head()
thiessens.crs = clipper.crs
clipped_thiessens = gpd.overlay(thiessens, clipper, how='intersection')
clipped_thiessens.shape
clipped_thiessens.head()
clipped_thiessens.plot()
clipped_thiessens.to_file('../data/thiessens.gpkg')
clipped_thiessens = gpd.read_file('../data/thiessens.gpkg')
```
Note that, whereas the overlay operation to clean up this diagram took quite a bit of computation time if just called regularly ([and there may be plenty faster ways to do these kinds of ops](http://2018.geopython.net/#w4)), constructing the topology for all 11k Thiessen polygons is rather fast:
Just to show what this looks like, we will plot a part of one of the neighbourhoods in Austin: Hyde Park to the North of UT.
```
focal_neighborhood = 'Hyde Park'
focal = clipped_thiessens[listings.hood == focal_neighborhood]
focal = focal.reset_index()
focal.shape
focal.plot()
thiessen_focal_w = lp.weights.Rook.from_dataframe(focal)
f,ax = plt.subplots(1,3,figsize=(15,5),sharex=True,sharey=True)
# plot the airbnbs across the map
listings.plot('price', cmap='plasma', ax=ax[0],zorder=0, marker='.')
#
ax[0].set_xlim(*focal.total_bounds[np.asarray([0,2])])
ax[0].set_ylim(*focal.total_bounds[np.asarray([1,3])])
# Plot the thiessens corresponding to each listing in focal neighbourhood
listings[listings.hood == focal_neighborhood]\
.plot('price', cmap='plasma', marker='.', ax=ax[1], zorder=0)
focal.boundary.plot(ax=ax[1], linewidth=.7)
thiessen_focal_w.plot(focal, node_kws=dict(marker='.',s=0),
edge_kws=dict(linewidth=.5), color='b', ax=ax[2])
focal.boundary.plot(ax=ax[2], linewidth=.7)
# underlay the neighbourhood boundaries
for ax_ in ax:
neighborhoods.boundary.plot(ax=ax_, color='grey',zorder=1)
ax_.set_xticklabels([])
ax_.set_yticklabels([])
ax[0].set_title("All Listings", fontsize=20)
ax[1].set_title("Voronoi for Listings in %s"%focal_neighborhood, fontsize=20)
ax[2].set_title("AdjGraph for Listings Voronoi", fontsize=20)
f.tight_layout()
plt.show()
```
# Distance
Distance weights tend to reflect relationships that work based on distance decay. Often, people think of spatial kernel functions when talking about distance weighting. But, PySAL also recognizes/uses distance-banded weights, which consider any neighbor within a given distance threshold as "near," and K-nearest neighbor weights, which consider any of the $k$-closest points to each point as "near" to that point.
KNN weights, by default, are the only asymmetric weight PySAL will construct. However, using `csgraph`, one could prune/trim any of the contiguity or distance weights to be directed.
### Kernel weights
These weights are one of the most commonly-used kinds of distance weights. They reflect the case where similarity/spatial proximity is assumed or expected to decay with distance.
Many of these are quite a bit heavier to compute than the contiguity graph discussed above, since the contiguity graph structure embeds simple assumptions about how shapes relate in space that kernel functions cannot assume.
Thus, I'll subset the data to a specific area of Austin before proceeding.
```
listings['hood']=listings['hood'].fillna(value="None").astype(str)
focal_listings = listings[listings.hood.str.startswith("Hyde")].reset_index()
focal_listings.sort_values('price').plot('price', cmap='plasma', zorder=3)
neighborhoods.boundary.plot(color='grey', ax=plt.gca())
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
Wkernel = lp.weights.Kernel.from_dataframe(focal_listings)
```
Now, if you wanted to see what these look like on the map:
```
focal_listings.assign(weights=Wkernel.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma')
neighborhoods.boundary.plot(color='grey', ax=plt.gca())
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
So, clearly, near things are weighted very highly, and distant things are weighted low.
So, if you're savvy with this, you may wonder:
> Why use PySAL kernel weights when `sklearn.pairwise.kernel_metrics` are so much faster?
Well, PySAL's got a few enhancements over and above scikit kernel functions.
1. **pre-specified bandwidths**: using the `bandwidth=` argument, you can give a specific bandwidth value for the kernel weight. This lets you use them in optimization routines where bandwidth might need to be a parameter that's optimized by another function.
2. **fixed vs. adaptive bandwidths**: adaptive bandwidths adjust the map distance to make things more "local" in densely-populated areas of the map and less "local" in sparsely-populated areas. This is adjusted by the...
3. **`k`-nearest neighborhood tuning**: this argument adjusts the number of nearby observations to use for the bandwidth.
Many of the scikit kernel functions are also implemented. The default is the `triangular` weight, which is a linear decay with distance.
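As a sketch of that linear decay (plain NumPy, with an illustrative bandwidth rather than one estimated from the data):

```python
import numpy as np

# Triangular kernel: weight falls linearly with distance and hits
# zero at the bandwidth.
def triangular(distances, bandwidth):
    z = np.abs(distances) / bandwidth
    return np.clip(1 - z, 0, None)

d = np.array([0.0, 0.5, 1.0, 2.0])
print(triangular(d, bandwidth=1.0))  # [1.  0.5 0.  0. ]
```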
For example, an adaptive triangular kernel and an adaptive Gaussian kernel are shown below, alongside the same point above for comparison.
```
Wkernel_adaptive = lp.weights.Kernel.from_dataframe(focal_listings, k=20, fixed=False)
Wkernel_adaptive_gaussian = lp.weights.Kernel.from_dataframe(focal_listings, k=10, fixed=False, function='gaussian')
f,ax = plt.subplots(1,3,figsize=(12,4))
focal_listings.assign(weights=Wkernel.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[0])
focal_listings.assign(weights=Wkernel_adaptive.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[1])
focal_listings.assign(weights=Wkernel_adaptive_gaussian.sparse[0,:].toarray().flatten()).plot('weights', cmap='plasma',ax=ax[2])
for i in range(3):
neighborhoods.boundary.plot(color='grey', ax=ax[i])
ax[i].axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
ax[i].set_xticklabels([])
ax[i].set_yticklabels([])
ax[0].set_title("Defaults (Triangular fixed kernel, k=2)")
ax[1].set_title("Adaptive Triangular Kernel, k=20")
ax[2].set_title("Adaptive Gaussian Kernel, k=10")
f.tight_layout()
plt.show()
```
In the adaptive kernels, you also obtain a distinct bandwidth at each site:
```
Wkernel_adaptive.bandwidth[0:5]
```
These are useful in their own right, since they communicate information about the structure of the density of points in the analysis frame:
```
f,ax = plt.subplots(1,2,figsize=(8,4))
focal_listings.assign(bandwidths=Wkernel_adaptive.bandwidth).plot('bandwidths', cmap='plasma',ax=ax[0])
focal_listings.assign(bandwidths=Wkernel_adaptive_gaussian.bandwidth).plot('bandwidths', cmap='plasma',ax=ax[1])
for i in range(2):
    neighborhoods.boundary.plot(color='grey', ax=ax[i])
    ax[i].axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
    ax[i].set_xticklabels([])
    ax[i].set_yticklabels([])
ax[0].set_title("Adaptive Triangular Kernel, k=20")
ax[0].set_ylabel("Site-specific bandwidths", fontsize=16)
ax[1].set_title("Adaptive Gaussian Kernel, k=10")
f.tight_layout()
plt.show()
```
Areas with large adaptive kernel bandwidths lie in "sparse" regions, and areas with small adaptive bandwidths lie in "dense" regions; a similar kind of logic is used by clustering algorithms descended from DBSCAN.
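That density logic can be sketched with plain NumPy: take each point's distance to its $k$-th nearest neighbour as its local bandwidth (an illustration of the idea, not PySAL's implementation):
```
import numpy as np

# Use the distance to the k-th nearest neighbour as an adaptive bandwidth:
# large in sparse regions, small in dense regions.
rng = np.random.default_rng(0)
pts = rng.random((50, 2))  # 50 random points in the unit square
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
k = 5
bandwidths = np.sort(dists, axis=1)[:, k]  # column 0 is the zero self-distance
print(bandwidths.min(), bandwidths.max())
```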
### Distance bands
Conceptually, this is a binary kernel weight. All observations that are within a given distance from one another are considered "neighbors," and all that are further than this distance are "not neighbors."
In order for this weighting structure to connect all observations, it's useful to set the threshold to the largest distance between any observation and its nearest neighbor. The observation attaining this distance is the "most remote" one, and it will have exactly one neighbor at this threshold; every other observation is thus guaranteed to have at least one neighbor as well.
To get this maximum nearest-neighbor distance, you can use the PySAL `min_threshold_distance` function, which takes an array of points and finds the minimum distance at which all observations are connected to at least one other observation:
```
point_array = np.vstack(focal_listings.geometry.apply(lambda p: np.hstack(p.xy)))
minthresh = lp.weights.min_threshold_distance(point_array)
print(minthresh)
```
This means that the most remote observation is just over 171 meters away from its nearest Airbnb listing. Building a graph from this minimum distance, then, is done by passing it to the weights constructor:
```
dbandW = lp.weights.DistanceBand.from_dataframe(focal_listings, threshold=minthresh)
neighborhoods.boundary.plot(color='grey')
dbandW.plot(focal_listings, ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
This model of spatial relationships will guarantee that each observation has at least one neighbor, and will prevent any disconnected subgraphs from existing.
### KNNW
$K$-nearest neighbor weights are constructed by considering the nearest $k$ points to each observation as neighboring that observation. This is a common way of conceptualizing observations' neighbourhoods in machine learning applications, and it is also common in geographic data science applications.
```
KNNW = lp.weights.KNN.from_dataframe(focal_listings, k=10)
neighborhoods.boundary.plot(color='grey')
KNNW.plot(focal_listings,ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
One exceedingly common analysis pattern with KNN weights is to vary `k` repeatedly in search of a better value. The KNN weights class therefore provides a `reweight` method that does this without re-constructing its core data structure, the `kdtree`. This call can also incorporate additional data into the weights object; by default it operates in place, but it returns a copy of the data structure when `inplace=False` is passed.
```
KNNW20 = KNNW.reweight(k=20, inplace=False)
neighborhoods.boundary.plot(color='grey')
KNNW20.plot(focal_listings,ax=plt.gca(), edge_kws=dict(color='r'), node_kws=dict(zorder=10))
plt.axis(focal_listings.total_bounds[np.asarray([0,2,1,3])])
plt.show()
```
Further, since KNN weights are asymmetric, special methods are provided to make them symmetric:
```
KNNW20sym = KNNW20.symmetrize()
(KNNW20sym.sparse != KNNW20sym.sparse.T).sum()
(KNNW20.sparse != KNNW20.sparse.T).sum()
```
In fact, these symmetrizing methods exist for any other weights type too, so they can also be applied to an arbitrarily-computed weights matrix.
### KNN on Polygons
While K-nearest neighbor weighting often makes more sense for point data, it's also applicable to polygons, where a *representative point* for each polygon is used to construct the K-nearest neighbors instead of the polygon as a whole.
For comparison, I'll show this alongside the Queen weights shown above for neighbourhoods in Berlin.
When the number of nearest neighbours is relatively large compared to the usual cardinality of the adjacency graph, some neighbourhoods end up connected to one another more than a single neighbourhood deep. That is, neighbourhoods are considered spatially connected even if they don't touch, since their *representative points* are so close to one another relative to the nearest alternatives.
```
KNN_neighborhoods = lp.weights.KNN.from_dataframe(neighborhoods, k=10).symmetrize()
f,ax = plt.subplots(1,2,figsize=(8,4))
for i in range(2):
    neighborhoods.boundary.plot(color='grey',ax=ax[i])
    ax[i].set_xticklabels([])
    ax[i].set_yticklabels([])
KNN_neighborhoods.plot(neighborhoods, ax=ax[0], node_kws=dict(s=0), color='orangered')
Qconnected.plot(neighborhoods, ax=ax[1], node_kws=dict(s=0), color='skyblue')
ax[0].set_title("KNN(10)", fontsize=16)
ax[1].set_title("Queen Contiguity", fontsize=16)
f.tight_layout()
plt.show()
```
In contrast, very sparse K-nearest neighbour graphs will have a significantly different connectivity structure than the contiguity graph, since the relative position of large areas' *representative points* matters significantly for which of the areas they touch are considered "connected." Further, this often reduces the density of the graph in areas of the map with small elementary units, where cardinality is often higher.
```
KNN_neighborhoods = lp.weights.KNN.from_dataframe(neighborhoods, k=2).symmetrize()
f,ax = plt.subplots(1,2,figsize=(8,4))
for i in range(2):
    neighborhoods.boundary.plot(color='grey',ax=ax[i])
    ax[i].set_xticklabels([])
    ax[i].set_yticklabels([])
KNN_neighborhoods.plot(neighborhoods, ax=ax[0], node_kws=dict(s=0), color='orangered')
Qconnected.plot(neighborhoods, ax=ax[1], node_kws=dict(s=0), color='skyblue')
ax[0].set_title("KNN(2)", fontsize=16)
ax[1].set_title("Queen Contiguity", fontsize=16)
f.tight_layout()
plt.show()
```
## More representations
There are similarly more representations available and currently under development, such as a networkx interface in `W.to_networkx/W.from_networkx`. Further, we're always willing to add additional constructors or methods to provide new and interesting ways to represent geographic relationships.
This lab on Polynomial Regression and Step Functions is a python adaptation of p. 288-292 of "Introduction to Statistical Learning with Applications in R" by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. Original adaptation by J. Warmenhoven, updated by R. Jordan Crouser at Smith College for SDS293: Machine Learning (Spring 2016).
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
import statsmodels.api as sm
import statsmodels.formula.api as smf
from patsy import dmatrix
%matplotlib inline
```
# 7.8.1 Polynomial Regression and Step Functions
In this lab, we'll explore how to generate the ${\tt Wage}$ dataset models we saw in class.
```
df = pd.read_csv('Wage.csv')
df.head(3)
```
We first fit the polynomial regression model using the following commands:
```
X1 = PolynomialFeatures(1).fit_transform(df.age.values.reshape(-1,1))
X2 = PolynomialFeatures(2).fit_transform(df.age.values.reshape(-1,1))
X3 = PolynomialFeatures(3).fit_transform(df.age.values.reshape(-1,1))
X4 = PolynomialFeatures(4).fit_transform(df.age.values.reshape(-1,1))
X5 = PolynomialFeatures(5).fit_transform(df.age.values.reshape(-1,1))
```
This syntax uses the ${\tt PolynomialFeatures()}$ function to build design matrices for polynomials in ${\tt age}$ of degree one through five, which allows us to avoid having to write out a long formula with powers of ${\tt age}$. We can then fit our linear model:
```
fit2 = sm.GLS(df.wage, X4).fit()
fit2.summary().tables[1]
```
Next we consider the task of predicting whether an individual earns more
than \$250,000 per year. We proceed much as before, except that first we
create the appropriate response vector, and then we fit a logistic model using the ${\tt GLM()}$ function from ${\tt statsmodels}$:
```
# Create response vector
y = (df.wage > 250).map({False:0, True:1}).to_numpy()
# Fit logistic model
clf = sm.GLM(y, X4, family=sm.families.Binomial(sm.families.links.Logit()))
res = clf.fit()
```
We now create a grid of values for ${\tt age}$ at which we want predictions, and
then call the generic ${\tt predict()}$ function for each model:
```
# Generate a sequence of age values spanning the range
age_grid = np.arange(df.age.min(), df.age.max()).reshape(-1,1)
# Generate test data
X_test = PolynomialFeatures(4).fit_transform(age_grid)
# Predict the value of the generated ages
pred1 = fit2.predict(X_test) # salary
pred2 = res.predict(X_test) # Pr(wage>250)
```
Finally, we plot the data and add the fit from the degree-4 polynomial.
```
# creating plots
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,5))
fig.suptitle('Degree-4 Polynomial', fontsize=14)
# Scatter plot with polynomial regression line
ax1.scatter(df.age, df.wage, facecolor='None', edgecolor='k', alpha=0.3)
ax1.plot(age_grid, pred1, color = 'b')
ax1.set_ylim(ymin=0)
# Logistic regression showing Pr(wage>250) for the age range.
ax2.plot(age_grid, pred2, color='b')
# Rug plot showing the distribution of wage>250 in the training data.
# 'True' on the top, 'False' on the bottom.
ax2.scatter(df.age, y/5, s=30, c='grey', marker='|', alpha=0.7)
ax2.set_ylim(-0.01,0.21)
ax2.set_xlabel('age')
ax2.set_ylabel('Pr(wage>250|age)')
```
# Deciding on a degree
In performing a polynomial regression we must decide on the degree of
the polynomial to use. One way to do this is by using hypothesis tests. We
now fit models ranging from linear to a degree-5 polynomial and seek to
determine the simplest model which is sufficient to explain the relationship
between ${\tt wage}$ and ${\tt age}$.
We can do this using the ${\tt anova\_lm()}$ function, which performs an
analysis of variance (ANOVA, using an F-test) in order to test the null
hypothesis that a model $M_1$ is sufficient to explain the data against the
alternative hypothesis that a more complex model $M_2$ is required. In order
to use the ${\tt anova\_lm()}$ function, $M_1$ and $M_2$ must be **nested models**: the
predictors in $M_1$ must be a subset of the predictors in $M_2$. In this case,
we fit five different models and sequentially compare the simpler model to
the more complex model:
```
fit_1 = sm.GLS(df.wage, X1).fit()
fit_2 = sm.GLS(df.wage, X2).fit()
fit_3 = sm.GLS(df.wage, X3).fit()
fit_4 = sm.GLS(df.wage, X4).fit()
fit_5 = sm.GLS(df.wage, X5).fit()
print(sm.stats.anova_lm(fit_1, fit_2, fit_3, fit_4, fit_5, typ=1))
```
The $p$-value comparing the linear Model 1 to the quadratic Model 2 is
essentially zero $(<10^{-32})$, indicating that a linear fit is not sufficient. Similarly
the $p$-value comparing the quadratic Model 2 to the cubic Model 3
is very low (0.0017), so the quadratic fit is also insufficient. The $p$-value
comparing the cubic and degree-4 polynomials, Model 3 and Model 4, is approximately
0.05 while the degree-5 polynomial Model 5 seems unnecessary
because its $p$-value is 0.37. Hence, either a cubic or a quartic polynomial
appear to provide a reasonable fit to the data, but lower- or higher-order
models are not justified.
As an alternative to using hypothesis tests and ANOVA, we could choose
the polynomial degree using cross-validation as we have in previous labs.
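A minimal sketch of that cross-validation approach with scikit-learn, using synthetic stand-ins for ${\tt age}$ and ${\tt wage}$ rather than the Wage data loaded above:
```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic quadratic relationship plus noise.
rng = np.random.default_rng(42)
age = rng.uniform(18, 80, 300).reshape(-1, 1)
wage = 40 + 4*age.ravel() - 0.04*age.ravel()**2 + rng.normal(0, 10, 300)

scores = {}
for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, age, wage, cv=5).mean()
    print(degree, round(scores[degree], 3))
```
Here the cross-validated $R^2$ should jump sharply from degree 1 to degree 2 and then level off, pointing at the quadratic model.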
# Step functions
In order to fit a step function, we use the ${\tt cut()}$ function:
```
df_cut, bins = pd.cut(df.age, 4, retbins=True, right=True)
df_cut.value_counts(sort=False)
```
Here ${\tt cut()}$ automatically picked the cutpoints at 33.5, 49, and 64.5 years
of age. We could also have specified our own cutpoints directly. Now let's create a set of dummy variables for use in the regression:
```
df_steps = pd.concat([df.age, df_cut, df.wage], keys=['age','age_cuts','wage'], axis=1)
# Create dummy variables for the age groups
df_steps_dummies = pd.get_dummies(df_steps['age_cuts'])
# Statsmodels requires explicit adding of a constant (intercept)
df_steps_dummies = sm.add_constant(df_steps_dummies)
```
And now to fit the models! The ${\tt age<33.5}$ category is left out, so the intercept coefficient of
\$94,160 can be interpreted as the average salary for those under 33.5 years
of age, and the other coefficients can be interpreted as the average additional
salary for those in the other age groups.
```
fit3 = sm.GLM(df_steps.wage, df_steps_dummies.drop(['(17.938, 33.5]'], axis=1)).fit()
fit3.summary().tables[1]
```
We can produce predictions
and plots just as we did in the case of the polynomial fit.
```
# Put the test data in the same bins as the training data.
bin_mapping = np.digitize(age_grid.ravel(), bins)
# Get dummies, drop first dummy category, add constant
X_test2 = sm.add_constant(pd.get_dummies(bin_mapping).drop(1, axis=1))
# Predict the value of the generated ages using the linear model
pred2 = fit3.predict(X_test2)
# And the logistic model
clf2 = sm.GLM(y, df_steps_dummies.drop(['(17.938, 33.5]'], axis=1),
              family=sm.families.Binomial(sm.families.links.Logit()))
res2 = clf2.fit()
pred3 = res2.predict(X_test2)
# Plot
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))
fig.suptitle('Piecewise Constant', fontsize=14)
# Scatter plot with polynomial regression line
ax1.scatter(df.age, df.wage, facecolor='None', edgecolor='k', alpha=0.3)
ax1.plot(age_grid, pred2, c='b')
ax1.set_xlabel('age')
ax1.set_ylabel('wage')
ax1.set_ylim(ymin=0)
# Logistic regression showing Pr(wage>250) for the age range.
ax2.plot(np.arange(df.age.min(), df.age.max()).reshape(-1,1), pred3, color='b')
# Rug plot showing the distribution of wage>250 in the training data.
# 'True' on the top, 'False' on the bottom.
ax2.scatter(df.age, y/5, s=30, c='grey', marker='|', alpha=0.7)
ax2.set_ylim(-0.01,0.21)
ax2.set_xlabel('age')
ax2.set_ylabel('Pr(wage>250|age)')
```
To get credit for this lab, post your responses to the following questions to Piazza (https://piazza.com/class/igwiv4w3ctb6rg?cid=48):
- What is one real-world example where you might try polynomial regression?
- What is one real-world example where you might try using a step function?
# Data Balancing - Random undersampling class 0 and oversampling class 1
```
from collections import Counter
from imblearn.over_sampling import ADASYN
from imblearn.under_sampling import RandomUnderSampler
import numpy as np
import pandas as pd
```
## 1. Load the Data
```
starting_ratio = ''#'starting_ratio_05/'
percent_ratio = ''# '10_percent/'
base_data_folder = '../../Data/other_patients/'+starting_ratio+percent_ratio
X_train = pd.read_csv(base_data_folder+'X_train_total.csv', index_col=0, header=0)
y_train = pd.read_csv(base_data_folder+'y_train_total.csv', index_col=0, header=0)
X_train.shape, y_train.shape
```
## 2. Undersampling
```
ratio = 1
y_train[y_train['Class'] == 1].shape
rus = RandomUnderSampler(sampling_strategy=ratio, random_state=42)
X_resampled, y_resampled = rus.fit_resample(X_train, y_train['Class'].values)
print('Resampled dataset shape %s' % Counter(y_resampled))
X_resampled.shape
y_resampled.shape
```
## 3. Oversampling
```
adasyn = ADASYN(random_state=42, sampling_strategy=ratio)
X_resampled_total, y_resampled_total = adasyn.fit_resample(X_resampled, y_resampled)
print('Resampled dataset shape %s' % Counter(y_resampled_total))
total_patients = pd.DataFrame(X_resampled_total)
total_patients["Class"] = y_resampled_total
```
## 4. Dataset merge and Data preparation for disk writing
```
total_patients = pd.DataFrame(X_resampled_total)
total_patients["Class"] = y_resampled_total
columns_names = ['sex', 'age', 'weight', 'height', 'HIPX', 'smoking',
'ReumatoidArthritis', 'SecondaryOsteoporsis', 'Alcohol', 'VitaminD',
'calcium', 'dose_walk', 'dose_moderate', 'dose_vigorous','Class']
total_patients.columns = columns_names
total_patients = total_patients.sample(frac=1).reset_index(drop=True)
X_train_resampled = total_patients.iloc[:,:total_patients.shape[1]-1]
X_train_resampled
y_train_resampled = pd.DataFrame(total_patients['Class'], columns=['Class'])
y_train_resampled
```
## 5. Save to file
```
ratio_folder = ''
if starting_ratio == '':
    if ratio < 1:
        base_data_folder += 'starting_ratio_05/'
    else:
        base_data_folder += 'starting_ratio_1/'
else:
    if ratio < 1:
        ratio_folder = 'ratio_05/'
    else:
        ratio_folder = 'ratio_1/'
total_patients.to_csv(base_data_folder+ratio_folder+'total_patients_trainset_balanced.csv')
X_train_resampled.to_csv(base_data_folder+ratio_folder+'X_train.csv')
y_train_resampled.to_csv(base_data_folder+ratio_folder+'y_train.csv')
for j in range(1, 6):
    print(j)
    starting_ratio = '05'
    percent_ratio = str(j)+'0'
    base_data_folder = '../../Data/other_patients/starting_ratio_'+starting_ratio+'/'+percent_ratio+'_percent/'
    X_train = pd.read_csv(base_data_folder+'X_train_total.csv', index_col=0, header=0)
    y_train = pd.read_csv(base_data_folder+'y_train_total.csv', index_col=0, header=0)
    X_train.shape, y_train.shape
    for i in [0.5, 1]:
        ratio = i
        rus = RandomUnderSampler(sampling_strategy=ratio, random_state=42)
        X_resampled, y_resampled = rus.fit_resample(X_train, y_train['Class'].values)
        adasyn = ADASYN(random_state=42, sampling_strategy=ratio)
        X_resampled_total, y_resampled_total = adasyn.fit_resample(X_resampled, y_resampled)
        total_patients = pd.DataFrame(X_resampled_total)
        total_patients["Class"] = y_resampled_total
        columns_names = ['sex', 'age', 'weight', 'height', 'HIPX', 'smoking',
                         'ReumatoidArthritis', 'SecondaryOsteoporsis', 'Alcohol', 'VitaminD',
                         'calcium', 'dose_walk', 'dose_moderate', 'dose_vigorous','Class']
        total_patients.columns = columns_names
        total_patients = total_patients.sample(frac=1).reset_index(drop=True)
        X_train_resampled = total_patients.iloc[:,:total_patients.shape[1]-1]
        y_train_resampled = pd.DataFrame(total_patients['Class'], columns=['Class'])
        if ratio < 1:
            ratio_folder = 'ratio_05/'
        else:
            ratio_folder = 'ratio_1/'
        total_patients.to_csv(base_data_folder+ratio_folder+'total_patients_trainset_balanced.csv')
        X_train_resampled.to_csv(base_data_folder+ratio_folder+'X_train.csv')
        y_train_resampled.to_csv(base_data_folder+ratio_folder+'y_train.csv')
```
# ORF recognition by CNN
Compare to 114. Make sure there is always a STOP codon but only sometimes in frame.
```
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=4000 # how many protein-coding sequences
NC_SEQUENCES=4000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=36 # how long is each sequence
CDS_LEN=30 # include bases in start, residues, stop
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 16
DROP_RATE = 0.4
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=25 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=5 # train the model this many times (range 1 to SPLITS)
import sys
try:
    from google.colab import drive
    IN_COLAB = True
    print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
    PATH='/content/drive/'
    #drive.mount(PATH,force_remount=True)  # hardly ever need this
    #drive.mount(PATH)  # Google will require login credentials
    DATAPATH=PATH+'My Drive/data/'  # must end in "/"
    import requests
    r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
    with open('RNA_gen.py', 'w') as f:
        f.write(r.text)
    from RNA_gen import *
    r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
    with open('RNA_describe.py', 'w') as f:
        f.write(r.text)
    from RNA_describe import ORF_counter
    r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
    with open('RNA_prep.py', 'w') as f:
        f.write(r.text)
    from RNA_prep import *
except:
    print("CoLab not working. On my PC, use relative paths.")
    IN_COLAB = False
    DATAPATH='data/'  # must end in "/"
    sys.path.append("..")  # append parent dir in order to use sibling dirs
    from SimTools.RNA_gen import *
    from SimTools.RNA_describe import ORF_counter
    from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
    print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
    print("ERROR: Cannot use RNA_prep.")
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
import random
def partition_random_sequences(goal_per_class):
    between_bases = CDS_LEN - 6
    utr5_bases = (RNA_LEN - CDS_LEN) // 2
    utr3_bases = RNA_LEN - utr5_bases - CDS_LEN
    pc_seqs=[]
    nc_seqs=[]
    oc = ORF_counter()
    trials = 0
    pc_cnt = 0
    nc_cnt = 0
    bases=['A','C','G','T']
    while pc_cnt<goal_per_class or nc_cnt<goal_per_class:
        trials += 1
        one_seq = "".join(random.choices(bases,k=utr5_bases))
        one_seq += 'ATG'
        random_cnt = random.randint(1,between_bases-3)
        one_seq += "".join(random.choices(bases,k=random_cnt))
        random_stop = random.choice(['TAA','TAG','TGA'])  # random frame
        one_seq += random_stop
        remaining_cnt = between_bases - 3 - random_cnt
        one_seq += "".join(random.choices(bases,k=remaining_cnt))
        #one_seq += "".join(random.choices(bases,k=between_bases))
        random_stop = random.choice(['TAA','TAG','TGA'])  # in frame
        one_seq += random_stop
        one_seq += "".join(random.choices(bases,k=utr3_bases))
        oc.set_sequence(one_seq)
        cds_len = oc.get_max_cds_len() + 3
        if cds_len >= CDS_LEN and pc_cnt<goal_per_class:
            pc_cnt += 1
            pc_seqs.append(one_seq)
        elif cds_len < CDS_LEN and nc_cnt<goal_per_class:
            nc_cnt += 1
            nc_seqs.append(one_seq)
    print("It took %d trials to reach %d per class."%(trials,goal_per_class))
    return pc_seqs,nc_seqs
pc_all,nc_all=partition_random_sequences(10) # just testing
pc_all,nc_all=partition_random_sequences(PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
    oc = ORF_counter()
    num_seq = len(list_of_seq)
    rna_lens = np.zeros(num_seq)
    orf_lens = np.zeros(num_seq)
    for i in range(0,num_seq):
        rna_len = len(list_of_seq[i])
        rna_lens[i] = rna_len
        oc.set_sequence(list_of_seq[i])
        orf_len = oc.get_max_orf_len()
        orf_lens[i] = orf_len
    print("Average RNA length:",rna_lens.mean())
    print("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
    print("make_DNN")
    print("input shape:",INPUT_SHAPE)
    dnn = Sequential()
    #dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
    dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
                   input_shape=INPUT_SHAPE))
    dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
    dnn.add(MaxPooling1D())
    #dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
    #dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
    #dnn.add(MaxPooling1D())
    dnn.add(Flatten())
    dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
    dnn.add(Dropout(DROP_RATE))
    dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
    dnn.compile(optimizer='adam',
                loss=BinaryCrossentropy(from_logits=False),
                metrics=['accuracy'])  # add to default metrics=loss
    dnn.build(input_shape=INPUT_SHAPE)
    #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
    #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
    #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
    return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
    cv_scores = []
    fold=0
    mycallbacks = [ModelCheckpoint(
        filepath=MODELPATH, save_best_only=True,
        monitor='val_accuracy', mode='max')]
    splitter = KFold(n_splits=SPLITS)  # this does not shuffle
    for train_index,valid_index in splitter.split(X):
        if fold < FOLDS:
            fold += 1
            X_train=X[train_index]  # inputs for training
            y_train=y[train_index]  # labels for training
            X_valid=X[valid_index]  # inputs for validation
            y_valid=y[valid_index]  # labels for validation
            print("MODEL")
            # Call constructor on each CV. Else, continually improves the same model.
            model = make_DNN()
            print("FIT")  # model.fit() implements learning
            start_time=time.time()
            history=model.fit(X_train, y_train,
                              epochs=EPOCHS,
                              verbose=1,  # ascii art while learning
                              callbacks=mycallbacks,  # called at end of each epoch
                              validation_data=(X_valid,y_valid))
            end_time=time.time()
            elapsed_time=(end_time-start_time)
            print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
            # print(history.history.keys())  # all these keys will be shown in figure
            pd.DataFrame(history.history).plot(figsize=(8,5))
            plt.grid(True)
            plt.gca().set_ylim(0,1)  # any losses > 1 will be off the scale
            plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
<a href="https://colab.research.google.com/github/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/01_IntroPython.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# What is a 'Program'
**A program is a sequence of instructions that specifies how to perform a computation.**
Details will depend on the kind of computation, but typically the following basic instructions are involved:
- *input*: Get data.
- *output*: Display data.
- *math*: Perform mathematical operations.
- *conditional execution*: Check for conditions and selectively run specific code.
- *repetition*: Perform some action repeatedly.
Consider the following task:
> Compute the difference between largest and smallest number from a list of numbers.
How to 'translate' this into a 'program'?
- Define problem, and specific task.
- Develop a computational strategy, an [*algorithm*](https://en.wikipedia.org/wiki/Algorithm), that solves this task.
- Implement this algorithm in a programming language.
- Test it!
Implementation, and thus the specific programming language, is only one of the steps in the problem solving context!
> Compute the difference between largest and smallest number from a list of numbers.
Steps:
- Find largest number $x_\text{max}$ in list.
- Find smallest number $x_\text{min}$ in list.
- Compute $x_\text{max}-x_\text{min}$.
A possible **algorithm** for finding the largest number $x_\text{max}$ in list of numbers:
- **Initialization**: Initialize variable $x_\text{max}$ with a starting value $x_\text{0}$: $x_\text{max}$ = $x_\text{0}$.
- **Repeat** for every number in the list:
    - **If** current number $x_i>x_\text{max}$, **then**:
        - update $x_\text{max}$: $x_\text{max} = x_i$
- **Output**: Value of $x_\text{max}$.
What value would you choose for $x_0$?
Does this algorithm produce the correct result for lists containing *any* kind of 'numbers' or only under certain conditions?
Beware of **implicit assumptions** and **resulting limitations**!
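A direct Python translation of the algorithm, extended to track the minimum as well so that it solves the original difference task (initializing from the first element sidesteps the choice of $x_0$):
```
def value_range(numbers):
    # Initialize with the first element instead of a guessed x0, so the
    # function works for any comparable numbers (but fails on an empty list).
    x_max = x_min = numbers[0]
    for x in numbers:
        if x > x_max:
            x_max = x
        if x < x_min:
            x_min = x
    return x_max - x_min

print(value_range([3, -1, 7, 4]))  # 7 - (-1) = 8
```
Note how the implicit assumption surfaces explicitly here: the function only works on non-empty lists of mutually comparable values.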
Programs are specified in *formal languages*, that is, languages that are designed for specific applications.
For example, the notation of mathematics is a formal language for denoting relationships among numbers and symbols.
**Programming languages are formal languages designed to express computations.**
The structure of statements in formal languages tends to follow strict syntax rules.
These rules define the basic language elements, so-called *tokens*, and how those tokens can be combined.
The process of reading and 'decoding' the structure of a sentence is called *parsing*.
Formal languages are designed to be *unambiguous*, *concise* and *literal*.
This allows the meaning of a computer program to be understood entirely by analysis of its tokens and structure.
See the [first chapter](http://greenteapress.com/thinkpython2/html/thinkpython2002.html) of the [Think Python](http://greenteapress.com/wp/think-python-2e/) book for a discussion of differences between *formal* and *natural* languages.
# Basic Programming Concepts in Python
The remainder of this notebook provides a very quick introduction to basic important programming constructs, and how these are used in Python.
Python has an extensive [official documentation](https://docs.python.org/3/), including a very detailed [tutorial](https://docs.python.org/3/tutorial/index.html).
Alternatively, [Python 101](https://python101.pythonlibrary.org) provides an in-depth introduction to general programming in python.
## Running Python & Basic Vocabulary
Python is an interpreted language.
The Python *interpreter* is a program that reads (parses) and executes Python code.
For an interactive console click [here](https://repl.it/languages/python3), and see the [introduction notebook](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/00_CompWorkingEnv.ipynb) for installation option.
In Jupyter Notebooks, code is written and executed in *code cells* like the following:
```
# This is a comment
print("Hello world!") # this is an inline comment
a = 1
b = a + 9
my_very_long_and_descriptive_variable_name = 10
print(b + my_very_long_and_descriptive_variable_name)
```
This little example already includes many essential components of a *program*:
- **value**: E.g. letter or number, `"Hello world!"`, `1`.
- **operator**: Special symbols that represent computations. E.g. addition `+`, assignment `=`.
- **variable**: A 'placeholder' for a value. E.g.`a`, `b`, `my_very_long_and_descriptive_variable_name`.
- **expression**: A combination of values, operators and variables. E.g. `a + 9`.
- **statement**: A unit of code that has an effect, like creating a variable or displaying a value. E.g. `print()`.
The Python interpreter:
- *evaluates* an expression, i.e. finds its value.
- *executes* a statement, i.e. performs the set of instructions defined by the statement.
## Basic Arithmetic
Besides the standard arithmetic operations, Python 3 distinguishes between division "/" and integer division "//".
```
print(4 + 3) # addition
print(4 - 3) # subtraction
print(4 * 3) # multiplication
print(4 / 3) # division (here, the types are widened to 'float' automatically)
print(4 % 3) # modulus
print(4 ** 3) # exponent
print(4 // 3) # integer division
```
The evaluation of expressions with more than one operator follows the *order of operations*:
1. parentheses
2. exponentiation
3. multiplication, division before addition, subtraction
4. operators of the same precedence are evaluated from left to right
```
print( 2*(5-1), 2*5-1 )
print( 2**2-1 , 2**(2-1) )
print( 1/2/3 , 1/(2/3) )
```
## Data Types
A *data type* or *type* constrains the possible values of an expression, defines its meaning, and determines the operations that can be performed on the data.
- **Numeric Types**: Integral (Integer, Booleans), Real, Complex
- **Sequences**: Strings, Tuples, Lists
- **Mappings**: Dictionaries
- **Callable**: Functions, Methods, Classes
See the documentation for details on all [built-in types](https://docs.python.org/3/library/stdtypes.html), and this [figure](https://commons.wikimedia.org/wiki/File:Python_3._The_standard_type_hierarchy.png) for an overview of the Python 3 type hierarchy.
### Basic Numeric Types
Python has three basic **numeric types**: *integers*, *floating point numbers*, and *complex numbers*.
```
print(type(1)) # an integer number
print(type(1.1)) # a floating point number
print(type(1 + 1j)) # a complex number
```
Numbers can be cast into another type. Such a conversion may "widen" or "narrow" the original type.
- A *widening* conversion changes a value to a data type that can represent any possible value of the original data type, and thus preserves the source value.
- A *narrowing* conversion changes a value to a data type that may not support some of the possible values of the original data type.
```
print(int(1.1)) # cast float to int
print(float(1)) # cast int to float
print(bool(1)) # cast int to boolean
print(complex(1)) # cast int to complex
# Try to convert a complex number to integer!
```
Python supports mixed arithmetic, i.e. arithmetic operations can be applied to operands of different numeric types. In this case, the operand with the "narrower" type is "widened" to that of the other.
```
print(type(1+1.1)) # integer + floating number -> float
print(type(1-1.1)) # integer - floating number -> float
print(type(1*1.1)) # integer * floating number -> float
print(type(1/1.1)) # integer / floating number -> float
print(type((1+1j) + 1.1)) # complex + floating number -> complex
```
The following operators allow **comparisons** between numeric types. Results of comparisons are of **boolean type**.
```
print( 1.1 < 1 ) # strictly less than
print( 1 <= 1 ) # less than or equal
print( 1 > 1 ) # strictly greater than
print( 1 >= 1 ) # greater than or equal
print( "1" == "2" ) # equal (NOTE: different from assignment operator '=')
print( 1 != 1 ) # not equal
```
The boolean type can take values `True` and `False` and is a subtype of integer.
```
print(type(1 <= 1))
print(int(True)) # True equivalent to int(1)
print(type(False))
print(int(False)) # False equivalent to int(0)
```
Boolean types, and thus comparisons like the ones above, can be combined using `and`, `or`, `not`:
```
is_it_true = (2 * 6 <= 10) and (32 / 8 >= 4) or not (5 ** 2 < 25)
print(is_it_true)
# False and True or not False
# ( False ) or ( True )
```
### Strings
Strings can be created in several different ways:
```
string_1 = "I'm a string!"
string_2 = 'This is another string'
string_3 = '''and this is
a
multiline string'''
print(string_1)
print(string_2)
print(string_3)
```
Strings can also be created from numbers, by casting a numeric data type to a string data type.
```
number = 123
number_as_string = str(number)
print(type(number))
print(type(number_as_string))
print(number_as_string)
```
Strings composed of digits can be converted to an integer data type, but only if the string actually matches that format.
```
number_2 = int("1233")
print(type(number_2))
# Try to convert
# 1) '1.2' to float
# 2) 'ABC' to int
```
**String formatting** allows you to insert/substitute values into a base string. We will use this later, for example to monitor progress of computations.
```
day = "Tuesday"
lecture = 4
duration = 1.5
# insert string in string
string_1 = "Today is %s." %day
print(string_1)
# insert integer in string
string_2 = "This is the %ith lecture." %lecture
print(string_2)
# insert multiple items in string
string_3 = "Today is %s, and this is the %ith lecture." %(day, lecture)
print(string_3)
# add a float
string_4 = "Today is %s, and this is the %ith lecture which lasts for %fh." \
%(day, lecture, duration)
print(string_4)
# and a little nicer:
string_5 = "Today is %s, and this is the %ith lecture which lasts for %.1fh." \
%(day, lecture, duration)
print(string_5)
string_6 = "Today is {1}, and this is the {0}th lecture which lasts for {2}h."\
.format(lecture, day, duration)
print(string_6)
```
The method above is often referred to as "printf" formatting, after the function of the C programming language that popularized this formatting style.
Python also supports another formatting method that is described [here](https://docs.python.org/3/tutorial/inputoutput.html).
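That newer method uses `str.format()` and, in Python 3.6+, f-strings. A brief sketch, reusing the variables from the example above:

```
day = "Tuesday"
lecture = 4
duration = 1.5

# str.format() with positional fields
print("Today is {}, and this is the {}th lecture.".format(day, lecture))

# f-strings (Python 3.6+): expressions are evaluated directly inside the braces
print(f"Today is {day}, and this is the {lecture}th lecture which lasts for {duration:.1f}h.")
```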
### Lists, Tuples, Dictionaries
*Lists*, *tuples* and *dictionaries* are different kinds of "containers" that allow collecting and organizing information.
#### Lists
Python **list**s are ordered containers that can take elements of different types.
```
my_list_empty = [] # this is an empty list, alternatively use 'list()'
print(my_list_empty)
my_list_1 = [1,2,3,4,5]
my_list_2 = ["a", "b", "c"]
my_list_3 = [1, "two", 3, "four", 5]
print(my_list_1)
print(my_list_2)
print(my_list_3)
```
In fact, lists can contain objects of any type, also other lists.
```
nested_list = [my_list_1, my_list_2]
print(nested_list)
```
Lists can be extended and combined.
```
# append element to list
print("my_list_1 before: ", my_list_1)
my_list_1.append(6)
print("my_list_1 after: ", my_list_1)
# extend list
extended_list = my_list_2 + my_list_3
print(extended_list)
```
We can access elements of a list by specifying the index of the element of interest.
```
print(my_list_1[0]) # first element
print(my_list_1[1]) # second element
print(my_list_1[-1]) # last element
print(my_list_1[-2]) # second last element
```
We can also extract multiple elements from a list; this is called *slicing*.
```
print(my_list_1[0:2]) # first 2 elements
print(my_list_1[0:5:2]) # every second element of the first 5 elements
```
Python uses *zero-indexing*, i.e. the first element always has index 0. Therefore, if the list has N elements, the last element is at position N-1.
```
N = len(my_list_1) # len() gives length of list
print("Length of list: ", N)
print("Last element (method 1): ", my_list_1[N-1])
print("Last element (method 2): ", my_list_1[-1])
```
We can easily check whether a list contains a specific element:
```
is_two_in_list = 'two' in my_list_3
print(my_list_3)
print(is_two_in_list)
```
Python distinguishes between *mutable* and *immutable* types, that is, types whose values can be changed after creation and types that do not allow this.
Lists are *mutable*:
```
print(my_list_1)
my_list_1[0] = 100
print(my_list_1)
```
#### Tuples
**Tuples** are similar to lists, but they are *immutable*. Tuples are created with parentheses, rather than square brackets.
```
my_tuple = (1, 2, 3, 4, 5)
print(my_tuple[2:5])
# now, try to change an element in the tuple
```
We have introduced strings before. Strings behave very similarly to tuples of characters!
---
**Exercise (1):**
Create a string and:
1. access individual characters in the string
2. extract a substring of more than 1 character length
3. concatenate two strings
4. try to change one of the string's characters
---
#### Dictionaries
A Python **dictionary** is a mapping. It links *keys* to *values*, so that any value can be accessed by a specific key. Keys can be of any immutable type (e.g. numeric types or strings); values can be of any type. Note that in Python versions before 3.7, key-value pairs within a dictionary are unordered; since Python 3.7, dictionaries preserve insertion order.
```
my_dict_empty = {} # an empty dictionary; alternatively dict()
my_dict = {1 : "one",
2 : "two",
3 : "three"}
print(my_dict)
my_dict[10] = "ten" # add an item to a dictionary
print(my_dict)
print(my_dict[1]) # access an item in a dictionary
```
Keys and values in a dictionary can be accessed and retrieved as lists:
```
print(my_dict.keys())
print(my_dict.values())
# this allows you to check if a given key is available
a_key = 1
is_key_in_dict = a_key in my_dict.keys()
print("is key in dict: ", is_key_in_dict)
print("value associated to key : ", my_dict[a_key])
```
Like lists, dictionaries can be nested.
---
**Exercise (2):**
Create a nested dictionary and access an object from the 'inner' dictionary.
---
### Mutable vs. immutable data types
There are some subtle consequences resulting from data types being mutable or not. Being aware of those will help you avoid unexpected behavior when using mutable types such as lists.
```
list_1 = [1, 'two', 3, 'four', 5]
list_2 = list_1 # we assign list_1 to a new variable list_2
print('list_1 before: ',list_1)
print('list_2 before: ',list_2)
list_1.append('six') # append an additional element to list_1
print('list_1 after: ',list_1) # this is expected
print('list_2 after: ',list_2) # this may surprise you
list_1[2] = 'three' # change an element in list_1
print('list_1 after: ',list_1) # again, this is expected
print('list_2 after: ',list_2) #
```
We saw that the 'copy' that we created of `list_1` is not a copy but instead just a different name for the same object. We can confirm this by comparing the 'identity' of those two variables.
Python has an `id()` function that returns the 'identity' of an object. This identity has to be unique and constant for this object during its lifetime.
```
print('id(list_1): ',id(list_1))
print('id(list_2): ',id(list_2))
# -> list_1 and list_2 refer to the same object
```
If, instead of a reference, you would like a true copy of a mutable object, you need to use a dedicated function that creates a copy of the object's memory representation. All mutable built-in types provide a `copy()` method for this purpose.
```
list_3 = list_1.copy() # here we create a copy of list_1
print('list_1 before: ', list_1)
print('list_3 before: ',list_3)
list_1.append(7)
print('list_1 after: ',list_1)
print('list_3 after: ',list_3)
print('id(list_1): ', id(list_1)) # list_1 still has the same id
print('id(list_3): ', id(list_3)) # list_3 is an entirely new object
```
Dictionaries, the other mutable type that we have introduced before, show the same behavior.
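A brief sketch of the same reference-vs-copy behavior for dictionaries:

```
dict_1 = {1: "one", 2: "two"}
dict_2 = dict_1          # just another name for the same object
dict_3 = dict_1.copy()   # an independent copy

dict_1[3] = "three"      # modify the original
print(dict_2)            # also contains the new key 3
print(dict_3)            # unchanged
```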
Besides allowing changes to existing content, mutable types also provide methods for adding or removing objects, such as `append()`, `pop()`, `extend()`. Immutable types do not have such methods.
We have seen previously that the `+` operator can be used to concatenate lists, tuples and strings. In contrast to `append()` or `extend()`, this operation always creates new objects.
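This difference can be made visible with the `id()` function introduced above. A small sketch:

```
a = [1, 2]
id_before = id(a)

a.extend([3])               # in-place: 'a' is still the same object
print(id(a) == id_before)

b = a + [4]                 # concatenation: a brand-new list object
print(id(b) == id(a))
```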
*Which* type should I use *when*?
* Use *mutable* objects when you need to change the size of the object. Changes are 'cheap'.
* Use *immutable* objects when you need to ensure that the object will always stay the same. Changes are 'expensive' because a new object is created.
## Control Flows
So far, we have learned the basic data types, and how to interact and manipulate them. This is sufficient for elementary computations, but most computational tasks require some way of encoding logic.
### Conditional Statements & Selective Code Execution
Python's **if/elif/else** statements check whether a condition is `True` or `False` and allow code to be executed selectively, depending on the outcome of these checks.
```
var1 = 1
var2 = 3
if var1 < var2:
    print("Variable 1 < variable 2")
```
---
**Python cares about whitespaces!**
Note that we indented the code inside the if statement. This is very important!
The indentation level indicates the beginning and end of a code 'block' in Python.
Any code line that is part of the block *must* start at the same indentation level.
This is fairly unique among programming languages; most languages use parentheses, braces or specific keywords for indicating beginning and end of code blocks.
---
`if` can be combined with `elif` and `else` to define (multiple) alternative scenarios subject to specific conditions, as well as a scenario for the case that none of the conditions is fulfilled.
```
var1 = 1
var2 = 1
if var1 == var2:
    print("Variable 1 == variable 2")
elif var1 < var2:
    print("Variable 1 < variable 2")
    print("Since this line is part of the elif block it must start at the same indentation level!")
else:
    print("Variable 1 > variable 2")
```
---
**Exercise (3):**
Construct an if/else statement that additionally checks whether `var1` is a factor 2 (or more) smaller than `var2`.
*Hint*: You can use multiple `elif` statements in the same if/else block.
What role does the order of `elif` statements play?
---
### Loops
Loops are used to perform a set of operations repeatedly. Two types of loops exist in Python, the **for loop** and the **while loop**.
#### The for loop
As the name suggests, the `for` loop can be used to iterate over something a certain number of times. Iteration requires an object that is *iterable*. All the 'container' types introduced before (`list`, `tuple`, `dict`) are iterable.
```
for i in [0,1,2,3,4]:
    print(i) # note the indentation again!
```
You don't have to define a list manually every time you want to write a loop!
Python has a function that provides an iterator over integer numbers:
```
print( range(10) ) # this means, start from 0, iterate 10 times,
# equivalent to range(0,10)
print( list(range(10)) ) # this gives a list of the integers 0-9
print( list(range(0,10,2))) # you can also define a stepsize
```
This gives you an easy way to define a loop over integer numbers:
```
# instead of
# for i in [0, 1, 2, 3, 4]:
for i in range(5):
    print(i)
```
We can iterate over *any* list, even if it contains objects that are not integer numbers:
```
my_list = [1, 'two', 3, 'four', 100, ['another', 'list'] ]
for item in my_list:
    print(item)
```
It is often useful to have access to both the position of the current item in e.g. a list and to the item itself. This can be achieved in multiple ways:
```
# iterate over index and access list using index
print("Method 1:")
for i in range(len(my_list)):
    print(i, my_list[i])
# iterate over index and list items simultaneously
print("Method 2:")
for i, item in enumerate(my_list):
    print(i, item)
```
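Dictionaries are also iterable. A brief sketch (not part of the original examples):

```
my_dict = {1: "one", 2: "two", 3: "three"}

for key in my_dict:                 # iterating over a dict yields its keys
    print(key, my_dict[key])

for key, value in my_dict.items():  # items() yields (key, value) pairs
    print(key, value)
```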
#### The while Loop
Instead of iterating for a pre-specified number of times (or number of elements), the `while` loop continues 'looping' until a certain condition is met.
```
i = 0
while i < 10:
    print(i)
    i += 1 # this is a shorthand for i = i + 1
# What would happen if we remove the last line that increments i?
```
We can 'break out' of while loops and skip iterations subject to conditionals:
```
i = 0
while i < 10:
    if i == 3:
        i += 1
        continue # if i==3, we increment i and immediately start a new iteration
                 # -> print statement not executed
    print(i)
    if i == 5: # we stop the loop if i==5
        break
    i += 1
```
## Functions
*Functions* allow you to package parts of your program into reusable units. We have already used some functions, such as `print()` and `len()`, and seen that functions may process some input and return some output.
Functions are defined using the `def` keyword, and called using their name followed by parentheses `()`:
```
def my_function(): # the 'def' keyword defines a function
    print("This is my first function")
my_function() # call function named 'my_function'
```
Note that the code block that defines a function is indented, just as code blocks for conditionals and loops.
A function can have *arguments*, this is the information given as input to the function.
Also, all functions return something. You can specify a specific *return value* using the `return` keyword. If no return value is defined, the function will return `None`. This is the case in `my_function()` above.
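A quick sketch making the implicit `None` return value visible:

```
def my_function():
    print("This is my first function")   # a side effect, but no 'return' statement

result = my_function()
print(result)   # -> None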
```
# function arguments a, b
# return value a + b
def add(a, b):
    return a + b
sum_1_2 = add(1,2) # function arguments are identified by their order
print(sum_1_2)
```
Functions accept two different types of arguments, *regular* and *keyword* arguments.
A function argument becomes a *keyword* argument when a default value is declared in the function definition.
This makes this argument *optional*.
```
# function arguments a, b, c
# b, c have default values
# return value a + b + c
def add_mixed_arguments(a, b=1, c=2):
    return a + b + c
# call function only with required argument
sum_1 = add_mixed_arguments(1)
print(sum_1)
# call function with required argument and one keyword argument
sum_2 = add_mixed_arguments(1, c=10)
print(sum_2)
# What will happen when you call the function only with keyword arguments?
```
Similar to mathematical functions, the **names of function arguments** are just **symbols** that are used to refer to the values that are provided to the function when it is called.
You can choose whatever names you like (subject to some syntax rules) for the function arguments.
These names (and the values to which they refer) exist only in the definition block of the function and can only be used there.
You can think of function arguments as *'local variables'* that only exist in the definition block of the functions.
```
def sum_diff(a, b):
    my_diff = a - b
    my_sum = a + b
    return my_sum
sum_from_function = sum_diff(1, 5)
print( sum_from_function )
#print( my_diff ) # check if you can also access 'my_diff'
# which has been defined in the scope of
# the function 'sum_diff()'
```
To return multiple objects from a function, you can wrap those objects in a container, or simply use the `return` keyword followed by a list of variable names. This will return a tuple that contains the listed objects.
```
def sum_diff(a, b):
    my_diff = a - b
    my_sum = a + b
    return my_sum, my_diff
answer = sum_diff(1, 5)
print( answer )
sum_from_function, diff_from_function = sum_diff(1, 5)
print( sum_from_function )
print( diff_from_function )
```
When your functions become more complex, you will want to make sure that you and possibly others can easily understand what the function does.
In Python, functions are best documented using a `docstring`, that is a triple quoted `"""` possibly multiline string right after the `def` statement. For more information about docstrings, see the official guide on [docstring conventions](https://www.python.org/dev/peps/pep-0257/).
```
def sum_diff(a, b):
    """Returns the sum and difference of variables a and b."""
    my_diff = a - b
    my_sum = a + b
    return my_sum, my_diff
```
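The docstring is stored on the function object itself; it can be accessed as the `__doc__` attribute, which is also what `help()` displays:

```
def sum_diff(a, b):
    """Returns the sum and difference of variables a and b."""
    return a + b, a - b

print(sum_diff.__doc__)   # prints the docstring
help(sum_diff)            # help() displays the same information
```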
*When* should I use functions?
* Probably whenever you find yourself writing (or copy & pasting) code that you have already written before and now want to use in a different context.
*Why* should I use functions?
* Isolating meaningful 'functional units' of code increases readability.
* Fixing an error in an isolated piece of code is much easier than finding and modifying pieces of code spread over multiple scripts.
## 'Everything is an Object in Python'
You may come across the statement that 'everything is an object in Python'.
Python is an [object-oriented](https://en.wikipedia.org/wiki/Object-oriented_programming) programming (OOP) language.
The concepts of *class* and *object* are central to OOPs: A *class* provides the definition for a general type of object by specifying the attributes and methods that any such object should have. An *object* is a concrete realization, a so-called *instance*, of a class. *Objects* may contain data in the form of *attributes* and procedures in the form of *methods*.
'Everything is an object in Python' means that anything that can be used as a value (int, str, float, functions, modules, etc.) is implemented as an object.
These objects may have *attributes* (values associated with them) and *methods* (functions associated with them).
We are not interested in details of OOP here, but it is important to know how to access an object's methods and attributes.
```
a_float = 1.1
print(type(a_float)) # '1.1' is an instance of class 'float'
print(a_float.is_integer()) # every object of type 'float',
# i.e. every instance of the 'float' class
# has a method 'is_integer' that acts on the object itself
1.1.is_integer() # this function can even be called like this
a_list = [1,2,"three"]
print(type(a_list)) # a_list is an instance of class 'list'
a_list.append('4') # the 'list' class defines a method 'append'
# which takes another object and appends it to the list object
print(a_list)
```
Methods and attributes can be accessed via the `.` notation, i.e. by combining `object_name` + `.` + `method_name()` or `attribute_name`.
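Although we will not go into OOP details, a minimal illustrative class may help clarify the class/instance distinction. This sketch is not part of the course material; the `Rectangle` name is made up:

```
class Rectangle:
    """A minimal example class."""
    def __init__(self, width, height):
        self.width = width          # attributes: data attached to the object
        self.height = height
    def area(self):                 # method: a function attached to the object
        return self.width * self.height

r = Rectangle(3, 4)                 # r is an instance of the Rectangle class
print(r.width)                      # access an attribute
print(r.area())                     # call a method
```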
## Libraries
Until now, we have only worked with very basic functions and types.
Suppose we want to compute the square root of 4, $\sqrt{4}$. Usually, such a function is called `sqrt` or similar. Try this...
```
sqrt(4)
```
The error means that Python does not know any object with the name 'sqrt'.
To understand how Python identifies functions we need to introduce the concept of *namespaces* and revisit *scopes*.
A *namespace* is a system that ensures all the names in a program are unique and can be used without any conflict.
Python implements namespaces as dictionaries, i.e. as name-to-object maps where each object is identified by a unique name.
Multiple namespaces can use the same name and map it to a different object, for example:
- *Local Namespace*: Includes names defined locally inside a function. This namespace is created when a function is called and ceases to exist when the function returns.
- *Global Namespace*: Includes names that are used in a project. For example, the namespace of this notebook.
- *Built-in Namespace*: Includes *built-in* [functions](https://docs.python.org/3/library/functions.html), [types](https://docs.python.org/3/library/stdtypes.html), [keywords](https://docs.python.org/3/reference/lexical_analysis.html#keywords) and [exceptions](https://docs.python.org/3/library/exceptions.html).
We can inspect *namespaces* using the `dir()` function.
```
dir() # global namespace
#dir(__builtins__) # built-in names
```
In this notebook, all names listed in the *global namespace* and *built-in namespace* are known to the interpreter.
The 'square-root' function is not among them. To make this function available, we need to declare such a function in the namespace, as we did with the `sum_diff()` function above, or *import* it from a source outside of this notebook.
Python comes with a large set of predefined functions ([Standard Library](https://docs.python.org/3/library/index.html)) that are automatically installed, but not 'built-in', i.e. not immediately available in any namespace.
These functions are grouped by topic into individual *modules*; each module is a file in which those functions are defined.
In fact, *any* importable python file is a module!
How can the functions defined in a *module* be made available in the current global namespace?
```
import math
print(math)
```
We imported the *math* module into the global namespace.
The print statement confirms that `math` is a module that is built into the standard Python distribution. Nevertheless, we need to import it into our namespace explicitly.
We can now call the function `sqrt()` that is defined in this module.
```
print(math.sqrt(4))
```
Try calling `sqrt()` directly, as we did before. This will still fail.
The reason for this is that we have to tell the interpreter where to find the `sqrt()` function. Currently, the 'path' to this function is in the `math` module under the name `sqrt()`, so: `math.sqrt()`.
This is verbose, and sometimes a more concise way of calling functions can be convenient. For example, we could import the function `sqrt` from the module `math` explicitly into the main namespace.
```
from math import sqrt # import single function from module
print(sqrt(4))
```
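As noted above, any importable Python file is a module. A minimal sketch, assuming we can write a file named `my_module.py` (a hypothetical name) to the current working directory:

```
import importlib
import os
import sys

# write a tiny module file (the name 'my_module.py' is made up for this sketch)
with open("my_module.py", "w") as f:
    f.write("def double(x):\n    return 2 * x\n")

sys.path.insert(0, os.getcwd())                  # make sure the current directory is searched
my_module = importlib.import_module("my_module")
print(my_module.double(21))                      # -> 42
```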
It is also possible to import all functions from a module into the current namespace using the '*' wildcard.
```
from math import * # import all functions from module
print(sqrt(9))
```
However, this contaminates the namespace with many new function names, which can have undesired side effects.
A possible complication is *shadowing* due to redefinition of a name in the namespace.
```
sqrt = 5
print(sqrt(9))
```
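If a name has been shadowed like this, one way to recover (a sketch) is to delete the shadowing name and re-import:

```
from math import sqrt

sqrt = 5                # the name 'sqrt' now refers to an integer (shadowing)
print(type(sqrt))

del sqrt                # remove the shadowing name from the namespace
from math import sqrt   # re-import to restore the function
print(sqrt(9))          # -> 3.0
```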
The safest approach is to import only those functions that are actually needed, or to import an entire module by its name. Function calls can be shortened by using an alternative short name (alias) for an imported library. For example, a library for numeric computing that we will be using is frequently imported in this way:
```
import numpy as np # import using short name
print(np.ones( (5,5) ))
```
## 'Introspection' and help
*Introspection* in computing is the ability to examine the type or the properties of objects at runtime.
Python provides a few built-in functions that allow you to do this.
We have used one of those functions before: the `type()` function, which identifies the precise type of an object.
```
print( type("test") )
print( type(7) )
print( type(None) )
print( type([]) )
print( type(()) )
print( type({}) )
```
Another useful function is `dir()`. We have used this function before to inspect the names (functions, variables, etc) defined in a namespace, such as the global namespace or a module.
You can also use it to discover methods (functions) of objects.
For example, applied to a string, it will give you all the methods available for objects of type string.
```
my_string = "This is a String."
dir(my_string)
```
Elements in this list that have leading or trailing double underscores `__` are internal attributes or methods of the object; they are not of interest for now.
Let's try one of the other methods that we have discovered:
```
print(my_string)
print(my_string.lower())
```
Python comes with a help utility. You have access to an interactive help by typing `help()`.
```
help()
```
Or, you can ask for a description of a specific object `xx` by executing `help(xx)`.
This simply prints the docstring of the object `xx`.
Note that when calling help on functions you **must not** include the parentheses `()` that you would otherwise use for calling that function. I.e. `help(math.sqrt)`, *not* `help(math.sqrt())`.
```
help(math.sqrt)
```
A very helpful website for (not only) programming-related questions of all kind is [stackoverflow](https://stackoverflow.com).
When you search for a programming-related question or a specific error message on the web, stackoverflow discussions will be among the first hits.
Colab even has a built-in functionality to search for discussions on stackoverflow when you encounter an error.
```
my_tuple = (1, 2, 3)
my_tuple[2] = 1
```
These discussions are community-based and always consist of a question (at the top) and multiple responses and comments to this question (ranked by degree of community approval).
You will often find that discussions about your question already exist, and reading them usually helps get you on the right track to solving or better understanding your specific problem.
Of course, there is no guarantee for the correctness of these answers, and you need to employ your own judgement...
# Exercises
- In [this exercise](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/exercises/01_ComputingSummaryStatistics.ipynb) you will use lists, control flows and functions for computing basic summary statistics.
- In [Zen of Python](https://github.com/cohmathonc/biosci670/blob/master/IntroductionComputationalMethods/exercises/02_ZenOfPython.ipynb) you will explore the `this` module and work with string and dictionary data types. (**optional**)
##### About
This notebook is part of the *biosci670* course on *Mathematical Modeling and Methods for Biomedical Science*.
See https://github.com/cohmathonc/biosci670 for more information and material.
```
# Configuration related preprocessing step before mounting the drive
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
#Mount the google drive
from google.colab import drive
drive.mount('/content/drive')
import os
# Set Directory path for Dataset
os.chdir("/content/drive/My Drive/Projects/Face_mask")
Dataset='dataset'
Data_Dir=os.listdir(Dataset)[1:3]
print(Data_Dir)
# Import necessary libraries
import cv2
import numpy as np
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
img_rows, img_cols = 112, 112
images = []
labels = []
for category in Data_Dir:
    folder_path = os.path.join('/content/drive/My Drive/Projects/Face_mask/dataset', category)
    for img in os.listdir(folder_path):
        img_path = os.path.join(folder_path, img)
        img = cv2.imread(img_path)
        try:
            # Convert the image to grayscale
            grayscale_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Resize the grayscale image to 112x112 to keep the image sizes consistent
            resized_img = cv2.resize(grayscale_img, (img_rows, img_cols))
            images.append(resized_img)
            labels.append(category)
        # Exception handling in case any error occurs
        except Exception as e:
            print('Exception:', e)
images=np.array(images)/255.0
images=np.reshape(images,(images.shape[0],img_rows, img_cols,1))
# Perform one hot encoding on the labels since the label are in textual form
lb=LabelBinarizer()
labels=lb.fit_transform(labels)
labels = to_categorical(labels)
labels = np.array(labels)
(train_X, test_X, train_y, test_y) = train_test_split(images, labels, test_size=0.25,
random_state=0)
# Import Necessary Keras Libraries
from keras.models import Sequential
from keras.layers import Dense,Activation,Flatten,Dropout
from keras.layers import Conv2D,MaxPooling2D
# Define model paramters
num_classes = 2
batch_size = 32
# Build CNN model using Sequential API
model=Sequential()
#First layer group containing Convolution, Relu and MaxPooling layers
model.add(Conv2D(64,(3,3),input_shape=(img_rows, img_cols, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
#Second layer group containing Convolution, Relu and MaxPooling layers
model.add(Conv2D(128,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
#Flatten and Dropout layers to flatten the stacked convolutional outputs above and to mitigate overfitting
model.add(Flatten())
model.add(Dropout(0.5))
# Dense layers ending in a softmax classifier
model.add(Dense(64,activation='relu'))
model.add(Dense(num_classes,activation='softmax'))
print(model.summary())
# Plot the model
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='face_mask_detection_architecture.png')
## Train the Model
from keras.optimizers import Adam
epochs = 50
model.compile(loss = 'categorical_crossentropy',
optimizer = Adam(lr=0.001),
metrics = ['accuracy'])
fitted_model = model.fit(
train_X,
train_y,
epochs = epochs,
validation_split=0.25)
## Plot the Training Loss & Accuracy
from matplotlib import pyplot as plt
# Plot Training and Validation Loss
plt.plot(fitted_model.history['loss'],'r',label='training loss')
plt.plot(fitted_model.history['val_loss'],label='validation loss')
plt.xlabel('Number of Epochs')
plt.ylabel('Loss Value')
plt.legend()
plt.show()
# Plot Training and Validation Accuracy
plt.plot(fitted_model.history['accuracy'],'r',label='training accuracy')
plt.plot(fitted_model.history['val_accuracy'],label='validation accuracy')
plt.xlabel('Number of Epochs')
plt.ylabel('Accuracy Value')
plt.legend()
plt.show()
# Save or Serialize the model with the name face_mask_detection_alert_system
model.save('face_mask_detection_alert_system.h5')
```
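After reloading the saved model, its two-class softmax output still has to be turned into a label and an alert decision. A minimal post-processing sketch (the class names and their order are assumptions — the actual order comes from `lb.classes_` above, so verify it against your dataset folders):

```python
import numpy as np

def decide_alert(probs, labels=('with_mask', 'without_mask'), threshold=0.5):
    """Map a 2-class softmax output to (label, confidence, alert flag).

    probs: model.predict(...) output for one image, shape (2,).
    labels: assumed class order -- check lb.classes_ before relying on it.
    """
    idx = int(np.argmax(probs))
    confidence = float(probs[idx])
    # Only raise an alert for a confident "no mask" prediction
    alert = labels[idx] == 'without_mask' and confidence >= threshold
    return labels[idx], confidence, alert
```

For example, `decide_alert(np.array([0.1, 0.9]))` returns `('without_mask', 0.9, True)`.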
| github_jupyter |
# SVD Practice.
2018/2/12 - WNixalo
Fastai Computational Linear Algebra (2017) §2: [Topic Modeling w NMF & SVD](https://github.com/fastai/numerical-linear-algebra/blob/master/nbs/2.%20Topic%20Modeling%20with%20NMF%20and%20SVD.ipynb)
facebook research: [Fast Randomized SVD](https://research.fb.com/fast-randomized-svd/)
---
## 1. Singular-Value Decomposition
SVD is a factorization of a real or complex matrix. It factorizes a matrix $A$ into one with **orthonormal columns** $U$, one with **orthonormal rows** $V^T$, and a diagonal matrix of singular values $Σ$ (aka $S$ or $s$ or $σ$) which contains the **relative importance** of each factor.
```
from scipy.stats import ortho_group
import numpy as np
Q = ortho_group.rvs(dim=3)
B = np.random.randint(0,10,size=(3,3))
A = Q@B@Q.T
U,S,V = np.linalg.svd(A, full_matrices=False)
U
S
V
for i in range(3):
print(U[i] @ U[(i+1) % len(U)])
# wraps around
# U[0] @ U[1]
# U[1] @ U[2]
# U[2] @ U[0]
for i in range(len(U)):
print(U[:,i] @ U[:, (i+1)%len(U[0])])
```
Wait so.. the rows of a matrix $A$ are **orthogonal** ***iff*** $AA^T$ is diagonal? Hmm. [Math.StackEx Link](https://math.stackexchange.com/a/784144)
```
np.isclose(np.eye(len(U)), U @ U.T)
np.isclose(np.eye(len(V)), V.T @ V)
```
Wait but that also gives `True` for $VV^T$. Hmmm.
## 2. Truncated SVD
Okay, so SVD is an exact decomposition of a matrix and allows us to pull out distinct topics from data (due to their orthonormality (*orthogonality?*)).
But doing so for a large data corpus is ... bad. Especially if most of the data's meaning / information relevant to us is captured by a small prominent subset. IE: prevalence of articles like *a* and *the* are likely poor indicators of any particular meaning in a piece of text since they're everywhere in English. Likewise for other types of data.
Hmm, so, if I understood correctly, the Σ/S/s/σ matrix is ordered by value max$\rightarrow$min.. but computing the SVD of a large dataset $A$ is exactly what we want to avoid using T-SVD. Okay so how?
$\rightarrow$With full SVD we're calculating the full dimension of topics -- but it's handy to limit to the most important ones -- this is how SVD is used in compression.
*Aha*. This is where I was confused. Truncation is used *with* Randomization in R-SVD. The *Truncated* section was just introducing the concept. Got it.
So that's where, in R-SVD, we use a buffer in addition to the portion of the dataset we take for SVD.
And *yay* `scikit-learn` has R-SVD built in.
```
from sklearn import decomposition
# ofc this is just dummy data to test it works
datavectors = np.random.randint(-1000,1000,size=(10,50))
U,S,V = decomposition.randomized_svd(datavectors, n_components=5)
U.shape, S.shape, V.shape
```
The idea of T-SVD is that we want to compute an approximation to the range of $A$. The range of $A$ is the space covered by the column basis.
ie: `Range(A) = {y: Ax = y}`
that is: all $y$ you can achieve by multiplying $x$ with $A$.
Depending on your space, the bases are vectors that you can take linear combinations of to get any value in your space.
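A tiny NumPy illustration of the range idea (hypothetical data): for a rank-1 matrix, every vector you can reach by multiplying is a scalar multiple of a single basis vector.

```python
import numpy as np

# A rank-1 matrix: its range (column space) is spanned by the single vector u
u = np.array([[1.0], [2.0], [3.0]])
v = np.array([[4.0, 5.0]])
A = u @ v                      # shape (3, 2), rank 1

x = np.array([[0.7], [-1.2]])
y = A @ x                      # some y in Range(A)

# y lies in span{u}: dividing elementwise gives a constant ratio
ratio = y / u
assert np.allclose(ratio, ratio[0, 0])
```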
## 3. Details of Randomized SVD (Truncated)
Our goal is to have an algorithm to perform Truncated SVD using Randomized values from the dataset matrix. We want to use randomization to calculate the topics we're interested in, instead of calculating *all* of them.
Aha. So.. the way to do that, using randomization, is to have a *special kind* of randomization. Find a matrix $Q$ with some special properties that will allow us to pull a matrix that is a near match to our dataset matrix $A$ in the ways we want it to be. Ie: It'll have the same **singular values**, meaning the same importance-ordered topics.
*Wow mathematics is really.. somethin.*
That process:
1. Compute an approximation to the range of $A$. ie: we want $Q$ with $r$ orthonormal columns st:
$$A \approx QQ^TA$$
2. Construct $B = Q^TA$, which is small $(r \times n)$
3. Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$), $B = SΣV^T$
4. Since: $$A \approx QQ^TA = Q(SΣV^T)$$ if we set $U = QS$, then we have a low-rank approximation of $A \approx UΣV^T$.
-- okay so.. confusion here. What is $S$ and $Σ$? Because I see them elsewhere taken to mean the same thing on this subject, but all of a sudden they seem to be totally different things.
-- [oh, so apparently](https://youtu.be/C8KEtrWjjyo?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY&t=5224) $S$ here is actually something different. $Σ$ is what's been interchangeably referred to in Hellenic/Latin letters throughout the notebook.
**NOTE** that $A: m \times n$ while $Q: m \times r$, so $Q$ is generally a tall, skinny matrix and therefore much smaller & easier to compute with than $A$.
Also, because $S$ & $Q$ are both orthonormal, setting $U = QS$ makes $U$ orthonormal as well.
### How do we find Q (in step 1)?
**General Idea:** we find this special $Q$, then we do SVD on this smaller matrix $Q^TA$, and we plug that back in to have our Truncated-SVD for $A$.
And ***HERE*** is where the *Random* part of Randomized SVD comes in! How do we find $Q$?:
We just take a bunch of random vectors $w_i$ and look at / evaluate the subspace formed by $Aw_i$. We form a matrix $W$ with the $w_i$'s as its columns. Then we take the `QR Decomposition` of $AW = QR$. Then the columns of $Q$ form an **orthonormal basis** for $AW$, which is the range of $A$.
Basically a QR Decomposition exists for any matrix, and is an **orthonormal matrix** $\times$ an **upper triangular matrix**.
So basically: we take $AW$, $W$ is random, get the $QR$ -- and a property of the QR-Decomposition is that $Q$ forms an orthonormal basis for $AW$ -- and $AW$ gives the range of $A$.
Since $AW$ has far more rows than columns, it turns out in practice that these columns are approximately orthonormal. It's very unlikely you'll get linearly-dependent columns when you choose random values.
Aand apparently the QR-Decomp is v.foundational to Numerical Linear Algebra.
### How do we choose r?
We chose $Q$ to have $r$ orthonormal columns, and $r$ gives us the dimension of $B$.
We choose $r$ to be the number of topics we want to retrieve $+$ some buffer.
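The four steps above can be sketched directly in NumPy (a minimal sketch; the buffer mentioned here is the `oversample` parameter):

```python
import numpy as np

def randomized_svd(A, r, oversample=5):
    """Truncated SVD of A via a random range finder (steps 1-4 above)."""
    m, n = A.shape
    # Step 1: random projection, then QR to get Q with orthonormal columns
    W = np.random.randn(n, r + oversample)
    Q, _ = np.linalg.qr(A @ W)            # Q: (m, r + oversample)
    # Step 2: project A down to the small matrix B
    B = Q.T @ A                           # B: (r + oversample, n)
    # Step 3: exact SVD of the small matrix (cheap, since B is small)
    S, sigma, Vt = np.linalg.svd(B, full_matrices=False)
    # Step 4: lift back up, U = QS, and truncate to r components
    U = Q @ S
    return U[:, :r], sigma[:r], Vt[:r]
```

On a matrix of exact rank `r` this recovers the matrix almost surely; for noisy data the oversampling buffer controls how well the range is captured.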
See the [lesson notebook](https://github.com/fastai/numerical-linear-algebra/blob/master/nbs/2.%20Topic%20Modeling%20with%20NMF%20and%20SVD.ipynb) and [accompanying lecture time](https://youtu.be/C8KEtrWjjyo?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY&t=5605) for an implementation of Randomized SVD. **NOTE** that Scikit-Learn's implementation is more powerful; the one there is for illustration.
---
## 4. Non-negative Matrix Factorization
[Wiki](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization)
> NMF is a group of algorithms in multivariate analysis and linear algebra where a matrix $V$ is factorized into (usually) two matrices $W$ & $H$, with the property that all three matrices have no negative elements.
[Lecture 2 40:32](https://youtu.be/kgd40iDT8yY?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY&t=2432)
The key thing in SVD is orthogonality -- basically everything is orthogonal to each other -- the key idea in NMF is that nothing is negative. The lower bound is zero-clamped.
**NOTE** your original dataset should be non-negative if you use NMF, or else you won't be able to reconstruct it.
### Idea
> Rather than constraining our factors to be *orthogonal*, another idea would be to constrain them to be *non-negative*. NMF is a factorization of a non-negative dataset $V$: $$V=WH$$ into non-negative matrices $W$, $H$. Often positive factors will be **more easily interpretable** (and this is the reason behind NMF's popularity).
*huh.. really now.?..*
For example if your dataset is a matrix of faces $V$, where each columns holds a vectorized face, then $W$ would be a matrix of column facial features, and $H$ a matrix of column relative importance of features in each image.
### Applications of NMF / Sklearn
NMF is a 'difficult' problem because it is non-convex and NP-hard
NMF looks something like this in schematic form:
```
Documents Topics Topic Importance Indicators
W --------- --- -----------------
o | | | | | ||| | | | | | | | | |
r | | | | | ≈ ||| -----------------
d | | | | | |||
s --------- ---
V W H
```
```
# workflow w NMF is something like this
V = np.random.randint(0, 20, size=(10,10))
m,n = V.shape
d = 5 # num_topics
clsf = decomposition.NMF(n_components=d, random_state=1)
W1 = clsf.fit_transform(V)
H1 = clsf.components_
```
**NOTE**: NMF is non-exact. You'll get something close to the original matrix back.
### NMF Summary:
Benefits: fast and easy to use.
Downsides: took years of research and expertise to create
NOTES:
* For NMF, matrix needs to be at least as tall as it is wide, or we get an error with `fit_transform`
* Can use `min_df` in `CountVectorizer` to only look at words that appear in at least `k` of the split texts.
WNx: Okay, I'm not going to go through and implement NMF in NumPy & PyTorch using SGD today. Maybe later. -- 19:44
[Lecture 2 @ 51:09](https://youtu.be/kgd40iDT8yY?list=PLtmWHNX-gukIc92m1K0P6bIOnZb-mg0hY&t=3069)
| github_jupyter |
```
import numpy as np
import pandas as pd
from math import floor, ceil
from numpy.linalg import cholesky, inv, solve
from scipy.linalg import cho_solve
from scipy.stats import wishart, invwishart, gamma
#from lifetimes import BetaGeoFitter, GammaGammaFitter
#from lifetimes.utils import calibration_and_holdout_data, summary_data_from_transaction_data
#from lifetimes.plotting import plot_calibration_purchases_vs_holdout_purchases, plot_period_transactions
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
def load_dataset(datafile, parse_dates=None):
df = pd.read_csv(datafile, delimiter=',', parse_dates=parse_dates)
return df
```
## Implementation
```
# x ==> number of repeat purchases
# t ==> First purchase to last purchase
# T ==> First purchase to end of observation period
# Setup Regressors (Covariates) for location of 1st-stage prior, i.e. beta = [log(lambda), log(mu)]
def set_regressors(data, covariates=[]):
data['intercept'] = 1.0
covariates = ['intercept'] + covariates
covars = np.matrix(data[covariates])
K = len(covariates)
return covariates, covars, K
def get_diag(shape, val):
d = np.zeros(shape=shape)
np.fill_diagonal(d, val)
return d
def get_map_from_array(x):
a_map = {}
count = 0
for val in x:
a_map[val] = count
count += 1
return a_map
# set hyper priors "log_lambda", "log_mu"
def set_hyperpriors(K):
beta_0 = np.zeros(shape=(K, 2))
A_0 = get_diag(shape=(K, K), val=0.01) # diffuse precision matrix
# set diffuse hyper-parameters for 2nd-stage prior of gamma_0; follows defaults from rmultireg example
nu_00 = 3 + K # 30
gamma_00 = get_diag(shape=(2, 2), val=nu_00) # diffuse precision matrix
hyper_prior = {'beta_0': beta_0, 'A_0':A_0, 'nu_00':nu_00, 'gamma_00':gamma_00}
return hyper_prior
def draw_z(data, level_1, level_1_params_map):
tx = data['t_cal']
Tcal = data['T_cal']
p_lambda = level_1[level_1_params_map['lambda'], ]
p_mu = level_1[level_1_params_map['mu'], ]
mu_lam = p_mu + p_lambda
t_diff = Tcal - tx
prob = 1 / (1 + (p_mu / mu_lam) * (np.exp(mu_lam * t_diff) - 1))
z = (np.random.uniform(size=len(prob)) < prob)
z[z == True] = 1
z = z.astype(int)
return list(z.values)
def draw_tau(data, level_1, level_1_params_map):
N = len(data)
tx = data['t_cal']
Tcal = data['T_cal']
p_lambda = level_1[level_1_params_map['lambda'], ]
p_mu = level_1[level_1_params_map['mu'], ]
mu_lam = p_mu + p_lambda
z = level_1[level_1_params_map['z'], ]
alive = (z == 1)
tau = np.zeros(shape=(N))
# Case: still alive - left truncated exponential distribution -> [T.cal, Inf]
if (np.sum(alive) > 0):
tau[alive] = Tcal[alive] + np.random.exponential(scale=1.0/p_mu[alive], size=np.sum(alive))
# Case: churned - double truncated exponential distribution -> [tx, T.cal]
if (np.sum(~alive) > 0):
mu_lam_tx = np.minimum(700, mu_lam[~alive] * tx[~alive])
mu_lam_Tcal = np.minimum(700, mu_lam[~alive] * Tcal[~alive])
rand = np.random.uniform(size=np.sum(~alive))
tau[~alive] = (-1.0 * np.log((1.0 - rand) * np.exp(-1.0 * mu_lam_tx) + rand * np.exp((-1.0 * mu_lam_Tcal)))) / mu_lam[~alive]
return tau
def chol2inv(chol):
return cho_solve((chol, False), np.eye(chol.shape[0]))
def draw_wishart(df, scale):
W = wishart.rvs(df, scale)
IW = inv(W)
C = cholesky(W).T
CI = inv(C)
return W, IW, C, CI
def rmultireg(Y, X, Bbar, A, nu, V):
# standard multi-variate normal regression update
# Slide 33 in http://ice.uchicago.edu/2008_presentations/Rossi/ICE_tutorial_2008.pdf
n = Y.shape[0]
m = Y.shape[1]
k = X.shape[1]
RA = cholesky(A)
W = np.concatenate((X, RA), axis=0)
Z = np.concatenate((Y, RA*Bbar), axis=0)
IR = solve(np.triu(cholesky(np.dot(W.T, W)).T), np.eye(k,k)) #trimatu interprets the matrix as upper triangular and makes solve more efficient
Btilde = np.dot(np.dot(IR, IR.T), np.dot(W.T,Z))
E = Z - np.dot(W, Btilde)
S = np.dot(E.T, E)
W, IW, C, CI = draw_wishart(df=nu+n, scale=chol2inv(cholesky(V+S).T))
samples = np.random.normal(size=k*m).reshape(k,m)
B = Btilde + np.dot(IR, np.dot(samples, CI.T))
return {'beta': B.T, 'gamma':IW}
def draw_level_2(covars, level_1, level_1_params_map, hyper_prior):
# standard multi-variate normal regression update
Y = np.log(level_1[[level_1_params_map['lambda'], level_1_params_map['mu']],].T)
X = covars
Bbar = hyper_prior['beta_0']
A = hyper_prior['A_0']
nu = hyper_prior['nu_00']
V = hyper_prior['gamma_00']
return rmultireg(Y, X, Bbar, A, nu, V)
def log_post(log_theta, mvmean, x, z, Tcal, tau, inv_gamma):
log_lambda = log_theta[0,:]
log_mu = log_theta[1,:]
diff_theta = np.subtract(log_theta, mvmean.T)
diff_lambda = diff_theta[0,:]
diff_mu = diff_theta[1,:]
likel = (x * log_lambda) + ((1 - z) * log_mu) - (((z * Tcal) + (1 - z) * tau) * (np.exp(log_lambda) + np.exp(log_mu)))
prior = -0.5 * ((np.square(diff_lambda) * inv_gamma[0, 0]) + (2 * np.multiply(diff_lambda, diff_mu) * inv_gamma[0, 1]) + (np.square(diff_mu) * inv_gamma[1, 1]))
post = np.add(likel[0], prior)
post[0,log_mu > 5] = np.NINF # cap !!
return post
def step(cur_log_theta, cur_post, gamma, N, mvmean, x, z, Tcal, tau, inv_gamma):
new_log_theta = cur_log_theta + np.vstack((gamma[0, 0] * np.random.standard_t(df=3, size=N), gamma[1, 1] * np.random.standard_t(df=3, size=N)))
new_log_theta[0,:] = np.maximum(np.minimum(new_log_theta[0,:], 70), -70)
new_log_theta[1,:] = np.maximum(np.minimum(new_log_theta[1,:], 70), -70)
new_post = log_post(new_log_theta, mvmean, x, z, Tcal, tau, inv_gamma)
# accept/reject new proposal
mhratio = np.exp(new_post - cur_post)
unif = np.random.uniform(size=N)
accepted = np.asarray(mhratio > unif)[0]
cur_log_theta[:,accepted] = new_log_theta[:, accepted]
cur_post[0,accepted] = new_post[0,accepted]
return {'cur_log_theta':cur_log_theta, 'cur_post':cur_post}
def draw_level_1(data, covars, level_1, level_1_params_map, level_2):
# sample (lambda, mu) given (z, tau, beta, gamma)
N = len(data)
x = data['x_cal']
Tcal = data['T_cal']
z = level_1[level_1_params_map['z'], ]
tau = level_1[level_1_params_map['tau'], ]
mvmean = np.dot(covars, level_2['beta'].T)
gamma = level_2['gamma']
inv_gamma = inv(gamma)
cur_lambda = level_1[level_1_params_map['lambda'], ]
cur_mu = level_1[level_1_params_map['mu'], ]
# current state
cur_log_theta = np.vstack((np.log(cur_lambda), np.log(cur_mu)))
cur_post = log_post(cur_log_theta, mvmean, x, z, Tcal, tau, inv_gamma)
iter = 1 # how high do we need to set this? 1/5/10/100?
for i in range(0, iter):
draw = step(cur_log_theta, cur_post, gamma, N, mvmean, x, z, Tcal, tau, inv_gamma)
cur_log_theta = draw['cur_log_theta']
cur_post = draw['cur_post']
cur_theta = np.exp(cur_log_theta)
return {'lambda':cur_theta[0,:], 'mu':cur_theta[1,:]}
def run_single_chain(data, covariates, K, hyper_prior, nsample, nburnin, nskip):
## initialize arrays for storing draws ##
LOG_LAMBDA = 0
LOG_MU = 1
nr_of_cust = len(data)
#nr_of_draws = nburnin + nsample * nskip
nr_of_draws = nburnin + nsample
# The 4 is for "lambda", "mu", "tau", "z"
level_1_params_map = get_map_from_array(['lambda', 'mu', 'tau', 'z'])
level_1_draws = np.zeros(shape=(nsample, 4, nr_of_cust))
level_2_draws = np.zeros(shape=(nsample, (2*K)+3))
nm = ['log_lambda', 'log_mu']
if (K > 1):
nm = ['{}_{}'.format(val2, val1) for val1 in covariates for val2 in nm]
nm.extend(['var_log_lambda', 'cov_log_lambda_log_mu', 'var_log_mu'])
level_2_params_map = get_map_from_array(nm)
## initialize parameters ##
data['t_cal_tmp'] = data['t_cal']
data.loc[data.t_cal == 0, 't_cal_tmp'] = data.loc[data.t_cal == 0, 'T_cal']
level_1 = level_1_draws[1,]
x_cal_mean = np.mean(data['x_cal'])
t_cal_tmp_mean = np.mean(data['t_cal_tmp'])
level_1[level_1_params_map['lambda'], ] = x_cal_mean/t_cal_tmp_mean
level_1[level_1_params_map['mu'], ] = 1 / (data['t_cal'] + 0.5 / level_1[level_1_params_map['lambda'], ])
## run MCMC chain ##
hyper_prior['beta_0'][0, LOG_LAMBDA] = np.log(np.mean(level_1[level_1_params_map['lambda'], ]))
hyper_prior['beta_0'][0, LOG_MU] = np.log(np.mean(level_1[level_1_params_map['mu'], ]))
for i in range(0, nr_of_draws):
# draw individual-level parameters
level_1[level_1_params_map['z'], ] = draw_z(data, level_1, level_1_params_map)
level_1[level_1_params_map['tau'], ] = draw_tau(data, level_1, level_1_params_map)
level_2 = draw_level_2(covars, level_1, level_1_params_map, hyper_prior)
draw = draw_level_1(data, covars, level_1, level_1_params_map, level_2)
level_1[level_1_params_map['lambda'], ] = draw["lambda"]
level_1[level_1_params_map['mu'], ] = draw["mu"]
#nk = int(round((i - nburnin) / nskip))
if (i >= nburnin):
#Store
idx = i - nburnin
level_1_draws[idx,:,:] = level_1 # nolint
level_2_draws[idx,:] = list(np.array(level_2['beta'].T).reshape(-1)) + [level_2['gamma'][0, 0], level_2['gamma'][0, 1], level_2['gamma'][1,1]]
if (i % 100) == 0:
print('draw: {}'.format(i))
coeff_mean = np.mean(level_2_draws, axis=0)
coeff_stddev = np.std(level_2_draws, axis=0)
coeff = {}
for param in level_2_params_map:
coeff[param] = {}
coeff[param]['mean'] = coeff_mean[level_2_params_map[param]]
coeff[param]['stddev'] = coeff_stddev[level_2_params_map[param]]
return {"level_1":level_1_draws, "level_1_params_map":level_1_params_map
, "level_2":level_2_draws, "level_2_params_map":level_2_params_map
, "coeff": coeff}
####MCMC Functions
def get_correlation(draws):
l2pmap = draws["level_2_params_map"]
draw_means = np.mean(draws['level_2'], axis=0)
corr = draw_means[l2pmap['cov_log_lambda_log_mu']]/(np.sqrt(draw_means[l2pmap['var_log_lambda']]) * np.sqrt(draw_means[l2pmap['var_log_mu']]))
return corr
def get_nr_of_cust(draws):
nr_of_cust = draws["level_1"].shape[2]
return nr_of_cust
def PAlive(draws):
l1pmap = draws["level_1_params_map"]
nr_of_cust = get_nr_of_cust(draws)
p_alive = np.mean(draws["level_1"][:,l1pmap['z'],:], axis=0)
return p_alive
def draw_left_truncated_gamma(lower, k, lamda):
pg = gamma.cdf(x=lower, a=k, scale=1.0/(k*lamda))
rand = np.random.uniform(1, pg, 1)
qg = gamma.ppf(q=rand, a=k, scale=1.0/(k*lamda))
return qg
def DrawFutureTransactions(data, draws, sample_size=None):
nr_of_draws = draws["level_2"].shape[0]
if sample_size is not None:
nr_of_draws = sample_size
nr_of_cust = get_nr_of_cust(draws)
parameters = draws["level_1_params_map"]
x_holdout = np.zeros(shape=(nr_of_draws, nr_of_cust))
t_cal = data['t_cal']
T_holdout = data['T_holdout']
T_cal = data['T_cal']
for i in range(0, nr_of_cust):
print('...processing customer: {} of {}'.format(i, nr_of_cust))
Tcal = T_cal[i]
Tholdout = T_holdout[i]
tcal = t_cal[i]
taus = draws['level_1'][:,parameters['tau'],i]
ks = np.ones(shape=(len(taus)))
lamdas = draws['level_1'][:,parameters['lambda'],i]
if sample_size is not None:
taus = taus[:sample_size]
ks = ks[:sample_size]
lamdas = lamdas[:sample_size]
alive = taus > Tcal
# Case: customer alive
idx = 0
for alive_val in alive:
if alive_val:
# sample itt which is larger than (Tcal-tx)
itts = draw_left_truncated_gamma(Tcal - tcal, ks[idx], lamdas[idx])
# sample 'sufficiently' large amount of inter-transaction times
minT = np.minimum(Tcal + Tholdout - tcal, taus[idx] - tcal)
nr_of_itt_draws = int(np.maximum(10, np.round(minT * lamdas[idx])))
itts = np.hstack((itts, np.array(gamma.rvs(a=ks[idx], scale=1.0/(ks[idx]*lamdas[idx]), size=nr_of_itt_draws*2))))
if (np.sum(itts) < minT):
itts = np.hstack((itts, np.array(gamma.rvs(a=ks[idx], scale=1.0/(ks[idx]*lamdas[idx]), size=nr_of_itt_draws*4))))
if (np.sum(itts) < minT):
itts = np.hstack((itts, np.array(gamma.rvs(a=ks[idx], scale=1.0/(ks[idx]*lamdas[idx]), size=nr_of_itt_draws*800))))
if (np.sum(itts) < minT):
print("...not enough inter-transaction times sampled! cust: {}, draw: {}, {} < {}".format(i, idx, np.sum(itts), minT))
x_holdout[idx, i] = np.sum(np.cumsum(itts) < minT)
idx += 1
if (np.any(~alive)):
x_holdout[~alive, i] = 0
return x_holdout
def PActive(x_holdout_draws):
nr_of_cust = x_holdout_draws.shape[1]
p_active = np.zeros(shape=(nr_of_cust))
for i in range(0, nr_of_cust):
cd = x_holdout_draws[:,i]
p_active[i] = np.mean(cd > 0) # share of draws with at least one holdout transaction
return p_active
# Main routine
g_datafolder = '/development/data'
g_cbs_dataset = '{}/cbs.csv'.format(g_datafolder)
parse_dates = ['first']
df = load_dataset(g_cbs_dataset, parse_dates=parse_dates)
df.head()
covariates, covars, K = set_regressors(df, covariates=["first_sales"])
hyper_prior = set_hyperpriors(K)
draws = run_single_chain(df, covariates=covariates, K=K, hyper_prior=hyper_prior, nsample=500, nburnin=500, nskip=10)
x_holdout_draws = DrawFutureTransactions(df, draws, sample_size=None)
df['x_predicted'] = np.mean(x_holdout_draws, axis=0)
p_alive = PAlive(draws)
df['palive'] = p_alive
draws['coeff']
df.head(n=50)
```
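The CBS quantities consumed above (`x_cal`, `t_cal`, `T_cal`) are per-customer summaries of a raw transaction log. A minimal sketch of that derivation (a hypothetical helper, assuming weekly time units; adjust to however your `cbs.csv` was built):

```python
from datetime import date

def cbs_summary(purchase_dates, calibration_end):
    """Summarize one customer's purchase dates into (x, t_cal, T_cal).

    x     : number of repeat purchases within the calibration period
    t_cal : weeks from first purchase to last calibration-period purchase
    T_cal : weeks from first purchase to end of the calibration period
    """
    first = min(purchase_dates)
    cal_dates = [d for d in purchase_dates if d <= calibration_end]
    last = max(cal_dates)
    x = len(cal_dates) - 1                         # repeat purchases only
    t_cal = (last - first).days / 7.0
    T_cal = (calibration_end - first).days / 7.0
    return x, t_cal, T_cal
```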
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Eager Execution
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/eager"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/eager.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
TensorFlow's eager execution is an imperative programming environment that
evaluates operations immediately, without building graphs: operations return
concrete values instead of constructing a computational graph to run later. This
makes it easy to get started with TensorFlow and debug models, and it
reduces boilerplate as well. To follow along with this guide, run the code
samples below in an interactive `python` interpreter.
Eager execution is a flexible machine learning platform for research and
experimentation, providing:
* *An intuitive interface*—Structure your code naturally and use Python data
structures. Quickly iterate on small models and small data.
* *Easier debugging*—Call ops directly to inspect running models and test
changes. Use standard Python debugging tools for immediate error reporting.
* *Natural control flow*—Use Python control flow instead of graph control
flow, simplifying the specification of dynamic models.
Eager execution supports most TensorFlow operations and GPU acceleration. For a
collection of examples running in eager execution, see:
[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).
Note: Some models may experience increased overhead with eager execution
enabled. Performance improvements are ongoing, but please
[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a
problem and share your benchmarks.
## Setup and basic usage
To start eager execution, add `tf.enable_eager_execution()` to the beginning of
the program or console session. Do not add this operation to other modules that
the program calls.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()
```
Now you can run TensorFlow operations and the results will return immediately:
```
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
```
Enabling eager execution changes how TensorFlow operations behave—now they
immediately evaluate and return their values to Python. `tf.Tensor` objects
reference concrete values instead of symbolic handles to nodes in a computational
graph. Since there isn't a computational graph to build and run later in a
session, it's easy to inspect results using `print()` or a debugger. Evaluating,
printing, and checking tensor values does not break the flow for computing
gradients.
Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy
operations accept `tf.Tensor` arguments. TensorFlow
[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert
Python objects and NumPy arrays to `tf.Tensor` objects. The
`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
```
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
```
## Dynamic control flow
A major benefit of eager execution is that all the functionality of the host
language is available while your model is executing. So, for example,
it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
```
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
```
This has conditionals that depend on tensor values and it prints these values
at runtime.
## Build a model
Many machine learning models are represented by composing layers. When
using TensorFlow with eager execution you can either write your own layers or
use a layer provided in the `tf.keras.layers` package.
While you can use any Python object to represent a layer,
TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from
it to implement your own layer:
```
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
```
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has
a superset of its functionality (it can also add a bias).
When composing layers into models you can use `tf.keras.Sequential` to represent
models which are a linear stack of layers. It is easy to use for basic models:
```
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
```
Alternatively, organize models in classes by inheriting from `tf.keras.Model`.
This is a container for layers that is a layer itself, allowing `tf.keras.Model`
objects to contain other `tf.keras.Model` objects.
```
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
```
It's not required to set an input shape for the `tf.keras.Model` class since
the parameters are set the first time input is passed to the layer.
`tf.keras.layers` classes create and contain their own model variables that
are tied to the lifetime of their layer objects. To share layer variables, share
their objects.
## Eager training
### Computing gradients
[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)
is useful for implementing machine learning algorithms such as
[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training
neural networks. During eager execution, use `tf.GradientTape` to trace
operations for computing gradients later.
`tf.GradientTape` is an opt-in feature to provide maximal performance when
not tracing. Since different operations can occur during each call, all
forward-pass operations get recorded to a "tape". To compute the gradient, play
the tape backwards and then discard. A particular `tf.GradientTape` can only
compute one gradient; subsequent calls throw a runtime error.
```
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
```
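The `2.0` returned by the tape above can be sanity-checked without TensorFlow: a central finite difference approximates the same derivative (a pure-Python sketch, not part of the guide):

```python
def numeric_grad(f, w, eps=1e-6):
    # central finite difference: (f(w + eps) - f(w - eps)) / (2 * eps)
    return (f(w + eps) - f(w - eps)) / (2 * eps)

loss = lambda w: w * w
grad = numeric_grad(loss, 1.0)   # analytic gradient d(w^2)/dw = 2w, i.e. 2.0 at w = 1
```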
### Train a model
The following example creates a multi-layer model that classifies the standard
MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
trainable graphs in an eager execution environment.
```
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
```
Even without training, call the model and inspect the output in eager execution:
```
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
```
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
```
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
```
### Variables and optimizers
`tf.Variable` objects store mutable `tf.Tensor` values accessed during
training to make automatic differentiation easier. The parameters of a model can
be encapsulated in classes as variables.
Better encapsulate model parameters by using `tf.Variable` with
`tf.GradientTape`. For example, the automatic differentiation example above
can be rewritten:
```
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
```
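The same training recipe can be sketched without TensorFlow at all. The plain-Python version below (an illustration, not the guide's code) applies identical mean-squared-error gradient updates to scalar `W` and `B` on noise-free `y = 3x + 2` data:

```python
# Plain-Python sketch of the loop above, fitting y = 3x + 2 without noise.
# W and B play the roles of model.W and model.B.
xs = [(i - 50) / 25.0 for i in range(100)]   # inputs roughly in [-2, 2)
ys = [3 * x + 2 for x in xs]

W, B = 5.0, 10.0   # same initial values as the Model class above
lr = 0.01          # same learning rate
n = len(xs)

for step in range(300):
    # Gradients of mean squared error with respect to W and B.
    dW = sum(2 * (W * x + B - y) * x for x, y in zip(xs, ys)) / n
    dB = sum(2 * (W * x + B - y) for x, y in zip(xs, ys)) / n
    W -= lr * dW
    B -= lr * dB

print(W, B)   # approaches W = 3, B = 2
```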
## Use objects for state during eager execution
With graph execution, program state (such as the variables) is stored in global
collections and their lifetime is managed by the `tf.Session` object. In
contrast, during eager execution the lifetime of state objects is determined by
the lifetime of their corresponding Python object.
### Variables are objects
During eager execution, variables persist until the last reference to the object
is removed, at which point they are deleted.
```
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
```
### Object-based saving
`tf.train.Checkpoint` can save and restore `tf.Variable`s to and from
checkpoints:
```
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
```
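The idea behind object-based saving can be sketched in plain Python: a checkpoint object records the state of the objects it was given and can write that state back later. This is only an analogy for the pattern, not TensorFlow's implementation:

```python
# Minimal analogy for object-based saving: the "checkpoint" maps names
# to saved values, and restore() writes them back into live objects.
class Var:
    def __init__(self, value):
        self.value = value
    def assign(self, value):
        self.value = value

class Checkpoint:
    def __init__(self, **objects):
        self.objects = objects
        self.saved = {}
    def save(self):
        # Snapshot the current value of every tracked object.
        self.saved = {name: obj.value for name, obj in self.objects.items()}
    def restore(self):
        # Write the snapshot back into the live objects.
        for name, obj in self.objects.items():
            obj.assign(self.saved[name])

x = Var(10.0)
ckpt = Checkpoint(x=x)
x.assign(2.0)
ckpt.save()
x.assign(11.0)   # change after saving
ckpt.restore()
print(x.value)   # => 2.0, the saved value
```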
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,
without requiring hidden variables. To record the state of a `model`,
an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
```
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
### Object-oriented metrics
Metrics are stored as objects. Update a metric by passing the new data to
the callable, and retrieve the result using the object's `result()` method,
for example:
```
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
```
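The stateful behavior of the metric above can be sketched in a few lines of plain Python; this running-mean class is an illustration of the idea, not the `tf.keras.metrics` implementation:

```python
class Mean:
    """Running mean that accepts scalars or lists, like the metric above."""
    def __init__(self):
        self.total = 0.0
        self.count = 0
    def __call__(self, values):
        # Accept either a single number or a batch of numbers.
        if not isinstance(values, (list, tuple)):
            values = [values]
        self.total += sum(values)
        self.count += len(values)
        return self.result()
    def result(self):
        return self.total / self.count

m = Mean()
m(0)
m(5)
print(m.result())   # => 2.5
m([8, 9])
print(m.result())   # => 5.5
```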
#### Summaries and TensorBoard
[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for
understanding, debugging and optimizing the model training process. It uses
summary events that are written while executing the program.
The TensorFlow 1 summary ops do not work during eager execution, but summaries can be written with the v2 API, available through the `compat.v2` module:
```
from tensorflow.compat.v2 import summary
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# your model code goes here
summary.scalar('global_step', global_step, step=global_step)
!ls tb/
```
## Advanced automatic differentiation topics
### Dynamic models
`tf.GradientTape` can also be used in dynamic models. This example for a
[backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search)
algorithm looks like normal NumPy code, except that it computes gradients and is
differentiable, despite the complex control flow:
```
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
```
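The same control flow works with ordinary Python floats. This sketch replaces the tape with an analytic gradient and uses the hypothetical objective f(x) = (x - 3)^2; it is an illustration of the algorithm, not the guide's code:

```python
def line_search_step(f, grad_f, init_x, rate=1.0):
    # Shrink the step size until the sufficient-decrease condition holds.
    value = f(init_x)
    g = grad_f(init_x)
    grad_norm = g * g
    init_value = value
    x = init_x
    while value > init_value - rate * grad_norm:
        x = init_x - rate * g
        value = f(x)
        rate /= 2.0
    return x, value

f = lambda x: (x - 3.0) ** 2      # hypothetical objective
grad_f = lambda x: 2.0 * (x - 3.0)
x, value = line_search_step(f, grad_f, 10.0)
print(x, value)   # value is smaller than f(10.0) = 49.0
```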
### Custom gradients
Custom gradients are an easy way to override gradients in eager and graph
execution. Within the forward function, define the gradient with respect to the
inputs, outputs, or intermediate results. For example, here's an easy way to clip
the norm of the gradients in the backward pass:
```
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
```
Custom gradients are commonly used to provide a numerically stable gradient for a
sequence of operations:
```
def log1pexp(x):
return tf.log(1 + tf.exp(x))
class Grad(object):
def __init__(self, f):
self.f = f
def __call__(self, x):
x = tf.convert_to_tensor(x)
with tf.GradientTape() as tape:
tape.watch(x)
r = self.f(x)
g = tape.gradient(r, x)
return g
grad_log1pexp = Grad(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.).numpy()
```
Here, the `log1pexp` function can be analytically simplified with a custom
gradient. The implementation below reuses the value for `tf.exp(x)` that is
computed during the forward pass—making it more efficient by eliminating
redundant calculations:
```
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = Grad(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.).numpy()
```
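The instability and its fix can also be seen with the standard `math` module (a standalone illustration; note that Python floats are 64-bit, so overflow happens near x = 710 rather than at x = 100 as with float32):

```python
import math

def naive_grad(x):
    # d/dx log(1 + exp(x)) written directly; overflows for large x.
    e = math.exp(x)
    return e / (1 + e)

def stable_grad(x):
    # Algebraically identical: 1 - 1/(1 + exp(x)) == 1/(1 + exp(-x)).
    # exp(-x) underflows harmlessly to 0.0 for large positive x.
    return 1.0 / (1.0 + math.exp(-x))

print(stable_grad(0.0))     # 0.5, same as the naive form at x = 0
try:
    naive_grad(1000.0)
except OverflowError:
    print("naive form overflows at x = 1000")
print(stable_grad(1000.0))  # 1.0
```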
## Performance
Computation is automatically offloaded to GPUs during eager execution. If you
want control over where a computation runs you can enclose it in a
`tf.device('/gpu:0')` block (or the CPU equivalent):
```
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tf.test.is_gpu_available():
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
```
A `tf.Tensor` object can be copied to a different device to execute its
operations:
```
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
```
### Benchmarks
For compute-heavy models, such as
[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)
training on a GPU, eager execution performance is comparable to graph execution.
But the gap grows for models with less computation, and there is work to be done
to optimize hot code paths for models with lots of small operations.
## Work with graphs
While eager execution makes development and debugging more interactive,
TensorFlow graph execution has advantages for distributed training, performance
optimizations, and production deployment. However, writing graph code can feel
different than writing regular Python code and more difficult to debug.
For building and training graph-constructed models, the Python program first
builds a graph representing the computation, then invokes `Session.run` to send
the graph for execution on the C++-based runtime. This provides:
* Automatic differentiation using static autodiff.
* Simple deployment to a platform independent server.
* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).
* Compilation and kernel fusion.
* Automatic distribution and replication (placing nodes on the distributed system).
Deploying code written for eager execution is more difficult: either generate a
graph from the model, or run the Python runtime and code directly on the server.
### Write compatible code
The same code written for eager execution will also build a graph during graph
execution. Do this by simply running the same code in a new Python session where
eager execution is not enabled.
Most TensorFlow operations work during eager execution, but there are some things
to keep in mind:
* Use `tf.data` for input processing instead of queues. It's faster and easier.
* Use object-oriented layer APIs—like `tf.keras.layers` and
`tf.keras.Model`—since they have explicit storage for variables.
* Most model code works the same during eager and graph execution, but there are
exceptions. (For example, dynamic models using Python control flow to change the
computation based on inputs.)
* Once eager execution is enabled with `tf.enable_eager_execution`, it
cannot be turned off. Start a new Python session to return to graph execution.
It's best to write code for both eager execution *and* graph execution. This
gives you eager's interactive experimentation and debuggability with the
distributed performance benefits of graph execution.
Write, debug, and iterate in eager execution, then import the model graph for
production deployment. Use `tf.train.Checkpoint` to save and restore model
variables; this allows movement between eager and graph execution environments.
See the examples in:
[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).
### Use eager execution in a graph environment
Selectively enable eager execution in a TensorFlow graph environment using
`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*
been called.
```
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tf.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TF Lattice Aggregate Function Models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/aggregate_function_learning_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/aggregate_function_learning_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/aggregate_function_learning_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/aggregate_function_learning_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
TFL Premade Aggregate Function Models are quick and easy ways to build TFL `tf.keras.Model` instances for learning complex aggregation functions. This guide outlines the steps needed to construct a TFL Premade Aggregate Function Model and train/test it.
## Setup
Installing TF Lattice package:
```
#@test {"skip": true}
!pip install tensorflow-lattice pydot
```
Importing required packages:
```
import tensorflow as tf
import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
```
Downloading the Puzzles dataset:
```
train_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()
test_dataframe = pd.read_csv(
'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()
```
Extract and convert features and labels
```
# Features:
# - star_rating rating out of 5 stars (1-5)
# - word_count number of words in the review
# - is_amazon 1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo if the review includes a photo of the puzzle
# - num_helpful number of people that found this review helpful
# - num_reviews total number of reviews for this puzzle (we construct)
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',
'num_reviews'
]
def extract_features(dataframe, label_name):
# First we extract flattened features.
flattened_features = {
feature_name: dataframe[feature_name].values.astype(float)
for feature_name in feature_names[:-1]
}
# Construct mapping from puzzle name to feature.
star_rating = collections.defaultdict(list)
word_count = collections.defaultdict(list)
is_amazon = collections.defaultdict(list)
includes_photo = collections.defaultdict(list)
num_helpful = collections.defaultdict(list)
labels = {}
# Extract each review.
for i in range(len(dataframe)):
row = dataframe.iloc[i]
puzzle_name = row['puzzle_name']
star_rating[puzzle_name].append(float(row['star_rating']))
word_count[puzzle_name].append(float(row['word_count']))
is_amazon[puzzle_name].append(float(row['is_amazon']))
includes_photo[puzzle_name].append(float(row['includes_photo']))
num_helpful[puzzle_name].append(float(row['num_helpful']))
labels[puzzle_name] = float(row[label_name])
# Organize data into list of list of features.
names = list(star_rating.keys())
star_rating = [star_rating[name] for name in names]
word_count = [word_count[name] for name in names]
is_amazon = [is_amazon[name] for name in names]
includes_photo = [includes_photo[name] for name in names]
num_helpful = [num_helpful[name] for name in names]
num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]
labels = [labels[name] for name in names]
# Flatten num_reviews
flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]
# Convert data into ragged tensors.
star_rating = tf.ragged.constant(star_rating)
word_count = tf.ragged.constant(word_count)
is_amazon = tf.ragged.constant(is_amazon)
includes_photo = tf.ragged.constant(includes_photo)
num_helpful = tf.ragged.constant(num_helpful)
num_reviews = tf.ragged.constant(num_reviews)
labels = tf.constant(labels)
# Now we can return our extracted data.
return (star_rating, word_count, is_amazon, includes_photo, num_helpful,
num_reviews), labels, flattened_features
train_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')
test_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
```
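The core pattern inside `extract_features`, grouping per-review rows into variable-length per-puzzle lists, can be isolated in a small sketch (the rows and values below are made up for illustration):

```python
import collections

# Hypothetical review rows: (puzzle_name, star_rating)
rows = [
    ('cats', 5.0), ('cats', 4.0), ('boats', 3.0),
    ('cats', 2.0), ('boats', 5.0),
]

# Group ratings by puzzle, exactly as the tutorial does per feature.
star_rating = collections.defaultdict(list)
for puzzle_name, rating in rows:
    star_rating[puzzle_name].append(rating)

# Each puzzle now owns a variable-length list of ratings -- the ragged
# structure that tf.ragged.constant is built from in the tutorial.
names = list(star_rating.keys())
ragged = [star_rating[name] for name in names]
num_reviews = [[len(r)] * len(r) for r in ragged]
print(ragged)       # [[5.0, 4.0, 2.0], [3.0, 5.0]]
print(num_reviews)  # [[3, 3, 3], [2, 2]]
```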
Setting the default values used for training in this guide:
```
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8
```
## Feature Configs
Feature calibration and per-feature configurations are set using [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig). Feature configurations include monotonicity constraints, per-feature regularization (see [tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig)), and lattice sizes for lattice models.
Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists. For aggregation models, these features will automatically be considered and properly handled as ragged.
### Compute Quantiles
Although the default setting for `pwl_calibration_input_keypoints` in `tfl.configs.FeatureConfig` is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
```
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
```
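To see what 'nearest' keypoint selection does, here is a plain-Python illustration of the same idea on a small made-up sample (this mirrors the helper above but is not the `np.quantile` implementation):

```python
# Plain-Python illustration of 'nearest' quantile keypoints over unique
# values. num_keypoints must be >= 2 for the even spacing below.
def nearest_quantiles(values, num_keypoints):
    unique = sorted(set(values))
    n = len(unique)
    keypoints = []
    for k in range(num_keypoints):
        q = k / (num_keypoints - 1)   # evenly spaced quantiles in [0, 1]
        idx = round(q * (n - 1))      # pick the 'nearest' sorted value
        keypoints.append(unique[idx])
    return keypoints

print(nearest_quantiles([1, 2, 2, 3, 4, 5, 100], num_keypoints=3))
# => [1, 3, 100]
```

Note how the duplicate 2 is dropped before the quantiles are taken, just as `compute_quantiles` does with `np.unique`.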
### Defining Our Feature Configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
```
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='star_rating',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['star_rating'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='word_count',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['word_count'], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='is_amazon',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='includes_photo',
lattice_size=2,
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='num_helpful',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_helpful'], num_keypoints=5),
# Larger num_helpful indicating more trust in star_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="star_rating", trust_type="trapezoid"),
],
),
tfl.configs.FeatureConfig(
name='num_reviews',
lattice_size=2,
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
flattened_features['num_reviews'], num_keypoints=5),
)
]
```
## Aggregate Function Model
To construct a TFL premade model, first construct a model configuration from [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs). An aggregate function model is constructed using the [tfl.configs.AggregateFunctionConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/AggregateFunctionConfig). It applies piecewise-linear and categorical calibration, followed by a lattice model on each dimension of the ragged input. It then applies an aggregation layer over the output for each dimension. This is then followed by an optional output piecewise-linear calibration.
```
# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
feature_configs=feature_configs,
middle_dimension=MIDDLE_DIM,
middle_lattice_size=MIDDLE_LATTICE_SIZE,
middle_calibration=True,
middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,
middle_monotonicity='increasing',
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=OUTPUT_KEYPOINTS,
output_initialization=np.linspace(
min_label, max_label, num=OUTPUT_KEYPOINTS))
# An AggregateFunction premade model constructed from the given model config.
aggregate_function_model = tfl.premade.AggregateFunction(
aggregate_function_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
aggregate_function_model, show_layer_names=False, rankdir='LR')
```
The output of each Aggregation layer is the averaged output of a calibrated lattice over the ragged inputs. Here is the model used inside the first Aggregation layer:
```
aggregation_layers = [
layer for layer in aggregate_function_model.layers
if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
aggregation_layers[0].model, show_layer_names=False, rankdir='LR')
```
Now, as with any other [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), we compile and fit the model to our data.
```
aggregate_function_model.compile(
loss='mae',
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
```
After training our model, we can evaluate it on our test set.
```
print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
```
## 4 Pillars of OOP
```
# Encapsulation
# a simple class to explain Encapsulation
class Human():
_name = None
_age = 0
def __init__(self, name, age):
self._name = name
self._age = age
def greet(self, name):
print(f'Hello {name}. My name is {self._name}. Nice to meet you!')
# The meaning of Encapsulation here is that the internal data (_name, _age) and the behavior or method (greet()) are encapsulated and organized together inside the Human class.
# They are tied together and have meaning together
human1 = Human('Ryan', 13)
human1.greet('Maia')
human1._name
# Abstraction
# means a class implements some functionality and hides the detailed implementation, exposing only an interface for users of that functionality.
# Users never need to know the internal implementation details to be able to use the class
# For example, a laptop is extremely complicated to build, but it exposes an interface (screen, keyboard, mouse) for users to use it.
# Users never need to know how a laptop is built to be able to use it
# simple class to explain Abstraction
class DeliveryRobot():
_model = None
def __init__(self, model):
self._model = model
def deliver(self, package, address):
# locate package
# pick package
# locate the delivery address
# fly from warehouse to the address
# drop the package
print(f'Package {package} is successfully delivered to address {address}')
# A user of the DeliveryRobot class does NOT need to know the details inside of the robot
# just needs to instantiate a DeliveryRobot object and simply call the "deliver()" method
robot1 = DeliveryRobot('Model 1')
robot1.deliver('TV', "New York")
# Inheritance
# means a class can inherit from other classes to reuse code that has already been written.
# This also helps avoid code duplication
# Simple class to explain Inheritance
class Student(Human):
pass
# We did not define any attributes or methods for the Student class
# But because Student inherits from Human, it already has the attributes and methods of Human
student1 = Student('Maia', 40)
student1.greet('Myra')
# Normally a class inherits from another class and extends its functionality
class Student(Human):
_student_id = None
_grade = None
def __init__(self, name, age, student_id, grade):
super().__init__(name, age)
self._student_id = student_id
self._grade = grade
def study(self, course):
print(f'I am studying {course}')
student2 = Student('Arby', 9, '12345', 4)
student2.greet('Maia')
student2.study('Python')
human2 = Human('ABC', 1)
# Note: Human does not define study(), so this call raises an AttributeError
human2.study('Math')
# Indirect inheritance or multi-level inheritance
# means when a class inherits from a class, it actually also inherits the attributes and methods of all ancestors of that class too
class CollegeStudent(Student):
_college = None
_major = None
def __init__(self, name, age, student_id, grade, college, major):
super().__init__(name, age, student_id, grade)
self._college = college
self._major = major
def work_part_time(self):
print('I am a college student. I can work part-time')
def research(self, topic):
print(f'I study in major {self._major}. I am researching about {topic}')
student4 = CollegeStudent('Ryan', 17, '123', 'Excellent', 'Harvard', 'Computer')
student4.greet('Mom')
student4.study('Computer')
student4.research('Database')
# Multi Inheritance or Multiple Inheritance
# means a class can inherit from multiple classes at the same time; this is different from multi-level inheritance, a.k.a. indirect inheritance
class Driver(Human):
_driver_id = None
def __init__(self, name, age, driver_id):
super().__init__(name, age)
self._driver_id = driver_id
def drive(self):
print(f'I can drive. My driver license is {self._driver_id}. Zoom zoooom.')
# Now, we can create a simple class to explain Multi-inheritance
class Graduate(CollegeStudent, Driver):
_certificate_id = None
def __init__(self, name, certificate_id, driver_id):
# Note: this __init__ does not call super().__init__(), so inherited
# attributes such as _age, _college, and _major keep their class defaults
self._name = name
self._certificate_id = certificate_id
self._driver_id = driver_id
def seek_job(self):
print(f'I am a Graduate. My Certificate is {self._certificate_id}. I can drive too.')
print('I am seeking a good job.')
graduate1 = Graduate('Maia', 'Certificate - 111', 'Driver ID - 123')
graduate1.drive()
graduate1.research('Math')
graduate1.seek_job()
```
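The fourth pillar, Polymorphism, deserves a sketch in the same style. For a self-contained example this uses a small hypothetical class hierarchy rather than extending the Human class above:

```python
# Polymorphism
# means the same method call behaves differently depending on the
# object's class: each subclass supplies its own version of the method.
class Animal:
    def __init__(self, name):
        self._name = name
    def speak(self):
        return f'{self._name} makes a sound'

class Dog(Animal):
    def speak(self):
        return f'{self._name} says Woof'

class Cat(Animal):
    def speak(self):
        return f'{self._name} says Meow'

# One loop, one method name, three different behaviors.
for animal in [Animal('Generic'), Dog('Rex'), Cat('Tom')]:
    print(animal.speak())
```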
# Quick start: Impacts of Falls Lake on streamflow...
Recall from the Excel bootcamps session that the first step was to get the streamflow data into our working environment (Excel then, Python here) and tidy the data up. From there, we'd plot and summarize the data.
The code snippets below hint at the power of Python code; just a few lines can cover a number of clicks, selection, right-clicks, copy/paste,... and we quickly have a plot of stream flow from data pulled remotely.
This dense Python code, while effective, doesn't reveal key nuances of the language or its flexibility. So, we'll dash through this example, and then we'll examine equivalent, less dense code that exposes more about Python and is more "Pythonic" (a term we'll define soon...)
### Preparing our script
* First we import Pandas - a Python data analytics library.
```
#Import the Pandas library
import pandas as pd
```
### Getting & tidying the data
* Next, we make a **request** to the NWIS site hosting the data we want (using the **URL** formed when we queried the data in the Excel exercise). We store the server's **response** as a Pandas **dataframe** named `df`. <font color=gray>(*There's a lot going on here that we'll dig into shortly...*)</font>
```
#Retrieve the data directly into a Pandas data frame named 'df'
url = ('http://waterservices.usgs.gov/nwis/dv/?' +
'format=rdb&' +
'sites=02087500&' +
'startDT=1930-10-01&' +
'endDT=2017-09-30&' +
'statCd=00003&' +
'parameterCd=00060&' +
'siteStatus=all')
df = pd.read_csv(url,
skiprows=30,
sep='\t',
names=['agency_cd','site_no','datetime','MeanFlow_cfs','Confidence'],
dtype={'site_no':'str'},
parse_dates=['datetime'],
index_col='datetime'
)
#Show the first 5 rows of the data frame
df.head()
```
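One reason this code can be hard to reuse is the hard-coded query string. Here is a sketch of a parameterized URL builder using only the standard library (the parameter codes are the same ones used above; `build_nwis_url` is a helper name introduced for illustration):

```python
from urllib.parse import urlencode

def build_nwis_url(site, start, end, stat_cd='00003', param_cd='00060'):
    """Build an NWIS daily-values URL for the given site and date range."""
    params = {
        'format': 'rdb',
        'sites': site,
        'startDT': start,
        'endDT': end,
        'statCd': stat_cd,        # 00003 = daily mean
        'parameterCd': param_cd,  # 00060 = discharge, cfs
        'siteStatus': 'all',
    }
    return 'http://waterservices.usgs.gov/nwis/dv/?' + urlencode(params)

# Same request as above, but now another gage is a one-argument change.
url = build_nwis_url('02087500', '1930-10-01', '2017-09-30')
print(url)
```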
### Plotting the data
Ok, we now have a local copy of the data, let's plot it!
```
#This statement enables plots in our Jupyter notebook
%matplotlib inline
#Plot the data: Start with data up to 1980
ax = df[:'1979-12-31']['MeanFlow_cfs'].plot(title="Neuse River near Clayton",
linewidth=0.5,
figsize=(12,5),
fontsize=18)
#...add the data from 1984 on
df['1984-01-01':]['MeanFlow_cfs'].plot(linewidth=0.5)
#...add some aesthetics
ax.set_ylabel("Mean flow (cfs)",fontsize=18)
ax.set_xlabel("Year",fontsize=18);
```
### Summarizing the data
```
dfSummary = pd.concat((df.describe(),
df[:'1979-12-31'].describe(),
df['1984-01-01':].describe()),
axis='columns')
dfSummary.columns = ('All','1930-1980','1984-2017')
dfSummary
#Group data by confidence column and report count of MeanFlow values in each group
df[['Confidence','MeanFlow_cfs']].groupby(['Confidence']).count()
#Plot as a pie chart
df.groupby(['Confidence']).count()['MeanFlow_cfs'].plot.pie(figsize=(5,5),legend=True);
```
### Boom! That was fast!
But code written this tersely comes with three key drawbacks.
- First, it's a bit hard to learn from; there's a lot going on here that's masked by compound statements.
- Second, while effective, the code is not easily re-used. What if we wanted to do a similar analysis for another gage site? It can be done, but it's not as easy as it could be.
- And third, a key feature of Python code is that it's readable (when written correctly). This code is not as readable as good Python code should be.<br>**Try adding a new code box below and run the statement `import this`...**
*So, let's revisit these procedures, but more slowly and deliberately so that we might actually learn a thing or two...*
## CONVOLUTIONAL NEURAL NETWORK
```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data  # requires TensorFlow 1.x; removed in TF 2
%matplotlib inline
print ("CURRENT TF VERSION IS [%s]" % (tf.__version__))
print ("PACKAGES LOADED")
```
## LOAD MNIST
```
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print ("MNIST ready")
```
## DEFINE MODEL
```
# NETWORK TOPOLOGIES
n_input = 784
n_channel = 64
n_classes = 10
# INPUTS AND OUTPUTS
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# NETWORK PARAMETERS
stddev = 0.1
weights = {
'c1': tf.Variable(tf.random_normal([7, 7, 1, n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([14*14*64, n_classes], stddev=stddev))
}
biases = {
'c1': tf.Variable(tf.random_normal([n_channel], stddev=stddev)),
'd1': tf.Variable(tf.random_normal([n_classes], stddev=stddev))
}
print ("NETWORK READY")
```
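The dense weight shape `[14*14*64, n_classes]` is not arbitrary: the stride-1, SAME-padded convolution keeps the 28x28 spatial size, and the 2x2 max-pool with stride 2 halves it to 14x14 over `n_channel=64` feature maps. A quick sanity check of that arithmetic:

```python
import math

def conv_same_out(size, stride):
    # SAME padding: output size is ceil(input / stride)
    return math.ceil(size / stride)

h = w = 28                                           # MNIST image size
h = conv_same_out(h, 1); w = conv_same_out(w, 1)     # conv, stride 1 -> 28x28
h = conv_same_out(h, 2); w = conv_same_out(w, 2)     # max-pool, stride 2 -> 14x14
flat = h * w * 64                                    # 64 channels
print(flat)                                          # 12544 = 14*14*64
```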
## DEFINE GRAPH
```
# MODEL
def CNN(_x, _w, _b):
# RESHAPE
_x_r = tf.reshape(_x, shape=[-1, 28, 28, 1])
# CONVOLUTION
_conv1 = tf.nn.conv2d(_x_r, _w['c1'], strides=[1, 1, 1, 1], padding='SAME')
# ADD BIAS
_conv2 = tf.nn.bias_add(_conv1, _b['c1'])
# RELU
_conv3 = tf.nn.relu(_conv2)
# MAX-POOL
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# VECTORIZE
_dense = tf.reshape(_pool, [-1, _w['d1'].get_shape().as_list()[0]])
# DENSE
_logit = tf.add(tf.matmul(_dense, _w['d1']), _b['d1'])
_out = {
'x_r': _x_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'logit': _logit
}
return _out
# PREDICTION
cnnout = CNN(x, weights, biases)
# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=y, logits=cnnout['logit']))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(cnnout['logit'], 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))
# INITIALIZER
init = tf.global_variables_initializer()
print ("FUNCTIONS READY")
```
## SAVER
```
savedir = "nets/cnn_mnist_simple/"
saver = tf.train.Saver(max_to_keep=3)
save_step = 4
if not os.path.exists(savedir):
os.makedirs(savedir)
print ("SAVER READY")
```
## RUN
```
# PARAMETERS
training_epochs = 20
batch_size = 100
display_step = 4
# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)
# OPTIMIZE
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# ITERATION
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
feeds = {x: batch_xs, y: batch_ys}
sess.run(optm, feed_dict=feeds)
avg_cost += sess.run(cost, feed_dict=feeds)
avg_cost = avg_cost / total_batch
# DISPLAY
if (epoch+1) % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
feeds = {x: batch_xs, y: batch_ys}
train_acc = sess.run(accr, feed_dict=feeds)
print ("TRAIN ACCURACY: %.3f" % (train_acc))
feeds = {x: mnist.test.images, y: mnist.test.labels}
test_acc = sess.run(accr, feed_dict=feeds)
print ("TEST ACCURACY: %.3f" % (test_acc))
# SAVE
if (epoch+1) % save_step == 0:
savename = savedir+"net-"+str(epoch+1)+".ckpt"
saver.save(sess, savename)
print ("[%s] SAVED." % (savename))
print ("OPTIMIZATION FINISHED")
```
## RESTORE
```
do_restore = 0
if do_restore == 1:
sess = tf.Session()
epoch = 20
savename = savedir+"net-"+str(epoch)+".ckpt"
saver.restore(sess, savename)
print ("NETWORK RESTORED")
else:
print ("DO NOTHING")
```
## LET'S SEE HOW CNN WORKS
```
input_r = sess.run(cnnout['x_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(cnnout['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(cnnout['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(cnnout['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(cnnout['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(cnnout['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(cnnout['logit'], feed_dict={x: trainimg[0:1, :]})
```
## INPUT
```
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# PLOT
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
```
# CONV
```
print ("SIZE OF 'CONV1' IS %s" % (conv1.shape,))
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
```
## CONV + BIAS
```
print ("SIZE OF 'CONV2' IS %s" % (conv2.shape,))
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
```
## CONV + BIAS + RELU
```
print ("SIZE OF 'CONV3' IS %s" % (conv3.shape,))
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
```
## POOL
```
print ("SIZE OF 'POOL' IS %s" % (pool.shape,))
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
```
## DENSE
```
print ("SIZE OF 'DENSE' IS %s" % (dense.shape,))
print ("SIZE OF 'OUT' IS %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("OUT")
plt.colorbar()
plt.show()
```
## CONVOLUTION FILTER
```
wc1 = sess.run(weights['c1'])
print ("SIZE OF 'WC1' IS %s" % (wc1.shape,))
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
```
# Analysis of schemes for the diffusion equation
<div id="diffu:pde1:analysis"></div>
The numerical experiments in the sections [diffu:pde1:FE:experiments](#diffu:pde1:FE:experiments) and [diffu:pde1:theta:experiments](#diffu:pde1:theta:experiments)
reveal that there are some
numerical problems with the Forward Euler and Crank-Nicolson schemes:
sawtooth-like noise is sometimes present in solutions that are,
from a mathematical point of view, expected to be smooth.
This section presents a mathematical analysis that explains the
observed behavior and arrives at criteria for obtaining numerical
solutions that reproduce the qualitative properties of the exact
solutions. In short, we shall explain what is observed in
Figures [diffu:pde1:FE:fig:F=0.5](#diffu:pde1:FE:fig:F=0.5)-[diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10).
<!-- [diffu:pde1:FE:fig:F=0.5](#diffu:pde1:FE:fig:F=0.5), -->
<!-- [diffu:pde1:FE:fig:F=0.25](#diffu:pde1:FE:fig:F=0.25), -->
<!-- [diffu:pde1:FE:fig:F=0.51](#diffu:pde1:FE:fig:F=0.51), -->
<!-- [diffu:pde1:FE:fig:gauss:F=0.5](#diffu:pde1:FE:fig:gauss:F=0.5), -->
<!-- [diffu:pde1:BE:fig:F=0.5](#diffu:pde1:BE:fig:F=0.5), -->
<!-- [diffu:pde1:CN:fig:F=3](#diffu:pde1:CN:fig:F=3), -->
<!-- and -->
<!-- [diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10). -->
## Properties of the solution
<div id="diffu:pde1:analysis:uex"></div>
A particular characteristic of diffusive processes, governed
by an equation like
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:eq"></div>
$$
\begin{equation}
u_t = \dfc u_{xx},
\label{diffu:pde1:eq} \tag{1}
\end{equation}
$$
is that the initial shape $u(x,0)=I(x)$ spreads out in space with
time, along with a decaying amplitude. Three different examples will
illustrate the spreading of $u$ in space and the decay in time.
### Similarity solution
The diffusion equation ([1](#diffu:pde1:eq)) admits solutions
that depend on $\eta = (x-c)/\sqrt{4\dfc t}$ for a given value
of $c$. One particular solution
is
<!-- Equation labels as ordinary links -->
<div id="diffu:pdf1:erf:sol"></div>
$$
\begin{equation}
u(x,t) = a\,\mbox{erf}(\eta) + b,
\label{diffu:pdf1:erf:sol} \tag{2}
\end{equation}
$$
where
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:erf:def"></div>
$$
\begin{equation}
\mbox{erf}(\eta) = \frac{2}{\sqrt{\pi}}\int_0^\eta e^{-\zeta^2}d\zeta,
\label{diffu:analysis:erf:def} \tag{3}
\end{equation}
$$
is the *error function*, and $a$ and $b$ are arbitrary constants.
The error function lies in $(-1,1)$, is odd around $\eta =0$, and
goes relatively quickly to $\pm 1$:
$$
\begin{align*}
\lim_{\eta\rightarrow -\infty}\mbox{erf}(\eta) &=-1,\\
\lim_{\eta\rightarrow \infty}\mbox{erf}(\eta) &=1,\\
\mbox{erf}(\eta) &= -\mbox{erf}(-\eta),\\
\mbox{erf}(0) &=0,\\
\mbox{erf}(2) &=0.99532227,\\
\mbox{erf}(3) &=0.99997791
\thinspace .
\end{align*}
$$
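The quoted values are easy to verify with the standard library's `math.erf`:

```python
import math

print(math.erf(0.0))    # 0.0
print(math.erf(2.0))    # ~0.99532227
print(math.erf(3.0))    # ~0.99997791
print(math.erf(-2.0))   # odd function: -erf(2)
```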
As $t\rightarrow 0$, the solution ([2](#diffu:pdf1:erf:sol)) approaches a step function centered
at $x=c$. For a diffusion problem posed on the unit interval $[0,1]$,
we may choose the step at $x=1/2$ (meaning $c=1/2$), $a=-1/2$, $b=1/2$.
Then
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:step:erf:sol"></div>
$$
\begin{equation}
u(x,t) = \frac{1}{2}\left(1 -
\mbox{erf}\left(\frac{x-\frac{1}{2}}{\sqrt{4\dfc t}}\right)\right) =
\frac{1}{2}\mbox{erfc}\left(\frac{x-\frac{1}{2}}{\sqrt{4\dfc t}}\right),
\label{diffu:analysis:pde1:step:erf:sol} \tag{4}
\end{equation}
$$
where we have introduced the *complementary error function*
$\mbox{erfc}(\eta) = 1-\mbox{erf}(\eta)$.
The solution ([4](#diffu:analysis:pde1:step:erf:sol))
implies the boundary conditions
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:p1:erf:uL"></div>
$$
\begin{equation}
u(0,t) = \frac{1}{2}\left(1 - \mbox{erf}\left(\frac{-1/2}{\sqrt{4\dfc t}}\right)\right),
\label{diffu:analysis:pde1:p1:erf:uL} \tag{5}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="diffu:analysis:pde1:p1:erf:uR"></div>
$$
\begin{equation}
u(1,t) = \frac{1}{2}\left(1 - \mbox{erf}\left(\frac{1/2}{\sqrt{4\dfc t}}\right)\right)
\label{diffu:analysis:pde1:p1:erf:uR} \tag{6}
\thinspace .
\end{equation}
$$
For small enough $t$, $u(0,t)\approx 1$ and $u(1,t)\approx 0$, but as
$t\rightarrow\infty$, $u(x,t)\rightarrow 1/2$ on $[0,1]$.
### Solution for a Gaussian pulse
The standard diffusion equation $u_t = \dfc u_{xx}$ admits a
Gaussian function as solution:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol:Gaussian"></div>
$$
\begin{equation}
u(x,t) = \frac{1}{\sqrt{4\pi\dfc t}} \exp{\left({-\frac{(x-c)^2}{4\dfc t}}\right)}
\label{diffu:pde1:sol:Gaussian} \tag{7}
\thinspace .
\end{equation}
$$
At $t=0$ this is a Dirac delta function, so for computational
purposes one must start to view the solution at some time $t=t_\epsilon>0$.
Replacing $t$ by $t_\epsilon +t$ in ([7](#diffu:pde1:sol:Gaussian))
makes it easy to operate with a (new) $t$ that starts at $t=0$
with an initial condition with a finite width.
The important feature of ([7](#diffu:pde1:sol:Gaussian)) is that
the standard deviation $\sigma$ of a sharp initial Gaussian pulse
increases in time according to $\sigma = \sqrt{2\dfc t}$, making
the pulse diffuse and flatten out.
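A quick finite-difference check (a sketch with $\dfc=1$ and $c=0$) confirms that the Gaussian ([7](#diffu:pde1:sol:Gaussian)) satisfies $u_t=\dfc u_{xx}$ and has standard deviation $\sqrt{2\dfc t}$:

```python
import numpy as np

alpha, c = 1.0, 0.0

def u(x, t):
    # Gaussian solution (7) of u_t = alpha*u_xx
    return np.exp(-(x - c)**2/(4*alpha*t)) / np.sqrt(4*np.pi*alpha*t)

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
t, dt = 0.5, 1e-5

# Centered finite differences for u_t and u_xx
u_t  = (u(x, t + dt) - u(x, t - dt)) / (2*dt)
u_xx = (u(x[2:], t) - 2*u(x[1:-1], t) + u(x[:-2], t)) / dx**2
print(np.max(np.abs(u_t[1:-1] - alpha*u_xx)))   # small residual

# The standard deviation grows like sqrt(2*alpha*t)
sigma = np.sqrt(np.sum(x**2 * u(x, t)) * dx)    # sqrt of the second moment
print(sigma, np.sqrt(2*alpha*t))                # both close to 1.0
```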
<!-- Mention combinations of such kernels to build up a general analytical sol? -->
<!-- Or maybe an exercise for verification. -->
### Solution for a sine component
Also, ([1](#diffu:pde1:eq)) admits a solution of the form
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol1"></div>
$$
\begin{equation}
u(x,t) = Qe^{-at}\sin\left( kx\right)
\label{diffu:pde1:sol1} \tag{8}
\thinspace .
\end{equation}
$$
The parameters $Q$ and $k$ can be freely chosen, while
inserting ([8](#diffu:pde1:sol1)) in ([1](#diffu:pde1:eq)) gives the constraint
$$
a = \dfc k^2
\thinspace .
$$
A very important feature is that the initial shape $I(x)=Q\sin\left( kx\right)$
undergoes a damping $\exp{(-\dfc k^2t)}$, meaning that
rapid oscillations in space, corresponding to large $k$, are very much
faster dampened than slow oscillations in space, corresponding to small
$k$. This feature leads to a smoothing of the initial condition with time.
(In fact, one can use a few steps of the diffusion equation as
a method for removing noise in signal processing.)
To judge how good a numerical method is, we may look at its ability to
smoothen or dampen the solution in the same way as the PDE does.
The following example illustrates the damping properties of
([8](#diffu:pde1:sol1)). We consider the specific problem
$$
\begin{align*}
u_t &= u_{xx},\quad x\in (0,1),\ t\in (0,T],\\
u(0,t) &= u(1,t) = 0,\quad t\in (0,T],\\
u(x,0) & = \sin (\pi x) + 0.1\sin(100\pi x)
\thinspace .
\end{align*}
$$
The initial condition has been chosen such that adding
two solutions like ([8](#diffu:pde1:sol1)) constructs
an analytical solution to the problem:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:sol2"></div>
$$
\begin{equation}
u(x,t) = e^{-\pi^2 t}\sin (\pi x) + 0.1e^{-\pi^2 10^4 t}\sin (100\pi x)
\label{diffu:pde1:sol2} \tag{9}
\thinspace .
\end{equation}
$$
[Figure](#diffu:pde1:fig:damping) illustrates the rapid damping of
rapid oscillations $\sin (100\pi x)$ and the very much slower damping of the
slowly varying $\sin (\pi x)$ term. After about $t=0.5\cdot10^{-4}$ the rapid
oscillations do not have a visible amplitude, while we have to wait
until $t\sim 0.5$ before the amplitude of the long wave $\sin (\pi x)$
becomes very small.
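The two time scales quoted here follow directly from the damping factors in ([9](#diffu:pde1:sol2)): a component damped as $e^{-\dfc k^2t}$ is reduced by a factor of 100 at $t=\ln 100/(\dfc k^2)$:

```python
import numpy as np

# Time for a 1/100 amplitude reduction, t = ln(100)/(alpha*k**2), with alpha=1
t_short = np.log(100) / (np.pi**2 * 1e4)   # the sin(100*pi*x) component
t_long  = np.log(100) / np.pi**2           # the sin(pi*x) component
print(t_short)   # about 0.5e-4
print(t_long)    # about 0.5
```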
<!-- dom:FIGURE: [fig-diffu/diffusion_damping.png, width=800] Evolution of the solution of a diffusion problem: initial condition (upper left), 1/100 reduction of the small waves (upper right), 1/10 reduction of the long wave (lower left), and 1/100 reduction of the long wave (lower right). <div id="diffu:pde1:fig:damping"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:damping"></div>
<p>Evolution of the solution of a diffusion problem: initial condition (upper left), 1/100 reduction of the small waves (upper right), 1/10 reduction of the long wave (lower left), and 1/100 reduction of the long wave (lower right).</p>
<img src="fig-diffu/diffusion_damping.png" width=800>
<!-- end figure -->
<!-- x/sqrt(t) solution, kernel with integral -->
## Analysis of discrete equations
A counterpart to ([8](#diffu:pde1:sol1)) is the complex representation
of the same function:
$$
u(x,t) = Qe^{-at}e^{ikx},
$$
where $i=\sqrt{-1}$ is the imaginary unit.
We can add such functions, often referred to as wave components,
to make a Fourier representation
of a general solution of the diffusion equation:
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:u:Fourier"></div>
$$
\begin{equation}
u(x,t) \approx \sum_{k\in K} b_k e^{-\dfc k^2t}e^{ikx},
\label{diffu:pde1:u:Fourier} \tag{10}
\end{equation}
$$
where $K$ is a set of an infinite number of $k$ values needed to construct
the solution. In practice, however, the series is truncated and
$K$ is a finite set of $k$ values
needed to build a good approximate solution.
Note that ([9](#diffu:pde1:sol2)) is a special case of
([10](#diffu:pde1:u:Fourier)) where $K=\{\pi, 100\pi\}$, $b_{\pi}=1$,
and $b_{100\pi}=0.1$.
The amplitudes $b_k$ of the individual Fourier waves must be determined
from the initial condition. At $t=0$ we have $u\approx\sum_kb_k\exp{(ikx)}$
and find $K$ and $b_k$ such that
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
I(x) \approx \sum_{k\in K} b_k e^{ikx}\thinspace .
\label{_auto1} \tag{11}
\end{equation}
$$
(The relevant formulas for $b_k$ come from Fourier analysis, or
equivalently, a least-squares method for approximating $I(x)$
in a function space with basis $\exp{(ikx)}$.)
Much insight about the behavior of numerical methods can be obtained
by investigating how a wave component $\exp{(-\dfc k^2
t)}\exp{(ikx)}$ is treated by the numerical scheme. It appears that
such wave components are also solutions of the schemes, but the
damping factor $\exp{(-\dfc k^2 t)}$ varies among the schemes. To
ease the forthcoming algebra, we write the damping factor as
$A^n$. The exact amplification factor corresponding to $A$ is $\Aex =
\exp{(-\dfc k^2\Delta t)}$.
## Analysis of the finite difference schemes
<div id="diffu:pde1:analysis:details"></div>
We have seen that a general solution of the diffusion equation
can be built as a linear combination of basic components
$$
e^{-\dfc k^2t}e^{ikx} \thinspace .
$$
A fundamental question is whether such components are also solutions of
the finite difference schemes. This is indeed the case, but the
amplitude $\exp{(-\dfc k^2t)}$ might be modified (which also happens when
solving the ODE counterpart $u'=-\dfc u$).
We therefore look for numerical solutions of the form
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:analysis:uni"></div>
$$
\begin{equation}
u^n_q = A^n e^{ikq\Delta x} = A^ne^{ikx},
\label{diffu:pde1:analysis:uni} \tag{12}
\end{equation}
$$
where the amplification factor $A$
must be determined by inserting the component into an actual scheme.
Note that $A^n$ means $A$ raised to the power of $n$, $n$ being the
index in the time mesh, while the superscript $n$ in $u^n_q$ just
denotes $u$ at time $t_n$.
### Stability
The exact amplification factor is $\Aex=\exp{(-\dfc k^2\Delta t)}$.
We should therefore require $|A| < 1$ to have a decaying numerical
solution as well. If
$-1\leq A<0$, $A^n$ will change sign from time level to
time level, and we get stable, non-physical oscillations in the numerical
solutions that are not present in the exact solution.
### Accuracy
To determine how accurately a finite difference scheme treats one
wave component ([12](#diffu:pde1:analysis:uni)), we see that the basic
deviation from the exact solution is reflected in how well
$A^n$ approximates $\Aex^n$,
or how well $A$ approximates $\Aex$.
We can plot $\Aex$ and the various expressions for $A$, and we can
make Taylor expansions of $A/\Aex$ to see the error more analytically.
<!-- We shall in particular investigate the error $\Aex - A$ in the -->
<!-- amplification factor. -->
### Truncation error
As an alternative to examining the accuracy of the damping of a wave
component, we can perform a general truncation error analysis as
explained in "Truncation error analysis": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). Such results are more general, but
less detailed than what we get from the wave component analysis. The
truncation error can almost always be computed and represents the
error in the numerical model when the exact solution is substituted
into the equations. In particular, the truncation error analysis tells
the order of the scheme, which is of fundamental importance when
verifying codes based on empirical estimation of convergence rates.
## Analysis of the Forward Euler scheme
<div id="diffu:pde1:analysis:FE"></div>
<!-- 2DO: refer to vib and wave -->
The Forward Euler finite difference scheme for $u_t = \dfc u_{xx}$ can
be written as
$$
[D_t^+ u = \dfc D_xD_x u]^n_q\thinspace .
$$
Inserting a wave component ([12](#diffu:pde1:analysis:uni))
in the scheme demands calculating the terms
$$
e^{ikq\Delta x}[D_t^+ A]^n = e^{ikq\Delta x}A^n\frac{A-1}{\Delta t},
$$
and
$$
A^nD_xD_x [e^{ikx}]_q = A^n\left( - e^{ikq\Delta x}\frac{4}{\Delta x^2}
\sin^2\left(\frac{k\Delta x}{2}\right)\right)
\thinspace .
$$
Inserting these terms in the discrete equation and
dividing by $A^n e^{ikq\Delta x}$ leads to
$$
\frac{A-1}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2\left(
\frac{k\Delta x}{2}\right),
$$
and consequently
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
A = 1 -4F\sin^2 p
\label{_auto2} \tag{13}
\end{equation}
$$
where
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
F = \frac{\dfc\Delta t}{\Delta x^2}
\label{_auto3} \tag{14}
\end{equation}
$$
is the *numerical Fourier number*, and $p=k\Delta x/2$.
The complete numerical solution is then
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
u^n_q = \left(1 -4F\sin^2 p\right)^ne^{ikq\Delta x}
\thinspace .
\label{_auto4} \tag{15}
\end{equation}
$$
### Stability
We easily see that $A\leq 1$. However, $A$ can be less than $-1$,
which will lead
to growth of a numerical wave component. The criterion $A\geq -1$ implies
$$
4F\sin^2 p\leq 2
\thinspace .
$$
The worst case is when $\sin^2 p=1$, so a sufficient criterion for
stability is
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
F\leq {\frac{1}{2}},
\label{_auto5} \tag{16}
\end{equation}
$$
or expressed as a condition on $\Delta t$:
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\Delta t\leq \frac{\Delta x^2}{2\dfc}\thinspace .
\label{_auto6} \tag{17}
\end{equation}
$$
Note that halving the spatial mesh size, $\Delta x \rightarrow {\frac{1}{2}}
\Delta x$, requires $\Delta t$ to be reduced by a factor of $1/4$.
The method hence becomes very expensive for fine spatial meshes.
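The criterion is easy to verify experimentally. A minimal sketch (assuming $\dfc=1$, so $F=\Delta t/\Delta x^2$) runs the Forward Euler scheme just below and just above the limit, with a little noise added to excite the shortest waves:

```python
import numpy as np

def forward_euler(F, Nx=21, steps=400, seed=1):
    """Solve u_t = u_xx on [0,1] with u=0 at the ends; return final max|u|."""
    x = np.linspace(0, 1, Nx)
    rng = np.random.default_rng(seed)
    u = np.sin(np.pi*x) + 1e-3*rng.standard_normal(Nx)  # noise excites short waves
    u[0] = u[-1] = 0.0                                  # Dirichlet boundaries
    for _ in range(steps):
        u[1:-1] = u[1:-1] + F*(u[2:] - 2*u[1:-1] + u[:-2])
    return np.max(np.abs(u))

print(forward_euler(F=0.45))   # stable: the solution decays
print(forward_euler(F=0.55))   # unstable: sawtooth noise blows up
```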
<!-- 2DO: verification based on exact solutions -->
### Accuracy
Since $A$ is expressed in terms of $F$ and the parameter we now call
$p=k\Delta x/2$, we should also express $\Aex$ by $F$ and $p$. The exponent
in $\Aex$ is $-\dfc k^2\Delta t$, which equals $-F k^2\Delta x^2=-F4p^2$.
Consequently,
$$
\Aex = \exp{(-\dfc k^2\Delta t)} = \exp{(-4Fp^2)}
\thinspace .
$$
All our $A$ expressions as well as $\Aex$ are now functions of the two
dimensionless parameters $F$ and $p$.
Computing
the Taylor series expansion of $A/\Aex$ in terms of $F$
can easily be done with aid of `sympy`:
```
def A_exact(F, p):
return exp(-4*F*p**2)
def A_FE(F, p):
return 1 - 4*F*sin(p)**2
from sympy import *
F, p = symbols('F p')
A_err_FE = A_FE(F, p)/A_exact(F, p)
print(A_err_FE.series(F, 0, 6))
```
The result is
$$
\frac{A}{\Aex} = 1 - 4 F \sin^{2}p + 4F p^{2} - 16F^{2} p^{2} \sin^{2}p + 8 F^{2} p^{4} + \cdots
$$
Recalling that $F=\dfc\Delta t/\Delta x^2$, $p=k\Delta x/2$, and that
$\sin^2p\leq 1$, we
realize that the dominating terms in $A/\Aex$ are at most
$$
1 - 4\dfc \frac{\Delta t}{\Delta x^2} +
\dfc\Delta t - 4\dfc^2\Delta t^2
+ \dfc^2 \Delta t^2\Delta x^2 + \cdots
\thinspace .
$$
### Truncation error
We follow the theory explained in
"Truncation error analysis": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). The recipe is to set up the
scheme in operator notation and use formulas from
"Overview of leading-order error terms in finite difference formulas": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc) to derive an expression for
the residual. The details are documented in
"Linear diffusion equation in 1D": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc). We end up with a truncation error
$$
R^n_i = \Oof{\Delta t} + \Oof{\Delta x^2}\thinspace .
$$
Although this is not the true error $\uex(x_i,t_n) - u^n_i$, it indicates
that the true error is of the form
$$
E = C_t\Delta t + C_x\Delta x^2
$$
for two unknown constants $C_t$ and $C_x$.
## Analysis of the Backward Euler scheme
<div id="diffu:pde1:analysis:BE"></div>
Discretizing $u_t = \dfc u_{xx}$ by a Backward Euler scheme,
$$
[D_t^- u = \dfc D_xD_x u]^n_q,
$$
and inserting a wave component ([12](#diffu:pde1:analysis:uni)),
leads to calculations similar to those arising from the Forward Euler scheme,
but since
$$
e^{ikq\Delta x}[D_t^- A]^n = A^ne^{ikq\Delta x}\frac{1 - A^{-1}}{\Delta t},
$$
we get
$$
\frac{1-A^{-1}}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2\left(
\frac{k\Delta x}{2}\right),
$$
and then
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:analysis:BE:A"></div>
$$
\begin{equation}
A = \left(1 + 4F\sin^2p\right)^{-1}
\label{diffu:pde1:analysis:BE:A} \tag{18}
\thinspace .
\end{equation}
$$
The complete numerical solution can be written
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
u^n_q = \left(1 + 4F\sin^2 p\right)^{-n}
e^{ikq\Delta x} \thinspace .
\label{_auto7} \tag{19}
\end{equation}
$$
### Stability
We see from ([18](#diffu:pde1:analysis:BE:A)) that $0<A<1$, which means
that all numerical wave components are stable and non-oscillatory
for any $\Delta t >0$.
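This can be confirmed numerically: $A$ from ([18](#diffu:pde1:analysis:BE:A)) stays in $(0,1]$ even for very large Fourier numbers:

```python
import numpy as np

def A_BE(F, p):
    # Backward Euler amplification factor (18)
    return 1/(1 + 4*F*np.sin(p)**2)

p = np.linspace(0, np.pi/2, 101)
for F in (0.1, 1.0, 100.0):
    print(F, A_BE(F, p).min(), A_BE(F, p).max())   # always in (0, 1]
```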
### Truncation error
The derivation of the truncation error for the Backward Euler scheme is almost
identical to that for the Forward Euler scheme. We end up with
$$
R^n_i = \Oof{\Delta t} + \Oof{\Delta x^2}\thinspace .
$$
## Analysis of the Crank-Nicolson scheme
<div id="diffu:pde1:analysis:CN"></div>
The Crank-Nicolson scheme can be written as
$$
[D_t u = \dfc D_xD_x \overline{u}^x]^{n+\frac{1}{2}}_q,
$$
or
$$
[D_t u]^{n+\frac{1}{2}}_q = \frac{1}{2}\dfc\left( [D_xD_x u]^{n}_q +
[D_xD_x u]^{n+1}_q\right)
\thinspace .
$$
Inserting ([12](#diffu:pde1:analysis:uni)) in the time derivative approximation
leads to
$$
[D_t A^n e^{ikq\Delta x}]^{n+\frac{1}{2}} = A^{n+\frac{1}{2}} e^{ikq\Delta x}\frac{A^{\frac{1}{2}}-A^{-\frac{1}{2}}}{\Delta t} = A^ne^{ikq\Delta x}\frac{A-1}{\Delta t}
\thinspace .
$$
Inserting ([12](#diffu:pde1:analysis:uni)) in the other terms
and dividing by
$A^ne^{ikq\Delta x}$ gives the relation
$$
\frac{A-1}{\Delta t} = -\frac{1}{2}\dfc\frac{4}{\Delta x^2}
\sin^2\left(\frac{k\Delta x}{2}\right)
(1 + A),
$$
and after some more algebra,
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
A = \frac{ 1 - 2F\sin^2p}{1 + 2F\sin^2p}
\thinspace .
\label{_auto8} \tag{20}
\end{equation}
$$
The exact numerical solution is hence
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
u^n_q = \left(\frac{ 1 - 2F\sin^2p}{1 + 2F\sin^2p}\right)^ne^{ikq\Delta x}
\thinspace .
\label{_auto9} \tag{21}
\end{equation}
$$
### Stability
The criteria $A>-1$ and $A<1$ are fulfilled for any $\Delta t >0$.
Therefore, the solution cannot grow, but it will oscillate if
$1-2F\sin^2 p < 0$. To avoid such non-physical oscillations, we must demand
$F\leq\frac{1}{2}$.
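Plugging numbers into ([20](#_auto8)) illustrates the limit: for $F>1/2$ the shortest resolvable wave ($p=\pi/2$) gets a negative amplification factor and flips sign every time step:

```python
import numpy as np

def A_CN(F, p):
    # Crank-Nicolson amplification factor (20)
    s = np.sin(p)**2
    return (1 - 2*F*s)/(1 + 2*F*s)

print(A_CN(3.0, np.pi/2))    # negative: the wave flips sign each step
print(A_CN(0.5, np.pi/2))    # 0 at the limit F = 1/2
print(A_CN(0.25, np.pi/2))   # positive: no oscillations
```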
### Truncation error
The truncation error is derived in
"Linear diffusion equation in 1D": ""
[[Langtangen_deqbook_trunc]](#Langtangen_deqbook_trunc):
$$
R^{n+\frac{1}{2}}_i = \Oof{\Delta x^2} + \Oof{\Delta t^2}\thinspace .
$$
## Analysis of the Leapfrog scheme
<div id="diffu:pde1:analysis:leapfrog"></div>
An attractive feature of the Forward Euler scheme is the explicit
time stepping and no need for solving linear systems. However, the
accuracy in time is only $\Oof{\Delta t}$. We can get an explicit
*second-order* scheme in time by using the Leapfrog method:
$$
[D_{2t} u = \dfc D_xD_x u + f]^n_q\thinspace .
$$
Written out,
$$
u_q^{n+1} = u_q^{n-1} + \frac{2\dfc\Delta t}{\Delta x^2}
(u^{n}_{q+1} - 2u^n_q + u^n_{q-1}) + 2\Delta t\, f(x_q,t_n)\thinspace .
$$
We need some formula for the first step, $u^1_q$, but for that we can use
a Forward Euler step.
Unfortunately, the Leapfrog scheme is always unstable for the
diffusion equation. To see this, we insert a wave component $A^ne^{ikx}$
and get
$$
\frac{A - A^{-1}}{\Delta t} = -\dfc \frac{4}{\Delta x^2}\sin^2 p,
$$
or
$$
A^2 + 4F \sin^2 p\, A - 1 = 0,
$$
which has roots
$$
A = -2F\sin^2 p \pm \sqrt{4F^2\sin^4 p + 1}\thinspace .
$$
The root with the minus sign has $|A|>1$ whenever $\sin p\neq 0$, so the
amplitude of such wave components always grows, which is not in
accordance with the physics of the problem.
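A numerical check of the two roots shows the source of the instability: the negative root always has magnitude larger than one:

```python
import numpy as np

def leapfrog_roots(F, p):
    # Roots of A**2 + 4*F*sin(p)**2 * A - 1 = 0
    b = 2*F*np.sin(p)**2
    r = np.sqrt(b**2 + 1)
    return -b + r, -b - r

for F in (0.1, 0.5, 2.0):
    A_plus, A_minus = leapfrog_roots(F, np.pi/4)
    print(F, abs(A_plus), abs(A_minus))   # the second magnitude exceeds 1
```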
However, for a PDE with a first-order derivative in space, instead of
a second-order one, the Leapfrog scheme performs very well.
## Summary of accuracy of amplification factors
We can plot the various amplification factors against $p=k\Delta x/2$
for different choices of the $F$ parameter. Figures
[diffu:pde1:fig:A:err:C20](#diffu:pde1:fig:A:err:C20), [diffu:pde1:fig:A:err:C0.5](#diffu:pde1:fig:A:err:C0.5), and
[diffu:pde1:fig:A:err:C0.1](#diffu:pde1:fig:A:err:C0.1) show how long and small waves are
damped by the various schemes compared to the exact damping. As long
as all schemes are stable, the amplification factor is positive,
except for Crank-Nicolson when $F>0.5$.
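Without the figures at hand, the same comparison can be made numerically; a sketch evaluating the three amplification factors against the exact $\Aex=e^{-4Fp^2}$ at $p=\pi/4$ (four mesh points per wave length):

```python
import numpy as np

def A_exact(F, p): return np.exp(-4*F*p**2)
def A_FE(F, p):    return 1 - 4*F*np.sin(p)**2
def A_BE(F, p):    return 1/(1 + 4*F*np.sin(p)**2)
def A_CN(F, p):
    s = np.sin(p)**2
    return (1 - 2*F*s)/(1 + 2*F*s)

p = np.pi/4   # four mesh points per wave length
for F in (20, 2, 0.5, 0.25):
    print('F=%5.2f exact=%10.3e FE=%8.3f BE=%8.3f CN=%8.3f'
          % (F, A_exact(F, p), A_FE(F, p), A_BE(F, p), A_CN(F, p)))
```

For $F=20$ the exact factor is essentially zero while the Crank-Nicolson factor is close to $-1$, reproducing the sign-flipping behavior discussed above.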
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F20_F2.png, width=800] Amplification factors for large time steps. <div id="diffu:pde1:fig:A:err:C20"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C20"></div>
<p>Amplification factors for large time steps.</p>
<img src="fig-diffu/diffusion_A_F20_F2.png" width=800>
<!-- end figure -->
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F05_F025.png, width=800] Amplification factors for time steps around the Forward Euler stability limit. <div id="diffu:pde1:fig:A:err:C0.5"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C0.5"></div>
<p>Amplification factors for time steps around the Forward Euler stability limit.</p>
<img src="fig-diffu/diffusion_A_F05_F025.png" width=800>
<!-- end figure -->
<!-- dom:FIGURE: [fig-diffu/diffusion_A_F01_F001.png, width=800] Amplification factors for small time steps. <div id="diffu:pde1:fig:A:err:C0.1"></div> -->
<!-- begin figure -->
<div id="diffu:pde1:fig:A:err:C0.1"></div>
<p>Amplification factors for small time steps.</p>
<img src="fig-diffu/diffusion_A_F01_F001.png" width=800>
<!-- end figure -->
The effect of negative amplification factors is that $A^n$ changes
sign from one time level to the next, thereby giving rise to
oscillations in time in an animation of the solution. We see from
[Figure](#diffu:pde1:fig:A:err:C20) that for $F=20$, waves with
$p\geq \pi/4$ undergo a damping close to $-1$, which means that the
amplitude does not decay and that the wave component jumps up and down
(flips amplitude) in time. For $F=2$ the amplitude is only reduced by a
factor of about 0.5 from one time level to the next, which is a much weaker
damping than in the exact solution. Short waves will therefore fail to be effectively
dampened. These waves will manifest themselves as high frequency
oscillatory noise in the solution.
A value $p=\pi/4$ corresponds to four mesh points per wave length of
$e^{ikx}$, while $p=\pi/2$ implies only two points per wave length,
which is the smallest number of points we can have to represent the
wave on the mesh.
To demonstrate the oscillatory behavior of the Crank-Nicolson scheme,
we choose an initial condition that leads to short waves with
significant amplitude. A discontinuous $I(x)$ will in particular serve
this purpose: Figures [diffu:pde1:CN:fig:F=3](#diffu:pde1:CN:fig:F=3) and
[diffu:pde1:CN:fig:F=10](#diffu:pde1:CN:fig:F=10) correspond to $F=3$ and $F=10$,
respectively, and we see how short waves pollute the overall solution.
## Analysis of the 2D diffusion equation
<div id="diffu:2D:analysis"></div>
Diffusion in several dimensions is treated later, but it is appropriate to
include the analysis here. We first consider the 2D diffusion equation
$$
u_{t} = \dfc(u_{xx} + u_{yy}),
$$
which has Fourier component solutions of the form
$$
u(x,y,t) = Ae^{-\dfc k^2t}e^{i(k_x x + k_yy)},
$$
and the schemes have discrete versions of this Fourier component:
$$
u^{n}_{q,r} = A\xi^{n}e^{i(k_x q\Delta x + k_y r\Delta y)}\thinspace .
$$
### The Forward Euler scheme
For the Forward Euler discretization,
$$
[D_t^+u = \dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
we get
$$
\frac{\xi - 1}{\Delta t}
=
-\dfc\frac{4}{\Delta x^2}\sin^2\left(\frac{k_x\Delta x}{2}\right) -
\dfc\frac{4}{\Delta y^2}\sin^2\left(\frac{k_y\Delta y}{2}\right)\thinspace .
$$
Introducing
$$
p_x = \frac{k_x\Delta x}{2},\quad p_y = \frac{k_y\Delta y}{2},
$$
we can write the equation for $\xi$ more compactly as
$$
\frac{\xi - 1}{\Delta t}
=
-\dfc\frac{4}{\Delta x^2}\sin^2 p_x -
\dfc\frac{4}{\Delta y^2}\sin^2 p_y,
$$
and solve for $\xi$:
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:xi"></div>
$$
\begin{equation}
\xi = 1 - 4F_x\sin^2 p_x - 4F_y\sin^2 p_y\thinspace .
\label{diffu:2D:analysis:xi} \tag{22}
\end{equation}
$$
The complete numerical solution for a wave component is
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:FE:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A(1 - 4F_x\sin^2 p_x - 4F_y\sin^2 p_y)^n
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:FE:numexact} \tag{23}
\end{equation}
$$
For stability we demand $-1\leq\xi\leq 1$, and $-1\leq\xi$ is the
critical limit, since clearly $\xi \leq 1$, and the worst case
happens when the sines are at their maximum. The stability criterion
becomes
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:FE:stab"></div>
$$
\begin{equation}
F_x + F_y \leq \frac{1}{2}\thinspace .
\label{diffu:2D:analysis:FE:stab} \tag{24}
\end{equation}
$$
For the special, yet common, case $\Delta x=\Delta y=h$, the
stability criterion can be written as
$$
\Delta t \leq \frac{h^2}{2d\dfc},
$$
where $d$ is the number of space dimensions: $d=1,2,3$.
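The stability criterion (24) can be checked numerically. The short sketch below (not part of the text's own programs) evaluates the amplification factor (22) at the worst-case shortest wave, $p_x=p_y=\pi/2$, where $\xi\geq -1$ reduces to $F_x+F_y\leq\frac{1}{2}$:

```python
import numpy as np

def xi_forward_euler(Fx, Fy, px, py):
    # Amplification factor (22): xi = 1 - 4*Fx*sin^2(px) - 4*Fy*sin^2(py)
    return 1 - 4*Fx*np.sin(px)**2 - 4*Fy*np.sin(py)**2

def is_stable(Fx, Fy):
    # The worst case is the shortest wave, px = py = pi/2, where the
    # requirement xi >= -1 reduces to Fx + Fy <= 1/2
    return xi_forward_euler(Fx, Fy, np.pi/2, np.pi/2) >= -1
```

With $F_x=F_y=\frac{1}{4}$ we sit exactly on the limit ($\xi=-1$), while any slightly larger mesh Fourier numbers give $\xi<-1$ and growing oscillations.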
### The Backward Euler scheme
The Backward Euler method,
$$
[D_t^-u = \dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
results in
$$
1 - \xi^{-1} = - 4F_x \sin^2 p_x - 4F_y \sin^2 p_y,
$$
and
$$
\xi = (1 + 4F_x \sin^2 p_x + 4F_y \sin^2 p_y)^{-1},
$$
which is always in $(0,1]$. The solution for a wave component becomes
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:BN:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A(1 + 4F_x\sin^2 p_x + 4F_y\sin^2 p_y)^{-n}
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:BN:numexact} \tag{25}
\end{equation}
$$
### The Crank-Nicolson scheme
With a Crank-Nicolson discretization,
$$
[D_tu]^{n+\frac{1}{2}}_{q,r} =
\frac{1}{2} [\dfc(D_xD_x u + D_yD_y u)]_{q,r}^{n+1} +
\frac{1}{2} [\dfc(D_xD_x u + D_yD_y u)]_{q,r}^n,
$$
we have, after some algebra,
$$
\xi = \frac{1 - 2(F_x\sin^2 p_x + F_y\sin^2 p_y)}{1 + 2(F_x\sin^2 p_x + F_y\sin^2 p_y)}\thinspace .
$$
The fraction on the right-hand side is always less than 1, so stability
in the sense of non-growing wave components is guaranteed for all
physical and numerical parameters. However,
the fraction can become negative and result in non-physical
oscillations. This phenomenon happens when
$$
F_x\sin^2 p_x + F_y\sin^2 p_y > \frac{1}{2}\thinspace .
$$
A criterion against non-physical oscillations is therefore
$$
F_x + F_y \leq \frac{1}{2},
$$
which is the same limit as the stability criterion for the Forward Euler
scheme.
The exact discrete solution is
<!-- Equation labels as ordinary links -->
<div id="diffu:2D:analysis:CN:numexact"></div>
$$
\begin{equation}
u^{n}_{q,r} = A
\left(
\frac{1 - 2(F_x\sin^2 p_x + F_y\sin^2 p_y)}{1 + 2(F_x\sin^2 p_x + F_y\sin^2 p_y)}
\right)^n
e^{i(k_xq\Delta x + k_yr\Delta y)}\thinspace .
\label{diffu:2D:analysis:CN:numexact} \tag{26}
\end{equation}
$$
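A quick numerical comparison of the Backward Euler and Crank-Nicolson amplification factors (a sketch for illustration, not from the text's code) makes the difference concrete: the Backward Euler factor stays in $(0,1]$ for any parameters, while the Crank-Nicolson factor is bounded by 1 in magnitude but turns negative for large mesh Fourier numbers:

```python
import numpy as np

p = np.pi/2  # shortest wave representable on the mesh

def xi_BE(Fx, Fy, px=p, py=p):
    # Backward Euler amplification factor: always in (0, 1]
    return 1.0/(1 + 4*Fx*np.sin(px)**2 + 4*Fy*np.sin(py)**2)

def xi_CN(Fx, Fy, px=p, py=p):
    # Crank-Nicolson amplification factor: |xi| <= 1 for all parameters,
    # but xi < 0 (oscillations) when Fx*sin^2(px) + Fy*sin^2(py) > 1/2
    s = 2*(Fx*np.sin(px)**2 + Fy*np.sin(py)**2)
    return (1 - s)/(1 + s)
```

For example, $F_x=F_y=1$ gives $\xi_{\mbox{BE}}=1/9$ (smooth decay) but $\xi_{\mbox{CN}}=-3/5$ (decaying, yet sign-flipping, oscillations).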
## Explanation of numerical artifacts
The behavior of the solution generated by Forward Euler discretization in time (and centered
differences in space) is summarized at the end of
the section [diffu:pde1:FE:experiments](#diffu:pde1:FE:experiments). Can we, from the analysis
above, explain the behavior?
We may start by looking at [Figure](#diffu:pde1:FE:fig:F=0.51)
where $F=0.51$. The figure shows that the solution is unstable and
grows in time. The stability limit for such growth is $F=0.5$ and
since the $F$ in this simulation is slightly larger, growth is
unavoidable.
[Figure](#diffu:pde1:FE:fig:F=0.5) has unexpected features:
we would expect the solution of the diffusion equation to be
smooth, but the graphs in [Figure](#diffu:pde1:FE:fig:F=0.5)
contain non-smooth noise. Turning to [Figure](#diffu:pde1:FE:fig:gauss:F=0.5), which has a quite similar
initial condition, we see that the curves are indeed smooth.
The problem with the results in [Figure](#diffu:pde1:FE:fig:F=0.5)
is that the initial condition is discontinuous. To represent it, we
need a significant amplitude on the shortest waves in the mesh.
However, for $F=0.5$, the shortest wave ($p=\pi/2$) gives
the amplitude in the numerical solution as $(1-4F)^n$, which oscillates
between negative and positive values at subsequent time levels
for $F>\frac{1}{4}$. Since the shortest waves have visible amplitudes in
the solution profile, the oscillations become visible. The
smooth initial condition in [Figure](#diffu:pde1:FE:fig:gauss:F=0.5),
on the other hand, leads to very small amplitudes of the shortest waves.
That these waves then oscillate in a non-physical way for
$F=0.5$ is not a visible effect. The oscillations
in time in the amplitude $(1-4F)^n$ disappear for $F\leq\frac{1}{4}$,
and that is why even the discontinuous initial condition leads to
smooth solutions in [Figure](#diffu:pde1:FE:fig:F=0.25), where
$F=\frac{1}{4}$.
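The sign-flipping of the shortest-wave amplitude $(1-4F)^n$ described above is easy to tabulate (a small illustration, not part of the text's programs): for $F=\frac{1}{2}$ the factor is $(-1)^n$, oscillating with undiminished magnitude, while for $F=\frac{1}{4}$ it is exactly zero, so the shortest wave is removed after one step.

```python
def amplitude(F, n):
    # Shortest-wave (p = pi/2) amplitude factor of the Forward Euler
    # scheme after n time steps: (1 - 4F)^n
    return (1 - 4*F)**n

# F = 0.5: factor alternates -1, 1, -1, ... (visible sawtooth noise)
# F = 0.25: factor is 0, the shortest wave disappears immediately
```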
Turning the attention to the Backward Euler scheme and the experiments
in [Figure](#diffu:pde1:BE:fig:F=0.5), we see that even the discontinuous
initial condition gives smooth solutions for $F=0.5$ (and in fact all other
$F$ values). From the exact expression of the numerical amplitude,
$(1 + 4F\sin^2p)^{-1}$, we realize that this factor can never flip between
positive and negative values, and no instabilities can occur. The conclusion
is that the Backward Euler scheme always produces smooth solutions.
Also, the Backward Euler scheme guarantees that the solution cannot grow
in time (unless we add a source term to the PDE, but that is meant to
represent a physically relevant growth).
Finally, we have some small, strange artifacts when simulating the
development of the initial plug profile with the Crank-Nicolson scheme,
see [Figure](#diffu:pde1:CN:fig:F=10), where $F=3$.
The Crank-Nicolson scheme cannot give growing amplitudes, but it may
give oscillating amplitudes in time. The critical factor is
$1 - 2F\sin^2p$, which for the shortest waves ($p=\pi/2$) indicates
a stability limit $F=0.5$. With the discontinuous initial condition, we have
enough amplitude on the shortest waves so their wrong behavior is visible,
and this is what we see as small instabilities in
[Figure](#diffu:pde1:CN:fig:F=10). The only remedy is to lower the $F$ value.
# Exercises
<!-- --- begin exercise --- -->
## Exercise 1: Explore symmetry in a 1D problem
<div id="diffu:exer:1D:gaussian:symmetric"></div>
This exercise simulates the exact solution ([7](#diffu:pde1:sol:Gaussian)).
Suppose for simplicity that $c=0$.
**a)**
Formulate an initial-boundary value problem that has
([7](#diffu:pde1:sol:Gaussian)) as solution in the domain $[-L,L]$.
Use the exact solution ([7](#diffu:pde1:sol:Gaussian)) as Dirichlet
condition at the boundaries.
Simulate the diffusion of the Gaussian peak. Observe that the
solution is symmetric around $x=0$.
**b)**
Show from ([7](#diffu:pde1:sol:Gaussian)) that $u_x(c,t)=0$.
Since the solution is symmetric around $x=c=0$, we can solve the
numerical problem in half of the domain, using a *symmetry boundary condition*
$u_x=0$ at $x=0$. Set up the
initial-boundary value problem in this case. Simulate the
diffusion problem in $[0,L]$ and compare with the solution in a).
<!-- --- begin solution of exercise --- -->
**Solution.**
$$
\begin{align*}
u_t &= \dfc u_{xx},\\
u_x(0,t) &= 0,\\
u(L,t)& =\frac{1}{\sqrt{4\pi\dfc t}} \exp{\left({-\frac{L^2}{4\dfc t}}\right)}\thinspace .
\end{align*}
$$
<!-- --- end solution of exercise --- -->
Filename: `diffu_symmetric_gaussian`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 2: Investigate approximation errors from a $u_x=0$ boundary condition
<div id="diffu:exer:1D:ux:onesided"></div>
We consider the problem solved in [Exercise 1: Explore symmetry in a 1D problem](#diffu:exer:1D:gaussian:symmetric)
part b). The boundary condition $u_x(0,t)=0$ can be implemented in
two ways: 1) by a standard symmetric finite difference $[D_{2x}u]_i^n=0$,
or 2) by a one-sided difference $[D_x^+u]^n_i=0$.
Investigate the effect of these two conditions on the
convergence rate in space.
<!-- --- begin hint in exercise --- -->
**Hint.**
If you use a Forward Euler scheme, choose a discretization parameter
$h=\Delta t = \Delta x^2$ and assume the error goes like $E\sim h^r$.
The error in the scheme is $\Oof{\Delta t,\Delta x^2}$ so one should
expect that the estimated $r$ approaches 1. The question is if
a one-sided difference approximation to $u_x(0,t)=0$ destroys this
convergence rate.
<!-- --- end hint in exercise --- -->
Filename: `diffu_onesided_fd`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 3: Experiment with open boundary conditions in 1D
<div id="diffu:exer:1D:openBC"></div>
We address diffusion of a Gaussian function
as in [Exercise 1: Explore symmetry in a 1D problem](#diffu:exer:1D:gaussian:symmetric),
in the domain $[0,L]$,
but now we shall explore different types of boundary
conditions on $x=L$. In real-life problems we do not know
the exact solution on $x=L$ and must use something simpler.
**a)**
Imagine that we want to solve the problem numerically on
$[0,L]$, with a symmetry boundary condition $u_x=0$ at $x=0$,
but we do not know the exact solution and therefore cannot
assign a correct Dirichlet condition at $x=L$.
One idea is to simply set $u(L,t)=0$ since this will be an
accurate approximation before the diffused pulse reaches $x=L$
and even thereafter it might be a satisfactory condition if the exact $u$ has
a small value.
Let $\uex$ be the exact solution and let $u$ be the solution
of $u_t=\dfc u_{xx}$ with an initial Gaussian pulse and
the boundary conditions $u_x(0,t)=u(L,t)=0$. Derive a diffusion
problem for the error $e=\uex - u$. Solve this problem
numerically using an exact Dirichlet condition at $x=L$.
Animate the evolution of the error and make a curve plot of
the error measure
$$
E(t)=\sqrt{\frac{\int_0^L e^2dx}{\int_0^L udx}}\thinspace .
$$
Is this a suitable error measure for the present problem?
**b)**
Instead of using $u(L,t)=0$ as approximate boundary condition for
letting the diffused Gaussian pulse move out of our finite domain,
one may try $u_x(L,t)=0$ since the solution for large $t$ is
quite flat. Argue that this condition gives a completely wrong
asymptotic solution as $t\rightarrow\infty$. To do this,
integrate the diffusion equation from $0$ to $L$, integrate
$u_{xx}$ by parts (or use Gauss' divergence theorem in 1D) to
arrive at the important property
$$
\frac{d}{dt}\int_{0}^L u(x,t)dx = 0,
$$
implying that $\int_0^Ludx$ must be constant in time, and therefore
$$
\int_{0}^L u(x,t)dx = \int_{0}^LI(x)dx\thinspace .
$$
The integral of the initial pulse is 1.
**c)**
Another idea for an artificial boundary condition at $x=L$
is to use a cooling law
<!-- Equation labels as ordinary links -->
<div id="diffu:pde1:Gaussian:xL:cooling"></div>
$$
\begin{equation}
-\dfc u_x = q(u - u_S),
\label{diffu:pde1:Gaussian:xL:cooling} \tag{27}
\end{equation}
$$
where $q$ is an unknown heat transfer coefficient and $u_S$ is
the surrounding temperature in the medium outside of $[0,L]$.
(Note that arguing that $u_S$ is approximately $u(L,t)$ gives
the $u_x=0$ condition from the previous subexercise that is
qualitatively wrong for large $t$.)
Develop a diffusion problem for the error in the solution using
([27](#diffu:pde1:Gaussian:xL:cooling)) as boundary condition.
Assume one can take $u_S=0$ "outside the domain" since
$\uex\rightarrow 0$ as $x\rightarrow\infty$.
Find a function $q=q(t)$ such that the exact solution
obeys the condition ([27](#diffu:pde1:Gaussian:xL:cooling)).
Test some constant values of $q$ and animate how the corresponding
error function behaves. Also compute $E(t)$ curves as defined above.
Filename: `diffu_open_BC`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 4: Simulate a diffused Gaussian peak in 2D/3D
**a)**
Generalize ([7](#diffu:pde1:sol:Gaussian)) to multiple dimensions by
assuming that one-dimensional solutions can be multiplied to solve
$u_t = \dfc\nabla^2 u$. Set $c=0$ such that the peak of
the Gaussian is at the origin.
**b)**
One can from the exact solution show
that $u_x=0$ on $x=0$, $u_y=0$ on $y=0$, and $u_z=0$ on $z=0$.
The approximately correct condition $u=0$ can be set
on the remaining boundaries (say $x=L$, $y=L$, $z=L$), cf. [Exercise 3: Experiment with open boundary conditions in 1D](#diffu:exer:1D:openBC).
Simulate a 2D case and make an animation of the diffused Gaussian peak.
**c)**
The formulation in b) makes use of symmetry of the solution such that we
can solve the problem in the first quadrant (2D) or octant (3D) only.
To check that the symmetry assumption is correct, formulate the problem
without symmetry in a domain $[-L,L]\times [-L,L]$ in 2D. Use $u=0$ as
approximately correct boundary condition. Simulate the same case as
in b), but in a four times as large domain. Make an animation and compare
it with the one in b).
Filename: `diffu_symmetric_gaussian_2D`.
<!-- --- end exercise --- -->
<!-- --- begin exercise --- -->
## Exercise 5: Examine stability of a diffusion model with a source term
<div id="diffu:exer:uterm"></div>
Consider a diffusion equation with a linear $u$ term:
$$
u_t = \dfc u_{xx} + \beta u\thinspace .
$$
**a)**
Derive in detail the Forward Euler, Backward Euler,
and Crank-Nicolson schemes for this type of diffusion model.
Thereafter, formulate a $\theta$-rule to summarize the three schemes.
**b)**
Assume a solution like ([8](#diffu:pde1:sol1)) and find the relation
between $a$, $k$, $\dfc$, and $\beta$.
<!-- --- begin hint in exercise --- -->
**Hint.**
Insert ([8](#diffu:pde1:sol1)) in the PDE problem.
<!-- --- end hint in exercise --- -->
**c)**
Calculate the stability of the Forward Euler scheme. Design
numerical experiments to confirm the results.
<!-- --- begin hint in exercise --- -->
**Hint.**
Insert the discrete counterpart to ([8](#diffu:pde1:sol1)) in the
numerical scheme. Run experiments at the stability limit and slightly above.
<!-- --- end hint in exercise --- -->
**d)**
Repeat c) for the Backward Euler scheme.
**e)**
Repeat c) for the Crank-Nicolson scheme.
**f)**
How does the extra term $\beta u$ impact the accuracy of the three schemes?
<!-- --- begin hint in exercise --- -->
**Hint.**
For analysis of the accuracy,
compare the numerical and exact amplification factors, in
graphs and/or by Taylor series expansion.
<!-- --- end hint in exercise --- -->
Filename: `diffu_stability_uterm`.
<!-- --- end exercise --- -->
# License
```
# Copyright 2022 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# [Run in Colab](https://colab.research.google.com/github/google/profit-bidder/blob/main/solution_test/profit_bidder_quickstart.ipynb)
# Overview
This notebook acts as a quick-start guide to help you understand the different steps involved in the solution. Unlike the production pipeline that you can set up using the complete solution, the notebook runs through all the steps in one place using synthesized test data. Please note that you will **not be able to test the final step** because the data is synthesized.
## Scope of this notebook
### Dataset
We provide synthesized datasets in the git repo that you will clone and use in the notebook. There are three CSV files:
* p_Campaign_43939335402485897.csv
* p_Conversion_43939335402485897.csv
* client_profit.csv
In addition, we provide the schema for the above files in JSON format, which you will use in the notebook to create the tables in BigQuery.
### Objective
To help you become conversant with the following:
1. Setup your environment (install the libraries, initialize the variables, authenticate to Google Cloud, etc.)
1. Create a service account and two BigQuery datasets
1. Transform the data, create batches of the data, and push the data through a REST API call to CM360
### Costs
This tutorial uses billable components of Google Cloud:
* [BigQuery](https://cloud.google.com/bigquery)
Use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.
## Before you begin
For this reference guide, you need a [Google Cloud project](https://console.cloud.google.com/cloud-resource-manager).
You can create a new one, or select a project you already created.
The following steps are required, regardless of where you are running your notebook (locally or in a Cloud AI Platform Notebook).
* [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
* [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
* (When using a non-Google Cloud local environment) Install the [Google Cloud SDK](https://cloud.google.com/sdk/)
### Mandatory variables
You must set the following variables:
* PB_GCP_PROJECT to [your Google Cloud project]
* PB_GCP_APPLICATION_CREDENTIALS to [the full path, including the file name, of the service account JSON key file, if you chose to authenticate to Google Cloud with a service account]
# Setup environment
## *PIP install appropriate packages*
```
%pip install google-cloud-storage # for Storage Account
%pip install google-cloud # for cloud sdk
%pip install google-cloud-bigquery # for BigQuery
%pip install google-cloud-bigquery-storage # for BigQuery Storage client
%pip install google-api-python-client # for Key management
%pip install oauth2client # for Key management
```
## *Initialize all the variables*
### *Remove all environment variables*
This comes in handy when troubleshooting.
```
# remove all local variables
# ^^^^^^^^^^^^^^^^^^^^^
# beg utils
# ^^^^^^^^^^^^^^^^^^^^^
# local scope
myvar = [key for key in locals().keys() if not key.startswith('_')]
print (len(locals().keys()))
print (len(myvar))
# print (myvar)
for eachvar in myvar:
print (eachvar)
del locals()[eachvar]
print (len(locals().keys()))
# global scope
myvar = [key for key in globals().keys() if not key.startswith('_')]
print (len(globals().keys()))
print (len(myvar))
# print (myvar)
for eachvar in myvar:
print (eachvar)
del globals()[eachvar]
print (len(globals().keys()))
# ^^^^^^^^^^^^^^^^^^^^^
# end utils
# ^^^^^^^^^^^^^^^^^^^^^
```
### *Create Python and shell environment variables*
```
# GCP Project
PB_GCP_PROJECT = "my-project" #@param {type:"string"}
# Default values
PB_SOLUTION_PREFIX="pb_" #@param {type:"string"}
# service account
PB_SERVICE_ACCOUNT_NAME=PB_SOLUTION_PREFIX+"profit-bidder" #@param {type:"string"}
PB_SERVICE_ACCOUNT_NAME=PB_SERVICE_ACCOUNT_NAME.replace('_','-')
PB_SA_ROLES="roles/bigquery.dataViewer roles/pubsub.publisher roles/iam.serviceAccountTokenCreator"
PB_SA_EMAIL=PB_SERVICE_ACCOUNT_NAME + '@' + PB_GCP_PROJECT + '.iam.gserviceaccount.com'
# BQ DS for SA360/CM360
PB_DS_SA360=PB_SOLUTION_PREFIX + "sa360_data" #@param {type:"string"}
# BQ DS for Business data
PB_DS_BUSINESS_DATA=PB_SOLUTION_PREFIX + "business_data" #@param {type:"string"}
# Client margin table
PB_CLIENT_MARGIN_DATA_TABLE_NAME="client_margin_data_table" #@param {type:"string"}
# Tranformed data table
PB_CM360_TABLE="my_transformed_data" #@param {type:"string"}
PB_CM360_PROFILE_ID="my_cm_profileid" #@param {type:"string"}
PB_CM360_FL_ACTIVITY_ID="my_fl_activity_id" #@param {type:"string"}
PB_CM360_FL_CONFIG_ID="my_fl_config_id" #@param {type:"string"}
# DON'T CHANGE THE BELOW VARIABLES; they are hardcoded to match the test dataset
PB_SQL_TRANSFORM_ADVERTISER_ID="43939335402485897" # synthesized id used for testing
PB_CAMPAIGN_TABLE_NAME="p_Campaign_" + PB_SQL_TRANSFORM_ADVERTISER_ID
PB_CONVERSION_TABLE_NAME="p_Conversion_" + PB_SQL_TRANSFORM_ADVERTISER_ID
PB_TIMEZONE="America/New_York"
PB_REQUIRED_KEYS = [
'conversionId',
'conversionQuantity',
'conversionRevenue',
'conversionTimestamp',
'conversionVisitExternalClickId',
]
PB_API_SCOPES = ['https://www.googleapis.com/auth/dfareporting',
'https://www.googleapis.com/auth/dfatrafficking',
'https://www.googleapis.com/auth/ddmconversions',
'https://www.googleapis.com/auth/devstorage.read_write']
PB_CM360_API_NAME = 'dfareporting'
PB_CM360_API_VERSION = 'v3.5'
PB_BATCH_SIZE=100
# create a variable that you can pass to the bq Cell magic
# import the variables to the shell
import os
PB_all_args = [key for key in locals().keys() if not key.startswith('_')]
# print (PB_all_args)
PB_BQ_ARGS = {}
for PB_each_key in PB_all_args:
# print (f"{PB_each_key}:{locals()[PB_each_key]}")
if PB_each_key.upper().startswith(PB_SOLUTION_PREFIX.upper()):
PB_BQ_ARGS[PB_each_key] = locals()[PB_each_key]
os.environ[PB_each_key] = str(PB_BQ_ARGS[PB_each_key])
print (PB_BQ_ARGS)
```
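The objective above mentions batching conversions before pushing them to CM360, and `PB_BATCH_SIZE` caps each request at 100 rows. A minimal chunking helper is sketched below; the function name and row format are illustrative assumptions, not part of the solution's own code:

```python
from typing import Iterator, List

def make_batches(rows: List[dict], batch_size: int = 100) -> Iterator[List[dict]]:
    """Yield successive batches of at most batch_size conversion rows.

    Each yielded batch can then be wrapped into one request body for the
    CM360 conversions upload call; batch_size mirrors PB_BATCH_SIZE above.
    (Hypothetical helper for illustration.)
    """
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]
```

For example, 250 conversion rows with `batch_size=100` yield three batches of 100, 100, and 50 rows.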
## *Setup your Google Cloud project*
```
# set the desired Google Cloud project
!gcloud config set project $PB_GCP_PROJECT
import os
os.environ['GOOGLE_CLOUD_PROJECT'] = PB_GCP_PROJECT
# validate that the Google Cloud project has been set properly.
!echo 'gcloud will use the below project:'
!gcloud info --format='value(config.project)'
```
## *Authenticate with Google Cloud*
### Authenticate using ServiceAccount Key file
```
# download the ServiceAccount key and provide the path to the file below
# PB_GCP_APPLICATION_CREDENTIALS = "<Full path with the file name to the above downloaded json file>"
# PB_GCP_APPLICATION_CREDENTIALS = "/Users/dpani/Downloads/dpani-sandbox-2-3073195cd132.json"
# uncomment the below code in codelab environment
# authenticate using service account
# from google.colab import files
# # Upload service account key
# keyfile_upload = files.upload()
# PB_GCP_APPLICATION_CREDENTIALS = list(keyfile_upload.keys())[0]
# import os
# os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = PB_GCP_APPLICATION_CREDENTIALS
# # set the account
# !echo "Setting Service Account:" $PB_GCP_APPLICATION_CREDENTIALS
# !gcloud auth activate-service-account --key-file=$PB_GCP_APPLICATION_CREDENTIALS
```
### Authenticate using OAuth
```
# uncomment the below code in codelab environment
# authenticate using oauth
import sys
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
```
## *Enable the below Google Cloud Services for the solution*
```
# set the proper Permission for the required Google Cloud Services
!gcloud services enable \
bigquery.googleapis.com \
bigquerystorage.googleapis.com \
bigquerydatatransfer.googleapis.com \
doubleclickbidmanager.googleapis.com \
doubleclicksearch.googleapis.com \
storage-api.googleapis.com
```
# Utility functions
## *Delete a dataset in BigQuery (DDL)*
```
# delete the BigQuery dataset...!!! BE CAREFUL !!!
def delete_dataset(dataset_id):
"""Deletes a BigQuery dataset
This is not recommended for use in a production environment.
It comes in handy in the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
dataset_id(:obj:`str`): The BigQuery dataset name that we want to delete
"""
# [START bigquery_delete_dataset]
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# dataset_id = 'your-project.your_dataset'
# Use the delete_contents parameter to delete a dataset and its contents.
# Use the not_found_ok parameter to not receive an error if the
# dataset has already been deleted.
client.delete_dataset(
dataset_id, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(dataset_id))
```
## *Delete a table in BigQuery (DDL)*
```
# delete BigQuery table if not needed...!!! BE CAREFUL !!!
def delete_table(table_id):
"""Deletes a BigQuery table
This is not recommended for use in a production environment.
It comes in handy in the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
table_id(:obj:`str`): The BigQuery table name that we want to delete
"""
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# client.delete_table(table_id, not_found_ok=True) # Make an API request.
client.delete_table(table_id) # Make an API request.
print("Deleted table '{}'.".format(table_id))
```
## *Delete a service account*
```
# delete a service account
def delete_service_account(PB_GCP_PROJECT: str,
PB_ACCOUNT_NAME: str
):
"""The function deletes a service account
This is not recommended for use in a production environment.
It comes in handy in the iterative development and testing phases of the SDLC.
!!! BE CAREFUL !!!!
Args:
PB_GCP_PROJECT:(:obj:`str`): Google Cloud project for deployment
PB_ACCOUNT_NAME:(:obj:`str`): Name of the service account.
"""
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('iam', 'v1', credentials=credentials)
# The resource name of the service account in the following format:
# `projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}`.
# Using `-` as a wildcard for the `PROJECT_ID` will infer the project from
# the account. The `ACCOUNT` value can be the `email` address or the
# `unique_id` of the service account.
name = f'projects/{PB_GCP_PROJECT}/serviceAccounts/{PB_ACCOUNT_NAME}@{PB_GCP_PROJECT}.iam.gserviceaccount.com'
print("Going to delete service account '{}'.".format(name))
request = service.projects().serviceAccounts().delete(name=name)
request.execute()
print("Account deleted")
```
# Profit bid solution
## *Create the service account and BigQuery datasets:*
* Service account (the same one used to push the conversions to SA360/CM360)
* BQ DS for SA360/CM360
* BQ DS for Business data
```
%%bash
# create the service account
# and add necessary iam roles
function get_roles {
gcloud projects get-iam-policy ${PB_GCP_PROJECT} --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:${PB_SA_EMAIL}"
}
function create_service_account {
echo "Creating service account $PB_SA_EMAIL"
gcloud iam service-accounts describe $PB_SA_EMAIL > /dev/null 2>&1
RETVAL=$?
if (( ${RETVAL} != "0" )); then
gcloud iam service-accounts create ${PB_SERVICE_ACCOUNT_NAME} --description 'Profit Bidder Service Account' --project ${PB_GCP_PROJECT}
fi
for role in ${PB_SA_ROLES}; do
echo -n "Adding ${PB_SERVICE_ACCOUNT_NAME} to ${role} "
if get_roles | grep $role &> /dev/null; then
echo "already added."
else
gcloud projects add-iam-policy-binding ${PB_GCP_PROJECT} --member="serviceAccount:${PB_SA_EMAIL}" --role="${role}"
echo "added."
fi
done
}
# Creates the service account and adds necessary permissions
create_service_account
function create_bq_ds {
dataset=$1
echo "Creating BQ dataset: '${dataset}'"
bq --project_id=${PB_GCP_PROJECT} show --dataset ${dataset} > /dev/null 2>&1
RETVAL=$?
if (( ${RETVAL} != "0" )); then
bq --project_id=${PB_GCP_PROJECT} mk --dataset ${dataset}
else
echo "Reusing ${dataset}."
fi
}
#create the BQ DSs
create_bq_ds $PB_DS_SA360
create_bq_ds $PB_DS_BUSINESS_DATA
```
## *Download the test data*
Test data is in the 'solution_test' folder.
```
%%bash
# Download the test data from gitrepo
DIR=$HOME/solutions/profit-bidder
if [ -d "$DIR" ]
then
echo $DIR already exists.
else
mkdir -p $HOME/solutions/profit-bidder
cd $HOME/solutions/profit-bidder
git clone https://github.com/google/profit-bidder.git .
fi
export PB_TEST_DATA_DIR=$DIR/solution_test
ls -ltrah $PB_TEST_DATA_DIR
echo $PB_TEST_DATA_DIR folder contains the test data.
```
## *Upload test data to BigQuery*
```
%%bash
# uploads the test data into BigQuery
function create_bq_table {
dataset=$1
table_name=$2
schema_name=$3
sql_result=$(list_bq_table $1 $2)
echo "Creating BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
echo "Reusing ${dataset}.${table_name}."
else
bq --project_id=${PB_GCP_PROJECT} mk -t --schema ${schema_name} --time_partitioning_type DAY ${dataset}.${table_name}
fi
}
function delete_bq_table {
dataset=$1
table_name=$2
sql_result=$(list_bq_table $1 $2)
echo "Deleting BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
bq rm -f -t $PB_GCP_PROJECT:$dataset.$table_name
else
echo "${dataset}.${table_name} doesn't exists."
fi
}
function list_bq_table {
dataset=$1
table_name=$2
echo "Checking BQ table exist: '${dataset}.${table_name}'"
sql_query='SELECT
COUNT(1) AS cnt
FROM
`<myproject>`.<mydataset>.__TABLES_SUMMARY__
WHERE table_id = "<mytable_name>"'
sql_query="${sql_query/<myproject>/${PB_GCP_PROJECT}}"
sql_query="${sql_query/<mydataset>/${dataset}}"
sql_query="${sql_query/<mytable_name>/${table_name}}"
bq_qry_cmd="bq query --use_legacy_sql=false --format=csv '<mysql_qery>'"
bq_qry_cmd="${bq_qry_cmd/<mysql_qery>/${sql_query}}"
sql_result=$(eval $bq_qry_cmd)
if [[ "$sql_result" == *"1"* ]]; then
echo "${dataset}.${table_name} exist"
echo "1"
else
echo "${dataset}.${table_name} doesn't exist"
echo "0"
fi
}
function load_bq_table {
dataset=$1
table_name=$2
data_file=$3
schema_name=$4
sql_result=$(list_bq_table $1 $2)
echo "Loading data to BQ table: '${dataset}.${table_name}'"
if [[ "$sql_result" == *"1"* ]]; then
delete_bq_table $dataset $table_name
fi
if [[ "$schema_name" == *"autodetect"* ]]; then
bq --project_id=${PB_GCP_PROJECT} load \
--autodetect \
--source_format=CSV \
$dataset.$table_name \
$data_file
else
create_bq_table $dataset $table_name $schema_name
bq --project_id=${PB_GCP_PROJECT} load \
--source_format=CSV \
--time_partitioning_type=DAY \
--skip_leading_rows=1 \
${dataset}.${table_name} \
${data_file}
fi
}
# save the current working directory
current_working_dir=`pwd`
# change to the test data directory
DIR=$HOME/solutions/profit-bidder
export PB_TEST_DATA_DIR=$DIR/solution_test
ls -ltrah $PB_TEST_DATA_DIR
echo $PB_TEST_DATA_DIR folder contains the test data.
cd $PB_TEST_DATA_DIR
pwd
# create campaign table
# load test data to campaign table
load_bq_table $PB_DS_SA360 $PB_CAMPAIGN_TABLE_NAME "p_Campaign_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv" "p_Campaign_schema.json"
# create conversion table
# load test data to conversion
load_bq_table $PB_DS_SA360 $PB_CONVERSION_TABLE_NAME "p_Conversion_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv" "${PB_TEST_DATA_DIR}/p_Conversion_schema.json"
# load test profit data
load_bq_table $PB_DS_BUSINESS_DATA $PB_CLIENT_MARGIN_DATA_TABLE_NAME "client_profit.csv" "autodetect"
# change to original working directory
cd $current_working_dir
pwd
```
## *Create a BigQuery client, import the libraries, load the bigquery Cell magic*
```
# create a BigQuery client
from google.cloud import bigquery
bq_client = bigquery.Client(project=PB_GCP_PROJECT)
# load the bigquery Cell magic
# %load_ext google.cloud.bigquery
%reload_ext google.cloud.bigquery
# test that BigQuery client works
sql = """
SELECT name
FROM `bigquery-public-data.usa_names.usa_1910_current`
WHERE state = 'TX'
LIMIT 100
"""
# Run a Standard SQL query using the environment's default project
df = bq_client.query(sql).to_dataframe()
df
```
## *Transform and aggregate*
```
# The below query transforms the data from Campaign, Conversion,
# and profit tables.
aggregate_sql = f"""
-- Copyright 2021 Google LLC
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
-- ****** TEMPLATE CODE ******
-- NOTE: Please thoroughly review and test your version of this query before launching your pipeline
-- The resulting data from this script should provide all the necessary columns for upload via
-- the CM360 API and the SA360 API
--
-- the below placeholders must be replaced with appropriate values.
-- install.sh does so
-- project_id as: {PB_GCP_PROJECT}
-- sa360_dataset_name as: {PB_DS_SA360}
-- advertiser_id as: {PB_SQL_TRANSFORM_ADVERTISER_ID}
-- timezone as: America/New_York e.g. America/New_York
-- floodlight_name as: My Sample Floodlight Activity
-- account_type as: Other engines
-- gmc_dataset_name as: pb_gmc_data
-- gmc_account_id as: mygmc_account_id
-- business_dataset_name as: {PB_DS_BUSINESS_DATA}
-- client_margin_data_table as: {PB_CLIENT_MARGIN_DATA_TABLE_NAME}
-- client_profit_data_sku_col as: sku
-- client_profit_data_profit_col as: profit
-- target_floodlight_name as: My Sample Floodlight Activity
-- product_sku_var as: u9
-- product_quantity_var as: u10
-- product_unit_price_var as: u11
-- product_sku_regex as: (.*?);
-- product_quantity_regex as: (.*?);
-- product_unit_price_regex as: (.*?);
-- product_sku_delim as: |
-- product_quantity_delim as: |
-- product_unit_price_delim as: |
--
WITH
campaigns AS (
-- Example: Extracting all campaign names and IDs if needed for filtering for
-- conversions for a subset of campaigns
SELECT
campaign,
campaignId,
row_number() OVER (partition BY campaignId ORDER BY lastModifiedTimestamp DESC) as row_num -- for de-duping
FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Campaign_{PB_SQL_TRANSFORM_ADVERTISER_ID}`
-- Be sure to replace the Timezone with what is appropriate for your use case
WHERE EXTRACT(DATE FROM _PARTITIONTIME) >= DATE_SUB(CURRENT_DATE('America/New_York'), INTERVAL 7 DAY)
)
,expanded_conversions AS (
-- Parses out all relevant product data from a conversion request string
SELECT
conv.*,
campaign,
-- example of U-Variables that are parsed to extract product purchase data
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u9=(.*?);"),"|") AS u9,
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u10=(.*?);"),"|") AS u10,
SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, "u11=(.*?);"),"|") AS u11,
FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Conversion_{PB_SQL_TRANSFORM_ADVERTISER_ID}` AS conv
LEFT JOIN (
SELECT campaign, campaignId
FROM campaigns
WHERE row_num = 1
GROUP BY 1,2
) AS camp
USING (campaignId)
WHERE
-- Filter for conversions that occurred in the previous day
-- Be sure to replace the Timezone with what is appropriate for your use case
floodlightActivity IN ('My Sample Floodlight Activity')
AND accountType = 'Other engines' -- filter by Account Type as needed
)
,flattened_conversions AS (
-- Flattens the extracted product data for each conversion which leaves us with a row
-- of data for each product purchased as part of a given conversion
SELECT
advertiserId,
campaignId,
conversionId,
skuId,
pos1,
quantity,
pos2,
cost,
pos3
FROM expanded_conversions,
UNNEST(expanded_conversions.u9) AS skuId WITH OFFSET pos1,
UNNEST(expanded_conversions.u10) AS quantity WITH OFFSET pos2,
UNNEST(expanded_conversions.u11) AS cost WITH OFFSET pos3
WHERE pos1 = pos2 AND pos1 = pos3 AND skuId != ''
GROUP BY 1,2,3,4,5,6,7,8,9
ORDER BY conversionId
)
,inject_gmc_margin AS (
-- Merges Margin data with the products found in the conversion data
SELECT
advertiserId,
campaignId,
conversionId,
skuId,
quantity,
IF(cost = '', '0', cost) as cost,
pos1,
pos2,
pos3,
-- PLACEHOLDER MARGIN, X% for unclassified items
CASE
WHEN profit IS NULL THEN 0.0
ELSE profit
END AS margin,
sku,
FROM flattened_conversions
LEFT JOIN `{PB_GCP_PROJECT}.{PB_DS_BUSINESS_DATA}.{PB_CLIENT_MARGIN_DATA_TABLE_NAME}`
ON flattened_conversions.skuId = sku
group by 1,2,3,4,5,6,7,8,9,10,11
)
,all_conversions as (
-- Rolls up all previously expanded conversion data while calculating profit based on the matched
-- margin value. Also assigns timestamp in millis and micros
SELECT
e.account,
e.accountId,
e.accountType,
e.advertiser,
igm.advertiserId,
e.agency,
e.agencyId,
igm.campaignId,
e.campaign,
e.conversionAttributionType,
e.conversionDate,
-- '00' may be changed to any string value that will help you identify these
-- new conversions in reporting
CONCAT(igm.conversionId, '00') as conversionId,
e.conversionLastModifiedTimestamp,
-- Note:Rounds float quantity and casts to INT, change based on use case
-- This is done to support CM360 API
CAST(ROUND(e.conversionQuantity) AS INT64) AS conversionQuantity,
e.conversionRevenue,
SUM(
FLOOR(CAST(igm.cost AS FLOAT64))
) AS CALCULATED_REVENUE,
-- PROFIT CALCULATED HERE, ADJUST LOGIC AS NEEDED FOR YOUR USE CASE
ROUND(
SUM(
-- multiply item cost by class margin
SAFE_MULTIPLY(
CAST(igm.cost AS FLOAT64),
igm.margin)
),2
) AS CALCULATED_PROFIT,
e.conversionSearchTerm,
e.conversionTimestamp,
-- SA360 timestamp should be in millis
UNIX_MILLIS(e.conversionTimestamp) as conversionTimestampMillis,
-- CM360 Timestamp should be in micros
UNIX_MICROS(e.conversionTimestamp) as conversionTimestampMicros,
e.conversionType,
e.conversionVisitExternalClickId,
e.conversionVisitId,
e.conversionVisitTimestamp,
e.deviceSegment,
e.floodlightActivity,
e.floodlightActivityId,
e.floodlightActivityTag,
e.floodlightEventRequestString,
e.floodlightOrderId,
e.floodlightOriginalRevenue,
status
FROM inject_gmc_margin AS igm
LEFT JOIN expanded_conversions AS e
ON igm.advertiserId = e.advertiserId AND igm.campaignId = e.campaignId AND igm.conversionId = e.conversionId
GROUP BY 1,2,3,4,5,6,8,7,9,10,11,12,13,14,15,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33
)
-- The columns below represent the original conversion data with their new profit
-- values calculated (assigned to conversionRevenue column) along with any original
-- floodlight data that the client wishes to keep for troubleshooting.
SELECT
account,
accountId,
accountType,
advertiser,
advertiserId,
agency,
agencyId,
campaignId,
campaign,
conversionId,
conversionAttributionType,
conversionDate,
conversionTimestamp,
conversionTimestampMillis,
conversionTimestampMicros,
CALCULATED_PROFIT AS conversionRevenue,
conversionQuantity,
-- The below is used for troubleshooting purposes only.
"My Sample Floodlight Activity" AS floodlightActivity,
conversionSearchTerm,
conversionType,
conversionVisitExternalClickId,
conversionVisitId,
conversionVisitTimestamp,
deviceSegment,
CALCULATED_PROFIT,
CALCULATED_REVENUE,
-- Please prefix any original conversion values you wish to keep with "original".
-- These values may help with troubleshooting
conversionRevenue AS originalConversionRevenue,
floodlightActivity AS originalFloodlightActivity,
floodlightActivityId AS originalFloodlightActivityId,
floodlightActivityTag AS originalFloodlightActivityTag,
floodlightOriginalRevenue AS originalFloodlightRevenue,
floodlightEventRequestString,
floodlightOrderId
FROM all_conversions
WHERE CALCULATED_PROFIT > 0.0
ORDER BY account ASC
"""
# execute the transform query
df = bq_client.query(aggregate_sql).to_dataframe()
# print a couple of records of the transformed query
df.head()
# write the data to a table
df.to_gbq(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}',
project_id=PB_GCP_PROJECT,
if_exists='replace',
progress_bar=True,)
```
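The u-variable parsing the query performs with `REGEXP_EXTRACT` and `SPLIT` can be sketched in plain Python for intuition. The request string and `parse_uvar` helper below are made up for illustration; they are not part of the pipeline.

```python
import re

# Made-up floodlight request string, for illustration only
request = "src=123;type=sale;u9=SKU1|SKU2;u10=1|2;u11=9.99|19.99;"

def parse_uvar(s, var, delim="|"):
    """Extract a u-variable's value and split it, mirroring
    SPLIT(REGEXP_EXTRACT(..., "u9=(.*?);"), "|") in the query."""
    m = re.search(re.escape(var) + r"=(.*?);", s)
    return m.group(1).split(delim) if m else []

print(parse_uvar(request, "u9"))    # ['SKU1', 'SKU2']
print(parse_uvar(request, "u11"))   # ['9.99', '19.99']
```

Each list then lines up positionally with the others, which is exactly what the `UNNEST ... WITH OFFSET` step relies on.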
## *Formulate the payload and push to CM360*
```
# Reads the from transformed table, chunks the data,
# and uploads the data to CM360
# We need to chunk the data so as to adhere
# to the payload limit of the CM360 REST API.
import pytz
import datetime
import decimal
import logging
import json
import google.auth
import google.auth.impersonated_credentials
import google_auth_httplib2
from googleapiclient import discovery
def today_date(timezone):
"""Returns today's date using the timezone
Args:
timezone(:obj:`str`): The timezone with default to America/New_York
Returns:
Date: today's date
"""
tz = pytz.timezone(timezone)
return datetime.datetime.now(tz).date()
def time_now_str(timezone):
"""Returns the current date and time as a formatted string in the given timezone
Args:
timezone(:obj:`str`): The timezone with default to America/New_York
Returns:
str: the current date and time formatted as "%m-%d-%Y, %H:%M:%S"
"""
# set correct timezone for datetime check
tz = pytz.timezone(timezone)
return datetime.datetime.now(tz).strftime("%m-%d-%Y, %H:%M:%S")
def pluralize(count):
"""A utility function returning a plural suffix
Args:
count(:obj:`int`): A number
Returns:
str: 's' or empty
"""
if count > 1:
return 's'
return ''
def get_data(table_ref_name, cloud_client, batch_size):
"""Returns the data from the transformed table.
Args:
table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table
cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client
batch_size(:obj:`int`): Batch size
Returns:
Array[]: list/rows of data
"""
current_batch = []
table = cloud_client.get_table(table_ref_name)
print(f'Downloading {table.num_rows} rows from table {table_ref_name}')
skip_stats = {}
for row in cloud_client.list_rows(table_ref_name):
missing_keys = []
for key in PB_REQUIRED_KEYS:
val = row.get(key)
if val is None:
missing_keys.append(key)
count = skip_stats.get(key, 0)
count += 1
skip_stats[key] = count
if len(missing_keys) > 0:
row_as_dict = dict(row.items())
logging.debug(f'Skipped row: missing values for keys {missing_keys} in row {row_as_dict}')
continue
result = {}
conversionTimestamp = row.get('conversionTimestamp')
# convert floating point seconds to microseconds since the epoch
result['conversionTimestampMicros'] = int(conversionTimestamp.timestamp() * 1_000_000)
for key in row.keys():
value = row.get(key)
if type(value) == datetime.datetime or type(value) == datetime.date:
result[key] = value.strftime("%y-%m-%d ")
elif type(value) == decimal.Decimal:
result[key] = float(value)
else:
result[key] = value
current_batch.append(result)
if len(current_batch) >= batch_size:
yield current_batch
current_batch = []
if len(current_batch) > 0:
yield current_batch
pretty_skip_stats = ', '.join([f'{val} row{pluralize(val)} missing key "{key}"' for key, val in skip_stats.items()])
logging.info(f'Processed {table.num_rows} from table {table_ref_name} skipped {pretty_skip_stats}')
def setup(sa_email, api_scopes, api_name, api_version):
"""Impersonates a service account, authenticates with Google services,
and returns a discovery API client for further communication with Google services.
Args:
sa_email(:obj:`str`): Service Account to impersonate
api_scopes(:obj:`Any`): An array of scopes that the service account
expects to have permission for in CM360
api_name(:obj:`str`): CM360 API Name
api_version(:obj:`str`): CM360 API version
Returns:
module:discovery: to interact with Google services.
"""
source_credentials, project_id = google.auth.default()
target_credentials = google.auth.impersonated_credentials.Credentials(
source_credentials=source_credentials,
target_principal=sa_email,
target_scopes=api_scopes,
delegates=[],
lifetime=500)
http = google_auth_httplib2.AuthorizedHttp(target_credentials)
# setup API service here
try:
return discovery.build(
api_name,
api_version,
cache_discovery=False,
http=http)
except Exception as e:
print(f'Could not authenticate: {e}')
def upload_data(timezone, rows, profile_id, fl_configuration_id, fl_activity_id):
"""POSTs the conversion data using CM360 API
Args:
timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York
rows(:obj:`Any`): An array of conversion data
profile_id(:obj:`str`): Profile id - should be gathered from the CM360
fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360
fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360
"""
print('Starting conversions for ' + time_now_str(timezone))
if not fl_activity_id or not fl_configuration_id:
print('Please make sure to provide a value for both floodlightActivityId and floodlightConfigurationId!!')
return
# Build the API connection
try:
service = setup(PB_SA_EMAIL, PB_API_SCOPES,
PB_CM360_API_NAME, PB_CM360_API_VERSION)
# upload_log = ''
print('Authorization successful')
currentrow = 0
all_conversions = """{"kind": "dfareporting#conversionsBatchInsertRequest", "conversions": ["""
while currentrow < len(rows):
for row in rows[currentrow:min(currentrow+100, len(rows))]:
conversion = json.dumps({
'kind': 'dfareporting#conversion',
'gclid': row['conversionVisitExternalClickId'],
'floodlightActivityId': fl_activity_id, # (Use short form CM Floodlight Activity Id )
'floodlightConfigurationId': fl_configuration_id, # (Can be found in CM UI)
'ordinal': row['conversionId'],
'timestampMicros': row['conversionTimestampMicros'],
'value': row['conversionRevenue'],
'quantity': row['conversionQuantity'] #(Alternatively, this can be hardcoded to 1)
})
# print('Conversion: ', conversion) # uncomment if you want to output each conversion
all_conversions = all_conversions + conversion + ','
all_conversions = all_conversions[:-1] + ']}'
payload = json.loads(all_conversions)
print(f'CM360 request payload: {payload}')
request = service.conversions().batchinsert(profileId=profile_id, body=payload)
print('[{}] - CM360 API Request: '.format(time_now_str(timezone)), request)
response = request.execute()
print('[{}] - CM360 API Response: '.format(time_now_str(timezone)), response)
if not response['hasFailures']:
print('Successfully inserted batch.')
else:
status = response['status']
for line in status:
try:
if line['errors']:
for error in line['errors']:
print('Error in line ' + json.dumps(line['conversion']))
print('\t[%s]: %s' % (error['code'], error['message']))
except KeyError:
print('Conversion with gclid ' + line['gclid'] + ' inserted.')
print('Either finished or found errors.')
currentrow += 100
all_conversions = """{"kind": "dfareporting#conversionsBatchInsertRequest", "conversions": ["""
except Exception as e:
print(f'Upload failed: {e}')
def partition_and_distribute(cloud_client, table_ref_name, batch_size, timezone,
profile_id, fl_configuration_id, fl_activity_id):
"""Partitions the data to chunks of batch size and
uploads to the CM360
Args:
table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table
cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client
batch_size(:obj:`int`): Batch size
timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York
profile_id(:obj:`str`): Profile id - should be gathered from the CM360
fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360
fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360
"""
for batch in get_data(table_ref_name, cloud_client, batch_size):
# print(f'Batch size: {len(batch)} batch: {batch}')
upload_data(timezone, batch, profile_id, fl_configuration_id,
fl_activity_id)
# DEBUG BREAK!
if batch_size == 1:
break
try:
table = bq_client.get_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')
except Exception:
print('Could not find table with the provided table name: {}.'.format(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}'))
table = None
todays_date = today_date(PB_TIMEZONE)
if table is not None:
table_ref_name = table.full_table_id.replace(':', '.')
if table.modified.date() == todays_date or table.created.date() == todays_date:
print('[{}] is up-to-date. Continuing with upload...'.format(table_ref_name))
partition_and_distribute(bq_client, table_ref_name, PB_BATCH_SIZE,
PB_TIMEZONE, PB_CM360_PROFILE_ID,
PB_CM360_FL_CONFIG_ID, PB_CM360_FL_ACTIVITY_ID)
else:
print('[{}] data may be stale. Please check workflow to verify that it has run correctly. Upload is aborted!'.format(table_ref_name))
else:
print('Table not found! Please double check your workflow for any errors.')
```
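Building the request body by string concatenation, as `upload_data` does above, is fragile (a stray comma or quote breaks the JSON). A sketch of the same payload assembled from native Python structures instead; the field names follow the `conversionsBatchInsertRequest` shape already used above, and the batch size of 100 matches the original:

```python
import json

BATCH_SIZE = 100   # CM360 batchinsert payload limit used above

def build_payloads(rows, fl_activity_id, fl_configuration_id):
    """Yield one conversionsBatchInsertRequest body per batch of rows."""
    for start in range(0, len(rows), BATCH_SIZE):
        conversions = [{
            'kind': 'dfareporting#conversion',
            'gclid': row['conversionVisitExternalClickId'],
            'floodlightActivityId': fl_activity_id,
            'floodlightConfigurationId': fl_configuration_id,
            'ordinal': row['conversionId'],
            'timestampMicros': row['conversionTimestampMicros'],
            'value': row['conversionRevenue'],
            'quantity': row['conversionQuantity'],
        } for row in rows[start:start + BATCH_SIZE]]
        yield {'kind': 'dfareporting#conversionsBatchInsertRequest',
               'conversions': conversions}

# Hypothetical demo row, illustrative only
demo = [{'conversionVisitExternalClickId': 'gclid-1', 'conversionId': '100',
         'conversionTimestampMicros': 1, 'conversionRevenue': 12.5,
         'conversionQuantity': 1}]
print(next(build_payloads(demo, 'FA_ID', 'FC_ID'))['conversions'][0]['value'])  # 12.5
```

Each yielded dict can be passed directly as `body=` to `service.conversions().batchinsert(...)`, and `json.dumps` will always produce well-formed JSON from it.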
# Clean up - !!! BE CAREFUL!!!
## Delete the transformed table
```
# deletes the transformed table
delete_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')
```
## Delete the SA and BQ DSs:
* Service account (the same one used to push the conversion to the SA360/CM360)
* BQ DS for SA360/CM360
* BQ DS for Business data
```
# deletes the service account
delete_service_account(PB_GCP_PROJECT, PB_SERVICE_ACCOUNT_NAME)
# deletes the dataset
delete_dataset(PB_DS_SA360)
delete_dataset(PB_DS_BUSINESS_DATA)
```
## Delete the Google Cloud Project
The way to avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial is to **delete the project**.
The easiest way to eliminate billing is to delete the project you created for the tutorial.
**Caution**: Deleting a project has the following effects:
* *Everything in the project is deleted.* If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.
* <b>Custom project IDs are lost.</b> When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an <b>appspot.com</b> URL, delete selected resources inside the project instead of deleting the whole project.
If you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.
<br>
<ol type="1">
<li>In the Cloud Console, go to the <b>Manage resources</b> page.</li>
Go to the <a href="https://console.cloud.google.com/iam-admin/projects">Manage resources page</a>
<li>In the project list, select the project that you want to delete and then click <b>Delete</b> Trash icon.</li>
<li>In the dialog, type the project ID and then click <b>Shut down</b> to delete the project. </li>
</ol>
| github_jupyter |
```
import numpy as np
import pandas as pd
import os
import sys
import pickle
import time
import datetime
import matplotlib.pyplot as plt
import seaborn as sns
from importlib import reload
%matplotlib inline
from IPython.core.display import display, HTML, clear_output
display(HTML("<style>.container { width:80% !important; }</style>"))
```
# Load Content Embeddings
```
cwd = os.getcwd()
content_embeddings = pd.read_pickle(os.path.join("..", "..", "data", "ml-20m", "autoencoder_embeddings.pkl"))
content_embeddings = pd.DataFrame(content_embeddings)
print(content_embeddings.shape)
content_embeddings.head()
```
# Load Collaborative Embeddings
```
cwd = os.getcwd()
collaborative_embeddings = pd.read_pickle(os.path.join("..", "..", "data", "ml-20m", "movie_embeddings_1.pkl"))
print(collaborative_embeddings.shape)
collaborative_embeddings.head()
```
# Format Movie Lookup Data
```
# Load index mapping
with open('../../data/ml-20m/movie_to_idx.pkl', 'rb') as handle:
movie2idx = pickle.load(handle)
movies = pd.read_csv(os.path.join("..", "..", "data", "ml-20m", "movies.csv"))
print("{} unique movies in movies.csv".format(len(movies.movieId.unique())))
ratings = pd.read_csv(os.path.join("..", "..", "data", "ml-20m", "ratings.csv"))
print("{} unique movies in ratings.csv".format(len(ratings.movieId.unique())))
movies = pd.merge(movies, ratings, on="movieId", how="inner")
movies.movieId = movies.movieId.apply(lambda x: movie2idx[x])
#get popularity
popularity = pd.DataFrame(movies[['userId', 'title', 'movieId']].groupby(['title', 'movieId']).agg(['count']))
popularity.reset_index(inplace=True)
popularity.columns = ['title', 'movieId', 'ratings_count']
popularity.sort_values('ratings_count', ascending=False, inplace=True)
movies = pd.merge(popularity[['movieId', 'ratings_count']], movies, on='movieId')
movies.reset_index(inplace=True)
#get average ratings
average_ratings = pd.DataFrame(movies[['rating', 'title', 'movieId']].groupby(['title', 'movieId']).agg(['mean']))
average_ratings.reset_index(inplace=True)
average_ratings.columns = ['title', 'movieId', 'avg_rating']
movies = pd.merge(average_ratings[['movieId', 'avg_rating']], movies, on='movieId')
movies.reset_index(inplace=True)
movies = movies[['movieId', 'title', 'genres', 'ratings_count', 'avg_rating']]
movies.drop_duplicates(inplace=True)
print("{} unique movies in embeddings".format(len(movies.movieId.unique())))
movies.set_index('movieId', inplace=True, drop=True)
movies.sort_index(ascending=True, inplace=True)
print(movies.shape)
movies.head(5)
movies.to_csv('../../data/movie_demographics.csv')
movies.query('title == "Zodiac (2007)"')
```
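The popularity and average-rating roll-ups above reduce to two plain groupby aggregations, which can be done in one pass. A toy sketch with made-up data, not the MovieLens files:

```python
import pandas as pd

# Toy stand-in for the MovieLens ratings frame
ratings = pd.DataFrame({
    "movieId": [1, 1, 2, 1],
    "rating":  [4.0, 5.0, 3.0, 3.0],
})

# ratings_count mirrors the popularity roll-up; avg_rating the mean-rating one
stats = (ratings.groupby("movieId")["rating"]
                .agg(ratings_count="count", avg_rating="mean")
                .reset_index())
print(stats)
```

Named aggregation avoids the manual `reset_index` / `columns = [...]` renaming dance used above.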
# Recommendations
```
#Import class
import os; import sys
cwd = os.getcwd()
path = os.path.join('..' , '..', 'movie_recommender')
if not path in sys.path:
sys.path.append(path)
del cwd, path
from similarity import SimilarityPredictions
def lookup_movie_id_by_title(movie_title):
return movies[movies.title.str.contains(movie_title)]
lookup_movie_id_by_title("Notebook")
primer = 3006
lotr = 131 #fellowship of the ring
inception = 2087
zodiac = 3995
pulp_fiction = 11
notebook = 1764
def get_detailed_recs(movie_id, embeddings, file_path):
#get similar movies
sim_model = SimilarityPredictions(embeddings, similarity_metric="cosine")
output = sim_model.predict_similar_items(seed_item=movie_id, n=20)
similar_movies = pd.DataFrame(output)
similar_movies.set_index('item_id', inplace=True)
sim_df = pd.merge(movies, similar_movies, left_index=True, right_index=True)
sim_df.sort_values('similarity_score', ascending=False, inplace=True)
#save recs locally
sim_df.head(20).to_csv(file_path, index=False, header=True)
return sim_df.head(20)
def get_ensemble_recs(movie_id, content_embeddings, collaborative_embeddings, file_path):
#get similar movies from content
sim_model_cont = SimilarityPredictions(content_embeddings, similarity_metric="cosine")
cont_output = sim_model_cont.predict_similar_items(seed_item=movie_id, n=26744)
similar_movies = pd.DataFrame(cont_output)
similar_movies.set_index('item_id', inplace=True)
sim_df_cont = pd.merge(movies, similar_movies, left_index=True, right_index=True)
sim_df_cont.sort_values('similarity_score', ascending=False, inplace=True)
sim_df_cont = sim_df_cont.rename(index=str, columns={"similarity_score": "content_similarity_score"})
#get similar movies from collaborative
sim_model_coll = SimilarityPredictions(collaborative_embeddings, similarity_metric="cosine")
coll_output = sim_model_coll.predict_similar_items(seed_item=movie_id, n=26744)
similar_movies = pd.DataFrame(coll_output)
similar_movies.set_index('item_id', inplace=True)
sim_df_coll = pd.merge(movies, similar_movies, left_index=True, right_index=True)
sim_df_coll.sort_values('similarity_score', ascending=False, inplace=True)
sim_df_coll = sim_df_coll.rename(index=str, columns={"similarity_score": "collaborative_similarity_score"})
#ensemble results
sim_df_avg = pd.merge(sim_df_coll, pd.DataFrame(sim_df_cont['content_similarity_score']), left_index=True, right_index=True)
sim_df_avg['average_similarity_score'] = (sim_df_avg['content_similarity_score'] + sim_df_avg['collaborative_similarity_score'])/2
#sim_df_avg.drop("collaborative_similarity_score", axis=1, inplace=True)
#sim_df_avg.drop("content_similarity_score", axis=1, inplace=True)
sim_df_avg.sort_values('average_similarity_score', ascending=False, inplace=True)
#save recs locally
sim_df_avg.head(20).to_csv(file_path, index=False, header=True)
return sim_df_avg.head(20)
```
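`SimilarityPredictions` is a project-local class; for readers without it, here is a minimal sketch of what cosine-based nearest-item lookup does. The `most_similar` helper is illustrative only, not the class's actual API:

```python
import numpy as np

def most_similar(embeddings, seed_item, n=3):
    """Indices of the n items most cosine-similar to seed_item."""
    E = np.asarray(embeddings, dtype=float)
    unit = E / np.linalg.norm(E, axis=1, keepdims=True)  # row-normalize
    scores = unit @ unit[seed_item]                      # cosine similarity to the seed
    order = np.argsort(-scores)                          # best first
    return [int(i) for i in order if i != seed_item][:n]

emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(most_similar(emb, seed_item=0, n=1))   # [1]
```

The ensemble step above then simply averages the content and collaborative score columns before re-sorting.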
## Lord of the Rings, Fellowship of the Ring
### Collaborative Recommendations
```
get_detailed_recs(lotr, collaborative_embeddings, '../../data/collaborative_recs_lotr.csv')
```
### Content Recommendations
```
get_detailed_recs(lotr, content_embeddings, '../../data/content_recs_lotr.csv')
```
### Averaged Ensemble Recommendations
```
get_ensemble_recs(lotr, content_embeddings, collaborative_embeddings, '../../data/ensemble_recs_lotr.csv')
```
## Primer
### Collaborative Recommendations
```
get_detailed_recs(primer, collaborative_embeddings, '../../data/collaborative_recs_primer.csv')
```
### Content Recommendations
```
get_detailed_recs(primer, content_embeddings, '../../data/content_recs_primer.csv')
```
### Averaged Ensemble Recommendations
```
get_ensemble_recs(primer, content_embeddings, collaborative_embeddings, '../../data/ensemble_recs_primer.csv')
```
| github_jupyter |
```
## Name: Chandni Patel
## ID: A20455322
## CS 512 - Fall 2020
## Extract Chessboard Features
import numpy as np
import cv2
class Extract_Features:
#initializing the image processor
def __init__(self, img):
self.H=7
self.W=7
self.TITLE = 'Extract Features'
self.support_keys = {ord('h'),ord('i'),ord('w'),ord('f')}
self.img_h()
self.in_img = cv2.imread(img)
self.gray_img = cv2.cvtColor(self.in_img, cv2.COLOR_BGR2GRAY)
self.curr_img = np.copy(self.in_img)
self.points_2D_chess = []
self.points_2D_click = []
self.get_preview()
#showing the image and waiting for key press
def get_preview(self):
cv2.imshow(self.TITLE, self.curr_img)
cv2.setMouseCallback(self.TITLE, self.clickevent)
while True:
k = cv2.waitKey(10) & 0xFF
if k == 27:
self.exit()
break
if k in self.support_keys:
self.command_key(k)
#controlling all support keys
def command_key(self, k):
if k == ord('h'):
self.img_h()
if k == ord('i'):
self.img_i()
if k == ord('w'):
self.img_w()
if k == ord('f'):
self.points_2D_chess = []
self.img_f()
#exit
def exit(self):
cv2.destroyAllWindows()
#display a short description of the program, command line arguments and supported keys
def img_h(self):
print("\nSupported keys are as below:")
print("Left mouse click -> Mark feature point")
print("f -> Extract Chessboard feature points")
print("h -> Display help")
print("i -> Reload the original image")
print("w -> Save the current image and features extracted")
print("ESC -> Exit\n")
#reload the original images
def img_i(self):
self.points_2D_chess = []
self.points_2D_click = []
self.curr_img = np.copy(self.in_img)
cv2.imshow(self.TITLE, self.curr_img)
print('\n***** REFRESH *****\n')
#save the current processed images
def img_w(self):
H = self.H
W = self.W
# 3D World points: (0,0,0), (1,0,0), (2,0,0), ... , (6,6,0)
points_3D = np.zeros((H*W,3), np.float32)
points_3D[:,:2] = np.mgrid[0:H,0:W].T.reshape(-1,2)
for i in range(len(points_3D)):
points_3D[i][0] *= 50 #50mm
points_3D[i][1] *= 50 #50mm
points_3D[i][2] = 1 #z=1
if (len(self.points_2D_chess) == 1):
points = self.points_2D_chess[0]
f = open('points_3Dto2D.txt',"w+")
for i in range(len(points)):
X = str(points_3D[i][0])
Y = str(points_3D[i][1])
Z = str(points_3D[i][2])
x = str(points[i][0][0])
y = str(points[i][0][1])
line = X + " " + Y + " " + Z + " " + x + " " + y
if (i != 0):
line = "\n" + line
f.write(line)
f.close()
if (len(self.points_2D_click) > 0):
points = self.points_2D_click
f = open('marked_2D.txt',"w+")
for i in range(len(points)):
x = str(points[i][0])
y = str(points[i][1])
line = x + " " + y
if (i != 0):
line = "\n" + line
f.write(line)
f.close()
cv2.imwrite('out.jpg', self.curr_img)
print('\n***** SAVED *****\n')
def img_f(self):
# Find the chess board corners
H = self.H
W = self.W
patternfound, corners = cv2.findChessboardCorners(self.gray_img, (H,W), None)
if patternfound == True:
final_corners = cv2.cornerSubPix(self.gray_img, corners, (11,11), (-1,-1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
self.points_2D_chess.append(final_corners)
self.curr_img = cv2.drawChessboardCorners(self.curr_img, (H,W), final_corners, patternfound)
cv2.imshow(self.TITLE, self.curr_img)
print(len(self.points_2D_chess[0]))
else:
print(0)
def clickevent(self, event, x, y, flags, params):
# on left mouse click
if event == cv2.EVENT_LBUTTONDOWN:
print(x, ' ', y)
self.points_2D_click.append((x,y))
self.curr_img = cv2.circle(self.curr_img, (x,y), radius = 10, color = (0, 255, 0), thickness = 2)
cv2.imshow(self.TITLE, self.curr_img)
x = Extract_Features('chessboard.jpg')
```
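The 3D world-point grid that `img_w` builds can be sanity-checked in isolation. The sketch below repeats the same construction outside the class (50 mm chessboard squares, corners in the z = 1 plane), using vectorized scaling in place of the per-element loop:

```python
import numpy as np

H, W = 7, 7
# 3D world points: (0,0,1), (50,0,1), ... , (300,300,1) in mm
points_3D = np.zeros((H * W, 3), np.float32)
points_3D[:, :2] = np.mgrid[0:H, 0:W].T.reshape(-1, 2)
points_3D[:, :2] *= 50   # 50 mm between chessboard corners
points_3D[:, 2] = 1      # corners lie in the z = 1 plane
print(points_3D.shape)   # (49, 3)
print(points_3D[1])      # [50.  0.  1.]
```

These are the world coordinates paired with the `cornerSubPix` image points when writing `points_3Dto2D.txt`.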
| github_jupyter |
<H1> Symbolic Computation: The Pitfalls </H1>
This collection of notebooks is mostly numerical, with not a lot of exact or symbolic computation. Why not? And, for that matter, why is numerical computing (even with all the unexpected behaviour of floating-point arithmetic) so much more popular than symbolic or exact computing?
This section explores symbolic computation and its pitfalls. We do so from the point of view of experience and with some authority: we have used symbolic computation (usually in Maple, but also in other symbolic languages) for years (decades!) and know it and its benefits well. <b>Caveat: We do not know SymPy so well, and so if we say that SymPy <i>can't</i> do something, we may well be wrong.</b>
One of RMC's earliest co-authors, Honglin Ye, put it well when he suggested that not everyone needs numerical methods but that <i>everyone</i> could use symbolic computation.
Wait. Isn't that contradictory? If everyone could use it, why aren't they?
There are, we think, a few main obstacles.
<OL>
<LI> Symbolic computation systems are hard to learn how to use well, because there's a lot to learn (indeed, you kind of have to know the math first, too). Look at the <A HREF="https://docs.sympy.org/latest/tutorial/index.html">SymPy Tutorial</A> for example. It has ten sections, one labeled "Gotchas". The SAGEMATH system, which also works with Python, is both more powerful and more complicated: <A HREF="https://doc.sagemath.org/html/en/tutorial/">See the SAGEMATH Tutorial </A> to get started there.</LI>
<LI> Some mathematical problems are inherently too expensive to solve in human lifetimes, even with today's computers, and people unfairly blame symbolic computation systems for this.</LI>
<LI> Even if you can solve a problem exactly, with extra effort, that effort might be wasted because the approximate answers are <i>also</i> the "exact" answers to similar problems, and those similar problems might be just as good a model of
whatever system you were trying to understand. This is especially true if the data is only known approximately. </LI>
<LI> "Symbolic Computation" and "Computer Algebra" are related terms---about as close as "Numerical Analysis" and "Computational Science" if that comparison means anything---but the differences are remarkably important, because what gets <i>implemented</i> is usually a Computer Algebra system, whereas what people actually <i>want to use</i> is a symbolic computation system. We'll show you what that means.</LI>
<LI> Symbolic computation systems are hard to implement well. The major systems (Maple, Mathematica, and Matlab) charge money for their products, and get what they ask for; this is because their systems are better than the free ones in many respects, because they have invested significant programmer time to address the inherent difficulties. Free systems, such as SymPy, will do the easy things for you; and we will see that they can be useful. But in reality there's no comparison (although we admit that the SAGEMATH people may well disagree with our opinion).</LI>
</OL>
All that said, symbolic computation <i>can</i> be extremely useful (and interesting), and is sometimes worth all the bother. Let's look first at what Python and SymPy can do. Later we'll look at what the difficulties are.
```
n = 100
p = 1
for i in range(n):
p = p*(i+1)
print( n, ' factorial is ', p)
print( 'The floating point value of p is ', 1.0*p )
```
The first thing we see is that Python has, built-in, arbitrary precision integer arithmetic. Yay?
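As a cross-check, the standard library computes the same value directly; `math.factorial` is not used in the notebook itself:

```python
import math

# Same loop as above, cross-checked against the standard library
n, p = 100, 1
for i in range(n):
    p = p * (i + 1)

print(p == math.factorial(n))   # True
print(len(str(p)))              # 158  (100! has 158 digits)
```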
```
n = 720
p = 1
for i in range(n):
p = p*(i+1)
print( n, ' factorial is ', p)
# print( 'The floating point value of p is ', 1.0*p ) # Causes OverflowError
```
Large integers cost more to manipulate---the above number is pretty long. But SymPy will do it if you ask. One thing you might want to do is <i>factor</i> those numbers. Or one might just want to know the prime factors.
```
from sympy import primefactors
primefactors_n = primefactors(n)
print("The prime factors of {} : {}".format(n, primefactors_n))
primefactors_p = primefactors(p)
print("The prime factors of {} : {}".format(p, primefactors_p))
```
Factoring seems like such a simple problem, and it's so natural to have it implemented in a symbolic computation system. The number 720! is 1747 digits long. Maybe all integers some 1,700 digits long are as easy to factor?
Um, no. See the discussion at https://en.wikipedia.org/wiki/Integer_factorization to get started. Let's take a modest problem and time it here.
```
funny = 3000000000238000000004719
#notfunny = 45000000000000000057990000000000000024761900000000000003506217
from sympy import factorint
import time
start_time = time.time()
factordict = factorint(funny)
print("The prime factors of {} : {}".format(funny, factordict))
print("--- %s seconds ---" % (time.time() - start_time))
```
That factoring of $3000000000238000000004719$ took between 8 and 11 seconds on this machine (different times if executed more than once); on this very same machine, Maple's "ifactor" command succeeded so quickly that it registered no time taken at all, possibly because it was using a very specialized method; factoring integers is an important feature of symbolic computation and Maple's procedures for it have been a subject of serious research for a long time. Maple's help pages cite three important papers, and tell you that it uses an algorithm called the quadratic sieve. Maple can factor $45000000000000000057990000000000000024761900000000000003506217$ into its three prime factors in about 7.5 seconds on this machine; in contrast, after fifty minutes running trying to factor that with factorint as above, RMC had to hard-restart to get Python's attention.
That SymPy takes so long to factor integers, in comparison, suggests that it isn't using the best methods (the documentation says that it switches between three methods: trial division, Pollard rho, and Pollard p-1); and because factoring is such a basic algorithm (an even more basic one is GCD, the Greatest Common Divisor) this will have important knock-on effects.
But factoring, as old an idea as it is, is complicated enough to be used as a basic idea in modern cryptography. The slowness of SymPy is not completely its fault: the problem is hard.
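To make that cryptographic connection concrete, here is a toy RSA sketch in plain Python. The primes are deliberately tiny and purely illustrative (real RSA uses primes hundreds of digits long); the point is only that decryption requires knowing the factors of $n$.

```python
# Toy RSA sketch: security rests on the difficulty of factoring n = p*q.
# The primes here are tiny and purely illustrative.
p, q = 1009, 1013          # two small primes
n = p * q                  # public modulus; factoring n reveals p and q
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(recovered)  # 42
```

Anyone who can factor $n$ recovers $p$ and $q$, hence $\phi$ and $d$; with 1009 and 1013 that takes microseconds, but with 300-digit primes it is (as far as anyone knows) infeasible.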
Let's move on to computing with functions. As previously stated, most supposedly "symbolic" systems are really "algebra" systems: this means that they work well with polynomials (even multivariate polynomials). A polynomial considered as an algebraic object is isomorphic to a polynomial considered as a function---but the difference in viewpoint can alter the affordances. An "affordance" is a word meaning "something can happen with it": for instance, you can pick out a lowest-degree term; or you can add it to another polynomial; or you can square it; and so on. As a function, you can evaluate it at a particular value for the symbols (variables).
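The two viewpoints can be contrasted in a small numpy-only sketch (numpy rather than SymPy, just to keep it lightweight): coefficient manipulation is the algebraic affordance, evaluation is the functional one.

```python
import numpy.polynomial.polynomial as P

# Algebraic view: a polynomial is its coefficient list (lowest degree first).
# Here c represents -1 + x + x**3.
c = [-1.0, 1.0, 0.0, 1.0]
lowest_term = c[0]              # pick out the constant (lowest-degree) term
squared = P.polymul(c, c)       # square it: a degree-6 coefficient list

# Functional view: the same object evaluated at a point.
value = P.polyval(2.0, c)       # -1 + 2 + 8 = 9
print(len(squared), value)      # 7 9.0
```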
```
from sympy import *
x = symbols('x')
solveset(Eq(x**2, 3), x)
solveset(Eq(x**3+x-1, 0), x)
start_time = time.time()
#solveset(Eq(x**4+x-1, 0), x) # Interrupted after about two hours: the code did not succeed
print("--- %s seconds ---" % (time.time() - start_time))
```
In those two hours, RMC went and had his dinner; then downloaded <A HREF="https://www.tandfonline.com/doi/pdf/10.1080/00029890.2007.11920389">a paper by Dave Auckly from the American Mathematical Monthly 2007 </A> which talks about solving the quartic with a pencil (an algebraic geometer's pencil!), read it, and solved the problem by hand, including solving the resolvent cubic by hand, which he already knew how to do. And got it right, too. So there.
In contrast, Maple (nearly instantaneously) returns---if you force it to by saying you want the explicit solution---the answer
$$
\frac{\sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{12}+\frac{\mathrm{I} \sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}+12 \sqrt{6}\, \left(108+12 \sqrt{849}\right)^{\frac{1}{3}}-48 \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}}}{12}
,
\frac{\sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{12}-\frac{\mathrm{I} \sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}+12 \sqrt{6}\, \left(108+12 \sqrt{849}\right)^{\frac{1}{3}}-48 \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}}}{12}
,
-\frac{\sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{12}+\frac{\sqrt{6}\, \sqrt{\frac{-\left(108+12 \sqrt{849}\right)^{\frac{2}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}+12 \sqrt{6}\, \left(108+12 \sqrt{849}\right)^{\frac{1}{3}}+48 \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}}}{12}
,
-\frac{\sqrt{6}\, \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{12}-\frac{\sqrt{6}\, \sqrt{\frac{-\left(108+12 \sqrt{849}\right)^{\frac{2}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}+12 \sqrt{6}\, \left(108+12 \sqrt{849}\right)^{\frac{1}{3}}+48 \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}} \sqrt{\frac{\left(108+12 \sqrt{849}\right)^{\frac{2}{3}}-48}{\left(108+12 \sqrt{849}\right)^{\frac{1}{3}}}}}}}{12}
$$
This is an example of what Velvel Kahan calls a "wallpaper expression." He also famously said, "Have you ever asked a computer algebra system a question, and then, as the screensful of answer whizzed past your eyes, said 'I wish I hadn't asked?'"
The use of that exact answer (quickly obtained or not) is questionable. Then, of course, the Abel-Ruffini theorem says that there is <i>no</i> general formula for solving polynomials of degree $5$ or higher <i>in terms of radicals</i>. For degree $5$ polynomials, there <i>is</i> a solution in terms of elliptic functions; again, it's complicated enough that it's of questionable use. Then there is Galois theory which describes the algebraic structures of polynomials. See the interesting historical essay by Nick Trefethen on <A HREF="https://people.maths.ox.ac.uk/trefethen/galois.pdf"> What we learned from Galois </A>.
The lesson here is that even when you <i>can</i> solve something exactly, maybe you shouldn't.
There are some interesting things you can do with univariate polynomials of high degree, including with the algebraic numbers that are their roots. But computation with them isn't so easy. SymPy actually has some quite advanced features for polynomials, including multivariate polynomials.
<H3> Symbolic computation with functions </H3>
Let's try some calculus-like things.
```
y = symbols('y')
solveset(Eq(exp(y), x), y)
```
RMC <b>really</b> doesn't like that "solution"! It has separated the real and imaginary parts without needing to. A perfectly good answer would be $\ln_k(x)$, which looks a lot simpler.
$\ln_k(z)$, which might not look familiar to you, means $\ln(z) + 2\pi i k$. Also, SymPy has chosen to emulate Maple and use I for the square root of $-1$, which made a kind of sense in the 1980's when Maple chose to do it that way, but given all the fonts we have nowadays that doesn't seem sensible at all.
Fine. We will live with it. The solution is not actually <i>wrong</i>.
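In fact the multivaluedness is easy to check numerically with the standard library: exponentiating any branch $\ln_k(z) = \ln(z) + 2\pi i k$ recovers $z$, because $\exp$ has period $2\pi i$.

```python
import cmath

z = 2 + 3j
principal = cmath.log(z)
for k in range(-2, 3):
    branch = principal + 2j * cmath.pi * k   # ln_k(z) = ln(z) + 2*pi*i*k
    # exponentiating any branch recovers z, since exp has period 2*pi*i
    assert abs(cmath.exp(branch) - z) < 1e-9
print("all branches exponentiate back to", z)
```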
```
solveset(Eq( y*exp(y), x), y)
```
Oh, that's disappointing. See <A HREF="https://en.wikipedia.org/wiki/Lambert_W_function"> the Wikipedia article on Lambert W</A> to see what that should have been.
```
solve( Eq( y*exp(y), x ), y )
```
That's better, but---like the logarithm above---there should be multiple branches.
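SymPy names the answer `LambertW(x)`; numerically, the principal branch is just a root-find of $ye^y = x$. Here is a minimal Newton-iteration sketch (our own, principal branch only; the other branches need different starting points, and the starting guess below is a crude assumption that happens to work for the values tested):

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch of Lambert W via Newton's method on w*exp(w) - x = 0."""
    w = math.log(x + 1.0)          # crude but adequate starting guess for x >= 0
    for _ in range(50):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w0(1.0)
print(w)   # the omega constant, about 0.5671
```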
<H4> Integrals, and the difference between Computer Algebra and Symbolic Computation</H4>
We'll start with a nasty example. You can find nice examples of SymPy and integration in many places, so we will assume that you have seen instances of computer implementations of the fundamental theorem of calculus: to find areas under curves by using antiderivatives. The nasty integral that we will use is
$$
f(x) = \frac{3}{5+4\cos x}
$$
and we will try to integrate this (infinitely smooth) function on various intervals in the real axis. Since $\cos x$ is never larger than $1$ for real $x$, the denominator of that function is always positive, so the function is always positive. Therefore the integral of $f(x)$ from any $a$ to any $b > a$ will also be positive. Positive functions have positive area underneath them, end of story.
```
from matplotlib import pyplot as plt
import numpy as np
n = 2021
xi = np.linspace(-2*np.pi, 2*np.pi, n)
yi = np.zeros(n)
for i in range(n):
yi[i] = 3.0/(5.0+4.0*np.cos(xi[i]))
# Trapezoidal rule is spectrally accurate for periodic integrands
area = np.trapz(yi, x=xi)
print( 'Area under the curve from 0 to 2pi is approximately ', area/2/np.pi, ' times pi ')
plt.plot(xi,yi,'k,')
plt.axis([-2*np.pi, 2*np.pi, 0, 4])
plt.show()
f = 3/(5+4*cos(x))
F = integrate(f, x )
print( 'The integral from SymPy is ', F )
FTOC = F.subs(x,2*pi) - F.subs(x,0)
print( 'The area under the curve from 0 to 2pi is positive, not ', FTOC )
defint = integrate( f, (x,0,2*pi))
print( 'No, the area is positive, not ', defint )
```
As of this writing, many computer algebra systems (not just SymPy) are broken by this example. Maple's indefinite integral returns something that looks a little nicer, namely
$$
\int f(x)\,dx = 2 \arctan \! \left(\frac{\tan \! \left(\frac{x}{2}\right)}{3}\right)
$$
but when you evaluate that at $x=2\pi$ you get the same value that you do at $x=0$, because the function is <i>periodic</i>. So the difference is, as with SymPy's answer, wrongly zero.
There is something very important happening in that expression, though, which also happens in that more complex expression returned by SymPy: there is a jump discontinuity at $x=\pi$. The computer algebra system has returned a <i>discontinuous</i> function as an antiderivative of a <i>continuous</i> function.
Anyone remember the tl;dr of the Fundamental Theorem of Calculus? Likely not. Even we have to look it up before we teach it, to be sure we have the fine details right. The basic idea is, though, that integration <i>smooths</i> things out: integrate a continuous function, you get a <i>continuously differentiable</i> function, which is smoother. Jump discontinuities spuriously introduced are <i>right out</i>.
So, what the computer algebra system is doing there <i>is not integration</i>. Anyone get their knuckles rapped for not adding the "constant of integration" when you integrated on a test? Lost a mark or two, maybe?
Turns out that's what's going on here. By adding <i>different constants</i> in <i>different intervals</i> to this purported antiderivative, we can find a smooth antiderivative of $f(x)$. The answer returned by SymPy (and Maple) is "correct" from the point of view of differential algebra, where constants are things that differentiate to zero and don't have "values" as such.
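To see the patching concretely, here is a numerical sketch (our own construction, not CAS output): take the discontinuous antiderivative $2\arctan(\tan(x/2)/3)$ and add $2\pi$ for each branch of $\tan(x/2)$ crossed. The jumps cancel, and the fundamental theorem gives the right (positive) area again.

```python
import math

def F_jump(x):
    """The compact but discontinuous antiderivative a CAS returns."""
    return 2.0 * math.atan(math.tan(x / 2.0) / 3.0)

def F_cont(x):
    """Piece together a continuous antiderivative: add 2*pi per branch of tan(x/2)."""
    k = math.floor((x + math.pi) / (2.0 * math.pi))  # which branch we are on
    return F_jump(x) + 2.0 * math.pi * k

# FTOC now gives the true positive area 2*pi over one period
print(F_cont(2.0 * math.pi) - F_cont(0.0))   # 6.283185... = 2*pi
```

(The patched function should not be evaluated exactly at the odd multiples of $\pi$, where $\tan(x/2)$ blows up; the one-sided limits agree there.)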
Many people who implement computer algebra systems (even some of our friends at Maple) argue that they are right and we are wrong, and don't see the point. But they are wrong, and one day they'll get it. Now, they do have a fix-up in their <i>definite</i> integration code: if you ask Maple int( f, x=0..2\*Pi) you will get the correct answer $2\pi$. But you have to ask the right way, by using the syntax for definite integrals (as you see above, SymPy does not give the right answer that way).
Matlab gets it. Matlab's Symbolic Toolbox works quite hard to get continuous antiderivatives. So we have confidence that Maple will one day get it. But Wolfram Alpha gets a discontinuous antiderivative, too, at this time of writing, so there's only a <i>little</i> pressure from the competition.
These facts have been known for quite a while now. See <A HREF="https://www.jstor.org/stable/pdf/2690852.pdf">The Importance of Being Continuous</A> by David Jeffrey, a paper that was written in 1994 (long predating Python or SymPy).
The disagreement amongst mathematicians---algebra versus analysis---has been going on for much longer, and goes at least back to Cauchy. We are on Cauchy's side, here. But there are lots of people who just want formulas, and don't care if they are discontinuous.
RMC retweeted something apropos this morning: Prof Michael Kinyon said "I used to think it was weird when people disagreed with me, but now I understand that some people just enjoy the feeling of being wrong."
```
import os
import random
import catboost
import numpy as np
import pandas as pd
import xarray
from sklearn.metrics import roc_auc_score
import warnings
warnings.filterwarnings('ignore')
SEED = 42
VAL_MONTHS = 6
ITERATIONS = 1000
DATA_PATH = '../data'
MODELS_PATH = './'
def reseed(seed=SEED):
np.random.seed(seed)
random.seed(seed)
def evaluate(y_true, y_pred):
gt = np.zeros_like(y_pred, dtype=np.int8)
gt[np.arange(y_true.shape[0]), y_true - 1] = 1
result = {'roc_auc_micro': roc_auc_score(gt, y_pred, average='micro')}
for ft in range(1, 12):
gt = (y_true == ft)
if gt.max() == gt.min():
roc_auc = 0
else:
roc_auc = roc_auc_score(gt, y_pred[:, ft - 1])
result[f'roc_auc_{ft}'] = roc_auc
return result
def preprocess(df):
df['longitude'] = df['longitude'].astype(np.float32)
df['latitude'] = df['latitude'].astype(np.float32)
df['weekday'] = df.date.dt.weekday.astype(np.int8)
df['month'] = df.date.dt.month.astype(np.int8)
df['ym'] = (df.date.dt.month + (df.date.dt.year - 2000) * 12).astype(np.int16)
df['fire_type'] = df.fire_type.astype(np.uint8)
df.set_index('fire_id', inplace=True)
df.drop(['fire_type_name'], axis=1, inplace=True)
def load_ncep_var(var, press_level):
result = []
for year in range(2012, 2020):
dataset_filename = os.path.join(DATA_PATH, 'ncep', f'{var}.{year}.nc')
ds = xarray.open_dataset(dataset_filename)
ds = ds.sel(drop=True, level=press_level)[var]
ds = ds[:, (ds.lat >= 15 * 2.5 - 0.1) & (ds.lat <= 29 * 2.5 + 0.1),
(ds.lon >= 6 * 2.5 - 0.1) & (ds.lon <= 71 * 2.5 + 0.1)]
result.append(ds)
ds = xarray.merge(result)
df = ds.to_dataframe()[[var]].reset_index()
df = df.merge(ds.rolling(time=7).mean().to_dataframe()[[var]].reset_index(),
on=['lon', 'lat', 'time'], suffixes=('', '_7d'), how='left')
df = df.merge(ds.rolling(time=14).mean().to_dataframe()[[var]].reset_index(),
on=['lon', 'lat', 'time'], suffixes=('', '_14d'), how='left')
df = df.merge(ds.rolling(time=30).mean().to_dataframe()[[var]].reset_index(),
on=['lon', 'lat', 'time'], suffixes=('', '_30d'), how='left')
df['lat'] = np.round(df.lat / 2.5).astype(np.int8)
df['lon'] = np.round(df.lon / 2.5).astype(np.int8)
return df.copy()
def add_ncep_features(df):
df['lon'] = np.round(df.longitude / 2.5).astype(np.int8)
df['lat'] = np.round(df.latitude / 2.5).astype(np.int8)
for var, press_level in (('air', 1000), ('uwnd', 1000), ('rhum', 1000)):
var_df = load_ncep_var(var, press_level)
mdf = df.reset_index().merge(var_df, left_on=['lon', 'lat', 'date'], right_on=['lon', 'lat', 'time'],
how='left', ).set_index('fire_id')
for suffix in ('', '_7d', '_14d', '_30d'):
df[var + suffix] = mdf[var + suffix]
df.drop(['lon', 'lat'], axis=1, inplace=True)
def prepare_dataset(filename):
df = pd.read_csv(filename, parse_dates=['date'])
preprocess(df)
add_ncep_features(df)
return df
def train_model(df_train):
last_month = df_train.ym.max()
train = df_train[df_train.ym <= last_month - VAL_MONTHS]
val = df_train[df_train.ym > last_month - VAL_MONTHS]
X_train = train.drop(['fire_type', 'ym', 'date'], axis=1)
Y_train = train.fire_type
X_val = val.drop(['fire_type', 'ym', 'date'], axis=1)
Y_val = val.fire_type
clf = catboost.CatBoostClassifier(loss_function='MultiClass',
verbose=10, random_state=SEED, iterations=ITERATIONS)
clf.fit(X_train, Y_train, eval_set=(X_val, Y_val))
pred_train = clf.predict_proba(X_train)
pred_val = clf.predict_proba(X_val)
train_scores = evaluate(Y_train, pred_train)
val_scores = evaluate(Y_val, pred_val)
print("Train scores:")
for k, v in train_scores.items():
print("%s\t%f" % (k, v))
print("Validation scores:")
for k, v in val_scores.items():
print("%s\t%f" % (k, v))
clf.save_model(os.path.join(MODELS_PATH, 'catboost.cbm'))
reseed()
df_train = prepare_dataset(os.path.join(DATA_PATH, 'wildfires_train.csv'))
train_model(df_train)
```
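The one-hot construction inside `evaluate` above is easy to check in isolation. A standalone numpy sketch with made-up labels and scores:

```python
import numpy as np

# Four samples, 11 fire-type classes; labels are 1-based, as in `evaluate`.
y_true = np.array([1, 3, 11, 3])
y_pred = np.random.rand(4, 11)          # stand-in for predict_proba output

gt = np.zeros_like(y_pred, dtype=np.int8)
gt[np.arange(y_true.shape[0]), y_true - 1] = 1   # label k marks column k - 1

print(gt.sum(axis=1))   # [1 1 1 1] -- exactly one hot entry per row
```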
# Confined Aquifer Test
**This test is taken from examples presented in the MLU tutorial.**
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from ttim import *
```
The test is conducted in a fully confined two-aquifer system. Both the pumping well and the observation piezometer are screened in the second aquifer.
Set basic parameters:
```
Q = 82.08 #constant discharge in m^3/d
zt0 = -46 #top boundary of upper aquifer in m
zb0 = -49 #bottom boundary of upper aquifer in m
zt1 = -52 #top boundary of lower aquifer in m
zb1 = -55 #bottom boundary of lower aquifer in m
rw = 0.05 #well radius in m
```
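Before fitting with TTim, it can help to recall the closed-form Theis (1935) drawdown for a single confined aquifer, which is what the one-layer model approximates. A numpy-only sketch; the transmissivity and storativity below are illustrative guesses built from the starting values used later (kaq = 10 m/d, Saq = 1e-4 1/m, 3 m thickness), not fitted results:

```python
import numpy as np

def well_function(u, nterms=12):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)**(n+1) u**n / (n*n!)."""
    gamma = 0.5772156649015329
    w = -gamma - np.log(u) + u          # the n = 1 term of the series is just u
    sign, fact = -1.0, 1.0
    for n in range(2, nterms):
        fact *= n
        w += sign * u**n / (n * fact)
        sign = -sign
    return w

T = 10.0 * 3.0      # illustrative transmissivity: kaq * aquifer thickness [m^2/d]
S = 1e-4 * 3.0      # illustrative storativity: Saq * thickness [-]
Q, r, t = 82.08, 46.0, 0.1
u = r**2 * S / (4.0 * T * t)
s = Q / (4.0 * np.pi * T) * well_function(u)   # drawdown at the far well after 0.1 d
print(round(s, 3))
```

The series form of W(u) converges quickly for the small u values typical of pumping tests; for large u a different expansion would be needed.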
Load data of two observation wells:
```
data1 = np.loadtxt('data/schroth_obs1.txt', skiprows = 1)
t1 = data1[:, 0]
h1 = data1[:, 1]
r1 = 0
data2 = np.loadtxt('data/schroth_obs2.txt', skiprows = 1)
t2 = data2[:, 0]
h2 = data2[:, 1]
r2 = 46 #distance between observation well2 and pumping well
```
Create single layer model (overlying aquifer and aquitard are excluded):
```
ml_0 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, tmin=1e-4, tmax=1)
w_0 = Well(ml_0, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)])
ml_0.solve()
ca_0 = Calibrate(ml_0)
ca_0.set_parameter(name='kaq0', initial=10)
ca_0.set_parameter(name='Saq0', initial=1e-4)
ca_0.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca_0.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca_0.fit(report=True)
display(ca_0.parameters)
print('RMSE:', ca_0.rmse())
hm1_0 = ml_0.head(r1, 0, t1)
hm2_0 = ml_0.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_0[-1], label='ttim1')
plt.semilogx(t2, hm2_0[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one1.eps');
```
To improve the model's performance, rc and res are added:
```
ml_1 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, tmin=1e-4, tmax=1)
w_1 = Well(ml_1, xw=0, yw=0, rw=rw, rc=0, res=5, tsandQ = [(0, Q), (1e+08, 0)])
ml_1.solve()
ca_1 = Calibrate(ml_1)
ca_1.set_parameter(name='kaq0', initial=10)
ca_1.set_parameter(name='Saq0', initial=1e-4)
ca_1.set_parameter_by_reference(name='rc', parameter=w_1.rc[:], initial=0.2)
ca_1.set_parameter_by_reference(name='res', parameter=w_1.res[:], initial=3)
ca_1.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca_1.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca_1.fit(report=True)
display(ca_1.parameters)
print('RMSE:', ca_1.rmse())
hm1_1 = ml_1.head(r1, 0, t1)
hm2_1 = ml_1.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_1[-1], label='ttim1')
plt.semilogx(t2, hm2_1[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one2.eps');
```
Create three-layer conceptual model:
```
ml_2 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\
Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)
w_2 = Well(ml_2, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)], layers=1)
ml_2.solve()
ca_2 = Calibrate(ml_2)
ca_2.set_parameter(name= 'kaq0', initial=20, pmin=0)
ca_2.set_parameter(name='kaq1', initial=1, pmin=0)
ca_2.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_2.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_2.set_parameter_by_reference(name='Sll', parameter=ml_2.aq.Sll[:],\
initial=1e-4, pmin=0)
ca_2.set_parameter(name='c1', initial=100, pmin=0)
ca_2.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_2.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_2.fit(report=True)
display(ca_2.parameters)
print('RMSE:',ca_2.rmse())
hm1_2 = ml_2.head(r1, 0, t1)
hm2_2 = ml_2.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_2[-1], label='ttim1')
plt.semilogx(t2, hm2_2[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three1.eps');
```
Try adding res & rc:
```
ml_3 = ModelMaq(kaq=[19, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[4e-4, 1e-5],\
Sll=1e-4, topboundary='conf', tmin=1e-4, tmax=0.5)
w_3 = Well(ml_3, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \
layers=1)
ml_3.solve()
ca_3 = Calibrate(ml_3)
ca_3.set_parameter(name= 'kaq0', initial=20, pmin=0)
ca_3.set_parameter(name='kaq1', initial=1, pmin=0)
ca_3.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_3.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_3.set_parameter_by_reference(name='Sll', parameter=ml_3.aq.Sll[:],\
initial=1e-4, pmin=0)
ca_3.set_parameter(name='c1', initial=100, pmin=0)
ca_3.set_parameter_by_reference(name='res', parameter=w_3.res[:], initial=0, pmin=0)
ca_3.set_parameter_by_reference(name='rc', parameter=w_3.rc[:], initial=0.2, pmin=0)
ca_3.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_3.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_3.fit(report=True)
display(ca_3.parameters)
print('RMSE:', ca_3.rmse())
hm1_3 = ml_3.head(r1, 0, t1)
hm2_3 = ml_3.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_3[-1], label='ttim1')
plt.semilogx(t2, hm2_3[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three2.eps');
```
Calibrate with the upper-aquifer parameters fixed at their fitted values:
```
ml_4 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\
Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)
w_4 = Well(ml_4, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \
layers=1)
ml_4.solve()
```
The optimized value of res is very close to its lower bound, so res has little effect on the performance of the model; res is therefore removed in this calibration.
```
ca_4 = Calibrate(ml_4)
ca_4.set_parameter(name='kaq1', initial=1, pmin=0)
ca_4.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_4.set_parameter(name='c1', initial=100, pmin=0)
ca_4.set_parameter_by_reference(name='rc', parameter=w_4.rc[:], initial=0.2, pmin=0)
ca_4.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_4.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_4.fit(report=True)
display(ca_4.parameters)
print('RMSE:', ca_4.rmse())
hm1_4 = ml_4.head(r1, 0, t1)
hm2_4 = ml_4.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_4[-1], label='ttim1')
plt.semilogx(t2, hm2_4[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three3.eps');
```
## Summary of values simulated by MLU
Results of the calibrations done with the three-layer ttim model are presented below, together with the values reported by MLU.
```
t = pd.DataFrame(columns=['k0[m/d]','k1[m/d]','Ss0[1/m]','Ss1[1/m]','Sll[1/m]','c[d]',\
'res', 'rc'], \
index=['MLU', 'MLU-fixed k1','ttim','ttim-rc','ttim-fixed upper'])
t.loc['ttim-rc'] = ca_3.parameters['optimal'].values
t.iloc[2,0:6] = ca_2.parameters['optimal'].values
t.iloc[4,5] = ca_4.parameters['optimal'].values[2]
t.iloc[4,7] = ca_4.parameters['optimal'].values[3]
t.iloc[4,0] = 17.28
t.iloc[4,1] = ca_4.parameters['optimal'].values[0]
t.iloc[4,2] = 1.2e-4
t.iloc[4,3] = ca_4.parameters['optimal'].values[1]
t.iloc[4,4] = 3e-5
t.iloc[0, 0:6] = [17.424, 6.027e-05, 1.747, 6.473e-06, 3.997e-05, 216]
t.iloc[1, 0:6] = [2.020e-04, 9.110e-04, 3.456, 6.214e-05, 7.286e-05, 453.5]
t['RMSE'] = [0.023452, 0.162596, ca_2.rmse(), ca_3.rmse(), ca_4.rmse()]
t
```
# Road Following - Live demo
In this notebook, we will use the model we trained to move the JetBot smoothly along the track.
### Load Trained Model
We will assume that you have already downloaded ``best_steering_model_xy.pth`` to your workstation as instructed in the "train_model.ipynb" notebook. Now, you should upload the model file to the JetBot, into this notebook's directory. Once that's finished, there should be a file named ``best_steering_model_xy.pth`` in this notebook's directory.
> Please make sure the file has uploaded fully before calling the next cell
Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
```
import torchvision
import torch
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
```
Next, load the trained weights from the ``best_steering_model_xy.pth`` file that you uploaded.
```
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
```
Currently, the model weights are located in CPU memory; execute the code below to transfer them to the GPU device.
```
device = torch.device('cuda')
model = model.to(device)
model = model.eval().half()
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue: the format we trained our model on doesn't exactly match the format of the camera. To fix that, we need to do some preprocessing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
image = PIL.Image.fromarray(image)
image = transforms.functional.to_tensor(image).to(device).half()
image.sub_(mean[:, None, None]).div_(std[:, None, None])
return image[None, ...]
```
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
camera = Camera()
image_widget = ipywidgets.Image()
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot()
```
Now, we will define sliders to control JetBot
> Note: We have initialized the slider values to the best known configuration; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment
1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider``
2. Steering Gain Control (steering_gain_slider): If you see the JetBot is wobbling, reduce ``steering_gain_slider`` until it is smooth
3. Steering Bias Control (steering_bias_slider): If you see the JetBot is biased towards the extreme right or extreme left side of the track, adjust this slider until the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should play with the above-mentioned sliders at lower speed to get smooth JetBot road-following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)
```
Next, let's display some sliders that will let us see what JetBot is thinking. The x and y sliders will display the predicted x, y values.
The steering slider will display our estimated steering value. Please remember, this value isn't the actual angle of the target, but simply a value that is
nearly proportional. When the actual angle is ``0``, this will be zero, and it will increase / decrease with the actual angle.
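The claim above — zero steering when the target is dead ahead, growing with the angle — can be checked directly with `arctan2`, which is what the execution function below uses:

```python
import numpy as np

# x is the horizontal offset of the target, y the (positive) forward distance
assert np.arctan2(0.0, 1.0) == 0.0          # dead ahead -> zero steering
angles = [float(np.arctan2(x, 1.0)) for x in (-0.5, -0.1, 0.0, 0.1, 0.5)]
print(angles == sorted(angles))              # True: steering grows with offset
```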
```
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
display(ipywidgets.HBox([y_slider, speed_slider]))
display(x_slider, steering_slider)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
def execute(change):
global angle, angle_last
image = change['new']
xy = model(preprocess(image)).detach().float().cpu().numpy().flatten()
x = xy[0]
y = (0.5 - xy[1]) / 2.0
x_slider.value = x
y_slider.value = y
speed_slider.value = speed_gain_slider.value
angle = np.arctan2(x, y)
pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
angle_last = angle
steering_slider.value = pid + steering_bias_slider.value
robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
execute({'new': camera.value})
```
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing.
We accomplish that with the observe function.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and it is on Lego or Track you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
```
camera.observe(execute, names='value')
```
Awesome! If your robot is plugged in it should now be generating new commands with each new camera frame.
You can now place JetBot on Lego or Track you have collected data on and see whether it can follow track.
If you want to stop this behavior, you can detach this callback by executing the code below.
```
import time
camera.unobserve(execute, names='value')
time.sleep(0.1) # add a small sleep to make sure frames have finished processing
robot.stop()
```
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly along the track, following the road!!!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios and the JetBot should get even better :)
Copyright 2020 The dnn-predict-accuracy Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# README
This notebook contains code for training predictors of DNN accuracy.
Contents:
(1) Loading the Small CNN Zoo dataset
(2) Figure 2 of the paper
(3) Examples of training Logit-Linear / GBM / DNN predictors
(4) Transfer of predictors across CNN collections
(5) Various visualizations of CNN collections
Code dependencies:
LightGBM package
```
from __future__ import division
import time
import os
import json
import sys
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
import pandas as pd
import seaborn as sns
from scipy import stats
from tensorflow import keras
from tensorflow.io import gfile
import lightgbm as lgb
DATAFRAME_CONFIG_COLS = [
'config.w_init',
'config.activation',
'config.learning_rate',
'config.init_std',
'config.l2reg',
'config.train_fraction',
'config.dropout']
CATEGORICAL_CONFIG_PARAMS = ['config.w_init', 'config.activation']
CATEGORICAL_CONFIG_PARAMS_PREFIX = ['winit', 'act']
DATAFRAME_METRIC_COLS = [
'test_accuracy',
'test_loss',
'train_accuracy',
'train_loss']
TRAIN_SIZE = 15000
# TODO: modify the following lines
CONFIGS_PATH_BASE = 'path_to_the_file_with_best_configs'
MNIST_OUTDIR = "path_to_files_with_mnist_collection"
FMNIST_OUTDIR = 'path_to_files_with_fmnist_collection'
CIFAR_OUTDIR = 'path_to_files_with_cifar10gs_collection'
SVHN_OUTDIR = 'path_to_files_with_svhngs_collection'
def filter_checkpoints(weights, dataframe,
target='test_accuracy',
stage='final', binarize=True):
"""Take one checkpoint per run and do some pre-processing.
Args:
weights: numpy array of shape (num_runs, num_weights)
dataframe: pandas DataFrame which has num_runs rows. First 4 columns should
contain test_accuracy, test_loss, train_accuracy, train_loss respectively.
target: string, what to use as an output
stage: flag defining which checkpoint out of potentially many we will take
for the run.
binarize: Do we want to binarize the categorical hyperparams?
Returns:
tuple (weights_new, metrics, hyperparams, ckpts), where
weights_new is a numpy array of shape (num_remaining_ckpts, num_weights),
metrics is a numpy array of shape (num_remaining_ckpts, num_metrics) with
num_metric being the length of DATAFRAME_METRIC_COLS,
hyperparams is a pandas DataFrame of num_remaining_ckpts rows and columns
listed in DATAFRAME_CONFIG_COLS.
ckpts is an instance of pandas Index, keeping filenames of the checkpoints
All the num_remaining_ckpts rows correspond to one checkpoint out of each
run we had.
"""
assert target in DATAFRAME_METRIC_COLS, 'unknown target'
ids_to_take = []
# Keep in mind that the rows of the DataFrame were sorted according to ckpt
# Fetch the unit id corresponding to the ckpt of the first row
current_uid = dataframe.axes[0][0].split('/')[-2] # get the unit id
steps = []
for i in range(len(dataframe.axes[0])):
# Fetch the new unit id
ckpt = dataframe.axes[0][i]
parts = ckpt.split('/')
if parts[-2] == current_uid:
steps.append(int(parts[-1].split('-')[-1]))
else:
# We need to process the previous unit
# and choose which ckpt to take
steps_sort = sorted(steps)
target_step = -1
if stage == 'final':
target_step = steps_sort[-1]
elif stage == 'early':
target_step = steps_sort[0]
else: # middle
target_step = steps_sort[int(len(steps) / 2)]
offset = [j for (j, el) in enumerate(steps) if el == target_step][0]
# Take the DataFrame row with the corresponding row id
ids_to_take.append(i - len(steps) + offset)
current_uid = parts[-2]
steps = [int(parts[-1].split('-')[-1])]
# Fetch the hyperparameters of the corresponding checkpoints
hyperparams = dataframe[DATAFRAME_CONFIG_COLS]
hyperparams = hyperparams.iloc[ids_to_take]
if binarize:
# Binarize categorical features
hyperparams = pd.get_dummies(
hyperparams,
columns=CATEGORICAL_CONFIG_PARAMS,
prefix=CATEGORICAL_CONFIG_PARAMS_PREFIX)
else:
# Make the categorical features have pandas type "category"
# Then LGBM can use those as categorical
hyperparams = hyperparams.copy()  # .is_copy is deprecated in pandas; take an explicit copy
for col in CATEGORICAL_CONFIG_PARAMS:
hyperparams[col] = hyperparams[col].astype('category')
# Fetch the file paths of the corresponding checkpoints
ckpts = dataframe.axes[0][ids_to_take]
return (weights[ids_to_take, :],
dataframe[DATAFRAME_METRIC_COLS].values[ids_to_take, :].astype(
np.float32),
hyperparams,
ckpts)
def build_fcn(n_layers, n_hidden, n_outputs, dropout_rate, activation,
w_regularizer, w_init, b_init, last_activation='softmax'):
"""Fully connected deep neural network."""
model = keras.Sequential()
model.add(keras.layers.Flatten())
for _ in range(n_layers):
model.add(
keras.layers.Dense(
n_hidden,
activation=activation,
kernel_regularizer=w_regularizer,
kernel_initializer=w_init,
bias_initializer=b_init))
if dropout_rate > 0.0:
model.add(keras.layers.Dropout(dropout_rate))
if n_layers > 0:
model.add(keras.layers.Dense(n_outputs, activation=last_activation))
else:
model.add(keras.layers.Dense(
n_outputs,
activation='sigmoid',
kernel_regularizer=w_regularizer,
kernel_initializer=w_init,
bias_initializer=b_init))
return model
def extract_summary_features(w, qts=(0, 25, 50, 75, 100)):
"""Extract various statistics from the flat vector w."""
features = np.percentile(w, qts)
features = np.append(features, [np.std(w), np.mean(w)])
return features
def extract_per_layer_features(w, qts=None, layers=(0, 1, 2, 3)):
"""Extract per-layer statistics from the weight vector and concatenate."""
# Indices of the location of biases/kernels in the flattened vector
all_boundaries = {
0: [(0, 16), (16, 160)],
1: [(160, 176), (176, 2480)],
2: [(2480, 2496), (2496, 4800)],
3: [(4800, 4810), (4810, 4970)]}
boundaries = []
for layer in layers:
boundaries += all_boundaries[layer]
if not qts:
features = [extract_summary_features(w[a:b]) for (a, b) in boundaries]
else:
features = [extract_summary_features(w[a:b], qts) for (a, b) in boundaries]
all_features = np.concatenate(features)
return all_features
```
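As a sanity check on the feature layout: `extract_per_layer_features` concatenates 7 statistics (5 percentiles, the standard deviation, and the mean) for each of the 8 (bias, kernel) slices, giving 56 features when all 4 layers are used. A minimal standalone NumPy sketch, with random values standing in for trained weights:

```python
import numpy as np

def extract_summary_features(w, qts=(0, 25, 50, 75, 100)):
    # 5 percentiles followed by the standard deviation and the mean: 7 stats total
    features = np.percentile(w, qts)
    return np.append(features, [np.std(w), np.mean(w)])

# A flattened weight vector of the Small CNN Zoo models has 4970 entries
rng = np.random.default_rng(0)
w = rng.normal(size=4970)

# Per-layer (bias, kernel) boundaries, copied from extract_per_layer_features above
boundaries = [(0, 16), (16, 160), (160, 176), (176, 2480),
              (2480, 2496), (2496, 4800), (4800, 4810), (4810, 4970)]
features = np.concatenate([extract_summary_features(w[a:b]) for a, b in boundaries])
print(features.shape)  # 8 slices x 7 stats = (56,)
```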
# 1. Loading the Small CNN Zoo dataset
The following code loads the dataset (trained weights from `*.npy` files and all the relevant metrics, including accuracy, from `*.csv` files).
```
all_dirs = [MNIST_OUTDIR, FMNIST_OUTDIR, CIFAR_OUTDIR, SVHN_OUTDIR]
weights = {'mnist': None,
'fashion_mnist': None,
'cifar10': None,
'svhn_cropped': None}
metrics = {'mnist': None,
'fashion_mnist': None,
'cifar10': None,
'svhn_cropped': None}
for (dirname, dataname) in zip(
all_dirs, ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']):
print('Loading %s' % dataname)
with gfile.GFile(os.path.join(dirname, "all_weights.npy"), "rb") as f:
# Weights of the trained models
weights[dataname] = np.load(f)
with gfile.GFile(os.path.join(dirname, "all_metrics.csv")) as f:
# pandas DataFrame with metrics
metrics[dataname] = pd.read_csv(f, index_col=0)
```
Next it filters the dataset by keeping only checkpoints corresponding to 18 epochs and discarding runs that resulted in numerical instabilities. Finally, it performs the train / test splits.
```
weights_train = {}
weights_test = {}
configs_train = {}
configs_test = {}
outputs_train = {}
outputs_test = {}
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
# Take one checkpoint per each run
# If using GBM as predictor, set binarize=False
weights_flt, metrics_flt, configs_flt, ckpts = filter_checkpoints(
weights[dataset], metrics[dataset], binarize=True)
# Filter out DNNs with NaNs and Inf in the weights
idx_valid = (np.isfinite(weights_flt).mean(1) == 1.0)
inputs = np.asarray(weights_flt[idx_valid], dtype=np.float32)
outputs = np.asarray(metrics_flt[idx_valid], dtype=np.float32)
configs = configs_flt.iloc[idx_valid]
ckpts = ckpts[idx_valid]
# Shuffle and split the data
random_idx = list(range(inputs.shape[0]))
np.random.shuffle(random_idx)
weights_train[dataset], weights_test[dataset] = (
inputs[random_idx[:TRAIN_SIZE]], inputs[random_idx[TRAIN_SIZE:]])
outputs_train[dataset], outputs_test[dataset] = (
1. * outputs[random_idx[:TRAIN_SIZE]],
1. * outputs[random_idx[TRAIN_SIZE:]])
configs_train[dataset], configs_test[dataset] = (
configs.iloc[random_idx[:TRAIN_SIZE]],
configs.iloc[random_idx[TRAIN_SIZE:]])
```
# 2. Figure 2 of the paper
Next we plot the distribution of CNNs from the 4 collections in the Small CNN Zoo according to their train / test accuracy.
```
plt.figure(figsize = (16, 8))
pic_id = 0
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
pic_id += 1
sp = plt.subplot(2, 4, pic_id)
outputs = outputs_train[dataset]
if dataset == 'mnist':
plt.title('MNIST', fontsize=24)
if dataset == 'fashion_mnist':
plt.title('Fashion MNIST', fontsize=24)
if dataset == 'cifar10':
plt.title('CIFAR10-GS', fontsize=24)
if dataset == 'svhn_cropped':
plt.title('SVHN-GS', fontsize=24)
# 1. test accuracy hist plots
sns.distplot(np.array(outputs[:, 0]), bins=15, kde=False, color='green')
plt.xlim((0.0, 1.0))
sp.axes.get_xaxis().set_ticklabels([])
sp.axes.get_yaxis().set_ticklabels([])
pic_id += 4
sp = plt.subplot(2, 4, pic_id)
# 2. test / train accuracy scatter plots
NUM_POINTS = 1000
random_idx = np.random.permutation(len(outputs))  # shuffling a range object fails in Python 3
plt.plot([0.0, 1.0], [0.0, 1.0], 'r--')
sns.scatterplot(np.array(outputs[random_idx[:NUM_POINTS], 0]), # test acc
np.array(outputs[random_idx[:NUM_POINTS], 2]), # train acc
s=30
)
if pic_id == 5:
plt.ylabel('Train accuracy', fontsize=22)
sp.axes.get_yaxis().set_ticklabels([0.0, 0.2, .4, .6, .8, 1.])
else:
sp.axes.get_yaxis().set_ticklabels([])
plt.xlim((0.0, 1.0))
plt.ylim((0.0, 1.0))
sp.axes.get_xaxis().set_ticks([0.0, 0.2, .4, .6, .8, 1.])
sp.axes.tick_params(axis='both', labelsize=18)
plt.xlabel('Test accuracy', fontsize=22)
pic_id -= 4
plt.tight_layout()
```
# 3. Examples of training Logit-Linear / GBM / DNN predictors
Next we train 3 models on all 4 CNN collections with the best hyperparameter configurations we found during our studies (documented in Table 2 and Section 4 of the paper).
First, we load the best hyperparameter configurations we found.
The file `best_configs.json` contains a list.
Each entry of the list corresponds to a single hyperparameter configuration and consists of:
(1) the name of the CNN collection (mnist / fashion_mnist / cifar10 / svhn),
(2) the predictor type (linear / dnn / lgbm),
(3) the type of inputs (refer to Table 2),
(4) the MSE value you should get when training with these settings,
(5) a dictionary of "parameter name" -> "parameter value" for the given type of predictor.
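For illustration, a hypothetical entry following the five-field layout described above might look like the following (all values here are made up; the real ones live in `best_configs.json`):

```python
import json

# Hypothetical entry: (1) collection, (2) predictor, (3) input type,
# (4) resulting MSE, (5) predictor hyperparameters
entry = [
    "cifar10",
    "lgbm",
    "wstats-perlayer",
    0.0012,
    {"num_leaves": 64, "learning_rate": 0.1},
]
best_configs = json.loads(json.dumps([entry]))  # round-trip as the file would be read

# Selecting a config the same way the notebook does below
config = [el[-1] for el in best_configs
          if el[0] == "cifar10" and el[1] == "lgbm" and el[2] == "wstats-perlayer"][0]
print(config["num_leaves"])  # 64
```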
```
with gfile.GFile(os.path.join(CONFIGS_PATH_BASE, 'best_configs.json'), 'r') as file:
best_configs = json.load(file)
```
# 3.1 Training GBM predictors
The GBM code below requires the `lightgbm` package.
This is an example of training a GBM on the CIFAR10-GS CNN collection, using per-layer weight statistics as inputs.
```
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'cifar10' and
el[1] == 'lgbm' and
el[2] == 'wstats-perlayer'][0]
# Pre-process the weights
train_x = np.apply_along_axis(
extract_per_layer_features, 1,
weights_train['cifar10'],
qts=None,
layers=(0, 1, 2, 3))
test_x = np.apply_along_axis(
extract_per_layer_features, 1,
weights_test['cifar10'],
qts=None,
layers=(0, 1, 2, 3))
# Get the target values
train_y, test_y = outputs_train['cifar10'][:, 0], outputs_test['cifar10'][:, 0]
# Define the GBM model
lgbm_model = lgb.LGBMRegressor(
num_leaves=config['num_leaves'],
max_depth=config['max_depth'],
learning_rate=config['learning_rate'],
max_bin=int(config['max_bin']),
min_child_weight=config['min_child_weight'],
reg_lambda=config['reg_lambda'],
reg_alpha=config['reg_alpha'],
subsample=config['subsample'],
subsample_freq=1, # it means always subsample
colsample_bytree=config['colsample_bytree'],
n_estimators=2000,
first_metric_only=True
)
# Train the GBM model;
# Early stopping will be based on rmse of test set
eval_metric = ['rmse', 'l1']
eval_set = [(test_x, test_y)]
lgbm_model.fit(train_x, train_y, verbose=100,
early_stopping_rounds=500,
eval_metric=eval_metric,
eval_set=eval_set,
eval_names=['test'])
# Evaluate the GBM model
assert hasattr(lgbm_model, 'best_iteration_')
# Choose the step which had the best rmse on the test set
best_iter = lgbm_model.best_iteration_ - 1
lgbm_history = lgbm_model.evals_result_
mse = lgbm_history['test']['rmse'][best_iter] ** 2.
mad = lgbm_history['test']['l1'][best_iter]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', mad)
print('Test R2 = ', r2)
```
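The evaluation above derives R² from the logged MSE and the variance of the test targets. A quick NumPy check, with made-up data, that this matches the usual $1 - SS_{res} / SS_{tot}$ definition:

```python
import numpy as np

rng = np.random.default_rng(42)
test_y = rng.uniform(0.1, 0.6, size=1000)        # stand-in test accuracies
y_hat = test_y + rng.normal(0, 0.02, size=1000)  # stand-in predictions

# R2 as computed in the notebook: from MSE and target variance
mse = np.mean((test_y - y_hat) ** 2.)
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var

# The textbook definition of R2 gives the same quantity
ss_res = np.sum((test_y - y_hat) ** 2.)
ss_tot = np.sum((test_y - np.mean(test_y)) ** 2.)
print(np.isclose(r2, 1. - ss_res / ss_tot))  # True
```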
# 3.2 Training DNN predictors
This is an example of training a DNN on the MNIST CNN collection, using all weights as inputs.
```
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'mnist' and
el[1] == 'dnn' and
el[2] == 'weights'][0]
train_x, test_x = weights_train['mnist'], weights_test['mnist']
train_y, test_y = outputs_train['mnist'][:, 0], outputs_test['mnist'][:, 0]
# Get the optimizer, initializers, and regularizers
optimizer = keras.optimizers.get(config['optimizer_name'])
optimizer.learning_rate = config['learning_rate']
w_init = keras.initializers.get(config['w_init_name'])
if config['w_init_name'].lower() in ['truncatednormal', 'randomnormal']:
w_init.stddev = config['init_stddev']
b_init = keras.initializers.get('zeros')
w_reg = (keras.regularizers.l2(config['l2_penalty'])
if config['l2_penalty'] > 0 else None)
# Get the fully connected DNN architecture
dnn_model = build_fcn(int(config['n_layers']),
int(config['n_hiddens']),
1, # number of outputs
config['dropout_rate'],
'relu',
w_reg, w_init, b_init,
'sigmoid') # Last activation
dnn_model.compile(
optimizer=optimizer,
loss='mean_squared_error',
metrics=['mse', 'mae'])
# Train the model
dnn_model.fit(
train_x, train_y,
batch_size=int(config['batch_size']),
epochs=300,
validation_data=(test_x, test_y),
verbose=1,
callbacks=[keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=10,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False)]
)
# Evaluate the model
eval_train = dnn_model.evaluate(train_x, train_y, batch_size=128, verbose=0)
eval_test = dnn_model.evaluate(test_x, test_y, batch_size=128, verbose=0)
assert dnn_model.metrics_names[1] == 'mean_squared_error'
assert dnn_model.metrics_names[2] == 'mean_absolute_error'
mse = eval_test[1]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', eval_test[2])
print('Test R2 = ', r2)
```
# 3.3 Train Logit-Linear predictors
This is an example of training a Logit-Linear model on the CIFAR10-GS CNN collection, using hyperparameters as inputs.
```
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == 'cifar10' and
el[1] == 'linear' and
el[2] == 'hyper'][0]
# Turn DataFrames to numpy arrays.
# Since we used "binarize=True" when calling filter_checkpoints all the
# categorical columns were binarized.
train_x = configs_train['cifar10'].values.astype(np.float32)
test_x = configs_test['cifar10'].values.astype(np.float32)
train_y, test_y = outputs_train['cifar10'][:, 0], outputs_test['cifar10'][:, 0]
# Get the optimizer, initializers, and regularizers
optimizer = keras.optimizers.get(config['optimizer_name'])
optimizer.learning_rate = config['learning_rate']
w_init = keras.initializers.get(config['w_init_name'])
if config['w_init_name'].lower() in ['truncatednormal', 'randomnormal']:
w_init.stddev = config['init_stddev']
b_init = keras.initializers.get('zeros')
w_reg = (keras.regularizers.l2(config['l2_penalty'])
if config['l2_penalty'] > 0 else None)
# Get the linear architecture (DNN with 0 layers)
dnn_model = build_fcn(int(config['n_layers']),
int(config['n_hiddens']),
1, # number of outputs
0.0, # Dropout is not used (build_fcn only adds dropout when the rate is > 0; None would break the comparison)
'relu',
w_reg, w_init, b_init,
'sigmoid') # Last activation
dnn_model.compile(
optimizer=optimizer,
loss='mean_squared_error',
metrics=['mse', 'mae'])
# Train the model
dnn_model.fit(
train_x, train_y,
batch_size=int(config['batch_size']),
epochs=300,
validation_data=(test_x, test_y),
verbose=1,
callbacks=[keras.callbacks.EarlyStopping(
monitor='val_loss',
min_delta=0,
patience=10,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False)]
)
# Evaluate the model
eval_train = dnn_model.evaluate(train_x, train_y, batch_size=128, verbose=0)
eval_test = dnn_model.evaluate(test_x, test_y, batch_size=128, verbose=0)
assert dnn_model.metrics_names[1] == 'mean_squared_error'
assert dnn_model.metrics_names[2] == 'mean_absolute_error'
mse = eval_test[1]
var = np.mean((test_y - np.mean(test_y)) ** 2.)
r2 = 1. - mse / var
print('Test MSE = ', mse)
print('Test MAD = ', eval_test[2])
print('Test R2 = ', r2)
```
# 4. Figure 4: Transfer across datasets
Train a GBM predictor on each of the 4 CNN collections, using per-layer weight statistics as inputs. Then evaluate each predictor on all 4 CNN collections (without fine-tuning) and store the results.
```
transfer_results = {}
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
print('Training on %s' % dataset)
transfer_results[dataset] = {}
train_x = weights_train[dataset]
test_x = weights_test[dataset]
train_y = outputs_train[dataset][:, 0]
test_y = outputs_test[dataset][:, 0]
# Pre-process the weights by taking the statistics across layers
train_x = np.apply_along_axis(
extract_per_layer_features, 1,
train_x, qts=None, layers=(0, 1, 2, 3))
test_x = np.apply_along_axis(
extract_per_layer_features, 1,
test_x, qts=None, layers=(0, 1, 2, 3))
# Take the best config we found
config = [el[-1] for el in best_configs if
el[0] == dataset and
el[1] == 'lgbm' and
el[2] == 'wstats-perlayer'][0]
lgbm_model = lgb.LGBMRegressor(
num_leaves=config['num_leaves'],
max_depth=config['max_depth'],
learning_rate=config['learning_rate'],
max_bin=int(config['max_bin']),
min_child_weight=config['min_child_weight'],
reg_lambda=config['reg_lambda'],
reg_alpha=config['reg_alpha'],
subsample=config['subsample'],
subsample_freq=1, # Always subsample
colsample_bytree=config['colsample_bytree'],
n_estimators=4000,
first_metric_only=True,
)
# Train the GBM model
lgbm_model.fit(
train_x,
train_y,
verbose=100,
# verbose=False,
early_stopping_rounds=500,
eval_metric=['rmse', 'l1'],
eval_set=[(test_x, test_y)],
eval_names=['test'])
# Evaluate on all 4 CNN collections
for transfer_to in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
print('Evaluating on %s' % transfer_to)
# Take the test split of the dataset
transfer_x = weights_test[transfer_to]
transfer_x = np.apply_along_axis(
extract_per_layer_features, 1,
transfer_x, qts=None, layers=(0, 1, 2, 3))
y_hat = lgbm_model.predict(transfer_x)
transfer_results[dataset][transfer_to] = y_hat
```
And plot everything
```
plt.figure(figsize = (15, 15))
pic_id = 0
for dataset in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
for transfer_to in ['mnist', 'fashion_mnist', 'cifar10', 'svhn_cropped']:
pic_id += 1
sp = plt.subplot(4, 4, pic_id)
# Take true labels
y_true = outputs_test[transfer_to][:, 0]
# Take the predictions of the model
y_hat = transfer_results[dataset][transfer_to]
plt.plot([0.01, .99], [0.01, .99], 'r--', linewidth=2)
sns.scatterplot(y_true, y_hat)
# Compute the Kendall's tau coefficient
tau = stats.kendalltau(y_true, y_hat)[0]
plt.text(0.05, 0.9, r"$\tau=%.3f$" % tau, fontsize=25)
plt.xlim((0.0, 1.0))
plt.ylim((0.0, 1.0))
if pic_id % 4 != 1:
sp.axes.get_yaxis().set_ticklabels([])
else:
plt.ylabel('Predictions', fontsize=22)
sp.axes.tick_params(axis='both', labelsize=15)
if pic_id < 13:
sp.axes.get_xaxis().set_ticklabels([])
else:
plt.xlabel('Test accuracy', fontsize=22)
sp.axes.tick_params(axis='both', labelsize=15)
if pic_id == 1:
plt.title('MNIST', fontsize=22)
if pic_id == 2:
plt.title('Fashion-MNIST', fontsize=22)
if pic_id == 3:
plt.title('CIFAR10-GS', fontsize=22)
if pic_id == 4:
plt.title('SVHN-GS', fontsize=22)
plt.tight_layout()
```
# 5. Figure 3: various 2d plots based on subsets of weights statistics
Take the weight statistics for the CIFAR10-GS CNN collection and produce various 2D plots.
```
# Take the per-layer weights stats for the train split of CIFAR10-GS collection
per_layer_stats = np.apply_along_axis(
extract_per_layer_features, 1,
weights_train['cifar10'])
train_test_accuracy = outputs_train['cifar10'][:, 0]
# Positions of various stats
b0min = 0 # min of the first layer
b0max = 4 # max of the first layer
bnmin = 6*7 + 0 # min of the last layer
bnmax = 6*7 + 4 # max of the last layer
x = per_layer_stats[:,b0max] - per_layer_stats[:,b0min]
y = per_layer_stats[:,bnmax] - per_layer_stats[:,bnmin]
plt.figure(figsize=(10,8))
plt.scatter(x, y, s=15,
c=train_test_accuracy,
cmap="jet",
vmin=0.1,
vmax=0.54,
linewidths=0)
plt.yscale("log")
plt.xscale("log")
plt.ylim(0.1, 10)
plt.xlim(0.1, 10)
plt.xlabel("Bias range, first layer", fontsize=22)
plt.ylabel("Bias range, final layer", fontsize=22)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=18)
plt.tight_layout()
```
<a href="https://colab.research.google.com/github/https-deeplearning-ai/GANs-Public/blob/master/C3W2_Pix2PixHD_(Optional).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Pix2PixHD
*Please note that this is an optional notebook, meant to introduce more advanced concepts if you're up for a challenge, so don't worry if you don't completely follow!*
It is recommended that you already be familiar with:
- Residual blocks, from [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) (He et al. 2015)
- Perceptual loss, from [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) (Johnson et al. 2016)
- VGG architecture, from [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556) (Simonyan et al. 2015)
- Instance normalization (which you should know from StyleGAN), from [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022) (Ulyanov et al. 2017)
- Reflection padding, which PyTorch implements as [torch.nn.ReflectionPad2d](https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html)
**Goals**
In this notebook, you will learn about Pix2PixHD, which synthesizes high-resolution images from semantic label maps. Proposed in [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://arxiv.org/abs/1711.11585) (Wang et al. 2018), Pix2PixHD improves upon Pix2Pix via multiscale architecture, improved adversarial loss, and instance maps.
## Residual Blocks
The residual block, which is relevant in many state-of-the-art computer vision models, is used in all parts of Pix2PixHD. If you're not familiar with residual blocks, please take a look [here](https://paperswithcode.com/method/residual-block). Now, you'll start by first implementing a basic residual block.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class ResidualBlock(nn.Module):
'''
ResidualBlock Class
Values
channels: the number of channels throughout the residual block, a scalar
'''
def __init__(self, channels):
super().__init__()
self.layers = nn.Sequential(
nn.ReflectionPad2d(1),
nn.Conv2d(channels, channels, kernel_size=3, padding=0),
nn.InstanceNorm2d(channels, affine=False),
nn.ReLU(inplace=True),
nn.ReflectionPad2d(1),
nn.Conv2d(channels, channels, kernel_size=3, padding=0),
nn.InstanceNorm2d(channels, affine=False),
)
def forward(self, x):
return x + self.layers(x)
```
## Multiscale Generator: Generating at multiple scales (resolutions)
The Pix2PixHD generator is composed of two separate subcomponent generators: $G_1$, called the global generator, operates at lower resolution (1024 x 512) and transfers styles; $G_2$, the local enhancer, operates at high resolution (2048 x 1024) and refines the output at that scale.
The architecture for each network is adapted from [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) (Johnson et al. 2016) and is comprised of
\begin{align*}
G = \left[G^{(F)}, G^{(R)}, G^{(B)}\right],
\end{align*}
where $G^{(F)}$ is a frontend of convolutional blocks (downsampling), $G^{(R)}$ is a set of residual blocks, and $G^{(B)}$ is a backend of transposed convolutional blocks (upsampling). This is just a type of encoder-decoder generator that you learned about with Pix2Pix!
$G_1$ is trained first on low-resolution images. Then, $G_2$ is added to the pre-trained $G_1$ and both are trained jointly on high-resolution images. Specifically, $G_2^{(F)}$ encodes a high-resolution image, $G_1$ encodes a downsampled, low-resolution image, and the outputs from both are summed and passed sequentially to $G_2^{(R)}$ and $G_2^{(B)}$. This pre-training and fine-tuning scheme works well because the model is able to learn accurate coarser representations before using them to touch up its refined representations, since learning high-fidelity representations is generally a pretty hard task.
*Pix2PixHD Generator, taken from Figure 3 of [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://arxiv.org/abs/1711.11585) (Wang et al. 2018). Following our notation, $G = \left[G_2^{(F)}, G_1^{(F)}, G_1^{(R)}, G_1^{(B)}, G_2^{(R)}, G_2^{(B)}\right]$ from left to right.*
### Global Subgenerator ($G_1$)
Let's first start by building the global generator ($G_1$). Even though the global generator is nested inside the local enhancer, you'll still need a separate module for training $G_1$ on its own first.
```
class GlobalGenerator(nn.Module):
'''
GlobalGenerator Class:
Implements the global subgenerator (G1) for transferring styles at lower resolutions.
Values:
in_channels: the number of input channels, a scalar
out_channels: the number of output channels, a scalar
base_channels: the number of channels in first convolutional layer, a scalar
fb_blocks: the number of frontend / backend blocks, a scalar
res_blocks: the number of residual blocks, a scalar
'''
def __init__(self, in_channels, out_channels,
base_channels=64, fb_blocks=3, res_blocks=9):
super().__init__()
# Initial convolutional layer
g1 = [
nn.ReflectionPad2d(3),
nn.Conv2d(in_channels, base_channels, kernel_size=7, padding=0),
nn.InstanceNorm2d(base_channels, affine=False),
nn.ReLU(inplace=True),
]
channels = base_channels
# Frontend blocks
for _ in range(fb_blocks):
g1 += [
nn.Conv2d(channels, 2 * channels, kernel_size=3, stride=2, padding=1),
nn.InstanceNorm2d(2 * channels, affine=False),
nn.ReLU(inplace=True),
]
channels *= 2
# Residual blocks
for _ in range(res_blocks):
g1 += [ResidualBlock(channels)]
# Backend blocks
for _ in range(fb_blocks):
g1 += [
nn.ConvTranspose2d(channels, channels // 2, kernel_size=3, stride=2, padding=1, output_padding=1),
nn.InstanceNorm2d(channels // 2, affine=False),
nn.ReLU(inplace=True),
]
channels //= 2
# Output convolutional layer as its own nn.Sequential since it will be omitted in second training phase
self.out_layers = nn.Sequential(
nn.ReflectionPad2d(3),
nn.Conv2d(base_channels, out_channels, kernel_size=7, padding=0),
nn.Tanh(),
)
self.g1 = nn.Sequential(*g1)
def forward(self, x):
x = self.g1(x)
x = self.out_layers(x)
return x
```
### Local Enhancer Subgenerator ($G_2$)
And now onto the local enhancer ($G_2$)! Recall that the local enhancer uses (a pretrained) $G_1$ as part of its architecture. Following our earlier notation, recall that the residual connections from the last layers of $G_2^{(F)}$ and $G_1^{(B)}$ are added together and passed through $G_2^{(R)}$ and $G_2^{(B)}$ to synthesize a high-resolution image. Because of this, you should reuse the $G_1$ implementation so that the weights are consistent for the second training phase.
```
class LocalEnhancer(nn.Module):
'''
LocalEnhancer Class:
Implements the local enhancer subgenerator (G2) for handling larger scale images.
Values:
in_channels: the number of input channels, a scalar
out_channels: the number of output channels, a scalar
base_channels: the number of channels in first convolutional layer, a scalar
global_fb_blocks: the number of global generator frontend / backend blocks, a scalar
global_res_blocks: the number of global generator residual blocks, a scalar
local_res_blocks: the number of local enhancer residual blocks, a scalar
'''
def __init__(self, in_channels, out_channels, base_channels=32, global_fb_blocks=3, global_res_blocks=9, local_res_blocks=3):
super().__init__()
global_base_channels = 2 * base_channels
# Downsampling layer for high-res -> low-res input to g1
self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)
# Initialize global generator without its output layers
self.g1 = GlobalGenerator(
in_channels, out_channels, base_channels=global_base_channels, fb_blocks=global_fb_blocks, res_blocks=global_res_blocks,
).g1
self.g2 = nn.ModuleList()
# Initialize local frontend block
self.g2.append(
nn.Sequential(
# Initial convolutional layer
nn.ReflectionPad2d(3),
nn.Conv2d(in_channels, base_channels, kernel_size=7, padding=0),
nn.InstanceNorm2d(base_channels, affine=False),
nn.ReLU(inplace=True),
# Frontend block
nn.Conv2d(base_channels, 2 * base_channels, kernel_size=3, stride=2, padding=1),
nn.InstanceNorm2d(2 * base_channels, affine=False),
nn.ReLU(inplace=True),
)
)
# Initialize local residual and backend blocks
self.g2.append(
nn.Sequential(
# Residual blocks
*[ResidualBlock(2 * base_channels) for _ in range(local_res_blocks)],
# Backend blocks
nn.ConvTranspose2d(2 * base_channels, base_channels, kernel_size=3, stride=2, padding=1, output_padding=1),
nn.InstanceNorm2d(base_channels, affine=False),
nn.ReLU(inplace=True),
# Output convolutional layer
nn.ReflectionPad2d(3),
nn.Conv2d(base_channels, out_channels, kernel_size=7, padding=0),
nn.Tanh(),
)
)
def forward(self, x):
# Get output from g1_B
x_g1 = self.downsample(x)
x_g1 = self.g1(x_g1)
# Get output from g2_F
x_g2 = self.g2[0](x)
# Get final output from g2_B
return self.g2[1](x_g1 + x_g2)
```
And voilà! You now have modules for both the global subgenerator and local enhancer subgenerator!
## Multiscale Discriminator: Discriminating at different scales too!
Pix2PixHD uses 3 separate subcomponents (subdiscriminators $D_1$, $D_2$, and $D_3$) to generate predictions. They all have the same architectures but $D_2$ and $D_3$ operate on inputs downsampled by 2x and 4x, respectively. The GAN objective is now modified as
\begin{align*}
\min_G \max_{D_1,D_2,D_3}\sum_{k=1,2,3}\mathcal{L}_{\text{GAN}}(G, D_k)
\end{align*}
Each subdiscriminator is a PatchGAN, which you should be familiar with from Pix2Pix!
Let's first implement a single PatchGAN. This implementation will be slightly different from the one you saw in Pix2Pix, since the intermediate feature maps will be needed for computing the loss.
```
class Discriminator(nn.Module):
'''
Discriminator Class
Implements the discriminator class for a subdiscriminator,
which can be used for all the different scales, just with different argument values.
Values:
in_channels: the number of channels in input, a scalar
base_channels: the number of channels in first convolutional layer, a scalar
n_layers: the number of convolutional layers, a scalar
'''
def __init__(self, in_channels, base_channels=64, n_layers=3):
super().__init__()
# Use nn.ModuleList so we can output intermediate values for loss.
self.layers = nn.ModuleList()
# Initial convolutional layer
self.layers.append(
nn.Sequential(
nn.Conv2d(in_channels, base_channels, kernel_size=4, stride=2, padding=2),
nn.LeakyReLU(0.2, inplace=True),
)
)
# Downsampling convolutional layers
channels = base_channels
for _ in range(1, n_layers):
prev_channels = channels
channels = min(2 * channels, 512)
self.layers.append(
nn.Sequential(
nn.Conv2d(prev_channels, channels, kernel_size=4, stride=2, padding=2),
nn.InstanceNorm2d(channels, affine=False),
nn.LeakyReLU(0.2, inplace=True),
)
)
# Output convolutional layer
prev_channels = channels
channels = min(2 * channels, 512)
self.layers.append(
nn.Sequential(
nn.Conv2d(prev_channels, channels, kernel_size=4, stride=1, padding=2),
nn.InstanceNorm2d(channels, affine=False),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(channels, 1, kernel_size=4, stride=1, padding=2),
)
)
def forward(self, x):
outputs = [] # for feature matching loss
for layer in self.layers:
x = layer(x)
outputs.append(x)
return outputs
```
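The intermediate outputs collected above are what the paper's feature-matching loss consumes: an L1 distance between the discriminator features of real and generated images, averaged over layers. A simplified NumPy sketch, with random arrays standing in for the per-layer outputs of `Discriminator.forward`:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    # Mean L1 distance between corresponding feature maps, averaged over layers
    # (a simplified sketch of the paper's feature-matching term)
    losses = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return np.mean(losses)

rng = np.random.default_rng(0)
# Stand-ins for three layers of discriminator features (N, C, H, W)
real_feats = [rng.normal(size=(1, c, 8, 8)) for c in (64, 128, 256)]
fake_feats = [f + 0.1 for f in real_feats]  # fake features offset by 0.1 everywhere
print(round(feature_matching_loss(real_feats, fake_feats), 3))  # 0.1
```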
Now you're ready to implement the multiscale discriminator in full! This puts together the different subdiscriminator scales.
```
class MultiscaleDiscriminator(nn.Module):
'''
MultiscaleDiscriminator Class
Values:
in_channels: number of input channels to each discriminator, a scalar
base_channels: number of channels in first convolutional layer, a scalar
n_layers: number of downsampling layers in each discriminator, a scalar
n_discriminators: number of discriminators at different scales, a scalar
'''
def __init__(self, in_channels, base_channels=64, n_layers=3, n_discriminators=3):
super().__init__()
# Initialize all discriminators
self.discriminators = nn.ModuleList()
for _ in range(n_discriminators):
self.discriminators.append(
Discriminator(in_channels, base_channels=base_channels, n_layers=n_layers)
)
# Downsampling layer to pass inputs between discriminators at different scales
self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)
def forward(self, x):
outputs = []
for i, discriminator in enumerate(self.discriminators):
# Downsample input for subsequent discriminators
if i != 0:
x = self.downsample(x)
outputs.append(discriminator(x))
# Return list of multiscale discriminator outputs
return outputs
@property
def n_discriminators(self):
return len(self.discriminators)
```
## Instance Boundary Map: Learning boundaries between instances
Here's a new method that adds additional information as conditional input!
The authors observed that previous approaches typically take in a label map (a.k.a. segmentation map) that labels each pixel as belonging to a certain class (e.g. car) but doesn't differentiate between two instances of the same class (e.g. two cars in the image). This is the difference between *semantic label maps*, which have class labels but not instance labels, and *instance label maps*, which represent unique instances with unique numbers.
The authors found that the most important information in the instance label map is actually the boundaries between instances (i.e. the outline of each car). You can create a boundary map by mapping each pixel to 1 if it belongs to a different instance than any of its 4 neighbors, and 0 otherwise.
To include this information, the authors concatenate the boundary map with the semantic label map as input. From the figure below, you can see that including both as input results in much sharper generated images (right) than only inputting the semantic label map (left).
*Semantic label map input (top left) and its blurry output between instances (bottom left) vs. instance boundary map (top right) and the much clearer output between instances from inputting both the semantic label map and the instance boundary map (bottom right). Taken from Figures 4 and 5 of [High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https://arxiv.org/abs/1711.11585) (Wang et al. 2018).*
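The 4-neighbor rule described above can be sketched in a few lines of NumPy (a standalone illustration with a tiny made-up instance map; the dataset code later in this notebook does the same thing on PyTorch tensors):

```python
import numpy as np

def boundary_map(inst):
    """Mark a pixel 1 if any of its 4 neighbors belongs to a different instance."""
    bound = np.zeros_like(inst, dtype=np.uint8)
    # Horizontal neighbors: flag both sides of every vertical boundary
    horiz = inst[:, 1:] != inst[:, :-1]
    bound[:, 1:] |= horiz
    bound[:, :-1] |= horiz
    # Vertical neighbors: flag both sides of every horizontal boundary
    vert = inst[1:, :] != inst[:-1, :]
    bound[1:, :] |= vert
    bound[:-1, :] |= vert
    return bound

# Two instances (1 and 2) side by side, plus instance 3 below
inst = np.array([[1, 1, 2],
                 [1, 1, 2],
                 [3, 3, 3]])
print(boundary_map(inst))
# [[0 1 1]
#  [1 1 1]
#  [1 1 1]]
```

Note that both pixels on either side of a boundary get flagged, which is why the outline comes out two pixels thick.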
## Instance-level Feature Encoder: Adding controllable diversity
As you already know, the task of generation has more than one possible realistic output. For example, an object of class `road` could be concrete, cobblestone, dirt, etc. To learn this diversity, the authors introduce an encoder $E$, which takes the original image as input and outputs a feature map (like the feature extractor from Course 2, Week 1). They apply *instance-wise averaging*, averaging the feature vectors across all occurrences of each instance (so that every pixel corresponding to the same instance has the same feature vector). They then concatenate this instance-level feature embedding with the semantic label and instance boundary maps as input to the generator.
What's cool is that the encoder $E$ is trained jointly with $G_1$. One huge backprop! When training $G_2$, $E$ is fed a downsampled image and the corresponding output is upsampled to pass into $G_2$.
To allow for control over different features (e.g. concrete, cobblestone, and dirt) for inference, the authors first use K-means clustering to cluster all the feature vectors for each object class in the training set. You can think of this as a dictionary, mapping each class label to a set of feature vectors (so $K$ centroids, each representing different clusters of features). Now during inference, you can perform a random lookup from this dictionary for each class (e.g. road) in the semantic label map to generate one type of feature (e.g. dirt). To provide greater control, you can select among different feature types for each class to generate diverse feature types and, as a result, multi-modal outputs from the same input.
Higher values of $K$ increase diversity and potentially decrease fidelity. You've seen this tradeoff between diversity and fidelity before with the truncation trick, and this is just another way to trade-off between them.
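As a rough sketch of this idea (with made-up feature vectors, not the encoder's actual outputs), you can cluster the per-class features with scikit-learn's `KMeans` and then randomly pick a centroid at inference:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend these are 3-dimensional encoder features for the "road" class,
# drawn from two different "styles" (e.g. concrete-like vs. dirt-like)
road_features = np.concatenate([
    rng.normal(0.0, 0.1, size=(50, 3)),
    rng.normal(1.0, 0.1, size=(50, 3)),
])

k = 2  # small for this toy example; the paper uses more centroids per class
centroids = {'road': KMeans(n_clusters=k, n_init=10).fit(road_features).cluster_centers_}

# Inference: randomly look up one style for each class in the label map
style = centroids['road'][rng.integers(k)]
```

Every "road" pixel then gets `style` as its feature vector; re-sampling the lookup produces a different but still realistic output from the same label map.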
```
class Encoder(nn.Module):
'''
Encoder Class
Values:
in_channels: number of input channels to each discriminator, a scalar
out_channels: number of channels in output feature map, a scalar
base_channels: number of channels in first convolutional layer, a scalar
n_layers: number of downsampling layers, a scalar
'''
def __init__(self, in_channels, out_channels, base_channels=16, n_layers=4):
super().__init__()
self.out_channels = out_channels
channels = base_channels
layers = [
nn.ReflectionPad2d(3),
nn.Conv2d(in_channels, base_channels, kernel_size=7, padding=0),
nn.InstanceNorm2d(base_channels),
nn.ReLU(inplace=True),
]
# Downsampling layers
for i in range(n_layers):
layers += [
nn.Conv2d(channels, 2 * channels, kernel_size=3, stride=2, padding=1),
nn.InstanceNorm2d(2 * channels),
nn.ReLU(inplace=True),
]
channels *= 2
# Upsampling layers
for i in range(n_layers):
layers += [
nn.ConvTranspose2d(channels, channels // 2, kernel_size=3, stride=2, padding=1, output_padding=1),
nn.InstanceNorm2d(channels // 2),
nn.ReLU(inplace=True),
]
channels //= 2
layers += [
nn.ReflectionPad2d(3),
nn.Conv2d(base_channels, out_channels, kernel_size=7, padding=0),
nn.Tanh(),
]
self.layers = nn.Sequential(*layers)
def instancewise_average_pooling(self, x, inst):
'''
Applies instance-wise average pooling.
Given a feature map of size (b, c, h, w), the mean is computed for each b, c
across all h, w of the same instance
'''
x_mean = torch.zeros_like(x)
classes = torch.unique(inst, return_inverse=False, return_counts=False) # gather all unique classes present
for i in classes:
for b in range(x.size(0)):
indices = torch.nonzero(inst[b:b+1] == i, as_tuple=False) # get indices of all positions equal to class i
for j in range(self.out_channels):
x_ins = x[indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3]]
mean_feat = torch.mean(x_ins).expand_as(x_ins)
x_mean[indices[:, 0] + b, indices[:, 1] + j, indices[:, 2], indices[:, 3]] = mean_feat
return x_mean
def forward(self, x, inst):
x = self.layers(x)
x = self.instancewise_average_pooling(x, inst)
return x
```
## Additional Loss Functions
In addition to the architectural and feature-map enhancements, the authors also incorporate a feature matching loss based on the discriminator. Essentially, they output intermediate feature maps at different resolutions from the discriminator and try to minimize the difference between the real and fake image features.
The authors found that this stabilizes training: it forces the generator to produce natural statistics at multiple scales. This feature-matching loss is similar to StyleGAN's perceptual loss. For some semantic label map $s$ and corresponding image $x$,
\begin{align*}
\mathcal{L}_{\text{FM}} = \mathbb{E}_{s,x}\left[\sum_{i=1}^T\dfrac{1}{N_i}\left|\left|D^{(i)}_k(s, x) - D^{(i)}_k(s, G(s))\right|\right|_1\right]
\end{align*}
where $T$ is the total number of layers, $N_i$ is the number of elements at layer $i$, and $D^{(i)}_k$ denotes the $i$th layer in discriminator $k$.
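To make the formula concrete, here is a small numeric sketch of the inner sum for one discriminator, using random arrays in place of real feature maps. Note that `F.l1_loss` in the PyTorch `Loss` class defaults to mean reduction, which supplies the $1/N_i$ factor automatically:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for intermediate discriminator features at two layers (shapes are made up)
real_feats = [rng.normal(size=(8, 8)), rng.normal(size=(4, 4))]
fake_feats = [rng.normal(size=(8, 8)), rng.normal(size=(4, 4))]

# sum_i (1/N_i) * || D_i(s, x) - D_i(s, G(s)) ||_1
fm = sum(np.abs(r - f).sum() / r.size for r, f in zip(real_feats, fake_feats))
```

Minimizing this drives the fake features toward the real ones at every layer, not just the final real/fake score.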
The authors also report minor improvements in performance when adding perceptual loss, formulated as
\begin{align*}
\mathcal{L}_{\text{VGG}} = \mathbb{E}_{s,x}\left[\sum_{i=1}^N\dfrac{1}{M_i}\left|\left|F^i(x) - F^i(G(s))\right|\right|_1\right]
\end{align*}
where $F^i$ denotes the $i$th layer with $M_i$ elements of the VGG19 network. `torchvision` provides a pretrained VGG19 network, so you'll just need a simple wrapper for it to get the intermediate outputs.
The overall loss looks like this:
\begin{align*}
\mathcal{L} = \mathcal{L}_{\text{GAN}} + \lambda_1\mathcal{L}_{\text{FM}} + \lambda_2\mathcal{L}_{\text{VGG}}
\end{align*}
where $\lambda_1 = \lambda_2 = 10$.
```
import torchvision.models as models
class VGG19(nn.Module):
'''
VGG19 Class
Wrapper for pretrained torchvision.models.vgg19 to output intermediate feature maps
'''
def __init__(self):
super().__init__()
vgg_features = models.vgg19(pretrained=True).features
self.f1 = nn.Sequential(*[vgg_features[x] for x in range(2)])
self.f2 = nn.Sequential(*[vgg_features[x] for x in range(2, 7)])
self.f3 = nn.Sequential(*[vgg_features[x] for x in range(7, 12)])
self.f4 = nn.Sequential(*[vgg_features[x] for x in range(12, 21)])
self.f5 = nn.Sequential(*[vgg_features[x] for x in range(21, 30)])
for param in self.parameters():
param.requires_grad = False
def forward(self, x):
h1 = self.f1(x)
h2 = self.f2(h1)
h3 = self.f3(h2)
h4 = self.f4(h3)
h5 = self.f5(h4)
return [h1, h2, h3, h4, h5]
class Loss(nn.Module):
'''
Loss Class
Implements composite loss for GauGAN
Values:
lambda1: weight for feature matching loss, a float
lambda2: weight for vgg perceptual loss, a float
device: 'cuda' or 'cpu' for hardware to use
norm_weight_to_one: whether to normalize weights to (0, 1], a bool
'''
def __init__(self, lambda1=10., lambda2=10., device='cuda', norm_weight_to_one=True):
super().__init__()
self.vgg = VGG19().to(device)
self.vgg_weights = [1.0/32, 1.0/16, 1.0/8, 1.0/4, 1.0]
lambda0 = 1.0
# Keep ratio of composite loss, but scale down max to 1.0
scale = max(lambda0, lambda1, lambda2) if norm_weight_to_one else 1.0
self.lambda0 = lambda0 / scale
self.lambda1 = lambda1 / scale
self.lambda2 = lambda2 / scale
def adv_loss(self, discriminator_preds, is_real):
'''
Computes adversarial loss from a nested list of discriminator outputs.
'''
target = torch.ones_like if is_real else torch.zeros_like
adv_loss = 0.0
for preds in discriminator_preds:
pred = preds[-1]
adv_loss += F.mse_loss(pred, target(pred))
return adv_loss
def fm_loss(self, real_preds, fake_preds):
'''
Computes feature matching loss from nested lists of fake and real outputs from discriminator.
'''
fm_loss = 0.0
for real_features, fake_features in zip(real_preds, fake_preds):
for real_feature, fake_feature in zip(real_features, fake_features):
fm_loss += F.l1_loss(real_feature.detach(), fake_feature)
return fm_loss
def vgg_loss(self, x_real, x_fake):
'''
Computes perceptual loss with VGG network from real and fake images.
'''
vgg_real = self.vgg(x_real)
vgg_fake = self.vgg(x_fake)
vgg_loss = 0.0
for real, fake, weight in zip(vgg_real, vgg_fake, self.vgg_weights):
vgg_loss += weight * F.l1_loss(real.detach(), fake)
return vgg_loss
def forward(self, x_real, label_map, instance_map, boundary_map, encoder, generator, discriminator):
'''
Function that computes the forward pass and total loss for generator and discriminator.
'''
feature_map = encoder(x_real, instance_map)
x_fake = generator(torch.cat((label_map, boundary_map, feature_map), dim=1))
# Get necessary outputs for loss/backprop for both generator and discriminator
fake_preds_for_g = discriminator(torch.cat((label_map, boundary_map, x_fake), dim=1))
fake_preds_for_d = discriminator(torch.cat((label_map, boundary_map, x_fake.detach()), dim=1))
real_preds_for_d = discriminator(torch.cat((label_map, boundary_map, x_real.detach()), dim=1))
g_loss = (
self.lambda0 * self.adv_loss(fake_preds_for_g, True) + \
self.lambda1 * self.fm_loss(real_preds_for_d, fake_preds_for_g) / discriminator.n_discriminators + \
self.lambda2 * self.vgg_loss(x_fake, x_real)
)
d_loss = 0.5 * (
self.adv_loss(real_preds_for_d, True) + \
self.adv_loss(fake_preds_for_d, False)
)
return g_loss, d_loss, x_fake.detach()
```
## Training Pix2PixHD
You now have the Pix2PixHD model coded up! All you have to do now is prepare your dataset. Pix2PixHD is trained on the Cityscapes dataset, which unfortunately requires registration. You'll have to download the dataset and put it in your `data` folder to initialize the dataset code below.
Specifically, you should download the `gtFine_trainvaltest` and `leftImg8bit_trainvaltest` packages and pass the corresponding data split directories to the dataloader.
```
import os
import numpy as np
import torchvision.transforms as transforms
from PIL import Image
def scale_width(img, target_width, method):
'''
Function that scales an image to target_width while retaining aspect ratio.
'''
w, h = img.size
if w == target_width: return img
target_height = target_width * h // w
return img.resize((target_width, target_height), method)
class CityscapesDataset(torch.utils.data.Dataset):
'''
CityscapesDataset Class
Values:
paths: (a list of) paths to load examples from, a list or string
target_width: the size of image widths for resizing, a scalar
n_classes: the number of object classes, a scalar
'''
def __init__(self, paths, target_width=1024, n_classes=35):
super().__init__()
self.n_classes = n_classes
# Collect list of examples
self.examples = {}
if type(paths) == str:
self.load_examples_from_dir(paths)
elif type(paths) == list:
for path in paths:
self.load_examples_from_dir(path)
else:
raise ValueError('`paths` should be a single path or list of paths')
self.examples = list(self.examples.values())
assert all(len(example) == 3 for example in self.examples)
# Initialize transforms for the real color image
self.img_transforms = transforms.Compose([
transforms.Lambda(lambda img: scale_width(img, target_width, Image.BICUBIC)),
transforms.Lambda(lambda img: np.array(img)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# Initialize transforms for semantic label and instance maps
self.map_transforms = transforms.Compose([
transforms.Lambda(lambda img: scale_width(img, target_width, Image.NEAREST)),
transforms.Lambda(lambda img: np.array(img)),
transforms.ToTensor(),
])
def load_examples_from_dir(self, abs_path):
'''
Given a folder of examples, this function returns a list of paired examples.
'''
assert os.path.isdir(abs_path)
img_suffix = '_leftImg8bit.png'
label_suffix = '_gtFine_labelIds.png'
inst_suffix = '_gtFine_instanceIds.png'
for root, _, files in os.walk(abs_path):
for f in files:
if f.endswith(img_suffix):
prefix = f[:-len(img_suffix)]
attr = 'orig_img'
elif f.endswith(label_suffix):
prefix = f[:-len(label_suffix)]
attr = 'label_map'
elif f.endswith(inst_suffix):
prefix = f[:-len(inst_suffix)]
attr = 'inst_map'
else:
continue
if prefix not in self.examples.keys():
self.examples[prefix] = {}
self.examples[prefix][attr] = root + '/' + f
def __getitem__(self, idx):
example = self.examples[idx]
# Load image and maps
img = Image.open(example['orig_img']).convert('RGB') # color image: (3, 512, 1024)
inst = Image.open(example['inst_map']) # instance map: (512, 1024)
label = Image.open(example['label_map']) # semantic label map: (512, 1024)
# Apply corresponding transforms
img = self.img_transforms(img)
inst = self.map_transforms(inst)
label = self.map_transforms(label).long() * 255
# Convert labels to one-hot vectors
label = torch.zeros(self.n_classes, img.shape[1], img.shape[2]).scatter_(0, label, 1.0).to(img.dtype)
# Convert instance map to instance boundary map
bound = torch.ByteTensor(inst.shape).zero_()
bound[:, :, 1:] = bound[:, :, 1:] | (inst[:, :, 1:] != inst[:, :, :-1])
bound[:, :, :-1] = bound[:, :, :-1] | (inst[:, :, 1:] != inst[:, :, :-1])
bound[:, 1:, :] = bound[:, 1:, :] | (inst[:, 1:, :] != inst[:, :-1, :])
bound[:, :-1, :] = bound[:, :-1, :] | (inst[:, 1:, :] != inst[:, :-1, :])
bound = bound.to(img.dtype)
return (img, label, inst, bound)
def __len__(self):
return len(self.examples)
@staticmethod
def collate_fn(batch):
imgs, labels, insts, bounds = [], [], [], []
for (x, l, i, b) in batch:
imgs.append(x)
labels.append(l)
insts.append(i)
bounds.append(b)
return (
torch.stack(imgs, dim=0),
torch.stack(labels, dim=0),
torch.stack(insts, dim=0),
torch.stack(bounds, dim=0),
)
```
Now initialize everything you'll need for training. Don't worry if this looks like a lot of code: it's all stuff you've seen before!
```
from tqdm import tqdm
from torch.utils.data import DataLoader
n_classes = 35 # total number of object classes
rgb_channels = n_features = 3
device = 'cuda'
train_dir = ['data']
epochs = 200 # total number of train epochs
decay_after = 100 # number of epochs with constant lr
lr = 0.0002
betas = (0.5, 0.999)
def lr_lambda(epoch):
''' Function for scheduling learning '''
return 1. if epoch < decay_after else 1 - float(epoch - decay_after) / (epochs - decay_after)
def weights_init(m):
''' Function for initializing all model weights '''
if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
nn.init.normal_(m.weight, 0., 0.02)
loss_fn = Loss(device=device)
## Phase 1: Low Resolution (1024 x 512)
dataloader1 = DataLoader(
CityscapesDataset(train_dir, target_width=1024, n_classes=n_classes),
collate_fn=CityscapesDataset.collate_fn, batch_size=1, shuffle=True, drop_last=False, pin_memory=True,
)
encoder = Encoder(rgb_channels, n_features).to(device).apply(weights_init)
generator1 = GlobalGenerator(n_classes + n_features + 1, rgb_channels).to(device).apply(weights_init)
discriminator1 = MultiscaleDiscriminator(n_classes + 1 + rgb_channels, n_discriminators=2).to(device).apply(weights_init)
g1_optimizer = torch.optim.Adam(list(generator1.parameters()) + list(encoder.parameters()), lr=lr, betas=betas)
d1_optimizer = torch.optim.Adam(list(discriminator1.parameters()), lr=lr, betas=betas)
g1_scheduler = torch.optim.lr_scheduler.LambdaLR(g1_optimizer, lr_lambda)
d1_scheduler = torch.optim.lr_scheduler.LambdaLR(d1_optimizer, lr_lambda)
## Phase 2: High Resolution (2048 x 1024)
dataloader2 = DataLoader(
CityscapesDataset(train_dir, target_width=2048, n_classes=n_classes),
collate_fn=CityscapesDataset.collate_fn, batch_size=1, shuffle=True, drop_last=False, pin_memory=True,
)
generator2 = LocalEnhancer(n_classes + n_features + 1, rgb_channels).to(device).apply(weights_init)
discriminator2 = MultiscaleDiscriminator(n_classes + 1 + rgb_channels).to(device).apply(weights_init)
g2_optimizer = torch.optim.Adam(list(generator2.parameters()) + list(encoder.parameters()), lr=lr, betas=betas)
d2_optimizer = torch.optim.Adam(list(discriminator2.parameters()), lr=lr, betas=betas)
g2_scheduler = torch.optim.lr_scheduler.LambdaLR(g2_optimizer, lr_lambda)
d2_scheduler = torch.optim.lr_scheduler.LambdaLR(d2_optimizer, lr_lambda)
```
And now the training loop, which is pretty much the same between the two phases:
```
from torchvision.utils import make_grid
import matplotlib.pyplot as plt
# Parse torch version for autocast
# ######################################################
version = torch.__version__
version = tuple(int(n) for n in version.split('.')[:-1])
has_autocast = version >= (1, 6)
# ######################################################
def show_tensor_images(image_tensor):
'''
Function for visualizing images: Given a tensor of images, plots the first image in a uniform grid.
'''
image_tensor = (image_tensor + 1) / 2
image_unflat = image_tensor.detach().cpu()
image_grid = make_grid(image_unflat[:1], nrow=1)
plt.imshow(image_grid.permute(1, 2, 0).squeeze())
plt.show()
def train(dataloader, models, optimizers, schedulers, device):
encoder, generator, discriminator = models
g_optimizer, d_optimizer = optimizers
g_scheduler, d_scheduler = schedulers
cur_step = 0
display_step = 100
mean_g_loss = 0.0
mean_d_loss = 0.0
for epoch in range(epochs):
# Training epoch
for (x_real, labels, insts, bounds) in tqdm(dataloader, position=0):
x_real = x_real.to(device)
labels = labels.to(device)
insts = insts.to(device)
bounds = bounds.to(device)
# Enable autocast to FP16 tensors (new feature since torch==1.6.0)
# If you're running older versions of torch, comment this out
# and use NVIDIA apex for mixed/half precision training
if has_autocast:
with torch.cuda.amp.autocast(enabled=(device=='cuda')):
g_loss, d_loss, x_fake = loss_fn(
x_real, labels, insts, bounds, encoder, generator, discriminator
)
else:
g_loss, d_loss, x_fake = loss_fn(
x_real, labels, insts, bounds, encoder, generator, discriminator
)
g_optimizer.zero_grad()
g_loss.backward()
g_optimizer.step()
d_optimizer.zero_grad()
d_loss.backward()
d_optimizer.step()
mean_g_loss += g_loss.item() / display_step
mean_d_loss += d_loss.item() / display_step
if cur_step % display_step == 0 and cur_step > 0:
print('Step {}: Generator loss: {:.5f}, Discriminator loss: {:.5f}'
.format(cur_step, mean_g_loss, mean_d_loss))
show_tensor_images(x_fake.to(x_real.dtype))
show_tensor_images(x_real)
mean_g_loss = 0.0
mean_d_loss = 0.0
cur_step += 1
g_scheduler.step()
d_scheduler.step()
```
And now you can train your models! Remember to set the local enhancer subgenerator to the global subgenerator that you train in the first phase.
In their official repository, the authors don't continue to train the encoder. Instead, they precompute all feature maps, upsample them, and concatenate the result to the input of the local enhancer subgenerator (they also leave a re-train option for it). For simplicity, the script below will just downsample and upsample the high-resolution inputs.
```
# Phase 1: Low Resolution
#######################################################################
train(
dataloader1,
[encoder, generator1, discriminator1],
[g1_optimizer, d1_optimizer],
[g1_scheduler, d1_scheduler],
device,
)
# Phase 2: High Resolution
#######################################################################
# Update global generator in local enhancer with trained
generator2.g1 = generator1.g1
# Freeze encoder and wrap to support high-resolution inputs/outputs
def freeze(encoder):
encoder.eval()
for p in encoder.parameters():
p.requires_grad = False
@torch.no_grad()  # scripting a closure over `encoder` would fail; no_grad suffices for a frozen encoder
def forward(x, inst):
x = F.interpolate(x, scale_factor=0.5, recompute_scale_factor=True)
inst = F.interpolate(inst.float(), scale_factor=0.5, recompute_scale_factor=True)
feat = encoder(x, inst.int())
return F.interpolate(feat, scale_factor=2.0, recompute_scale_factor=True)
return forward
train(
dataloader2,
[freeze(encoder), generator2, discriminator2],
[g2_optimizer, d2_optimizer],
[g2_scheduler, d2_scheduler],
device,
)
```
## Inference with Pix2PixHD
Recall that at inference time, the encoder feature maps from training are saved and clustered with K-means by object class. Again, you'll have to download the Cityscapes dataset into your `data` folder and then run these functions.
```
from sklearn.cluster import KMeans
# Encode features by class label
features = {}
for (x, _, inst, _) in tqdm(dataloader2):
x = x.to(device)
inst = inst.to(device)
area = inst.size(2) * inst.size(3)
# Get pooled feature map
with torch.no_grad():
feature_map = encoder(x, inst)
for i in torch.unique(inst):
label = i if i < 1000 else i // 1000
label = int(label.flatten(0).item())
# All indices should have same feature per class from pooling
idx = torch.nonzero(inst == i, as_tuple=False)
n_inst = idx.size(0)
idx = idx[0, :]
# Retrieve corresponding encoded feature
feature = feature_map[idx[0], :, idx[2], idx[3]].unsqueeze(0)
# Compute rate of feature appearance (in official code, they compute per block)
block_size = 32
rate_per_block = block_size * n_inst / area
rate = torch.ones((1, 1), device=device).to(feature.dtype) * rate_per_block
feature = torch.cat((feature, rate), dim=1)
if label in features.keys():
features[label] = torch.cat((features[label], feature), dim=0)
else:
features[label] = feature
# Cluster features by class label
k = 10
centroids = {}
for label in range(n_classes):
if label not in features.keys():
continue
feature = features[label]
# Thresholding by 0.5 isn't mentioned in the paper, but is present in the
# official code repository, probably so that only frequent features are clustered
feature = feature[feature[:, -1] > 0.5, :-1].cpu().numpy()
if feature.shape[0]:
n_clusters = min(feature.shape[0], k)
kmeans = KMeans(n_clusters=n_clusters).fit(feature)
centroids[label] = kmeans.cluster_centers_
```
After getting the encoded feature centroids per class, you can now run inference! Remember that the generator is trained to take in a concatenation of the semantic label map, instance boundary map, and encoded feature map.
Congrats on making it to the end of this complex notebook! Have fun with this powerful model and be responsible of course ;)
```
def infer(label_map, instance_map, boundary_map):
# Sample feature vector centroids
b, _, h, w = label_map.shape
feature_map = torch.zeros((b, n_features, h, w), device=device).to(label_map.dtype)
for i in torch.unique(instance_map):
label = i if i < 1000 else i // 1000
label = int(label.flatten(0).item())
if label in centroids.keys():
centroid_idx = random.randint(0, centroids[label].shape[0] - 1)
idx = torch.nonzero(instance_map == int(i), as_tuple=False)
feature = torch.from_numpy(centroids[label][centroid_idx, :]).to(device)
feature_map[idx[:, 0], :, idx[:, 2], idx[:, 3]] = feature
with torch.no_grad():
x_fake = generator2(torch.cat((label_map, boundary_map, feature_map), dim=1))
return x_fake
for x, labels, insts, bounds in dataloader2:
x_fake = infer(labels.to(device), insts.to(device), bounds.to(device))
show_tensor_images(x_fake.to(x.dtype))
show_tensor_images(x)
break
```
# How to integrate Financial Data from Refinitiv Data Platform to Excel with Xlwings - Part 2
## Overview
This notebook is the second part of a series that demonstrates how to export financial data and reports from a Python/Jupyter application to an Excel report file using the xlwings CE and xlwings PRO libraries. The demo applications use content from [Refinitiv Data Platform (RDP)](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-apis) as an example dataset.
This second notebook focuses on the xlwings Reports, Embedded Code, and PDF features.
*Note*: All figures and reports demonstrate Time-Series 90 days data queried on 24th November 2020.
## Introduction to xlwings
[xlwings](https://www.xlwings.org) is a Python library that makes it easy to call Python from Excel and vice versa on Windows and macOS. The library lets you automate Excel from Python source code to produce reports or to interact with Jupyter notebook applications. It also allows you to replace VBA macros with Python Code or write UDFs (user defined functions - Windows only).
* The [xlwings CE](https://docs.xlwings.org/en/stable) is a free and open-source library ([BSD-licensed](https://opensource.org/licenses/BSD-3-Clause)) that provides the basic functionality to let developers integrate Python with Excel.
* The [xlwings PRO](https://www.xlwings.org/pro) provides more advanced features such as [reports](https://www.xlwings.org/reporting), embedded Python code in Excel, one-click installers for easy deployment, video training, dedicated support, and much more.
If you are not familiar with xlwings library or xlwings CE, please see more detail in the [first notebook](./rdp_xlwingsce_notebook.ipynb) and [How to integrate Financial Data from Refinitiv Data Platform to Excel with Xlwings - Part 1](https://developers.refinitiv.com/en/article-catalog/article/how-to-integrate-financial-data-from-refinitiv-data-platform-to-) article.
This notebook application is based on xlwings version **0.21.3**.
## Introduction to Refinitiv Data Platform (RDP) Libraries
Refinitiv provides a wide range of content and data that require multiple technologies, delivery mechanisms, data formats, and multiple APIs to access each content set. The [RDP Libraries](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries) are a suite of ease-of-use interfaces providing unified access to the streaming and non-streaming data services offered within the [Refinitiv Data Platform (RDP)](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-apis). The Libraries simplify access to data across various delivery modes such as Request-Response, Streaming, Bulk File, and Queues via a single library.
For more deep detail regarding the RDP Libraries, please refer to the following articles and tutorials:
- [Developer Article: Discover our Refinitiv Data Platform Library part 1](https://developers.refinitiv.com/article/discover-our-upcoming-refinitiv-data-platform-library-part-1).
- [Developer Article: Discover our Refinitiv Data Platform Library part 2](https://developers.refinitiv.com/en/article-catalog/article/discover-our-refinitiv-data-platform-library-part-2).
- [Refinitiv Data Platform Libraries Document: An Introduction page](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries/documentation).
### Disclaimer
As this notebook is based on alpha versions **1.0.0.a5** and **1.0.0.a7** of the Python library, the method signatures, data formats, etc. are subject to change.
## xlwings Reports
The [xlwings Reports](https://www.xlwings.org/reporting) package is part of [xlwings PRO](https://www.xlwings.org/pro) and is a solution for template-based Excel and PDF reporting. xlwings Reports lets business users design and maintain their reports directly within Excel without being dependent on a dedicated reporting team or Python programmer.
The main features of the xlwings Reports are the following:
- **Separation of code and design**: Users without coding skills can change the template on their own without having to touch the Python code.
- **Template variables**: Python variables (between curly braces) can be directly used in cells, e.g. ```{{ title }}```. They act as placeholders that will be replaced by the actual values.
- **Frames for dynamic tables**: Frames are vertical containers that dynamically align and style tables that have a variable number of rows.
You can get a free trial for xlwings PRO [here](https://www.xlwings.org/pro), then follow the instructions on [How to activate xlwings PRO](https://docs.xlwings.org/en/stable/installation.html#how-to-activate-xlwings-pro) page.
## Intel vs AMD Report Template Preparation
We will use Intel and AMD stock price comparison as example data for this xlwings Reports file.
Firstly, we create the Excel template as *part2_rdp_report_template.xlsx* file. The report template contains two sheets, one for daily pricing comparison and one for volume comparison.
The daily pricing sheet template example is as follows:

### Templates Variables
You will notice the double curly bracket placeholders like ```{{ intel_price_title }}```, ```{{ intel_price_df }}```, ```{{ amd_graph }}```, etc. in the Excel template file. They are called *template variables*. xlwings Reports will automatically replace those template variables with data (Pandas DataFrames, text, Matplotlib/Plotly charts, etc.) from the Python code.
### Frames
The other placeholder that you will notice is ```<frame>```. xlwings Reports uses Frames to align dynamic tables vertically: it will automatically insert rows for as long as your table is and apply the same styling as defined in your template, saving you hours of manual reformatting. Please see the example below:

Image from [xlwings Reporting page](https://www.xlwings.org/reporting).
### Excel Table
Let's take a closer look at the Daily Price Sheet: the ```{{ intel_price_df }}``` and ```{{ amd_price_df }}``` template variables are inside an Excel Table.

Using Excel tables is the recommended way to format tables, as the styling (themes and alternating colors) can be applied dynamically across columns and rows. You can create an Excel Table by going to ```Insert``` > ```Table``` and making sure that you activate ```My table has headers``` before clicking OK. Then add the placeholder as usual in the top-left cell of your template table.
*Note*:
* For Excel table support, you need at least [xlwings version 0.21.0](https://pypi.org/project/xlwings/0.21.0/).
* This feature supports Pandas DataFrame objects only (As of November 2020)
* When using Excel tables, DataFrame indices are excluded by default (xlwings version **0.21.0** to **0.21.2**).
* Since [xlwings version 0.21.3](https://docs.xlwings.org/en/stable/whatsnew.html#v0-21-3-nov-22-2020), the index is now transferred to Excel by default. If you would like to exclude the DataFrame index, you would need to use ```df.set_index('column_name')``` instead of ```df.reset_index()``` to hide the index.
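For example, a quick pandas sketch of the difference (the `ticker` column here is hypothetical, not part of the actual report template):

```python
import pandas as pd

df = pd.DataFrame({'ticker': ['INTC', 'AMD'], 'close': [45.2, 85.6]})

# reset_index() keeps a default RangeIndex, which xlwings 0.21.3+ would
# write to Excel as an extra column. set_index() promotes a real column
# to be the index instead, so no synthetic index column shows up.
df = df.set_index('ticker')
print(list(df.columns))  # ['close']
```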
### Multiple Sheets
xlwings Reports also supports multiple Excel sheets. Business users simply create new sheets in a single Excel template file and place template variables and frame tags in those sheets based on the business requirements. xlwings PRO automatically fills in the associated data in all sheets, so developers no longer need to create and manipulate new Excel sheets manually.
Let's demonstrate with Intel vs AMD Volume comparison Sheet template.

Now that the template file is ready, we can continue with the data-preparation side of the Python code.
### Shape Text
With the newly released [xlwings version 0.21.4](https://docs.xlwings.org/en/stable/whatsnew.html#v0-21-4-nov-23-2020), xlwings Reports supports template text in Shape objects such as boxes or rectangles via template variables. Please see more detail on [the Shape Text feature page](https://docs.xlwings.org/en/stable/reports.html#shape-text).
## Intel vs AMD Data Preparation
### Intel vs AMD 90 Days daily pricing data with RDP Content Layers
The RDP Content layer refers to logical market data objects, largely representing financial items like level 1 market data prices and quotes, Order Books, News, Historical Pricing, Company Research data, and so on.
Compared to the RDP Function Layer, the Content Layer gives developers much more flexibility:
- Richer / fuller response e.g. metadata, sentiment scores - where available
- Using Asynchronous or Event-Driven operating modes - in addition to Synchronous
- Streaming Level 1 Market Price Data - as well as Snapshot requests
The Content layer can easily be used by both professional developers and financial coders. It provides great flexibility for familiar and commonly used financial data models.
Please find more details regarding the Content Layer on the [RDP Libraries document page](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries/documentation).
```
# import xlwings and RDP libraries
import xlwings as xw
from xlwings.pro.reports import create_report
import refinitiv.dataplatform as rdp
# import all required libraries for this notebook
import datetime
import configparser as cp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.ticker as tick
import json
import asyncio
```
You should save a text file named `rdp.cfg` with the following contents:
```
[rdp]
username = YOUR_RDP_EMAIL_USERNAME
password = YOUR_RDP_PASSWORD
app_key = YOUR_RDP_APP_KEY
```
This file should be readily available (e.g. in the current working directory) for the next steps.
```
cfg = cp.ConfigParser()
cfg.read('rdp.cfg')
```
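To sanity-check the parsing without touching the file system, the same content can be fed to `ConfigParser.read_string()` (the values below are placeholders matching the template above, not real credentials):

```python
import configparser as cp

cfg = cp.ConfigParser()
# Same structure as rdp.cfg; values are placeholders
cfg.read_string("""
[rdp]
username = YOUR_RDP_EMAIL_USERNAME
password = YOUR_RDP_PASSWORD
app_key = YOUR_RDP_APP_KEY
""")
print(cfg.sections())          # ['rdp']
print(cfg['rdp']['app_key'])   # YOUR_RDP_APP_KEY
```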
The RDP Libraries let applications consume data from the following platforms:
- DesktopSession (Eikon/Refinitiv Workspace)
- PlatformSession (RDP, Refinitiv Real-Time Optimized)
- DeployedPlatformSession (deployed Refinitiv Real-Time/ADS)
This Jupyter notebook focuses on the *PlatformSession* only. However, the main logic for the other session types is the same when interacting with the xlwings library.
```
# Open RDP Platform Session
session = rdp.open_platform_session(
cfg['rdp']['app_key'],
rdp.GrantPassword(
username = cfg['rdp']['username'],
password = cfg['rdp']['password']
)
)
session.get_open_state()
```
Firstly, we define all the necessary variables for requesting data.
```
# Define RICs
intel_ric = 'INTC.O'
amd_ric = 'AMD.O'
fields = ['BID','ASK','OPEN_PRC','HIGH_1','LOW_1','TRDPRC_1','BLKVOLUM']
count = 90
```
This notebook example uses the Python [asyncio library](https://docs.python.org/3.7/library/asyncio.html) to call the RDP Content Layer's ```HistoricalPricing.get_summaries_async()``` function and retrieve data asynchronously.
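The concurrency pattern itself can be demonstrated with plain coroutines standing in for the RDP calls (in standalone-script form with `asyncio.run()`; inside Jupyter you would submit the gathered tasks to the notebook's running event loop, as the cell below does):

```python
import asyncio

async def fetch(name, delay):
    # Dummy stand-in for an async RDP request such as get_summaries_async()
    await asyncio.sleep(delay)
    return f'{name} data'

async def main():
    # gather() schedules both requests concurrently and
    # returns their results in the order they were passed in
    return await asyncio.gather(fetch('INTC.O', 0.01), fetch('AMD.O', 0.01))

intel_result, amd_result = asyncio.run(main())
print(intel_result, amd_result)  # INTC.O data AMD.O data
```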
```
help(rdp.HistoricalPricing.get_summaries_async)
# Run two requests processes concurrently.
tasks = asyncio.gather(
rdp.HistoricalPricing.get_summaries_async(intel_ric, interval = rdp.Intervals.DAILY, fields = fields, count = count),
rdp.HistoricalPricing.get_summaries_async(amd_ric, interval = rdp.Intervals.DAILY, fields = fields, count = count)
)
asyncio.get_event_loop().run_until_complete(tasks)
# Assign the request results to the intel_interday and amd_interday variables
intel_interday, amd_interday = tasks.result()
```
Once the task (requesting daily data for Intel and AMD) is complete, we get the response data as a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) object via the ```<response>.data.df``` attribute.
```
print("\nHistorical Pricing Summaries - Interday - Intel")
intel_df_pricing = intel_interday.data.df
display(intel_df_pricing)
print("\nHistorical Pricing Summaries - Interday - AMD")
amd_df_pricing = amd_interday.data.df
display(amd_df_pricing)
```
Now we have the raw Intel and AMD daily price data. The next phase is restructuring the data to make it easier to read and to plot the report graphs.
### Restructure DataFrame
Please note that the restructuring steps are identical to the [part 1 notebook](./rdp_xlwingsce_notebook.ipynb). We start by naming the index column "Date".
```
intel_df_pricing.index.name = 'Date'
amd_df_pricing.index.name = 'Date'
intel_df_pricing.head(5)
```
Next, we change the data type of all non-Date columns from string to float.
```
for column in intel_df_pricing:
intel_df_pricing[column]=intel_df_pricing[column].astype(float)
for column in amd_df_pricing:
amd_df_pricing[column]=amd_df_pricing[column].astype(float)
```
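As a side note, the per-column loop above can usually be replaced with a single vectorized call; a quick sketch with toy data:

```python
import pandas as pd

# Toy frame with string-typed price columns, like the raw RDP response
df = pd.DataFrame({'BID': ['50.1', '50.3'], 'ASK': ['50.2', '50.4']})

# astype(float) converts every column at once instead of looping
df = df.astype(float)
print(df.dtypes)
```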
We move the DataFrame's Date index into a data column. This lets us plot graphs using **Date** as the X-axis.
```
intel_df_pricing.reset_index(level=0, inplace=True)
amd_df_pricing.reset_index(level=0, inplace=True)
intel_df_pricing.head(5)
```
Then we sort the data by date in ascending order.
```
# Sort DataFrame by Date
intel_df_pricing.sort_values('Date',ascending=True,inplace=True)
amd_df_pricing.sort_values('Date',ascending=True,inplace=True)
```
### Plotting Graphs
We use [Matplotlib](https://matplotlib.org/index.html)'s [Pyplot](https://matplotlib.org/api/pyplot_api.html) interface to plot the Intel and AMD daily pricing graphs. Each graph shows interday pricing data for the last 90 days.
The source code also creates [Pyplot Figure](https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.pyplot.figure.html) objects, which we will later pass to the report file as pictures.
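The same figure-then-plot pattern in miniature, with toy data (the Agg backend is forced only so the sketch runs headless; the notebook can use its default backend):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this standalone sketch
import matplotlib.pyplot as plt
import pandas as pd

# Toy daily pricing data standing in for the RDP response
df = pd.DataFrame({'Date': pd.date_range('2020-08-25', periods=5),
                   'TRDPRC_1': [54.1, 54.5, 53.9, 55.0, 54.7]}).set_index('Date')

figure = plt.figure()
plt.title('Toy interday data')
# Plotting onto the figure's current axes keeps the Figure object
# around so it can later be handed to the report as a picture
df.plot(kind='line', ax=figure.gca(), y=['TRDPRC_1'], grid=True)
print(len(figure.gca().lines))  # one line per plotted column
```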
```
# Plotting a Graph for Intel
columns = ['OPEN_PRC','HIGH_1','LOW_1','TRDPRC_1']
intel_df_pricing.set_index('Date',drop=True,inplace=True)
intel_figure = plt.figure()
plt.xlabel('Date', fontsize='large')
plt.ylabel('Price', fontsize='large')
# Create graph title from Company and RIC names dynamically.
plt.ticklabel_format(style = 'plain')
plt.title('Intel interday data for last 90 days', color='black',fontsize='x-large')
ax = intel_figure.gca()
intel_df_pricing.plot(kind='line', ax = intel_figure.gca(),y=columns,figsize=(14,7) , grid = True)
plt.show()
# Plotting a Graph for AMD
columns = ['OPEN_PRC','HIGH_1','LOW_1','TRDPRC_1']
amd_df_pricing.set_index('Date',drop=True,inplace=True)
amd_figure = plt.figure()
plt.xlabel('Date', fontsize='large')
plt.ylabel('Price', fontsize='large')
# Create graph title from Company and RIC names dynamically.
plt.ticklabel_format(style = 'plain')
plt.title('AMD interday data for last 90 days', color='black',fontsize='x-large')
ax = amd_figure.gca()
amd_df_pricing.plot(kind='line', ax = amd_figure.gca(),y=columns,figsize=(14,7), grid = True )
plt.show()
```
Now we have the charts and Figure objects ready for the Pricing Sheet report. Next, we create the volume-comparison chart for the Intel vs AMD Volume comparison sheet.
### Intel vs AMD Volume Comparison
The next chart compares block trading volume, i.e. the *BLKVOLUM* data field. It contains both Intel and AMD data in the same figure.
```
columns = ['BLKVOLUM']
# Intel
intel_amd_volume_figure = plt.figure()
plt.xlabel('Date', fontsize='large')
plt.ylabel('Trading Volume', fontsize='large')
# Create graph title from Company and RIC names dynamically.
plt.ticklabel_format(style = 'plain')
plt.title('AMD vs Intel total block trading volume comparison for last 90 days', color='black',fontsize='x-large')
ax = intel_amd_volume_figure.gca()
intel_df_pricing.plot(kind='line', ax = intel_amd_volume_figure.gca(),y=columns,figsize=(14,7) , label=['Intel trading volume'],grid = True)
# AMD
amd_df_pricing.plot(kind='line', ax = ax ,y=columns,figsize=(14,7), label=['AMD trading volume'],grid = True)
plt.show()
```
## Generate Report with xlwings PRO
Now all the data (DataFrames and charts) is ready. We demonstrated the [Reports API](https://docs.xlwings.org/en/stable/api.html#reports-api) ```create_report()``` function in the [part-1 notebook](./rdp_xlwingsce_notebook.ipynb) with the following example.
```
wb = create_report(
'rdp_report_template.xlsx',
'rdp_report_pro.xlsx',
historical_title=historical_title,
df_historical=df_historical.head(10),
graph= fig
)
```
The above code is fine for small amounts of data. This part-2 notebook shows more ways developers can work with the ```create_report()``` function to support various requirements and template variables.
Firstly, let's define static texts and template/report file location.
```
# Define Static texts and template/report file location.
intel_price_title='Intel Historical Data'
amd_price_title = 'AMD Historical Data'
template_file = 'part2_rdp_report_template.xlsx'
report_file = 'part2_rdp_intel_vs_amd.xlsx'
```
Next, we create the Python [Dictionary](https://docs.python.org/3.7/tutorial/datastructures.html#dictionaries) object to collect all data for template variables. Please note that the Dictionary keys must have the same names as template variables.
#### xlwings versions 0.21.0 to 0.21.2
When using Excel tables with these versions, DataFrame indices are excluded by default. We would like to include them in the report, so we reset the index with the [df.reset_index()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html) function before passing the DataFrame to ```create_report```.
#### xlwings version 0.21.3
Since [xlwings version 0.21.3](https://docs.xlwings.org/en/stable/whatsnew.html#v0-21-3-nov-22-2020), the index is now transferred to Excel by default. If you would like to exclude the DataFrame index, you would need to use ```df.set_index('column_name')``` instead of ```df.reset_index()``` to hide the index.
This notebook application is based on xlwings version 0.21.3.
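The key-to-variable matching works through plain keyword-argument unpacking; a minimal sketch with a hypothetical `render()` function standing in for ```create_report()```:

```python
def render(template_vars, **data):
    # Hypothetical stand-in: like create_report(), it receives the dictionary
    # entries as keyword arguments whose names must match the template variables
    return {name: value for name, value in data.items() if name in template_vars}

template_vars = {'intel_price_title', 'amd_price_title'}
data = dict(intel_price_title='Intel Historical Data',
            amd_price_title='AMD Historical Data')

# **data unpacks each key as a separate keyword argument
filled = render(template_vars, **data)
print(sorted(filled))  # ['amd_price_title', 'intel_price_title']
```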
```
# Create a Dictionary to collect all report data
data = dict(
intel_price_title=intel_price_title,
intel_price_df = intel_df_pricing.head(15),
intel_graph = intel_figure,
amd_price_title = amd_price_title,
amd_price_df = amd_df_pricing.head(15),
amd_graph = amd_figure,
intel_amd_volume_graph = intel_amd_volume_figure,
)
```
Then we call the create_report function.
```
wb = create_report(
template_file,
report_file,
**data
)
```
The above ```create_report()``` call generates the *part2_rdp_intel_vs_amd.xlsx* Excel report file with the format/style defined in part2_rdp_report_template.xlsx and the data that we pass to the function. With the default parameters, the part2_rdp_intel_vs_amd.xlsx file will be opened automatically.


```
# close this Open Excel Report file
wb.close()
```
Developers can control the Excel instance by passing in an xlwings App instance, for example to run the report in a separate, hidden instance of Excel. This is useful for building an application that runs as a background service, generating reports daily, weekly, or monthly based on business requirements.
```
app = xw.App(visible=False)
wb = create_report(
template_file,
'part_2_daily_report.xlsx',
app = app,
**data
)
app.quit() # Close the wb and quit the Excel instance
```
Now the *part_2_daily_report.xlsx* Excel report file is created in the background.
## Close RDP Session
```
# -- Close Session, just calls close_session() function
rdp.close_session()
print(session.get_open_state())
```
## Embedded Code
The [xlwings Embedded Code](https://docs.xlwings.org/en/stable/deployment.html#embedded-code) feature is part of [xlwings PRO](https://www.xlwings.org/pro). It allows developers to store Python code directly in Excel so they don't have to distribute separate Python files. Business users can then consume RDP content from the macro-enabled Excel file (*xlsm*) directly without any separate Python files (a [Python](https://www.python.org/downloads/) or [Anaconda](https://www.anaconda.com/distribution/)/[Miniconda](https://docs.conda.io/en/latest/miniconda.html) installation is still required).
Developers can implement the Python application that consumes RDP data, then run the ```xlwings code embed``` command in the console to import all Python files from the current directory and paste them into sheets with the same names in the currently active Excel workbook.
A Python console application is better suited than a Jupyter notebook to demonstrate the Embedded Code feature. Please refer to the following RDP - IPA Bond example files in the *python_embedded* folder of this xlwings RDP project:
- rdp_ipa_bond.py: Python application source code.
- rdp_ipa_bond.xlsm: Excel file with embedded code from rdp_ipa_bond.py file.
- README.md: Embedded Code example readme file.
The embedded-code example shows how to bring bond analytics data into the macro-enabled Excel file via the RDP Libraries Financial Contracts API (the ```rdp.get_bond_analytics()``` function).
The following is an example of an Excel file with separate data and embedded Python sheets.


Once you have set up the [xlwings Add-in: Run main](https://docs.xlwings.org/en/stable/addin.html#run-main) and [Embedded Code](https://docs.xlwings.org/en/stable/deployment.html#embedded-code) features, you can run the embedded Python sheet by going to the xlwings toolbar and clicking the ```Run Main``` button to get the IPA data and display it in the Excel sheet.
Alternatively, you can replace the xlwings add-in with a VBA module, which lets you run embedded Python code from the VBA function ```RunPython ("import mymodule;mymodule.myfunction()")``` without installing the add-in. Please see more detail on the [xlwings add-in: quickstart command example](https://docs.xlwings.org/en/stable/vba.html#xlwings-add-in) page.

## Exporting Excel report to PDF
From xlwings version *0.21.1* onward, xlwings CE can export the whole Excel workbook or a subset of its sheets to a PDF file with the ```Book.to_pdf()``` function. Please see more detail regarding ```to_pdf``` on the [xlwings API reference page](https://docs.xlwings.org/en/stable/api.html#xlwings.Book.to_pdf).
We will demonstrate this feature with a short piece of Python code from the [first notebook](./rdp_xlwingsce_notebook.ipynb) that creates the Intel daily pricing report in PDF format.
Firstly, create a new blank Excel report file and set a basic Report style.
```
wb = xw.Book() # Create a new Excel file; xw.Book(filename) would open an existing file
intel_price_sheet = wb.sheets[0]
intel_price_sheet.name = 'Intel Pricing'
intel_price_sheet.range("A1").value = 'Intel Pricing'
intel_price_sheet.range("A1").api.Font.Size = 14 # Change font size
intel_price_sheet.range("A1").api.Font.ColorIndex = 2 # Change font color
intel_price_sheet.range('A1:H1').color = (0,0,255) # Change cell background color
# Set Pandas DataFrame object to newly created Excel File
intel_price_sheet.range("A2").value = intel_df_pricing.head(15)
# Set data table format
intel_price_sheet.range('2:2').api.Font.Bold = True # Make the column headers (row 2) bold
intel_price_sheet.range('A2:H2').color = (144,238,144) # Change cell background color
intel_price_sheet.autofit('c') # Autofit the column widths
```
Next, find the last row of the report table to use as the position at which to place the graph (```intel_figure```).
```
# historical_sheet.cells.last_cell.row = row of the lower right cell
'''
change to your specified column, then go up until you hit a non-empty cell
'''
intel_price_last_row = intel_price_sheet.range((intel_price_sheet.cells.last_cell.row, 1)).end('up').row
rng = intel_price_sheet.range('A{row}'.format(row = intel_price_last_row + 1))
# Resize the intel_figure Figure object
intel_figure.set_figheight(6)
intel_figure.set_figwidth(6)
# Add figure to Excel report file as a picture
intel_price_sheet.pictures.add(intel_figure, name='MyPlot', update=True, top=rng.top, left=rng.left)
```
### Save to PDF file
Then call the ```Book.to_pdf()``` function to save this Excel report as a PDF file.
```
wb.to_pdf('./part_2_xlwings_to_pdf.pdf') # defaults to the same name as the workbook, in the same directory
```
The Excel report with RDP content will be saved as *part_2_xlwings_to_pdf.pdf* file.

Finally, we close the Excel file without saving.
```
wb.close()
```
## Conclusion and Next Step
The xlwings CE library lets Python developers integrate data with Excel in a simple way. xlwings PRO allows Python developers and business users to work together to integrate data into Excel or PDF report files much more easily than with xlwings CE.
xlwings Reports lets business and financial teams freely design reports to match their business requirements, while Python developers/data engineers can focus on retrieving and optimizing data without worrying about report design or look & feel. xlwings Reports also helps developers automate the report-generation process on a schedule (such as daily, weekly, or monthly reports).
If users want dynamic data and charts in the report file, the xlwings Embedded Code feature lets them run Python code in the macro-enabled Excel report directly. Users no longer need to run separate Python code themselves or wait for developers to generate a report file for them.
The newly introduced ```to_pdf``` feature also lets developers export Excel workbooks/sheets to PDF files. This helps business users who do not have [Microsoft Office](https://www.office.com/) installed still open the report.
At the same time, the [Refinitiv Data Platform (RDP) Libraries](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-apis) let developers rapidly access Refinitiv platform content with a few lines of code that are easy to understand and maintain. Developers can focus on implementing business logic or analyzing data without worrying about connection and authentication details for the Refinitiv platforms.
The integration between Refinitiv APIs and xlwings is not limited to the RDP Libraries. Any [Refinitiv API](https://developers.refinitiv.com/en/api-catalog?i=1;q1=page-type%3Aapi;q2=devportal%3Alanguages~2Fpython;sort=title;sp_c=12;sp_cs=UTF-8;sp_k=devportal-prod;view=xml;x1=w-page-type-id;x2=api-language) that supports the Python programming language, such as the [Eikon Data API](https://developers.refinitiv.com/en/api-catalog/eikon/eikon-data-api) ([Eikon Data API-xlwings article](https://developers.refinitiv.com/en/article-catalog/article/financial-reporting-with-eikon-and-excel)), the [RKD API](https://developers.refinitiv.com/en/api-catalog/refinitiv-knowledge-direct/refinitiv-knowledge-direct-api-rkd-api), or [DataStream Web Service - Python](https://developers.refinitiv.com/en/api-catalog/eikon/datastream-web-service/), can work with the xlwings library using the same concepts and code logic as these RDP Libraries notebook examples.
## References
You can find more details regarding the Refinitiv Data Platform Libraries, xlwings, and related technologies for this notebook from the following resources:
* [Refinitiv Data Platform (RDP) Libraries](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries) on the [Refinitiv Developer Community](https://developers.refinitiv.com/) web site.
* [xlwings web site](https://www.xlwings.org/).
* [xlwings PRO page](https://www.xlwings.org/pro).
* [xlwings PRO Document page](https://docs.xlwings.org/en/stable/pro.html).
* [xlwings API Reference page](https://docs.xlwings.org/en/stable/api.html).
* [xlwings Reports page](https://www.xlwings.org/reporting).
* [xlwings Embedded Code page](https://docs.xlwings.org/en/stable/deployment.html#embedded-code).
* [RDP Libraries Quick Start Guide page](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries/quick-start).
* [RDP Libraries Tutorial page](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-platform-libraries/tutorials).
* [Discover our Refinitiv Data Platform Library (part 1)](https://developers.refinitiv.com/en/article-catalog/article/discover-our-refinitiv-data-platform-library-part-1).
* [Discover our Refinitiv Data Platform Library (part 2)](https://developers.refinitiv.com/en/article-catalog/article/discover-our-refinitiv-data-platform-library-part-2).
* [How to integrate Financial Data from Refinitiv Data Platform to Excel with Xlwings - Part 1](https://developers.refinitiv.com/en/article-catalog/article/how-to-integrate-financial-data-from-refinitiv-data-platform-to-).
* [Financial Reporting with Eikon, xlwings and Excel](https://developers.refinitiv.com/en/article-catalog/article/financial-reporting-with-eikon-and-excel).
For any questions related to this article or Refinitiv Data Platform Libraries, please use the Developers Community [Q&A Forum](https://community.developers.refinitiv.com/spaces/321/refinitiv-data-platform-libraries.html).
```
import numpy as np
import casadi as cas
class MPC:
def __init__(self, dt):
self.dt = dt
self.k_total = 1.0 # the overall weighting of the total costs
self.theta_iamb = np.pi/4 # my theta wrt to ambulance
self.L = 4.5
self.W = 1.8
self.n_circles = 2
## State Costs Constants
self.k_x = 0
self.k_y = 0
self.k_phi = 0
self.k_delta = 0
self.k_v = 0
self.k_s = -0.1
## Control Costs Constants
self.k_u_delta = 10.0
self.k_u_v = 1.0
## Derived State Costs Constants
self.k_lat = 10.0
self.k_lon = 1.0
self.k_phi_error = 1.0
self.k_phi_dot = 1.0
self.k_x_dot = 0.0
self.k_change_u_v = 1.0
self.k_change_u_delta = 1.0
self.k_final = 0
# Constraints
self.max_delta_u = 5 * np.pi/180
self.max_acceleration = 4
self.max_v_u = self.max_acceleration
self.min_v_u = -self.max_v_u
self.max_v = 25 * 0.447 # m/s
self.min_v = 0.0
self.max_y = np.infty
self.min_y = -np.infty
self.max_X_dev = np.infty
self.max_Y_dev = np.infty
self.f = self.gen_f_vehicle_dynamics()
self.fd = None
# Distance used for collision avoidance
self.circle_radius = np.sqrt(2) * self.W/2.0
self.min_dist = 2 * self.circle_radius # 2 times the radius of 1.5
def generate_lateral_cost(self, X, X_desired):
lateral_cost = np.sum([(-cas.sin(X_desired[2,k]) * (X[0,k]-X_desired[0,k]) +
cas.cos(X_desired[2,k]) * (X[1,k]-X_desired[1,k]))**2
for k in range(X.shape[1])])
return lateral_cost
def generate_longitudinal_cost(self, X, X_desired):
longitudinal_cost = np.sum([(cas.cos(X_desired[2,k]) * (X[0,k]-X_desired[0,k]) +
cas.sin(X_desired[2,k]) * (X[1,k]-X_desired[1,k]))**2
for k in range(X.shape[1])])
return longitudinal_cost
def generate_phidot_cost(self, X):
phid = X[4,:] * cas.tan(X[3,:]) / self.L
phid_cost = cas.sumsqr(phid)
return phid_cost
def generate_costs(self, X, U, X_desired):
self.u_delta_cost = cas.sumsqr(U[0,:])
self.u_v_cost = cas.sumsqr(U[1,:])
self.lon_cost = self.generate_longitudinal_cost(X, X_desired)
self.phi_error_cost = cas.sumsqr(X_desired[2,:]-X[2,:])
X_ONLY = False
if X_ONLY:
self.s_cost = cas.sumsqr(X[0,-1])
self.lat_cost = cas.sumsqr(X[1,:])
else:
self.lat_cost = self.generate_lateral_cost(X, X_desired)
self.s_cost = cas.sumsqr(X[5,-1])
self.final_costs = self.generate_lateral_cost(X[:,-5:],X_desired[:,-5:]) + cas.sumsqr(X_desired[2,-5:]-X[2,-5:])
self.v_cost = cas.sumsqr(X[4, :])
self.phidot_cost = self.generate_phidot_cost(X)
N = U.shape[1]
self.change_u_delta = cas.sumsqr(U[0,1:N-1] - U[0,0:N-2])
self.change_u_v = cas.sumsqr(U[1,1:N-1] - U[1,0:N-2])
self.x_cost = cas.sumsqr(X[0,:])
self.x_dot_cost = cas.sumsqr(self.dt * X[4, :] * cas.cos(X[2]))
def total_cost(self):
all_costs = [
self.k_u_delta * self.u_delta_cost,
self.k_u_v * self.u_v_cost,
self.k_lat * self.lat_cost,
self.k_lon * self.lon_cost,
self.k_phi_error * self.phi_error_cost,
self.k_phi_dot * self.phidot_cost,
self.k_s * self.s_cost,
self.k_v * self.v_cost,
self.k_change_u_v * self.change_u_v,
self.k_change_u_delta * self.change_u_delta,
self.k_final * self.final_costs,
self.k_x * self.x_cost,
self.k_x_dot * self.x_dot_cost
]
all_costs = np.array(all_costs)
total_cost = np.sum(all_costs)
return total_cost, all_costs
def add_state_constraints(self, opti, X, U, X_desired, T, x0=None):
if x0 is None:
x_start = 0
else:
x_start = x0[0]
opti.subject_to( opti.bounded(-1, X[0,:], np.infty )) #Constraints on X, Y
opti.subject_to( opti.bounded(self.min_y+self.W/2.0, X[1,:], self.max_y-self.W/2.0) )
opti.subject_to( opti.bounded(-np.pi/2, X[2,:], np.pi/2) ) #no crazy angle
opti.subject_to(opti.bounded(-self.max_delta_u, U[0,:], self.max_delta_u))
opti.subject_to(opti.bounded(self.min_v_u, U[1,:], self.max_v_u)) # 0-60 around 4 m/s^2
opti.subject_to(opti.bounded(self.min_v, X[4,:], self.max_v))
# Lane Deviations
if self.max_X_dev < np.infty:
opti.subject_to( opti.bounded(-self.max_X_dev, X[0,:] - X_desired[0,:], self.max_X_dev))
opti.subject_to( opti.bounded(-self.max_Y_dev, X[1,:] - X_desired[1,:], self.max_Y_dev))
def add_dynamics_constraints(self, opti, X, U, X_desired, x0):
if self.fd is None:
raise Exception("No Desired Trajectory Defined")
# State Dynamics
N = U.shape[1]
for k in range(N):
# opti.subject_to( X[:, k+1] == F(X[:, k], U[:, k], dt))
opti.subject_to( X[:, k+1] == self.F_kutta(self.f, X[:, k], U[:, k]))
for k in range(N+1):
opti.subject_to( X_desired[:, k] == self.fd(X[-1, k]) ) #This should be the trajectory dynamic constraint
opti.subject_to(X[:,0] == x0)
def F_kutta(self, f, x_k, u_k):
k1 = f(x_k, u_k)
k2 = f(x_k+self.dt/2*k1, u_k)
k3 = f(x_k+self.dt/2*k2, u_k)
k4 = f(x_k+self.dt*k3, u_k)
x_next = x_k + self.dt/6*(k1+2*k2+2*k3+k4)
return x_next
def gen_f_vehicle_dynamics(self):
X = cas.MX.sym('X')
Y = cas.MX.sym('Y')
Phi = cas.MX.sym('Phi')
Delta = cas.MX.sym('Delta')
V = cas.MX.sym('V')
s = cas.MX.sym('s')
delta_u = cas.MX.sym('delta_u')
v_u = cas.MX.sym('v_u')
x = cas.vertcat(X, Y, Phi, Delta, V, s)
u = cas.vertcat(delta_u, v_u)
ode = cas.vertcat(V * cas.cos(Phi),
V * cas.sin(Phi),
V * cas.tan(Delta) / self.L,
delta_u,
v_u,
V)
f = cas.Function('f',[x,u],[ode],['x','u'],['ode'])
return f
def gen_f_desired_lane(self, world, lane_number, right_direction=True):
if not right_direction:
raise Exception("Haven't implemented left lanes")
s = cas.MX.sym('s')
xd = s
yd = world.get_lane_centerline_y(lane_number, right_direction)
phid = 0
des_traj = cas.vertcat(xd, yd, phid)
fd = cas.Function('fd',[s],[des_traj],['s'],['des_traj'])
return fd
def get_car_circles(self, X, n_circles=2):
if n_circles==2:
x_circle_front = X[0:2,:] + (self.W/2) * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
x_circle_rear = X[0:2,:] - (self.W/2) * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
# x_circle_front = X[0:2,:] + (self.L/2 - self.W/2) * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
# x_circle_rear = X[0:2,:] - (self.L/2 - self.W/2) * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
radius = 1.5
min_dist = 2*radius
centers = [x_circle_rear, x_circle_front]
elif n_circles==3:
x_circle_mid = X[0:2,:]
dist_from_center = self.L/2.0 - self.W/2
x_circle_rear = X[0:2,:] - dist_from_center * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
x_circle_front = X[0:2,:] + dist_from_center * cas.vertcat(cas.cos(X[2,:]), cas.sin(X[2,:]))
centers = (x_circle_rear, x_circle_mid, x_circle_front)
radius = 1.1
min_dist = 2*radius
return centers, radius
def get_car_circles_np(self, X):
if self.n_circles == 2:
# x_circle_front = X[0:2,:] + (self.L/2 - self.W/2) * np.array([np.cos(X[2,:]),np.sin(X[2,:])])
# x_circle_rear = X[0:2,:] - (self.L/2 - self.W/2) * np.array([np.cos(X[2,:]),np.sin(X[2,:])])
x_circle_front = X[0:2,:] + (self.W/2) * np.array([np.cos(X[2,:]),np.sin(X[2,:])])
x_circle_rear = X[0:2,:] - (self.W/2) * np.array([np.cos(X[2,:]),np.sin(X[2,:])])
radius = 1.5
# min_dist = 2*radius
return [x_circle_rear, x_circle_front], radius
elif self.n_circles == 3:
x_circle_mid = X[0:2,:]
dist_from_center = self.L/2.0 - self.W/2
x_circle_rear = X[0:2,:] - dist_from_center* np.array([np.cos(X[2,:]),np.sin(X[2,:])])
x_circle_front = X[0:2,:] + dist_from_center* np.array([np.cos(X[2,:]),np.sin(X[2,:])])
centers = [x_circle_rear, x_circle_mid, x_circle_front]
radius = 1.1
# min_dist = 2*radius
return centers, radius
def load_state(file_name):
x1 = np.load(file_name + "x1.npy",allow_pickle=False)
u1 = np.load(file_name + "u1.npy",allow_pickle=False)
x1_des = np.load(file_name + "x1_des.npy", allow_pickle=False)
x2 = np.load(file_name + "x2.npy",allow_pickle=False)
u2 = np.load(file_name + "u2.npy",allow_pickle=False)
x2_des = np.load(file_name + "x2_des.npy",allow_pickle=False)
xamb = np.load(file_name + "xamb.npy",allow_pickle=False)
uamb = np.load(file_name + "uamb.npy",allow_pickle=False)
xamb_des = np.load(file_name + "xamb_des.npy",allow_pickle=False)
return x1, u1, x1_des, x2, u2, x2_des, xamb, uamb, xamb_des
def save_state(file_name, x1, u1, x1_des, x2, u2, x2_des, xamb, uamb, xamb_des):
np.save(file_name + "x1", x1,allow_pickle=False)
np.save(file_name + "u1", u1,allow_pickle=False)
np.save(file_name + "x1_des", x1_des, allow_pickle=False)
np.save(file_name + "x2", x2,allow_pickle=False)
np.save(file_name + "u2", u2,allow_pickle=False)
np.save(file_name + "x2_des", x2_des, allow_pickle=False)
np.save(file_name + "xamb", xamb,allow_pickle=False)
np.save(file_name + "uamb", uamb,allow_pickle=False)
np.save(file_name + "xamb_des", xamb_des, allow_pickle=False)
return file_name
# import numpy as np
# import casadi as cas
# import src.MPC_Casadi as mpc
class IterativeBestResponseMPCMultiple:
### We always assume that car1 is the one being optimized
def __init__(self, responseMPC, ambulanceMPC, otherMPClist):
self.responseMPC = responseMPC
self.otherMPClist = otherMPClist
self.ambMPC = ambulanceMPC
self.opti = cas.Opti()
self.min_dist = 2 * 1.5 # 2 times the radius of 1.5
self.k_slack = 99999
self.k_CA = 0
self.collision_cost = 0
def generate_optimization(self, N, T, x0, x0_amb, x0_other, print_level=5, slack=True):
# t_amb_goal = self.opti.variable()
n_state, n_ctrl = 6, 2
#Variables
self.x_opt = self.opti.variable(n_state, N+1)
self.u_opt = self.opti.variable(n_ctrl, N)
self.x_desired = self.opti.variable(3, N+1)
p = self.opti.parameter(n_state, 1)
# Presume to be given...and we will initialize soon
if self.ambMPC:
self.xamb_opt = self.opti.variable(n_state, N+1)
self.uamb_opt = self.opti.parameter(n_ctrl, N)
self.xamb_desired = self.opti.variable(3, N+1)
pamb = self.opti.parameter(n_state, 1)
self.allother_x_opt = [self.opti.variable(n_state, N+1) for i in self.otherMPClist]
self.allother_u_opt = [self.opti.parameter(n_ctrl, N) for i in self.otherMPClist]
self.allother_x_desired = [self.opti.variable(3, N+1) for i in self.otherMPClist]
self.allother_p = [self.opti.parameter(n_state, 1) for i in self.otherMPClist]
#### Costs
self.responseMPC.generate_costs(self.x_opt, self.u_opt, self.x_desired)
self.car1_costs, self.car1_costs_list = self.responseMPC.total_cost()
if self.ambMPC:
self.ambMPC.generate_costs(self.xamb_opt, self.uamb_opt, self.xamb_desired)
self.amb_costs, self.amb_costs_list = self.ambMPC.total_cost()
else:
self.amb_costs, self.amb_costs_list = 0, []
## We don't need the costs for the other vehicles
# self.slack1, self.slack2, self.slack3 = self.generate_slack_variables(slack, N)
# We will do collision avoidance for ego vehicle with all other vehicles
self.slack_vars_list = self.generate_slack_variables(slack, N, len(self.otherMPClist), n_circles = self.responseMPC.n_circles)
if self.ambMPC:
self.slack_amb = self.generate_slack_variables(slack, N, 1)[0]
self.slack_cost = 0
for slack_var in self.slack_vars_list:
self.slack_cost += cas.sumsqr(slack_var)
if self.ambMPC:
self.slack_cost += cas.sumsqr(self.slack_amb)
self.response_svo_cost = np.cos(self.responseMPC.theta_iamb)*self.car1_costs
self.other_svo_cost = np.sin(self.responseMPC.theta_iamb)*self.amb_costs
self.total_svo_cost = self.response_svo_cost + self.other_svo_cost + self.k_slack * self.slack_cost + self.k_CA * self.collision_cost
######## optimization ##################################
self.opti.minimize(self.total_svo_cost)
##########################################################
#constraints
self.responseMPC.add_dynamics_constraints(self.opti, self.x_opt, self.u_opt, self.x_desired, p)
self.responseMPC.add_state_constraints(self.opti, self.x_opt, self.u_opt, self.x_desired, T, x0)
if self.ambMPC:
self.ambMPC.add_dynamics_constraints(self.opti, self.xamb_opt, self.uamb_opt, self.xamb_desired, pamb)
for i in range(len(self.otherMPClist)):
self.otherMPClist[i].add_dynamics_constraints(self.opti,
self.allother_x_opt[i], self.allother_u_opt[i], self.allother_x_desired[i],
self.allother_p[i])
## Generate the circles
# Proxy for the collision avoidance points on each vehicle
self.c1_vars = [self.opti.variable(2, N+1) for c in range(self.responseMPC.n_circles)]
if self.ambMPC:
ca_vars = [self.opti.variable(2, N+1) for c in range(self.responseMPC.n_circles)]
self.other_circles = [[self.opti.variable(2, N+1) for c in range(self.responseMPC.n_circles)] for i in range(len(self.allother_x_opt))]
self.collision_cost = 0
# Collision Avoidance
for k in range(N+1):
# center_offset
centers, response_radius = self.responseMPC.get_car_circles(self.x_opt[:,k])
for c1_circle in centers:
for i in range(len(self.allother_x_opt)):
other_centers, other_radius = self.otherMPClist[i].get_car_circles(self.allother_x_opt[i][:,k])
for ci in range(len(other_centers)):
dist_sqr = cas.sumsqr(c1_circle - other_centers[ci])
self.opti.subject_to(dist_sqr - (response_radius + other_radius)**2 > 0 - self.slack_vars_list[i][ci,k])
dist_btw_object = cas.sqrt(dist_sqr) - 1.1*(response_radius + other_radius)
self.collision_cost += 1/dist_btw_object**4
# Don't forget the ambulance
if self.ambMPC:
amb_circles, amb_radius = self.ambMPC.get_car_circles(self.xamb_opt[:,k])
for ci in range(len(amb_circles)):
self.opti.subject_to(cas.sumsqr(c1_circle - amb_circles[ci]) > (response_radius + amb_radius)**2 - self.slack_amb[ci,k])
dist_btw_object = cas.sqrt(cas.sumsqr(c1_circle - amb_circles[ci])) - 1.1*(response_radius + amb_radius)
self.collision_cost += 1/dist_btw_object**4
self.opti.set_value(p, x0)
for i in range(len(self.allother_p)):
self.opti.set_value(self.allother_p[i], x0_other[i])
if self.ambMPC:
self.opti.set_value(pamb, x0_amb)
self.opti.solver('ipopt',{'warn_initial_bounds':True},{'print_level':print_level, 'max_iter':10000})
def solve(self, uamb, uother):
if self.ambMPC:
self.opti.set_value(self.uamb_opt, uamb)
for i in range(len(self.allother_u_opt)):
self.opti.set_value(self.allother_u_opt[i], uother[i])
# self.opti.set_value(self.u2_opt, u2)
self.solution = self.opti.solve()
def get_bestresponse_solution(self):
x1, u1, x1_des, = self.solution.value(self.x_opt), self.solution.value(self.u_opt), self.solution.value(self.x_desired)
return x1, u1, x1_des
def get_solution(self):
x1, u1, x1_des, = self.solution.value(self.x_opt), self.solution.value(self.u_opt), self.solution.value(self.x_desired)
if self.ambMPC:
xamb, uamb, xamb_des, = self.solution.value(self.xamb_opt), self.solution.value(self.uamb_opt), self.solution.value(self.xamb_desired)
else:
xamb, uamb, xamb_des, = None, None, None
other_x = [self.solution.value(self.allother_x_opt[i]) for i in range(len(self.allother_x_opt))]
other_u = [self.solution.value(self.allother_u_opt[i]) for i in range(len(self.allother_u_opt))]
other_des = [self.solution.value(self.allother_x_desired[i]) for i in range(len(self.allother_x_desired))]
return x1, u1, x1_des, xamb, uamb, xamb_des, other_x, other_u, other_des
def generate_slack_variables(self, slack, N, number_slack_vars=3, n_circles=2):
if slack == True:
slack_vars = [self.opti.variable(n_circles, N+1) for i in range(number_slack_vars)]
for slack in slack_vars:
self.opti.subject_to(cas.vec(slack)>=0)
# self.opti.subject_to(slack<=1.0)
else:
slack_vars = [self.opti.parameter(n_circles, N+1) for i in range(number_slack_vars)]
for slack in slack_vars:
self.opti.set_value(slack, np.zeros((n_circles,N+1)))
return slack_vars
def load_state(file_name, n_others):
xamb = np.load(file_name + "xamb.npy",allow_pickle=False)
uamb = np.load(file_name + "uamb.npy",allow_pickle=False)
xamb_des = np.load(file_name + "xamb_des.npy",allow_pickle=False)
xothers, uothers, xothers_des = [], [], []
for i in range(n_others):
x = np.load(file_name + "x%0d.npy"%i, allow_pickle=False)
u = np.load(file_name + "u%0d.npy"%i, allow_pickle=False)
x_des = np.load(file_name + "x_des%0d.npy"%i, allow_pickle=False)
xothers += [x]
uothers += [u]
xothers_des += [x_des]
return xamb, uamb, xamb_des, xothers, uothers, xothers_des
def save_state(file_name, xamb, uamb, xamb_des, xothers, uothers, xothers_des):
np.save(file_name + "xamb", xamb,allow_pickle=False)
np.save(file_name + "uamb", uamb,allow_pickle=False)
np.save(file_name + "xamb_des", xamb_des, allow_pickle=False)
for i in range(len(xothers)):
x, u, x_des = xothers[i], uothers[i], xothers_des[i]
np.save(file_name + "x%0d"%i, x, allow_pickle=False)
np.save(file_name + "u%0d"%i, u, allow_pickle=False)
np.save(file_name + "x_des%0d"%i, x_des, allow_pickle=False)
return file_name
def save_costs(file_name, ibr):
### Get the value for each cost variable
car1_costs_list = np.array([ibr.opti.debug.value(cost) for cost in ibr.car1_costs_list])
amb_costs_list = np.array([ibr.opti.debug.value(cost) for cost in ibr.amb_costs_list])
svo_cost = ibr.opti.debug.value(ibr.response_svo_cost)
other_svo_cost = ibr.opti.debug.value(ibr.other_svo_cost)
total_svo_cost = ibr.opti.debug.value(ibr.total_svo_cost)
np.save(file_name + "car1_costs_list", car1_costs_list, allow_pickle=False)
np.save(file_name + "amb_costs_list", amb_costs_list, allow_pickle=False)
np.save(file_name + "svo_cost", svo_cost, allow_pickle=False)
np.save(file_name + "other_svo_cost", other_svo_cost, allow_pickle=False)
np.save(file_name + "total_svo_cost", total_svo_cost, allow_pickle=False)
return file_name
def load_costs(file_name):
car1_costs_list = np.load(file_name + "car1_costs_list.npy", allow_pickle=False)
amb_costs_list = np.load(file_name + "amb_costs_list.npy", allow_pickle=False)
svo_cost = np.load(file_name + "svo_cost.npy", allow_pickle=False)
other_svo_cost = np.load(file_name + "other_svo_cost.npy", allow_pickle=False)
total_svo_cost = np.load(file_name + "total_svo_cost.npy", allow_pickle=False)
return car1_costs_list, amb_costs_list, svo_cost, other_svo_cost , total_svo_cost
def load_costs_int(i):
car1_costs_list = np.load("%03dcar1_costs_list.npy"%i, allow_pickle=False)
amb_costs_list = np.load("%03damb_costs_list.npy"%i, allow_pickle=False)
svo_cost = np.load("%03dsvo_cost.npy"%i, allow_pickle=False)
other_svo_cost = np.load("%03dother_svo_cost.npy"%i, allow_pickle=False)
total_svo_cost = np.load("%03dtotal_svo_cost.npy"%i, allow_pickle=False)
return car1_costs_list, amb_costs_list, svo_cost, other_svo_cost , total_svo_cost
```
# Linking elements together
As explained in the Data Structure chapter, `momepy` relies on links between different morphological elements. Each element needs an ID, and each of the small-scale elements also needs to know the ID of the relevant higher-scale element. The case of the block ID is explained in the previous chapter: `momepy.Blocks` generates it together with the blocks gdf.
## Getting the ID of the street network
This notebook will explore how to link a street network, both nodes and edges, to buildings and tessellation.
### Edges
For linking street network edges to buildings (or tessellation, or other elements), `momepy` offers `momepy.get_network_id`. It simply returns a `Series` of network IDs for the analysed gdf.
```
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
```
For illustration, we can use `bubenec` dataset embedded in `momepy`.
```
path = momepy.datasets.get_path('bubenec')
buildings = gpd.read_file(path, layer='buildings')
streets = gpd.read_file(path, layer='streets')
tessellation = gpd.read_file(path, layer='tessellation')
```
First, we have to be sure that the street segments have unique IDs.
```
streets['nID'] = momepy.unique_id(streets)
```
Then we can link it to buildings. The only argument we might want to look at is `min_size`, which should be a value such that if you build a box centred on each building centroid with edges of size `2 * min_size`, you know a priori that at least one street segment intersects the box. You can see it as a sort of tolerance.
```
buildings['nID'] = momepy.get_network_id(buildings, streets,
'nID', min_size=100)
f, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, column='nID', categorical=True, cmap='tab20b')
streets.plot(ax=ax)
ax.set_axis_off()
plt.show()
```
Note: the colormap does not have enough colours, which is why everything in the top-left looks the same. It is not.
### Nodes
The situation with nodes is slightly more complicated as you usually don't have or even need nodes. However, `momepy` includes some functions which are calculated on nodes (mostly in `graph` module). For that reason, we will pretend that we follow the usual workflow:
1. Street network `GeoDataFrame` (edges only)
2. networkx `Graph`
3. Street network - edges and nodes as separate `GeoDataFrames`.
```
graph = momepy.gdf_to_nx(streets)
```
Some [graph-based analysis](../graph/graph.rst) happens here.
```
nodes, edges = momepy.nx_to_gdf(graph)
f, ax = plt.subplots(figsize=(10, 10))
edges.plot(ax=ax, column='nID', categorical=True, cmap='tab20b')
nodes.plot(ax=ax, zorder=2)
ax.set_axis_off()
plt.show()
```
For attaching node IDs to buildings, we will need both nodes and edges. We have already determined which edge each building belongs to, so now we only have to find out which end of that edge is the closer one. Nodes come from `momepy.nx_to_gdf` automatically with a node ID:
```
nodes.head()
```
The same ID is now included in edges as well, denoting each end of the edge. (The length of the edge is also present, as it was necessary to keep it as an attribute for the graph.)
```
edges.head()
buildings['nodeID'] = momepy.get_node_id(buildings, nodes, edges,
'nodeID', 'nID')
f, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, column='nodeID', categorical=True, cmap='tab20b')
nodes.plot(ax=ax, zorder=2)
edges.plot(ax=ax, zorder=1)
ax.set_axis_off()
plt.show()
```
### Transfer IDs to tessellation
All IDs are now stored in the buildings gdf. We can copy them to tessellation using `merge`. First, we select the columns we are interested in, then we merge them with tessellation based on the shared unique ID. Usually, we will have more columns than we have now.
```
buildings.columns
columns = ['uID', 'nID', 'nodeID']
tessellation = tessellation.merge(buildings[columns], on='uID')
tessellation.head()
```
Now we should be able to link all elements together as needed for all types of morphometric analysis in `momepy`.
# Working with Environments
By now you've run many experiments in your Azure Machine Learning workspace, and in some cases you've had to specify the particular Python packages required in the environment where the experiment code is run. In this lab, you'll explore environments in a little more detail.
## Connect to Your Workspace
The first thing you need to do is to connect to your workspace using the Azure ML SDK.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Prepare Data for an Experiment
In this lab, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if you created it in a previous lab, the code will find the existing version).
```
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
```
## Create a Training Script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Set regularization hyperparameter (passed as an argument to the script)
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
## Define an Environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
```
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib', 'pandas'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
```
Now you can use the environment for the experiment by assigning it to an Estimator (or RunConfig).
The following code assigns the environment you created to a generic estimator and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
```
from azureml.train.estimator import Estimator
from azureml.core import Experiment
from azureml.widgets import RunDetails
# Set the script parameters
script_params = {
'--regularization': 0.1
}
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = Estimator(source_directory=experiment_folder,
inputs=[diabetes_ds.as_named_input('diabetes')], # Pass the dataset as an input
script_params=script_params,
compute_target = 'local',
environment_definition = diabetes_env,
entry_script='diabetes_training.py')
# Create an experiment
experiment = Experiment(workspace = ws, name = 'diabetes-training')
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
```
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
```
## Register the Environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
```
# Register the environment
diabetes_env.register(workspace=ws)
```
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
Now you can retrieve the registered environment and use it to configure the estimator for a new experiment that runs the alternative training script (there are no script parameters this time because a Decision Tree classifier doesn't require any hyperparameter values).
```
from azureml.train.estimator import Estimator
from azureml.core import Environment, Experiment
from azureml.widgets import RunDetails
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an estimator
estimator = Estimator(source_directory=experiment_folder,
inputs=[diabetes_ds.as_named_input('diabetes')], # Pass the dataset as an input
compute_target = 'local',
environment_definition = registered_env,
entry_script='diabetes_training.py')
# Create an experiment
experiment = Experiment(workspace = ws, name = 'diabetes-training')
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
```
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
```
It looks like this model is slightly better than the logistic regression model, so let's register it.
```
from azureml.core import Model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Estimator + Environment (Decision Tree)'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
```
## View Registered Environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
```
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
```
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
```
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
```
> **More Information**: For more information about environments in Azure Machine Learning, see [the Azure Machine Learning documentation](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments)
# Analytical Modelling
Often, a well testing problem can be efficiently investigated using analytical solutions but these may require a degree of sophistication that is cumbersome for hand or Excel calculation.
The purpose of this notebook is to demonstrate a few Python techniques for well test modelling.
## 1. Implementing a Theis solution
Theis is the workhorse of pumping test analysis in confined aquifers. It's reasonably easy to implement once you understand how to code up the well function, $W(u)$:
\begin{equation}
W(u)=\int\limits_{u}^{\infty}\frac{1}{y}e^{-y}dy
\end{equation}
Fortunately, the exponential integral above is already implemented in `scipy`. The cell below implements the well function as the Python function `W(u)`.
```
from scipy.special import expi
def W(u):
return -expi(-u)
```
Run the cell below to see what the well function looks like. You can see where the logarithmic approximation is valid, for $u<0.05$.
```
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
f,ax=plt.subplots(1,1)
u = np.logspace(-2,1,101)
ax.semilogx(u, W(u),'k.')
ax.axvline(0.05, color = 'r', linestyle=':')
ax.set_xlabel('$u$'); ax.set_ylabel('$W(u)$')
```
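The logarithmic (Cooper-Jacob) approximation referred to above takes the form $W(u)\approx-\gamma-\ln u$, where $\gamma\approx0.5772$ is the Euler-Mascheroni constant. A minimal sanity check of the `W(u)` implementation against this approximation:

```python
import numpy as np
from scipy.special import expi

def W(u):
    # Theis well function, W(u) = -Ei(-u), as defined above
    return -expi(-u)

gamma = 0.5772156649    # Euler-Mascheroni constant
u = 0.01                # well inside the u < 0.05 validity range
approx = -gamma - np.log(u)
print(W(u), approx)     # the two agree to about 0.25% at u = 0.01
```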
The `Theis` function is now straightforward to implement (run the cell below)
```
def Theis(r,t,Q,S,T):
return Q/(4*np.pi*T)*W(r**2*S/(4*T*t))
```
The function is already "vectorized", which means any of the inputs can be passed as a vector. The plot below gives the drawdown 100 m from a well pumped at 20 L/s, in a formation with $T$=2200 m$^2$/d and $S$=10$^{-4}$.
```
f,ax=plt.subplots(1,1)
t = np.linspace(0.001,3,101)
r,Q,S,T = [100., 20, 1.e-4, 2200]
ax.plot(t, Theis(r,t,Q*1.e-3*24*3600,S,T),'k.')
ax.set_xlabel('time [days]'); ax.set_ylabel('drawdown [m]')
```
***Why did we apply the scaling `Q*1.e-3*24*3600`?***
***Create a plot of drawdown over time inside a well of radius 0.3 m pumped at 2 m$^3$/min, $S$=0.2, $T$=200 m$^2$/d***
***Create a plot of drawdown with distance for the same parameters, at `t=0.1` and `t=1.5` days.***
```
# your code here
```
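One possible sketch for the two exercises above (an illustration, not the only approach). Note the unit conversion: 2 m$^3$/min is $2\times60\times24$ m$^3$/d, and in-well drawdown is evaluated at $r=r_w$=0.3 m. The time and distance ranges are chosen arbitrarily for plotting, and the `W`/`Theis` definitions are repeated so the cell is self-contained:

```python
import numpy as np
from scipy.special import expi
from matplotlib import pyplot as plt

def W(u):
    return -expi(-u)

def Theis(r, t, Q, S, T):
    # same implementation as in the cells above
    return Q / (4 * np.pi * T) * W(r**2 * S / (4 * T * t))

rw, S, T = 0.3, 0.2, 200
Q = 2. * 60 * 24            # 2 m3/min converted to m3/d

# drawdown over time inside the well (r = rw)
t = np.linspace(0.001, 3, 101)
s_t = Theis(rw, t, Q, S, T)

# drawdown with distance at two fixed times
r = np.linspace(rw, 500, 101)
s_r1 = Theis(r, 0.1, Q, S, T)
s_r2 = Theis(r, 1.5, Q, S, T)

f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(t, s_t, 'k-')
ax1.set_xlabel('time [days]'); ax1.set_ylabel('drawdown [m]')
ax2.plot(r, s_r1, 'b-', label='t = 0.1 d')
ax2.plot(r, s_r2, 'r-', label='t = 1.5 d')
ax2.set_xlabel('distance [m]'); ax2.set_ylabel('drawdown [m]')
ax2.legend()
```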
## 2. Superposition of elementary solutions
We can use superposition to:
1. Model the drawdown at a location due to multiple pumping wells.
2. Model the drawdown due to a single well being pumped at different rates.
### 2.1 Multiple wells
Suppose we are monitoring an observation well that is:
1. 100 m from a well that has been pumping at 20 L/s for the last 3 days.
2. 50 m from a well that has been pumping at 30 L/s for only the last day.
What should the drawdown profile look like over time? (Assume the same aquifer parameters as the example in 1.)
```
# APPROACH: create a time vector for the solution and use a loop to compute drawdown contributions from the individual wells
t = np.linspace(0.001,3,101)
h = 0.*t
S,T = [1.e-4, 2200]
r1,t1,Q1 = [100,0., 20] # well 1 starts at t=0 (pumping for the last three days)
r2,t2,Q2 = [50, 2., 30] # well 2 starts at t=2 (pumping for only the last day)
for i in range(len(t)):
# contribution from well 1
if t[i]>t1:
h[i] = h[i] + Theis(r1, t[i]-t1, Q1*1.e-3*24*3600, S, T)
# contribution from well 2
if t[i]>t2:
h[i] = h[i] + Theis(r2, t[i]-t2, Q2*1.e-3*24*3600, S, T)
f,ax=plt.subplots(1,1)
ax.plot(t, h,'k-')
ax.set_xlabel('time [days]'); ax.set_ylabel('drawdown [m]')
```
***Add a third well that has been pumping for 10 L/s for the last 2 days at a distance of 40 m from the observation well.***
```
# your code here
```
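One way to add the third well is to generalise the loop above to a list of wells; a sketch with the same parameters as the worked example, where a well pumping for the last 2 days corresponds to a start time of $t$=1 d:

```python
import numpy as np
from scipy.special import expi
from matplotlib import pyplot as plt

def W(u):
    return -expi(-u)

def Theis(r, t, Q, S, T):
    return Q / (4 * np.pi * T) * W(r**2 * S / (4 * T * t))

S, T = 1.e-4, 2200
wells = [  # (distance r [m], start time [days], rate Q [L/s])
    (100., 0., 20.),  # pumping for the last three days
    (50.,  2., 30.),  # pumping for the last day
    (40.,  1., 10.),  # third well: pumping for the last two days
]
t = np.linspace(0.001, 3, 101)
h = 0. * t
for i in range(len(t)):
    for r, t0, Q in wells:
        if t[i] > t0:  # each well contributes only once it starts pumping
            h[i] += Theis(r, t[i] - t0, Q * 1.e-3 * 24 * 3600, S, T)

f, ax = plt.subplots(1, 1)
ax.plot(t, h, 'k-')
ax.set_xlabel('time [days]'); ax.set_ylabel('drawdown [m]')
```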
### 2.2 Step-rate pumping
Suppose we pump a well at:
1. 10 L/s for 30 mins, then
2. 15 L/s for another 30 mins.
What should the drawdown profile look like in the well, if the diameter is 0.7 m and there are no well losses?
```
# APPROACH: same as 2.1, except R1 = R2 = well radius and Q2 = size of pumping step.
# your code here
```
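Following the APPROACH comment, a possible sketch (assuming the same aquifer parameters $S$=10$^{-4}$ and $T$=2200 m$^2$/d as the earlier examples, since the exercise does not state them): the second Theis term carries only the step size $\Delta Q$=5 L/s, and both terms are evaluated at the well radius $r_w$=0.35 m.

```python
import numpy as np
from scipy.special import expi

def W(u):
    return -expi(-u)

def Theis(r, t, Q, S, T):
    return Q / (4 * np.pi * T) * W(r**2 * S / (4 * T * t))

S, T = 1.e-4, 2200          # assumed: same aquifer as earlier examples
rw = 0.35                   # well radius = 0.7 m diameter / 2
lps = 1.e-3 * 24 * 3600     # L/s -> m3/d

t = np.linspace(1.e-4, 1. / 24, 200)      # one hour, in days
h = 0. * t
steps = [(0., 10.), (1. / 48, 5.)]        # (step start [d], rate increase [L/s])
for t0, dQ in steps:
    on = t > t0                           # step contributes only after it starts
    h[on] += Theis(rw, t[on] - t0, dQ * lps, S, T)

print('drawdown after 1 hour: %.2f m' % h[-1])
```

The negative-pumping-step idea in the next exercise fits the same pattern: append `(t_stop, -25.)` to `steps` and extend the time vector.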
***Add a third pumping step, at 25 L/s for another 60 mins.***
***Pumping is halted and the well is allowed to recover. Model this as a negative pumping step of 25 L/s. Plot the recovery for the next 2 hours.***
## 3. Solving equations containing pumping solutions
Pumping solutions introduce all sorts of non-linearity and the potential for analytically non-invertible equations.
For example, suppose we are monitoring an observation bore. We have pumped a well 100 m away at 10 L/s for the last 30 mins and a second well 75 m away at 5 L/s for the last 60 mins. If the drawdown is 30 cm and the transmissivity is known to be 1650 m$^2$/d, what is the storativity, $S$?
Obviously, this is a complex (and contrived!) problem. Let's write out the superposition of Theis solutions
\begin{equation}
h = \underbrace{\frac{Q_1}{4\pi T}W\left(\frac{r_1^2S}{4T(t-t_1)}\right)}_{\text{first well}} + \underbrace{\frac{Q_2}{4\pi T}W\left(\frac{r_2^2S}{4T(t-t_2)}\right)}_{\text{second well}}
\end{equation}
With the known quantities converted to metres and days, and then substituted, this becomes
\begin{equation}
0.3 = \frac{864}{4\pi\cdot 1650}W\left(\frac{100^2\cdot S}{4\cdot1650\cdot 0.021}\right) + \frac{432}{4\pi\cdot 1650}W\left(\frac{75^2\cdot S}{4\cdot1650\cdot0.042}\right)
\end{equation}
One way to solve for $S$ is by guess-and-check but that can take a while.
Another way is to use Python's root finding functions to solve the above equation in the form $LHS-RHS=0$.
```
from scipy.optimize import fsolve
# define the known parameters
h,r1,r2,Q1,Q2,t1,t2,T = [0.3,100,75,864,432,1./48,1./24,1650]
# define the root function for minimising, LHS - RHS, with the function input the unknown quantity
def f(S):
return h-Q1/(4*np.pi*T)*W(r1**2*S/(4*T*t1))-Q2/(4*np.pi*T)*W(r2**2*S/(4*T*t2))
# pass the FUNCTION HANDLE (name) and an initial guess [0.001] to the root function
S = fsolve(f,0.001)[0]
print("Storativity is", S)
```
***Given the values $\beta$=10$^{-3}$ d/m$^2$, $r_w$=0.25 m, $S$=10$^{-4}$ and $t_{test}$=30 mins, use `fsolve` to show that the transmissivity in the equation below is $T$=1280 m$^2$/d.***
\begin{equation}
\beta = \frac{1}{4\pi T}W\left(\frac{r_w^2 S}{4Tt_{test}}\right)
\end{equation}
***For $h_{max}$=0.5 m, $\alpha$=1.e-13, $n$=3.5 and $t_{pump}$=1 year, use `fsolve` to show that the max. pumping rate in the equation below is $Q_{max}$=311 m$^3$/d.***
\begin{equation}
h_{max} = \frac{Q_{max}}{4\pi T}W\left(\frac{r_w^2 S}{4Tt_{pump}}\right)+\alpha Q_{max}^n
\end{equation}
```
# your code here
```
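One possible solution sketch for the two exercises (assuming time is expressed in days, so $t_{test}$=1/48 d and $t_{pump}$=365 d, and that the $T$ found in the first part carries over into the second equation, which states no other value for it):

```python
import numpy as np
from scipy.special import expi
from scipy.optimize import fsolve

def W(u):
    return -expi(-u)

# first exercise: solve for T (30 mins = 1/48 d)
beta, rw, S, t_test = 1.e-3, 0.25, 1.e-4, 1. / 48

def f1(T):
    return beta - 1. / (4 * np.pi * T) * W(rw**2 * S / (4 * T * t_test))

T = fsolve(f1, 1000.)[0]
print('T =', T)        # approximately 1280 m2/d

# second exercise: solve for Qmax, reusing T from above (1 year = 365 d)
hmax, alpha, n, t_pump = 0.5, 1.e-13, 3.5, 365.

def f2(Qmax):
    return hmax - Qmax / (4 * np.pi * T) * W(rw**2 * S / (4 * T * t_pump)) - alpha * Qmax**n

Qmax = fsolve(f2, 100.)[0]
print('Qmax =', Qmax)  # approximately 311 m3/d
```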
## (Extra) Fitting pumping solutions to data
We use pumping solutions to make sense of the data. One way to do this is the graphical method you learned in class.
A more general approach is to plot the data and a numerical model over top of each other. Then, make changes to the model parameters until a good match is achieved.
This can be done automatically with a Python function called `curve_fit`.
First, let's create some fake pumping data with *known* transmissivity $T$=1500 m$^2$/d and $S$=0.0034
```
T,S = [1500,0.0034]
r,Q,tpump = [200., 25.*1.e-3*24*3600, 3.] # 3 day test at 25 L/s, observed at 200 m
td = np.logspace(-2,np.log10(tpump), 31) # 31 log-spaced measurements
hd = Theis(r,td,Q,S,T) # drawdown observations
hd = hd*(1.+0.1*np.random.randn(len(hd))) # add 10% normally distributed random noise - for a challenge
f,(ax1,ax2) = plt.subplots(1,2, figsize=(10,4))
ax1.plot(td,hd,'ko',mfc='w')
ax2.semilogx(td,hd,'ko',mfc='w')
for ax in [ax1, ax2]:
    ax.set_xlabel('time [days]')
    ax.set_ylabel('drawdown [m]')
# note, this is not the most accurate way to model pump test noise, it is for demonstration purposes only
```
Python `curve_fit` works by finding the parameters that minimize the sum-of-squares misfit between data and a model.
The 'model' must be expressed as a Python function, $f$, with a very particular input structure:
1. The first argument is the independent variable (time).
2. Subsequent arguments are parameters that `curve_fit` can play with ($T$ and $S$).
3. Any other parameters should be hard-coded.
4. The function must return the same thing measured by the data (drawdown).
See below for a function meeting this requirement.
```
def pump_test_model(t, T, S):
    # note how r and Q have been hard-coded - we know what these are
    return Theis(200., t, 25.*1.e-3*24*3600, S, T)
```
Now we call `curve_fit`, passing it the model (function handle/name), the data, and our starting guess at the model parameters.
```
from scipy.optimize import curve_fit
p,pcov = curve_fit(pump_test_model, td, hd, [1000, 1.e-2])
print('best-fit T =', p[0],'and S =', p[1])
```
The first output is a vector of the estimated parameters. We'll use it to plot the 'best' model over the data.
```
f,(ax1,ax2) = plt.subplots(1,2, figsize=(10,4))
ax1.plot(td,hd,'ko',mfc='w')
ax1.plot(td,pump_test_model(td, p[0], p[1]),'r-')
ax2.semilogx(td,hd,'ko',mfc='w')
ax2.semilogx(td,pump_test_model(td, p[0], p[1]),'r-')
for ax in [ax1, ax2]:
    ax.set_xlabel('time [days]')
    ax.set_ylabel('drawdown [m]')
```
## (Extra) Uncertainty of best-fit pumping solutions
If we are not precisely certain of the data, more than one model may provide a credible fit to it.
There are numerous ways to handle model uncertainty (and the best way depends on where the error is coming from - are the data noisy or is the conceptual model wrong?)
A simple way to get a first cut at model uncertainty is called Linear Sensitivity Analysis. `curve_fit` returns a second output called the covariance matrix, which gives some indication about how confident it is estimating the values of $T$ and $S$.
We can use the covariance matrix to generate "possible pairs" of $[T,S]$ and plot these models.
We can also use ranges of the sampled parameters to construct uncertainty estimates. In this case, we estimate transmissivity to be between 1375 and 1635 m$^2$/d with 90% confidence (in fact, it is 1500 m$^2$/d), and storativity between 2.8e-3 and 4.0e-3 (in fact, it is 3.4e-3).
```
from scipy.optimize import curve_fit
p,pcov = curve_fit(pump_test_model, td, hd, [1000, 1.e-2])
print('covariance matrix is', pcov)
f,(ax1,ax2) = plt.subplots(1,2, figsize=(10,4))
ax1.plot(td,hd,'ko',mfc='w')
ax2.semilogx(td,hd,'ko',mfc='w')
N = 100 # number of possible pairs to generate
Ts = []; Ss = []
for ps in np.random.multivariate_normal(p, pcov, N):
    ax1.plot(td, pump_test_model(td, *ps), 'r-', lw=0.2, alpha=0.2)
    ax2.semilogx(td, pump_test_model(td, *ps), 'r-', lw=0.2, alpha=0.2)
    Ts.append(ps[0]); Ss.append(ps[1])
for ax in [ax1, ax2]:
    ax.set_xlabel('time [days]')
    ax.set_ylabel('drawdown [m]')
print('5 to 95-percentile range of T is [{:3.2f},{:3.2f}]'.format(*np.percentile(Ts,[5,95])))
print('5 to 95-percentile range of S is [{:3.2e},{:3.2e}]'.format(*np.percentile(Ss,[5,95])))
```
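An even quicker first cut, without sampling, reads approximate parameter standard errors straight off the diagonal of the covariance matrix. A sketch with illustrative (made-up) best-fit values and covariance matrix — yours will differ because of the random noise:

```
import numpy as np

# illustrative best-fit parameters and covariance matrix as returned by curve_fit
p_fit = np.array([1500.0, 3.4e-3])
pcov_fit = np.array([[6.0e3, 0.0],
                     [0.0, 1.3e-7]])

# standard errors are the square roots of the diagonal entries
perr = np.sqrt(np.diag(pcov_fit))

# approximate 90% confidence intervals assuming normal errors: p +/- 1.645*sigma
for name, val, err in zip(['T', 'S'], p_fit, perr):
    lo, hi = val - 1.645*err, val + 1.645*err
    print('{}: {:.3g} (approx. 90% CI [{:.3g}, {:.3g}])'.format(name, val, lo, hi))
```

This linearized estimate agrees with the sampled percentiles when the misfit surface is close to quadratic near the optimum.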
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```
# Embedding CPLEX in a ML Spark Pipeline
`Spark ML` provides a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
In this notebook, we show how to embed CPLEX as a Spark _transformer_ class.
DOcplex provides transformer classes that take a matrix `X` of constraints and a vector `y` of costs and solves a linear problem using CPLEX.
Transformer classes share a `solve(X, Y, **params)` method which expects:
- an X matrix containing the constraints of the linear problem
- a Y vector containing the cost coefficients.
The transformer classes require a Spark DataFrame for the 'X' matrix, and support various formats for the 'Y' vector:
- Python lists,
- numpy vector,
- pandas Series,
- Spark columns
The same formats are also supported to optionally specify upper bounds for decision variables.
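For instance, the same cost vector could be supplied in any of these equivalent forms (illustrative values; Spark columns work similarly):

```
import numpy as np
import pandas as pd

y_list = [0.84, 0.78, 0.27]
y_np = np.array(y_list)
y_series = pd.Series(y_list)

# all three carry the same cost coefficients
assert list(y_np) == y_list
assert list(y_series) == y_list
```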
## DOcplex transformer classes
There are two DOcplex transformer classes:
- __$CplexTransformer$__ expects to solve a linear problem in the classical form:
$$ minimize\ C^{t} x\\ s.t.\\
Ax \leq B$$
where $A$ is an (M,N) matrix describing the constraints, $B$ is a vector of size M containing the _right hand sides_ of the constraints, and $C$ is the _cost vector_ of size N. In this case the transformer expects an (M,N+1) matrix, where the last column contains the right hand sides.
- __$CplexRangeTransformer$__ expects to solve linear problem as a set of _range_ constraints:
$$ minimize\ C^{t} x\\ s.t.\\
m \leq Ax \leq M$$
where $A$ is an (M,N) matrix describing the constraints, $m$ and $M$ are two vectors of size M containing the _minimum_ and _maximum_ values for the row expressions, and $C$ is the _cost vector_ of size N. In this case the transformer expects an (M,N+2) matrix, where the last two columns contain the minimum and maximum values (in this order).
```
try:
    import numpy as np
except ImportError:
    raise RuntimeError('This notebook requires numpy')
```
In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
## The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity of each food for a balanced diet.
The __FOOD_NUTRIENTS__ data intentionally contains a missing value ($np.nan$) to illustrate the use of a pipeline involving a data cleansing stage.
```
# the baseline diet data as Python lists of tuples.
FOODS = [
("Roasted Chicken", 0.84, 0, 10),
("Spaghetti W/ Sauce", 0.78, 0, 10),
("Tomato,Red,Ripe,Raw", 0.27, 0, 10),
("Apple,Raw,W/Skin", .24, 0, 10),
("Grapes", 0.32, 0, 10),
("Chocolate Chip Cookies", 0.03, 0, 10),
("Lowfat Milk", 0.23, 0, 10),
("Raisin Brn", 0.34, 0, 10),
("Hotdog", 0.31, 0, 10)
]
NUTRIENTS = [
("Calories", 2000, 2500),
("Calcium", 800, 1600),
("Iron", 10, 30),
("Vit_A", 5000, 50000),
("Dietary_Fiber", 25, 100),
("Carbohydrates", 0, 300),
("Protein", 50, 100)
]
FOOD_NUTRIENTS = [
# ("Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0.0, 0.0, 42.2),
("Roasted Chicken", 277.4, 21.9, 1.8, np.nan, 0.0, 0.0, 42.2), # Set a value as missing (NaN)
("Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2),
("Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1.0),
("Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21.0, 0.3),
("Grapes", 15.1, 3.4, 0.1, 24.0, 0.2, 4.1, 0.2),
("Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0.0, 9.3, 0.9),
("Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0.0, 11.7, 8.1),
("Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4.0, 27.9, 4.0),
("Hotdog", 242.1, 23.5, 2.3, 0.0, 0.0, 18.0, 10.4)
]
nb_foods = len(FOODS)
nb_nutrients = len(NUTRIENTS)
print('#foods={0}'.format(nb_foods))
print('#nutrients={0}'.format(nb_nutrients))
assert nb_foods == len(FOOD_NUTRIENTS)
```
### Creating a Spark session
```
try:
    import findspark
    findspark.init()
except ImportError:
    # Ignore exception: the 'findspark' module is only required when executing Spark in a Windows environment
    pass

import pyspark  # Only run after findspark.init() (if running in a Windows environment)
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
if spark.version < '2.2':
    raise RuntimeError("This notebook requires at least version '2.2' of PySpark")
```
## Using the transformer with a Spark dataframe
In this section we show how to use a transformer with data stored in a Spark dataframe.
### Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
```
mat_fn = np.matrix([FOOD_NUTRIENTS[f][1:] for f in range(nb_foods)])
print('The food-nutrient matrix has shape: {0}'.format(mat_fn.shape))
```
Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the `FOODS` collection of tuples into columns
```
nutrient_mins = [NUTRIENTS[n][1] for n in range(nb_nutrients)]
nutrient_maxs = [NUTRIENTS[n][2] for n in range(nb_nutrients)]
food_names ,food_costs, food_mins, food_maxs = map(list, zip(*FOODS))
```
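The `zip(*...)` idiom used above transposes a list of tuples into per-field columns; a minimal illustration with a hypothetical two-row table:

```
rows = [("apple", 0.24, 0, 10),
        ("grapes", 0.32, 0, 10)]
names, costs, mins, maxs = map(list, zip(*rows))
print(names)  # ['apple', 'grapes']
print(costs)  # [0.24, 0.32]
```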
We are now ready to prepare the transformer matrix. This matrix has shape (7, 11): one row per nutrient (7), and one column per food (9) plus the additional `min` and `max` columns.
```
# step 1. append two rows for nutrient mins, maxs, then transpose
nf2 = np.append(mat_fn, np.matrix([nutrient_mins, nutrient_maxs]), axis=0)
mat_nf = nf2.transpose()
mat_nf.shape
```
### Populate a Spark dataframe with the matrix data
In this section we build a Spark dataframe matrix to be passed to the transformer.
Using a Spark dataframe will also allow us to chain multiple transformers in a pipeline.
```
from pyspark.sql import SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
columns = food_names + ['min', 'max']
food_nutrients_df = sqlContext.createDataFrame(mat_nf.tolist(), columns)
```
Let's display the dataframe schema and content
```
food_nutrients_df.printSchema()
food_nutrients_df.show()
```
### Chaining a data cleansing stage with the $CplexRangeTransformer$ in a Pipeline
To use the transformer, create an instance and pass the following parameters to the `transform` method
- the `X` matrix of size(M, N+2) containing coefficients for N column variables plus two addition column for range mins and maxs.
- the `Y` cost vector (using __"y"__ parameter id)
- whether one wants to solve a minimization (`min`) or maximization (`max`) problem (using __"sense"__ parameter id)
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments:
- `ubs` denotes the upper bounds for the column variables that are created. The expected size of this vector is N (when the matrix has size (M,N+2))
- `minCol` and `maxCol` are the names of the columns corresponding to the constraints min and max range in the `X` matrix
Since the input data contains some missing values, we'll actually define a pipeline that will:
- first, perform a data cleansing stage: here missing values are replaced by the column mean value
- then, perform the optimization stage: the Cplex transformer will be invoked using the output dataframe from the cleansing stage as the constraint matrix.
```
from docplex.mp.sparktrans.transformers import CplexRangeTransformer
from pyspark.ml.feature import Imputer
from pyspark.ml import Pipeline
from pyspark.sql.functions import *
# Create a data cleansing stage to replace missing values with column mean value
data_cleansing = Imputer(inputCols=food_names, outputCols=food_names)
# Create an optimization stage to calculate the optimal quantity for each food for a balanced diet.
cplexSolve = CplexRangeTransformer(minCol='min', maxCol='max', ubs=food_maxs)
# Configure an ML pipeline, which chains these two stages
pipeline = Pipeline(stages=[data_cleansing, cplexSolve])
# Fit the pipeline: during this step, the data cleansing estimator is configured
model = pipeline.fit(food_nutrients_df)
# Make evaluation on input data. One can still specify stage-specific parameters when invoking 'transform' on the PipelineModel
diet_df = model.transform(food_nutrients_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'min'})
diet_df.orderBy(desc("value")).show()
```
Just for checking purposes, let's have a look at the Spark dataframe at the output of the cleansing stage.<br>
This is the dataframe that is fed to the __$CplexRangeTransformer$__ in the pipeline.<br>
One can check that the first entry in the fourth row has been set to the average of the other values in the same column ($57.2167$).
```
data_cleansing.fit(food_nutrients_df).transform(food_nutrients_df).show()
```
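That imputed value can be verified by hand: the Imputer replaces the NaN with the mean of the six remaining values in the 'Roasted Chicken' column.

```
import numpy as np

# the six non-missing 'Roasted Chicken' nutrient values (Vit_A was the NaN)
roasted_chicken = [277.4, 21.9, 1.8, 0.0, 0.0, 42.2]
imputed = np.mean(roasted_chicken)
print('imputed Vit_A value:', round(imputed, 4))  # 57.2167
```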
### Example with CplexTransformer
To illustrate the usage of the __$CplexTransformer$__, let's remove the constraint on the minimum amount for nutrients, and reformulate the problem as a cost maximization.
First, let's define a new dataframe for the constraints matrix by removing the `min` column from the `food_nutrients_df` dataframe so that it is a well-formed input matrix for the __$CplexTransformer$__:
```
food_nutrients_LP_df = food_nutrients_df.select([item for item in food_nutrients_df.columns if item not in ['min']])
food_nutrients_LP_df.show()
from docplex.mp.sparktrans.transformers import CplexTransformer
# Create a data cleansing stage to replace missing values with column mean value
data_cleansing = Imputer(inputCols=food_names, outputCols=food_names)
# Create an optimization stage to calculate the optimal quantity for each food for a balanced diet.
# Here, let's use the CplexTransformer by specifying only a maximum amount for each nutrient.
cplexSolve = CplexTransformer(rhsCol='max', ubs=food_maxs)
# Configure an ML pipeline, which chains these two stages
pipeline = Pipeline(stages=[data_cleansing, cplexSolve])
# Fit the pipeline: during this step, the data cleansing estimator is configured
model = pipeline.fit(food_nutrients_LP_df)
# Make evaluation on input data. One can still specify stage-specific parameters when invoking 'transform' on the PipelineModel
# Since there is no lower range for decision variables, let's maximize cost instead! (otherwise, the result is all 0's)
diet_max_cost_df = model.transform(food_nutrients_LP_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'max'})
diet_max_cost_df.orderBy(desc("value")).show()
%matplotlib inline
import matplotlib.pyplot as plt
def plot_radar_chart(labels, stats, **kwargs):
    angles = np.linspace(0, 2*np.pi, len(labels), endpoint=False)
    # close the plot by repeating the first point
    stats = np.concatenate((stats, [stats[0]]))
    angles = np.concatenate((angles, [angles[0]]))
    fig = plt.figure()
    ax = fig.add_subplot(111, polar=True)
    ax.plot(angles, stats, 'o-', linewidth=2, **kwargs)
    ax.fill(angles, stats, alpha=0.30, **kwargs)
    # use only the original N angles for the axis labels
    ax.set_thetagrids(angles[:-1] * 180/np.pi, labels)
    ax.grid(True)
diet = diet_df.toPandas()
plot_radar_chart(labels=diet['name'], stats=diet['value'], color='r')
diet_max_cost = diet_max_cost_df.toPandas()
plot_radar_chart(labels=diet_max_cost['name'], stats=diet_max_cost['value'], color='r')
```